Papers by Andrea Bondavalli

Proceedings of the 12th International Conference on Availability, Reliability and Security
As our society massively relies on ICT, security services are becoming essential to protect the users and entities involved. Amongst such services, non-repudiation provides evidence of actions, protects against their denial, and helps resolve disputes between parties. For example, it prevents the denial of past behaviors, such as having sent or received messages. Notably, if the information flow is continuous, evidence should be produced for the entirety of the flow and not only at specific points. Further, non-repudiation should be guaranteed by mechanisms that do not reduce the usability of the system or application. To meet these challenges, in this paper we propose two solutions for non-repudiation of remote services based on multi-biometric continuous authentication. We present an application scenario that discusses how users and service providers are protected by such solutions. We also discuss the technological readiness of biometrics for non-repudiation services: the outcome is that, under specific assumptions, it is actually ready.
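To make the continuous-evidence idea concrete, the following minimal Python sketch chains each chunk of a data flow to the previous evidence record and to the biometric authentication score observed at that moment. All names are illustrative assumptions, and the HMAC stands in for the asymmetric digital signature a real deployment would use; this is not the paper's actual design.

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key"  # hypothetical; a real system would use asymmetric signatures

def evidence_record(prev_digest: bytes, chunk: bytes, auth_score: float) -> dict:
    """Chain each data chunk to the previous evidence and to the biometric
    authentication score observed while the chunk was produced."""
    digest = hashlib.sha256(prev_digest + chunk).digest()
    payload = {
        "chain_digest": digest.hex(),
        "auth_score": auth_score,          # continuous-authentication confidence in [0, 1]
        "timestamp": time.time(),
    }
    tag = hmac.new(SIGNING_KEY, json.dumps(payload, sort_keys=True).encode(), "sha256")
    payload["mac"] = tag.hexdigest()
    return payload

# Evidence is emitted for the whole flow, not only at session start/end.
prev = b"\x00" * 32
for chunk, score in [(b"frame-1", 0.93), (b"frame-2", 0.91), (b"frame-3", 0.88)]:
    record = evidence_record(prev, chunk, score)
    prev = bytes.fromhex(record["chain_digest"])
    print(record["mac"][:16], record["auth_score"])
```

Because each record includes the digest of the previous one, neither party can later repudiate an individual chunk without breaking the whole chain.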

IEEE Open Journal of Intelligent Transportation Systems
Failures of the vehicle camera may compromise the correct acquisition of frames, which are subsequently used by autonomous driving tasks. A clear understanding of the behavior of autonomous driving tasks under such failure conditions, together with strategies to avoid jeopardizing safety, is indeed necessary. This study analyses and improves the performance of Traffic Sign Recognition (TSR) systems for road vehicles under the possible occurrence of camera failures. Our experimental assessment relies on three public datasets, which are commonly used for benchmarking TSR systems. We artificially inject 13 different types of camera failures into the three datasets. Then, we exploit three deep neural networks (DNNs) to classify either a single frame of a traffic sign or a sequence (i.e., a sliding window) of frames. We show that sliding windows significantly improve the robustness of the classifier against altered frames. We confirm our observations through explainable AI, which helps understand why different classifiers perform differently in case of camera failures. Index Terms: traffic sign recognition, camera failures, deep learning, sliding windows, meta learning, robustness.
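A minimal sketch of the sliding-window idea: per-frame class probabilities from a DNN are averaged over a window before committing to a decision, so a single corrupted frame is outvoted by its neighbours. The function name and the averaging rule are assumptions for illustration; the paper evaluates several DNNs and window policies.

```python
import numpy as np

def sliding_window_decision(frame_probs: np.ndarray, window: int = 5) -> np.ndarray:
    """Average per-frame class probabilities over a trailing window and
    classify each position from the averaged scores."""
    n_frames, _ = frame_probs.shape
    decisions = np.empty(n_frames, dtype=int)
    for t in range(n_frames):
        lo = max(0, t - window + 1)
        decisions[t] = frame_probs[lo:t + 1].mean(axis=0).argmax()
    return decisions

# Toy example: frame 2 is corrupted (uniform scores); the window absorbs it.
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.5, 0.5], [0.85, 0.15], [0.9, 0.1]])
print(sliding_window_decision(probs, window=3))  # -> [0 0 0 0 0]
```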

Anais Estendidos do X Latin-American Symposium on Dependable Computing (LADC Estendido 2021), 2021
Traffic sign detection and recognition is an important part of Advanced Driver Assistance Systems (ADAS), which aim to provide assistance to the driver, autonomous driving, or even monitoring of traffic signs for maintenance. In particular, misclassification of traffic signs may have a severe negative impact on the safety of drivers, infrastructures, and humans in the surrounding environment. In addition to shape and colors, there are many challenges to recognizing traffic signs correctly, such as occlusion, motion blur, camera failures, or physical alterations to the integrity of traffic signs. In the literature, different machine-learning-based classifiers and deep classifiers are utilized for Traffic Sign Recognition (TSR), with only a few studies considering sequences of frames to commit to a final decision about traffic signs. This paper proposes a TSR system that is robust against different attacks/failures such as camera-related failures, occlusion, broken signs, and patches inserted on traffic signs. We are pla...
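Failures and occlusions of the kind listed above can be injected synthetically to test robustness. Below is a hedged sketch of two such injectors operating on images as NumPy arrays in [0, 1]; the function names and parameters are illustrative, not the paper's actual failure models.

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_occlusion(img: np.ndarray, size: int = 8) -> np.ndarray:
    """Black square patch at a random position, mimicking a sticker or dirt."""
    out = img.copy()
    h, w = out.shape[:2]
    y, x = rng.integers(0, h - size), rng.integers(0, w - size)
    out[y:y + size, x:x + size] = 0
    return out

def inject_gaussian_noise(img: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    """Additive noise, a crude stand-in for sensor-level camera failures."""
    noisy = img + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0.0, 1.0)

sign = rng.random((32, 32, 3))          # placeholder for a real dataset image
corrupted = inject_gaussian_noise(inject_occlusion(sign))
```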

ACM/IMS Transactions on Data Science, 2021
Anomaly detection aims at identifying unexpected fluctuations in the expected behavior of a given system. It is acknowledged as a reliable answer to the identification of zero-day attacks, to such an extent that several ML algorithms suited to binary classification have been proposed throughout the years. However, the experimental comparison of a wide pool of unsupervised algorithms for anomaly-based intrusion detection against a comprehensive set of attack datasets had not been investigated yet. To fill this gap, we exercise 17 unsupervised anomaly detection algorithms on 11 attack datasets. The results allow elaborating on a wide range of arguments, from the behavior of the individual algorithm to the suitability of the datasets for anomaly detection. We conclude that algorithms such as Isolation Forests, One-Class Support Vector Machines, and Self-Organizing Maps are more effective than their counterparts for intrusion detection, while clustering algorithms represent a good alternative due to their lo...
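A minimal sketch of the experimental pattern described above, using two of the named algorithm families as implemented in scikit-learn: each model is fitted on normal data only (the unsupervised setting) and then evaluated on a mix of normal and anomalous points. The synthetic data and any scores it prints are placeholders, not the paper's datasets or results.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)
normal = rng.normal(0, 1, size=(500, 4))          # expected behaviour
attacks = rng.normal(4, 1, size=(50, 4))          # anomalous traffic
X = np.vstack([normal, attacks])
y_true = np.r_[np.ones(500), -np.ones(50)]        # sklearn convention: -1 = anomaly

for name, model in [("IsolationForest", IsolationForest(random_state=0)),
                    ("OneClassSVM", OneClassSVM(nu=0.1))]:
    model.fit(normal)                  # unsupervised: fit on normal data only
    y_pred = model.predict(X)          # returns +1 (normal) or -1 (anomaly)
    print(name, f1_score(y_true, y_pred, pos_label=-1))
```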

IEEE Transactions on Dependable and Secure Computing, 2019
Anomaly detection can infer the presence of errors without observing the target services, by detecting variations in the observable parts of the system on which the services reside. This is a promising technique in complex software-intensive systems, because either instrumenting the services' internals is exceedingly time-consuming, or encapsulation makes them inaccessible. Unfortunately, in such systems anomaly detection is often ineffective due to their dynamicity, which implies changes in the services or their expected workload. Here we present our approach to enhance the efficacy of anomaly detection in complex, dynamic software-intensive systems. After discussing the related challenges, we present MADneSs, an anomaly detection framework tailored to such systems that includes an adaptive multi-layer monitoring module. Monitored data are then processed by the anomaly detector, which adapts its parameters depending on the current system behavior. An anomaly alert is raised if the analysis conducted by the anomaly detector identifies unexpected trends in the data. MADneSs is evaluated through an experimental campaign on two service-oriented architectures; software faults are injected in the application layer and detected through monitoring of the underlying system layers. Lastly, we quantitatively and qualitatively discuss our results with respect to state-of-the-art solutions, highlighting the key contributions of MADneSs.
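As an illustration of a detector that adapts its parameters to the current system behavior, here is a minimal sliding-window thresholding sketch. MADneSs itself is considerably more elaborate (multi-layer monitoring, automatic parameter tuning), so this stands in only for the general idea, with invented constants.

```python
from collections import deque
import statistics

class AdaptiveDetector:
    """Flags an anomaly when an observation leaves the band
    mean +/- k * stdev computed over a sliding window, then keeps
    adapting the band to the evolving system behaviour."""
    def __init__(self, window: int = 50, k: float = 3.0):
        self.history = deque(maxlen=window)
        self.k = k

    def observe(self, value: float) -> bool:
        if len(self.history) >= 10:                 # warm-up before alerting
            mean = statistics.fmean(self.history)
            std = statistics.pstdev(self.history)
            anomalous = abs(value - mean) > self.k * max(std, 1e-9)
        else:
            anomalous = False
        self.history.append(value)                  # adapt to new behaviour
        return anomalous
```

Because the window slides, a workload change first raises an alert and is then absorbed into the baseline, mirroring the adaptivity requirement described above.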

Qualitative and Quantitative Validation of Legacy Distributed Algorithms through NekoC
In this paper we present NekoC, an extension to the Neko framework that allows C/C++ code to be included within the tool. Neko is a framework and communication platform for rapid prototyping of distributed algorithms; the same implementation of an algorithm can be exercised both on top of real and simulated networks, allowing simulative and experimental qualitative analyses and, using the NekoStat extension, quantitative analyses. The Neko framework is written in Java and is thus highly portable; however, it requires the translation into Java of existing algorithms, written in other languages, that one wants to analyze. The NekoC extension allows the direct integration of existing C/C++ algorithms into Neko applications, avoiding a translation into Java that may be error-prone. Moreover, the Java language may fail to correctly represent some low-level details of the algorithms. Since most running distributed algorithms are written in C/C++, by allowing a direct analysis of existing legacy C/C++ distributed algorithms NekoC widely extends the applicability of Neko and improves the faithfulness of the analyses performed. The paper describes the extensions made and illustrates the use of the tool on an algorithm whose Java translation does not preserve the original behavior.
Reliable and self-aware clock: complete description
This Technical Report provides a complete and exhaustive (even if preliminary) description of the Reliable and Self-Aware Clock (R&SAClock). The R&SAClock is a software clock for resilient time information that provides both the current time and the current synchronization uncertainty, i.e., a conservative estimation of the distance of the local clock from an external global time. It is a low-intrusive component that hides from users the existence of both the synchronization mechanisms in use (possibly more than one) and the software clock.
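A minimal sketch of how a self-aware clock can report a conservative synchronization uncertainty: the bound grows with the worst-case oscillator drift accumulated since the last synchronization. The constants and the class interface are assumptions for illustration, not the R&SAClock API.

```python
import time

class SelfAwareClock:
    """Returns the current time together with a conservative uncertainty:
    the offset bound at the last synchronization plus the worst-case drift
    accumulated since then. Constants are illustrative, not from the paper."""
    def __init__(self, offset_bound_s: float = 0.001, drift_bound: float = 50e-6):
        self.offset_bound = offset_bound_s   # bound on offset right after sync
        self.drift_bound = drift_bound       # max oscillator drift (s/s), e.g. 50 ppm
        self.last_sync = time.monotonic()

    def synchronized(self) -> None:
        self.last_sync = time.monotonic()    # called by the sync mechanism (e.g., NTP)

    def now_with_uncertainty(self) -> tuple[float, float]:
        elapsed = time.monotonic() - self.last_sync
        uncertainty = self.offset_bound + self.drift_bound * elapsed
        return time.time(), uncertainty

clock = SelfAwareClock()
t, u = clock.now_with_uncertainty()
print(f"time={t:.3f}s, uncertainty=+/-{u * 1e3:.3f}ms")
```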

Dependable Computing for Critical Applications 7
In this paper we focus on analytical modeling for the dependability evaluation of phased-mission systems. Because of their dynamic behavior, phased-mission systems offer challenges in modeling. We propose the modeling and evaluation of phased-mission system dependability through Deterministic and Stochastic Petri Nets (DSPN). The DSPN approach to phased-mission systems offers many advantages, concerning both the modeling and the solution. The DSPN model of the mission can be very concise, and it can be efficiently solved for dependability evaluation purposes. The solution procedure is supported by the existence of an analytical solution for the transient probabilities of the marking process underlying the DSPN model; this analytical solution can be fully automated. We show how the capabilities of DSPN models can deal with various peculiar features of phased-mission systems, including systems where the next phase to be performed can be chosen at the time the preceding phase ends.
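To illustrate the flavor of transient analysis across phases (the DSPN solution itself also handles deterministic transitions and richer marking processes), here is a minimal Markov sketch: each phase has its own generator matrix, and the state distribution at the end of one phase seeds the next. Rates and durations are invented for illustration.

```python
import numpy as np
from scipy.linalg import expm

# Two-state CTMC per phase: state 0 = operational, state 1 = failed.
def phase_generator(failure_rate: float) -> np.ndarray:
    return np.array([[-failure_rate, failure_rate],
                     [0.0,           0.0]])        # failure is absorbing

phases = [(phase_generator(1e-4), 10.0),   # (generator, phase duration in hours)
          (phase_generator(5e-4), 2.0),
          (phase_generator(1e-4), 10.0)]

p = np.array([1.0, 0.0])                   # mission starts operational
for Q, duration in phases:
    p = p @ expm(Q * duration)             # transient solution over the phase
print(f"mission reliability: {p[0]:.6f}")
```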
Hierarchical Modelling and Evaluation of Phased-Mission Systems

IEEE Transactions on Reliability, 2021
Model-based evaluation is extensively used to estimate the performance and reliability of dependable systems. Traditionally, these systems were small and self-contained, and the main challenge for model-based evaluation has been the efficiency of the solution process. Recently, the problem of specifying and maintaining complex models has increasingly gained attention, as modern systems are characterized by many components and complex interactions. Components share similarities but, at the same time, also exhibit variations in their behavior due to different configurations or roles in the system. From the modeling perspective, variations lead to replicating and altering a small set of base models multiple times. Variability is taken into account only informally, by defining a sample model and explaining its possible variations. In this article, we address the problem of including variability in performability models, focusing on stochastic activity networks (SANs). We introduce the formal definition of stochastic activity network templates (SAN-T), a formalism based on SANs with the addition of variability aspects. Differently from other approaches, parameters can also affect the structure of the model, such as the number of cases of activities. We apply the SAN-T formalism to the modeling of the backbone network of an environmental monitoring infrastructure. In particular, we show how existing SAN models from the literature can be generalized using the newly introduced formalism.
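The distinctive point, parameters that change the structure of the instantiated model rather than just its rates, can be illustrated with a small template sketch. The classes below are hypothetical stand-ins, not the SAN-T formalism.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ActivityTemplate:
    name: str
    rate: float

@dataclass(frozen=True)
class ModelTemplate:
    """Template whose parameter changes the *structure* of the instantiated
    model (the number of replicated activities), not just its rates."""
    base_activity: ActivityTemplate
    n_replicas: int

    def instantiate(self) -> list[ActivityTemplate]:
        return [ActivityTemplate(f"{self.base_activity.name}_{i}",
                                 self.base_activity.rate)
                for i in range(self.n_replicas)]

backbone = ModelTemplate(ActivityTemplate("link_failure", 1e-5), n_replicas=4)
print([a.name for a in backbone.instantiate()])
```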
Design, Methods, and Tools for ORC

IEEE Systems Journal, 2016
A dramatic shift in system complexity is occurring, with monolithic system designs being progressively replaced by modular approaches. In recent years this trend has been emphasized by the System of Systems (SoS) concept, in which a complex system or application is the result of the integration of many independent, autonomous Constituent Systems (CSs), brought together to satisfy a global goal under certain rules of engagement. The overall behavior of the SoS, emerging from such complex interactions and dependencies, poses several threats in terms of dependability, timeliness, and security, due to the challenging operating and environmental conditions caused by mobility, wireless connectivity, and the use of off-the-shelf components. Drawing on our experience in mobile safety-critical applications gained from three different research projects, in this paper we illustrate the challenges and benefits of adopting an SoS approach in designing, developing, and maintaining mobile safety-critical applications, and we report on some possible solutions.
Experimental evaluation of the QoS of Failure Detectors on Wide Area Network

2012 Ninth European Dependable Computing Conference, 2012
Low-cost wireless solutions for safety-critical applications are attractive for extending safety-critical operation to new application areas. This work assesses the feasibility of providing synchronous and time-bounded communication to standard IEEE 802.11 devices with low-effort modifications. An existing protocol for time-bounded communication in wireless systems is adapted to a generic safety-critical application with low bandwidth requirements but strict bounds on timing behavior. Experimental and simulation studies are conducted in which the protocol is implemented on top of the IEEE 802.11e Distributed Coordination Function (DCF). The experimental results for packet loss ratio, communication delays, and broadcast completion are used to calibrate a stochastic simulation model that allows extrapolation of the expected long-term performance of the protocol. Both the experimental results and the simulation extrapolation show that the necessary availability requirements can be met with 802.11e prioritization in the investigated cross-traffic and interference scenarios.
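As a hedged illustration of the kind of stochastic model that such measurements can calibrate, the sketch below estimates broadcast completion by Monte Carlo from a per-packet loss probability and a retry budget. Independence of losses and all parameter values are simplifying assumptions, not the paper's model.

```python
import random

def broadcast_completion(p_loss: float, retries: int, receivers: int,
                         trials: int = 100_000) -> float:
    """Monte Carlo estimate of the probability that every receiver gets the
    message within the retry budget, assuming independent per-packet losses."""
    ok = 0
    for _ in range(trials):
        if all(any(random.random() > p_loss for _ in range(retries + 1))
               for _ in range(receivers)):
            ok += 1
    return ok / trials

# Loss ratio and retry budget would come from testbed measurements.
print(broadcast_completion(p_loss=0.05, retries=2, receivers=5))
```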

Proceedings of the International Conference on Dependable Systems and Networks (DSN 2000)
Multiple-Phased Systems, whose operational life can be partitioned into a set of disjoint periods, called "phases", include several classes of systems such as Phased Mission Systems and Scheduled Maintenance Systems. Because of their deployment in critical applications, the dependability modeling and analysis of Multiple-Phased Systems is a task of primary relevance. However, the phased behavior makes the analysis of Multiple-Phased Systems extremely complex. This paper is centered on the description and application of DEEM, a dependability modeling and evaluation tool for Multiple-Phased Systems. DEEM supports a powerful and efficient methodology for the analytical dependability modeling and evaluation of Multiple-Phased Systems, based on Deterministic and Stochastic Petri Nets and on Markov Regenerative Processes.

Responsive Computer Systems: Steps Toward Fault-Tolerant Real-Time Systems, 1995
This paper proposes a framework for software-implemented, adaptive fault tolerance in a real-time context. It extends previous work in two main ways: by including features that explicitly address real-time constraints, and by providing a flexible and adaptable control strategy for managing redundancy within application software modules. This redundancy-management design is introduced as an intermediate level between the system design (which may itself consist of multiple levels of design) and the low-level, non-redundant application code. Application designers can specify fault tolerance strategies independently for the individual application modules, including adaptive strategies that take into account available resources, deadlines, and observed faults. They can use appropriate design notations to notify the scheduling mechanisms about the relative importance of tasks, their timing requirements, and both their worst-case and actual usage of resources. Run-time efficiency can thus be improved while preserving a high degree of predictability of execution.

2005 International Conference on Dependable Systems and Networks (DSN'05)
This paper describes an experiment performed on a Wide Area Network to assess and fairly compare the Quality of Service (QoS) provided by a large family of failure detectors. Failure detectors are a popular middleware mechanism used for improving the dependability of distributed systems and applications. Their QoS greatly influences the QoS that upper layers may provide. It is thus of utmost importance to equip a system with an appropriate failure detector and to properly tune its parameters for the most desirable QoS to be provided. The paper first analyzes the QoS indicators and the structure of push-style failure detectors, and then introduces the choices for estimators and safety margins used to build several (30) failure detectors. The experimental setup designed and implemented to allow a fair comparison of the QoS of the several alternatives in a real, representative experimental setting is then described. Finally, the results obtained through the experiments and their interpretation are provided.
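A minimal sketch of a push-style failure detector in the spirit of this family: the next heartbeat arrival is estimated from recent history and extended by a safety margin alpha, which is the main knob in the QoS trade-off between detection time and false suspicions. The class and its simple mean-based estimator are illustrative assumptions, not one of the paper's 30 configurations.

```python
from collections import deque
import statistics

class PushFailureDetector:
    """Push-style failure detector: estimate the next heartbeat arrival from
    recent history and add a safety margin alpha. The monitored process is
    suspected if no heartbeat arrives by the resulting freshness point."""
    def __init__(self, period: float, alpha: float, history: int = 100):
        self.period = period                 # heartbeat sending period
        self.alpha = alpha                   # safety margin (QoS tuning knob)
        self.arrivals = deque(maxlen=history)

    def heartbeat(self, arrival_time: float) -> None:
        self.arrivals.append(arrival_time)

    def next_freshness_point(self) -> float:
        # Estimated next arrival = last arrival + mean inter-arrival gap.
        gaps = [b - a for a, b in zip(self.arrivals, list(self.arrivals)[1:])]
        expected_gap = statistics.fmean(gaps) if gaps else self.period
        return self.arrivals[-1] + expected_gap + self.alpha

    def suspects(self, now: float) -> bool:
        return bool(self.arrivals) and now > self.next_freshness_point()
```

A larger alpha lowers the false-suspicion rate at the cost of slower detection, which is exactly the trade-off the experimental comparison explores.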
Proceedings 1997 High-Assurance Engineering Workshop
This paper deals with the modelling and evaluation of mission-phased systems devoted to space applications. We propose a two-level hierarchical method that allows such systems to be modelled while mastering the complexity of the analysis. Our approach considers a separate modelling and resolution of the phases, and of the dependencies among phases caused by the usage of the same system components in the different phases. Moreover, it accounts for a dynamic choice on whether some phases have to be skipped. The proposed method results in great flexibility, easy applicability, and reusability of the defined models. Furthermore, it permits not only obtaining information on the overall behaviour of the system, but also focusing on each single phase, and hence allows system dependability bottlenecks to be detected.
Proceedings of EUROMICRO 96. 22nd Euromicro Conference. Beyond 2000: Hardware and Software Design Strategies
The need for efficient implementation, safety, and performance requires early validation in the design of computer control systems. Detailed timing and reachability analysis in the development process is particularly important when designing equipment or algorithms for high performance and availability. In this paper we present a case study related to the early validation of control systems modeled by data flow networks. The model is validated indirectly, as it is transformed into Petri nets in order to utilize the tools available for Petri nets.
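For a sense of what the Petri-net side of such a transformation buys, here is a minimal token-game sketch: once the data flow model is expressed as transitions with consumed/produced tokens, enabling and firing can be checked mechanically, which is the basis for reachability analysis. The two-transition net is an invented toy, not the case study's model.

```python
# Hypothetical two-transition net; each transition lists (consumed, produced).
transitions = {
    "produce": ({"idle": 1}, {"data": 1}),
    "consume": ({"data": 1}, {"idle": 1}),
}

def enabled(marking: dict, consumed: dict) -> bool:
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in consumed.items())

def fire(marking: dict, consumed: dict, produced: dict) -> dict:
    """Firing removes the consumed tokens and adds the produced ones."""
    out = dict(marking)
    for p, n in consumed.items():
        out[p] -= n
    for p, n in produced.items():
        out[p] = out.get(p, 0) + n
    return out

marking = {"idle": 1, "data": 0}
for _ in range(4):                    # bounded exploration of the token game
    for name, (c, pr) in transitions.items():
        if enabled(marking, c):
            marking = fire(marking, c, pr)
            print(name, marking)
            break
```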
2008 IEEE International Conference on Dependable Systems and Networks With FTCS and DCC (DSN), 2008
This workshop summary gives a brief overview of the workshop on "Resilience Assessment and Dependability Benchmarking", held in conjunction with the 38th IEEE/IFIP International Conference on Dependable Systems and Networks (DSN 2008). The workshop aims at the presentation and exchange of ideas from the worldwide research community and at fostering discussions, in order to answer the need for improving trustworthiness and to understand the current risks inherent to computer systems and infrastructures. In particular, the workshop addresses key research challenges related to effective and accurate methods for measuring, assessing, and benchmarking dependability and resilience.