Model-Based Safety-Cases for Software-Intensive Systems
https://doi.org/10.1016/J.ENTCS.2009.09.007…
Abstract
Safety cases are becoming increasingly important for software certification. Models play a crucial role in building and combining information for the safety case. This position paper sketches an ideal model-based safety case built around defect hypotheses and failure characterisations. From this, open research issues are derived.
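To make the idea concrete, the following is a minimal sketch, in Python with hypothetical class and field names, of how defect hypotheses, failure characterisations, and supporting evidence could be linked inside such a safety case; the paper itself does not prescribe this particular structure.

    from dataclasses import dataclass
    from typing import List

    @dataclass(frozen=True)
    class DefectHypothesis:
        # A postulated defect in one component of the software-intensive system.
        component: str
        description: str

    @dataclass(frozen=True)
    class FailureCharacterisation:
        # How the defect, if present, would manifest at the system boundary.
        hypothesis: DefectHypothesis
        failure_mode: str        # e.g. "omission", "value", "timing"
        worst_case_effect: str

    @dataclass(frozen=True)
    class Evidence:
        # An evidence item (test result, proof, review) addressing a hypothesis.
        hypothesis: DefectHypothesis
        kind: str                # e.g. "test", "proof", "inspection"
        reference: str

    def uncovered(hypotheses: List[DefectHypothesis],
                  evidence: List[Evidence]) -> List[DefectHypothesis]:
        # Hypotheses for which the safety case still lacks supporting evidence.
        addressed = {e.hypothesis for e in evidence}
        return [h for h in hypotheses if h not in addressed]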
Related papers
Today, Model-Based Safety Analysis processes are becoming more and more widespread for performing the safety analysis of a system. However, to our knowledge, there is no formal testing approach to ensure that the formal model is compliant with the real system. In this paper, we study AltaRica models. We present a general process to construct and validate an AltaRica formal model. The focus is on the validation phase, i.e. verifying the compliance between the model and the real system. To this end, the proposed process recommends building a specification for the AltaRica model. The validation problem is then transformed into a classical verification problem between an implementation and a specification. We present the first phase of a method to verify the compliance between the model and the specification.
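That compliance check can be pictured as trace inclusion: every behaviour observed on the real system must be reproducible by the specification. The sketch below (Python, illustrative names only, not the AltaRica toolchain) checks observed event traces against a simple transition-relation specification.

    from typing import Dict, List, Set, Tuple

    # Specification as a (possibly non-deterministic) labelled transition system.
    # Keys are (state, event); values are the allowed successor states.
    Spec = Dict[Tuple[str, str], Set[str]]

    def accepts(spec: Spec, initial: str, trace: List[str]) -> bool:
        # True if the specification can reproduce the observed event trace.
        current: Set[str] = {initial}
        for event in trace:
            current = {s2 for s in current for s2 in spec.get((s, event), set())}
            if not current:
                return False
        return True

    def compliant(spec: Spec, initial: str, observed: List[List[str]]) -> bool:
        # Compliance as trace inclusion: every observed trace is allowed by the spec.
        return all(accepts(spec, initial, t) for t in observed)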
Lecture Notes in Computer Science, 2007
Some high-integrity software systems require the rigorous validation of safety properties. Assessing whether software architectures are able to meet these requirements is of great interest: to avoid the risk that the implementation does not fulfill the requirements due to a bad design, and to reduce the development cost of the safety-critical parts of the system. Safety analyses such as FMECA and FTA are two methods used during preliminary safety assessments. We have implemented tools to automatically generate safety analyses from the models of the architecture: a UML profile for safety, modeling languages to express safety analyses, and a model transformation chain. Safety analysts can use these tools to annotate the models, analyze the architecture, and recommend to system engineers mitigation means for improving the architecture.
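As a rough illustration of such a transformation chain (not the authors' tooling), the sketch below flattens failure-mode annotations attached to architecture components into FMEA-style rows; all names are hypothetical.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class FailureAnnotation:
        failure_mode: str      # e.g. "loss of output"
        local_effect: str
        severity: int          # e.g. 1 (minor) .. 4 (catastrophic)

    @dataclass
    class Component:
        name: str
        annotations: List[FailureAnnotation]

    def to_fmea_rows(architecture: List[Component]) -> List[dict]:
        # Flatten the safety annotations on the architecture model into FMEA rows.
        return [
            {"component": c.name,
             "failure_mode": a.failure_mode,
             "local_effect": a.local_effect,
             "severity": a.severity}
            for c in architecture
            for a in c.annotations
        ]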
2018
The position of software with regard to global system safety is subject to significant variations among the various application domains and their safety standards. As a consequence, the position regarding whether, how and to what extent software safety analyses could or should contribute to the global safety assessment also varies. In Civil Aviation [ARP 4754A; ARP 4761; DO 178C], Nuclear [IEC 61513, IEC 60880] and to some extent Space [ECSS Q40; ECSS Q80], safety analyses are performed at system level and on functions, subsystems and equipment, but not in the form of dedicated safety analyses applied to software. In these domains, the rationale is that software contributes to system safety through adherence to software development and validation rules, i.e. through an argument of confidence in software correctness to an extent adapted to the consequences of failures. However, it is worth noting that the assessment of the consequences of failures, and hence the determination of the Development Assurance Level, Software Criticality Category, etc., results from safety analyses performed at system level and not at software level. Conversely, in domains such as railway [EN 50129] or automotive [ISO 26262], although the overall safety rationale is very similar, dedicated safety analyses applied to software are additionally required. In the Base Safety Standard [IEC 61508], software safety analysis encompasses a set of normative means supporting the functional safety assessment. In the process industries domain [IEC 61511], the concept is only emerging. In this paper, drawing on our experience in safety practice and standards in several domains and on discussions within a working group dedicated to cross-domain comparison of safety standards, we propose a description of classical software safety analysis techniques and discuss the nature of the arguments they can provide, according to their various objectives and their place in the global process and safety argumentation. We also discuss why the increase in software complexity has progressively made the completeness of system functional safety requirements an important issue. Inspired by STPA and by contract-based design in software engineering, we make constructive propositions to mitigate the incompleteness risk. We sketch out a general specification setting in which another kind of "software safety" analysis would come into play. We conclude on whether and how this could fit into the global system safety assessment.
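An assume/guarantee contract of the kind alluded to can be sketched as a pair of predicates over the system state; in the illustrative Python below (not tied to any of the cited standards or tools), a contract is violated only when its assumption holds and its guarantee does not.

    from typing import Callable, Dict

    State = Dict[str, float]
    Predicate = Callable[[State], bool]

    def contract_violated(assumption: Predicate, guarantee: Predicate,
                          state: State) -> bool:
        # A contract "assumption => guarantee" is violated in a state exactly
        # when the assumption holds but the guarantee does not.
        return assumption(state) and not guarantee(state)

    # Hypothetical example: whenever a positive braking force is commanded,
    # the achieved force must reach at least 80% of the command.
    violated = contract_violated(
        assumption=lambda s: s["cmd_force"] > 0.0,
        guarantee=lambda s: s["act_force"] >= 0.8 * s["cmd_force"],
        state={"cmd_force": 100.0, "act_force": 72.0},
    )  # True: the assumption holds and the guarantee fails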
IEEE Systems Journal, 2010
Safety-critical software-intensive systems of systems require rigorous verification and validation to ensure that they function as per requirements. Unlike verification, validation is typically an ill-defined activity for software development. This paper presents a well-defined validation metrics framework which uses hazard analysis, and the derived software requirements for mitigating the identified hazards, as proxies in gauging the sufficiency of the software safety requirements early in the software development process. Moreover, traditional hazard analysis techniques are insufficient to deal with the complexity and size of systems of systems. This paper examines the nature and types of hazards associated with systems of systems and presents a new technique for analyzing one type of emergent hazard known as an interface hazard.
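One simple proxy metric of this kind is the fraction of identified hazards traced to at least one mitigating software safety requirement; the sketch below (Python, illustrative only, not the paper's framework) computes such a coverage figure.

    from typing import Dict, List

    def hazard_mitigation_coverage(trace: Dict[str, List[str]]) -> float:
        # trace maps each hazard id to the software safety requirements derived for it.
        # Returns the fraction of hazards with at least one mitigating requirement,
        # a simple proxy for the sufficiency of the safety requirements.
        if not trace:
            return 1.0
        mitigated = sum(1 for reqs in trace.values() if reqs)
        return mitigated / len(trace)

    # Example: two of three hazards have derived requirements -> coverage of about 0.67.
    coverage = hazard_mitigation_coverage({
        "H1": ["SR-12", "SR-13"],
        "H2": [],
        "H3": ["SR-20"],
    })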
Nowadays, certification of safety-critical software systems requires submission of safety assurance documents, often in the form of safety cases. A safety case is a justification argument used to show that a system is safe for a particular application in a particular environment. Different argumentation strategies are applied to derive the evidence for a safety case. They allow us to support a safety case with such evidence as results of hazard analysis, testing, simulation, etc. On the other hand, application of formal methods for development and verification of critical software systems is mandatory for their certification. In this paper, we propose a methodology that combines these two activities. Firstly, it allows us to map the given safety requirements into elements of the formal model to be constructed, which is then used for verification of these requirements. Secondly, it guides the construction of a safety case demonstrating that the safety requirements are indeed met. Con...
International Journal of Embedded Systems, 2016
Software product lines (SPLs) provide an engineering basis for the systematic reuse of artefacts used for the development, assessment, and management of critical embedded systems. Hazards and their causes are safety properties that may change according to the selection of variants in a particular SPL product. Therefore, safety analysis assets such as fault trees and failure modes and effects analysis (FMEA) cannot be directly reused, because they depend on the selection of product variants. In this paper, model-based safety analysis techniques and SPL variability management tools are used together to reduce the effort of product safety analysis by reusing SPL hazard analysis and providing automatic safety analysis for each SPL product. The proposed approach is illustrated using the Hephaestus variability management tool and the HiP-HOPS model-based safety analysis tool to generate fault trees and FMEA for products of an automotive hybrid braking system SPL. The safety assessment artefacts generated by the approach provide feedback to the SPL development process, helping safety engineers make decisions earlier in the development lifecycle.
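The variant dependence of the safety artefacts can be pictured as projecting the product line's failure annotations onto a chosen feature selection before the per-product analysis runs; the sketch below uses hypothetical names and is not the Hephaestus or HiP-HOPS tooling.

    from dataclasses import dataclass
    from typing import List, Set

    @dataclass
    class FailureData:
        component: str
        failure_mode: str
        requires_features: Set[str]   # variants in which this failure data applies

    def product_failure_data(spl_data: List[FailureData],
                             selected_features: Set[str]) -> List[FailureData]:
        # Project the product-line safety annotations onto one product configuration;
        # the result would then feed the per-product fault tree / FMEA synthesis.
        return [d for d in spl_data
                if d.requires_features <= selected_features]

    # Example: keep only the annotations valid for a four-wheel-braking variant.
    braking_variant = product_failure_data(
        [FailureData("WheelNodeCtrl", "omission", {"four_wheel_braking"}),
         FailureData("BrakeUnit", "value error", set())],
        selected_features={"four_wheel_braking"},
    )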
System safety analysis techniques are well established and are used extensively during the design of safety-critical systems. Despite this, most of the techniques are highly subjective and dependent on the skill of the practitioner. Since these analyses are usually based on an informal system model, it is unlikely that they will be complete, consistent, and error-free. In fact, the lack of precise models of the system architecture and its failure modes often forces safety analysts to devote much of their effort to gathering architectural details about the system behavior from several sources and embedding this information in safety artifacts such as fault trees.
Communications in Computer and Information Science, 2013
Most formal assessment and evaluation techniques and standards assume that software can be analysed like any physical item. In safety-critical systems, software is an important component providing functionality. Often it is also the most difficult component to assess. A balanced use of process assessment and product evaluation methods is needed, because the lack of transparency in software must be compensated by a more formal development process. A safety case is an effective approach to demonstrating safety, and both process and product evidence are then necessary. Safety is also a likely candidate to be approached as a process quality characteristic. Here we present a tentative set of process quality attributes that support the achievement of the safety requirements of a software product.
19th Australian Conference on Software Engineering (ASWEC 2008), 2008
Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability
The verification of safety requirements becomes crucial in critical systems where human lives depend on their correct functioning. Formal methods have often been advocated as necessary to ensure the reliability of software systems, albeit at considerable effort. In any case, such an effort is cost-effective when verifying safety-critical systems. Often, safety requirements are expressed using safety contracts, in terms of assumptions and guarantees. To facilitate the adoption of formal methods in the safety-critical software industry, we propose a methodology based on well-known modelling languages such as the unified modelling language and the object constraint language. The unified modelling language is used to model the software system, while the object constraint language is used to express the system safety contracts within the unified modelling language. In the proposed methodology, a unified modelling language model enriched with object constraint language constraints is transforme...
