Citation: (2004), "CEA gives Caveat on the safety of critical software", Aircraft Engineering and Aerospace Technology, Vol. 76 No. 3. https://doi.org/10.1108/aeat.2004.12776cab.009
Publisher: Emerald Group Publishing Limited
Copyright © 2004, Emerald Group Publishing Limited
CEA gives Caveat on the safety of critical software
Keywords: Safety, Software, Aircraft
Few motorists realise that their car is most probably packed with as much software as an Apollo space rocket of the 1960s and 1970s! But relying on software has its drawbacks... Bearing this in mind, researchers at CEA of France have developed an innovative software tool called Caveat, which enhances safety control of critical software across industry. Caveat is the result of more than 10 years of research at the Laboratory for Technology and Systems Integration (LIST) at the Commissariat à l'Energie Atomique (CEA, the public-sector R&D establishment). The development of this tool was funded by CEA jointly with aerospace group Airbus France and power generator EDF. These two partners tested the tool throughout its development, ensuring that it meets the various needs of industry.
Software is a feature of our daily lives, even though ordinary users are often unaware of it. Sometimes it contains bugs that can be as lethal as the most dangerous bacteria and viruses known to biology: dedicated software is used in aircraft control systems, for example, and tragedy is guaranteed if it goes wrong. Airbus France is already using Caveat to verify software and systems under development; its first application is a subset of the safety-critical flight-control software on the A380 programme. For its part, EDF is using Caveat to verify the reliability of critical software developed by subcontractors and supplied in turnkey systems.
Software is said to be “critical” when any error in its execution can have an impact on people's safety or serious repercussions for the system that it controls: this is the case, for example, for the command-and-control systems in a nuclear power station, for flight controls aboard an aircraft, or for the control system aboard a driverless train. In critical systems, the IT systems have to be totally safe and reliable, as is the case in other fields of engineering such as mechanical engineering, electrical engineering, and so on. This necessity is all the more important in view of the increasing computerisation of everyday systems.
To ensure the safety of critical software, designers follow a rigorous approach in their choice of developers (highly qualified and experienced personnel), development methods (choice of development language, operating system, etc.) and verification strategy. Testing is the most widely used verification technique: it consists of performing multiple executions (known as dynamic software analysis) with input data estimated to be representative of the conditions of use. The number of tests needed to guarantee the software's complete verification is too large to be practicable, even with the most powerful computers, so the cases tested must be limited to those estimated to be representative of the system's real uses.
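To give a sense of scale (a hypothetical illustration, not taken from the article): even a trivial C routine with two 32-bit integer inputs admits 2^64, or roughly 1.8 x 10^19, distinct input pairs, so exhaustive testing is out of reach.

    #include <stdint.h>

    /* Even this trivial routine has 2^32 * 2^32 = 2^64 possible input
     * pairs. At a billion test executions per second, exhaustive
     * dynamic testing would take about 585 years, which is why test
     * campaigns must instead sample inputs judged representative of
     * real use. */
    int32_t saturating_add(int32_t a, int32_t b)
    {
        int64_t sum = (int64_t)a + (int64_t)b;
        if (sum > INT32_MAX) return INT32_MAX;
        if (sum < INT32_MIN) return INT32_MIN;
        return (int32_t)sum;
    }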
The verification stage of software development represents on average half the costs of developing critical systems. In aeronautics, it can be as high as 80 per cent of the cost. It is therefore a key factor in the commercialisation process in terms of cost control and time to market. These constraints are further increased by competitive pressures. In addition, correcting an error affecting a given software program is vastly more expensive if it is detected late. Early detection of errors can be carried out before the testing stage with the use of static analysis tools, i.e. tools that do not execute the software. The result of this analysis is valid for all software operations, whatever the input values, and not just for certain sets of values as is the case for testing. It is with precisely this approach that LIST, at the CEA, has developed Caveat, an innovative software tool that enhances safety control in critical software across industry.
Caveat is based on proof techniques and on static software analysis. It mainly detects errors due to a result not having the expected properties, and it also highlights possible errors in the execution of the software, such as infinite loops and division by zero. Its principle of operation is to compare the software to be verified, which may run to tens of thousands of lines of code, with the property expected of the result, expressed as a mathematical formula only a few lines long. From this comparison, Caveat formulates a mathematical condition: for example, for the result to be as expected, the parameter "a" must be less than zero. The user then verifies that this parameter is indeed less than zero in all cases; if so, this is taken as proof that the result satisfies the expected property. If the proof fails, the tool gives indications as to the reasons for the failure; in most cases the cause is insufficient assumptions, which then need to be clarified.
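As a minimal sketch of this style of reasoning (the annotation notation and function are hypothetical, not Caveat's own syntax): the expected property is stated as a short logical formula alongside the code, and the tool works backwards to the condition under which the property holds.

    /* Expected property (hypothetical notation): result >= 0.
     * From the code, a proof tool can derive the condition under which
     * the property holds in all cases: the parameter gain must be >= 0
     * (arithmetic overflow is ignored for the sake of the sketch).
     * The user then checks that condition against the conditions of
     * use; if it holds, the property is proved, and if the proof fails
     * it points to the missing assumption. */
    int clipped_scale(int a, int gain)
    {
        if (a < 0)
            a = 0;           /* negative inputs are clipped to zero */
        return a * gain;     /* result >= 0 provided gain >= 0 */
    }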
While other static analysis tools on the market simply detect possible errors, such as division by zero, Caveat is said to be able to detail the various cases in which the error could occur. It therefore provides a finer analysis of the causes of potential errors, and it can also demonstrate that an error will not occur given the conditions of software operation. This refinement of the result is obtained through several interactions with the user, whereas other tools are entirely automatic.
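A hypothetical example of the difference: a coarse checker flags every division as a possible division by zero, whereas a case-by-case analysis under the stated conditions of operation can prove the error absent.

    /* Assumption taken from the conditions of operation: n >= 1.
     * A coarse checker flags the division below as a possible division
     * by zero. A finer, case-by-case analysis shows that samples equals
     * n on exit from the loop, and since n >= 1 the divisor can never
     * be zero, so the error is proved not to occur. */
    double average(const double *buf, int n)
    {
        double sum = 0.0;
        int samples = 0;
        for (int i = 0; i < n; i++) {
            sum += buf[i];
            samples++;
        }
        return sum / samples;   /* samples == n >= 1, never zero */
    }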
Details available from: FTPB. Tel: +44 (0) 207 235 5330; Fax: +44 (0) 207 235 2773.