Flying safely with AI - certification in aviation

The rising number of passengers and the associated air traffic are increasingly pushing the civil aviation system to its capacity limits. The use of artificial intelligence (AI) could play an important role in meeting the challenges of this development. As part of the KIEZ 4-0 project, fortiss has therefore developed concepts together with its project partners from aviation and science to enable the certification of the safety of AI-supported applications in aviation. The results of the project were presented at the online closing event at the beginning of February 2024.

Feb 12, 2024, 4:07:08 PM
Silvia Hervé, fortiss - Research Institute of the Free State of Bavaria for software-intensive systems

Until now, the use of complex AI applications in aviation has been held back by the lack of appropriate certification procedures. For this technology to be deployed there, procedures must be developed to certify AI systems and thus demonstrate their safety. The effort involved is considerable, as aircraft components are particularly safety-critical and are therefore subject to very high quality and safety standards.

The KIEZ 4-0 project (Artificial Intelligence European Certification under Industry 4.0) has made a significant contribution to showing how AI systems can be certified in aviation. Using demonstrators and use cases, the project presented a method for certifying the reliability of avionics applications. In addition, it analyzed to what extent AI is suitable for certification in aviation and which adjustments are necessary.

The joint project was funded by the Federal Ministry for Economic Affairs and Climate Action (BMWK) as part of the Aviation Research Program (LuFo VI-1) and led by Airbus Defence and Space GmbH. In addition to Airbus, the German Aerospace Center (DLR), several Fraunhofer Institutes, German Air Traffic Control (DFS) and other partners were involved in the three-year project. The consortium also worked with the European Union Aviation Safety Agency (EASA) to promote certification at an international level.

Incorporating certification requirements in early development phases

Dr. Yuanting Liu, head of the fortiss competence field Human-centered Engineering, led the project and coordinated the collaboration between researchers from the Software Dependability and Human-centered Engineering competence fields. The Software Dependability competence field developed formal verification-based solutions, employing model checking and theorem proving techniques, to prove the safety and reliability of AI-based systems.
Specifically, these methods were applied to fulfill the mandatory proof requirements for assessing the safety and correctness of AI systems in avionics, ensuring that the industry's high safety standards are maintained or even improved. The fortiss experts then applied these methodologies to the algorithms implementing the collision-avoidance functionality of the Temporal Planning Network (TPN) Planner provided by Airbus. This system is used to create and manage flight plans in order to ensure safe and efficient air traffic. Through model checking and theorem proving, they verified the main algorithms responsible for checking violations of static and dynamic no-fly zones, thereby identifying design errors and potential violations in the code.

The Human-centered Engineering competence field worked on specific guidelines for certifying such systems. Current certification of human factors focuses primarily on avoiding human error. The application of AI in more complex situations, however, introduces a new potential source of error, so the potential for AI errors must be considered during system development. Rather than focusing on how AI can optimally fulfill its task, developers should first clarify what task the AI should fulfill in the first place. This question is crucial and should be addressed early on by involving operators in a human-centered design approach.

The concept was explored in the use case of diversions, where flights cannot reach their planned destination, for instance due to bad weather, a medical emergency or a technical failure. AI can support pilots in deciding on an alternative airport. Here it is important to define the AI's task in such a way that pilots remain fully involved in the decision-making process, so that they can recognize possible AI errors and compensate for them with their human contextual knowledge.
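The kind of exhaustively checkable property involved in verifying a no-fly-zone checker can be illustrated with a toy sketch. The model below is purely hypothetical (the TPN Planner's actual algorithms and the fortiss verification models are not public): a static no-fly zone is modeled as an axis-aligned rectangle on a coarse grid, and a model-checking-style sweep confirms that the checker agrees with the zone definition on every reachable state.

```python
from dataclasses import dataclass
from itertools import product

# Hypothetical, heavily simplified model for illustration only.
@dataclass(frozen=True)
class NoFlyZone:
    x_min: int
    x_max: int
    y_min: int
    y_max: int

    def contains(self, x: int, y: int) -> bool:
        """Ground-truth zone definition."""
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

def path_violates(zone: NoFlyZone, waypoints) -> bool:
    """Checker under verification: flags a plan whose waypoints enter the zone."""
    return any(zone.contains(x, y) for x, y in waypoints)

def exhaustive_check(zone: NoFlyZone, grid: int) -> bool:
    """Model-checking-style sweep: for every single-waypoint plan on the
    grid, the checker must agree with the zone definition."""
    return all(
        path_violates(zone, [(x, y)]) == zone.contains(x, y)
        for x, y in product(range(grid), repeat=2)
    )

zone = NoFlyZone(2, 4, 2, 4)
assert exhaustive_check(zone, grid=8)           # property holds on all grid states
assert path_violates(zone, [(0, 0), (3, 3)])    # a plan entering the zone is flagged
assert not path_violates(zone, [(0, 0), (1, 5)])
```

Real verification tools operate on far richer models (continuous trajectories, dynamic zones, timing), but the principle is the same: state a safety property and establish it over the entire state space rather than by testing samples.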
Recommendations for the certification of components with symbolic AI, and for human factors in particular, were derived from this work and shared with EASA.

AI visions for avionics

The realization of future aviation concepts such as single-pilot aircraft and air taxis poses a particular challenge, especially with regard to the integration of AI systems. The interaction between the AI and the human pilot or passenger plays a decisive role here. This undertaking requires not only further technological development, but also a review and possibly an adaptation of certification methods and processes. The specific requirements concern not only aircraft technology, but also the interaction with airports and air traffic control, to ensure the smooth functioning of the complex overall system.

Contact for scientific information:

Dr. Yuanting Liu
Head of Competence Field Human-centered Engineering
fortiss GmbH, Research Institute of the Free State of Bavaria for software-intensive systems
T: +49 (89) 3603522 427
liu@fortiss.org