Two years ago – in June 2019 – I attended the joint EASA/FAA Conference in Cologne on behalf of ECA. My mission: find out how aircraft certification would evolve after the two fatal crashes and subsequent grounding of the B737MAX. I was one of only a few pilots present, attending alongside aircraft manufacturers (Boeing, Airbus, Embraer, ATR...) and OEMs (Honeywell, Thales, Rockwell-Collins...) curious to see what new requirements they would face in the future. To be honest, I didn’t learn much more about the MAX than what was already publicly available, since most of the discussions between regulators and the manufacturer were probably held behind closed doors. But what I learned about the efforts on Reduced Crew Operations (RCO) was appalling.
Hundreds of millions of Euros are being spent on autonomous flight research, as was confirmed by Filipe Verhaeghe (CEO of UNMANNED) during the Man vs. Machine webinar organised by BeCA earlier this year. Sure, we have seen rapid advances in drone technology over the last decade, and it is nice to know that these things have a homing function in case of malfunction, but I am talking about passenger aircraft. Pushed by the need for improved urban mobility, companies like Volocopter already have flying prototypes and have just obtained prerequisite EASA approval for production.
This is not a one-off.
The market for these Electric Vertical Take-Off & Landing (eVTOL) platforms is booming, and cities seem to be lining up to welcome this new technology. While most companies foresee having a (single) safety pilot on board at the start of their operations, their ultimate objective is to move to fully autonomous vehicles, possibly with a remote override. But this trend towards increased autonomy is not limited to aerial taxis.
During the last ICAO Air Navigation Commission (May 2021), Airbus presented its vision for the future, replacing the concept of ‘reduced’ crew operations with two new (better-sounding) acronyms: SiPO (Single Pilot Operations) and eMCO (Extended Minimum Crew Operations). And if you think this is a distant future, think again. In early 2020, Airbus completed its Autonomous Taxi, Take-Off, and Landing (ATTOL) project, during which an A350-1000XWB performed normally pilot-flown manoeuvres entirely on its own, while just recently Cathay Pacific announced a partnership targeting 2025 for single-pilot cruise flight operations.
Replacing pilots requires new technology driven by – you guessed it – Artificial Intelligence (AI). Thanks to the increased computing power available today, Machine Learning (ML) promises us unprecedented efficiency gains and autonomous decision making. I don’t deny that AI has great potential and already powers some interesting applications. But aren’t we moving too fast? Do we understand its limitations? And what about ethical issues? Who is responsible when a self-learning system causes an accident? While some think that having only a single pilot is a terrible idea, EASA is convinced this is the way forward. One month after Airbus’ ATTOL demonstration, the Agency conveniently published its AI Roadmap, proposing a very ambitious timeline that confirms Airbus’ target date of 2025 for the approval of single-pilot cruise flight operations and aims for certification of pilotless Commercial Air Transport (CAT) aircraft as soon as 2030!
The roadmap describes three levels of AI/ML applications: LEVEL 1: assistance to the human; LEVEL 2: human/machine collaboration; LEVEL 3: (more) autonomous machine. At the most ‘advanced’ level this means that while “the human is in the loop at design and oversight time”, the “machine performs functions, with no human intervention in operations”. Let’s be clear: pilots welcome reliable and well-designed technology. Without it, the aviation industry would never have achieved the enviable safety levels we have today. Safety nets such as (E)GPWS, TCAS and, more recently, ROPS play a significant role in avoiding accidents.
However, all of these systems are based on classic technology, where deterministic computer algorithms respond to sensor information according to predefined rules. And even though this approach has served us for decades, the increasing complexity of today’s aeroplanes sometimes results in unexpected system behaviour, such as misbehaving flight-control systems or the flawed MCAS.
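To picture what “predefined rules” means in practice, here is a deliberately simplified sketch, loosely inspired by a GPWS sink-rate mode. The function name and all thresholds are invented for illustration; a certified (E)GPWS uses carefully validated alert envelopes, not these numbers. The point is that every possible output is traceable to a fixed, human-written rule:

```python
def sink_rate_alert(radio_altitude_ft: float, descent_rate_fpm: float) -> str:
    """Toy rule-based safety net: map sensor inputs to an alert level.

    Thresholds are illustrative only, not real GPWS envelope values.
    """
    if radio_altitude_ft < 2500:          # only alert close to the ground
        if descent_rate_fpm > 5000:
            return "PULL UP"              # hard warning
        if descent_rate_fpm > 3000:
            return "SINK RATE"            # caution
    return "NONE"

print(sink_rate_alert(1200, 5500))  # -> PULL UP
print(sink_rate_alert(1200, 3500))  # -> SINK RATE
print(sink_rate_alert(3000, 6000))  # -> NONE (above the envelope floor)
```

Unlike a trained ML model, this logic can be read, tested and certified line by line, which is precisely why such systems have earned our trust.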
On the one hand, technology increases aviation safety; on the other hand, we still need human pilots to intervene when things don’t go as expected. It’s not man VERSUS machine, but rather man AND machine working together. Introducing AI/ML may be useful, but it should be done very cautiously. We should not simply hand over control of, and responsibility for, an aircraft – let alone a commercial jet filled with passengers – to a system with a mind of its own. We need to slow down the pace of current AI/ML development, make it transparent – however difficult this may be due to proprietary rights and complexity – and prove its trustworthiness at all times. We cannot allow new players entering the aviation sector to push their disruptive ideas and jeopardise safety in the name of technology or economics.
Contrary to what manufacturers, OEMs and, apparently, certification authorities are aiming for, we should take baby steps towards the new future of commercial jet transport – a future in which AI and ML will undoubtedly play an important role. As pilots, we don’t oppose this technology because we are afraid of losing our jobs; we oppose its hasty introduction because we are responsible for the safety of our passengers. That is what we do and will continue to do. The AI train is moving too fast; we need to slow it down and take it one step at a time.
by Cpt. Rudy Pont, BeCA