A Shift in How We Think About Accidents
If you look at older aviation accidents, many of them are relatively easy to categorise.
- Engine failure
- Structural failure
- Fuel starvation
- Control surface malfunction
There is usually a clear initiating component failure that drives the chain of events.
But when you look at more recent accidents, such as AF447, the MCAS accidents, Überlingen, and the DC airspace collision, something important changes.
The aircraft often remains technically functional, while the system as a whole fails in a different way.
Not through collapse of components.
But through collapse of shared understanding of system state.
From Component Failure to System State Failure
Modern aircraft are no longer simple mechanical systems.
They are tightly coupled assemblies of:
- sensor networks
- flight control computers
- automation logic layers
- human operators
- external system inputs (ATC, traffic, procedures)
In this environment, individual components can all be functioning correctly.
And yet the system can still fail.
Because what matters is not whether a component works.
It is whether the system maintains a coherent representation of reality.
The Critical Shift: From Physical Failure to Information Failure
In older systems, failure was physical.
In modern systems, failure is often informational.
Examples:
- AF447 → valid systems operating on invalid airspeed data
- MCAS → correct logic acting on incorrect sensor input
- Überlingen → two valid collision-avoidance systems issuing conflicting commands
- DC airspace collision → incomplete shared situational awareness across human and procedural layers
In each case:
Nothing “stops working” in the traditional sense.
Instead, the system stops agreeing on what is actually happening.
What “System Interpretation” Actually Means
Every aircraft operates with a model of reality.
That model is built from:
- sensor inputs
- system state estimation
- automation logic
- human interpretation
In a stable system:
- all layers converge on the same understanding
- small discrepancies are resolved through redundancy
- feedback loops remain consistent
But in complex edge cases:
- sensors disagree
- automation degrades or reconfigures
- humans interpret incomplete signals
- external systems add additional constraints
Now the system is no longer operating on a single shared model.
It is operating on multiple partial models of reality simultaneously.
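To make this concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (the layer names, the degraded 120 kt reading, the validity checks); it is not modelled on any real avionics architecture. The point is only that each layer can pass its own local checks while the layers disagree with one another.

```python
# Illustrative only: every number and check below is invented;
# no real avionics interface is implied.

def sensor_layer():
    reading_kt = 120.0                       # e.g. a degraded probe
    passes_self_test = 30.0 <= reading_kt <= 450.0
    return reading_kt, passes_self_test      # locally valid

def automation_layer(sensor_kt):
    estimate_kt = sensor_kt                  # trusts the channel it is wired to
    return estimate_kt, True                 # the logic executes correctly

def human_layer():
    inferred_kt = 265.0                      # judged from attitude and thrust
    return inferred_kt, True                 # plausible to the pilot

sensor_kt, sensor_ok = sensor_layer()
auto_kt, auto_ok = automation_layer(sensor_kt)
human_kt, human_ok = human_layer()

for name, value, ok in [("sensor", sensor_kt, sensor_ok),
                        ("automation", auto_kt, auto_ok),
                        ("human", human_kt, human_ok)]:
    print(f"{name:10s} {value:5.0f} kt  locally valid: {ok}")

# No component has "failed", yet there is no single shared picture of reality.
```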
Why Redundancy Does Not Always Solve the Problem
Redundancy is often assumed to improve safety.
And in most cases, it does.
But redundancy assumes something important:
that disagreement between redundant elements can be correctly identified and resolved.
When redundancy itself becomes ambiguous — for example:
- multiple sensors disagreeing
- multiple systems providing different outputs
- human interpretation overriding automation
then redundancy no longer converges on truth.
It diverges into competing interpretations.
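One concrete way to see this is with a mid-value-select voter, a classic pattern for triple-redundant sensors. The sketch below is illustrative Python with invented readings, not a certified voting implementation.

```python
import statistics

def mid_value_select(readings):
    """Triple-redundancy voter: take the median of three channels."""
    return statistics.median(readings)

# Case 1: one channel fails low. Voting isolates the outlier correctly.
print(mid_value_select([270.0, 268.0, 120.0]))   # 268.0, close to truth

# Case 2: common-mode failure, e.g. two probes degraded the same way.
# The voter "resolves" the disagreement in favour of the fault.
print(mid_value_select([120.0, 268.0, 122.0]))   # 122.0, confidently wrong
```

The voter behaves exactly as designed in both cases; what changes is whether majority agreement still correlates with truth.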
Human Operators as Part of the Interpretation Layer
In modern aircraft, the pilot is not outside the system.
The pilot is part of the system’s interpretation mechanism.
But unlike automated systems:
- humans operate with incomplete data
- humans infer meaning from patterns
- humans rely on prior experience under uncertainty
This is powerful in normal operations.
But in degraded states, it introduces variability into the system’s interpretation process.
Especially when:
- automation disengages unexpectedly
- sensor validity is uncertain
- time pressure increases
- feedback loops become inconsistent
Why These Accidents Look Different
If you compare AF447, the MCAS accidents, Überlingen, and the DC airspace collision, a pattern emerges.
They do not look like traditional failures.
Instead, they show:
- systems operating correctly in isolation
- multiple correct decisions producing incorrect outcomes
- information inconsistency rather than mechanical breakdown
- decision paths diverging from actual system state
This is why these accidents are harder to intuitively understand.
There is no single broken part.
There is a breakdown in state alignment across the system.
The Core Problem: Loss of a Single Truth Model
At the heart of these events is one issue:
The system no longer maintains a single, coherent model of reality.
Instead, you get:
- sensor-based reality
- automation-based reality
- human-perceived reality
- procedural reality
All coexisting.
All partially correct.
But not aligned.
And once that alignment is lost, control becomes fragmented.
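That fragmentation can be reduced to a toy model, sketched below in Python. This is emphatically not a reconstruction of TCAS or ATC logic; the functions, altitudes, and decision rules are invented. It only shows the shape of the problem: two advisers, each internally consistent with its own model of the situation, issuing opposite commands to the same crew.

```python
# Toy model of fragmented control: two advisers, two models, one aircraft.

def onboard_advice(own_alt_ft, intruder_alt_ft):
    # Airborne logic resolves vertically against what its sensors see.
    return "CLIMB" if own_alt_ft >= intruder_alt_ft else "DESCEND"

def ground_advice(assigned_level_ft, own_alt_ft):
    # Ground logic resolves against the plan it believes is in force.
    return "DESCEND" if own_alt_ft > assigned_level_ft else "CLIMB"

own, intruder, assigned = 36_000, 35_900, 35_000
print(onboard_advice(own, intruder))   # CLIMB
print(ground_advice(assigned, own))    # DESCEND

# Each adviser is internally correct. The crew receives both commands.
```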
Implications for Safety Engineering
This has direct consequences for how we think about safety design.
It shifts the focus away from:
- preventing individual component failures
and toward:
- ensuring consistency of system state representation
- managing disagreement between system layers
- designing for interpretability under degraded conditions
- maintaining stable control authority hierarchies
Because in modern aviation:
the biggest risk is no longer the failure of parts, but the divergence of interpretation between working parts.
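As a sketch of what "managing disagreement between system layers" could mean in practice: rather than silently electing a winner, a monitor can treat cross-layer divergence as a first-class, annunciated event. The Python below is hypothetical; the layer names, values, and tolerance are all invented.

```python
# Hypothetical cross-layer consistency monitor; illustrative values only.

def check_state_alignment(layer_estimates, tolerance):
    """Return pairs of layers whose estimates diverge beyond tolerance."""
    names = list(layer_estimates)
    conflicts = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if abs(layer_estimates[a] - layer_estimates[b]) > tolerance:
                conflicts.append((a, b))
    return conflicts

estimates = {"air_data": 120.0, "inertial": 262.0, "flight_plan": 270.0}
conflicts = check_state_alignment(estimates, tolerance=15.0)

if conflicts:
    # Do not silently prefer one layer: annunciate the disagreement.
    print("STATE DISAGREEMENT:", conflicts)
```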
Closing Thought
Modern aviation systems rarely fail because something stops working.
They fail when too many things are working — but not agreeing.
And once the system stops agreeing on what is happening…
every subsequent action, no matter how correct in isolation, is based on a fragmented understanding of reality.
That is the real shift in aviation safety today.
Not from safe to unsafe systems.
But from coherent to incoherent system interpretation under stress conditions.

