A Shift in How We Think About Accidents
If you go back and look at older aviation accidents, they’re usually quite straightforward to categorise:
Engine failure
Structural failure
Fuel starvation
Control surface issues
There’s typically a clear starting point.
Something breaks, and that failure drives everything that follows.
But when you start looking at more recent accidents—AF447, the 737 MAX MCAS accidents, Überlingen, the DC airspace collision—you notice something different.
The aircraft itself is often still “working.”
Systems are powered, controls respond, logic executes.
And yet, the system as a whole still fails.
Not because something collapsed physically…
but because something more subtle broke down.
What actually collapses is the shared understanding of what’s going on.
From Component Failure to System State Failure
Modern aircraft aren’t simple mechanical machines anymore.
They’re tightly connected systems made up of:
sensor networks
flight control computers
layers of automation
human operators
external inputs like ATC and procedures
In that kind of environment, individual parts can be working perfectly fine.
And the system can still fail.
Because the real question isn’t:
“Is this component working?”
It’s:
“Does the system still have a consistent understanding of reality?”
The Critical Shift: From Physical Failure to Information Failure
In older systems, failure was mostly physical.
Something broke, and you dealt with it.
In modern systems, failure is often about information.
More specifically—bad, inconsistent, or misunderstood information.
You can see that pattern clearly:
AF447 → systems working with unreliable airspeed data after the pitot probes iced over
MCAS → logic executing exactly as designed on a faulty angle-of-attack input
Überlingen → TCAS and ATC, each valid on its own, issuing conflicting instructions
DC airspace collision → gaps in shared situational awareness
In all of these:
Nothing completely “stops.”
Instead, the system stops agreeing with itself.
What “System Interpretation” Actually Means
Every aircraft is constantly building a picture of reality.
That picture comes from:
sensor data
internal system calculations
automation decisions
human interpretation
When everything is working well:
all of those layers line up
small differences get resolved
feedback loops stay stable
But in edge cases, things start to drift:
sensors don’t agree
automation changes behaviour or drops out
humans are left interpreting incomplete signals
external inputs add more complexity
Now you don’t have one clear picture anymore.
You have multiple partial ones, all existing at the same time.
Why Redundancy Does Not Always Solve the Problem
Redundancy is one of the main tools we rely on for safety.
And most of the time, it works really well.
But it depends on something we don’t always talk about:
that when systems disagree, we can clearly figure out which one is right.
When that breaks down, things get messy.
You might have:
multiple sensors disagreeing
different systems producing different outputs
the human making a call that conflicts with automation
At that point, redundancy isn’t giving you clarity.
It’s giving you competing versions of reality.
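To make that concrete, here’s a minimal sketch in Python of the kind of source selection redundancy relies on. The agreement threshold and the function shape are invented for illustration, not taken from any real aircraft’s logic. While at least two sources agree, a single answer falls out naturally. Once they all diverge, the voter can’t hand you truth anymore, only ambiguity.

```python
# Minimal illustration of triple-redundant source selection.
# The 5 kt agreement threshold is an assumption for this sketch,
# not a value from any real system.

AGREEMENT_TOLERANCE_KT = 5.0

def select_airspeed(a: float, b: float, c: float) -> tuple[float | None, str]:
    """Pick one airspeed from three sources, or admit there isn't one."""
    low, mid, high = sorted([a, b, c])

    if high - low <= AGREEMENT_TOLERANCE_KT:
        # Normal case: all three agree, mid-value select smooths small noise.
        return mid, "ALL_SOURCES_AGREE"

    if mid - low <= AGREEMENT_TOLERANCE_KT:
        # Two sources agree; the high outlier gets voted out.
        return (low + mid) / 2, "ONE_SOURCE_REJECTED"

    if high - mid <= AGREEMENT_TOLERANCE_KT:
        # Two sources agree; the low outlier gets voted out.
        return (mid + high) / 2, "ONE_SOURCE_REJECTED"

    # All three disagree: redundancy now offers competing answers, not clarity.
    return None, "NO_CONSISTENT_PICTURE"


if __name__ == "__main__":
    print(select_airspeed(251.0, 253.0, 252.0))  # clean agreement
    print(select_airspeed(251.0, 253.0, 180.0))  # one source voted out
    print(select_airspeed(251.0, 210.0, 150.0))  # no majority left
```

The last case is the interesting one. Nothing has “failed”, every sensor is still reporting a number, and yet there is no single value the rest of the system can safely build on.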
Human Operators as Part of the Interpretation Layer
In modern aircraft, the pilot isn’t outside the system.
They’re part of how the system makes sense of itself.
But humans don’t process information the same way machines do.
They:
work with incomplete data
fill in gaps based on experience
look for patterns, especially under pressure
That’s incredibly powerful when things are stable.
But when the system becomes inconsistent, it adds another layer of variability.
Especially when:
automation drops out unexpectedly
sensor reliability is unclear
time pressure increases
feedback becomes inconsistent
At that point, interpretation becomes harder—not easier.
Why These Accidents Look Different
If you compare cases like AF447, the MCAS accidents, Überlingen, and the DC airspace collision, a pattern starts to emerge.
They don’t look like traditional failures.
Instead, you see:
systems behaving correctly on their own
multiple reasonable decisions leading to bad outcomes
problems driven by inconsistent information
decisions drifting away from the actual state of the system
That’s why they’re harder to understand at first glance.
There isn’t a single broken part to point at.
What’s broken is alignment.
The Core Problem: Loss of a Single Truth Model
At the center of all this is one key issue:
The system no longer has a single, shared version of reality.
Instead, you get layers:
sensor-based reality
automation-based reality
human-perceived reality
procedural or external reality
All of them are partially right.
But they’re not aligned.
And once that alignment goes, control starts to fragment.
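Here’s a purely illustrative way to picture those layers. The layer names and the single compared quantity are invented for the example; the point is only that a monitor can say whether the views are still aligned, not which one is right.

```python
# Illustrative sketch of "layers of reality" drifting apart.
# Layer names and the compared quantity are assumptions for this example,
# not a real avionics interface.

from dataclasses import dataclass

@dataclass
class LayerView:
    layer: str           # which layer produced this view
    vertical_state: str  # "CLIMBING", "LEVEL", or "DESCENDING"

def alignment_report(views: list[LayerView]) -> str:
    states = {v.vertical_state for v in views}
    if len(states) == 1:
        return f"ALIGNED: every layer agrees the aircraft is {states.pop()}"
    detail = ", ".join(f"{v.layer}={v.vertical_state}" for v in views)
    return f"FRAGMENTED: no shared picture ({detail})"

if __name__ == "__main__":
    print(alignment_report([
        LayerView("sensors", "DESCENDING"),
        LayerView("automation", "DESCENDING"),
        LayerView("crew", "DESCENDING"),
    ]))
    print(alignment_report([
        LayerView("sensors", "DESCENDING"),   # what the air data says
        LayerView("automation", "LEVEL"),     # what the mode logic assumes
        LayerView("crew", "CLIMBING"),        # what the crew believes
    ]))
```

In the fragmented case, every layer is internally consistent and arguably partially right. There just isn’t one state of the world left to act on.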
Implications for Safety Engineering
This changes how we need to think about safety.
It’s less about:
preventing individual failures
And more about:
keeping the system’s understanding consistent
managing disagreement between layers
designing systems that are still interpretable when degraded
making sure control authority stays clear and stable (there’s a rough sketch of this below)
Because in modern aviation:
the biggest risk isn’t just something failing—
it’s different parts of the system reaching different conclusions while still “working.”
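One way to picture the last two points, staying interpretable when degraded and keeping control authority clear, is an explicit degradation ladder. This is a loose sketch with invented mode names and a made-up trigger, not any real flight control law:

```python
# Loose sketch of an explicit degradation ladder. Mode names and the
# agreement-count trigger are invented for illustration.

from enum import Enum

class ControlMode(Enum):
    NORMAL = "full automation, sensors trusted"
    DEGRADED = "simplified automation, clearly annunciated to the crew"
    DIRECT = "crew has full, unambiguous authority"

def next_mode(current: ControlMode, sources_in_agreement: int) -> ControlMode:
    """Step down the ladder when agreement is lost; never jump around silently."""
    if sources_in_agreement >= 2:
        return current  # no silent recoveries either; staying put is predictable
    if current is ControlMode.NORMAL:
        return ControlMode.DEGRADED
    return ControlMode.DIRECT

if __name__ == "__main__":
    mode = ControlMode.NORMAL
    for agreeing in [3, 3, 1, 1]:  # agreement collapses mid-flight
        mode = next_mode(mode, agreeing)
        print(f"{agreeing} sources agree -> {mode.name}: {mode.value}")
```

The mode names don’t matter. What matters is that each transition is explicit and announced, so the people in the loop always know which version of reality the automation is still willing to act on.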
Closing Thought
Modern aviation systems rarely fail because something simply stops working.
They fail when too many things are still working…
but no longer agreeing.
And once that shared understanding is lost, every action that follows—even if it’s reasonable on its own—
is based on a fragmented view of reality.
That’s the real shift we’re dealing with now.
Not just safe vs unsafe systems.
But systems that either stay coherent under stress…
or quietly fall apart in how they interpret what’s happening.