Safety Breaks at Undefined Boundaries, Not Failures


There’s a pattern in aviation safety that’s easy to miss because it doesn’t look like a failure.

Nothing “breaks.”
Nothing alarms.
Nothing is obviously wrong.

And yet, something important quietly stops working the way we assumed it would.

It usually happens at the edges of the system: not in the centre, where we design, analyse, and certify things, but out at the boundaries where reality gets messy.


 

We spend most of our effort designing the centre

Most safety engineering effort is naturally concentrated where systems are:

  • well-defined
  • repeatable
  • testable
  • certifiable

That’s where we build:

  • redundancy logic
  • failure classifications
  • hazard analyses
  • certification arguments

And it works well… in the centre.

But systems don’t fail in the centre very often anymore.

They fail at the edges.


 

The edge is where assumptions go to get stressed

Every system has implicit boundaries:

  • environmental limits
  • operational expectations
  • human workload assumptions
  • maintenance conditions
  • configuration stability

We rarely write these down explicitly because, at design time, they feel “obvious.”

But in operation, those boundaries become active.

And here’s the key point:

The edge is not where the system is undefined. It’s where our definitions stop being precise enough.
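
To make this concrete, here is a minimal sketch, in Python, of what writing those boundaries down might look like. Every name and number in it is invented for illustration; the only point is that an assumption recorded as data can be checked during operation, while an assumption left implicit cannot.

  # Hypothetical example: record assumed operating boundaries explicitly,
  # so that "how close are we to the edge?" becomes an answerable question.
  from dataclasses import dataclass

  @dataclass
  class AssumedBoundary:
      name: str      # what the assumption is about
      lower: float   # assumed minimum of the operating range
      upper: float   # assumed maximum of the operating range

      def margin(self, value: float) -> float:
          """Distance to the nearest edge of the assumed range (negative if outside it)."""
          return min(value - self.lower, self.upper - value)

  # Illustrative assumptions only; real limits come from design and certification data.
  boundaries = [
      AssumedBoundary("outside_air_temp_C", -40.0, 50.0),
      AssumedBoundary("crew_tasks_per_minute", 0.0, 6.0),
      AssumedBoundary("turnaround_time_min", 25.0, 120.0),
  ]

  observed = {
      "outside_air_temp_C": 48.5,
      "crew_tasks_per_minute": 5.8,
      "turnaround_time_min": 27.0,
  }

  for b in boundaries:
      m = b.margin(observed[b.name])
      print(f"{b.name}: " + ("outside assumed range" if m < 0 else f"margin {m:.1f}"))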


 

Most real-world safety issues are boundary problems

When you look closely at serious incidents, a pattern emerges:

It’s rarely:

  • total system failure
  • completely unknown hazard
  • unpredictable physics

It’s usually:

  • a known system operating just outside its expected envelope
  • a known procedure used under slightly different conditions
  • a known assumption that no longer holds in practice

In other words:

The system didn’t fail. It was asked to operate at a boundary we didn’t fully design for.


 

Boundaries move quietly over time

One of the most interesting things in operational safety is that boundaries are not fixed.

They drift:

  • procedures get adapted
  • workloads increase
  • technologies layer over old systems
  • operational tempo changes
  • “temporary” workarounds become permanent

And because nothing explicitly breaks, we rarely notice the shift.

The system still “works,” so we assume the boundary is still valid.

But it isn’t the same boundary anymore.


 

Safety systems are very good in the middle and fragile at the edges

This creates a subtle imbalance:

In the middle:

  • redundancy behaves predictably
  • models align with reality
  • failure modes are well understood

At the edges:

  • assumptions multiply
  • interactions become nonlinear
  • human adaptation dominates system behaviour
  • small deviations matter more than design intent

And this is where safety becomes less about engineering control and more about awareness.


 

The uncomfortable truth: we don’t fully know where the edge is until we cross it

No matter how good the modelling is, there are always:

  • unknown operational combinations
  • rare environmental conditions
  • unanticipated human responses
  • cross-system interactions nobody fully mapped

So the system teaches us its boundaries through experience.

Not in theory.

In operation.

Often in hindsight.


 

So what actually improves safety?

It’s not just more analysis. Not just more redundancy. Not just tighter procedures.

It’s something more subtle:

Designing systems that remain safe even when the boundaries are slightly wrong.

That means:

  • tolerating ambiguity in assumptions
  • detecting drift early rather than reacting late (a sketch of this follows at the end of this section)
  • making operational conditions visible, not assumed
  • treating “near-normal” behaviour as a signal, not noise

In other words:

Safety isn’t just about preventing failure inside known boundaries.

It’s about noticing when the boundaries themselves are no longer valid.
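
As one concrete (and entirely hypothetical) reading of “detecting drift early”, here is a short Python sketch that watches the running margin between an operational parameter and an assumed limit. Nothing in it ever fails: every individual reading stays legal. The flag comes from the trend, which is exactly the kind of near-normal behaviour worth treating as a signal rather than noise. The window size and threshold are invented for illustration.

  # Hypothetical drift monitor: flag a trend toward an assumed boundary
  # before any single reading actually crosses it.
  def drift_alert(readings, limit, window=20, margin_fraction=0.15):
      """Return True when the recent average margin to `limit` has shrunk below
      `margin_fraction` of the limit, even though no reading has exceeded it."""
      if len(readings) < window:
          return False                                  # not enough history to judge a trend
      recent = readings[-window:]
      avg_margin = sum(limit - r for r in recent) / window
      still_legal = all(r <= limit for r in recent)     # nothing has "broken"
      return still_legal and avg_margin < margin_fraction * limit

  # Example: a parameter assumed to stay below 100 units, drifting slowly upward.
  history = [70 + 0.5 * i for i in range(50)]           # 70.0, 70.5, ... 94.5 -- all below 100
  print(drift_alert(history, limit=100.0))              # True: trend detected, no exceedance yet

The same shape of check applies whether the monitored quantity is physical, a workload measure, or how often a “temporary” workaround gets used.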


 

A final thought

The most dangerous phrase in safety engineering might be:

“It works as expected.”

Because the real question is always:

“Expected under which version of reality?”

And that’s where most systems quietly stop being as safe as we think they are.
