There is a line that sits quietly behind most safety decisions, usually referenced without much discussion. It comes from the Work Health and Safety Act 2011, which says, in effect, that cost may only be considered after the risk has been assessed, and even then only where the cost is grossly disproportionate to that risk.
On paper, that aligns neatly with how certification activities are supposed to work, particularly in the context of Functional Hazard Assessments, where the intent is straightforward: define the system, identify its functions, determine how those functions can fail, describe the resulting effects, and classify the severity based purely on consequence, not on what is convenient or achievable.
At least, that is the intent.
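That intended shape can be made concrete as a simple data model. The sketch below is illustrative only: the severity names are loosely modelled on common aviation-style categories, and all field names and the example entry are hypothetical. The point it makes is structural: a clean FHA row has no field for cost, schedule, or feasibility, because severity is a function of consequence alone.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical severity scale, loosely modelled on aviation-style
# categories; the names and ordering are illustrative, not normative.
class Severity(Enum):
    NO_SAFETY_EFFECT = 0
    MINOR = 1
    MAJOR = 2
    HAZARDOUS = 3
    CATASTROPHIC = 4

@dataclass(frozen=True)
class FailureCondition:
    """One row of a Functional Hazard Assessment.

    Note what is absent: there is no field for cost, schedule,
    or feasibility. Severity is classified purely from the effect.
    """
    function: str       # the system function being assessed
    failure_mode: str   # how that function can fail
    effect: str         # what happens when it does
    severity: Severity  # determined by consequence, nothing else

# Illustrative entry (all content hypothetical):
entry = FailureCondition(
    function="Provide attitude data to flight displays",
    failure_mode="Loss of function",
    effect="Misleading attitude indication to the crew",
    severity=Severity.HAZARDOUS,
)
```

The frozen dataclass is a small deliberate choice: once an effect and severity are recorded, they should not be quietly mutated later in the program.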
In the early stages of a program, this separation tends to hold reasonably well, because the system is still fluid, design decisions are still being shaped, and the cost of change is relatively low, which means that when a failure condition is identified and its effects are assessed, the resulting severity classification is usually a fairly direct reflection of the system itself.
But certification does not happen in isolation; it happens inside a program, and programs have a tendency to evolve in ways that are not always visible within the safety process itself.
As the design matures, suppliers become involved, interfaces begin to lock in, and the need for cost estimates—often in the form of rough orders of magnitude—starts to introduce delay into what was previously a relatively clean analytical process, and during that delay something subtle begins to shift.
The question is no longer just “what happens if this function fails,” but gradually becomes entangled with “what does this mean for the design,” “what does this mean for certification,” and more importantly, “what does this mean for the schedule.”
Time, although not explicitly part of the “grossly disproportionate” test, becomes embedded in the decision-making process, because delays introduce cost, late changes introduce more cost, and the further along the program progresses, the less flexibility there is to respond to what the analysis is actually telling you.
At that point, the idea of cost is no longer limited to the mitigation itself; it includes redesign effort, supplier impact, certification rework, and schedule disruption, and without anyone explicitly redefining the problem, the comparison has quietly changed.
Instead of asking whether a mitigation is grossly disproportionate to the safety risk, the question becomes whether it is disproportionate to the current state of the program.
That distinction is rarely stated, but it is often present.
What makes this particularly difficult to see is that the shift does not usually occur at a clear decision point; it happens gradually, and it often shows up not in the decision itself, but in the analysis that supports it.
The FHA, which is supposed to describe the system in a structured and consistent way, begins to reflect not just the system, but the constraints around it.
Effects are described with slightly more caution, severity boundaries become subject to more discussion, assumptions carry more weight than they should, and justifications become more detailed, not necessarily because the system has changed, but because the context has.
From a process perspective, everything still appears correct: the structure is followed, the terminology is appropriate, and the outputs look consistent with expectations. But the underlying purpose has shifted from describing reality to aligning with what can be managed within the program.
This is where the concept of “grossly disproportionate” becomes difficult, because although the wording has not changed, the reference point has.
It is no longer anchored purely in risk, but in a combination of risk and circumstance, and over time the cost of acting begins to look more significant than the risk of not acting.
None of this requires bad intent, and in most cases it is simply the natural outcome of working within a constrained delivery environment, where decisions are influenced by timing, resources, and the practical realities of getting a system certified.
But it does introduce a problem.
Certification frameworks assume that safety assessments like FHA provide a clear and stable description of what happens when a system fails, and that design and program decisions are made in response to that description, not embedded within it.
Once those boundaries start to blur, the FHA is no longer just an assessment; it becomes part of the negotiation.
And once that happens, it becomes much harder to rely on it as a consistent representation of the system.
Which leaves a fairly simple, but often overlooked point.
If cost, schedule, or program pressure influences how a failure condition is described or how its severity is classified, then the analysis is no longer purely about the system; it is about the situation the system is in.
And those are not the same thing.
So what can you actually do about it?
You are not going to stop schedule pressure, budget constraints, or delivery commitments from influencing decisions, and in most cases, you are not even in a position to challenge them directly.
What you can control is something much more specific.
You can control whether those pressures influence the description of the risk, or only the decision that follows it.
That distinction is small on paper, but it is everything in practice.
The first thing to be clear about is this.
Your role in an FHA is not to make the system acceptable; it is to describe what happens when it is not.
Once you start adjusting severity, softening effects, or shaping assumptions so that the outcome "fits," you are no longer doing safety analysis; you are doing design negotiation, even if it does not feel like it at the time.
A simple way to hold that line is to treat the FHA as something that must stand on its own, even if no decision is ever made, so that if someone reads it without context, it answers one question clearly:
“What happens if this function fails, and how bad is it?”
—not—
“What are we prepared to accept given the schedule?”
In practice, the pressure does not need to be explicit for it to influence the outcome, and it often appears in the form of reasonable questions that carry an implicit direction, such as whether something is really hazardous, whether it could be justified as major, or whether assumptions can be interpreted differently.
In those situations, one of the most effective things you can do is make the separation explicit without turning it into a confrontation, by structuring the conversation in a way that keeps the analysis and the decision distinct.
For example, stating that from an FHA perspective the effect leads to a certain severity, and that what is done with that information is a separate decision, keeps the analysis clean while still allowing the discussion to continue.
It is also important to capture constraints for what they are, rather than allowing them to influence the analysis indirectly, so if schedule, design maturity, or supplier limitations are affecting what can be done, they should be recorded as constraints, not embedded within the effect descriptions or severity classifications.
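One way to make that separation concrete, sketched here with hypothetical field names and content, is to keep program constraints in their own record: linked to the analysis, visible to the decision, but never feeding into the effect description or the severity itself.

```python
from dataclasses import dataclass

@dataclass
class Analysis:
    """The safety position: what happens, and how bad it is."""
    failure_condition: str
    effect: str
    severity: str  # e.g. "Hazardous", from consequence alone

@dataclass
class ProgramConstraint:
    """A constraint recorded as a constraint, alongside the analysis."""
    kind: str         # e.g. "schedule", "design maturity", "supplier"
    description: str

@dataclass
class Decision:
    """What the program chooses to do with the analysis."""
    analysis: Analysis
    constraints: list[ProgramConstraint]
    outcome: str      # may legitimately diverge from the analysis

# The analysis stands on its own...
a = Analysis(
    failure_condition="Loss of braking on landing",
    effect="Runway overrun",
    severity="Hazardous",
)
# ...while the schedule pressure is captured explicitly, not folded
# into the effect description or the severity classification.
d = Decision(
    analysis=a,
    constraints=[ProgramConstraint(
        kind="schedule",
        description="Supplier redesign exceeds the delivery window",
    )],
    outcome="Accept interim operational limitation pending redesign",
)
```

The gap between `a.severity` and `d.outcome` is allowed to exist; the structure simply keeps it visible rather than letting the constraint rewrite the analysis.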
There is also value in resisting the urge to adjust the analysis early based on what the program can realistically accommodate, because once that adjustment is made, the original position is lost, and with it the ability to have a meaningful discussion about what the system actually requires.
Sometimes the most useful thing you can do is let the FHA remain slightly uncomfortable, not exaggerated, not conservative for its own sake, but simply accurate.
Finally, it is worth being comfortable with the idea that the decision made may not align with the analysis. That is not a failure of the FHA; it is a reflection of the broader program context.
Your role is not to eliminate that gap, but to make sure it is visible and understood.
Because once that visibility is lost, everything starts to look consistent on paper, even when it is not in reality.
And that is where safety work stops being useful.