
The Danger of “Good News” Investigations: When Incident Reports Tell You Everything Is Fine

  • Luke Dam

There’s a powerful idea often attributed to Professor Andrew Hopkins that deserves far more attention in incident investigation practice:

If your analysis only delivers good news, it hasn’t been done properly.

Hopkins originally used this concept in the context of audits, warning against “good news audits” that reinforce confidence instead of revealing risk.


But the same thinking applies even more critically to incident investigations.


Because when an investigation produces simple, clean, reassuring conclusions, it doesn’t just miss learning.


It actively sets the organisation up for the next event.


What a “Good News Investigation” Looks Like

Most “good news investigations” don’t look broken.


They are usually well-structured, clearly written, and logically presented. They often conclude that the incident occurred because someone didn’t follow procedure, made an error, or failed to apply their training.


On the surface, this feels reasonable. Something went wrong, someone made a mistake, and action has been taken.


The recommendations follow a familiar pattern. Retrain the worker. Reinforce the procedure. Remind the team.


The report is neat. The narrative is simple. The organisation moves on.


But beneath that simplicity is a subtle and dangerous message:

The system is fine. The problem was the person.

And that is where the real issue begins.


Why These Investigations Persist

These types of investigations don’t happen by accident. They exist because they meet a number of organisational needs.


They provide closure. Leaders get a clear answer and a sense that the issue has been dealt with. There is no ambiguity, no lingering uncertainty, and no need to dig further.


They are also efficient. A conclusion like “human error” can be reached quickly, without the time and effort required to explore the broader system.


Perhaps most importantly, they are comfortable. If the cause sits with an individual, then it doesn’t challenge the design of the system, the decisions of leadership, or the way work is organised. There is no need for escalation, no need for difficult conversations, and no disruption to how things currently operate.


This is exactly the kind of thinking Hopkins warned about.


Because when investigations consistently deliver good news about the system, they are not revealing reality.


They are protecting it from scrutiny.


The ICAM View: Why This Is a Problem

From an ICAM perspective, the idea that an incident can be explained by a single act of human error simply doesn’t hold up.


ICAM, developed by Gerry Gibb while working with James Reason, is based on the understanding that incidents emerge from multiple interacting factors across a system. What we see at the point of failure is only the final step in a much longer chain.


When an investigation stops at “the operator didn’t follow the procedure,” it confuses the outcome with the cause.


A more useful question is: why did it make sense for the operator to act that way at the time?


People don’t arrive at work intending to fail. Their actions are shaped by the conditions around them: time pressure, competing priorities, the usability of procedures, the design of equipment, and the expectations set by their supervisors.


If those conditions are not explored, then the investigation has not actually explained anything. It has simply described what is already obvious.


What Gets Missed

When investigations remain at the surface, they miss the factors that actually shape performance.


They miss the quiet compromises that develop over time as people adapt to get the job done. They miss the small workarounds that become normal because the formal process doesn’t quite fit reality. They miss the pressures, often invisible to those designing the system, that influence how decisions are made in the moment.

They also miss the role of the organisation itself.


Every system contains latent conditions. These are the underlying weaknesses that sit in the background: unclear procedures, conflicting priorities, resource constraints, or design limitations. On their own, they don’t cause incidents. But when they combine with everyday work, they create the conditions where failure becomes possible.


A “good news investigation” leaves all of this untouched.


It focuses on the final action and ignores everything that made that action likely.


The Illusion of Learning

One of the most dangerous outcomes of these investigations is not what they find, but what they make people believe.


On paper, the organisation appears to have learned. The investigation is complete, actions have been assigned, and the event has been closed out.


But nothing meaningful has changed.


The same pressures still exist. The same system design remains in place. The same trade-offs are being made every day.


This creates an illusion of learning. It feels like progress, but it is actually stagnation.


And over time, that illusion builds confidence. Confidence that the system is working. Confidence that risks are understood. Confidence that issues are being managed.


Until the next incident proves otherwise.


What Strong Investigations Do Differently

A strong investigation feels very different.


It is rarely neat, and it is almost never simple. Instead of producing a single cause, it builds a picture of how multiple factors came together.


It spends less time asking who failed, and more time understanding how the situation made sense to the people involved. It explores the gap between how work is supposed to happen and how it actually happens in practice.

Importantly, it is willing to surface uncomfortable truths.


It may reveal that procedures are not usable in the real world. It may highlight that production pressures are shaping decisions more than intended. It may show that the system, as designed, relies on people constantly adapting just to keep things working.


These are not easy findings to deliver. But they are the ones that lead to meaningful change.


Because they shift the focus from fixing people to improving systems.


Moving Beyond “Human Error”

The phrase “human error” is one of the most common and least useful conclusions in safety.


It gives the appearance of explanation without actually explaining anything.


Of course a human was involved. That is true in almost every incident. But stopping there tells us nothing about why the system allowed that error to occur, or why it wasn’t prevented.


If anything, human error should be the starting point of an investigation, not the conclusion.


It should trigger deeper questions about context, conditions, and design.


When it becomes the final answer, it shuts those questions down.


The Role of Leadership

The quality of investigations is shaped long before the investigation begins.


It is shaped by what leaders expect, what they reward, and how they respond to bad news.

If leaders prioritise speed, investigations will be quick and shallow. If they react negatively to uncomfortable findings, investigations will become cautious and sanitised. If they are satisfied with simple answers, that is what they will receive.


But if leaders ask, “What are we missing?”, and genuinely want the answer, they create space for deeper analysis.


They signal that the goal is not to close the investigation, but to understand the system.


And that changes everything.


A Final Reflection

There are few conclusions more concerning than this:

“The system is fine. The problem was the person.”

Because in a complex system, that is almost never true.


Every action is shaped by the environment in which it occurs. Every decision is influenced by conditions that exist long before the moment of failure.


When investigations fail to uncover those conditions, they don’t just miss insight.


They reinforce a false belief that nothing needs to change.


And that is the real danger of a “good news investigation.”


Closing Thought

If your investigations consistently produce simple answers, clean conclusions, and comfortable outcomes…


It’s worth asking a harder question:

Are we learning, or are we just telling ourselves a story we want to believe?

Because the purpose of an investigation is not to provide reassurance.


It is to reveal reality.


And reality is rarely that tidy.


