
The Limitations of AI in ICAM Investigations

Luke Dam

What Investigators and Organisations Must Understand Before Relying on Artificial Intelligence

Artificial Intelligence is rapidly finding its way into workplace investigations.


From interview transcription and timeline building to report drafting and trend analysis, AI promises faster investigations, greater consistency, and reduced administrative burden. For ICAM practitioners, this is understandably appealing.


But there is a risk few organisations are talking about:

ICAM is a thinking discipline, and AI is not a thinker.


Used without care, AI can quietly undermine the very purpose of ICAM: deep learning about how systems really operate, and why risk becomes normalised over time.


This article is not anti-AI. I am a big fan of AI and deeply interested in how it can be applied in the workplace. This article is pro-ICAM.


What follows is a practical exploration of where AI helps, where it fails, and what investigators must remain accountable for.


ICAM Is About Understanding Systems, Not Producing Reports

ICAM was never intended to be a mechanistic process.

At its core, ICAM is about:


  • Understanding how work is actually done

  • Exploring interactions between people, equipment, procedures, and decisions

  • Identifying systemic contributors, not individual blame

  • Supporting meaningful organisational learning


AI tools are excellent at producing outputs. ICAM requires judgement.

That distinction matters more than ever.


1. AI Does Not Understand “Work as Done”

One of the foundational principles in ICAM is recognising the gap between:


  • Work as Imagined – how procedures, policies, and leaders believe work occurs

  • Work as Done – how work actually occurs under real-world constraints


AI systems are trained almost entirely on formal artefacts:


  • Procedures

  • Policies

  • Standards

  • Past investigation reports

  • Documented expectations


In other words, AI primarily understands work as imagined.

It does not experience:


  • Time pressure

  • Conflicting priorities

  • Fatigue

  • Informal rules

  • Tacit knowledge

  • Normalised shortcuts that keep systems running


Without careful human interpretation, AI risks reframing normal operational adaptations as “deviations”, precisely the trap ICAM was designed to avoid.


2. AI Finds Patterns; ICAM Explains Causes

AI excels at identifying patterns:


  • Repeated rule breaches

  • Frequently cited contributing factors

  • Similar wording across reports

  • Recurring hazard types


ICAM investigations are not pattern-finding exercises.

They are causal sensemaking exercises.

A pattern of “failure to follow procedure” tells us almost nothing on its own. ICAM asks:


  • Why was the procedure difficult to follow?

  • What competing goals existed?

  • What organisational decisions shaped those conditions?

  • Why did the risk appear acceptable at the time?


AI can surface patterns. It cannot determine why those patterns exist.

Without human analysis, pattern recognition can easily become pattern justification.


3. AI Cannot Make Ethical Judgements

Every ICAM investigation contains ethical decisions, whether acknowledged or not.

Investigators make judgement calls about:


  • How interview data is represented

  • How much emphasis is placed on individual actions

  • How organisational decisions are described

  • What language is used to avoid blame

  • How psychological safety is preserved


AI has no ethical agency.

It cannot assess:


  • Power imbalances

  • Fear of reprisal

  • Emotional harm

  • Cultural context

  • Trust implications


A sentence that is technically neutral can still be ethically harmful. Only a human investigator can make that call.


4. AI Amplifies Existing Investigation Quality, Good or Bad

AI does not improve investigation practice by default.

It replicates and amplifies whatever quality already exists.

If an organisation’s historical investigations:


  • Focus heavily on frontline behaviour

  • Avoid uncomfortable organisational findings

  • Use superficial contributing factors

  • Over-rely on generic controls


Then AI trained on those artefacts will:


  • Normalise weak analysis

  • Reproduce shallow conclusions

  • Give poor practice a polished finish


This is particularly dangerous because AI outputs often sound confident, professional, and authoritative, even when the underlying reasoning is flawed.


Think GIGO: garbage in, garbage out.


5. Interviews Are Sensemaking Conversations, Not Data Inputs

ICAM interviews are not about extracting information.

They are about sensemaking.

Effective investigators:


  • Notice hesitation

  • Explore ambiguity

  • Pick up emotional cues

  • Adapt questions in real time

  • Build trust

  • Allow stories to evolve


AI can:


  • Transcribe interviews

  • Summarise themes

  • Identify repeated phrases


AI cannot:


  • Sense fear

  • Detect defensiveness

  • Recognise when something is being withheld

  • Adjust questioning based on trust dynamics


Treating interviews as data inputs rather than human conversations fundamentally degrades ICAM quality.


6. AI Struggles With Complex, Non-Linear Systems

ICAM is explicitly designed for complex socio-technical systems.

These systems involve:


  • Feedback loops

  • Trade-offs between safety and production

  • Drift into failure

  • Accumulating latent conditions

  • Decisions made far from the point of impact


AI tools, despite their sophistication, still tend toward:


  • Linear narratives

  • Simplified cause-effect chains

  • Discrete categories

  • Static representations


They can describe complexity, but they cannot reason within it the way experienced investigators can.


7. AI Encourages Premature Closure

One of the most common investigation failure modes is stopping too early.

AI increases this risk.

Why?


  • It produces fast answers

  • Outputs feel complete

  • Draft reports look “finished”

  • Uncertainty is smoothed over


ICAM requires investigators to sit with discomfort:


  • To challenge first explanations

  • To keep asking “what else?”

  • To explore explanations that are politically or emotionally uncomfortable


Speed is not always progress.


8. AI Is Blind to Organisational Power and Politics

Many investigation failures have nothing to do with technical analysis, and everything to do with organisational power.

AI cannot see:


  • Which findings are “unsafe” to raise

  • How budget decisions shape risk

  • How middle management filters information

  • Why some risks are tolerated

  • Where accountability subtly disappears


Human investigators navigate these realities every day. AI cannot.


9. AI Introduces Legal and Regulatory Risk

AI use in investigations creates new exposures:


  • Data privacy and confidentiality

  • Explainability of conclusions

  • Accountability for judgement

  • Regulatory scrutiny


Regulators and courts may ask:


  • Who made the decision?

  • How was the evidence weighed?

  • Can the reasoning be explained?

  • What human oversight existed?


“An AI tool produced this conclusion” is not a defensible answer.


10. The Myth of AI Objectivity

AI is often described as objective or neutral.

It is not.

AI reflects:


  • The biases in its training data

  • The assumptions in its prompts

  • The culture of the organisation using it

  • The limitations of its design


In ICAM, the illusion of objectivity is more dangerous than acknowledged subjectivity, because it discourages challenge.


Where AI Can Add Value in ICAM

Used carefully, AI can support, not replace, investigative thinking.

Appropriate uses include:


  • Transcription (with strong governance)

  • Timeline collation

  • Document indexing

  • Administrative drafting

  • Consistency checks


AI should reduce cognitive load, not outsource judgement.


Principles for Using AI Safely in ICAM Investigations

If AI is used, organisations should adopt clear guardrails:


  1. Human judgement remains central

  2. AI outputs are always challengeable

  3. No AI-generated conclusions without investigator validation

  4. Transparency about AI use

  5. Strong data governance

  6. Ongoing review of bias and drift

  7. Training investigators in AI limitations


Final Thought: ICAM Is a Human Discipline

ICAM exists because incidents are not technical failures alone; they are human and organisational phenomena.

AI cannot:


  • Understand lived experience

  • Navigate trust

  • Balance competing values

  • Hold moral responsibility

  • Learn lessons in a meaningful way


AI may become a powerful assistant.


But it must never become the investigator.


The future of ICAM will not be decided by technology, but by whether investigators retain the courage, curiosity, and critical thinking that no algorithm can replicate.


 
 
 
