By Gary Butler 5 min read


🚩 Introduction

There’s a concept known as the normalisation of deviance: individuals or teams gradually make exceptions to rules or standards, and those exceptions become accepted practice simply because nothing bad seems to happen. The danger is subtle. Each time, people push the boundary a little further, and the risk quietly builds, hidden by a streak of luck.



⛽ The Pilot Example: Pushing Boundaries on Fuel

F-35A Lightning II

Picture this in aviation: a RAAF pilot is required to return to base whenever fuel drops below a conservative threshold. One evening, with the mission goal in sight and pressure mounting, they wait just five more minutes before deciding to divert. The flight lands safely. On the next flight, maybe they stretch the limit by another few minutes.

Each time, it works out.
Each time, the new normal inches closer to the edge.

A habit forms: “We’ve cut it close before, and it’s always been fine.” Eventually, their luck runs out. This time, the decision to turn back just a bit later comes too late to make it safely.

What began as a one-off exception has gradually become perilous routine.



🚀 Tragedy at NASA: The Challenger Disaster

Challenger Crew

NASA’s Challenger space shuttle disaster tragically brought this pattern into focus. Engineers repeatedly raised alarms about faulty O-ring seals in the shuttle boosters, especially in cold temperatures. Yet launches went ahead: the flaw hadn’t yet caused disaster, so it felt safe to proceed. Each successful mission normalised ignoring the risk. Then, on a cold morning in January 1986, Challenger’s O-rings failed, and seven lives were lost. Official investigations, and later Diane Vaughan’s extensive research, found that what doomed Challenger wasn’t one reckless act but years of cumulative, normalised shortcuts: a chilling case of deviance becoming routine.



💻 Everyday DevOps: How It Happens in Software

This same drift toward risk happens in software development. Teams under pressure might:

  • Skip code reviews “just this once.”
  • Promise to write the tests in the next commit.
  • Ship code with failing tests, assuming there won’t be any impact.
  • Leave known vulnerabilities unpatched “until later.”

If nothing immediately breaks, these behaviours embed themselves into team culture, quietly undermining quality until a major outage or breach exposes how far standards have slipped.
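One way to resist this drift is to encode the standard in tooling, so that skipping it requires an explicit, visible decision rather than a quiet habit. Below is a minimal sketch of such a merge gate, assuming GitHub Actions; the `make test` and `make audit` targets are placeholders for whatever your team actually runs:

```yaml
# Hypothetical CI workflow; job names and make targets are
# illustrative placeholders, not a specific project's setup.
name: quality-gate
on: [pull_request]

jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests            # a red build blocks the merge
        run: make test
      - name: Audit dependencies   # known vulnerabilities fail fast
        run: make audit
```

Paired with branch protection that requires this workflow to pass, “ship it with failing tests” stops being a one-keystroke exception and becomes a decision someone has to own in the open.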



🛡️ Staying Safe: How to Spot and Stop Normalisation of Deviance

  • Refresh standards routinely - document why each safeguard exists.
  • Encourage everyone to ask, “Are we crossing a line because we got lucky last time?”
  • Review incidents and near-misses - even ones with no apparent harm.
  • Recognise and reward those who flag early risks or push back when standards erode.

Normalisation of deviance isn’t about reckless people; it’s about the comforting pull of habit, especially under pressure. Whether launching a space shuttle or releasing code, vigilance and healthy scepticism keep high standards from quietly unravelling.



🎯 Wrapping Up

History, across many fields, shows the cost when teams stop questioning how close to the edge they are operating. By continually challenging our “normals”, we defend quality, safety, and trust, no matter what we’re building.



📚 Explore Further