Most organisations understand how technical risk behaves. They know how to test systems, model failure modes, build contingencies, and track defects. They are far less fluent in how behavioural risk behaves, even though it is often the decisive factor in whether change delivers its promised value. Behavioural risk is not about attitudes or morale.[1] It is about how people actually behave when new ways of working collide with pressure, incentives, identity, and imperfect systems. This distinction matters, because behavioural risk rarely announces itself clearly or early.[2]
What behavioural risk actually is
Behavioural risk is the risk that people will not, cannot, or do not consistently behave in ways the new operating model depends on. That statement is deceptively simple.
It is not defiance. It is not laziness. It is not even resistance in the traditional sense. In most cases, it is a rational response to competing demands. Behavioural risk sits in the gap between what a solution assumes people will do and what they realistically do when:
- time is constrained
- performance is measured differently than intended
- accountability is unclear
- capability is uneven
- professional judgement is challenged
- trade-offs are left implicit
When that gap persists, value does not disappear overnight. It erodes.
Where behavioural risk hides
One of the reasons behavioural risk is underestimated is that it often hides in plain sight. It shows up as:
- workarounds that “just get things done”
- selective use of new processes
- parallel tracking in spreadsheets or legacy tools
- informal delegation back to specialists
- quiet reversion to old habits under pressure
From a distance, the system looks live. Reports are submitted. Training is completed. Dashboards are green. Up close, the behaviour the business case depends on is conditional, inconsistent, or fragile. By the time this shows up in performance data, value has already been lost.
Why behavioural risk is systematically underestimated
There are a few recurring reasons organisations downplay behavioural risk. First, there is an assumption of rational compliance. Many change efforts implicitly assume that once people understand what to do, they will do it. In real operating environments, understanding is rarely the limiting factor.[3]
Second, there is misplaced confidence in activity metrics. Completion of communications, training, or readiness assessments is often treated as evidence of reduced risk, even though these measures capture exposure, not behaviour.
Third, behavioural risk is politically awkward. Naming it forces difficult conversations about incentives, workload, leadership behaviour, and structural contradictions. These are easy to postpone and hard to have once momentum is already slipping.
None of these tendencies survives sustained pressure.
How behavioural risk turns into operational and financial impact
Behavioural risk does not usually cause immediate failure. It degrades performance incrementally. The pattern is familiar once you know what to look for:
- throughput slows as work is double-handled
- data quality degrades as shortcuts become normalised
- error rates rise as cognitive load increases
- customer or citizen experience becomes uneven
- support and stabilisation costs climb
- benefits plateau well below forecast
These effects are rarely traced back to behaviour. They are treated as operational noise, growing pains, or external disruption. They are none of those things. They are predictable consequences of unmanaged behavioural risk.
Why mandates and controls don’t eliminate behavioural risk
When leaders sense risk, the instinctive response is often to increase control. Mandates are reinforced. Compliance is monitored. Exceptions are discouraged. Sometimes this is necessary, but control does not eliminate behavioural risk; it often changes its shape.[4] Under heavy control, behavioural risk tends to migrate into:
- surface compliance without ownership
- informal workarounds that stay hidden
- reduced upward feedback
- compliance theatre
From a governance perspective, this can feel reassuring. From a value perspective, it is often worse. The organisation loses visibility into how work is actually being done.
Behaviour as early warning, not nuisance
One of the most underused capabilities in change is the ability to treat behaviour as diagnostic information, not a problem to suppress. Early behavioural signals often appear before formal metrics shift:
- where people hesitate
- where work slows unexpectedly
- where unofficial tools appear
- where judgement calls increase
- where managers quietly buffer their teams
These signals are not failures. They are information. Organisations that learn to read them early can intervene while value is still recoverable. Organisations that dismiss them as resistance usually discover the cost later.
Why behavioural risk belongs on the risk register
Risk registers typically focus on what can be specified, tested, and controlled. Behavioural risk resists all three, which is why it is often excluded. But exclusion does not reduce exposure. When behavioural risk is treated explicitly as risk:
- it becomes discussable without blame
- it gains visibility at the right levels
- mitigation becomes proactive rather than reactive
- ownership becomes clearer
Most importantly, leaders stop being surprised by outcomes they should have expected.
A more realistic view of change risk
Behavioural risk does not mean people are the problem. It means people are operating inside systems that ask them to reconcile competing demands, often without sufficient clarity or support. Recognising behavioural risk for what it is allows leaders to protect value more deliberately. Ignoring it simply ensures that value erosion will arrive later, dressed up as something else.
This is one way of thinking about why change succeeds or fails. Other pieces go deeper into how risk actually shows up, and how leaders can respond before value is lost.
1. Argyris, C. (1957). Personality and Organization: The Conflict Between System and the Individual. Harper & Brothers. Argyris demonstrates that organisational systems produce behaviour that diverges from espoused values: what people do is a function of structural incentives and constraints, not attitudes or morale. Behavioural risk is therefore a structural diagnostic category, not a people-management category.
2. Feldman, M. S., & Pentland, B. T. (2003). "Reconceptualizing Organizational Routines as a Source of Flexibility and Change." Administrative Science Quarterly, 48(1), 94–118. https://doi.org/10.2307/3556620. Feldman and Pentland show that routines as actually performed (the performative aspect) diverge from routines as formally understood (the ostensive aspect) in ways that are invisible in standard reporting. Behavioural risk accumulates in this gap before it is detectable in formal metrics.
3. Pfeffer, J., & Sutton, R. I. (2000). The Knowing-Doing Gap: How Smart Companies Turn Knowledge Into Action. Harvard Business School Press. Pfeffer and Sutton document that understanding what to do rarely predicts actually doing it; the gap between knowledge and action is a structural phenomenon shaped by incentives, norms, and accountability, not information deficits.
4. Milgram, S. (1974). Obedience to Authority: An Experimental View. Harper & Row. Milgram's research demonstrates that authority-based compliance and genuine behavioural internalisation are distinct phenomena. Mandated compliance produces obedient behaviour in observable contexts while leaving underlying motivations unchanged, producing the surface-compliance and hidden-workaround pattern described here.