When change produces unintended consequences, organisations often treat them as execution issues.1 People misunderstood. Managers applied the rules inconsistently. Teams took shortcuts. These explanations are comforting because they imply that the design was sound and the problem lies in implementation. In reality, unintended consequences almost always reflect design decisions — particularly decisions about incentives, metrics, and risk distribution.
Why unintended consequences are so predictable
Unintended consequences rarely come out of nowhere. They emerge when organisations ask people to behave one way while rewarding, measuring, or protecting them for behaving another.2 In these conditions, people do not act irrationally. They optimise for survival. The behaviour that follows may not be what leaders intended, but it is almost always what the system made safest.
The design elements that matter most
Several design elements consistently generate unintended consequences during change.3 They include:
- performance metrics that reward old behaviours
- targets that penalise learning or transition costs
- incentives that prioritise short-term results
- governance processes that discourage early issue surfacing
- accountability structures that absorb risk upward or downward
When these elements are misaligned, behaviour will follow the path of least resistance. Intent does not compensate for design.
Why blaming execution misses the point
When unintended consequences appear, organisations often respond by tightening controls. They issue clarifications. They reinforce expectations. They monitor compliance more closely. Sometimes this reduces visible deviation. Often, it simply drives the deviation underground. The underlying incentives remain unchanged, so the behaviour adapts rather than disappears. Execution-focused fixes treat symptoms. Design-driven fixes address causes.4
How unintended consequences erode value
The cost of unintended consequences is rarely immediate. They accumulate through:
- inefficiencies that become normalised
- workarounds that harden into parallel processes
- data quality issues that undermine decision-making
- delayed benefits realisation
- erosion of trust in leadership intent
Over time, these costs outweigh the visible gains of the change itself. By the time leaders recognise the pattern, reversing it is expensive.
Translating unintended consequences into enterprise risk
Unintended consequences are not just operational annoyances. They represent:
- misallocated investment
- exposure to performance and reputational risk
- weakened control over outcomes
- reduced return on transformation spend
In public sector contexts, they can also increase scrutiny when outcomes diverge from stated objectives despite significant effort and funding. These are enterprise-level risks, not behavioural footnotes.
Why organisations avoid redesign
Redesigning incentives and performance systems is uncomfortable. It forces leaders to revisit assumptions, renegotiate targets, and accept short-term disruption. It exposes trade-offs that may have been left implicit. As a result, many organisations attempt to manage around misaligned design rather than correcting it. This prolongs pain and entrenches unintended behaviour.
Designing for the behaviour you actually need
Organisations that reduce unintended consequences are explicit about behavioural design. They ask:
- What behaviour does success actually require under pressure?
- What happens to someone who behaves that way today?
- Where are we asking people to take risk without protection?
- Which metrics and incentives need to change, and when?
These questions are harder than reinforcing compliance. They are also far more effective.
A more honest accountability for outcomes
Unintended consequences are not evidence of bad actors. They are evidence of misaligned systems. When organisations treat them as design feedback rather than execution failure, they regain control over outcomes. Designing incentives deliberately does not eliminate complexity. It ensures complexity does not work against you. This is one way of thinking about why change succeeds or fails. Other pieces go deeper into how incentive design, metrics, and risk distribution shape outcomes during transformation.
1. Galbraith, J. R. (1973). Designing Complex Organizations. Addison-Wesley. Galbraith’s contingency design framework establishes that organisational structures must match the information-processing requirements of the tasks they are designed to perform. When the task changes (due to transformation) but the design does not (performance systems, incentives, and accountability structures remain calibrated to the old task), the organisation develops informal workarounds to manage the mismatch. These workarounds are precisely the “execution issues” that leadership diagnoses as the problem, when they are actually evidence that the formal design is inadequate to the new task.
2. Thompson, J. D. (1967). Organizations in Action. McGraw-Hill. Thompson’s analysis of organisations under uncertainty shows that organisational actors consistently buffer their technical cores from external demands: they protect what they are measured and rewarded for from the disruption of changing requirements. When change asks people to behave one way while the performance system continues rewarding another, actors behave rationally: they protect their technical core (the measured activity) and adapt the new requirement around it. The consequence is not defiance; it is the predictable output of the buffering mechanism that Thompson identifies as a fundamental feature of organised action under uncertainty.
3. Perrow, C. (1984). Normal Accidents: Living with High-Risk Technologies. Basic Books. Perrow demonstrates that in complex, tightly coupled systems, certain design configurations make unexpected interactions between components not just possible but predictable. The same principle applies to change design: performance systems, accountability structures, and incentive mechanisms interact in ways that produce specific, predictable consequences. The consequences are not random; they follow from the design. When organisations treat the resulting behaviours as execution surprises, they are misreading what the system architecture was always going to produce.
4. Argyris, C., & Schön, D. A. (1978). Organizational Learning: A Theory of Action Perspective. Addison-Wesley. Argyris and Schön’s single-loop versus double-loop learning distinction maps directly onto the symptom/cause divide. Single-loop responses detect a deviation from expectation and correct it within the existing framework: execution fixes. Double-loop responses ask whether the framework itself is producing the deviation: design fixes. Organisations that default to execution-focused responses after unintended consequences emerge are enacting single-loop learning: they correct the specific deviation while the incentive design that produced it remains intact, guaranteeing the next iteration of the same problem.