Designing Intervention Boundaries — How Scope Determines Efficacy and How Organisations Get Both Wrong

Change management interventions frequently fail not because the work is poorly executed but because the boundary around the intervention is poorly designed. The boundary determines what you’re trying to solve and what you’re treating as a given. Get the boundary wrong, and you’re solving the wrong problem — or no problem at all.

Intervention boundaries define what the intervention includes and excludes. They determine which parts of the organization’s structure, incentives, and operations are within scope for change, and which are treated as constraints to work within. Seem like a technical detail? It’s not. The boundary is where strategy meets reality. It’s where you decide: Are we solving the actual problem, or are we treating the actual problem as “just how things are”?

Organizations typically set intervention boundaries in one of two ways, and both are problematic. They either draw boundaries so tightly that the intervention cannot address what’s actually causing the problem, or they draw them so broadly that the intervention becomes unwieldy and diffuses across the entire organization. In the first case, you’re leaving the root cause untouched. In the second case, you’re trying to fix everything and nothing.

Clear boundary design requires diagnostic work upfront: understanding which structural factors are actually driving the problem, which of those factors the intervention can realistically influence, and which factors are genuinely external constraints. That distinction is not obvious. It requires judgment.

How Boundaries Determine What Can Change

Consider a concrete example: an organization with persistently low adoption of a new system. Change management sets the intervention boundary to include communication campaigns, training programmes, user support, and adoption tracking. These are reasonable interventions. But notice what the boundary excludes: the performance management system, the incentive structure, the operating model that determines how work actually flows.

The implicit assumption is that adoption failure reflects insufficient understanding or resistance to change rooted in fear. But what if the actual problem is different? What if the new system creates more work because it requires manual data entry that the old system automated? What if the performance metrics reward speed of delivery, and the new system slows delivery? What if people’s job security depends on possessing skills the new system makes obsolete?

In these cases, adoption failure isn’t irrational. It’s the rational response to a change that harms people’s interests. Communication, training, and support cannot overcome rational resistance based on accurately perceived harm.1 The intervention boundary is too tight. It excludes the factors actually driving the problem. You’re trying to persuade people to do something that genuinely disadvantages them — and that never works.

Now consider the opposite problem. An organization recognizes that adoption requires changing incentives. So they expand the intervention boundary dramatically: change performance metrics, redesign job roles, restructure teams, adjust career paths, update skill development programmes, rethink compensation. The intervention sprawls until it becomes unmanageable. Every stakeholder group has concerns. Interdependencies proliferate. The initiative loses shape. Months in, nobody’s sure what the actual intervention is anymore. Clarity about what changed and why gets lost in the complexity.

The boundary is too loose. It includes factors that, while related to adoption, are not essential to addressing the core problem. You’re trying to change too much, and the result is that nothing changes clearly.

Setting Boundaries Requires Diagnosis

Effective boundary design demands diagnostic work first. You need clarity about what’s actually driving the problem before you can design the boundary.

If adoption is low because people genuinely don’t understand how to use the system, the intervention boundary includes training and user support. It excludes the incentive system. If adoption is low because the system makes people’s current skills obsolete and threatens their job security, the intervention boundary includes role redesign and skill development. It may include incentive redesign. But it might still exclude changes to organizational structure — those aren’t the root cause. If adoption is low because the system is poorly designed for actual work patterns, the intervention boundary includes system redesign. It doesn’t include training on how to use a poorly designed system.
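The diagnostic logic above amounts to a mapping from diagnosed root cause to scope decision. A minimal sketch, purely illustrative — the cause labels and scope elements below are hypothetical examples, not a prescribed taxonomy:

```python
# Illustrative: each diagnosed root cause implies what the boundary
# includes and excludes. Labels are hypothetical, drawn from the three
# scenarios in the text.
SCOPE_BY_ROOT_CAUSE = {
    "insufficient_understanding": {
        "include": ["training", "user_support"],
        "exclude": ["incentive_system"],
    },
    "skill_obsolescence": {
        "include": ["role_redesign", "skill_development"],
        "maybe": ["incentive_redesign"],          # judgment call
        "exclude": ["organizational_structure"],  # not the root cause
    },
    "poor_system_design": {
        "include": ["system_redesign"],
        "exclude": ["training_on_current_design"],
    },
}

def intervention_scope(root_cause: str) -> dict:
    """Return the scope decision for a diagnosed root cause."""
    if root_cause not in SCOPE_BY_ROOT_CAUSE:
        # No diagnosis means no defensible boundary.
        raise ValueError(f"No diagnosis recorded for: {root_cause}")
    return SCOPE_BY_ROOT_CAUSE[root_cause]

print(intervention_scope("skill_obsolescence")["include"])
# prints ['role_redesign', 'skill_development']
```

The point of the sketch is the shape of the dependency: the scope table cannot be filled in until the diagnosis exists, which is the article's claim in executable form.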

The core boundary question is: Which parts of the organization’s structure, processes, incentives, and capabilities must be addressed for this intervention to succeed? Everything else is outside scope, even if it’s somewhat related to adoption. This requires diagnostic skill in distinguishing between three different kinds of factors:2

Root causes are the factors actually driving the problem. If adoption fails because incentives contradict it, incentive change is a root cause. Contributing factors amplify the problem but aren’t root causes. If communication is unclear on top of a structural incentive problem, poor communication is a contributing factor, not the root cause. Related-but-separate issues exist and might feel like they should be addressed, but they’re not actually causing this particular problem. The organization might have a legitimate redesign opportunity, but if it’s not what’s blocking adoption right now, it’s outside the boundary for this intervention.
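The three-way distinction can be expressed as a small classification rule. A hedged sketch — the factor names are hypothetical, and in practice whether a contributing factor belongs inside the boundary is a judgment call, not a mechanical rule:

```python
from dataclasses import dataclass
from enum import Enum

class FactorKind(Enum):
    ROOT_CAUSE = "root cause"            # actually driving the problem
    CONTRIBUTING = "contributing"        # amplifies, but not causal
    RELATED_SEPARATE = "related/separate"  # real issue, not causing this one

@dataclass
class Factor:
    name: str
    kind: FactorKind

def in_scope(factor: Factor) -> bool:
    # Root causes must be inside the boundary; contributing factors
    # usually are; related-but-separate issues stay out, however
    # legitimate they are on their own terms.
    return factor.kind in (FactorKind.ROOT_CAUSE, FactorKind.CONTRIBUTING)

factors = [
    Factor("incentive_contradiction", FactorKind.ROOT_CAUSE),
    Factor("unclear_communication", FactorKind.CONTRIBUTING),
    Factor("unrelated_redesign_opportunity", FactorKind.RELATED_SEPARATE),
]
print([f.name for f in factors if in_scope(f)])
# prints ['incentive_contradiction', 'unclear_communication']
```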

The distinction matters because you need root cause understanding to set a defensible boundary. Without it, you either include too much or exclude too much.

The Politics of Boundary Design

Here’s what matters: boundary design is not only a technical question. It’s deeply political, and understanding that is crucial to getting the boundary right.

Tight boundaries protect the intervention from scope creep and keep it manageable. They also protect powerful stakeholders from having to change.3 If the boundary excludes the incentive system, the head of compensation doesn’t have to renegotiate. If the boundary excludes the operating model, the head of operations isn’t affected. That protection feels efficient until you discover that the excluded factor is exactly what was preventing adoption all along.

Loose boundaries appear comprehensive and signal that the organization is willing to change anything that needs changing. They also have a secondary effect: they distribute the burden of change across the entire organization, which means everyone experiences disruption and no single stakeholder bears the concentrated cost. That can look fair, but it also means accountability becomes diffuse. When everything’s in scope, nothing’s in focus.

Clear boundary design requires governance — explicit decisions about scope made openly. What’s included? Why? What’s excluded? Why? Who has authority to change the boundary if the intervention isn’t working? These decisions cannot stay implicit or be negotiated ad hoc. They have to be explicit and governed.4
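The governance questions — what’s in, why, and who may change it — can be captured in an explicit, auditable scope record. A minimal sketch under assumed conventions; the field names and the single-owner approval rule are illustrative, not a recommended governance design:

```python
from dataclasses import dataclass, field

@dataclass
class ScopeDecision:
    element: str       # e.g. "incentive_system" (hypothetical label)
    included: bool
    rationale: str     # the "why" must be recorded, not implicit

@dataclass
class BoundaryRecord:
    owner: str                                  # who may change the boundary
    decisions: list = field(default_factory=list)
    changes: list = field(default_factory=list)  # audit trail of amendments

    def decide(self, element: str, included: bool, rationale: str) -> None:
        self.decisions.append(ScopeDecision(element, included, rationale))

    def amend(self, element: str, included: bool, rationale: str,
              approved_by: str) -> None:
        # Boundary changes are governed, not negotiated ad hoc.
        if approved_by != self.owner:
            raise PermissionError("Only the boundary owner may change scope")
        self.decide(element, included, rationale)
        self.changes.append((element, included, approved_by))

boundary = BoundaryRecord(owner="steering_group")
boundary.decide("training", True, "root cause: insufficient understanding")
boundary.decide("incentive_system", False, "diagnosed as not causal")
```

The design choice worth noting: every inclusion and exclusion carries a rationale, and every later change carries an approver, so when the intervention ends the organization can reconstruct what was in scope and who decided — exactly the ambiguity the next paragraph warns about.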

Without that explicit governance, here’s what happens: the scope drifts. It starts tight and gradually becomes loose as stakeholders push for inclusion of factors that affect them. Or it starts loose and becomes tight as the intervention team realizes it cannot manage the scope and starts cutting things out. Either way, the organization ends up unclear about what intervention was actually implemented. Was it the original design? Did it shift? Who decided? Those ambiguities matter.

Boundaries as Learning Opportunities

Well-designed intervention boundaries create learning opportunities. That’s the real payoff.

If the intervention is designed with clear scope and it succeeds, the organization learns which factors actually matter for adoption. You now have evidence. If it fails despite meeting its own metrics (people are trained but still don’t adopt), the organization learns something crucial: the root causes were outside your boundary. Your diagnosis was incomplete. That failure teaches you something.

That learning informs the next intervention. Either expand the boundary to include the missed factors, or redesign the initial intervention to address factors that were thought external but turned out to be critical. You iterate forward based on evidence.

But if intervention boundaries are unclear — if they drift or are negotiated implicitly — the organization cannot learn from failure. When the intervention fails, it’s genuinely unclear whether it failed because the work was poorly executed, because the boundary was wrong, or because the initial diagnosis was incorrect. So you don’t know what to fix next time. Clear boundary design is prerequisite to learning from intervention experience. Without it, every intervention feels like it failed for different reasons. With it, failure becomes diagnostic data.5


  1. Argyris, C. (1957). Personality and Organization: The Conflict Between System and the Individual. Harper & Brothers. Argyris demonstrates that organisational systems produce behaviours that contradict espoused values — what people do reflects the structural incentives and constraints they operate within, not their attitudes or willingness. Resistance to adoption is not irrational when the change genuinely disadvantages people; it is the accurate behavioural expression of a structural conflict the intervention has not resolved. ↩︎

  2. Schein, E. H. (1999). Process Consultation Revisited: Building the Helping Relationship. Addison-Wesley. Schein’s foundational principle is that diagnosis must precede intervention — and that accurate diagnosis requires distinguishing between surface symptoms, contributing factors, and root causes. Interventions designed without this distinction address what is visible rather than what is causal, and boundary-setting without diagnosis is therefore boundary-setting without foundation. ↩︎

  3. Pfeffer, J. (1981). Power in Organizations. Pitman. Pfeffer’s analysis of how formal authority and real influence diverge in organisations shows that structural decisions — including what is inside or outside intervention scope — are shaped by political dynamics as much as technical analysis. Excluding powerful stakeholders from scope is not neutral; it reflects the political feasibility of challenging their domains and produces interventions designed for comfort rather than efficacy. ↩︎

  4. Simons, R. (1995). Levers of Control: How Smart Managers Use Accounting Systems for Strategy, Learning, and Control. Harvard Business School Press. Simons shows that aligning authority (what can be decided), accountability (what will be measured), and control systems (how decisions are monitored) requires explicit design — without it, scope drifts as each constituent acts within their own authority envelope. Intervention boundaries, like control systems, require governance to remain stable. ↩︎

  5. Argyris, C., & Schön, D. A. (1996). Organizational Learning II: Theory, Method, and Practice. Addison-Wesley. Argyris and Schön show that meaningful organisational learning requires the ability to attribute outcomes to specific decisions — when scope is ambiguous, failure cannot be diagnosed accurately and the organisation cannot determine what to change. Clear boundaries are not just operational devices; they are the prerequisite for double-loop learning from intervention experience. ↩︎