Change management literature is filled with best practices. Stakeholder mapping. Readiness assessments. Sponsor roadmaps. Communication cadence models. These frameworks are genuinely useful — they give structure to complexity, they prevent reinvention, and they create a shared language that helps teams align. But here’s the subtle assumption that often goes unexamined: the belief that a practice proven to work in one organizational context will work the same way in another. That assumption is where the trouble begins.
Why best practices feel so reassuring
Best practices reduce anxiety. They signal that someone has solved this problem before, they offer a proven sequence to follow when uncertainty is high, and they imply reliability. In large initiatives, that reassurance matters — it gives leaders and teams something concrete to hold onto when everything else feels uncertain.
But reassurance is not the same as fit. A stakeholder mapping model designed for a centralized hierarchy may falter when decision-making is distributed across a matrixed structure.[1] An engagement strategy that succeeded in a stable environment may destabilize an organization already under load. Best practices are abstractions. Organizations are specific structural realities. And the gap between the abstract and the specific is where real outcomes get determined.
What gets overlooked when we assume portability
Every organization distributes authority differently. Every organization has its own incentive patterns, its own tolerance for conflict, its own political boundaries, and its own history of trust and mistrust. A sponsorship model that assumes decisive executive authority won’t hold if power is distributed across competing factions. A readiness assessment that assumes stable workload may misinterpret fatigue as resistance — completely different diagnoses that would call for completely different responses.[2]
When context reshapes how people actually behave, the model must adapt accordingly. If it doesn’t — if you hold rigidly to the framework regardless of what you’re observing — friction emerges. And here’s the part that’s rarely acknowledged: that friction usually gets blamed on execution.[3] Teams assume they didn’t implement it correctly, didn’t train people well enough, or didn’t have enough fidelity to the model. But the real issue is contextual fit.
The pattern that often unfolds
Here’s what typically happens. A best practice gets selected. It’s implemented with genuine care and attention. And it produces mixed results. Instead of stepping back to examine whether the practice actually fits this particular context, effort intensifies. More training. More enforcement of the model. More insistence that if this worked elsewhere, we just need to do it better here.
This response is understandable — it protects the credibility of the framework and maintains team confidence. But it can also prevent the structural interpretation that’s actually needed. You end up staying faithful to the best practice rather than asking whether it is, in fact, the best practice for this particular situation. And the organization never gets the chance to adapt the practice to what’s actually true about its structure.
How experienced practitioners approach this differently
Experienced practitioners rarely abandon frameworks. Instead, they translate them.[4] They ask: How does authority actually move in this organization? What incentives might quietly undermine this step? Where are the hidden bottlenecks that the framework doesn’t account for? They treat models as starting hypotheses — useful reference points — rather than finished answers that simply need better execution.
That’s not deviation from the practice. It’s practitioner judgment. It’s the ability to hold the framework lightly, use its structure to think, and then adapt based on what you observe about how the system actually works. And that’s often what distinguishes practitioners who get durable results from those who get compliance with a model.
Why this distinction matters going forward
Best practices are genuinely valuable. But when portability is assumed rather than examined, misfit accumulates quietly. Teams follow what they believe are the right steps while structural friction grows beneath the surface. Over time, confidence in the discipline itself erodes — not because the practice is flawed, but because it was applied without adaptation to the organization’s specific structural realities.
This is one reason experienced practitioners sound less dogmatic than newcomers. They’ve seen enough contexts to understand that the framework is a starting point, not a destination. The subsequent articles in this series explore how diagnostic compression and structural misalignment interact with framework rigidity, and how that combination shapes outcomes over time.
[1] Mintzberg, H. (1979). The Structuring of Organizations: A Synthesis of the Research. Prentice Hall. Mintzberg demonstrates that organizational forms are not interchangeable — each structural configuration (simple, machine, professional, divisional, adhocracy) creates distinct authority flows and coordination requirements. A practice designed for one configuration cannot be assumed to function in another.
[2] Schein, E. H. (1999). Process Consultation Revisited: Building the Helping Relationship. Addison-Wesley; and Maslach, C., & Leiter, M. P. (1997). The Truth About Burnout: How Organizations Cause Personal Stress and What to Do About It. Jossey-Bass. Schein establishes that diagnostic accuracy must precede any intervention; Maslach and Leiter provide the empirical grounding for why fatigue and resistance produce different observable signals that an accurate readiness diagnostic must distinguish.
[3] Rosenzweig, P. (2007). The Halo Effect: And the Eight Other Business Delusions That Deceive Managers. Free Press. Rosenzweig documents how organizations systematically attribute poor outcomes to execution failure rather than examining whether the underlying model fit the context — a cognitive error that forecloses structural diagnosis.
[4] Klein, G. (1998). Sources of Power: How People Make Decisions. MIT Press. Klein’s field research with expert decision-makers in high-stakes domains shows that experienced practitioners treat frameworks as recognition cues and starting hypotheses, not fixed procedures — expertise is precisely the capacity to adapt the model to what the situation actually presents.