Measurement as Sense-Making — Metrics Should Clarify Tension, Not Conceal It

Measurement is often treated as neutral — simply a way to observe progress, demonstrate accountability, or validate execution. But metrics are interpretive instruments. They’re not windows onto reality; they’re shaped by what someone decided to measure and how. They shape what leaders see. And what leaders see shapes what they choose to correct and what they leave untouched.1

Why measurement becomes defensive

When initiatives are under scrutiny, measurement often shifts from learning to reassurance. Dashboards are simplified. Outliers are contextualised quickly. Trend lines are emphasised over variance. This is rarely intentional distortion. It is narrative protection. Measurement becomes stabilising theatre — designed to reassure stakeholders, sponsors, and boards that the change is on track.2

The risk of interpretive narrowing

When measurement is narrowed to reinforce progress, tension disappears from view. Cross-functional friction goes unquantified. Incentive misalignment remains invisible. Authority bottlenecks are treated as execution delays rather than governance failures. The initiative looks orderly on the dashboard. The system underneath may not be. Informal buffering is happening. Managers are compensating. Local adaptation is growing.

If governance decisions rely on interpretive narrowing, interventions target the wrong layer. Training increases. Communications expand. Timelines adjust. More reinforcement is added. But the structural contradictions that caused the friction persist. The system becomes increasingly skilled at reporting progress while the underlying tension remains unaddressed.

What sense-making measurement requires

Measurement as sense-making does something different. It highlights ambiguity instead of resolving it prematurely. It surfaces tension instead of smoothing it over. It tracks contradiction instead of forcing false coherence.3 It asks: where are behavioural reversions occurring — patterns that suggest people are retreating to old ways? Where is informal buffering compensating for structural gaps that should be visible? Where do incentives actually conflict with stated priorities?

These metrics are harder to present. They are less reassuring. Leaders don’t get to say “everything is on track.” But they clarify the system. They illuminate what’s really happening beneath the surface. They enable governance to make decisions based on tension, not on reassurance.

The governance consequence

When measurement conceals tension, leadership decisions become miscalibrated. Interventions focus on surface compliance rather than structural alignment. Authority misalignment remains unresolved because it’s invisible in the metrics. Local adaptations proliferate because people adapt to the tension that governance isn’t addressing.

Over time, confidence in reporting declines. Employees and managers see the metrics but experience a different reality. The metrics say adoption is 85%. The lived experience says people are struggling. Because metrics feel disconnected from lived experience, they lose credibility. Restoring trust in measurement requires transparency about tension, not silence about progress.4

Why this matters

Measurement is not simply about tracking. It is about interpretation. It is about what sense the organisation makes of itself. If metrics reinforce stability narratives while structural friction accumulates beneath the surface, governance becomes reactive — intervening only after problems have hardened. If metrics illuminate tension early, governance remains anticipatory — able to make preventive choices while the system is still fluid.

This is one way of understanding how measurement discipline interacts with sponsorship and diagnostic framing to either stabilise change or extend misalignment. Measurement design is not merely technical. It is strategic. It shapes what leaders know and therefore what they can do.


  1. Weick, K. E. (1995). Sensemaking in Organizations. Sage. Weick establishes that measurement is not a neutral observation technology but a sensemaking instrument — it enacts the reality it purports to describe by directing attention, structuring interpretation, and organising what can be acted upon. Metrics do not capture an independent reality; they co-create the organisational reality that leaders navigate. This means that the choice of what to measure is not merely technical but constitutes the organisation’s interpretive framework for change. ↩︎

  2. Argyris, C., & Schön, D. A. (1978). Organizational Learning: A Theory of Action Perspective. Addison-Wesley. Argyris and Schön’s distinction between single-loop and double-loop learning maps precisely to the distinction between diagnostic and learning measurement. When measurement is used to reassure — to confirm that the change is on track — it operates as single-loop: it detects deviations from target and suppresses them without questioning the underlying assumptions. Double-loop measurement questions whether the targets themselves are appropriate. Reassurance-oriented dashboards are institutionalised single-loop systems. ↩︎

  3. Bazerman, M. H., & Moore, D. A. (2009). Judgment in Managerial Decision Making (7th ed.). Wiley. Bazerman and Moore document that decision-makers under motivated reasoning actively resist information that disconfirms prior commitments — not through intentional distortion but through systematic attentional bias. Measurement designed as sense-making works against this bias by deliberately including disconfirming data: metrics that surface contradiction and tension rather than confirming that the initial design was correct. The corrective to motivated reasoning is structural, not attitudinal — it requires building disconfirmation into the measurement architecture. ↩︎

  4. Merton, R. K. (1948). “The Self-Fulfilling Prophecy.” The Antioch Review, 8(2), 193–210. https://doi.org/10.2307/4609267. Merton’s analysis of how official definitions shape social reality has an inverse implication for measurement: when official metrics diverge systematically from experienced reality, the official metrics lose their capacity to direct behaviour because people navigate by experience, not by reports. High adoption rates on dashboards that people in the organisation do not recognise as reflecting their situation produce cynicism about measurement itself — a credibility cost that then persists even when measurement improves. ↩︎