Introduction

Pattern: Metric gaming

Measures are optimized to look good rather than to reflect reality.

Situation

  1. In this condition, performance is primarily assessed using a small set of quantitative indicators that proxy for broader objectives.
  2. In this condition, reported metrics show improvement over time while frontline observations do not show equivalent changes in underlying outcomes.
  3. In this condition, employees adjust workflows and priorities to maximize what is counted by the formal measurement system.
  4. In this condition, activities that are difficult to quantify receive less visibility and fewer resources than those tied to tracked indicators.
  5. In this condition, reporting periods trigger concentrated efforts to adjust classifications, timing, or documentation of results.
  6. In this condition, internal conversations focus on hitting targets rather than examining the substance of the work.
  7. In this condition, discrepancies between reported performance and stakeholder experience are noticeable but formally unaddressed.

Assessment

  1. This occurs because complex objectives are reduced to measurable proxies that are easier to track than the underlying outcomes.
  2. This occurs because incentives such as compensation, promotion, or survival are tied directly to reported indicators rather than independently verified impact.
  3. This occurs because supervisors and external stakeholders rely on aggregated metrics to reduce oversight costs and information asymmetry.
  4. This occurs because rule-based reporting systems reward compliance with measurement definitions rather than fidelity to substantive intent.
  5. This occurs because actors who do not optimize for the metric are disadvantaged relative to those who exploit classification flexibility.
  6. This occurs because quantifiable indicators provide defensible justification in audits and reviews, even when they distort operational reality.
  7. This occurs because once metrics are embedded in contracts, dashboards, and governance processes, changing them threatens existing authority structures and reporting comparability.
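The proxy-divergence dynamic described above can be illustrated with a toy sketch (all numbers hypothetical): when rewards attach only to the measured indicator, effort migrates toward it and the unmeasured substantive outcome degrades.

```python
# Toy sketch (hypothetical weights): one unit of effort is split between
# proxy-maximizing work and substantive work. The dashboard sees only the
# proxy; the substantive outcome depends mostly on the unmeasured remainder.

def outcomes(effort_on_proxy: float) -> tuple[float, float]:
    """Return (reported_metric, substantive_outcome) for a given split."""
    effort_on_real = 1.0 - effort_on_proxy
    reported = effort_on_proxy                               # what gets counted
    substantive = 0.3 * effort_on_proxy + 0.7 * effort_on_real
    return reported, substantive

# An actor rewarded on the reported metric pushes effort_on_proxy toward 1.0.
balanced = outcomes(0.5)
gamed = outcomes(1.0)

print(f"balanced: reported={balanced[0]:.2f}, substantive={balanced[1]:.2f}")
print(f"gamed:    reported={gamed[0]:.2f}, substantive={gamed[1]:.2f}")
```

Under these assumed weights, the gamed allocation reports a higher metric while the substantive outcome falls, which is the divergence the Assessment points at.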

Consequence

  1. Without altering the linkage between rewards and proxy metrics, resource allocation becomes increasingly concentrated on activities that maximize measured indicators rather than underlying outcomes.
  2. Without separating reported performance from substantive impact, strategic drift becomes unavoidable as formal success diverges from mission reality.
  3. Without revising measurement rules or oversight mechanisms, classification flexibility continues to expand, and reported data integrity becomes unstable.
  4. Without alternative evaluation channels, individuals who refuse to game metrics face progressive marginalization within competitive ranking systems.
  5. Without confronting discrepancies between stakeholder experience and reported results, reputational instability becomes likely once inconsistencies are publicly exposed.

Decisions

  1. We decide to track and report a parallel personal performance log that records substantive outcomes alongside official metrics because this gives us independent evidence of real impact instead of relying solely on the organization’s dashboard figures, and accept that this additional documentation increases our workload and may create tension if discrepancies are noticed.
  2. We decide to refuse any reclassification or timing adjustment that alters the factual substance of results even if it improves reported numbers because this gives us defensible integrity over our own contributions instead of participating in technically compliant metric inflation, and accept that this may slow our advancement relative to peers who optimize the figures.
  3. We decide to allocate a fixed portion of our working time to mission-critical tasks that are not captured by current indicators because this gives us sustained substantive competence and outcome continuity instead of dedicating all available effort to measured activities, and accept that our visible performance scores may plateau or decline.
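The parallel performance log in decision 1 could take many forms; a minimal sketch, assuming a simple append-only record with hypothetical fields, might look like this:

```python
# Minimal sketch (hypothetical fields) of a parallel outcome log kept
# alongside official metrics, as proposed in decision 1.
from dataclasses import dataclass
from datetime import date


@dataclass
class OutcomeEntry:
    day: date
    official_metric: str       # what the dashboard records
    substantive_outcome: str   # what actually changed for stakeholders
    discrepancy: bool = False  # flag when the two accounts diverge


log: list[OutcomeEntry] = []
log.append(OutcomeEntry(
    day=date(2024, 3, 1),
    official_metric="12 tickets closed",
    substantive_outcome="root cause of a recurring outage fixed",
    discrepancy=True,
))

print(len(log), log[0].discrepancy)
```

The `discrepancy` flag is the point of the exercise: entries where the official figure and the substantive account diverge are the independent evidence the decision calls for.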

Direct formulations

  1. I will maintain my own record of substantive outcomes and refer to it when my performance is discussed, rather than treating the official dashboard as the full account of my work.
  2. I will not reclassify, defer, or frame results in ways that change their factual substance, even if doing so improves the reported numbers.
  3. I will spend a fixed portion of my time on mission-critical work that is not measured, and I will not reallocate that time solely to raise tracked indicators.