
Why Designing Metrics Matters: From Definition to Good Indicator Practices

This article explains why metrics are essential, defines what a metric is, describes the four essential elements of metric design, outlines the three‑step design process, discusses measurement scales and time characteristics, and provides criteria for evaluating good metrics.

Data Thinking Notes

01 Why Design Metrics?

We know too little. Like Jon Snow, most people underestimate how little they actually understand, and without metrics the information we can obtain is extremely limited or costly.

Metrics let us learn more by conveying sufficient information at an acceptable cost.

Middle‑aged Jia goes for a health check; instead of blood pressure, body fat, or blood sugar readings, the doctor offers only a vague assessment.

Little Yi is stopped for a DUI check; the officer asks how much he drank, but without a blood‑alcohol reading, no penalty can be determined.

A CEO asks about sales performance; the VP replies "great" without citing total sales, per‑capita output, or trends.

In each case, without the tool of metrics, the information we can gather is either very limited or extremely expensive to obtain.

02 What Is a Metric?

Common everyday metrics include height, weight, temperature, and GDP. What they share is that all are numeric values; they differ in what those numbers mean.

A metric is a defined numeric value used to quantify a fact. The abstraction can be single‑step or multi‑step:

Simple facts (e.g., weight) can be measured directly with an atomic metric.

More complex facts (e.g., body composition) require derived metrics such as BMI or body‑fat rate.

Highly complex facts (e.g., a country's economic situation) need a whole system of metrics like GDP, often involving many layers of abstraction.
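As a concrete illustration, a derived metric such as BMI combines two directly measurable atomic metrics (weight and height) through a defined formula; a minimal Python sketch:

```python
# A derived metric combines atomic measurements through a defined formula.
# BMI abstracts the complex fact "body composition" from two facts that
# can each be measured directly.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

print(round(bmi(70, 1.75), 1))  # 70 / 1.75**2 ≈ 22.9
```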

03 How to Design a Metric?

1. Metric Design Process and Classification

From a statistical and data‑governance perspective, the design process consists of three steps: abstraction, processing, and limiting.

Data are first abstracted into atomic metrics (raw measures such as premium amount, number of customers, or user count). Atomic metrics are then processed in one of three ways (comparison, statistical calculation, or index design) to form derived metrics (e.g., enrollment rate, average order value, CSI). Finally, applying dimensional limits to a derived metric yields derived metrics with dimensions (e.g., the same metric per region or per product).
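The three‑step flow can be sketched over hypothetical order records (the data and field names below are illustrative, not from the article):

```python
# Hypothetical raw facts: a few order records.
orders = [
    {"region": "North", "amount": 120.0},
    {"region": "North", "amount": 80.0},
    {"region": "South", "amount": 200.0},
]

# Step 1 — atomic metrics: direct sums/counts over the raw facts.
total_sales = sum(o["amount"] for o in orders)   # 400.0
order_count = len(orders)                        # 3

# Step 2 — derived metric: atomic metrics combined by calculation.
average_order_value = total_sales / order_count  # ≈ 133.33

# Step 3 — dimensional limit: the same derived metric, per region.
by_region: dict[str, list[float]] = {}
for o in orders:
    by_region.setdefault(o["region"], []).append(o["amount"])
aov_by_region = {r: sum(v) / len(v) for r, v in by_region.items()}

print(aov_by_region)  # {'North': 100.0, 'South': 200.0}
```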

2. Metric Scale Characteristics

Metrics have four measurement scales: nominal, ordinal, interval, and ratio. Each scale determines which mathematical operations are valid.

Nominal values support only equality checks and counting. Ordinal values can be ranked but not added or subtracted: satisfaction scores can be compared, not summed. Interval values can be added and subtracted but not multiplied or divided, because their zero point is arbitrary: temperature in Celsius is interval, so saying "20°C is twice as hot as 10°C" is misleading. Only ratio values (e.g., weight, revenue), with a true zero, support all four arithmetic operations.
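The interval‑scale pitfall is easy to demonstrate: converting Celsius to Kelvin (a ratio scale with a true zero) shows the real ratio is nowhere near two. A minimal sketch:

```python
# Celsius is an interval scale: its zero point (0°C) is arbitrary,
# so ratios of Celsius values are meaningless. Kelvin has a true zero.

def c_to_k(celsius: float) -> float:
    """Convert Celsius (interval scale) to Kelvin (ratio scale)."""
    return celsius + 273.15

naive_ratio = 20 / 10                   # 2.0 — the misleading "twice as hot"
true_ratio = c_to_k(20) / c_to_k(10)    # ≈ 1.035 on the ratio scale

print(round(true_ratio, 3))  # 1.035
```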

3. Metric Time Features

Time affects both the accuracy and the timeliness of a metric. Two kinds of asynchrony exist: between related facts, and between a fact and the calculation of its metric.

Related facts are often separated in time (e.g., refunds happen days or weeks after their orders), so a metric computed too early misses later facts. Data pipelines also introduce delays, creating a lag between when a fact occurs and when its metric becomes available.

We therefore balance a "T+n" convention (e.g., T+15 days for an insurance refund rate, so that late refunds are still captured) against real‑time requirements.
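A T+n metric can be sketched as a windowed calculation; the order records below are hypothetical, and the 15‑day window follows the insurance refund‑rate example:

```python
from datetime import date, timedelta

# Hypothetical orders with optional refund dates (illustrative only).
orders = [
    {"ordered": date(2024, 1, 1), "refunded": date(2024, 1, 10)},
    {"ordered": date(2024, 1, 1), "refunded": date(2024, 1, 20)},  # outside T+15
    {"ordered": date(2024, 1, 1), "refunded": None},               # never refunded
]

def refund_rate_t_plus_n(orders: list[dict], n: int) -> float:
    """Share of orders refunded within n days of ordering (a T+n window)."""
    window = timedelta(days=n)
    refunded = sum(
        1 for o in orders
        if o["refunded"] is not None and o["refunded"] - o["ordered"] <= window
    )
    return refunded / len(orders)

print(refund_rate_t_plus_n(orders, 15))  # 1 of 3 orders fall inside the window
```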

04 What Makes a Good Metric?

We evaluate metrics on four dimensions:

Validity: Does the metric faithfully reflect the fact it quantifies?

Reliability: Is the metric stable over repeated measurements?

Sensitivity: Can the metric capture changes in the fact?

Operability: Can the metric be used daily to drive improvements?

Summary

Metrics help acquire more information at low cost.

A metric is a defined numeric value that quantifies a fact.

Four essential elements of metric design: name, owner, meaning, calculation.

Design process: abstraction → processing → limiting, producing atomic, derived, and derived‑with‑dimension metrics.

Good metrics are evaluated by validity, reliability, sensitivity, and operability.

(Source: Zhihu – Good Analyst)

Tags: operations, metrics, data analysis, indicator design, measurement scales
Written by

Data Thinking Notes

Sharing insights on data architecture, governance, and middle platforms, exploring AI in data, and linking data with business scenarios.
