Building and Managing an Indicator System: Methodology, Models, and Practices
The article defines an indicator system as a structured set of interrelated metrics and dimensions, explains its lifecycle and hierarchy, presents the OSM and AARRR models for constructing one, details indicator metadata and dimension management, analyzes challenges from business, technical, and product perspectives, outlines an implementation roadmap, and showcases DiDi's deployment of thousands of indicators across dozens of data domains.
This article introduces the concept of an indicator system, explaining that it is a systematic organization of interrelated metrics (indicators) and dimensions that together provide a holistic view of business performance.
It defines an indicator as a quantified measure obtained by subdividing a business unit, and distinguishes result-type indicators (outcome-focused, typically lagging) from process-type indicators (action-focused, typically leading). Dimensions are the perspectives or attributes (e.g., city, gender) from which indicators are analyzed and that give them business meaning.
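As a minimal sketch only, an indicator and its dimensions could be represented roughly as below; the field names and the example metric are assumptions for illustration, since the article prescribes no particular schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class IndicatorType(Enum):
    RESULT = "result"    # outcome-focused, typically lagging
    PROCESS = "process"  # action-focused, typically leading

@dataclass
class Indicator:
    name: str                       # business-facing name
    definition: str                 # plain-language calculation rule
    indicator_type: IndicatorType
    unit: str = ""                  # e.g. "%", "orders/day"
    dimensions: list[str] = field(default_factory=list)  # analysis perspectives, e.g. ["city", "gender"]

# A hypothetical result-type indicator sliced by two dimensions
completion_rate = Indicator(
    name="order_completion_rate",
    definition="completed orders / created orders",
    indicator_type=IndicatorType.RESULT,
    unit="%",
    dimensions=["city", "date"],
)
```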
The lifecycle of an indicator system includes definition, production, consumption, and deprecation, with continuous operations, quality assurance, and data‑driven governance throughout.
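The four stages could be encoded as a small state machine, sketched below; the transition rules are assumptions for illustration, not governance rules stated in the article.

```python
from enum import Enum

class LifecycleStage(Enum):
    DEFINITION = "definition"
    PRODUCTION = "production"
    CONSUMPTION = "consumption"
    DEPRECATION = "deprecation"

# Assumed forward transitions; quality assurance and governance apply at every stage.
ALLOWED_TRANSITIONS = {
    LifecycleStage.DEFINITION: {LifecycleStage.PRODUCTION},
    LifecycleStage.PRODUCTION: {LifecycleStage.CONSUMPTION, LifecycleStage.DEPRECATION},
    LifecycleStage.CONSUMPTION: {LifecycleStage.DEPRECATION},
    LifecycleStage.DEPRECATION: set(),
}

def can_transition(current: LifecycleStage, target: LifecycleStage) -> bool:
    return target in ALLOWED_TRANSITIONS[current]
```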
Two scientific models for constructing indicator systems are presented:
OSM model (Objective‑Strategy‑Measurement) – a horizontal framework that links business goals, strategies, and the resulting metrics.
AARRR (Acquisition, Activation, Retention, Revenue, Referral) – the classic "pirate metrics" model for growth analysis across the user lifecycle.
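To make the AARRR funnel concrete, the sketch below computes stage-to-stage conversion rates from hypothetical cohort counts; the numbers are invented for illustration.

```python
# Hypothetical stage counts for one cohort, ordered along the AARRR funnel.
funnel = {
    "acquisition": 100_000,
    "activation": 42_000,
    "retention": 18_000,
    "revenue": 9_500,
    "referral": 2_100,
}

def stage_conversion(funnel: dict[str, int]) -> dict[str, float]:
    """Conversion rate of each stage relative to the previous one."""
    stages = list(funnel)
    return {
        f"{prev}->{curr}": funnel[curr] / funnel[prev]
        for prev, curr in zip(stages, stages[1:])
    }

print(stage_conversion(funnel))
# e.g. {'acquisition->activation': 0.42, 'activation->retention': 0.43, ...}
```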
Indicator hierarchy is detailed as three tiers: T1 (company‑level strategic indicators), T2 (business‑strategy level indicators), and T3 (operational‑execution level indicators). The article provides a concrete example from DiDi’s ride‑hailing business, mapping OSM components to specific metrics such as conversion rate, cancellation rate, supply‑demand ratio, and driver satisfaction.
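The OSM mapping for the ride-hailing example can be sketched as a nested structure; the objective and strategy wording below is assumed for illustration, while the metric names are those listed in the article.

```python
# Hedged reconstruction of the OSM mapping for ride-hailing; the objective/strategy
# phrasing and the tier annotations are assumptions, the metrics come from the article.
osm_ride_hailing = {
    "objective": "grow completed ride-hailing orders",  # company/strategic (T1-style) goal, assumed
    "strategies": {
        "balance supply and demand": ["supply_demand_ratio", "conversion_rate"],   # T2-level metrics
        "improve trip experience": ["cancellation_rate", "driver_satisfaction"],   # T3-level metrics
    },
}
```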
Dimension management is discussed, covering both business‑level information (name, definition, classification) and technical metadata (whether a dimension has a physical table, enum vs. date dimension, code/name fields). The article also outlines indicator metadata, including basic information, technical mapping, and derived relationships for lineage tracing.
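A hedged sketch of how the business-level information and technical metadata described above might be captured as record types follows; all field names are assumptions, not DiDi's actual model.

```python
from dataclasses import dataclass, field

@dataclass
class Dimension:
    # Business-level information
    name: str                  # e.g. "city"
    definition: str
    classification: str        # e.g. "geography"
    # Technical metadata
    has_physical_table: bool   # whether a dimension table exists in the warehouse
    dim_kind: str              # "enum" or "date"
    code_field: str = ""       # e.g. "city_id"
    name_field: str = ""       # e.g. "city_name"

@dataclass
class IndicatorMeta:
    # Basic information
    name: str
    owner: str
    description: str
    # Technical mapping to a physical table/field
    source_table: str
    source_field: str
    # Derived relationships, enabling lineage tracing
    derived_from: list[str] = field(default_factory=list)
```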
Management challenges are analyzed from business, technical, and product perspectives, highlighting issues like unclear metric definitions, inconsistent naming, duplicated data pipelines, and lack of productized support.
Targeted management goals are set for technology (unified naming, calculation logic, and data sources), business (standardized data export and scenario coverage), and product (tooling that supports the full indicator lifecycle and decision-support data products).
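For the unified-naming goal, a simple convention check is one way such standardization could be enforced in tooling; the convention and pattern below are assumptions for illustration.

```python
import re

# Assumed convention for illustration: <domain>_<process>_<measure>[_<modifier>],
# lower-case words separated by underscores, e.g. "ride_order_cnt_7d".
NAME_PATTERN = re.compile(r"^[a-z]+(_[a-z0-9]+){2,}$")

def check_indicator_name(name: str) -> bool:
    """Return True if the name follows the assumed naming convention."""
    return bool(NAME_PATTERN.match(name))

assert check_indicator_name("ride_order_cnt_7d")
assert not check_indicator_name("RideOrderCnt")
```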
The implementation roadmap includes modeling (dimension modeling, fact tables, DWM layer), development (production, operation, quality control), and productization (indicator dictionary tools, automated generation, standardized APIs). Visual diagrams (omitted here) illustrate the architecture and workflow.
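As a toy illustration of the "automated generation" idea in the productization step, an indicator definition plus a requested dimension could be compiled into SQL against a DWM-layer table; the table and column names below are invented, not DiDi's actual schema.

```python
def generate_sql(measure_expr: str, table: str, dimension: str, date: str) -> str:
    """Render a simple aggregation query for one indicator, one dimension, one day."""
    return (
        f"SELECT {dimension}, {measure_expr} AS value\n"
        f"FROM {table}\n"
        f"WHERE dt = '{date}'\n"
        f"GROUP BY {dimension}"
    )

print(generate_sql(
    measure_expr="SUM(finished_orders) / SUM(created_orders)",
    table="dwm_trip_order_di",   # hypothetical DWM-layer fact table
    dimension="city_id",
    date="2021-06-01",
))
```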
Finally, the article reports on the adoption of the methodology and tools within DiDi, noting over 5,000 indicators covering 88 data domains, 385 business processes, and 52 scenarios, and outlines future plans for API‑based data services.