How to Organize Experience Evaluation with the QMD Framework: A Step‑by‑Step Guide
This article explains how to organize experience evaluation systematically using the QMD framework and the PDCA cycle. It covers goal definition, stakeholder roles, business selection, timeline planning, location setup, assessment methods, reliability checks, business review, reporting, and continuous improvement, all aimed at producing reliable, business-aligned UX outcomes.
Why: Evaluation Goals
The new experience evaluation mechanism aligns quality improvement with business objectives, aiming to enhance product usability and core business metrics.
Who: Relevant Personnel
The evaluation is driven by a dedicated Experience Management Group (interaction designers and user-research engineers) and a QMD project team that coordinates assessment, presentation, and follow-up across all business units. Each business contributes six experts (two product, two interaction, and two visual) for cross-evaluation.
What: Business Selection
Businesses are chosen based on four criteria: importance of experience, presence of measurable goals, iteration frequency, and user scale.
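The article does not say how the four criteria are combined; a simple weighted rubric is one plausible reading. The sketch below is hypothetical throughout: the weights, the 1-5 ratings, and any cutoff are invented for illustration.

```python
# Hypothetical weights for the four selection criteria; a business whose
# weighted score clears an agreed cutoff enters the quarterly evaluation pool.
CRITERIA_WEIGHTS = {
    "experience_importance": 0.35,
    "measurable_goals": 0.25,
    "iteration_frequency": 0.20,
    "user_scale": 0.20,
}

def selection_score(ratings: dict) -> float:
    """ratings maps each criterion to a 1-5 rating from the QMD team."""
    return sum(CRITERIA_WEIGHTS[c] * r for c, r in ratings.items())

print(selection_score({
    "experience_importance": 5,
    "measurable_goals": 4,
    "iteration_frequency": 3,
    "user_scale": 5,
}))  # 4.35 on a 5-point scale
```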
When: Timeline Planning
Evaluations follow a quarterly PDCA cycle. Planning occurs at the start of each quarter, with weekly reports to handle exceptions and buffer time for release‑related constraints.
Where: Evaluation Location
Physical meeting rooms are booked in advance, with provisions for remote participation during pandemic‑related remote work periods.
How: Assessment Method
Experts use scoring tools and subjective feedback forms (e.g., the 58 Questionnaire and online collaborative sheets). The basic indicators were refined to be clear, accurate, universal, and actionable: the second-level metrics were reduced from 15 to 10, and motion and semantics criteria were added.
Business‑specific indicators are defined using the GSM model (Goal‑Signal‑Metric), reverse thinking, traceability, and contextual adaptation.
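To make the GSM chain concrete, here is a minimal sketch of how one business-specific indicator might be recorded; the listing-detail example, field names, and target value are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class GSMMetric:
    goal: str      # business outcome the experience should serve
    signal: str    # observable user behavior indicating progress toward the goal
    metric: str    # quantifiable measure of that signal
    target: float  # threshold agreed with the business side

# Hypothetical indicator for a listing-detail page.
detail_page = GSMMetric(
    goal="Users can judge listing quality quickly",
    signal="Users reach the contact action without backtracking",
    metric="Share of sessions that contact within 60 s of page load",
    target=0.35,
)
```

Reverse thinking, on this reading, traverses the same chain backward: start from a metric the business already tracks and ask which signal and goal it actually evidences.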
Experience Evaluation Process
After preparation, experts evaluate according to standardized procedures, and the results are checked for scorer reliability. If reliability is low, data groups are adjusted and re-evaluated.
Reliability Evaluation
Raw scores and subjective responses are pre‑processed, and reliability is validated. In cases of low reliability, additional experts are recruited and data re‑grouped until acceptable reliability is achieved.
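The article does not name the reliability statistic. For multiple experts scoring the same set of items, Kendall's coefficient of concordance (W) is a common choice; the sketch below omits the tie correction, and the 0.7 acceptance threshold and the score matrix are assumptions for illustration.

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(scores: np.ndarray) -> float:
    """Kendall's coefficient of concordance, without tie correction.

    scores: (n_raters, n_items) matrix of raw expert scores.
    Returns a value in [0, 1]; higher means stronger agreement.
    """
    m, n = scores.shape
    ranks = np.apply_along_axis(rankdata, 1, scores)  # rank items within each rater
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m**2 * (n**3 - n))

# Six experts scoring five items (hypothetical data).
scores = np.array([
    [4, 3, 5, 2, 4],
    [5, 3, 4, 2, 4],
    [4, 2, 5, 3, 4],
    [5, 3, 5, 2, 3],
    [4, 3, 4, 2, 5],
    [5, 2, 4, 3, 4],
])
w = kendalls_w(scores)
print("regroup and re-evaluate" if w < 0.7 else f"agreement acceptable (W={w:.2f})")
```

If W stays low after regrouping, recruiting additional experts, as the article describes, both adds data and dilutes any single outlier's influence.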
Business Review
The review presents scores, comparisons, subjective issues, and screenshots, allowing businesses to understand conclusions, provide feedback, and iterate. Designers must possess basic data analysis and spreadsheet skills.
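As an example of the spreadsheet work this implies, the sketch below aggregates raw expert scores per metric with pandas; the column names and scores are hypothetical.

```python
import pandas as pd

# Hypothetical long-format export from the collaborative scoring sheet:
# one row per (expert, business, metric) score.
scores = pd.DataFrame({
    "expert":   ["A", "A", "B", "B", "C", "C"],
    "business": ["Jobs"] * 6,
    "metric":   ["clarity", "motion"] * 3,
    "score":    [4, 3, 5, 3, 4, 2],
})

# The review deck surfaces mean scores per metric; a large standard
# deviation flags metrics where experts disagreed and the subjective
# comments and screenshots deserve a closer look.
summary = scores.groupby(["business", "metric"])["score"].agg(["mean", "std"])
print(summary)
```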
Report Generation
Reports link scores to issues, highlight gaps, and include QMD team optimization suggestions. High‑priority items are flagged by weight, deviation percentages, and competitive comparisons.
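One way to operationalize flagging by weight and deviation is sketched below; the metrics, weights, scores, and competitor benchmark are invented for illustration.

```python
import pandas as pd

# Hypothetical metric-level results: indicator weight, own score,
# and a competitor benchmark on the same 5-point scale.
results = pd.DataFrame({
    "metric":     ["clarity", "motion", "semantics"],
    "weight":     [0.40, 0.25, 0.35],
    "score":      [3.2, 4.1, 2.8],
    "competitor": [4.0, 3.9, 3.6],
})

# Percentage deviation from the benchmark, scaled by indicator weight;
# the largest weighted gaps become the high-priority items in the report.
results["gap_pct"] = (results["competitor"] - results["score"]) / results["competitor"]
results["priority"] = results["weight"] * results["gap_pct"]
print(results.sort_values("priority", ascending=False))
```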
Validation and Continuous Improvement
The upgraded evaluation system is validated by checking whether applied recommendations lead to expected business metric improvements. Regular retrospectives refine the SOP, ensuring minimal human bias and reliable outcomes.
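The article leaves the validation method open. A minimal sketch, assuming the target metric is tracked daily, is a before/after comparison such as the one below (data hypothetical); real validation should also account for seasonality and concurrent releases.

```python
from scipy import stats

# Hypothetical daily task-success rates for the week before and the
# week after the QMD recommendations were applied.
before = [0.71, 0.69, 0.73, 0.70, 0.72, 0.68, 0.71]
after  = [0.75, 0.74, 0.77, 0.73, 0.76, 0.74, 0.75]

# A two-sample t-test as a quick sanity check that the observed lift
# is unlikely to be noise.
t, p = stats.ttest_ind(after, before)
print(f"t = {t:.2f}, p = {p:.4f}")
```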
58UXD
58.com User Experience Design Center