Mastering Evaluation Models: From TOPSIS to Entropy‑AHP for Decision‑Making
This article explains the concept of evaluation problems, outlines their five essential elements, introduces common models such as TOPSIS, the entropy weight method, AHP, and fuzzy evaluation, and discusses how to combine and adapt these models for more effective decision‑making.
1 Evaluation Models
The 2020 HiMCM Problem A, “The Best Summer Job”, is a typical evaluation (or comprehensive decision) problem. An evaluation problem measures multiple similar objects across several dimensions, aggregates those measures, and produces an overall score or ranking. The difficulty lies in balancing competing indicators.
Summer jobs like office clerk, teaching assistant, or sales clerk each have distinct features; we assess them holistically rather than focusing on a single aspect. Evaluation models provide the mathematical framework for such problems.
Evaluation problems consist of five elements:
Evaluation object: the alternatives being assessed (e.g., various jobs).
Evaluator: the individual or group performing the assessment (the modeling team).
Evaluation indicators: metrics that capture attributes of the objects; multiple indicators describe different aspects of the system.
Weight coefficients: quantitative representation of each indicator’s relative importance.
Comprehensive model: a mathematical formulation that combines indicator values into a final score (a minimal weighted‑sum sketch follows this list).
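To make these elements concrete, here is a minimal sketch of the simplest comprehensive model, a weighted sum; the job names, indicator values, and weights are hypothetical and not taken from any particular paper.

```python
import numpy as np

# Hypothetical decision matrix: rows = evaluation objects (jobs),
# columns = evaluation indicators (e.g., pay, flexibility, skill growth),
# all scaled to [0, 1] and oriented so that larger is better.
jobs = ["office clerk", "teaching assistant", "sales clerk"]
X = np.array([
    [0.6, 0.8, 0.4],
    [0.5, 0.6, 0.9],
    [0.7, 0.3, 0.5],
])

# Hypothetical weight coefficients (sum to 1).
w = np.array([0.5, 0.2, 0.3])

# Comprehensive model: weighted sum of indicator values gives the final score.
scores = X @ w
for job, s in sorted(zip(jobs, scores), key=lambda t: -t[1]):
    print(f"{job}: {s:.3f}")
```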
2 Common Evaluation Models
Classic and frequently used models include:
TOPSIS
Rank Sum Ratio
Grey Relational Analysis
Entropy Weight Method
Analytic Hierarchy Process (AHP)
Fuzzy Evaluation Method
TOPSIS aggregates indicator data into an overall ranking; the entropy weight method and AHP determine indicator weights (the former objectively from the data, the latter from subjective judgments); and fuzzy evaluation handles imprecise or qualitative information. These methods are common in HiMCM competitions and were prominent in the 2020 award‑winning papers.
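As a reference point for the combinations discussed later, here is a minimal TOPSIS sketch; the decision matrix and weights are hypothetical, and all indicators are assumed to be benefit‑type (larger is better).

```python
import numpy as np

def topsis_rank(X, weights):
    """Rank alternatives with TOPSIS.

    X: (m, n) decision matrix, rows = alternatives, columns = benefit-type
    indicators (larger is better). weights: length-n array summing to 1.
    Returns the closeness coefficient of each alternative (higher = better).
    """
    # 1. Vector-normalize each column, then apply the weights.
    norm = X / np.sqrt((X ** 2).sum(axis=0))
    V = norm * weights

    # 2. Positive and negative ideal solutions.
    ideal_best = V.max(axis=0)
    ideal_worst = V.min(axis=0)

    # 3. Euclidean distances to the two ideal solutions.
    d_best = np.sqrt(((V - ideal_best) ** 2).sum(axis=1))
    d_worst = np.sqrt(((V - ideal_worst) ** 2).sum(axis=1))

    # 4. Relative closeness to the ideal solution.
    return d_worst / (d_best + d_worst)

# Hypothetical data: 3 jobs evaluated on 3 benefit-type indicators.
X = np.array([[8.0, 7.0, 4.0],
              [6.0, 8.0, 9.0],
              [9.0, 3.0, 5.0]])
print(topsis_rank(X, np.array([0.4, 0.3, 0.3])))
```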
Beyond model selection, attention must also be paid to:
Indicator selection – choosing representative, measurable attributes.
Data preprocessing – transforming raw data for consistency and comparability (see the preprocessing sketch after this list).
Model suitability – balancing simplicity, comprehensiveness, and implementation difficulty.
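Regarding the preprocessing step referenced above, a minimal sketch, assuming hypothetical column roles: cost‑type indicators (smaller is better) are flipped so that every column becomes benefit‑type, and all values are rescaled to [0, 1].

```python
import numpy as np

def preprocess(X, cost_cols=()):
    """Min-max normalize a raw decision matrix to [0, 1].

    Columns listed in cost_cols are cost-type indicators (smaller is
    better); they are flipped so that larger is better everywhere.
    """
    X = X.astype(float).copy()
    col_min, col_max = X.min(axis=0), X.max(axis=0)
    span = np.where(col_max > col_min, col_max - col_min, 1.0)
    X = (X - col_min) / span                              # benefit-type: larger is better
    X[:, list(cost_cols)] = 1.0 - X[:, list(cost_cols)]   # flip cost-type columns
    return X

# Hypothetical raw data: hourly pay, commute minutes (cost), weekly hours (cost).
raw = np.array([[15.0, 40.0, 20.0],
                [12.0, 10.0, 25.0],
                [18.0, 60.0, 35.0]])
print(preprocess(raw, cost_cols=(1, 2)))
```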
2.1 Composite Indicators
Abstract indicators such as “fatigue”, “distance”, or “difficulty” are often hard to quantify directly. Instead, concrete factors (e.g., work content, location, required education) are combined to form composite indicators like “job difficulty”. This synthesis makes the model more feasible and user‑friendly.
Smaller, concrete indicators are easier to obtain data for, and a model essentially aggregates these sub‑indicators into higher‑level metrics.
The purpose of composite indicators is not to complicate the problem but to make the model more practical by using easily measurable inputs.
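A minimal sketch of this idea follows; the sub‑indicator names, scores, and weights are hypothetical. Each concrete factor is scored, then aggregated into one composite value such as “job difficulty”, which enters the main decision matrix as a single indicator.

```python
import numpy as np

# Hypothetical concrete sub-indicators for each job, each scored on [0, 1]:
# physical demand, required education, schedule irregularity.
sub_scores = {
    "office clerk":       [0.3, 0.5, 0.2],
    "teaching assistant": [0.2, 0.8, 0.4],
    "sales clerk":        [0.7, 0.3, 0.6],
}

# Hypothetical weights for how much each concrete factor contributes
# to the composite indicator "job difficulty".
sub_weights = np.array([0.4, 0.3, 0.3])

# Synthesize the composite indicator from the measurable sub-indicators.
difficulty = {job: float(np.dot(vals, sub_weights))
              for job, vals in sub_scores.items()}
print(difficulty)

# "job difficulty" can now sit in the main decision matrix alongside
# top-level indicators such as pay and flexibility.
```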
2.2 Combining Evaluation Models
Different models have distinct strengths and can be combined. For example, the entropy weight method can determine indicator weights, which are then fed into TOPSIS (which would otherwise treat all indicators as equally important). This yields an “Entropy‑TOPSIS” (ETOPSIS) or “modified TOPSIS” model, a typical micro‑innovation. The 2020 Problem A award‑winning paper 10549 averaged the entropy weight and AHP weights to balance subjective and objective weighting.
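A minimal sketch of the entropy weight step, on a hypothetical preprocessed decision matrix; the resulting weights would replace the equal weights of a plain TOPSIS run, for example as the weights argument of the TOPSIS routine sketched earlier.

```python
import numpy as np

def entropy_weights(X):
    """Objective weights from the entropy method.

    X: (m, n) decision matrix with non-negative entries, columns already
    oriented so that larger is better.
    """
    m = X.shape[0]
    P = X / X.sum(axis=0)                      # share of each alternative per indicator
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    E = -plogp.sum(axis=0) / np.log(m)         # entropy of each indicator, in [0, 1]
    d = 1.0 - E                                # degree of diversification
    return d / d.sum()                         # normalized entropy weights

# Hypothetical preprocessed decision matrix (3 jobs x 3 indicators).
X = np.array([[0.6, 0.8, 0.4],
              [0.5, 0.6, 0.9],
              [0.7, 0.3, 0.5]])
w_entropy = entropy_weights(X)
print(w_entropy)   # feed these into TOPSIS instead of equal weights
```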
Paper 10701 took a similar route, combining AHP with TOPSIS.
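For the AHP side of such a combination, a minimal sketch follows; the pairwise comparison matrix is hypothetical and not taken from paper 10701. The principal eigenvector of the comparison matrix gives subjective weights, which are checked for consistency and can then be fed into TOPSIS or averaged with entropy weights.

```python
import numpy as np

# Saaty's random index values for the consistency check, by matrix size.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def ahp_weights(A):
    """Subjective weights from a pairwise comparison matrix A (n x n).

    Returns (weights, consistency_ratio); a CR below about 0.1 is
    usually considered acceptable.
    """
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()
    lam_max = eigvals[k].real
    CI = (lam_max - n) / (n - 1)
    CR = CI / RI[n] if RI[n] > 0 else 0.0
    return w, CR

# Hypothetical pairwise comparisons among 3 indicators (pay, growth, flexibility).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w_ahp, cr = ahp_weights(A)
# Subjective AHP weights can go straight into TOPSIS, or be averaged with
# entropy weights to balance subjective and objective weighting.
print(w_ahp, cr)
```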
2.3 Merging Evaluation Models with Other Models
Beyond internal combinations, evaluation models can be paired with other model types. For instance, after building an evaluation model, one could add a classification or clustering model to recommend top‑scoring jobs to different users, similar to recommendation systems in apps. Introducing such cross‑model innovation can make a solution stand out.
Finally, while classic evaluation model workflows are clear, rigidly applying a single method limits creativity. Each problem has unique traits; understanding and exploiting these nuances leads to more suitable and distinctive models.
Materials Download
The papers mentioned can be obtained by replying with the following codes in the public account chat:
2020A10549 and 2020A10701
Model Perspective
Insights, knowledge, and enjoyment from a mathematical modeling researcher and educator. Hosted by Haihua Wang, a modeling instructor and author of "Clever Use of Chat for Mathematical Modeling", "Modeling: The Mathematics of Thinking", "Mathematical Modeling Practice: A Hands‑On Guide to Competitions", and co‑author of "Mathematical Modeling: Teaching Design and Cases".