Mastering Evaluation Models: From TOPSIS to Entropy‑AHP for Decision Making
This article explains the fundamentals of evaluation models, outlines five key components of such problems, reviews common methods like TOPSIS, entropy weight, and AHP, and discusses innovative strategies such as composite indicators and combining evaluation models with each other or with predictive algorithms.
1 Evaluation Models
The 2020 HiMCM Problem A, "The Best Summer Job," is a typical evaluation (or comprehensive decision) problem. An evaluation problem involves measuring multiple similar objects across several dimensions and then aggregating those measurements into an overall score or ranking. The difficulty lies in balancing competing indicators and integrating them sensibly.
Many summer job options—office clerk, teaching assistant, salesperson, etc.—each have distinct characteristics; the goal is to consider all aspects and select the optimal job. Evaluation models provide the mathematical framework for this, as described in the referenced model series.
Evaluation problems consist of five elements:
Evaluation object: the items being assessed, e.g., the candidate jobs.
Evaluator: the individual or group performing the assessment, e.g., the modeling team.
Evaluation indicators: metrics that quantify attributes of the objects; here, the factors influencing how "good" a job is.
Weight coefficients: numerical representations of the relative importance of each indicator.
Comprehensive model: a mathematical formulation that combines the indicator values into a final score.
2 Common Evaluation Models
Classic and widely used evaluation models include:
TOPSIS
Rank-sum ratio method
Grey relational analysis
Entropy weight method
Analytic Hierarchy Process (AHP)
Fuzzy evaluation method
TOPSIS aggregates indicator data into an overall ranking, entropy weight and AHP determine indicator weights, and fuzzy evaluation handles imprecise information. These methods are common in HiMCM competitions and were also prevalent in the 2020 top‑award papers.
Beyond selecting a model, attention should also be paid to:
Indicator selection – choosing reasonable, representative, and measurable attributes.
Data preprocessing – transforming raw data (e.g., normalizing scales, aligning indicator directions) to ensure comparability and consistency; see the sketch after this list.
Model suitability – balancing simplicity, comprehensiveness, and implementation difficulty; choose a model that fits the problem rather than chasing complexity.
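To make the preprocessing step concrete, here is a minimal sketch that min‑max normalizes a decision matrix and flips cost indicators so that larger values are always better; the job data and the choice of indicators are hypothetical.

```python
import numpy as np

def normalize(X, benefit):
    """Min-max normalize each column of a decision matrix to [0, 1].

    X       : (m objects) x (n indicators) raw decision matrix
    benefit : list of bools, True if larger values of that indicator are better
    """
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)        # guard against constant columns
    Z = (X - lo) / span
    for j, is_benefit in enumerate(benefit):
        if not is_benefit:                        # cost indicator: flip direction
            Z[:, j] = 1.0 - Z[:, j]
    return Z

# Hypothetical jobs with indicators [hourly pay, commute minutes, required experience]:
jobs = [[15, 40, 2], [12, 10, 0], [20, 60, 3]]
print(normalize(jobs, benefit=[True, False, False]).round(2))
```

After this step every indicator lives on the same [0, 1] scale and points in the same direction, which is exactly the comparability that downstream models such as TOPSIS assume.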
The article does not detail each model; interested readers can consult the linked series. Instead, it highlights three ways to enhance model innovation: composite indicators, combining evaluation models, and integrating evaluation models with other techniques.
2.1 Composite Indicators
When evaluating jobs, abstract criteria such as "fatigue", "distance", or "difficulty" may be identified. These can be formalized as "work intensity", "commute situation", and "job difficulty". Since job postings rarely provide direct scores for such abstract criteria, we instead collect concrete data (e.g., job content, company address, required education) and synthesize them into the abstract indicators. For example, "job difficulty" could be derived from required education level, years of experience, and professional skill requirements.
Data for large, abstract indicators are hard to obtain directly, whereas finer-grained indicators are easier to measure. An evaluation model can therefore be viewed as a process of composing indicators: smaller sub‑indicators are aggregated into larger ones, as the sketch below illustrates.
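A minimal sketch of this composition, assuming the sub‑indicators are already normalized to [0, 1] and the sub‑weights come from some weighting method such as AHP (all numbers here are hypothetical):

```python
import numpy as np

# Hypothetical normalized sub-indicators behind the abstract "job difficulty":
# columns are required education, years of experience, professional skills.
sub_indicators = np.array([
    [0.50, 0.30, 0.40],   # office clerk
    [0.75, 0.10, 0.60],   # teaching assistant
    [0.25, 0.50, 0.30],   # salesperson
])
sub_weights = np.array([0.4, 0.3, 0.3])   # hypothetical sub-indicator weights

# Compose the concrete sub-indicators into one abstract indicator per job.
job_difficulty = sub_indicators @ sub_weights
print(job_difficulty.round(3))            # -> [0.41 0.51 0.34]
```

The same pattern repeats up the hierarchy: composite indicators such as "job difficulty" and "work intensity" are in turn aggregated into the final score.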
Creating composite indicators is not meant to complicate the problem; rather, it makes the model more feasible by allowing users to input concrete data (e.g., education, work duration) instead of vague concepts like "difficulty" or "comfort".
2.2 Combining Evaluation Models
Different models have distinct strengths: some focus on weight determination, others on data aggregation. They can be combined, for instance by using the entropy weight method to compute indicator weights and then feeding those weights into TOPSIS. This yields an "Entropy‑TOPSIS" (or "ETOPSIS") model, also referred to as "mTOPSIS" (modified TOPSIS). The 2020 Problem A top‑award paper 10549 employed a combination of entropy weight and AHP, averaging the two to balance objective and subjective weights.
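A minimal sketch of such an Entropy‑TOPSIS pipeline, assuming the decision matrix has already been normalized to benefit type with strictly positive entries (the data here are hypothetical):

```python
import numpy as np

def entropy_weights(Z):
    """Objective weights from a normalized decision matrix Z (all entries > 0)."""
    P = Z / Z.sum(axis=0)                         # column-wise proportions
    m = Z.shape[0]
    E = -(P * np.log(P)).sum(axis=0) / np.log(m)  # entropy of each indicator
    d = 1.0 - E                                   # degree of divergence
    return d / d.sum()

def topsis(Z, w):
    """TOPSIS closeness scores for a benefit-type normalized matrix Z."""
    V = Z * w                                     # weighted normalized matrix
    best, worst = V.max(axis=0), V.min(axis=0)    # ideal and anti-ideal points
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)           # higher = closer to the ideal

# Hypothetical normalized data: rows are jobs, columns are benefit indicators.
Z = np.array([[0.4, 0.9, 0.3],
              [0.8, 0.2, 0.6],
              [0.6, 0.7, 0.9]])
w = entropy_weights(Z)
print("weights:", w.round(3))
print("scores :", topsis(Z, w).round(3))
```

Swapping entropy_weights for AHP‑derived weights gives the AHP+TOPSIS variant mentioned next.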
The 10701 paper used an AHP+TOPSIS combination, pairing AHP‑derived weights with TOPSIS aggregation.
2.3 Combining Evaluation Models with Other Models
Beyond internal combinations, evaluation models can be merged with models from other categories. While the summer‑job problem is clearly an evaluation task, incorporating predictive models such as classification or clustering algorithms can yield novel recommendations. Using a machine‑learning predictor to estimate user preferences and then recommending the highest‑scoring jobs adds a fresh perspective that can impress judges; a minimal sketch follows.
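As one illustration, the sketch below trains a simple classifier on a user's past job ratings and ranks new candidates by predicted preference; the features, labels, and use of scikit‑learn's LogisticRegression are all assumptions for the sake of the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: jobs the user rated before, described by
# normalized indicator values, with label 1 = liked and 0 = disliked.
X_train = np.array([[0.8, 0.2, 0.6], [0.3, 0.9, 0.1],
                    [0.7, 0.4, 0.8], [0.2, 0.6, 0.2]])
y_train = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

# Score unseen candidate jobs and recommend them in order of preference.
candidates = np.array([[0.6, 0.3, 0.7], [0.4, 0.8, 0.2]])
pref = model.predict_proba(candidates)[:, 1]    # estimated P(user likes job)
order = np.argsort(-pref)
print("recommendation order:", order, "preferences:", pref.round(3))
```

Such a predicted preference could then be combined with the evaluation model's score rather than replacing it, giving two independent views of the same decision.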
In summary, classic evaluation model workflows are clear, but rigidly applying a single method limits creativity. Understanding the unique aspects of each problem and exploring hybrid or innovative approaches leads to more suitable and distinctive models.
Data Download
To obtain the papers mentioned, reply with the codes 2020A10549 and 2020A10701 in the public account chat to receive download links.
Model Perspective
Insights, knowledge, and enjoyment from a mathematical modeling researcher and educator. Hosted by Haihua Wang, a modeling instructor and author of "Clever Use of Chat for Mathematical Modeling", "Modeling: The Mathematics of Thinking", "Mathematical Modeling Practice: A Hands‑On Guide to Competitions", and co‑author of "Mathematical Modeling: Teaching Design and Cases".