
Choosing Between Subjective and Objective Weighting Methods for Multi‑Criteria Evaluation

This article reviews common techniques for determining indicator weights in multi-criteria assessments. It compares subjective approaches such as the Delphi method and the Analytic Hierarchy Process with objective methods such as coefficient-of-variation, entropy, correlation-based (independence), and CRITIC weighting, illustrating each with procedural steps, formulas, empirical data from listed companies, and a comparative analysis of the results.


Many specific methods exist for determining weights. Based on the source of the original data, they fall into two major categories: subjective weighting methods and objective weighting methods.

Subjective Weighting Methods

Subjective weighting methods use the knowledge and experience of experts (or individuals) to determine indicator weights. The original data are obtained from experts' subjective judgments, such as the Delphi method, pairwise comparison, ratio‑ranking, and Analytic Hierarchy Process (AHP). These methods share the characteristic that each evaluation indicator's weight is given by experts based on their experience and practical judgment. Different experts may produce different weights, and the main drawback is the large degree of subjectivity, which is not fundamentally improved by increasing the number of experts or carefully selecting them.

The advantage of subjective weighting is that experts can reasonably rank the indicators according to the actual problem, providing an ordered sequence of importance even if the exact weights are not precise.

Delphi Method

Also known as the expert conference method, it gathers experts' experience and opinions to determine indicator weights through repeated feedback and revision.

1. Select experts – a critical step, as the choice directly affects result accuracy. Experts should have both practical experience and solid theoretical background, and must consent to participate.

2. Distribute the list of indicators and relevant data, along with a unified weighting rule, to the selected experts for independent weight assignment.

3. Collect the results and compute the mean and standard deviation of each indicator's weight.

4. Return the calculated results and supplementary material to the experts, asking them to re-determine the weights based on the new information.

5. Repeat steps 3 and 4 until the deviation of each indicator's weight from its mean falls below a predefined threshold, indicating consensus among experts.

The Delphi process is a repeated cycle of investigation, opinion collection, summary analysis, feedback, and re‑investigation, allowing experts to work in isolation while aggregating collective wisdom.
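
The feedback loop can be sketched in code. This is a minimal simulation, not a procedure from the source: the stopping rule (every indicator's standard deviation across experts below a threshold) and the 0.02 threshold are illustrative assumptions.

```python
import numpy as np

def delphi_round(weight_matrix):
    """One feedback round: weight_matrix holds one row of weights per
    expert and one column per indicator. Returns the per-indicator mean
    and standard deviation that would be fed back to the experts."""
    W = np.asarray(weight_matrix, dtype=float)
    return W.mean(axis=0), W.std(axis=0, ddof=1)

def delphi_consensus(rounds, threshold=0.02):
    """Walk through successive rounds of expert weights; stop when every
    indicator's standard deviation falls below the threshold."""
    for r, W in enumerate(rounds, start=1):
        mean, std = delphi_round(W)
        if (std < threshold).all():
            return r, mean / mean.sum()  # normalized consensus weights
    return None, None                    # no consensus reached

# Two simulated rounds for three experts and three indicators;
# opinions tighten in the second round.
rounds = [
    [[0.50, 0.30, 0.20], [0.30, 0.40, 0.30], [0.45, 0.35, 0.20]],
    [[0.42, 0.35, 0.23], [0.40, 0.36, 0.24], [0.41, 0.35, 0.24]],
]
stop_round, weights = delphi_consensus(rounds)
```

In practice the rounds come from real questionnaires rather than a list of matrices, but the convergence check is the same.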

Analytic Hierarchy Process (AHP)

AHP, proposed by Thomas L. Saaty in the 1970s, decomposes a complex problem into a hierarchical structure, performs pairwise comparisons between elements of adjacent levels, constructs a judgment matrix, calculates priorities, checks consistency, and finally obtains a weighted ranking of factors. It can quantify non‑quantitative items and translate subjective judgments into objective weights.

1. Establish a hierarchical structure by breaking the problem into levels and grouping related elements.

2. Construct the judgment matrix using a nine-point relative importance scale.

3. Calculate the product of each row's elements and take the n-th root to obtain the geometric mean.

4. Normalize the geometric means to derive the subjective weights.

5. Perform a consistency test; if the consistency ratio (CR) falls below a threshold (commonly 0.1), the matrix is considered acceptably consistent.
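
These steps can be sketched as follows. This is the geometric-mean (row-product) variant of AHP under the usual conventions; the random-index table is Saaty's standard one, and the example judgment matrix is invented for illustration.

```python
import numpy as np

def ahp_weights(judgment):
    """AHP weights via the geometric-mean method, plus a consistency check.
    `judgment` is an n x n pairwise comparison matrix on Saaty's 1-9 scale."""
    A = np.asarray(judgment, dtype=float)
    n = A.shape[0]
    # Row product, n-th root, then normalize to get the weights.
    gm = A.prod(axis=1) ** (1.0 / n)
    w = gm / gm.sum()
    # Approximate the principal eigenvalue from A @ w.
    lam_max = ((A @ w) / w).mean()
    ci = (lam_max - n) / (n - 1)
    # Saaty's random-index values for n = 1..9.
    ri = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45]
    cr = ci / ri[n - 1] if ri[n - 1] > 0 else 0.0
    return w, cr

# Example: three criteria, with the first judged most important.
A = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
w, cr = ahp_weights(A)
```

For this matrix the CR comes out well under 0.1, so the judgments are acceptably consistent.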

Objective Weighting Methods

Objective weighting methods derive weights from the statistical properties of the indicators themselves, without requiring expert opinions. Common objective methods include the coefficient of variation, entropy method, correlation‑based weighting, and the CRITIC method.

Coefficient of Variation Weighting

The coefficient of variation (CV) reflects the amount of information each indicator provides: indicators with larger variation among evaluation units contain more information and should receive larger weights. Because raw variances are not comparable across different units and scales, the CV (standard deviation divided by the mean) is used and then normalized to obtain information‑based weights.
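
A minimal numpy sketch of this calculation, assuming positively oriented indicators with nonzero means (the toy data below are invented, not the Hunan sample):

```python
import numpy as np

def cv_weights(X):
    """Coefficient-of-variation weights. X has one row per evaluation
    unit and one column per indicator; indicators are assumed to be
    positively oriented with nonzero means."""
    X = np.asarray(X, dtype=float)
    cv = X.std(axis=0, ddof=1) / X.mean(axis=0)  # dimensionless dispersion
    return cv / cv.sum()                         # normalize to sum to 1

# Toy data: 4 units x 3 indicators; the middle indicator barely varies,
# so it should receive the smallest weight.
X = [[10, 5.0, 100],
     [12, 5.1, 300],
     [ 8, 4.9, 200],
     [11, 5.0, 400]]
w = cv_weights(X)
```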

Empirical analysis using data from 35 listed companies in Hunan Province (after removing extreme values, 28 companies remain) illustrates the calculation process and results.

[Table of company data omitted for brevity]

Eight indicators were finally selected for further analysis.

Entropy Method Weighting

Originating from thermodynamics and introduced to information theory by Shannon, entropy measures the disorder of a system. In this context, a smaller entropy for an indicator implies greater variation and thus more information, leading to a higher weight.

1. Standardize each indicator and compute the proportion of each value.

2. Calculate the entropy for each indicator.

3. Derive the difference coefficient and normalize it to obtain the entropy-based weight.
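
The steps above can be sketched as follows. The small epsilon guarding log(0) is a common implementation choice, not from the source, and the input is assumed to be min-max standardized so all values are nonnegative.

```python
import numpy as np

def entropy_weights(X):
    """Entropy-method weights. X is (units x indicators), assumed
    min-max standardized so every entry is nonnegative."""
    X = np.asarray(X, dtype=float)
    m = X.shape[0]
    # Proportion of each unit under each indicator; epsilon guards log(0).
    p = (X + 1e-12) / (X + 1e-12).sum(axis=0)
    # Entropy per indicator, scaled into [0, 1] by log(m).
    e = -(p * np.log(p)).sum(axis=0) / np.log(m)
    # Difference coefficient: lower entropy means more information.
    d = 1.0 - e
    return d / d.sum()

# Min-max standardized toy data: 4 units x 3 indicators.
# The middle indicator is constant, so its entropy is maximal
# and its weight should be essentially zero.
X = [[0.0, 0.5, 1.0],
     [0.5, 0.5, 0.0],
     [1.0, 0.5, 0.5],
     [0.3, 0.5, 0.2]]
w = entropy_weights(X)
```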

Independence Weighting

To avoid redundancy among indicators, the correlation coefficient between each pair of indicators is used as a measure of repeated information. Indicators with lower overall correlation receive higher weights.

1. Standardize the original data and compute the correlation matrix.

2. Sum each column of the correlation matrix to obtain a vector reflecting the total redundancy of each indicator.

3. Take the reciprocal of this vector and normalize it to get the independence-based weights.
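
A sketch of the procedure in numpy. Summing absolute correlations (rather than raw ones) is an assumption made here to keep the redundancy totals positive; the data are invented so that two indicators are nearly collinear.

```python
import numpy as np

def independence_weights(X):
    """Correlation-based (independence) weights: indicators that are less
    correlated with the others carry less redundant information and so
    receive larger weights."""
    X = np.asarray(X, dtype=float)
    R = np.corrcoef(X, rowvar=False)  # correlation matrix of the indicators
    r_sum = np.abs(R).sum(axis=0)     # total redundancy of each indicator
    inv = 1.0 / r_sum                 # reciprocal: less redundancy, more weight
    return inv / inv.sum()            # normalize to sum to 1

# Toy data: the first two indicators are almost collinear,
# the third is independent noise and should get the largest weight.
rng = np.random.default_rng(0)
a = rng.normal(size=50)
X = np.column_stack([a, a + 0.01 * rng.normal(size=50), rng.normal(size=50)])
w = independence_weights(X)
```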

CRITIC Method (Combination of Information Quantity and Independence)

CRITIC (Criteria Importance Through Intercriteria Correlation) combines the amount of information (measured by standard deviation) and the degree of independence (measured by correlation) to calculate objective weights.

1. Compute the standard deviation of each indicator (information quantity).

2. Calculate the correlation coefficients between indicators (independence).

3. Combine the two measures to obtain a comprehensive weight for each indicator.
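
The combination step can be sketched as follows, using the common formulation in which each indicator's score is its standard deviation times its total conflict, sum of (1 - correlation), with the others. Min-max normalizing first so standard deviations are comparable is an assumption of this sketch.

```python
import numpy as np

def critic_weights(X):
    """CRITIC weights: standard deviation (information quantity) times
    total conflict with the other indicators (independence)."""
    X = np.asarray(X, dtype=float)
    # Min-max normalize so standard deviations are on a common scale.
    Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    s = Z.std(axis=0, ddof=1)           # information quantity
    R = np.corrcoef(Z, rowvar=False)    # pairwise correlations
    conflict = (1.0 - R).sum(axis=0)    # independence measure
    c = s * conflict                    # combined score per indicator
    return c / c.sum()

# Toy data: 5 units x 3 indicators.
X = [[1.0, 200, 3.0],
     [2.0, 180, 2.5],
     [3.0, 260, 4.0],
     [4.0, 210, 3.5],
     [5.0, 300, 5.0]]
w = critic_weights(X)
```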

Comparison of Weighting Results

All methods were applied to the same dataset, and the resulting weights were compiled for comparison. The entropy and coefficient‑of‑variation methods produced the largest differences among indicator weights, reflecting their reliance on variation information. The correlation‑based method yielded more uniform weights, while the CRITIC method produced intermediate variability, aligning with its theoretical basis of balancing information quantity and redundancy.

Correlation Analysis of Weighting Results

Pairwise correlation coefficients among the four objective weighting methods showed no significant correlations, and some coefficients were negative, indicating that no single method consistently outperforms the others. This suggests that combining multiple weighting results may be necessary for robust multi‑criteria evaluation.

Reference: "Modern Comprehensive Evaluation Methods and Selected Cases" by Du Dong, Pang Qinghua, Wu Yan.

Written by

Model Perspective

Insights, knowledge, and enjoyment from a mathematical modeling researcher and educator. Hosted by Haihua Wang, a modeling instructor and author of "Clever Use of Chat for Mathematical Modeling", "Modeling: The Mathematics of Thinking", "Mathematical Modeling Practice: A Hands‑On Guide to Competitions", and co‑author of "Mathematical Modeling: Teaching Design and Cases".
