How Markov Chains Can Rank Sports Teams: A Simple Voting Model
This article explains a Markov-chain scoring method for ranking sports teams. Each match is treated as a vote in which the weaker team awards points to the stronger one. We show how to construct the stochastic matrix, handle dangling nodes, compute the steady-state vector, and read off the final rankings, in close analogy with Google's PageRank.
This rating method rests on a technique that goes back to A. A. Markov, so we call it the Markov method. In 1906 Markov introduced what are now known as Markov chains to describe sequences of dependent random events. Although he first applied them to linguistic analysis of Pushkin's verse, the chains have since found applications far beyond linguistics.
Main Idea of the Markov Method
The Markov scoring method can be summed up in one word: voting. Each contest between two teams gives the weaker team a chance to vote for the stronger one.
Many ways exist to measure the votes a team receives. The simplest uses win‑loss outcomes: a losing team gives one vote to each team that beat it. More advanced models consider scores, assigning votes equal to the point differential when a team loses to a stronger opponent. An even more refined model lets each team cast votes equal to its own points lost against each opponent. The team with the most votes gets the highest rank. This idea is a modification of Google’s PageRank algorithm.
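As a concrete sketch of the simplest win-loss model, the snippet below builds a voting matrix from a made-up four-team season; the team names and results are illustrative assumptions, not data from the article.

```python
# Hypothetical four-team season illustrating the win-loss voting model:
# each losing team casts one vote for every team that beat it.
teams = ["A", "B", "C", "D"]                  # illustrative team names
idx = {t: i for i, t in enumerate(teams)}

# (winner, loser) pairs -- made-up results, not real data
results = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "D"), ("D", "B")]

n = len(teams)
V = [[0.0] * n for _ in range(n)]             # V[i][j]: votes team i gives team j
for winner, loser in results:
    V[idx[loser]][idx[winner]] += 1.0         # the loser votes for the winner
```

Note that team A never lost, so its row is all zeros — exactly the situation the dangling-node fix below has to handle.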
Using Wins for Voting
When only win-loss data is used, each losing team casts one vote for every team that defeated it, and the entries of the voting matrix count these votes. Because Duke lost to all of its opponents, its row distributes equal-weight votes to every other team. To extract a ranking vector from this matrix, we normalize each row, producing a (nearly) stochastic matrix.
At this point the matrix is only substochastic: Miami never lost, so its row is all zeros — the analogue of the dangling-node problem in web ranking. There, the fix is to replace each all-zero row with a uniform distribution; we adopt the same fix here to make the matrix fully stochastic.
Although dangling nodes are common on the web, undefeated teams are rare over a complete sports season.
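A minimal sketch of this fix, assuming a plain list-of-lists matrix: rows that sum to zero (undefeated teams) are replaced with a uniform distribution, and every other row is divided by its sum.

```python
def to_stochastic(V):
    """Normalize each row of a voting matrix; replace all-zero rows
    (undefeated teams, the dangling nodes) with a uniform distribution."""
    n = len(V)
    S = []
    for row in V:
        total = sum(row)
        if total == 0:
            S.append([1.0 / n] * n)            # dangling-node fix
        else:
            S.append([v / total for v in row])
    return S

# An undefeated team produces an all-zero first row:
V = [[0, 0, 0],
     [1, 0, 1],
     [2, 1, 0]]
S = to_stochastic(V)
```

After this step every row sums to one, so S is a genuine stochastic matrix and defines a Markov chain.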
Following the PageRank analogy, we compute the steady-state vector of this stochastic matrix — its principal (left) eigenvector. A short story illustrates why the steady-state vector can rank teams: imagine a fair-weather fan who always wants to support the best team. Starting from any team, the fan repeatedly follows a voting link — from each losing team to a team that beat it — and in the long run spends the most time on the top-ranked teams. Mathematically, this fan performs a random walk on the Markov chain, and the long-run proportion of time spent in each state is exactly the steady-state vector.
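The fan's random walk can be imitated numerically by power iteration. The sketch below (names are illustrative) repeatedly applies the stochastic matrix to a uniform starting distribution; for an irreducible, aperiodic chain this converges to the steady-state vector.

```python
def steady_state(S, tol=1e-12, max_iter=100_000):
    """Power iteration: iterate pi <- pi @ S from a uniform start
    until the vector stops changing."""
    n = len(S)
    pi = [1.0 / n] * n
    for _ in range(max_iter):
        new = [sum(pi[i] * S[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(new, pi)) < tol:
            return new
        pi = new
    return pi

# Two-state example; solving pi = pi S by hand gives pi = (2/7, 5/7)
S = [[0.5, 0.5],
     [0.2, 0.8]]
pi = steady_state(S)
```

The components of the resulting vector are the scores; sorting them in decreasing order yields the ranking.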
For the five-team example, the steady-state vector supplies each team's score, and sorting these scores gives the final rankings.
Summary of the Markov Scoring Method
The following notation is used in describing the Markov method for ranking sports teams:

- the number of types of statistical data incorporated into the Markov model
- the voting matrix for each type of statistical data
- the votes each team receives under that statistic
- the stochastic matrix derived from each voting matrix
- the final stochastic matrix constructed from the above
- the weights assigned to the statistics
- a parameter ensuring irreducibility of the stochastic Markov matrix
Team Rating Using the Markov Method
1. Generate a voting matrix from the chosen statistical data and construct the corresponding stochastic matrix.
2. Compute the steady-state (principal eigen-) vector of this matrix; if the matrix is reducible, first modify it to be irreducible and then compute the eigenvector.
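Putting the steps together, here is a minimal end-to-end sketch for a single win-loss statistic. The uniform dangling-row fix and the PageRank-style damping parameter beta (used to force irreducibility) are assumptions on my part; all function and variable names are illustrative.

```python
def markov_ratings(teams, results, beta=0.85, iters=2000):
    """Rank teams: voting matrix -> stochastic matrix ->
    irreducible matrix -> steady-state vector -> sorted ratings."""
    n = len(teams)
    idx = {t: i for i, t in enumerate(teams)}

    # Step 1: each loser casts one vote for each team that beat it.
    V = [[0.0] * n for _ in range(n)]
    for winner, loser in results:
        V[idx[loser]][idx[winner]] += 1.0

    # Step 2: normalize rows; undefeated teams get a uniform row.
    S = []
    for row in V:
        total = sum(row)
        S.append([1.0 / n] * n if total == 0 else [v / total for v in row])

    # Step 3: blend with the uniform matrix so the chain is irreducible
    # (a PageRank-style damping assumption).
    G = [[beta * S[i][j] + (1.0 - beta) / n for j in range(n)]
         for i in range(n)]

    # Step 4: steady-state vector by power iteration.
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * G[i][j] for i in range(n)) for j in range(n)]

    return sorted(zip(teams, pi), key=lambda tp: -tp[1])

# Tiny made-up season: A beats everyone, B beats C.
ranking = markov_ratings(["A", "B", "C"],
                         [("A", "B"), ("A", "C"), ("B", "C")])
```

With these results the undefeated team A accumulates the most long-run random-walk time and ranks first, followed by B and then C.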
Model Perspective
Insights, knowledge, and enjoyment from a mathematical modeling researcher and educator. Hosted by Haihua Wang, a modeling instructor and author of "Clever Use of Chat for Mathematical Modeling", "Modeling: The Mathematics of Thinking", "Mathematical Modeling Practice: A Hands‑On Guide to Competitions", and co‑author of "Mathematical Modeling: Teaching Design and Cases".