
How Metropolis-Hastings Improves MCMC Sampling Efficiency

This article explains the detailed‑balance condition for Markov chains, shows why finding a transition matrix for a given stationary distribution is hard, and demonstrates how Metropolis‑Hastings modifies MCMC to achieve higher acceptance rates with a concrete Python example.


Detailed Balance Condition of Markov Chains

Before constructing a transition matrix that yields a desired stationary distribution, we need the detailed‑balance condition: for an aperiodic Markov chain with transition matrix \(P\) and a distribution \(\pi\), if \(\pi_i P_{ij}=\pi_j P_{ji}\) holds for all states \(i, j\), then \(\pi\) is a stationary distribution of the chain. Indeed, summing both sides over \(i\) gives \(\sum_i \pi_i P_{ij}=\pi_j\), i.e. \(\pi P=\pi\), so the chain converges to \(\pi\).

Finding a matrix that satisfies this condition for an arbitrary target distribution is generally difficult: a randomly chosen transition matrix will almost never satisfy the detailed‑balance equations.
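A quick numerical sketch (using numpy, which the article's own code does not otherwise need) illustrates this: build a random transition matrix, find its stationary distribution by power iteration, and check whether detailed balance holds.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random 4-state transition matrix (rows sum to 1).
P = rng.random((4, 4))
P = P / P.sum(axis=1, keepdims=True)

# Find its stationary distribution pi by power iteration: pi = pi @ P.
pi = np.full(4, 0.25)
for _ in range(1000):
    pi = pi @ P

# Detailed balance requires pi_i * P_ij == pi_j * P_ji for all i, j,
# i.e. the "probability flow" matrix must be symmetric.
flow = pi[:, None] * P          # flow[i, j] = pi_i * P_ij
balanced = np.allclose(flow, flow.T)
print(balanced)  # almost always False for a random matrix
```

So the stationary distribution of a random chain exists, but the chain is not reversible with respect to it; going the other way (from a target \(\pi\) to a matching \(P\)) is exactly what MCMC and Metropolis‑Hastings solve.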

MCMC Sampling

Markov Chain Monte Carlo (MCMC) addresses this difficulty by introducing an acceptance probability that forces the detailed‑balance condition to hold. The algorithm proceeds as follows:

Choose an arbitrary transition matrix, a target stationary distribution, a maximum number of steps, and the required sample size.

Sample an initial state from a simple proposal distribution.

Iterate the following steps: sample a candidate state \(x^{*}\) from the proposal distribution \(Q(x \mid x_t)\); draw a uniform random number \(u \in [0, 1)\); if \(u\) is less than the acceptance probability \(\alpha(x_t, x^{*}) = \pi(x^{*})\,Q(x^{*}, x_t)\), accept the move; otherwise keep the current state.

The acceptance probability can be very small (e.g., 0.1), leading to many rejected moves and low sampling efficiency.
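To see the inefficiency concretely, here is a minimal sketch (not from the original post) of the plain MCMC rule above, with target \(N(3, 2)\) and proposal \(N(x_t, 1)\). The acceptance probability \(\pi(x^{*})\,Q(x^{*}, x_t)\) is a product of two densities, so it stays far below 1:

```python
import random
from scipy.stats import norm

def target(x):
    return norm.pdf(x, loc=3, scale=2)          # target: N(3, 2)

def q(x_from, x_to):
    return norm.pdf(x_to, loc=x_from, scale=1)  # proposal: N(x_from, 1)

random.seed(0)
x, accepted, n_steps = 0.0, 0, 5000
for _ in range(n_steps):
    cand = random.gauss(x, 1)
    # Plain MCMC acceptance: alpha = pi(candidate) * Q(candidate -> current).
    # A product of two densities; here it can never exceed about 0.08.
    alpha = target(cand) * q(cand, x)
    if random.uniform(0, 1) < alpha:
        x = cand
        accepted += 1

print(accepted / n_steps)  # only a few percent of proposals are accepted
```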

Metropolis‑Hastings (M‑H) Sampling

Metropolis‑Hastings improves MCMC by rescaling the acceptance probability to \(\alpha(i,j)=\min\bigl(1,\frac{\pi_j Q_{ji}}{\pi_i Q_{ij}}\bigr)\), which still preserves detailed balance while accepting far more proposals. When the proposal distribution is symmetric (\(Q_{ij}=Q_{ji}\)), the acceptance ratio simplifies to the ratio of target densities.
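The helper below (an illustrative sketch; the function names are hypothetical, not from the original post) evaluates the general acceptance probability and confirms that a symmetric proposal reduces it to a ratio of target densities:

```python
from scipy.stats import norm

def mh_acceptance(x, cand, target_pdf, proposal_pdf):
    # General M-H rule: alpha = min(1, pi(cand) Q(cand->x) / (pi(x) Q(x->cand)))
    num = target_pdf(cand) * proposal_pdf(cand, x)
    den = target_pdf(x) * proposal_pdf(x, cand)
    return min(1.0, num / den)

target = lambda t: norm.pdf(t, loc=3, scale=2)      # target: N(3, 2)
sym_q = lambda a, b: norm.pdf(b, loc=a, scale=1)    # symmetric proposal N(a, 1)

a = mh_acceptance(3.0, 0.0, target, sym_q)          # propose 0 from the mode 3
b = min(1.0, target(0.0) / target(3.0))             # simplified symmetric form
print(abs(a - b) < 1e-12)  # True: both forms agree when Q is symmetric
```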

The refined algorithm is:

Input an arbitrary transition matrix, target stationary distribution, step limit, and desired sample count.

Sample an initial state from a simple distribution.

For each iteration: Generate a candidate state from the proposal distribution. Compute \(\alpha = \min\bigl(1, \frac{\pi(\text{candidate})}{\pi(\text{current})}\bigr)\). Draw a uniform random number \(u\). If \(u < \alpha\), accept the candidate; otherwise retain the current state.

M‑H Sampling Example

The target stationary distribution is a normal distribution with mean 3 and standard deviation 2. The proposal distribution is a normal distribution with mean equal to the current state and variance 1. The following Python code implements the Metropolis‑Hastings sampler and visualizes the results.

<code>import random
from scipy.stats import norm
import matplotlib.pyplot as plt

def norm_dist_prob(theta):
    """Density of the target distribution N(3, 2)."""
    return norm.pdf(theta, loc=3, scale=2)

T = 5000          # chain length
sigma = 1         # std of the symmetric proposal N(current, 1)
pi = [0.0] * T    # the chain, started at state 0

for t in range(1, T):
    # Propose a candidate centered at the current state.
    pi_star = norm.rvs(loc=pi[t - 1], scale=sigma)
    # Symmetric proposal, so alpha is just the ratio of target densities.
    alpha = min(1, norm_dist_prob(pi_star) / norm_dist_prob(pi[t - 1]))
    u = random.uniform(0, 1)
    pi[t] = pi_star if u < alpha else pi[t - 1]

# Overlay the samples on the true density, then histogram them.
plt.scatter(pi, norm.pdf(pi, loc=3, scale=2), label='target density')
plt.hist(pi, bins=50, density=True, facecolor='red', alpha=0.7, label='samples')
plt.legend()
plt.show()</code>
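As a sanity check (not part of the original code), re-running the same sampler with only the standard library lets us compare the empirical mean and variance against the target's true values of \(3\) and \(2^2 = 4\):

```python
import math
import random

def target_pdf(x):
    # Density of the target N(3, 2).
    return math.exp(-(x - 3) ** 2 / (2 * 2 ** 2)) / (2 * math.sqrt(2 * math.pi))

random.seed(42)
T = 5000
samples = [0.0]
for _ in range(T - 1):
    cur = samples[-1]
    cand = random.gauss(cur, 1)                   # symmetric proposal N(cur, 1)
    alpha = min(1.0, target_pdf(cand) / target_pdf(cur))
    samples.append(cand if random.uniform(0, 1) < alpha else cur)

burned = samples[500:]                            # discard burn-in
mean = sum(burned) / len(burned)
var = sum((s - mean) ** 2 for s in burned) / len(burned)
print(round(mean, 1), round(var, 1))  # should land near 3 and 4
```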

Summary of M‑H Sampling

Metropolis‑Hastings makes it practical, in principle, to obtain samples from any probability distribution for use in Monte‑Carlo methods, and it is widely used in practice. In the era of big data, however, it faces two challenges: (1) proposals with an acceptance ratio below 1 are still sometimes rejected, which wastes computation; and (2) for high‑dimensional data the joint distribution is often hard to compute, even when the conditional distributions of individual features are easy to obtain. Gibbs sampling addresses both issues, and the next article will discuss it.

Source: Liu Jianping (Pinard), https://www.cnblogs.com/pinard/p/6638955.html

Written by

Model Perspective

Insights, knowledge, and enjoyment from a mathematical modeling researcher and educator. Hosted by Haihua Wang, a modeling instructor and author of "Clever Use of Chat for Mathematical Modeling", "Modeling: The Mathematics of Thinking", "Mathematical Modeling Practice: A Hands‑On Guide to Competitions", and co‑author of "Mathematical Modeling: Teaching Design and Cases".
