When to Stop Searching? Unveiling the 37% Rule with Simulations
This article explores the classic optimal‑stopping (secretary) problem through a philosophical story, mathematical analysis, Python simulations, and a proof that the best strategy is to reject roughly the first 37 % of candidates, yielding about a 1/e ≈ 37 % chance of selecting the optimal option.
Story
Plato asks Socrates how to find the most suitable partner; Socrates gives a wheat‑field metaphor, illustrating that indecision leads to missing the best choice.
Strategy
We model the problem as an optimal‑stopping problem: a sequence of N candidates 1, 2, …, N is presented one at a time; each candidate is seen exactly once, and at each step we either accept (and stop) or reject and move on, with no going back to a rejected candidate.
For illustration we consider three candidates A (best), B, C and enumerate all six possible orders.
We evaluate three strategies defined by a cutoff k: reject the first k‑1 candidates, then pick the first later candidate that is better than all seen so far.
Enumerating all six permutations shows that k=2 succeeds in 3 of 6 orders (probability 1/2), while k=1 and k=3 each succeed in only 2 of 6 (probability 1/3).
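This enumeration is small enough to verify by brute force. A minimal sketch (the function name `success_rate` and the convention that r counts rejected candidates, i.e. r = k−1, are mine, not the article's):

```python
from itertools import permutations

def success_rate(n, r):
    """Fraction of orderings in which 'reject the first r candidates,
    then take the first one better than all seen so far' picks the best
    candidate (encoded as rank n-1)."""
    wins = 0
    perms = list(permutations(range(n)))
    for order in perms:
        best_seen = max(order[:r]) if r > 0 else -1
        for x in order[r:]:
            if x > best_seen:        # first record after the rejection phase
                wins += (x == n - 1)
                break
    return wins / len(perms)
```

For n = 3 this reproduces the table above: r = 1 (i.e. k = 2) succeeds with probability 1/2, while r = 0 and r = 2 give 1/3 each.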
Simulation
We wrote a Python program (shown below) that shuffles a permutation of n distinct values, runs 10 000 trials for each candidate cutoff k (here k counts the candidates rejected outright), and records the k that maximizes the probability of selecting the maximum element.
<code>import numpy as np
# from numba import jit  # optional: @jit(nopython=True) speeds up the loops

def cal_best_k(n_list, M=10000):
    """n_list: list of total element counts
    M: number of random trials per n
    """
    k_list = []       # best k (number of rejected elements) for each n
    k_prob_list = []  # corresponding success probability
    for n in n_list:
        k_prob_best = 0
        k_best = None
        queue = np.arange(n)
        for k in range(1, n):
            k_find = 0
            for _ in range(M):
                np.random.shuffle(queue)
                if max(queue[:k]) == n - 1:
                    continue  # the best element was rejected: failure
                for k1 in range(k, n):
                    if queue[k1] > max(queue[:k1]):  # first record after the cutoff
                        if queue[k1] == n - 1:
                            k_find += 1
                        break
            k_prob = k_find / M
            if k_prob >= k_prob_best:
                k_prob_best = k_prob
                k_best = k
        k_list.append(k_best)
        k_prob_list.append(k_prob_best)
    return n_list, k_list, k_prob_list

n_list = [3, 5, 10, 20, 30, 40, 50, 100]
</code>The results show the optimal cutoff k grows roughly as 0.37 n, while the maximal success probability is higher for small n and approaches 1/e ≈ 0.37 as n grows.
Representative results (k counts rejected candidates, as in the code; values fluctuate slightly between runs):
n=3 → k=1, probability ≈ 0.5
n=5 → k=2, probability ≈ 0.43
n=10 → k=3, probability ≈ 0.40
n=20 → k=7, probability ≈ 0.38
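The simulated optima can be cross‑checked against the exact success probability of the cutoff rule, P(r) = (r/n) Σ_{i=r+1}^{n} 1/(i−1). A sketch under that standard formula (the helper names `exact_success_prob` and `best_r` are mine):

```python
def exact_success_prob(n, r):
    """Probability that rejecting the first r of n candidates and then
    taking the first record selects the overall best:
    P(r) = (r/n) * sum_{i=r+1}^{n} 1/(i-1), with P(0) = 1/n."""
    if r == 0:
        return 1 / n
    return (r / n) * sum(1 / (i - 1) for i in range(r + 1, n + 1))

def best_r(n):
    """Optimal number of rejected candidates and its success probability."""
    r = max(range(n), key=lambda r: exact_success_prob(n, r))
    return r, exact_success_prob(n, r)
```

For example, `best_r(20)` gives r = 7 with probability ≈ 0.384, matching the simulation within Monte‑Carlo noise, and r/n tends to 1/e as n grows.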
Plotting k/n versus n confirms the optimal cutoff stays near the first 37 % of the sequence, and the corresponding success probability hovers near 37 % as well.
Mathematical proof
Formally, reject the first r candidates, then select the next candidate that is better than all previous ones. The probability of eventually picking the best candidate is maximized when r ≈ n/e ≈ 0.37 n, giving a maximal probability of 1/e ≈ 0.368.
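A sketch of the standard argument, in the notation above (reject the first r of n candidates, then take the first "record"):

```latex
P(r) \;=\; \sum_{i=r+1}^{n} \frac{1}{n}\cdot\frac{r}{i-1}
      \;=\; \frac{r}{n}\sum_{i=r+1}^{n}\frac{1}{i-1}
      \;\xrightarrow[\;n\to\infty,\;x=r/n\;]{}\;
      x\int_{x}^{1}\frac{dt}{t} \;=\; -x\ln x .
```

Here 1/n is the chance the best candidate sits at position i, and r/(i−1) is the chance that the best of the first i−1 candidates lies among the r rejected ones (otherwise an earlier record would have been selected first). Setting the derivative −ln x − 1 to zero gives x = 1/e, with maximal probability P = 1/e ≈ 0.368.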
This matches the simulation results.
Applications
The 37 % rule can guide decisions such as choosing a partner, a house, or any situation where a limited set of options must be evaluated sequentially.
For example, if you meet ten potential partners between ages 20 and 30, start considering seriously after the third or fourth meeting (around 37 % of the total), and then choose the first later person who seems better than all previous ones.
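As a rule of thumb, the cutoff fits in one line; a minimal sketch (`cutoff` is a hypothetical name, and ⌊n/e⌋ is the asymptotic approximation — for small n the exact optimum can differ by one):

```python
from math import e, floor

def cutoff(n):
    """Number of candidates to reject before choosing seriously (37% rule)."""
    return floor(n / e)
```

For ten candidates, `cutoff(10)` returns 3: reject the first three, then take the first person better than everyone seen so far.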
Model Perspective
Insights, knowledge, and enjoyment from a mathematical modeling researcher and educator. Hosted by Haihua Wang, a modeling instructor and author of "Clever Use of Chat for Mathematical Modeling", "Modeling: The Mathematics of Thinking", "Mathematical Modeling Practice: A Hands‑On Guide to Competitions", and co‑author of "Mathematical Modeling: Teaching Design and Cases".