Artificial Intelligence 10 min read

Unlocking TLBO: How Teaching-Learning Based Optimization Finds Global Optima

This article explains the teaching-learning based optimization (TLBO) algorithm and its teaching and learning phases, provides a complete Python implementation, and demonstrates its performance by minimizing a Rastrigin-style test function, with the convergence curve visualized.

Principle of Teaching-Learning Based Optimization

TLBO, introduced by Rao et al. in 2011, models a classroom: the teacher (the best individual in the population) guides the students (the remaining individuals), and students also learn from one another. Both phases run in every iteration, progressively improving the solutions until the population converges.

Teaching Phase

Each student updates its position by moving toward the teacher while accounting for the population mean, scaled by a random teaching factor TF that takes the value 1 or 2. The update rule is X_new = X_current + r * (X_teacher - TF * X_mean), where r is a uniform random vector in [0, 1]; positions that leave the search space are clamped back to the bounds, and the new position is kept only if it improves the fitness.
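As a standalone sketch of this update, using a hypothetical toy population of 4 students, 2 decision variables, and a sphere fitness chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 4 students, 2 decision variables, bounds [-10, 10],
# and a sphere fitness used purely for illustration.
X = rng.uniform(-10, 10, size=(4, 2))
fitness = np.sum(X**2, axis=1)

teacher = X[np.argmin(fitness)]        # best individual acts as the teacher
mean = X.mean(axis=0)                  # column-wise population mean
TF = rng.integers(1, 3)                # teaching factor: 1 or 2
X_new = X + rng.random(X.shape) * (teacher - TF * mean)
X_new = np.clip(X_new, -10, 10)        # boundary check
```

Note that the mean is taken per dimension (column-wise), so the whole class is pulled toward the teacher's region of the search space.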

Learning Phase

Each student is randomly paired with another student: if the student is better than its partner, it moves away from the partner; otherwise it moves toward the partner. Boundary checks again keep positions within the search space, and the update is accepted only if it improves the fitness.
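A minimal sketch of one student's learning-phase update, reusing the same hypothetical toy population as in the teaching-phase sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy setup: 4 students, 2 decision variables,
# sphere fitness for illustration only.
X = rng.uniform(-10, 10, size=(4, 2))
fitness = np.sum(X**2, axis=1)

i = 0
j = int(rng.integers(0, len(X)))        # random partner, must differ from i
while j == i:
    j = int(rng.integers(0, len(X)))

r = rng.random(X.shape[1])
if fitness[i] < fitness[j]:
    X_new_i = X[i] + r * (X[i] - X[j])  # i is better: move away from j
else:
    X_new_i = X[i] + r * (X[j] - X[i])  # j is better: move toward j
X_new_i = np.clip(X_new_i, -10, 10)     # boundary check
```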

Complete Python Implementation

<code>import numpy as np
import random
import copy
import matplotlib.pyplot as plt

def initialization(pop, ub, lb, dim):
    '''Initialize the population uniformly within the bounds [lb, ub]'''
    X = np.zeros([pop, dim])
    for i in range(pop):
        for j in range(dim):
            X[i, j] = (ub[j] - lb[j]) * np.random.random() + lb[j]
    return X

def BorderCheck(X, ub, lb, pop, dim):
    '''Clamp each position back inside the search bounds'''
    for i in range(pop):
        for j in range(dim):
            if X[i, j] > ub[j]:
                X[i, j] = ub[j]
            elif X[i, j] < lb[j]:
                X[i, j] = lb[j]
    return X

def CaculateFitness(X, fun):
    '''Evaluate the fitness of every individual in the population'''
    pop = X.shape[0]
    fitness = np.zeros([pop, 1])
    for i in range(pop):
        fitness[i] = fun(X[i, :])
    return fitness

def SortFitness(Fit):
    '''Sort fitness values in ascending order'''
    fitness = np.sort(Fit, axis=0)
    index = np.argsort(Fit, axis=0)
    return fitness, index

def SortPosition(X, index):
    '''Reorder positions according to the sorted fitness index'''
    Xnew = np.zeros(X.shape)
    for i in range(X.shape[0]):
        Xnew[i, :] = X[index[i], :]
    return Xnew

def TLBO(pop, dim, lb, ub, MaxIter, fun):
    '''Teaching-learning based optimization main loop'''
    X = initialization(pop, ub, lb, dim)
    fitness = CaculateFitness(X, fun)
    GbestScore = np.min(fitness)
    indexBest = np.argmin(fitness)
    GbestPositon = np.zeros([1, dim])
    GbestPositon[0, :] = copy.copy(X[indexBest, :])
    Curve = np.zeros([MaxIter, 1])
    for t in range(MaxIter):
        print('Iteration ' + str(t))
        for i in range(pop):
            # Teaching phase
            Xmean = np.mean(X, axis=0)  # column-wise mean, one value per dimension
            indexBest = np.argmin(fitness)
            Xteacher = copy.copy(X[indexBest, :])
            TF = random.randint(1, 2)  # teaching factor, randomly 1 or 2
            Xnew = X[i, :] + np.random.random(dim) * (Xteacher - TF * Xmean)
            for j in range(dim):
                if Xnew[j] > ub[j]:
                    Xnew[j] = ub[j]
                if Xnew[j] < lb[j]:
                    Xnew[j] = lb[j]
            fitnessNew = fun(Xnew)
            if fitnessNew < fitness[i]:
                X[i, :] = copy.copy(Xnew)
                fitness[i] = copy.copy(fitnessNew)
            # Learning phase
            p = random.randint(0, pop - 1)  # pick a random partner student
            while i == p:
                p = random.randint(0, pop - 1)
            if fitness[i] < fitness[p]:
                Xnew = X[i, :] + np.random.random(dim) * (X[i, :] - X[p, :])
            else:
                Xnew = X[i, :] - np.random.random(dim) * (X[i, :] - X[p, :])
            for j in range(dim):
                if Xnew[j] > ub[j]:
                    Xnew[j] = ub[j]
                if Xnew[j] < lb[j]:
                    Xnew[j] = lb[j]
            fitnessNew = fun(Xnew)
            if fitnessNew < fitness[i]:
                X[i, :] = copy.copy(Xnew)
                fitness[i] = fitnessNew
        fitness = CaculateFitness(X, fun)
        indexBest = np.argmin(fitness)
        if fitness[indexBest] <= GbestScore:
            GbestScore = copy.copy(fitness[indexBest])
            GbestPositon[0, :] = copy.copy(X[indexBest, :])
        Curve[t] = GbestScore
    return GbestScore, GbestPositon, Curve

def fun(X, a=1):
    '''2-D Rastrigin-style test function; global minimum 0 at the origin'''
    A = 2 * a + X[0]**2 - a * np.cos(2 * np.pi * X[0]) + X[1]**2 - a * np.cos(2 * np.pi * X[1])
    return A
# Parameter settings
pop = 50
MaxIter = 1000
dim = 2
lb = -10 * np.ones(dim)
ub = 10 * np.ones(dim)
GbestScore, GbestPositon, Curve = TLBO(pop, dim, lb, ub, MaxIter, fun)
print('Best fitness:', GbestScore)
print('Best solution [x1, x2]:', GbestPositon)
# Plot the convergence curve
plt.figure(1)
plt.plot(Curve, 'r-', linewidth=2)
plt.xlabel('Iteration', fontsize='medium')
plt.ylabel('Fitness', fontsize='medium')
plt.grid()
plt.title('TLBO', fontsize='large')
plt.show()</code>

Results

Best fitness: 6.51156684e-10; best solution: [[-3.48031085e-06 4.39144808e-06]].
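The reported result can be sanity-checked directly: the test function's global minimum is exactly 0 at the origin, and evaluating the function at the reported solution reproduces a value on the order of the reported best fitness. A minimal check, re-defining the same test function from the implementation above:

```python
import numpy as np

def fun(X, a=1):
    # Same 2-D Rastrigin-style test function as in the implementation above
    return (2 * a + X[0]**2 - a * np.cos(2 * np.pi * X[0])
            + X[1]**2 - a * np.cos(2 * np.pi * X[1]))

print(fun(np.zeros(2)))   # 0.0  (exact global minimum at the origin)

best = np.array([-3.48031085e-06, 4.39144808e-06])
print(fun(best))          # on the order of 1e-10, matching the reported fitness
```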

(Figure: TLBO convergence curve, best fitness vs. iteration)

Reference

Fan Xu, Python智能优化算法:从原理到代码实现与应用 (Python Intelligent Optimization Algorithms: From Principles to Code Implementation and Applications).

Written by

Model Perspective

Insights, knowledge, and enjoyment from a mathematical modeling researcher and educator. Hosted by Haihua Wang, a modeling instructor and author of "Clever Use of Chat for Mathematical Modeling", "Modeling: The Mathematics of Thinking", "Mathematical Modeling Practice: A Hands‑On Guide to Competitions", and co‑author of "Mathematical Modeling: Teaching Design and Cases".
