Search and Optimization Algorithms in Game AI
Game AI relies on a variety of search techniques, ranging from uninformed breadth-first and depth-first methods to heuristic-driven A*, minimax with alpha-beta pruning, and Monte Carlo Tree Search. It also draws on optimization approaches such as hill climbing, simulated annealing, genetic algorithms, evolution strategies, multi-objective evolutionary algorithms, and neuroevolutionary methods like NEAT, all in the service of generating intelligent, balanced, and adaptable game behavior.
Artificial intelligence can essentially be viewed as a search problem: finding a feasible path through a high-dimensional space. In game development, tree-search algorithms are widely used, where nodes represent game states and edges represent state transitions triggered by actions.
Uninformed search (brute-force search) includes breadth-first and depth-first search, which traverse the game state tree without using any knowledge of the AI's goal. While often inefficient, variants such as Iterative Width (IW) search still find applications in certain game domains.
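As a minimal sketch of the uninformed case, breadth-first search explores states level by level until it reaches a goal; here `successors` is a hypothetical callback standing in for a game's actual transition function:

```python
from collections import deque

def bfs(start, goal, successors):
    """Breadth-first search over a game state graph.

    `successors(state)` returns the states reachable in one action.
    Returns the list of states from start to goal, or None if the
    goal is unreachable.
    """
    frontier = deque([start])
    parents = {start: None}          # also serves as the visited set
    while frontier:
        state = frontier.popleft()
        if state == goal:
            # Reconstruct the path by walking parent links backwards.
            path = []
            while state is not None:
                path.append(state)
                state = parents[state]
            return path[::-1]
        for nxt in successors(state):
            if nxt not in parents:   # skip already-visited states
                parents[nxt] = state
                frontier.append(nxt)
    return None
```

Because the frontier is a FIFO queue, the first path found is also a shortest one (in number of actions), but note that the search expands states blindly, with no sense of which direction the goal lies in.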
Best-first search guides the search toward more promising nodes using heuristics. The widely used A* algorithm expands the node minimizing the cost accumulated so far plus a heuristic estimate of the remaining distance to the goal. An A*-based agent won the 2009 Mario AI Competition, and A* is commonly used for pathfinding in games like StarCraft and Warcraft III.
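A sketch of A* on an abstract state graph; `neighbors` and `heuristic` are hypothetical callbacks, and the heuristic must never overestimate the true remaining cost for the result to be optimal:

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """A* search: pops the frontier node with the lowest f = g + h.

    `neighbors(state)` yields (next_state, step_cost) pairs;
    `heuristic(state)` estimates the remaining cost to the goal.
    """
    # Each entry: (f, g, state, path-so-far).
    frontier = [(heuristic(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt, cost in neighbors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):  # found a cheaper route
                best_g[nxt] = g2
                heapq.heappush(
                    frontier, (g2 + heuristic(nxt), g2, nxt, path + [nxt]))
    return None
```

For grid pathfinding, a Manhattan-distance heuristic is the usual choice, since it never overestimates the cost of 4-directional movement.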
For adversarial games with zero-sum objectives, minimax search is extensively used in perfect-information two-player games. The algorithm assumes both players play optimally: one maximizes the score while the other minimizes it. Alpha-beta pruning improves efficiency by eliminating branches that cannot yield a better outcome than the options already examined.
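A sketch of minimax with alpha-beta pruning over a generic game tree; `moves`, `apply_move`, and `evaluate` are hypothetical callbacks standing in for a real game's rules:

```python
def alphabeta(state, depth, alpha, beta, maximizing,
              moves, apply_move, evaluate):
    """Minimax with alpha-beta pruning.

    `alpha` is the best value the maximizer can already guarantee,
    `beta` the best the minimizer can; when alpha >= beta the current
    branch cannot affect the decision and is cut off.
    """
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for m in legal:
            value = max(value, alphabeta(apply_move(state, m), depth - 1,
                                         alpha, beta, False,
                                         moves, apply_move, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune: the minimizer will never allow this branch
        return value
    else:
        value = float("inf")
        for m in legal:
            value = min(value, alphabeta(apply_move(state, m), depth - 1,
                                         alpha, beta, True,
                                         moves, apply_move, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break  # prune: the maximizer already has a better option
        return value
```

With good move ordering, alpha-beta can roughly double the searchable depth compared with plain minimax at the same cost.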
Monte Carlo Tree Search (MCTS), introduced in 2006, achieved great success by using random rollouts to estimate node values without requiring an explicit state evaluation function. MCTS repeats four steps: Selection (using the UCB formula to balance exploration and exploitation), Expansion (adding new nodes), Simulation (rolling out to a terminal state), and Back-propagation (updating values up the tree).
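The Selection step is typically driven by the UCB1 formula. A sketch, assuming each child is a dict with hypothetical `visits` and accumulated `value` fields:

```python
import math

def ucb_select(children, total_visits, c=1.41):
    """UCB1 selection step of MCTS: pick the child maximizing the
    mean rollout value plus an exploration bonus that grows for
    rarely-visited children."""
    def ucb(child):
        if child["visits"] == 0:
            return float("inf")  # always try unvisited children first
        exploit = child["value"] / child["visits"]
        explore = c * math.sqrt(math.log(total_visits) / child["visits"])
        return exploit + explore
    return max(children, key=ucb)
```

The constant `c` trades off exploration against exploitation; larger values push the search toward under-sampled branches.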
Optimization Algorithms differ from tree search by constructing objective functions to maximize utility across the solution space:
Local search methods include the hill climber (which can get stuck in local maxima) and simulated annealing, which accepts a worse solution with probability exp(-ΔU/T), an idea inspired by metallurgical annealing. The algorithm starts at a high temperature that allows broad exploration, then gradually cools so the search settles down, making it more likely (though not guaranteed) to reach the global optimum.
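A sketch of simulated annealing with a geometric cooling schedule; `utility` and `neighbor` are hypothetical callbacks for the problem being optimized:

```python
import math
import random

def simulated_annealing(x0, utility, neighbor,
                        t0=1.0, cooling=0.995, steps=2000, seed=0):
    """Maximize `utility`, accepting a worse neighbor with
    probability exp(-dU / T); the temperature T decays geometrically."""
    rng = random.Random(seed)
    x, t = x0, t0
    best = x
    for _ in range(steps):
        cand = neighbor(x, rng)
        dU = utility(x) - utility(cand)   # > 0 when cand is worse
        if dU <= 0 or rng.random() < math.exp(-dU / t):
            x = cand                      # always accept if not worse
        if utility(x) > utility(best):
            best = x                      # remember the best state seen
        t *= cooling
    return best
```

At high T the acceptance probability is near 1 even for bad moves (exploration); as T shrinks, exp(-ΔU/T) collapses toward 0 and the method degenerates into pure hill climbing.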
Genetic Algorithms (GA) maintain a population of solutions, applying crossover and mutation to generate new generations. Better solutions (measured by fitness function) are preserved. The FI-2pop algorithm handles constraints by maintaining separate feasible and infeasible solution queues.
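A minimal generational GA sketch over binary genomes (tournament selection, one-point crossover, bit-flip mutation); the parameter values are illustrative defaults, not tuned settings:

```python
import random

def genetic_algorithm(fitness, length, pop_size=30, generations=60,
                      mut_rate=0.05, seed=0):
    """Minimal generational GA on fixed-length binary genomes."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]

    def tournament():
        # Pick two random individuals, keep the fitter one.
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, length)
            child = p1[:cut] + p2[cut:]              # one-point crossover
            child = [b ^ 1 if rng.random() < mut_rate else b
                     for b in child]                 # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

On the classic OneMax problem (fitness = number of 1 bits), this loop reliably drives the population toward the all-ones genome.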
Evolution Strategies (ES) represent individuals using real-valued parameters (mean and variance for each dimension) instead of binary encoding, allowing evolution of Gaussian distributions.
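A sketch of an elitist (1+λ) evolution strategy on real-valued vectors; a full ES would self-adapt the per-dimension variance, whereas this simplified version shrinks a single step size on a fixed schedule:

```python
import random

def evolution_strategy(utility, dim, sigma=1.0, lam=20,
                       generations=100, seed=0):
    """Minimal (1+lambda) evolution strategy.

    Each individual is a real-valued vector; offspring are sampled
    from a Gaussian centered on the current parent, and the parent
    survives unless an offspring beats it (elitism).
    """
    rng = random.Random(seed)
    parent = [0.0] * dim
    for _ in range(generations):
        candidates = [[w + rng.gauss(0, sigma) for w in parent]
                      for _ in range(lam)] + [parent]
        parent = max(candidates, key=utility)
        sigma *= 0.97  # shrink step size over time (a full ES self-adapts it)
    return parent
```

The real-valued encoding is what distinguishes ES from the binary-genome GA above: mutation is simply Gaussian noise, so no crossover or bit representation is needed.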
Multi-objective Evolutionary Algorithms find Pareto-optimal solutions that cannot improve one objective without degrading others - important for game balance design.
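The core primitive of any multi-objective method is the Pareto-dominance test, sketched here for maximization across all objectives:

```python
def dominates(a, b):
    """a Pareto-dominates b if a is at least as good on every
    objective and strictly better on at least one (maximizing)."""
    return (all(x >= y for x, y in zip(a, b)) and
            any(x > y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep only the non-dominated points: the Pareto front."""
    return [p for p in points
            if not any(dominates(q, p) for q in points)]
```

For game balance, each point might score a design on objectives like challenge and fairness; the front contains exactly the designs where improving one score must cost the other.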
Neuroevolution combines evolutionary algorithms with neural networks, using evolution to optimize network weights and topology. The NEAT algorithm has been used to create AI that plays Mario. Recent work by Uber AI Labs demonstrates that neuroevolution can scale to large neural networks.
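A simplified neuroevolution sketch that evolves only the weights of a tiny fixed-topology network (NEAT, by contrast, also evolves the topology); the network shape and fitness target here are illustrative assumptions:

```python
import math
import random

def forward(w, x):
    """Tiny fixed-topology net: 2 inputs -> 2 tanh hidden units -> 1 output.
    `w` is a flat list of 9 weights (6 hidden-layer, 3 output-layer)."""
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return w[6] * h1 + w[7] * h2 + w[8]

def evolve_weights(fitness, n_weights=9, lam=50, generations=60,
                   sigma=0.3, seed=0):
    """Elitist (1+lambda) evolution over the flat weight vector:
    perturb with Gaussian noise, keep the fittest candidate."""
    rng = random.Random(seed)
    best = [rng.gauss(0, 1) for _ in range(n_weights)]
    for _ in range(generations):
        candidates = [[w + rng.gauss(0, sigma) for w in best]
                      for _ in range(lam)] + [best]
        best = max(candidates, key=fitness)
    return best
```

In a game setting, `fitness` would run the network as a controller (e.g. a Mario agent) and return the score achieved, so no gradient information is ever needed.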
Tencent Cloud Developer