Artificial Intelligence · 13 min read

A Survey of Neural Architecture Search: Search Spaces, Optimization Strategies, and Recent Results

This article surveys neural architecture search, classifying existing methods, describing common search spaces—including global and cell‑based designs—detailing optimization strategies such as reinforcement learning, evolutionary algorithms, surrogate models, one‑shot and differentiable approaches, and highlighting recent results and trends in the field.

Top Architect

Researchers' increasing interest in automating machine learning and deep learning has driven the development of neural architecture search (NAS), which aims to reduce the computational and expertise barriers of designing neural networks. This article provides a unified classification and comparison of existing NAS methods, analyzing their components and recent advances.

NAS Search Spaces – The search space defines the set of candidate architectures. Two main types are discussed: (1) global search spaces, where a template constrains the allowable operations and connections of the whole network, illustrated in Figure 1; and (2) cell-based (or unit-based) search spaces, where a small directed acyclic graph (a cell) is searched once and then stacked repeatedly to form larger networks, as shown in Figures 3 and 4. Examples include the NASNet space and mobile-oriented spaces such as those proposed by Tan et al. (2018) and Dong et al. (2018).
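To make the cell-based idea concrete, here is a minimal sketch of sampling one cell as a small DAG in which each intermediate node applies an operation to two earlier nodes. All names here, including `sample_cell` and the operation list, are illustrative stand-ins rather than the encoding of any specific NAS paper:

```python
import random

# Illustrative operation set, loosely in the spirit of NASNet-style cells.
OPS = ["sep_conv_3x3", "sep_conv_5x5", "max_pool_3x3", "avg_pool_3x3", "identity"]

def sample_cell(num_nodes=4, seed=None):
    """Sample a random cell: nodes 0 and 1 are the cell's inputs; each
    intermediate node picks two distinct earlier nodes (so the graph is
    acyclic) and an operation to apply along each incoming edge."""
    rng = random.Random(seed)
    cell = []
    for node in range(2, 2 + num_nodes):
        inputs = rng.sample(range(node), 2)          # only earlier nodes -> DAG
        edges = [(src, rng.choice(OPS)) for src in inputs]
        cell.append(edges)
    return cell
```

A full network is then built by stacking this sampled cell many times, so the search only has to explore the cell itself rather than the whole network topology.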

Optimization Methods – The article reviews several strategies for optimizing the NAS objective function f:

- Reinforcement learning (RL) models architecture selection as a sequential decision process, using temporal-difference methods (SARSA, Q-learning) or policy-gradient controllers (e.g., Zoph & Le, 2017).
- Evolutionary algorithms (EA) treat NAS as population-based black-box optimization, with components such as initialization, parent selection, recombination, mutation, and survivor selection (see Figure 11).
- Surrogate-model-based optimization learns a predictive model \(\hat f\) of architecture performance to guide the search; Luo et al. (2018), for example, jointly train an auto-encoder and a surrogate.
- One-shot NAS trains a single over-parameterized network whose weights are shared across the entire search space, greatly reducing search cost (Pham et al., 2018).
- Differentiable NAS (e.g., Liu et al., 2018) relaxes the discrete search space so that network weights \(\theta\) and architecture parameters \(\beta\) can be optimized jointly by gradient descent.
- Hypernetworks (Brock et al., 2018) generate the weights of arbitrary architectures conditioned on their descriptions.
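As a concrete illustration of the continuous relaxation behind differentiable NAS, here is a minimal sketch, not the actual implementation of Liu et al. (2018): each edge outputs a softmax-weighted mixture of all candidate operations, parameterized by architecture weights \(\beta\). The candidate operations are toy scalar functions standing in for real layers, and `mixed_op` is a hypothetical helper name:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Toy candidate operations for one edge (scalars standing in for feature maps).
ops = [lambda x: x,           # identity / skip connection
       lambda x: 0.0 * x,     # "zero" op, i.e. drop the edge
       lambda x: np.tanh(x)]  # a nonlinearity standing in for a conv block

def mixed_op(x, beta):
    """Continuous relaxation: the edge output is the softmax(beta)-weighted
    sum over all candidate operations, so beta can be updated by gradients."""
    weights = softmax(beta)
    return sum(w * op(x) for w, op in zip(weights, ops))
```

After the joint optimization of \(\theta\) and \(\beta\) converges, the discrete architecture is recovered by keeping, on each edge, the single operation with the largest entry of \(\beta\).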

Results and Summary – Table 2 summarizes the performance of various NAS algorithms on the CIFAR‑10 benchmark, including search time and accuracy, and compares them with random search and manually designed architectures. The discussion highlights that cell‑based spaces, especially NASNet, often yield superior results, and that recent trends focus on reducing computational cost while maintaining or improving accuracy.

Paper link: https://arxiv.org/abs/1905.01392

Tags: Machine Learning, Reinforcement Learning, Neural Architecture Search, NAS, Evolutionary Algorithms, One-shot
Written by

Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.
