TY - GEN
T1 - Local Search is a Remarkably Strong Baseline for Neural Architecture Search
AU - Den Ottelander, Tom
AU - Dushatskiy, Arkadiy
AU - Virgolin, Marco
AU - Bosman, Peter A.N.
PY - 2021
Y1 - 2021
N2 - Neural Architecture Search (NAS), i.e., the automation of neural network design, has gained much popularity in recent years with increasingly complex search algorithms being proposed. Yet, solid comparisons with simple baselines are often missing. At the same time, recent retrospective studies have found many new algorithms to be no better than random search (RS). In this work we consider the use of a simple Local Search (LS) algorithm for NAS. We particularly consider a multi-objective NAS formulation, with network accuracy and network complexity as two objectives, as understanding the trade-off between these two objectives is arguably among the most interesting aspects of NAS. The proposed LS algorithm is compared with RS and two evolutionary algorithms (EAs), as these are often heralded as being ideal for multi-objective optimization. To promote reproducibility, we create and release two benchmark datasets, named MacroNAS-C10 and -C100, containing 200K saved network evaluations for two established image classification tasks, CIFAR-10 and CIFAR-100. Our benchmarks are designed to be complementary to existing benchmarks, especially in that they are better suited for multi-objective search. We additionally consider a version of the problem with a much larger architecture space. While we show that the considered algorithms explore the search space in fundamentally different ways, we also find that LS substantially outperforms RS and even performs nearly as well as state-of-the-art EAs. We believe that this provides strong evidence that LS is truly a competitive baseline for NAS against which new NAS algorithms should be benchmarked.
KW - Evolutionary algorithm
KW - Local Search
KW - Multi-objective NAS
KW - NAS baseline
KW - Neural Architecture Search
KW - Random search
UR - http://www.scopus.com/inward/record.url?scp=85107295010&partnerID=8YFLogxK
DO - 10.1007/978-3-030-72062-9_37
M3 - Conference contribution
AN - SCOPUS:85107295010
SN - 978-3-030-72061-2
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 465
EP - 479
BT - Evolutionary Multi-Criterion Optimization
A2 - Ishibuchi, Hisao
A2 - Zhang, Qingfu
A2 - Cheng, Ran
A2 - Li, Ke
A2 - Li, Hui
A2 - Wang, Handing
A2 - Zhou, Aimin
PB - Springer
CY - Cham
T2 - 11th International Conference on Evolutionary Multi-Criterion Optimization, EMO 2021
Y2 - 28 March 2021 through 31 March 2021
ER -