Can Weight Sharing Outperform Random Architecture Search? An Investigation With TuNAS
Top 10% of 2020 papers
Abstract
Efficient Neural Architecture Search methods based on weight sharing have shown promise in democratizing Neural Architecture Search for computer vision models. There is, however, an ongoing debate about whether these efficient methods are significantly better than random search. Here we perform a thorough comparison between efficient and random search methods on a family of progressively larger and more challenging search spaces for image classification and detection on ImageNet and COCO. While the efficacy of both methods is problem-dependent, our experiments demonstrate that there are large, realistic tasks where efficient search methods can provide substantial gains over random search. In addition, we propose and evaluate techniques which improve the quality of searched architectures and reduce the need for manual hyper-parameter tuning.
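The random-search baseline the abstract refers to can be made concrete with a minimal sketch. The search space below (kernel sizes, expansion ratios, layer counts) is illustrative only and not taken from the paper; `toy_evaluate` is a stand-in for the expensive step of training and validating each sampled architecture.

```python
import random

# Hypothetical toy search space: per-network choices of kernel size,
# expansion ratio, and depth (names are illustrative, not from the paper).
SEARCH_SPACE = {
    "kernel_size": [3, 5, 7],
    "expansion": [3, 6],
    "num_layers": [12, 16, 20],
}

def sample_architecture(rng):
    """Draw one architecture uniformly at random from the space."""
    return {key: rng.choice(options) for key, options in SEARCH_SPACE.items()}

def random_search(evaluate, budget, seed=0):
    """Random-search baseline: sample `budget` architectures,
    evaluate each one, and keep the best-scoring candidate."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(budget):
        arch = sample_architecture(rng)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

def toy_evaluate(arch):
    # Cheap proxy metric standing in for trained-model accuracy.
    return arch["kernel_size"] * arch["expansion"] - 0.1 * arch["num_layers"]

best, score = random_search(toy_evaluate, budget=100)
```

In a real NAS comparison each call to `evaluate` would mean training a model (or querying a shared-weight supernet), which is why the search budget, not the sampling logic, dominates the cost of this baseline.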