Hyperparameter optimization: Foundations, algorithms, best practices, and open challenges
Abstract
Most machine learning algorithms are configured by a set of hyperparameters whose values must be carefully chosen and which often considerably impact performance. To avoid a time‐consuming and irreproducible manual process of trial‐and‐error to find well‐performing hyperparameter configurations, various automatic hyperparameter optimization (HPO) methods—for example, based on resampling error estimation for supervised machine learning—can be employed. After introducing HPO from a general perspective, this paper reviews important HPO methods, from simple techniques such as grid or random search to more advanced methods like evolution strategies, Bayesian optimization, Hyperband, and racing. This work gives practical recommendations regarding important choices to be made when conducting HPO, including the HPO algorithms themselves, performance evaluation, how to combine HPO with machine learning pipelines, runtime improvements, and parallelization. This article is categorized under: Algorithmic Development > Statistics; Technologies > Machine Learning; Technologies > Prediction.
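To make the resampling-based evaluation of hyperparameter configurations concrete, the following is a minimal sketch of random search, one of the simple baselines named in the abstract. It assumes scikit-learn; the dataset, search space, and evaluation budget are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of random search HPO for a supervised learner,
# using 5-fold cross-validation as the resampling-based error estimate.
# Search space and budget below are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(seed=0)
X, y = load_breast_cancer(return_X_y=True)

best_score, best_config = -np.inf, None
for _ in range(20):  # evaluation budget (assumed)
    # Sample one configuration from an assumed search space.
    config = {
        "n_estimators": int(rng.integers(50, 500)),
        "max_depth": int(rng.integers(2, 20)),
        "max_features": float(rng.uniform(0.1, 1.0)),
    }
    model = RandomForestClassifier(random_state=0, **config)
    # Resampling-based performance estimate of this configuration.
    score = cross_val_score(model, X, y, cv=5).mean()
    if score > best_score:
        best_score, best_config = score, config

print(f"Best CV accuracy: {best_score:.3f} with {best_config}")
```

More advanced methods surveyed in the paper (e.g., Bayesian optimization or Hyperband) replace the uniform sampling above with model-guided proposals or multi-fidelity budget allocation, but the inner resampling-based evaluation loop stays the same.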