A Survey of Ensemble Learning: Concepts, Algorithms, Applications, and Prospects
Top 1% of 2022 papers (citation metric)
Abstract
Ensemble learning techniques have achieved state-of-the-art performance in diverse machine learning applications by combining the predictions of two or more base models. This paper presents a concise overview of ensemble learning, covering the three main ensemble methods, bagging, boosting, and stacking, and tracing their development from early formulations to recent state-of-the-art algorithms. The study focuses on widely used ensemble algorithms, including random forest, adaptive boosting (AdaBoost), gradient boosting, extreme gradient boosting (XGBoost), light gradient boosting machine (LightGBM), and categorical boosting (CatBoost). An attempt is made to concisely cover their mathematical and algorithmic representations, which is lacking in the existing literature and would benefit machine learning researchers and practitioners.
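The core idea described in the abstract, combining the predictions of several base models into a single ensemble prediction, can be illustrated with a minimal sketch of hard majority voting, the combination rule used by bagging-style classifiers. The function name `majority_vote` and the toy predictions are illustrative assumptions, not from the paper.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model class predictions into one ensemble prediction per sample.

    predictions: list of equal-length prediction lists, one per base model.
    Each sample's ensemble label is the most common label among the models.
    """
    ensemble = []
    for sample_preds in zip(*predictions):
        # Counter.most_common(1) returns the (label, count) pair with the highest count
        ensemble.append(Counter(sample_preds).most_common(1)[0][0])
    return ensemble

# Three hypothetical base classifiers' predictions on five samples
model_a = [1, 0, 1, 1, 0]
model_b = [1, 1, 1, 0, 0]
model_c = [0, 0, 1, 1, 1]
print(majority_vote([model_a, model_b, model_c]))  # → [1, 0, 1, 1, 0]
```

Boosting and stacking differ from this sketch in how the combination is weighted: boosting weights each base model by its training accuracy, while stacking learns the combination with a second-level model.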
Related Papers
- Boosting algorithms for network intrusion detection: A comparative evaluation of Real AdaBoost, Gentle AdaBoost and Modest AdaBoost (2020), 204 citations
- Advance and Prospects of AdaBoost Algorithm (2014), 197 citations
- MadaBoost: A Modification of AdaBoost (2000)
- Supplemental Boosting and Cascaded ConvNet Based Transfer Learning Structure for Fast Traffic Sign Detection in Unknown Application Scenes (2018), 11 citations
- The Typical Algorithm of AdaBoost Series in Boosting Family (2003)