ENSEMBLE LEARNING
Abstract
This chapter explains the basic characteristics of ensemble learning methodologies and compares the bagging and boosting approaches. One of the primary goals of data mining is to predict an “unknown” value of a new sample from observed samples. The ensemble learning methodology consists of two sequential phases: a training phase and a testing phase. The random subspace method is an ensemble learning method based on the theory of stochastic discrimination. The chapter distinguishes between different implementations of combination schemes for different learners. Combination schemes include the global approach, the local approach, and multistage combination. Bagging was the first effective method of ensemble learning and is one of the simplest. Boosting is the most widely used ensemble method and one of the most powerful learning ideas introduced in the ensemble learning community. AdaBoost is the most popular boosting algorithm. AdaBoost combines “weak” learners into a highly accurate classifier that can solve difficult, highly nonlinear problems.
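The boosting idea summarized above can be illustrated with a minimal from-scratch sketch of AdaBoost with decision stumps as the weak learner (the function names and the toy 1-D "interval" dataset are illustrative choices for this sketch, not from any particular library). No single stump can separate an interval of positives from negatives on both sides, yet the weighted vote of several stumps can, which is the sense in which AdaBoost turns weak learners into an accurate classifier:

```python
import math

def stump_predict(stump, x):
    """Decision stump on a 1-D input: predict +1 on one side of a threshold."""
    thr, pol = stump
    return pol if x >= thr else -pol

def best_stump(xs, ys, w):
    """Exhaustively pick the stump with the lowest weighted training error."""
    candidates = sorted(set(xs)) + [max(xs) + 1]  # extra value admits a constant stump
    best, best_err = None, float("inf")
    for thr in candidates:
        for pol in (1, -1):
            err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                      if stump_predict((thr, pol), xi) != yi)
            if err < best_err:
                best, best_err = (thr, pol), err
    return best, best_err

def adaboost(xs, ys, n_rounds=50):
    """AdaBoost with stumps: labels ys must be +1/-1."""
    n = len(xs)
    w = [1.0 / n] * n  # start from the uniform distribution over samples
    ensemble = []
    for _ in range(n_rounds):
        stump, err = best_stump(xs, ys, w)
        err = min(max(err, 1e-10), 1 - 1e-10)    # guard against log(0)
        alpha = 0.5 * math.log((1 - err) / err)  # this weak learner's vote weight
        # Re-weight: misclassified samples gain weight, then renormalise.
        w = [wi * math.exp(-alpha * yi * stump_predict(stump, xi))
             for wi, xi, yi in zip(w, xs, ys)]
        z = sum(w)
        w = [wi / z for wi in w]
        ensemble.append((alpha, stump))
    return ensemble

def predict(ensemble, x):
    """Weighted-majority vote of all the weak learners."""
    score = sum(alpha * stump_predict(stump, x) for alpha, stump in ensemble)
    return 1 if score >= 0 else -1
```

For example, on `xs = 0..9` with `ys = +1` exactly for `3 <= x <= 6`, the best single stump gets only 7 of 10 points right, while the boosted ensemble fits the interval exactly; bagging, by contrast, would resample the data and average independently trained learners rather than re-weight samples sequentially.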