Researching on combining boosting ensembles
Abstract
As shown in the bibliography, training an ensemble of networks is an interesting way to improve performance with respect to a single network. The two key factors in designing an ensemble are how to train the individual networks and how to combine them to produce a single output. Boosting is a well-known methodology for building an ensemble. Some boosting methods use a specific combiner (the Boosting Combiner) based on the accuracy of each network. Although the Boosting Combiner provides good results on boosting ensembles, the simple Output Average combiner worked better in three new boosting methods we successfully proposed in previous papers. In this paper, we study the performance of sixteen different combination methods for ensembles previously trained with Adaptive Boosting and Average Boosting, in order to determine which combiner best fits these ensembles. The results show that the accuracy of ensembles trained with these original boosting methods can be improved by using an appropriate alternative combiner. In fact, the Output Average and the Weighted Average on low/medium-sized ensembles provide the best results in most cases.
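For intuition, the sketch below contrasts the three combiners named in the abstract: the Output Average, a Weighted Average, and the accuracy-based Boosting Combiner. It is a minimal illustration, not the paper's implementation; it assumes each network produces a class-probability vector and that per-network accuracies and weighted training errors are available (the function and variable names are hypothetical).

```python
import numpy as np

def output_average(outputs):
    """Output Average: plain mean of the per-network class-probability vectors."""
    return np.mean(outputs, axis=0)

def weighted_average(outputs, accuracies):
    """Weighted Average: each network's output is weighted, e.g. by its
    validation accuracy, before averaging."""
    w = np.asarray(accuracies, dtype=float)
    w = w / w.sum()
    return np.average(outputs, axis=0, weights=w)

def boosting_combiner(outputs, errors):
    """Boosting Combiner: each network casts a vote for its predicted class,
    weighted by log((1 - e) / e), where e is its weighted training error,
    in the spirit of Adaptive Boosting."""
    w = np.log((1.0 - np.asarray(errors)) / np.asarray(errors))
    votes = np.zeros(outputs.shape[1])
    for out, weight in zip(outputs, w):
        votes[np.argmax(out)] += weight
    return votes / votes.sum()

# Toy example: an ensemble of 3 networks over 4 classes.
outputs = np.array([[0.6, 0.2, 0.1, 0.1],
                    [0.3, 0.4, 0.2, 0.1],
                    [0.5, 0.3, 0.1, 0.1]])
print(np.argmax(output_average(outputs)))
print(np.argmax(weighted_average(outputs, accuracies=[0.9, 0.7, 0.8])))
print(np.argmax(boosting_combiner(outputs, errors=[0.1, 0.3, 0.2])))
```

The three functions differ only in how they weight the individual networks, which is precisely the design choice the paper evaluates across sixteen combination methods.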
Related Papers
- Boosting algorithms for network intrusion detection: A comparative evaluation of Real AdaBoost, Gentle AdaBoost and Modest AdaBoost (2020), 204 citations
- Advance and Prospects of AdaBoost Algorithm (2014), 197 citations
- Supplemental Boosting and Cascaded ConvNet Based Transfer Learning Structure for Fast Traffic Sign Detection in Unknown Application Scenes (2018), 11 citations
- The Typical Algorithm of AdaBoost Series in Boosting Family (2003)
- Boosting Ensembles of Weak Classifiers in High Dimensional Input Spaces (2009)