PAC-Bayesian learning of linear classifiers
ICML 2009, pp. 353–360
Top 1% of 2009 papers by citations
Abstract
We present a general PAC-Bayes theorem from which all known PAC-Bayes risk bounds are obtained as particular cases. We also propose different learning algorithms for finding linear classifiers that minimize these bounds. These learning algorithms are generally competitive with both AdaBoost and the SVM.
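One well-known special case recovered by general PAC-Bayes theorems of this kind is the Langford–Seeger "kl" bound, which upper-bounds the true Gibbs risk by inverting a binomial KL inequality. As a rough illustration (a sketch with hypothetical function names, not the paper's exact statement or algorithm), the bound can be evaluated numerically by bisection:

```python
import math

def bernoulli_kl(q, p):
    """KL divergence between Bernoulli(q) and Bernoulli(p)."""
    eps = 1e-12  # clamp away from 0 and 1 to avoid log(0)
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def pac_bayes_kl_bound(emp_risk, kl_qp, m, delta):
    """Upper bound on the true Gibbs risk r, obtained by solving
        kl(emp_risk || r) <= (kl_qp + ln(2*sqrt(m)/delta)) / m
    for the largest feasible r via bisection (kl is increasing in r
    for r >= emp_risk).  kl_qp is KL(Q || P) between the learned
    posterior Q and the prior P; m is the sample size."""
    rhs = (kl_qp + math.log(2.0 * math.sqrt(m) / delta)) / m
    lo, hi = emp_risk, 1.0 - 1e-9
    for _ in range(100):  # bisection; interval halves each step
        mid = 0.5 * (lo + hi)
        if bernoulli_kl(emp_risk, mid) > rhs:
            hi = mid
        else:
            lo = mid
    return hi
```

For example, `pac_bayes_kl_bound(0.1, 5.0, 1000, 0.05)` returns a risk bound between the empirical risk 0.1 and 1, and the bound tightens as the sample size `m` grows. Bound-minimizing learners in this spirit trade the empirical risk term against the KL(Q||P) complexity term.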
Related Papers
- Boosting algorithms for network intrusion detection: A comparative evaluation of Real AdaBoost, Gentle AdaBoost and Modest AdaBoost (2020), 204 citations
- Predictive Analysis of Mental Health Conditions Using AdaBoost Algorithm (2022), 44 citations
- Parallelizing AdaBoost by weights dynamics (2006), 47 citations
- Research on AdaBoost.M1 with Random Forest (2010), 17 citations
- Diverse Adaboost Through Heterogenous Component Classifiers (2022), 1 citation