Analyzing bagging
Abstract
Bagging is one of the most effective computationally intensive procedures for improving unstable estimators or classifiers, and is especially useful for high-dimensional data problems. Here we formalize the notion of instability and derive theoretical results to analyze the variance reduction effect of bagging (or variants thereof), mainly in hard decision problems, which include estimation after testing in regression and decision trees for regression functions and classifiers. Hard decisions create instability, and bagging is shown to smooth such hard decisions, yielding smaller variance and mean squared error. With these theoretical explanations, we motivate subagging, based on subsampling, as an alternative aggregation scheme. It is computationally cheaper but shows approximately the same accuracy as bagging. Moreover, our theory reveals improvements of first order, in line with simulation studies. In particular, we obtain an asymptotic limiting distribution at the cube-root rate for the split point when fitting piecewise constant functions. Denoting the sample size by $n$, it follows that in a cylindric neighborhood of diameter $n^{-1/3}$ around the theoretically optimal split point, the variance and mean squared error reduction of subagging can be characterized analytically. Because of this slow rate, our reasoning also provides an explanation, on the global scale of the whole covariate space, for decision trees with finitely many splits.
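To make the two aggregation schemes concrete, here is a minimal sketch (not from the paper) of bagging and subagging applied to a decision stump: a piecewise constant fit with a single split point, the kind of hard decision analyzed above. It assumes only NumPy; the helper names (`fit_split`, `aggregate_predict`), the subsampling fraction, and the toy step-function data are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_split(x, y):
    """Least-squares split point for a one-split piecewise constant fit
    (the 'hard decision': a non-smooth argmin over candidate splits)."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best_sse, best_split = np.inf, xs[0]
    for i in range(1, len(xs)):
        if xs[i - 1] == xs[i]:          # skip ties (can occur under bootstrap)
            continue
        left, right = ys[:i], ys[i:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_sse, best_split = sse, 0.5 * (xs[i - 1] + xs[i])
    return best_split

def predict_stump(x_new, x, y, split):
    """Piecewise constant prediction given a fitted split point."""
    left, right = y[x <= split].mean(), y[x > split].mean()
    return np.where(x_new <= split, left, right)

def aggregate_predict(x_new, x, y, B=50, m=None):
    """Bagging (m=None: bootstrap of size n, with replacement) or
    subagging (m < n: subsample without replacement). Averaging B
    resampled stumps smooths the hard split decision."""
    n = len(x)
    preds = []
    for _ in range(B):
        if m is None:
            idx = rng.integers(0, n, size=n)             # bootstrap resample
        else:
            idx = rng.choice(n, size=m, replace=False)   # subsample
        split = fit_split(x[idx], y[idx])
        preds.append(predict_stump(x_new, x[idx], y[idx], split))
    return np.mean(preds, axis=0)

# Toy data: a step function at 0 plus noise, where the split point is unstable.
n = 100
x = rng.uniform(-1, 1, n)
y = (x > 0).astype(float) + rng.normal(0.0, 0.5, n)
x_grid = np.linspace(-1, 1, 400)
truth = (x_grid > 0).astype(float)

single = predict_stump(x_grid, x, y, fit_split(x, y))
bagged = aggregate_predict(x_grid, x, y, B=50)              # bagging
subagged = aggregate_predict(x_grid, x, y, B=50, m=n // 2)  # half-subagging

for name, pred in [("single stump", single), ("bagged", bagged), ("subagged", subagged)]:
    print(f"{name:12s} MSE vs truth: {((pred - truth) ** 2).mean():.4f}")
```

Averaging over many bootstrap or subsample stumps replaces the single hard split with a smooth mixture of nearby splits, which is the variance-smoothing mechanism the abstract describes; half-subagging (m = n/2) is used here only as one common choice, and each replicate costs less to fit than a full bootstrap stump.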