Sparse Algorithms Are Not Stable: A No-Free-Lunch Theorem
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, Vol. 34(1), pp. 187–193
Top 1% of 2011 papers by citations
Abstract
We consider two desired properties of learning algorithms: sparsity and algorithmic stability. Both properties are believed to lead to good generalization ability. We show that these two properties are fundamentally at odds with each other: a sparse algorithm cannot be stable, and vice versa. Thus, one has to trade off sparsity and stability when designing a learning algorithm. In particular, our general result implies that ℓ1-regularized regression (Lasso) cannot be stable, while ℓ2-regularized regression is known to have strong stability properties and is therefore not sparse.
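The Lasso-versus-ridge contrast in the abstract can be probed empirically. Below is a minimal NumPy sketch (not from the paper; the data-generating setup, regularization weights, and coordinate-descent solver are illustrative assumptions) that fits ℓ1- and ℓ2-regularized regression on a sample and on the same sample with one point removed. Soft-thresholding gives the ℓ1 fit exact zero coefficients (sparsity), and one can compare how far each solution moves under the leave-one-out perturbation, the notion of stability at issue in the paper.

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form ridge: (X^T X + lam I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def lasso(X, y, lam, iters=500):
    """Lasso via cyclic coordinate descent with soft-thresholding.
    Minimizes 0.5*||y - Xw||^2 + lam*||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(d):
            # Partial residual: exclude feature j's current contribution.
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w

# Illustrative data: one informative feature plus a near-duplicate of it.
rng = np.random.default_rng(0)
n, d = 50, 5
X = rng.normal(size=(n, d))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=n)  # nearly collinear pair
y = X[:, 0] + 0.1 * rng.normal(size=n)

lam = 2.0  # illustrative regularization weight
# Fit on the full sample and on a leave-one-out sample.
w_l1, w_l1_loo = lasso(X, y, lam), lasso(X[1:], y[1:], lam)
w_l2, w_l2_loo = ridge(X, y, lam), ridge(X[1:], y[1:], lam)

print("lasso coefficients:", np.round(w_l1, 3))   # contains exact zeros
print("lasso LOO shift:", np.abs(w_l1 - w_l1_loo).max())
print("ridge LOO shift:", np.abs(w_l2 - w_l2_loo).max())
```

With near-duplicate features, the Lasso must pick one of the pair, and which one it picks can flip when a single sample is removed; ridge instead spreads weight across both and shifts only slightly, matching the paper's thesis that sparsity and stability pull in opposite directions.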
Related Papers
- Physician-Friendly Machine Learning: A Case Study with Cardiovascular Disease Risk Prediction (2019), 71 citations
- A Novel Application of Unsupervised Machine Learning and Supervised Machine Learning-Derived Radiomics in Anterior Cruciate Ligament Rupture (2021), 12 citations
- Application of Machine Learning for SARS-CoV-2 Outbreak (2021), 11 citations
- A Benchmark Study by using various Machine Learning Models for Predicting Covid-19 trends (2023), 3 citations
- Improving Clinical Prediction of Later Occurrence of Breast Cancer Metastasis Using Deep Learning and Machine Learning with Grid Search (2022), 2 citations