Optimization with Sparsity-Inducing Penalties
Abstract
Sparse estimation methods aim at using or obtaining parsimonious representations of data or models. They were first dedicated to linear variable selection, but numerous extensions have now emerged, such as structured sparsity or kernel selection. It turns out that many of the related estimation problems can be cast as convex optimization problems by regularizing the empirical risk with appropriate nonsmooth norms. The goal of this monograph is to present, from a general perspective, optimization tools and techniques dedicated to such sparsity-inducing penalties. We cover proximal methods, block-coordinate descent, reweighted ℓ2-penalized techniques, working-set and homotopy methods, as well as non-convex formulations and extensions, and provide an extensive set of experiments comparing various algorithms from a computational point of view.
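As a concrete illustration of the proximal methods mentioned in the abstract, the sketch below applies proximal gradient descent (ISTA) to the lasso, where the proximal operator of the ℓ1 norm is elementwise soft-thresholding. This is a minimal example on synthetic data, not code from the monograph; the function names, step-size choice, and iteration count are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: shrink each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# One run of proximal gradient descent (ISTA) on the lasso objective
#   min_w  0.5 * ||X w - y||^2 + lam * ||w||_1
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
y = rng.standard_normal(50)
w = np.zeros(20)
lam = 0.1
step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1/L, L = largest singular value squared

for _ in range(200):
    grad = X.T @ (X @ w - y)            # gradient of the smooth quadratic part
    w = soft_threshold(w - step * grad, step * lam)  # proximal step on the penalty
```

The nonsmooth ℓ1 term is handled entirely by the closed-form proximal step, which sets small coordinates exactly to zero, producing the sparsity the penalty is designed to induce.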
Related Papers
- Regression Shrinkage and Selection Via the Lasso (1996), 50,737 citations
- Regularization and Variable Selection Via the Elastic Net (2005), 20,429 citations
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers (2010), 13,387 citations
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems (2009), 11,862 citations
- Model Selection and Estimation in Regression with Grouped Variables (2005), 7,368 citations