An analysis of linear models, linear value-function approximation, and feature selection for reinforcement learning
2008, pp. 752–759
Top 10% of 2008 papers by citations
Abstract
We show that linear value-function approximation is equivalent to a form of linear model approximation. We then derive a relationship between the model-approximation error and the Bellman error, and show how this relationship can guide feature selection for model improvement and/or value-function improvement. We also show how these results give insight into the behavior of existing feature-selection algorithms.
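The stated equivalence can be checked numerically: for a fixed feature matrix, the exact value of the least-squares linear model coincides with the linear fixed-point (LSTD) value-function approximation. Below is a minimal sketch under uniform state weighting, with a randomly generated MDP and hypothetical features (all names are illustrative, not from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, gamma = 6, 3, 0.9  # states, features, discount (illustrative sizes)

# Random MDP: row-stochastic transition matrix P and reward vector R.
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)
R = rng.random(n)
Phi = rng.random((n, k))  # feature matrix (hypothetical features)

# Least-squares linear model in feature space:
# P_phi predicts expected next features, r_phi predicts rewards.
pinv = np.linalg.pinv(Phi)      # (Phi^T Phi)^{-1} Phi^T
P_phi = pinv @ P @ Phi
r_phi = pinv @ R

# Exact value of the approximate linear model, solved in feature space.
w_model = np.linalg.solve(np.eye(k) - gamma * P_phi, r_phi)

# Linear fixed-point (LSTD) solution for the true MDP.
w_lstd = np.linalg.solve(Phi.T @ (Phi - gamma * P @ Phi), Phi.T @ R)

print(np.allclose(w_model, w_lstd))  # → True: the two solutions coincide
```

Algebraically, both solutions reduce to $(\Phi^\top\Phi - \gamma\,\Phi^\top P\Phi)^{-1}\Phi^\top R$, which is why the check passes for any feature matrix of full column rank.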
Related Papers
- Approximations to Error Functions (2007), 25 citations
- On the approximation error in high dimensional model representation (2008), 20 citations
- Multivariate Approximation Schemes and the Approximation of Linear Functionals (1974)
- An Approximation for the Error of the Normal Approximation to a Linear Combination of Independently Distributed Random Variables (1988)
- Error Processing of Sparse Identification of Nonlinear Dynamical Systems via L∞ Approximation (2021)