Accelerating Extreme Search Based on Natural Gradient Descent with Beta Distribution
2021
Abstract
Natural gradient descent is an optimization method rooted in information geometry. Owing to its better convergence behavior, it works well in many applications and can be a good alternative to gradient descent and stochastic gradient descent in machine learning and statistics. The goal of this work is to propose a natural gradient descent algorithm with the beta distribution and step-size adaptation. We compare the minimization process of gradient descent with that of natural gradient descent under the Gaussian and beta distributions. Additionally, the computation of the Fisher matrix required for the natural gradient is presented.
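As a concrete illustration of the update the abstract refers to, the sketch below computes the Fisher information matrix of the beta distribution in closed form (via the trigamma function) and uses it to precondition a gradient step, i.e. θ ← θ − η F(θ)⁻¹∇L(θ). This is a minimal sketch under stated assumptions, not the paper's algorithm: the function names, the fixed learning rate `lr`, and the maximum-likelihood fitting loop in the usage portion are illustrative, and the paper's step-size adaptation is not reproduced.

```python
import numpy as np
from scipy.special import digamma, polygamma


def beta_fisher(a, b):
    """Fisher information matrix of Beta(a, b).

    Closed form via the trigamma function psi1 = polygamma(1, .):
        F = [[psi1(a) - psi1(a+b),      -psi1(a+b)       ],
             [     -psi1(a+b),      psi1(b) - psi1(a+b)  ]]
    """
    t = polygamma(1, a + b)
    return np.array([[polygamma(1, a) - t, -t],
                     [-t, polygamma(1, b) - t]])


def natural_gradient_step(params, grad, lr=0.1):
    """One natural gradient step: theta <- theta - lr * F(theta)^{-1} grad."""
    a, b = params
    nat_grad = np.linalg.solve(beta_fisher(a, b), grad)
    # Keep (a, b) in the valid parameter domain a, b > 0.
    return np.maximum(params - lr * nat_grad, 1e-6)


# Usage sketch (hypothetical): fit Beta(a, b) to samples by natural
# gradient descent on the negative mean log-likelihood.
rng = np.random.default_rng(0)
x = rng.beta(2.0, 5.0, size=2000)
params = np.array([1.0, 1.0])
for _ in range(100):
    a, b = params
    # Gradient of the negative mean log-likelihood of Beta(a, b).
    grad = -np.array([np.log(x).mean() - (digamma(a) - digamma(a + b)),
                      np.log(1.0 - x).mean() - (digamma(b) - digamma(a + b))])
    params = natural_gradient_step(params, grad, lr=0.5)
print(params)  # approaches the maximum-likelihood estimate near (2.0, 5.0)
```

Preconditioning with the Fisher matrix makes the step invariant to how the distribution is parameterized, which is what gives natural gradient descent its faster convergence relative to plain gradient descent on this kind of objective.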