Algorithms for the Solution of the Nonlinear Least-Squares Problem
Abstract
This paper describes a modification to the Gauss–Newton method for the solution of nonlinear least-squares problems. The new method seeks to avoid the deficiencies of the Gauss–Newton method by improving, when necessary, the Hessian approximation by explicitly including or approximating some of the neglected terms. The method seeks to compute the search direction without explicitly forming either the Hessian approximation or a factorization of this matrix. The benefits of this are similar to those of avoiding the formation of the normal equations in the Gauss–Newton method. Three algorithms based on this method are described: one which assumes that second-derivative information is available, and two which assume only that first derivatives can be computed.
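As a point of reference for the method the abstract describes, the following is a minimal sketch (not the paper's algorithm) of the plain Gauss–Newton iteration for minimizing 0.5‖r(x)‖², whose Hessian approximation JᵀJ neglects the second-derivative terms Σᵢ rᵢ(x)∇²rᵢ(x) that the paper's modification reintroduces when necessary. The toy fitting problem, variable names, and convergence tolerances are illustrative assumptions.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Plain Gauss-Newton for min 0.5 * ||r(x)||^2 (illustrative sketch)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        J = jacobian(x)
        # Solve the linearized subproblem min ||J p + r|| via an orthogonal
        # factorization (inside lstsq), avoiding explicit formation of the
        # normal-equations matrix J^T J -- the benefit the abstract alludes to.
        p, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + p
        if np.linalg.norm(p) < tol:
            break
    return x

# Toy zero-residual problem: fit y = a * exp(b * t) to exact data, a regime
# where the neglected terms r_i * Hess(r_i) vanish at the solution and
# Gauss-Newton converges rapidly.
t = np.linspace(0.0, 1.0, 6)
y = 2.0 * np.exp(-1.5 * t)

def residual(x):
    a, b = x
    return a * np.exp(b * t) - y

def jacobian(x):
    a, b = x
    e = np.exp(b * t)
    # Columns: dr/da = exp(b t), dr/db = a t exp(b t)
    return np.column_stack([e, a * t * e])

x_star = gauss_newton(residual, jacobian, x0=[1.0, -1.0])
```

For large-residual or highly nonlinear problems, the JᵀJ approximation degrades, which is precisely the deficiency the paper's modified algorithms address.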
Related Papers
- Newton-SOR Iteration for Solving Large-Scale Unconstrained Optimization Problems with an Arrowhead Hessian Matrices (2019), 7 citations
- Quasi-Newton Methods (2019), 7 citations
- Fast Newton-CG Method for Batch Learning of Conditional Random Fields (2011), 3 citations
- An Improvement of Computing Newton’s Direction for Finding Unconstrained Minimizer for Large-Scale Problems with an Arrowhead Hessian Matrix (2019), 1 citation
- Newton-2EGSOR Method for Unconstrained Optimization Problems with a Block Diagonal Hessian (2019), 1 citation