Training a Two-Layer ReLU Network Analytically
Abstract
Neural networks are usually trained with different variants of gradient descent-based optimization algorithms, such as stochastic gradient descent or the Adam optimizer. Recent theoretical work states that the critical points (where the gradient of the loss is zero) of two-layer ReLU networks with the square loss are not all local minima. However, in this work we explore an algorithm for training two-layer neural networks with ReLU-like activation and the square loss that alternately finds the critical points of the loss function analytically for one layer while keeping the other layer and the neuron activation pattern fixed. Experiments indicate that this simple algorithm can find deeper optima than stochastic gradient descent or the Adam optimizer, obtaining significantly smaller training loss values on four out of the five real datasets evaluated. Moreover, the method is faster than the gradient descent-based methods and has virtually no tuning parameters.
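The idea described in the abstract can be illustrated concretely: once the ReLU activation pattern is frozen, the network output is linear in each layer's weights separately, so the critical points of the square loss for one layer reduce to a linear least-squares solution. Below is a minimal NumPy sketch of such an alternating scheme, not the authors' exact algorithm; the function name, variable names, and choices such as a scalar output and random initialization are illustrative assumptions.

```python
import numpy as np

def fit_two_layer_relu(X, y, hidden=32, iters=20, seed=0):
    """Alternating analytic least-squares updates for a two-layer ReLU net
    with square loss (illustrative sketch, not the paper's implementation).
    X: (n, d) inputs, y: (n,) targets."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(scale=1.0 / np.sqrt(d), size=(d, hidden))
    w2 = rng.normal(scale=1.0 / np.sqrt(hidden), size=hidden)

    for _ in range(iters):
        # Freeze the ReLU activation pattern implied by the current first layer.
        D = (X @ W1 > 0).astype(X.dtype)            # (n, hidden) 0/1 mask

        # With D and W1 fixed, the output ((X @ W1) * D) @ w2 is linear in w2:
        # the second layer's critical point is an ordinary least-squares solution.
        H = (X @ W1) * D
        w2, *_ = np.linalg.lstsq(H, y, rcond=None)

        # With D and w2 fixed, the output is also linear in the entries of W1:
        # sum_j w2[j] * D[:, j] * (X @ W1[:, j]). Build the design matrix over
        # the stacked columns of W1 and solve analytically.
        A = np.concatenate(
            [(D[:, [j]] * w2[j]) * X for j in range(hidden)], axis=1)  # (n, d*hidden)
        w1_vec, *_ = np.linalg.lstsq(A, y, rcond=None)
        W1 = w1_vec.reshape(hidden, d).T

    return W1, w2
```

Each half-step solves its layer exactly (given the frozen activation pattern), which is what removes the step-size and other tuning parameters that gradient descent methods require.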
Related Papers
- Combined algorithms for training RBF neural networks based on genetic algorithms and gradient descent (2007)
- The use of control theory methods in training neural networks on the example of teeth recognition on panoramic X-ray images (2021)
- Dual Gradient Descent Algorithm on Two-Layered Feed-Forward Artificial Neural Networks (2007)