Recent Advances in Stochastic Gradient Descent in Deep Learning
Abstract
In the age of artificial intelligence, finding effective ways to handle huge amounts of data is a motivating and challenging problem. Among machine learning methods, stochastic gradient descent (SGD) stands out as both simple and highly effective. This study first provides a detailed survey of contemporary state-of-the-art deep learning applications, including natural language processing (NLP), visual data processing, and speech and audio processing. It then introduces SGD and several of its variants already implemented in the PyTorch optimizer module, including SGD, Adagrad, Adadelta, RMSprop, Adam, and AdamW, among others. Finally, we present theoretical conditions under which these methods converge and find that a gap remains between those theoretical conditions and practical applications; how to bridge this gap is a question for future work.
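As a concrete illustration of the variants named above, the following minimal sketch shows how each is instantiated from PyTorch's `torch.optim` module and driven by the same training step; the toy linear model, random minibatch, and hyperparameter values are illustrative assumptions, not settings taken from the paper.

```python
# Minimal sketch: instantiating the SGD variants discussed in the abstract
# via PyTorch's torch.optim module. The model and hyperparameters below are
# illustrative assumptions, not values prescribed by the paper.
import torch
import torch.nn as nn

model = nn.Linear(10, 1)  # placeholder model for demonstration

optimizers = {
    "SGD":      torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9),
    "Adagrad":  torch.optim.Adagrad(model.parameters(), lr=0.01),
    "Adadelta": torch.optim.Adadelta(model.parameters()),
    "RMSprop":  torch.optim.RMSprop(model.parameters(), lr=0.001),
    "Adam":     torch.optim.Adam(model.parameters(), lr=0.001),
    "AdamW":    torch.optim.AdamW(model.parameters(), lr=0.001, weight_decay=0.01),
}

# One optimization step looks the same for every variant:
x, y = torch.randn(32, 10), torch.randn(32, 1)  # random minibatch
loss_fn = nn.MSELoss()
opt = optimizers["Adam"]
opt.zero_grad()              # clear accumulated gradients
loss = loss_fn(model(x), y)  # forward pass
loss.backward()              # backpropagate
opt.step()                   # apply the optimizer's update rule
```

Note that every variant shares the same `zero_grad()` / `backward()` / `step()` interface, so switching between them in practice requires changing only the constructor line.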
Related Papers
- Stochastic Gradient Descent (2015), cited 47 times
- Training a Two-Layer ReLU Network Analytically (2023), cited 7 times
- Accelerating Extreme Search Based on Natural Gradient Descent with Beta Distribution (2021), cited 4 times
- Computational Complexity of Gradient Descent Algorithm (2021), cited 3 times