Reducing the Dimensionality of Data with Neural Networks
Science, 2006, Vol. 313(5786), pp. 504–507
Citations: top 1% of 2006 papers
Abstract
High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
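As a rough illustration of the idea in the abstract, the sketch below builds a tiny linear autoencoder with a small central "code" layer and trains it by plain gradient descent to reconstruct its inputs. This is a minimal assumption-laden toy, not the paper's method: the layer sizes, learning rate, and random data are invented for illustration, and the layer-wise pretraining the paper uses for initialization is not shown, only the reconstruction objective itself.

```python
import numpy as np

# Toy autoencoder sketch (illustrative assumptions throughout; the paper's
# pretraining-based initialization is NOT shown here).
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))          # 64 samples, 8-dimensional inputs

n_in, n_code = 8, 2                   # small central layer -> low-dim codes
W1 = rng.normal(scale=0.1, size=(n_in, n_code))   # encoder weights
W2 = rng.normal(scale=0.1, size=(n_code, n_in))   # decoder weights

lr = 0.01
losses = []
for _ in range(200):
    code = X @ W1                     # low-dimensional code
    recon = code @ W2                 # reconstruction of the input
    err = recon - X                   # reconstruction error
    losses.append((err ** 2).mean())  # mean squared reconstruction error
    # Gradient descent on the squared-error objective
    gW2 = code.T @ err / len(X)
    gW1 = X.T @ (err @ W2.T) / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2

print(losses[0], "->", losses[-1])    # loss falls as the codes improve
```

With nonlinear hidden layers and many layers stacked, the same objective becomes hard to optimize from random weights, which is the problem the paper's initialization scheme addresses.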
Related Papers
- Analysis of weight initialization methods for gradient descent with momentum (2015), 11 citations
- An initial alignment between neural network and target is needed for gradient descent to learn (2022), 2 citations
- Predicting the success of Gradient Descent for a particular Dataset-Architecture-Initialization (DAI) (2021), 1 citation
- Training Two-Layer ReLU Networks with Gradient Descent is Inconsistent (2020)