On the importance of network architecture in training very deep neural networks
Abstract
Very deep neural networks with hundreds of layers or more have achieved significant success on a variety of vision tasks, from image classification and detection to image captioning. However, simply stacking more convolutional layers suffers from the vanishing-gradient problem and fails to reduce the training loss further. The residual network [1] pushes model depth to the extreme by adding an identity mapping to a learned residual term, which largely resolves this gradient back-propagation bottleneck. In this paper, we investigate the residual module in detail by analyzing the ordering of operations within its blocks and modifying them one by one to achieve lower test error on the CIFAR-10 dataset. One key observation is that removing the original ReLU activation from the identity-mapping path facilitates gradient propagation. Moreover, inspired by the ResNet block, we propose a random-jump scheme that skips some residual connections during training: lower-level features can jump directly to any subsequent layer, bypassing the intermediate transformations. Such an upgrade to the network structure not only saves training time but also improves performance.
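The two modifications described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's implementation: the residual transformation F is reduced to a single linear map with an internal ReLU so that the mechanics are visible, whereas a real block would use convolution and batch-normalization layers. The identity path carries x through unchanged (no ReLU after the addition), and the random-jump scheme is modeled as skipping the transformation with some probability during training.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_block(x, weight, train=True, skip_prob=0.5):
    """Sketch of a residual block with an unobstructed identity path.

    - No activation is applied after the addition, so gradients flow
      through the identity mapping x unchanged.
    - During training, the whole transformation is randomly bypassed
      with probability `skip_prob` (the random-jump idea: the input
      feature jumps straight to the next layer).
    """
    if train and rng.random() < skip_prob:
        # Random-jump: skip this block's transformation entirely.
        return x
    residual = np.maximum(weight @ x, 0.0)  # F(x): linear map + internal ReLU
    return x + residual                     # identity mapping, no ReLU after add

# Example: at evaluation time the block is always applied.
x = np.ones(4)
w = np.eye(4) * 0.1
y = residual_block(x, w, train=False)  # each entry: 1.0 + max(0.1, 0) = 1.1
```

At evaluation time the block is deterministic; during training, skipped blocks cost no forward or backward computation, which is where the training-time savings come from.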