Multiactivation Pooling Method in Convolutional Neural Networks for Image Recognition
Top 10% of 2018 papers
Abstract
Convolutional neural networks (CNNs) have become increasingly popular and now serve as a standard feature extractor in image processing, big-data processing, fog computing, and related applications. CNNs are typically built from basic units such as convolutional, pooling, and activation layers. Conventional pooling in CNNs refers to 2×2 max-pooling or average-pooling applied after convolutional or ReLU layers. In this paper, we propose a Multiactivation Pooling (MAP) method that makes CNNs more accurate on classification tasks without increasing depth or the number of trainable parameters. We add more convolutional layers before each pooling layer and expand the pooling region to 4×4, 8×8, 16×16, or even larger. During this large-scale subsampling, we select the top-k activations in each region, sum them, and constrain the result with a hyperparameter σ. We take VGG, ALL-CNN, and DenseNets as baseline models and evaluate the proposed MAP method on the benchmark datasets CIFAR-10, CIFAR-100, SVHN, and ImageNet. The classification results are competitive.
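Based only on the abstract's description, the MAP operation over one channel can be sketched as follows: each large pooling window contributes the sum of its top-k activations, scaled by σ. This is a minimal NumPy sketch under stated assumptions — the exact role of σ (here used as a multiplicative constraint) and the choice of k are details of the full paper, and the function name `map_pool` is our own:

```python
import numpy as np

def map_pool(x, pool=4, k=4, sigma=0.25):
    """Sketch of Multiactivation Pooling (MAP) on a single 2-D feature map:
    in each pool×pool window, sum the top-k activations and scale by sigma.
    Assumes the spatial dimensions are divisible by `pool`."""
    h, w = x.shape
    out = np.empty((h // pool, w // pool))
    for i in range(0, h, pool):
        for j in range(0, w, pool):
            window = x[i:i + pool, j:j + pool].ravel()
            topk = np.sort(window)[-k:]          # the k largest activations
            out[i // pool, j // pool] = sigma * topk.sum()
    return out
```

For example, on a 4×4 map with values 0..15 and a single 4×4 window, the top-4 activations are 12, 13, 14, 15, so the pooled output is `sigma * 54`. With k=1 and sigma=1 this reduces to ordinary max-pooling over the enlarged window.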