Why and when should you pool? Analyzing Pooling in Recurrent Architectures
Abstract
Pooling-based recurrent neural architectures consistently outperform their counterparts without pooling on sequence classification tasks. However, the reasons for their enhanced performance are largely unexamined. In this work, we explore three commonly used pooling techniques (mean-pooling, max-pooling, and attention), and propose max-attention, a novel variant that captures interactions among predictive tokens in a sentence. Using novel experiments, we demonstrate that pooling architectures substantially differ from their non-pooling equivalents in their learning ability and positional biases: (i) pooling facilitates better gradient flow than BiLSTMs in initial training epochs, and (ii) BiLSTMs are biased towards tokens at the beginning and end of the input, whereas pooling alleviates this bias. Consequently, we find that pooling yields large gains in low resource scenarios, and instances when salient words lie towards the middle of the input. Across several text classification tasks, we find max-attention to frequently outperform other pooling techniques.
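To make the compared operations concrete, here is a minimal PyTorch sketch of masked mean-, max-, and attention-pooling over BiLSTM hidden states. The tensor shapes and names (`h`, `mask`, `w`) are illustrative assumptions, and `max_attention_pool` is a hypothetical reading in which the max-pooled vector serves as the attention query; it is not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def mean_pool(h, mask):
    # h: (batch, seq_len, dim) hidden states; mask: (batch, seq_len), 1 for real tokens.
    m = mask.unsqueeze(-1).float()
    return (h * m).sum(dim=1) / m.sum(dim=1).clamp(min=1.0)

def max_pool(h, mask):
    # Fill padded positions with -inf so they never win the elementwise max.
    h = h.masked_fill(mask.unsqueeze(-1) == 0, float("-inf"))
    return h.max(dim=1).values

def attention_pool(h, mask, w):
    # w: (dim,) learned scoring vector; padded positions are masked before softmax.
    scores = h @ w                                   # (batch, seq_len)
    scores = scores.masked_fill(mask == 0, float("-inf"))
    alpha = F.softmax(scores, dim=1)                 # weights over tokens
    return (alpha.unsqueeze(-1) * h).sum(dim=1)

def max_attention_pool(h, mask):
    # Assumed variant: the max-pooled vector acts as the query for
    # dot-product attention over the hidden states, letting the strongest
    # features reweight the sequence (an assumption, not the paper's spec).
    query = max_pool(h, mask)                        # (batch, dim)
    scores = torch.einsum("bld,bd->bl", h, query)
    scores = scores.masked_fill(mask == 0, float("-inf"))
    alpha = F.softmax(scores, dim=1)
    return (alpha.unsqueeze(-1) * h).sum(dim=1)
```

In all four variants the classifier sees a weighted combination of every time step rather than only the final BiLSTM states, which is one intuition for the gradient-flow and positional-bias findings reported in the abstract.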