Adapting Pretrained Text-to-Text Models for Long Text Sequences
Abstract
We present an empirical study of adapting an existing pretrained text-to-text model to long-sequence inputs. Through a comprehensive study along three axes of the pretraining pipeline (model architecture, optimization objective, and pretraining corpus), we propose an effective recipe for building long-context models from existing short-context models. Specifically, we replace the full attention in transformers with pooling-augmented blockwise attention, and pretrain the model with a masked-span prediction task using spans of varying lengths. In terms of the pretraining corpus, we find that using randomly concatenated short documents from a large open-domain corpus results in better performance than using existing long-document corpora, which are typically limited in their domain coverage. With these findings, we build a long-context model that achieves competitive performance on long-text QA tasks and establishes a new state of the art on five long-text summarization datasets, often outperforming previous methods with larger model sizes.
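To make the architectural change concrete, below is a minimal sketch (not the authors' released implementation) of pooling-augmented blockwise attention: each query block attends to its own keys/values plus mean-pooled summaries of every block, giving a coarse global view at sub-quadratic cost. The function name `pooled_block_attention`, the mean-pooling choice, and the block size are illustrative assumptions.

```python
# Hypothetical sketch of pooling-augmented blockwise attention.
# Each block attends to its own tokens plus one pooled summary per block,
# replacing full O(n^2) attention with a local + pooled-global pattern.
import torch
import torch.nn.functional as F


def pooled_block_attention(q, k, v, block_size=128):
    """q, k, v: (batch, seq_len, dim); seq_len assumed divisible by block_size."""
    b, n, d = q.shape
    nb = n // block_size

    # Reshape into blocks: (batch, num_blocks, block_size, dim).
    qb = q.view(b, nb, block_size, d)
    kb = k.view(b, nb, block_size, d)
    vb = v.view(b, nb, block_size, d)

    # Mean-pool each block into one summary token: (batch, num_blocks, dim).
    k_pool = kb.mean(dim=2)
    v_pool = vb.mean(dim=2)

    # Keys/values for each query block: its own tokens + all pooled summaries.
    k_cat = torch.cat([kb, k_pool.unsqueeze(1).expand(b, nb, nb, d)], dim=2)
    v_cat = torch.cat([vb, v_pool.unsqueeze(1).expand(b, nb, nb, d)], dim=2)

    attn = torch.einsum("bnqd,bnkd->bnqk", qb, k_cat) / d**0.5
    out = torch.einsum("bnqk,bnkd->bnqd", F.softmax(attn, dim=-1), v_cat)
    return out.reshape(b, n, d)


if __name__ == "__main__":
    x = torch.randn(2, 512, 64)
    print(pooled_block_attention(x, x, x).shape)  # torch.Size([2, 512, 64])
```

Per query block, attention cost scales with `block_size + num_blocks` rather than the full sequence length, which is what makes the adaptation tractable for long inputs under the assumptions above.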