MiniLMv2: Multi-Head Self-Attention Relation Distillation for Compressing Pretrained Transformers
Abstract
We generalize deep self-attention distillation in MINILM. In particular, we define multi-head self-attention relations as the scaled dot-products between pairs of query, key, and value vectors within each self-attention module, and then use this relational knowledge to train the student model. Besides its simplicity and unified principle, the approach places no restriction on the number of the student's attention heads, whereas most previous work requires the teacher and student to have the same number of heads. Moreover, the fine-grained self-attention relations allow the student to more fully exploit the interaction knowledge learned by the Transformer. In addition, we thoroughly examine the layer selection strategy for teacher models, rather than relying only on the last layer as in MINILM. We conduct extensive experiments on compressing both monolingual and multilingual pre-trained models. Experimental results demonstrate that our models, distilled from base-size and large-size teachers (BERT, RoBERTa, and XLM-R), outperform the state of the art.
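To make the core idea concrete, the sketch below illustrates how such self-attention relations and the distillation loss could be computed in PyTorch. It is a minimal sketch under stated assumptions, not the authors' implementation: the function names (`relation_scores`, `relation_distillation_loss`), the KL-divergence reduction, and the way the loss is averaged over the Q-Q, K-K, and V-V pairs are illustrative choices. Note that `num_relation_heads` is a free hyperparameter shared by teacher and student, which is what removes the requirement that both models have the same number of attention heads.

```python
import torch
import torch.nn.functional as F

def relation_scores(vectors: torch.Tensor, num_relation_heads: int) -> torch.Tensor:
    """Scaled dot-products between pairs of vectors (queries, keys, or values),
    split into `num_relation_heads` relation heads.

    vectors: [batch, seq_len, hidden] concatenated Q, K, or V from one layer.
    returns: [batch, num_relation_heads, seq_len, seq_len] relation logits.
    """
    batch, seq_len, hidden = vectors.shape
    head_dim = hidden // num_relation_heads
    # Reshape to [batch, num_relation_heads, seq_len, head_dim].
    v = vectors.view(batch, seq_len, num_relation_heads, head_dim).transpose(1, 2)
    # Scaled dot-product between all position pairs within each relation head.
    return torch.matmul(v, v.transpose(-1, -2)) / head_dim ** 0.5

def relation_distillation_loss(teacher_qkv, student_qkv, num_relation_heads: int):
    """KL divergence between teacher and student self-attention relations,
    averaged over the Q-Q, K-K, and V-V relation pairs.

    teacher_qkv / student_qkv: tuples (queries, keys, values) taken from the
    selected teacher layer and the student layer being trained.
    """
    loss = 0.0
    for t_vec, s_vec in zip(teacher_qkv, student_qkv):
        # Relations are distributions over positions (softmax over the last dim).
        t_rel = F.softmax(relation_scores(t_vec, num_relation_heads), dim=-1)
        s_rel = F.log_softmax(relation_scores(s_vec, num_relation_heads), dim=-1)
        loss = loss + F.kl_div(s_rel, t_rel, reduction="batchmean")
    return loss / len(teacher_qkv)
```

Because both models' Q, K, and V vectors are reshaped into the same number of relation heads before the dot-product, the teacher and student may use different hidden sizes and different attention-head counts, which matches the flexibility claimed in the abstract.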