GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints
EMNLP 2023, pp. 4895–4901
Citations over time: top 1% of 2023 papers.
Abstract
Multi-query attention (MQA), which only uses a single key-value head, drastically speeds up decoder inference. However, MQA can lead to quality degradation, and moreover it may not be desirable to train a separate model just for faster inference. We (1) propose a recipe for uptraining existing multi-head language model checkpoints into models with MQA using 5% of original pre-training compute, and (2) introduce grouped-query attention (GQA), a generalization of multi-query attention which uses an intermediate (more than one, less than number of query heads) number of key-value heads. We show that uptrained GQA achieves quality close to multi-head attention with comparable speed to MQA.
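To make the two ideas in the abstract concrete, below is a minimal sketch of (1) the checkpoint-conversion step, which the paper initializes by mean-pooling the key and value projections of the original heads in each group, and (2) the grouped-query attention computation itself. The function names, tensor shapes, and PyTorch framing are illustrative assumptions, not the paper's reference implementation.

```python
import torch
import torch.nn.functional as F

def mean_pool_kv_heads(w_kv: torch.Tensor, num_groups: int) -> torch.Tensor:
    """Initialize GQA key/value projections from a multi-head checkpoint by
    mean-pooling the per-head projection weights within each group (the
    conversion step of the uptraining recipe; shapes are assumptions).

    w_kv: [num_heads, head_dim, d_model] per-head K or V projection weights.
    Returns: [num_groups, head_dim, d_model].
    """
    num_heads, head_dim, d_model = w_kv.shape
    assert num_heads % num_groups == 0
    grouped = w_kv.reshape(num_groups, num_heads // num_groups, head_dim, d_model)
    return grouped.mean(dim=1)

def grouped_query_attention(q, k, v, num_groups):
    """q: [batch, num_q_heads, seq, head_dim]; k, v: [batch, num_groups, seq, head_dim].
    Each group of num_q_heads // num_groups query heads shares one key-value
    head. num_groups == 1 recovers MQA; num_groups == num_q_heads recovers
    standard multi-head attention.
    """
    b, h_q, s, d = q.shape
    group_size = h_q // num_groups
    # Broadcast each shared K/V head across the query heads in its group.
    k = k.repeat_interleave(group_size, dim=1)  # [b, h_q, s, d]
    v = v.repeat_interleave(group_size, dim=1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5  # [b, h_q, s, s]
    return F.softmax(scores, dim=-1) @ v

# Usage: 8 query heads sharing 2 key-value heads (G = 2).
q = torch.randn(2, 8, 16, 64)
k = v = torch.randn(2, 2, 16, 64)
out = grouped_query_attention(q, k, v, num_groups=2)  # [2, 8, 16, 64]
```

The number of groups interpolates between the two extremes: the key-value cache shrinks by a factor of num_q_heads / num_groups relative to multi-head attention, which is what drives the decoder-inference speedup, while keeping more than one key-value head preserves most of the quality that MQA gives up.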