Contextual Speech Recognition with Difficult Negative Training Examples
Abstract
Improving the representation of contextual information is key to unlocking the potential of end-to-end (E2E) automatic speech recognition (ASR). In this work, we present a novel and simple approach for training an ASR context mechanism with difficult negative examples. The main idea is to focus on proper nouns (e.g., unique entities such as names of people and places) in the reference transcript and use phonetically similar phrases as negative examples, encouraging the neural model to learn more discriminative representations. We apply our approach to an end-to-end contextual ASR model that jointly learns to transcribe and select the correct context items. We show that our proposed method gives up to 53.1% relative improvement in word error rate (WER) across several benchmarks.
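The hard-negative mining described above can be sketched roughly as follows. This is an illustrative toy, not the paper's actual pipeline: the capitalized-word heuristic for proper nouns, the crude vowel-dropping phonetic key, and the `mine_hard_negatives` / `bias_vocab` names are all assumptions made for the example; the paper presumably measures phonetic similarity over real phoneme sequences.

```python
import re
from difflib import SequenceMatcher

def phonetic_key(word):
    # Crude phonetic key (assumption -- a rough Soundex-like stand-in):
    # lowercase, drop vowels after the first letter, collapse repeats.
    w = word.lower()
    head, rest = w[0], re.sub(r"[aeiou]", "", w[1:])
    return re.sub(r"(.)\1+", r"\1", head + rest)

def mine_hard_negatives(reference, candidate_phrases, k=3):
    """For each proper noun in `reference` (naive heuristic: capitalized,
    non-sentence-initial tokens), return the k candidates whose phonetic
    keys are most similar -- these act as difficult negatives that the
    context mechanism must learn to reject."""
    tokens = reference.split()
    proper = [t.strip(".,") for i, t in enumerate(tokens)
              if i > 0 and t[0].isupper()]
    negatives = {}
    for name in proper:
        key = phonetic_key(name)
        scored = sorted(
            candidate_phrases,
            key=lambda c: SequenceMatcher(None, key, phonetic_key(c)).ratio(),
            reverse=True,
        )
        # Exclude the true entity itself; keep the k closest confusables.
        negatives[name] = [c for c in scored if c.lower() != name.lower()][:k]
    return negatives

# Hypothetical bias list: "John" should attract "Jon"/"Joan", not "Maine".
bias_vocab = ["Jon", "Joan", "John", "Juan", "Main Street", "Maine"]
negs = mine_hard_negatives("call John tomorrow", bias_vocab, k=2)
print(negs)  # → {'John': ['Jon', 'Joan']}
```

At training time, negatives mined this way would be added to the context item list alongside the correct entity, so the model cannot succeed by matching coarse acoustic features alone.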