Audio-visual speech recognition with background music using single-channel source separation
(Top 13% of 2012 papers by citations)
Abstract
In this paper, we consider audio-visual speech recognition in the presence of background music. The proposed algorithm integrates audio-visual speech recognition with single-channel source separation (SCSS), and we apply it to recognize speech that is mixed with music signals. First, an SCSS algorithm based on nonnegative matrix factorization (NMF) and spectral masks separates the speech signal from the background music in the magnitude spectral domain. After the speech is separated from the music, standard audio-visual speech recognition (AVSR) is performed using multi-stream hidden Markov models. By combining the two approaches, we aim to improve recognition accuracy both by preprocessing the audio signal with SCSS and by supporting the recognition task with visual information. Experimental results show that combining audio-visual speech recognition with source separation yields substantial improvements in the accuracy of the speech recognition system.
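The separation stage described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes speech and music basis matrices are trained beforehand with NMF on isolated magnitude spectrograms, and it uses Lee–Seung multiplicative updates with a Euclidean cost and a Wiener-like soft mask; all function names and parameters here are illustrative choices.

```python
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-9, seed=0):
    """Factorize a nonnegative matrix V ~= W @ H (multiplicative updates,
    Euclidean cost). V is a magnitude spectrogram, r the number of bases."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, r)) + eps
    H = rng.random((r, T)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

def separate(V_mix, W_speech, W_music, n_iter=200, eps=1e-9):
    """Decompose the mixture spectrogram on the fixed, pre-trained
    speech and music bases, then apply a soft spectral mask."""
    W = np.hstack([W_speech, W_music])          # concatenated dictionary
    rng = np.random.default_rng(1)
    H = rng.random((W.shape[1], V_mix.shape[1])) + eps
    for _ in range(n_iter):                     # update activations only
        H *= (W.T @ V_mix) / (W.T @ W @ H + eps)
    ks = W_speech.shape[1]
    S = W_speech @ H[:ks]                       # speech magnitude estimate
    M = W_music @ H[ks:]                        # music magnitude estimate
    mask = S / (S + M + eps)                    # soft mask in [0, 1]
    return mask * V_mix, (1.0 - mask) * V_mix   # speech, music estimates
```

In practice the separated speech magnitudes would be combined with the mixture phase and inverted back to a waveform before feature extraction for the AVSR stage; because the two masks sum to one, the speech and music estimates reconstruct the mixture exactly.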