Machine Speech Chain with One-shot Speaker Adaptation
Top 10% of 2018 papers
Abstract
In previous work, we developed a closed-loop speech chain model based on deep learning, in which the architecture enabled the automatic speech recognition (ASR) and text-to-speech synthesis (TTS) components to mutually improve their performance. This was accomplished by having the two components teach each other using both labeled and unlabeled data. This approach could significantly improve model performance on a single-speaker speech dataset, but only a slight gain was achieved in multi-speaker tasks. Furthermore, the model is still unable to handle unseen speakers. In this paper, we present a new speech chain mechanism that integrates a speaker recognition model inside the loop. We also propose extending the capability of TTS to handle unseen speakers by implementing one-shot speaker adaptation. This enables TTS to mimic the voice characteristics of another speaker from only a single speaker sample, even when synthesizing from text without any speaker information. In the speech chain loop mechanism, ASR also benefits from the ability to further learn an arbitrary speaker's characteristics from the generated speech waveform, resulting in a significant improvement in the recognition rate.
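The closed-loop mechanism described above can be sketched as follows. This is a minimal illustrative stand-in, not the paper's actual implementation (which uses sequence-to-sequence ASR/TTS models and a trained speaker-embedding network); all class and function names here are hypothetical:

```python
# Toy sketch of one unsupervised speech-chain iteration with one-shot
# speaker adaptation. Real models would be neural networks; these
# stand-ins only show the data flow of the loop.
import random

class SpeakerEncoder:
    """Stand-in for the speaker recognition model: maps a single
    utterance to a fixed-size speaker embedding (one-shot)."""
    def embed(self, utterance):
        # Deterministic pseudo-embedding derived from the sample.
        random.seed(hash(utterance) % (2**32))
        return [random.random() for _ in range(4)]

class TTS:
    """Stand-in TTS conditioned on a speaker embedding."""
    def synthesize(self, text, speaker_embedding):
        return f"wav({text}|spk={speaker_embedding[0]:.2f})"

class ASR:
    """Stand-in ASR; records the pseudo-labeled pairs it trains on."""
    def __init__(self):
        self.training_pairs = []
    def transcribe(self, wav):
        # Toy inverse of the TTS stand-in above.
        return wav.split("(")[1].split("|")[0] if "(" in wav else wav
    def train(self, wav, text):
        self.training_pairs.append((wav, text))

def speech_chain_step(asr, tts, encoder, unlabeled_text, one_shot_sample):
    """One loop iteration on unlabeled text: TTS generates speech in the
    voice of an unseen speaker, and ASR learns from the result."""
    spk = encoder.embed(one_shot_sample)       # one-shot adaptation
    wav = tts.synthesize(unlabeled_text, spk)  # text -> speech
    hyp = asr.transcribe(wav)                  # speech -> text
    asr.train(wav, unlabeled_text)             # pseudo-supervised update
    return hyp
```

A symmetric step runs in the other direction (ASR transcribes unlabeled speech, and TTS trains on the hypothesized text), which is how the two components teach each other.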
Related Papers
- Robust several-speaker speech recognition with highly dependable online speaker adaptation and identification (2010), 18 citations
- Technology of Speaker Adaptation in Speech Recognition and Its Development Trend (2003)
- An incremental speaker-adaptation technique for hybrid HMM-MLP recognizer (2002), 8 citations
- Multi-speaker adaptation for robust speech recognition under ubiquitous environment (2009), 1 citation
- Unsupervised Speaker Adaptation Using Speaker-Class Models for Lecture Speech Recognition (2010)