Pretraining Techniques for Sequence-to-Sequence Voice Conversion
Abstract
Sequence-to-sequence (seq2seq) voice conversion (VC) models are attractive owing to their ability to convert prosody. Nonetheless, without sufficient data, seq2seq VC models can suffer from unstable training and mispronunciation problems in the converted speech, which makes them far from practical. To tackle these shortcomings, we propose to transfer knowledge from other speech processing tasks where large-scale corpora are easily available, namely text-to-speech (TTS) and automatic speech recognition (ASR). We argue that VC models initialized with such pretrained ASR or TTS model parameters can generate effective hidden representations for high-fidelity, highly intelligible converted speech. In this work, we examine our proposed method in a parallel, one-to-one setting. We employ recurrent neural network (RNN)-based and Transformer-based models, and through systematic experiments we demonstrate the effectiveness of the pretraining scheme and the superiority of Transformer-based models over RNN-based models in terms of intelligibility, naturalness, and similarity.
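To illustrate the parameter-transfer idea in the abstract, the sketch below shows one plausible way to initialize a seq2seq VC model from a pretrained TTS checkpoint before fine-tuning on a small parallel corpus. This is a minimal PyTorch sketch, not the authors' released implementation: the `Encoder`, `Decoder`, and `Seq2SeqVC` modules and the checkpoint path `tts_pretrained.pt` are hypothetical placeholders, and only parameters whose names and shapes match are copied over.

```python
# Minimal sketch of TTS-to-VC parameter transfer (hypothetical modules/paths).
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Toy seq2seq encoder: a TTS model reads text embeddings here,
    while the VC model reads source-speaker acoustic features."""
    def __init__(self, in_dim=80, hid_dim=256):
        super().__init__()
        self.prenet = nn.Linear(in_dim, hid_dim)
        self.rnn = nn.LSTM(hid_dim, hid_dim, batch_first=True)

    def forward(self, x):
        h, _ = self.rnn(torch.relu(self.prenet(x)))
        return h


class Decoder(nn.Module):
    """Toy decoder producing target mel-spectrogram frames."""
    def __init__(self, hid_dim=256, out_dim=80):
        super().__init__()
        self.rnn = nn.LSTM(hid_dim, hid_dim, batch_first=True)
        self.proj = nn.Linear(hid_dim, out_dim)

    def forward(self, h):
        o, _ = self.rnn(h)
        return self.proj(o)


class Seq2SeqVC(nn.Module):
    """Toy VC model: source mel-spectrogram in, converted one out."""
    def __init__(self):
        super().__init__()
        self.encoder = Encoder()
        self.decoder = Decoder()

    def forward(self, src_mel):
        return self.decoder(self.encoder(src_mel))


def init_from_pretrained(vc_model, ckpt_path):
    """Copy pretrained weights (e.g. from a TTS model trained on a large
    corpus) into the VC model; transfer only name- and shape-matched
    tensors, leaving the rest at their random initialization."""
    pretrained = torch.load(ckpt_path, map_location="cpu")
    own_state = vc_model.state_dict()
    transferred = {
        k: v for k, v in pretrained.items()
        if k in own_state and v.shape == own_state[k].shape
    }
    own_state.update(transferred)
    vc_model.load_state_dict(own_state)
    return sorted(transferred)


if __name__ == "__main__":
    vc = Seq2SeqVC()
    # Stand-in for a checkpoint pretrained elsewhere on a large corpus:
    torch.save(Seq2SeqVC().state_dict(), "tts_pretrained.pt")
    copied = init_from_pretrained(vc, "tts_pretrained.pt")
    print(f"transferred {len(copied)} parameter tensors")
    # Fine-tuning on the small parallel VC corpus would start here.
```

In this setup, the pretrained parameters serve as an initialization rather than a frozen feature extractor: all weights remain trainable during fine-tuning, which matches the knowledge-transfer scheme described in the abstract.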