Speech Resynthesis from Discrete Disentangled Self-Supervised Representations
Top 1% of 2021 papers
Abstract
We propose using self-supervised discrete representations for the task of speech resynthesis. To generate a disentangled representation, we separately extract low-bitrate representations for speech content, prosodic information, and speaker identity. This allows us to synthesize speech in a controllable manner. We analyze various state-of-the-art self-supervised representation learning methods and shed light on the advantages of each, considering both reconstruction quality and disentanglement properties. Specifically, we evaluate F0 reconstruction, speaker identification performance (for both resynthesis and voice conversion), the intelligibility of the recordings, and overall quality using subjective human evaluation. Lastly, we demonstrate how these representations can be used for an ultra-lightweight speech codec. Using the obtained representations, we reach a rate of 365 bits per second while providing better speech quality than the baseline methods. Audio samples can be found at speechbot.github.io/resynthesis.
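The bitrate figure above follows from simple arithmetic: each discrete stream costs its frame rate times log2 of its codebook size, and a once-per-utterance speaker embedding adds a negligible per-second cost. The sketch below illustrates this accounting with placeholder frame rates and codebook sizes chosen for illustration only; they are assumptions, not the paper's actual configuration, so the total comes out near, but not exactly at, the reported 365 bps.

```python
import math

def stream_bitrate(frame_rate_hz: float, codebook_size: int) -> float:
    """Bits per second for one discrete stream: rate * log2(|codebook|)."""
    return frame_rate_hz * math.log2(codebook_size)

# Illustrative numbers (assumptions, not the paper's exact setup):
# content units at 50 Hz from a 100-entry codebook, quantized F0 at
# 12.5 Hz from a 20-entry codebook. The speaker embedding is sent once
# per utterance, so its amortized per-second cost is ignored here.
content_bps = stream_bitrate(50.0, 100)   # ~332 bps
prosody_bps = stream_bitrate(12.5, 20)    # ~54 bps
total_bps = content_bps + prosody_bps

print(f"total: {total_bps:.1f} bps")
```

Swapping in a smaller codebook or a lower frame rate trades intelligibility for bitrate, which is the knob such a codec exposes.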