Generative Spoken Dialogue Language Modeling
Transactions of the Association for Computational Linguistics, 2023, Vol. 11, pp. 250–266
Top 10% of 2023 papers by citations
Tu Anh Nguyen, Eugene Kharitonov, Jade Copet, Yossi Adi, Wei-Ning Hsu, Ali Elkahky, Paden Tomasello, Robin Algayres, Benoît Sagot, Abdelrahman Mohamed, Emmanuel Dupoux
Abstract
We introduce dGSLM, the first “textless” model able to generate audio samples of naturalistic spoken dialogues. It uses recent work on unsupervised spoken unit discovery coupled with a dual-tower transformer architecture with cross-attention, trained on 2000 hours of two-channel raw conversational audio (Fisher dataset) without any text or labels. We show that our model is able to generate speech, laughter, and other paralinguistic signals in the two channels simultaneously, and reproduces more naturalistic and fluid turn-taking compared to a text-based cascaded model.
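The dual-tower idea in the abstract can be illustrated with a toy sketch: each tower processes one audio channel's discrete-unit embeddings with self-attention, then cross-attends to the other channel's states, so that each speaker's stream is conditioned on the other's. This is a minimal numpy illustration of that layer structure, not the paper's implementation; all function names, weight shapes, and dimensions here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # single-head scaled dot-product attention
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

def tower_layer(x_self, x_other, Wq, Wk, Wv, Wcq, Wck, Wcv):
    # self-attention over this channel's own unit sequence
    h = x_self + attention(x_self @ Wq, x_self @ Wk, x_self @ Wv)
    # cross-attention: queries from this channel, keys/values from the other
    return h + attention(h @ Wcq, x_other @ Wck, x_other @ Wcv)

# toy setup: T discrete units per channel, model dimension d (hypothetical sizes)
rng = np.random.default_rng(0)
T, d = 6, 8
ch_a = rng.normal(size=(T, d))  # unit embeddings, channel A
ch_b = rng.normal(size=(T, d))  # unit embeddings, channel B
Ws = [rng.normal(size=(d, d)) * 0.1 for _ in range(6)]

# the two towers are symmetric: each attends to itself, then to the other
out_a = tower_layer(ch_a, ch_b, *Ws)
out_b = tower_layer(ch_b, ch_a, *Ws)
print(out_a.shape, out_b.shape)
```

Because each tower emits a sequence the same length as its own channel, both channels can be generated step by step in parallel, which is what lets overlapping events such as laughter and backchannels appear in both streams simultaneously.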