Multilingual acoustic models using distributed deep neural networks
Top 1% of 2013 papers by citations
Abstract
Today's speech recognition technology is mature enough to be useful for many practical applications. In this context, it is of paramount importance to train accurate acoustic models for many languages within given resource constraints such as data, processing power, and time. Multilingual training has the potential to solve the data issue and close the performance gap between resource-rich and resource-scarce languages. Neural networks lend themselves naturally to parameter sharing across languages, and distributed implementations have made it feasible to train large networks. In this paper, we present experimental results for cross- and multi-lingual network training of eleven Romance languages on 10k hours of data in total. The average relative gains over the monolingual baselines are 4%/2% (data-scarce/data-rich languages) for cross-lingual and 7%/2% for multilingual training. However, the additional gain from jointly training the languages on all data comes at an increased training time of roughly four weeks, compared to two weeks (monolingual) and one week (cross-lingual).
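The parameter sharing the abstract refers to is commonly realized by giving all languages a common stack of hidden layers while keeping a separate softmax output layer per language. The sketch below illustrates that layout with NumPy; the dimensions, language codes, and state counts are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper): acoustic feature input,
# two shared hidden layers, and per-language HMM-state softmax targets.
INPUT_DIM, HIDDEN_DIM = 40, 128
LANG_STATES = {"fr": 50, "it": 60, "pt": 55}  # toy-sized target inventories

# Shared hidden-layer weights, reused by every language.
W1 = rng.standard_normal((INPUT_DIM, HIDDEN_DIM)) * 0.01
W2 = rng.standard_normal((HIDDEN_DIM, HIDDEN_DIM)) * 0.01

# Language-specific output layers: only these parameters differ by language.
W_out = {lang: rng.standard_normal((HIDDEN_DIM, n)) * 0.01
         for lang, n in LANG_STATES.items()}

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def forward(features, lang):
    """Run the shared hidden stack, then the given language's softmax layer."""
    h = np.maximum(0.0, features @ W1)  # ReLU
    h = np.maximum(0.0, h @ W2)
    return softmax(h @ W_out[lang])

# One frame of features produces state posteriors for the chosen language.
frame = rng.standard_normal((1, INPUT_DIM))
posteriors = forward(frame, "fr")
print(posteriors.shape)  # (1, 50)
```

Under this layout, gradients from every language's training data update the shared layers, which is how data-scarce languages can benefit from data-rich ones.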