An Unsupervised Autoregressive Model for Speech Representation Learning
Abstract
This paper proposes a novel unsupervised autoregressive neural model for learning generic speech representations. In contrast to other speech representation learning methods that aim to remove noise or speaker variabilities, ours is designed to preserve information for a wide range of downstream tasks. In addition, the proposed model does not require any phonetic or word boundary labels, allowing the model to benefit from large quantities of unlabeled data. Speech representations learned by our model significantly improve performance on both phone classification and speaker verification over the surface features and other supervised and unsupervised approaches. Further analysis shows that different levels of speech information are captured by our model at different layers. In particular, the lower layers tend to be more discriminative for speakers, while the upper layers provide more phonetic content.
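The abstract does not spell out the training objective, so the sketch below is only a rough illustration of what an unsupervised autoregressive speech model can look like: a recurrent network reads unlabeled acoustic frames left to right and is trained to predict a frame a few steps into the future, which requires no phonetic, word-boundary, or speaker labels. The class name, the LSTM layers, the layer sizes, the `time_shift` horizon, and the L1 reconstruction loss are all illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class AutoregressivePredictiveModel(nn.Module):
    """Sketch of an autoregressive speech model: stacked RNN layers read
    acoustic frames left to right and predict a frame `time_shift` steps
    ahead. The per-layer hidden states serve as learned representations."""

    def __init__(self, input_dim=80, hidden_dim=512, num_layers=3):
        super().__init__()
        self.rnn = nn.LSTM(input_dim, hidden_dim, num_layers, batch_first=True)
        self.proj = nn.Linear(hidden_dim, input_dim)  # map hidden state back to frame space

    def forward(self, frames):
        # frames: (batch, time, input_dim), e.g. log-mel spectrogram frames
        hidden, _ = self.rnn(frames)
        return self.proj(hidden), hidden


def training_step(model, frames, time_shift=3):
    """One unsupervised step: predict the frame `time_shift` steps ahead of
    each position, so only raw, unlabeled speech is required."""
    predictions, _ = model(frames[:, :-time_shift, :])
    targets = frames[:, time_shift:, :]
    return nn.functional.l1_loss(predictions, targets)


if __name__ == "__main__":
    model = AutoregressivePredictiveModel()
    dummy = torch.randn(4, 100, 80)   # 4 utterances, 100 frames, 80-dim features
    loss = training_step(model, dummy)
    loss.backward()                   # representations improve as future-frame prediction improves
    print(f"reconstruction loss: {loss.item():.3f}")
```

Under this reading, probing the hidden states layer by layer (e.g., training small classifiers on each layer's output) is one way to observe the split the abstract describes, with lower layers more informative about speakers and upper layers more informative about phonetic content.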