On the compression of recurrent neural networks with an application to LVCSR acoustic modeling for embedded speech recognition
2016, pp. 5970–5974
Top 1% of 2016 papers
Abstract
We study the problem of compressing recurrent neural networks (RNNs). In particular, we focus on compressing RNN acoustic models, motivated by the goal of building compact and accurate speech recognition systems that can run efficiently on mobile devices. We present a technique for general recurrent model compression that jointly compresses both the recurrent and the non-recurrent inter-layer weight matrices. We find that the proposed technique allows us to reduce the size of our Long Short-Term Memory (LSTM) acoustic model to a third of its original size with negligible loss in accuracy.
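The abstract does not spell out the compression mechanism, but the core building block of this family of techniques is low-rank factorization: replacing a weight matrix with the product of two thinner matrices obtained by truncated SVD. As a hedged illustration (not the paper's exact joint-compression scheme, and with hypothetical layer sizes), a minimal sketch in NumPy:

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Approximate W (m x n) as A @ B, with A (m x rank) and B (rank x n),
    via truncated SVD -- the standard low-rank compression step."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # fold singular values into the left factor
    B = Vt[:rank, :]
    return A, B

# Hypothetical sizes: the square recurrent weight matrix of a 512-unit layer.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))
A, B = low_rank_factorize(W, rank=128)

orig_params = W.size                 # 262144
compressed_params = A.size + B.size  # 131072, i.e. half the parameters
print(f"{orig_params} -> {compressed_params} parameters")
```

At inference time the layer computes `(x @ A) @ B` instead of `x @ W`, trading a small approximation error (controlled by the discarded singular values) for fewer parameters and multiplies. The rank is the knob that sets the accuracy/size trade-off.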