Accurate Supervised and Semi-Supervised Machine Reading for Long Documents
Top 10% of 2017 papers
Abstract
We introduce a hierarchical architecture for machine reading capable of extracting precise information from long documents. The model divides the document into small, overlapping windows and encodes all windows in parallel with an RNN. It then attends over these window encodings, reducing them to a single encoding, which is decoded into an answer by a sequence decoder. This hierarchical approach allows the model to scale to longer documents without increasing the number of sequential steps. In a supervised setting, our model achieves a state-of-the-art accuracy of 76.8 on the WikiReading dataset. We also evaluate the model in a semi-supervised setting by downsampling the WikiReading training set to create progressively smaller amounts of supervision, while still using the full unlabeled document corpus to train a sequence autoencoder on document windows. We evaluate models that can reuse the autoencoder's states and outputs without fine-tuning its weights, allowing for more efficient training and inference.
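The pipeline described in the abstract — split the document into small overlapping windows, encode each window, then attend over the window encodings to produce one document encoding — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the toy RNN cell, the window size, the stride, and the dot-product attention query are all illustrative assumptions.

```python
import numpy as np

def make_windows(tokens, window_size, stride):
    """Split a token sequence into small, overlapping windows
    (overlap occurs whenever stride < window_size)."""
    last_start = max(len(tokens) - window_size, 0)
    return [tokens[s:s + window_size] for s in range(0, last_start + 1, stride)]

def encode_window(window_embs, W, U):
    """Toy vanilla-RNN encoder: fold the window's token embeddings
    into a single final hidden state. All windows can be encoded
    in parallel since they are independent."""
    h = np.zeros(W.shape[0])
    for x in window_embs:
        h = np.tanh(W @ h + U @ x)
    return h

def attend(window_encodings, query):
    """Soft attention over window encodings, reducing them to a
    single document encoding (which a decoder would then consume)."""
    H = np.stack(window_encodings)            # (num_windows, hidden)
    scores = H @ query                        # (num_windows,)
    weights = np.exp(scores - scores.max())   # stable softmax
    weights /= weights.sum()
    return weights @ H                        # (hidden,)
```

Because every window is encoded independently, the number of sequential RNN steps is bounded by the window size rather than the document length, which is the scaling property the abstract refers to.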