Split-Brain Autoencoders: Unsupervised Learning by Cross-Channel Prediction
2017, pp. 645–654
Top 1% of 2017 papers by citations
Abstract
We propose split-brain autoencoders, a straightforward modification of the traditional autoencoder architecture, for unsupervised representation learning. The method adds a split to the network, resulting in two disjoint sub-networks. Each sub-network is trained to perform a difficult task: predicting one subset of the data channels from another. Together, the sub-networks extract features from the entire input signal. By forcing the network to solve cross-channel prediction tasks, we induce a representation within the network which transfers well to other, unseen tasks. This method achieves state-of-the-art performance on several large-scale transfer learning benchmarks.
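The data flow described in the abstract can be sketched in a few lines: split the input channels into two subsets, give each disjoint sub-network one subset as input and the *other* subset as its prediction target, then concatenate both sub-networks' features to represent the full signal. The sketch below is a toy illustration with random data and linear layers, not the paper's convolutional architecture; the channel split (a 1-channel vs. 2-channel subset, loosely mimicking L vs. ab in Lab color space) and all dimensions are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy input split into two channel subsets (assumption: 1-channel
# vs. 2-channel, loosely mimicking L vs. ab in Lab color space).
B, D = 8, 16                          # batch size, per-channel dim
x1 = rng.standard_normal((B, D))      # subset 1 (e.g. lightness L)
x2 = rng.standard_normal((B, 2 * D))  # subset 2 (e.g. ab chroma)

H = 32  # feature width of each disjoint sub-network (assumed)

# Two disjoint sub-networks: each encodes its own channel subset
# and is trained to predict the other subset (cross-channel task).
W1_enc = rng.standard_normal((D, H)) * 0.1
W1_dec = rng.standard_normal((H, 2 * D)) * 0.1   # F1: x1 -> x2
W2_enc = rng.standard_normal((2 * D, H)) * 0.1
W2_dec = rng.standard_normal((H, D)) * 0.1       # F2: x2 -> x1

def relu(z):
    return np.maximum(z, 0.0)

def forward(x1, x2):
    f1 = relu(x1 @ W1_enc)   # features extracted from subset 1
    f2 = relu(x2 @ W2_enc)   # features extracted from subset 2
    pred_x2 = f1 @ W1_dec    # sub-network 1 predicts subset 2
    pred_x1 = f2 @ W2_dec    # sub-network 2 predicts subset 1
    return f1, f2, pred_x1, pred_x2

f1, f2, pred_x1, pred_x2 = forward(x1, x2)

# The transferable representation is the concatenation of both
# sub-networks' features, so it covers the entire input signal.
features = np.concatenate([f1, f2], axis=1)

# Sum of the two cross-channel regression losses (the paper also
# considers classification losses over quantized channel values).
loss = np.mean((pred_x2 - x2) ** 2) + np.mean((pred_x1 - x1) ** 2)
print(features.shape)  # (8, 64): H features from each sub-network
```

Training would backpropagate `loss` through each sub-network independently; only at feature-extraction time are the two halves used together.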
Related Papers
- Representation learning via an integrated autoencoder for unsupervised domain adaptation (2023), 25 citations
- Representation Learning: Recommendation With Knowledge Graph via Triple-Autoencoder (2022), 6 citations
- Self-taught Learning with Residual Sparse Autoencoders for HEp-2 Cell Staining Pattern Recognition (2018), 1 citation
- Residual Sparse Autoencoders for Unsupervised Feature Learning and Its Application to HEp-2 Cell Staining Pattern Recognition (2019), 1 citation