General Multi-label Image Classification with Transformers
Abstract
Multi-label image classification is the task of predicting a set of labels corresponding to objects, attributes, or other entities present in an image. In this work we propose the Classification Transformer (C-Tran), a general framework for multi-label image classification that leverages Transformers to exploit the complex dependencies among visual features and labels. Our approach consists of a Transformer encoder trained to predict a set of target labels given an input set of masked labels and visual features from a convolutional neural network. A key ingredient of our method is a label mask training objective that uses a ternary encoding scheme to represent the state of the labels as positive, negative, or unknown during training. Our model shows state-of-the-art performance on challenging datasets such as COCO and Visual Genome. Moreover, because our model explicitly represents the label state during training, it is more general, allowing us to produce improved results for images with partial or extra label annotations during inference. We demonstrate this additional capability on the COCO, Visual Genome, News-500, and CUB image datasets.
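The masking step described above can be sketched in a few lines. The following is a minimal illustration of the ternary label-state encoding and random label masking, not the authors' implementation; the function name, the mask ratio, and the +1/-1/0 encoding values are illustrative assumptions.

```python
import random

# Assumed ternary encoding of label states (illustrative values):
# positive = 1, negative = -1, unknown (masked) = 0.
POSITIVE, NEGATIVE, UNKNOWN = 1, -1, 0

def mask_labels(labels, mask_ratio, rng=random):
    """Label mask training step (sketch): hide a random subset of
    ground-truth labels; the model is trained to predict the hidden ones
    from the image features plus the remaining known label states.

    labels: list of 0/1 ground-truth indicators, one per class.
    Returns (input_states, target_indices): the ternary states fed to
    the Transformer encoder alongside visual features, and the indices
    of the masked labels on which the loss would be computed.
    """
    n = len(labels)
    n_masked = max(1, int(mask_ratio * n))
    masked = set(rng.sample(range(n), n_masked))
    input_states = [
        UNKNOWN if i in masked else (POSITIVE if y else NEGATIVE)
        for i, y in enumerate(labels)
    ]
    return input_states, sorted(masked)

# Example: 6 classes, half the label states hidden. At inference time the
# same interface covers the paper's partial-annotation setting: known
# labels keep their +1/-1 state, all others are set to unknown (0).
states, targets = mask_labels([1, 0, 0, 1, 1, 0], mask_ratio=0.5,
                              rng=random.Random(0))
```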