Decoupling the Role of Data, Attention, and Losses in Multimodal Transformers
Abstract
Recently, multimodal transformer models have gained popularity because their performance on downstream tasks suggests they learn rich visual-linguistic representations. Focusing on zero-shot image retrieval tasks, we study three important factors that can impact the quality of learned representations: pretraining data, the attention mechanism, and loss functions. By pretraining models on six datasets, we observe that dataset noise and language similarity to our downstream task are important indicators of model performance. Through architectural analysis, we learn that models with a multimodal attention mechanism can outperform deeper models with modality-specific attention mechanisms. Finally, we show that successful contrastive losses used in the self-supervised learning literature do not yield similar performance gains when used in multimodal transformers.
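The contrastive losses referenced in the abstract are typically variants of the symmetric InfoNCE objective used for cross-modal retrieval (as in CLIP-style pretraining). The sketch below is an illustrative NumPy implementation under that assumption; the paper's exact loss formulation and hyperparameters (e.g. the temperature) may differ.

```python
import numpy as np

def info_nce_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    Illustrative sketch of a standard cross-modal contrastive objective;
    matching image-text pairs sit on the diagonal of the similarity matrix
    and act as positives, all other pairs in the batch act as negatives.
    """
    # L2-normalize so dot products are cosine similarities
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # (B, B) similarity matrix
    labels = np.arange(len(logits))             # positives on the diagonal

    def cross_entropy(l, y):
        l = l - l.max(axis=1, keepdims=True)    # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].mean()

    # Symmetric: average the image-to-text and text-to-image directions
    return 0.5 * (cross_entropy(logits, labels)
                  + cross_entropy(logits.T, labels))

rng = np.random.default_rng(0)
img = rng.normal(size=(4, 8))
loss_random = info_nce_loss(img, rng.normal(size=(4, 8)))
loss_aligned = info_nce_loss(img, img)  # identical pairs: near-zero loss
```

At the retrieval stage, the same cosine-similarity matrix is used directly: for zero-shot image retrieval, images are ranked by their similarity to the query caption's embedding.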