COCOA
Top 10% of 2022 papers
Abstract
Self-Supervised Learning (SSL) is a new paradigm for learning discriminative representations without labeled data, and has achieved results comparable to, or even surpassing, its supervised counterparts. Contrastive Learning (CL) is one of the best-known approaches in SSL, and it attempts to learn general, informative representations of data. CL methods have mostly been developed for applications in computer vision and natural language processing, where only a single sensor modality is used. A majority of pervasive computing applications, however, exploit data from a range of different sensor modalities. While existing CL methods are limited to learning from one or two data sources, we propose COCOA (Cross mOdality COntrastive leArning), a self-supervised model that employs a novel objective function to learn quality representations from multisensor data by computing the cross-correlation between different data modalities and minimizing the similarity between irrelevant instances. We evaluate the effectiveness of COCOA against eight recently introduced state-of-the-art self-supervised models and two supervised baselines across five public datasets. We show that COCOA achieves superior classification performance to all other approaches. COCOA is also far more label-efficient than the other baselines, including the fully supervised model, when using only one-tenth of the available labeled data.
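The abstract describes the objective only at a high level: pull together embeddings of the same instance across modalities (high cross-modal correlation) while pushing apart irrelevant instances. The following is a minimal NumPy sketch of such a cross-modality contrastive loss; the function name, tensor layout, and temperature value are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cocoa_style_loss(views, temperature=0.5):
    """Hypothetical sketch of a cross-modality contrastive objective.

    views: array of shape (M, B, D) holding embeddings from M sensor
    modalities for B time-aligned samples, each of dimension D.
    """
    M, B, _ = views.shape
    # L2-normalize so dot products become cosine similarities.
    v = views / np.linalg.norm(views, axis=-1, keepdims=True)

    # Positive term: cross-modal agreement between embeddings of the
    # same sample observed through different modalities.
    pos = 0.0
    for i in range(M):
        for j in range(i + 1, M):
            pos += np.mean(np.sum(v[i] * v[j], axis=-1))
    pos /= M * (M - 1) / 2  # average over modality pairs

    # Negative term: similarity between different (irrelevant) samples
    # within each modality, which the objective drives down.
    neg = 0.0
    mask = ~np.eye(B, dtype=bool)  # exclude self-similarity
    for i in range(M):
        sim = v[i] @ v[i].T
        neg += np.mean(np.exp(sim[mask] / temperature))
    neg /= M

    # Minimizing this maximizes cross-modal correlation and
    # minimizes similarity between irrelevant instances.
    return neg - pos
```

On synthetic data, perfectly aligned modalities (identical embeddings per sample) yield a lower loss than unrelated modalities, which is the behavior the objective is meant to encode.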
Related Papers
- Leveraging Intra and Inter Modality Relationship for Multimodal Fake News Detection (2022), 69 citations
- Factors Influencing Modality Choice in Multimodal Applications (2008), 15 citations
- The Mismatch of Modalities and Its Effects (2012)
- Study of communication modalities for teaching distance information (2022)
- Types of Modality and Its Inconsistencies (2022)