
Multimodal Contrastive Training for Visual Representation Learning


Abstract:

We develop an approach to learning visual representations that embraces multimodal data, driven by a combination of intra- and inter-modal similarity preservation objectives. Unlike existing visual pre-training methods, which solve a proxy prediction task in a single domain, our method simultaneously exploits intrinsic data properties within each modality and semantic information from cross-modal correlation, thereby improving the quality of the learned visual representations. By combining multimodal training in a unified framework with different types of contrastive losses, our method can learn more powerful and generic visual features. We first train our model on COCO and evaluate the learned visual representations on various downstream tasks, including image classification, object detection, and instance segmentation. For example, the visual representations pre-trained on COCO by our method achieve a state-of-the-art top-1 validation accuracy of 55.3% on ImageNet classification under the common transfer protocol. We also evaluate our method on the large-scale Stock images dataset and show its effectiveness on multi-label image tagging and cross-modal retrieval tasks.
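
To make the abstract's "intra- and inter-modal similarity preservation objectives" concrete, the sketch below combines an image-to-image contrastive loss over two augmented views with an image-to-text contrastive loss over paired captions, each instantiated as InfoNCE. This is a minimal illustration of the general idea only; the encoder names, temperature, and loss weights are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of a combined intra-/inter-modal contrastive objective.
# Assumptions (not from the paper): encoders img_enc/txt_enc, temperature
# 0.07, and equal loss weights are placeholders for illustration.
import torch
import torch.nn.functional as F

def info_nce(queries, keys, temperature=0.07):
    """Standard InfoNCE: the i-th query should match the i-th key."""
    queries = F.normalize(queries, dim=-1)
    keys = F.normalize(keys, dim=-1)
    logits = queries @ keys.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(queries.size(0), device=queries.device)
    return F.cross_entropy(logits, targets)

def multimodal_contrastive_loss(img_enc, txt_enc, view1, view2, captions,
                                lambda_intra=1.0, lambda_inter=1.0):
    z1 = img_enc(view1)    # embeddings of the first augmented image view
    z2 = img_enc(view2)    # embeddings of the second augmented image view
    t = txt_enc(captions)  # embeddings of the paired captions

    # Intra-modal term: two views of the same image attract each other.
    intra = info_nce(z1, z2)
    # Inter-modal term: an image and its caption attract each other,
    # symmetrized over both retrieval directions.
    inter = 0.5 * (info_nce(z1, t) + info_nce(t, z1))
    return lambda_intra * intra + lambda_inter * inter
```

Summing the two terms lets gradients from both objectives shape the image encoder, which is the sense in which the framework is "unified": the same visual features must respect both within-modality augmentation invariance and cross-modality semantic alignment.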
Date of Conference: 20-25 June 2021
Date Added to IEEE Xplore: 02 November 2021
Conference Location: Nashville, TN, USA

1. Introduction

Visual representation learning is crucial for many computer vision tasks, including image classification [9], [50], [27], [30], tagging [16], [23], object detection [17], [47], [40], and semantic and instance segmentation [41], [26]. Supervised pre-training on large-scale datasets [9] yields useful visual features that lead to state-of-the-art performance on these tasks, yet the fine-grained class labeling such datasets require [9] is prohibitively expensive. Self-supervised learning methods [4], [12], [59], [25], [5], [6] need no annotations, but they still demand either extremely large training sets or much longer training schedules.
