
End-to-End Pre-Training With Hierarchical Matching and Momentum Contrast for Text-Video Retrieval



Abstract:

Video-language pre-training and text-video retrieval have recently attracted significant attention with the explosion of multimedia data on the Internet. However, existing approaches to video-language pre-training typically make limited use of the hierarchical semantic information in videos, such as frame-level semantics and global video-level semantics. In this work, we present an end-to-end pre-training network with Hierarchical Matching and Momentum Contrast, named HMMC. The key idea is to exploit the hierarchical semantic information in videos via multilevel semantic matching between videos and texts. This design is motivated by the observation that if a video semantically matches a text (which can be a title, tag, or caption), the frames in this video usually have semantic connections with the text and show higher similarity to it than frames in other videos. Hierarchical matching is mainly realized by two proxy tasks: Video-Text Matching (VTM) and Frame-Text Matching (FTM). A third proxy task, Frame Adjacency Matching (FAM), is proposed to strengthen the single visual modality representations when training from scratch. Furthermore, a momentum contrast framework is introduced into HMMC to form a multimodal momentum contrast framework, enabling HMMC to incorporate more negative samples for contrastive learning, which contributes to the generalization of the learned representations. We also collected a large-scale Chinese video-language dataset (over 763k unique videos), named CHVTT, to explore the multilevel semantic connections between videos and texts. Experimental results on two major text-video retrieval benchmark datasets demonstrate the advantages of our methods. We release our code at https://github.com/cheetah003/HMMC.
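
To make the momentum contrast idea concrete, the following is a minimal sketch (not the authors' implementation) of a MoCo-style video-text contrastive step: a key encoder updated as an exponential moving average of the query encoder, and an InfoNCE loss in which a queue of past text keys supplies extra negatives. All names, dimensions, and hyperparameter values below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(encoder_q, encoder_k, m=0.999):
    # Key encoder tracks the query encoder as an exponential moving average.
    for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)

def vtm_momentum_loss(video_q, text_k, queue, temperature=0.07):
    """InfoNCE for Video-Text Matching with queued negatives.

    video_q: (B, D) video embeddings from the query encoder (gradients flow here)
    text_k:  (B, D) text embeddings from the momentum (key) encoder, detached
    queue:   (K, D) ring buffer of earlier text keys serving as extra negatives
    """
    video_q = F.normalize(video_q, dim=-1)
    text_k = F.normalize(text_k, dim=-1)
    queue = F.normalize(queue, dim=-1)
    l_pos = (video_q * text_k).sum(dim=-1, keepdim=True)  # (B, 1) matched pairs
    l_neg = video_q @ queue.t()                           # (B, K) queued negatives
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)  # the positive sits at index 0
```

Under the paper's hierarchical matching, Frame-Text Matching (FTM) would reuse the same loss with per-frame embeddings in place of video_q, and Frame Adjacency Matching (FAM) would analogously contrast frames of the same video against queued frames from other videos; the exact task formulations are given in the paper.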
Published in: IEEE Transactions on Image Processing ( Volume: 32)
Page(s): 5017 - 5030
Date of Publication: 15 May 2023


PubMed ID: 37186535


I. Introduction

With the explosive growth of Internet users, massive numbers of videos are created and uploaded to the Internet. The ability to accurately retrieve videos from this enormous collection with a text query is thus essential for quickly finding the relevant information we want. Text-video retrieval [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12] addresses this problem, and large-scale multimodal pre-training [13], [14], [15], [16], [17] has been shown to further boost retrieval performance. This paper studies video-language pre-training for text-video retrieval.

