
vid-TLDR: Training Free Token merging for Light-Weight Video Transformer


Abstract:

Video Transformers have become the prevalent solution for various video downstream tasks with superior expressive power and flexibility. However, these video Transformers suffer from heavy computational costs induced by the massive number of tokens across the entire video frames, which has been the major barrier to training and deploying the model. Further, the patches irrelevant to the main contents, e.g., backgrounds, degrade the generalization performance of models. To tackle these issues, we propose training-free token merging for lightweight video Transformer (vid-TLDR), which aims to enhance the efficiency of video Transformers by merging the background tokens without additional training. For vid-TLDR, we introduce a novel approach to capture the salient regions in videos only with the attention map. Further, we introduce a saliency-aware token merging strategy that drops the background tokens and sharpens the object scores. Our experiments show that vid-TLDR significantly mitigates the computational complexity of video Transformers while achieving competitive performance compared to the base model without vid-TLDR. Code is available at https://github.com/mlvlab/vid-TLDR.
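The abstract names two ingredients: a per-token saliency score computed only from the attention map, and a merging step that removes background tokens. The snippet below is a minimal PyTorch sketch of the first idea only, not the paper's exact formulation: the function names, the mean-attention saliency proxy, and the `keep_ratio` parameter are hypothetical, and the actual vid-TLDR additionally sharpens the object scores and merges (rather than only drops) tokens, as implemented in the linked repository.

```python
import torch


def saliency_scores(attn: torch.Tensor) -> torch.Tensor:
    # attn: (batch, heads, queries, keys) softmax attention weights.
    # The average attention each key token receives, taken over heads and
    # queries, is used here as a simple proxy for foreground saliency.
    return attn.mean(dim=(1, 2))  # (batch, keys)


def drop_background_tokens(tokens: torch.Tensor, attn: torch.Tensor,
                           keep_ratio: float = 0.5) -> torch.Tensor:
    # tokens: (batch, num_tokens, dim). Keep only the top-`keep_ratio`
    # most salient tokens and discard the rest (illustrative helper).
    scores = saliency_scores(attn)                                   # (B, N)
    n_keep = max(1, int(tokens.shape[1] * keep_ratio))
    keep_idx = scores.topk(n_keep, dim=1).indices                    # (B, n_keep)
    keep_idx = keep_idx.unsqueeze(-1).expand(-1, -1, tokens.shape[-1])
    return tokens.gather(1, keep_idx)                                # (B, n_keep, D)
```

Because the score is read off an attention map the model already computes, a step like this can be inserted into a pretrained video Transformer without any retraining, which is the training-free property the paper emphasizes.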
Date of Conference: 16-22 June 2024
Date Added to IEEE Xplore: 16 September 2024
Conference Location: Seattle, WA, USA

1. Introduction

Figure 1: Comparison of vid-TLDR (Ours) with UMT [33]. Without any additional training, vid-TLDR obtains comparable or even better performance than the base model UMT (left) while considerably reducing the computational cost (right). UMT-B (87M) is used.

With the success of Transformers in computer vision, e.g., classification [14], [52], object detection [10], [32], [43], [61], [75], [77], and segmentation [59], [64], a line of works [16], [33], [51], [57], [60], [76] has proposed video Transformers to comprehend videos for various downstream tasks. The attention mechanism in Transformers shows desirable characteristics for video understanding, such as the ability to capture spatial and temporal dependencies at the same time. Consequently, these video Transformers have become the primary backbones for various downstream tasks in the video domain, including action recognition [65], [73], video-text retrieval [17], [38], video question answering [18], [63], etc. Meanwhile, the self-attention mechanism entails dot-product calculations between tokens, which incurs a cost quadratic in the number of tokens. This poses a challenge for existing video Transformers like UMT [33] that tokenize the whole video into a large number of tokens.
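To make the quadratic growth concrete, here is a toy calculation with illustrative settings (16 frames at 224x224 resolution with 16x16 patches, which is not necessarily the exact UMT configuration):

```python
def space_time_attention_scale(frames: int = 16, height: int = 224,
                               width: int = 224, patch: int = 16):
    """Token count and pairwise interactions for joint space-time self-attention."""
    tokens_per_frame = (height // patch) * (width // patch)  # 14 * 14 = 196
    total_tokens = frames * tokens_per_frame                 # 16 * 196 = 3,136
    pairwise = total_tokens ** 2                              # ~9.8M query-key dot products per head, per layer
    return total_tokens, pairwise


print(space_time_attention_scale())            # (3136, 9834496)
print(space_time_attention_scale(frames=32))   # doubling the frames quadruples the attention cost
```

Since the pairwise term dominates, shrinking the token count by merging background tokens (as vid-TLDR does) reduces the attention cost roughly quadratically rather than linearly.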

