
Action Unit Memory Network for Weakly Supervised Temporal Action Localization


Abstract:

Weakly supervised temporal action localization aims to detect and localize actions in untrimmed videos with only video-level labels during training. However, without frame-level annotations, it is challenging to achieve localization completeness and relieve background interference. In this paper, we present an Action Unit Memory Network (AUMN) for weakly supervised temporal action localization, which can mitigate the above two challenges by learning an action unit memory bank. In the proposed AUMN, two attention modules are designed to update the memory bank adaptively and to learn action-unit-specific classifiers. Furthermore, three effective mechanisms (diversity, homogeneity, and sparsity) are designed to guide the updating of the memory network. To the best of our knowledge, this is the first work to explicitly model action units with a memory network. Extensive experimental results on two standard benchmarks (THUMOS14 and ActivityNet) demonstrate that our AUMN performs favorably against state-of-the-art methods. Specifically, the average mAP over IoU thresholds from 0.1 to 0.5 on the THUMOS14 dataset is significantly improved from 47.0% to 52.1%.
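The abstract does not spell out the implementation, but the core idea of attending over a learnable action-unit memory bank can be sketched roughly as follows. This is a minimal illustration under assumed shapes and names (features, memory, attn are all hypothetical placeholders, not the authors' code), showing one soft attention read in which each video snippet is expressed as a mixture of memory slots (action units):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical sizes: T video snippets of feature dimension D,
# and a memory bank with M slots (one per learned action unit).
rng = np.random.default_rng(0)
T, D, M = 8, 16, 4
features = rng.standard_normal((T, D))  # per-snippet video features
memory = rng.standard_normal((M, D))    # learnable action-unit memory bank

# Attention read: each snippet attends over memory slots, producing a
# soft assignment of the snippet to action units (rows sum to 1).
attn = softmax(features @ memory.T / np.sqrt(D), axis=-1)  # shape (T, M)

# Reconstruct each snippet as an attention-weighted mix of action units.
read = attn @ memory  # shape (T, D)
```

In the paper's setting, gradients through a read of this kind (together with the diversity, homogeneity, and sparsity objectives) would drive the memory slots toward distinct, reusable action units; this sketch shows only the forward attention step.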
Date of Conference: 20-25 June 2021
Date Added to IEEE Xplore: 02 November 2021
Conference Location: Nashville, TN, USA

Funding Agency:

University of Science and Technology of China
JD AI Research

1. Introduction

Temporal action localization (TAL) is an important yet challenging task for video understanding. Its goal is to localize the temporal boundaries of actions of specific categories in untrimmed videos [13], [7]. Because of its broad applications in high-level tasks such as video surveillance [40], video summarization [17], and event detection [15], TAL has recently drawn increasing attention from the community. Up to now, deep learning based methods have made impressive progress in this area. However, most of them handle this task in a fully supervised way, requiring massive temporal boundary annotations for actions [24], [51], [5], [42], [36]. Such manual annotations are expensive to obtain, which limits the development potential of fully supervised methods in real-world scenarios.

