
Actions as space-time shapes



Abstract:

Human action in video sequences can be seen as silhouettes of a moving torso and protruding limbs undergoing articulated motion. We regard human actions as three-dimensional shapes induced by the silhouettes in the space-time volume. We adopt a recent approach by Gorelick et al. (2004) for analyzing 2D shapes and generalize it to deal with volumetric space-time action shapes. Our method utilizes properties of the solution to the Poisson equation to extract space-time features such as local space-time saliency, action dynamics, shape structure and orientation. We show that these features are useful for action recognition, detection and clustering. The method is fast, does not require video alignment and is applicable in (but not limited to) many scenarios where the background is known. Moreover, we demonstrate the robustness of our method to partial occlusions, non-rigid deformations, significant changes in scale and viewpoint, high irregularities in the performance of an action and low-quality video.
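For orientation, here is a minimal sketch of the kind of formulation involved, following the 2D Poisson shape descriptor of Gorelick et al. (2004) and assuming its plain isotropic extension to a space-time silhouette volume S (the paper's exact treatment of the time axis may differ):

$$
\Delta U(x, y, t) = -1 \quad \text{for } (x, y, t) \in S, \qquad U(x, y, t) = 0 \quad \text{for } (x, y, t) \in \partial S,
$$

where $\Delta = \partial_{xx} + \partial_{yy} + \partial_{tt}$. The value of $U$ at a point reflects the mean time a random walk started there needs to reach the boundary of the shape, so $U$ is large deep inside the space-time shape and small near thin protrusions; local space-time features are then derived from $U$ and its derivatives.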
Date of Conference: 17-21 October 2005
Date Added to IEEE Xplore: 05 December 2005
Print ISBN: 0-7695-2334-X


Conference Location: Beijing, China

1. Introduction

Recognizing human action is a key component in many computer vision applications, such as video surveillance, human-computer interfaces, video indexing and browsing, gesture recognition, and the analysis of sports events and dance choreography. Some of the recent work in action recognition [7], [21], [11], [17] has shown that it is useful to analyze actions by treating the video sequence as a space-time intensity volume. Analyzing actions directly in the space-time volume avoids some limitations of traditional approaches that rely on the computation of optical flow [2], [8] (aperture problems, smooth surfaces, singularities, etc.), feature tracking [20], [4] (self-occlusions, re-initialization, change of appearance, etc.), or key frames [6] (lack of information about the motion). Most of the above studies are based on computing local space-time gradients or other intensity-based features and thus might be unreliable in cases of low-quality video, motion discontinuities and motion aliasing.

Figure: Space-time shapes of "jumping-jack", "walking" and "running" actions.
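As an illustration of the space-time volume idea (not the authors' implementation), the sketch below stacks per-frame binary silhouettes into a 3D array and solves the Poisson equation over it with a simple Jacobi iteration. The array sizes, the toy "drifting blob" silhouettes and the solve_poisson helper are all made up for the example.

```python
import numpy as np

def solve_poisson(mask, num_iters=300):
    """Jacobi iteration for Delta U = -1 inside a binary space-time mask,
    with U = 0 outside the shape (Dirichlet boundary condition)."""
    inside = mask.astype(bool)
    U = np.zeros(mask.shape, dtype=np.float64)
    for _ in range(num_iters):
        # Average of the six face neighbours along (y, x, t).
        avg = (
            np.roll(U, 1, axis=0) + np.roll(U, -1, axis=0)
            + np.roll(U, 1, axis=1) + np.roll(U, -1, axis=1)
            + np.roll(U, 1, axis=2) + np.roll(U, -1, axis=2)
        ) / 6.0
        # Jacobi update for Delta U = -1 on a unit grid; zero outside the shape.
        U = np.where(inside, avg + 1.0 / 6.0, 0.0)
    return U

# Toy space-time shape: a silhouette blob drifting rightwards over 20 frames.
H, W, T = 32, 48, 20
volume = np.zeros((H, W, T), dtype=np.uint8)
for t in range(T):
    cx = 8 + t
    volume[10:22, cx - 4:cx + 4, t] = 1

U = solve_poisson(volume)
# U is largest deep inside the space-time shape and small near thin,
# fast-changing regions, which is the kind of structure the paper's
# space-time features are built from.
print("U range inside the shape:", U[volume.astype(bool)].min(), U.max())
```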

