
Category driven deep recurrent neural network for video summarization


Abstract:

A large number of videos are generated and uploaded to video websites (such as Youku and YouTube) every day, and these websites play an increasingly important role in daily life. While this brings convenience, the sheer volume of video data makes it difficult to summarize videos so that users can browse them easily. Although many video summarization approaches exist, the selected key frames often fail to integrate the broader video context, and the quality of the summarized results is hard to evaluate because of the lack of ground truth. Inspired by previous key-frame extraction methods, we propose a deep recurrent neural network model that learns to extract category-driven key frames. First, we sequentially extract a fixed number of key frames using time-dependent location networks. Second, we use a recurrent neural network to integrate the information of the key frames and classify the category of the video. The quality of the extracted key frames can therefore be evaluated by the categorization accuracy. Experiments on a 500-video dataset show that the proposed scheme extracts reasonable key frames and outperforms other methods under quantitative evaluation.
Date of Conference: 11-15 July 2016
Date Added to IEEE Xplore: 26 September 2016
Conference Location: Seattle, WA
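The abstract describes a pipeline in which a time-dependent location network repeatedly selects a key frame, a recurrent network folds the selected frames into its state, and the final state is used to classify the video's category, whose accuracy serves as a proxy for key-frame quality. The sketch below illustrates this idea only; it is not the authors' implementation. All module names, dimensions, and the greedy argmax frame selection are assumptions standing in for whatever trainable selection mechanism the paper's location network actually uses, and per-frame features are assumed to be precomputed (e.g., by a CNN).

```python
# Minimal sketch of a category-driven key-frame extractor (illustrative, not the authors' code).
import torch
import torch.nn as nn


class CategoryDrivenSummarizer(nn.Module):
    def __init__(self, feat_dim=2048, hidden_dim=512, num_classes=20, num_keyframes=8):
        super().__init__()
        self.num_keyframes = num_keyframes
        self.rnn = nn.GRUCell(feat_dim, hidden_dim)            # integrates selected key frames
        self.locator = nn.Linear(hidden_dim, feat_dim)         # time-dependent location network
        self.classifier = nn.Linear(hidden_dim, num_classes)   # video-category head

    def forward(self, frame_feats):
        # frame_feats: (batch, num_frames, feat_dim) precomputed per-frame features
        batch, num_frames, _ = frame_feats.shape
        h = frame_feats.new_zeros(batch, self.rnn.hidden_size)
        keyframe_ids = []
        for _ in range(self.num_keyframes):
            # Score every frame against the current recurrent state and
            # greedily pick the highest-scoring one as the next key frame.
            query = self.locator(h)                                           # (batch, feat_dim)
            scores = torch.bmm(frame_feats, query.unsqueeze(2)).squeeze(2)    # (batch, num_frames)
            idx = scores.argmax(dim=1)                                        # (batch,)
            keyframe_ids.append(idx)
            picked = frame_feats[torch.arange(batch), idx]                    # (batch, feat_dim)
            h = self.rnn(picked, h)                                           # fold key frame into state
        logits = self.classifier(h)                                           # category prediction
        return logits, torch.stack(keyframe_ids, dim=1)


# Usage: categorization accuracy on held-out videos acts as a proxy
# for the quality of the selected key frames.
model = CategoryDrivenSummarizer()
feats = torch.randn(4, 120, 2048)        # 4 videos, 120 frames each (dummy features)
logits, keyframes = model(feats)
print(logits.shape, keyframes.shape)     # torch.Size([4, 20]) torch.Size([4, 8])
```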

1. Introduction

While a large number of videos are generated every day, it becomes increasingly difficult to find videos of interest among so much irrelevant data. To access videos of interest conveniently, users need a way to get a glimpse of a video without watching it in its entirety, and video summarization is among the most promising techniques for achieving this goal.

