
Change Detection by Training a Triplet Network for Motion Feature Extraction


Abstract:

Change/motion detection is a challenging problem in video analysis and surveillance systems. Recently, state-of-the-art methods built on sample-based background models have demonstrated strong results on this problem. However, they are ineffective in dynamic scenes that contain complex motion patterns. In this paper, we introduce a novel data-driven approach that combines a sample-based background model with a feature extractor obtained by training a triplet network. We construct the network from three identical convolutional neural networks, each of which is called a motion feature network. The network automatically learns motion patterns from small image patches and transforms input images of any size into feature embeddings that serve as high-level representations. The sample-based background model of each pixel then uses both the color information and the extracted feature embeddings. We also propose an approach to generate triplet examples from CDNet 2014 for training the network from scratch. The offline-trained network can be used on the fly, without re-training on any video sequence before each execution, which makes it feasible for real-time surveillance systems. We show that our method outperforms other state-of-the-art methods on CDNet 2014 and on other benchmarks (BMC and Wallflower).
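To make the triplet setup described above concrete, the following is a minimal sketch of a weight-shared, fully convolutional encoder trained with a triplet margin loss. PyTorch is assumed; the layer sizes, embedding dimension, patch size, and margin are illustrative placeholders, not the exact motion feature network or training configuration used in the paper.

```python
# Minimal sketch: three branches of a triplet network share one encoder.
# PyTorch assumed; architecture and hyperparameters are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionFeatureNet(nn.Module):
    """Small fully convolutional encoder: maps an image patch (or an image
    of any size) to a per-pixel feature embedding map."""
    def __init__(self, in_channels=3, embed_channels=16):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, 32, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(32, 32, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(32, embed_channels, kernel_size=3, padding=1)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        return self.conv3(x)  # feature embeddings

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss on flattened embeddings."""
    d_pos = (anchor - positive).pow(2).flatten(1).sum(dim=1)
    d_neg = (anchor - negative).pow(2).flatten(1).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()

# The three "identical" networks share weights, so one module is reused.
net = MotionFeatureNet()
a, p, n = (torch.randn(8, 3, 31, 31) for _ in range(3))  # small patches
loss = triplet_loss(net(a), net(p), net(n))
loss.backward()
```

Because the encoder is fully convolutional, the same trained weights can embed whole frames of arbitrary size at inference time, which is consistent with the offline-trained, no-re-training usage described in the abstract.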
Page(s): 433 - 446
Date of Publication: 22 January 2018


I. Introduction

Due to the growth of video analysis and surveillance applications, change detection (CD) has emerged as an essential step for advanced tasks such as object tracking [1]–[3], object classification [4], and action recognition [5]. CD can be viewed as a two-class classification problem based on the movement of objects in a scene: the background class (BG) represents stationary scenes, objects, or events, and the foreground class (FG) denotes the moving objects of interest. The per-pixel class labels are encoded as a binary image, called a segmentation mask.
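As a simple illustration of this two-class formulation, the sketch below labels each pixel FG or BG by thresholding its difference from a background image and returns the result as a binary mask. This is only a frame-differencing baseline to make the segmentation mask concrete, not the method proposed in this paper; the threshold value is arbitrary.

```python
# Illustrative frame-differencing baseline producing a binary FG/BG mask.
# Not the paper's method; the threshold is an arbitrary example value.
import numpy as np

def segmentation_mask(frame, background, threshold=30):
    """Label a pixel FG (1) if it differs from the background model by
    more than `threshold`, otherwise BG (0)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    if diff.ndim == 3:               # color input: take max over channels
        diff = diff.max(axis=2)
    return (diff > threshold).astype(np.uint8)

# Example with a synthetic grayscale frame containing one "moving" region.
bg = np.full((240, 320), 120, dtype=np.uint8)
frame = bg.copy()
frame[100:140, 150:200] = 200        # bright patch acts as the moving object
mask = segmentation_mask(frame, bg)  # binary mask: 1 = foreground, 0 = background
```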

