
Argus: Enabling Cross-Camera Collaboration for Video Analytics on Distributed Smart Cameras


Abstract:

Overlapping cameras offer exciting opportunities to view a scene from different angles, allowing for more advanced, comprehensive and robust analysis. However, existing video analytics systems for multi-camera streams are mostly limited to (i) per-camera processing and aggregation and (ii) workload-agnostic centralized processing architectures. In this paper, we present Argus, a distributed video analytics system with cross-camera collaboration on smart cameras. We identify multi-camera, multi-target tracking as the primary task of multi-camera video analytics and develop a novel technique that avoids redundant, processing-heavy identification tasks by leveraging object-wise spatio-temporal association in the overlapping fields of view across multiple cameras. We further develop a set of techniques to perform these operations across distributed cameras without cloud support at low latency by (i) dynamically ordering the camera and object inspection sequence and (ii) flexibly distributing the workload across smart cameras, taking into account network transmission and heterogeneous computational capacities. Evaluation on three real-world overlapping camera datasets with two NVIDIA Jetson devices shows that Argus reduces the number of object identifications and end-to-end latency by up to 7.13× and 2.19×, respectively (4.86× and 1.60× compared to the state-of-the-art), while achieving comparable tracking quality.
Published in: IEEE Transactions on Mobile Computing ( Volume: 24, Issue: 1, January 2025)
Page(s): 117 - 134
Date of Publication: 18 September 2024


I. Introduction

It is increasingly common for physical locations to be monitored by multiple cameras with overlapping fields of view (hereinafter ‘overlapping cameras’), e.g., intersections, shopping malls, public transport, construction sites and airports, as shown in Fig. 1. Such overlapping cameras offer exciting opportunities to observe a scene from different angles, enabling enriched, comprehensive and robust analysis. For example, our analysis of the CityFlowV2 dataset [4] (5 cameras deployed to monitor vehicles at a road intersection) shows that each camera separately detects only 3.7 vehicles per frame on average, while the five cameras together detect a total of 12.0 vehicles. Since a target vehicle can be captured by multiple cameras from different distances and angles, we can also observe objects of interest with a holistic view. Such view diversity makes the analytics more enriched and robust, e.g., a vehicle’s license plate may be occluded in one camera’s view due to its position or a blocking object, but not in the other cameras.
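To make the core idea of spatio-temporal association concrete, the sketch below is a minimal illustration (not the paper's implementation) of how an identity known in one camera can be reused in an overlapping camera instead of re-running a heavy identification model. It assumes a pre-calibrated homography mapping points between the two views; the function names, the distance threshold, and the dictionary-based track store are all illustrative assumptions.

```python
import numpy as np

def project(H, pt):
    """Project a 2-D point through a 3x3 homography H (illustrative)."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]

def associate(det_pt, H_src_to_dst, tracks, max_dist=2.0):
    """Map a detection from the source view into the destination view
    and return the id of the nearest known track there, or None if no
    track is within max_dist (in which case an expensive identification
    step would be needed instead)."""
    p = project(H_src_to_dst, det_pt)
    best_id, best_d = None, max_dist
    for tid, tpt in tracks.items():
        d = np.linalg.norm(p - np.asarray(tpt, dtype=float))
        if d < best_d:
            best_id, best_d = tid, d
    return best_id

# Hypothetical usage: with an identity homography, a detection near a
# known track inherits its id; a distant one does not.
tracks = {7: (10.0, 10.0)}
print(associate((10.5, 10.2), np.eye(3), tracks))  # 7
print(associate((50.0, 50.0), np.eye(3), tracks))  # None
```

In the real system the association would additionally be constrained in time (matching detections from the same or adjacent frames) and the geometry would come from camera calibration; the sketch only shows why a cheap geometric check can stand in for redundant identification.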
