
CooperFuse: A Real-Time Cooperative Perception Fusion Framework


Abstract:

Cooperative perception algorithms that fuse sensing data across multiple connected automated vehicles (CAVs) have shown promising performance in enhancing existing individual perception algorithms for object detection and tracking. However, existing cooperative perception algorithms have been developed only offline, given the constraints of data sharing and computational resources, and none has been verified under real-time conditions. In this work, we propose a real-time cooperative perception framework called CooperFuse, which achieves cooperative perception through a late fusion scheme. Based on the object detection and tracking results from individual vehicles, the late fusion cooperative perception algorithm considers the detection confidence score, the kinematic and dynamic consistency, and the scale consistency of detected objects. The algorithm computes the kinematic and dynamic consistency of objects by solving for the energy consumption of inter-frame trajectories, and determines scale consistency by calculating inter-frame scale changes, enabling feature-based bounding box fusion. Experimental results demonstrate the real-time performance of the proposed algorithm and reveal effective improvements in feature fusion and object detection accuracy when dealing with heterogeneous detection models across different cooperative intelligent agents.
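To make the late fusion scheme concrete, below is a minimal Python sketch of per-object fusion weighting in this spirit: each agent's detection weight combines its confidence score with penalties for inter-frame trajectory energy and scale change, and matched boxes are then averaged. The function names, the (x, y, z, l, w, h) box parameterization, and the exponential weighting with coefficients alpha and beta are illustrative assumptions for exposition, not the paper's published formulation.

```python
import numpy as np

# Hypothetical late-fusion sketch. The box format (x, y, z, l, w, h), the
# penalty forms, and all names here are illustrative assumptions, not the
# exact method published in the paper.

def kinematic_energy(track):
    """Approximate trajectory 'energy' as the mean squared acceleration
    over consecutive frame positions (lower = more consistent motion)."""
    pos = np.asarray(track)            # (T, 3) positions across frames
    acc = np.diff(pos, n=2, axis=0)    # second difference ~ acceleration
    return float(np.mean(np.sum(acc ** 2, axis=1))) if len(acc) else 0.0

def scale_drift(sizes):
    """Penalize inter-frame relative changes of box dimensions (l, w, h)."""
    s = np.asarray(sizes)              # (T, 3) box sizes across frames
    rel = np.abs(np.diff(s, axis=0)) / (s[:-1] + 1e-6)
    return float(np.mean(rel)) if len(rel) else 0.0

def fusion_weight(conf, track, sizes, alpha=1.0, beta=1.0):
    """Combine detection confidence with kinematic and scale penalties
    into a single per-agent weight (illustrative exponential form)."""
    return conf * np.exp(-alpha * kinematic_energy(track)) \
                * np.exp(-beta * scale_drift(sizes))

def fuse_boxes(agent_dets):
    """Weighted average of matched boxes from multiple CAVs.
    agent_dets: list of dicts with 'box' (6,), 'conf', 'track', 'sizes'."""
    w = np.array([fusion_weight(d['conf'], d['track'], d['sizes'])
                  for d in agent_dets])
    boxes = np.stack([d['box'] for d in agent_dets])
    return (w[:, None] * boxes).sum(axis=0) / w.sum()
```

In this sketch, smoother motion (lower trajectory energy) and smaller inter-frame scale drift both increase an agent's weight in the fused box, mirroring the consistency cues described above.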
Date of Conference: 02-05 June 2024
Date Added to IEEE Xplore: 15 July 2024
Conference Location: Jeju Island, Korea, Republic of

I. Introduction

In recent years, deep-learning-based neural network algorithms have made significant progress in tasks such as LiDAR 3D detection and tracking for individual vehicles [1], [2]. Several mature detection algorithms have been practically applied to a growing number of autonomous vehicles [3], [4], [5], [6]. Despite these achievements, the perception systems of single agents still face many limitations in complex environments, owing to a single field of view and restricted capability in navigating intricate urban scenarios. Among these environments, the intersection is one of the most intricate: it usually involves a diverse array of road users, including vehicles, pedestrians, cyclists, and scooters, each with unique movement patterns and safety needs. A key challenge in enhancing intersection safety is obtaining accurate, detailed, and real-time data that captures road users' classifications and movements.


Cites in Papers - IEEE (1)

1. Roshan George, Joseph Clancy, Tim Brophy, Ganesh Sistu, William O'Grady, Sunil Chandra, Fiachra Collins, Darragh Mullins, Edward Jones, Brian Deegan, Martin Glavin, "Infrastructure Assisted Autonomous Driving: Research, Challenges, and Opportunities", IEEE Open Journal of Vehicular Technology, vol. 6, pp. 662-716, 2025.
