
Multiframe-Based High Dynamic Range Monocular Vision System for Advanced Driver Assistance Systems



Abstract:

In this paper, we propose a multiframe high dynamic range (HDR) monocular vision system to improve the imaging quality of traditional CMOS/charge-coupled device (CCD)-based vision systems for advanced driver assistance systems (ADASs). Conventional CMOS/CCD image sensors are confined to a limited dynamic range, which impairs imaging quality in environments that are challenging for ADAS (e.g., strong contrast between bright and dark regions, strong sunlight, headlights at night, and so on). In contrast to current HDR video solutions that rely on expensive, specially designed sensors, we implement a multiframe HDR algorithm that enables a common CMOS/CCD sensor to capture HDR video. The key parts of the realized HDR vision system are: 1) circular exposure control; 2) latent image calculation; and 3) exposure fusion. We have successfully realized a prototype of the monocular HDR vision system and mounted it on our SetCar platform. The effectiveness of this technique is demonstrated by our experimental results, while its bottleneck is the processing time. By further exploiting the capability of the proposed method, a low-cost HDR vision system can be achieved for ADAS.
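The first stage of the pipeline, circular exposure control, amounts to cycling the sensor through a fixed exposure bracket so that every short run of consecutive frames covers the whole bracket. As a rough illustration only (the exposure times below are hypothetical placeholders, not the authors' settings, and the paper's adaptation logic is omitted), the scheduling idea can be sketched as:

```python
from itertools import cycle

# Hypothetical exposure bracket in milliseconds; the paper's actual
# exposure values and control loop are not reproduced here.
EXPOSURES_MS = [1.0, 8.0, 33.0]

def exposure_schedule():
    """Yield exposure times cyclically, so any window of three
    consecutive frames spans the whole bracket (dark/mid/bright)."""
    return cycle(EXPOSURES_MS)

sched = exposure_schedule()
frames = [next(sched) for _ in range(6)]
print(frames)  # [1.0, 8.0, 33.0, 1.0, 8.0, 33.0]
```

Any window of three consecutive frames then contains one under-, one mid-, and one over-exposed capture, which is what the later latent-image and fusion stages consume.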
Published in: IEEE Sensors Journal ( Volume: 15, Issue: 10, October 2015)
Page(s): 5433 - 5441
Date of Publication: 04 June 2015


I. Introduction

During the past decades, advanced driver assistance systems (ADAS) [1] have made great progress in both the research community and the automobile industry. ADAS are systems developed to automate or enhance vehicle functions for safety and better driving, based upon vision/camera systems, sensor technology, and vehicle communication systems. Usually, CMOS/CCD-based vision systems are viewed as essential components of ADAS. ADAS applications utilize image sensors to provide enhanced safety features such as parking assistance, lane departure warning, and collision avoidance. However, the performance of cameras degrades in wide dynamic range scenarios (e.g., low-angle strong sunlight, the headlights of oncoming vehicles, shadows in summer), because in these situations the dynamic range exceeds the capabilities of conventional CMOS/CCD image sensors.

Dynamic range is the ratio of the highest (brightest) signal an imaging sensor can record to the lowest (darkest) signal. The dynamic range of the real world spans many orders of magnitude, from starlight up to direct sunlight. A typical natural scene has a contrast ratio of around 10,000:1. Common CCD sensors accumulate charge in a "potential well" in proportion to the number of photons that struck the sensor at that pixel. The size and depth of the potential well determine the dynamic range capability of the sensor. Usually, a CMOS/CCD sensor can acquire a contrast of roughly 1,000:1 (60 dB). The darkest signal is constrained by the thermal noise, or "dark current," of the sensor; the brightest signal is limited by the total amount of charge that can be accumulated in a single pixel. Image sensors are built such that this total maximum charge is greater than 1,000 times the charge generated thermally.
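The decibel figures quoted above are simply 20·log10 of the contrast ratio, so a 1,000:1 sensor gives 60 dB while a 10,000:1 scene needs 80 dB. A quick check (illustrative only):

```python
import math

def dynamic_range_db(contrast_ratio: float) -> float:
    """Convert a max/min signal contrast ratio to decibels (20*log10)."""
    return 20.0 * math.log10(contrast_ratio)

# A typical CMOS/CCD sensor: ~1,000:1
print(dynamic_range_db(1_000))   # 60.0 dB
# A typical natural scene: ~10,000:1
print(dynamic_range_db(10_000))  # 80.0 dB
```

The 20 dB gap is exactly the mismatch the multiframe HDR approach is meant to close.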
This implies that if the scene has a higher dynamic range, the sensor cannot capture it, and the resulting image has saturated regions in the highlights and underexposed regions in the shadows (as shown in Fig. 1). This degradation of imaging quality in wide dynamic range (WDR) scenarios impairs the performance of camera-based ADAS [2]. Unfortunately, WDR scenarios appear frequently in real road environments (imagine driving through the shadows cast by buildings, or at noon in summer), which makes this issue critical for developing better ADAS in the future. For instance, when a car reverses into a garage, the inside of the garage is hard to see because of the contrast between the dark interior and the bright daylight outside.

Fig. 1. Image quality deteriorates in a wide dynamic range (WDR) environment.
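The exposure fusion stage named in the abstract blends the bracketed frames so that each output pixel is dominated by whichever capture exposed it best. As a generic illustration only, a well-exposedness weighting in the style of Mertens et al. (not necessarily the authors' fusion rule, which is detailed later in the paper) can be sketched as:

```python
import numpy as np

def exposure_fuse(frames, sigma=0.2):
    """Blend differently exposed frames, weighting well-exposed pixels.

    frames: list of float arrays in [0, 1], same shape, assumed already
    aligned (the paper's latent-image step handles motion; omitted here).
    sigma: width of the Gaussian well-exposedness weight (illustrative).
    """
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    # Gaussian weight: pixels near mid-gray (0.5) dominate the blend.
    weights = np.exp(-((stack - 0.5) ** 2) / (2.0 * sigma ** 2))
    weights /= weights.sum(axis=0) + 1e-12  # normalize across frames
    return (weights * stack).sum(axis=0)

# Toy example: an underexposed and an overexposed 2x2 frame.
dark = np.array([[0.05, 0.10], [0.45, 0.50]])
bright = np.array([[0.55, 0.60], [0.95, 0.98]])
fused = exposure_fuse([dark, bright])
```

In the toy example, the pixel that is nearly black in the dark frame (0.05) but mid-gray in the bright frame (0.55) is pulled toward the well-exposed value, which is the behavior that recovers detail in shadows and highlights.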

Cites in Papers - IEEE (10)
1. Huinan Gong, Zhenjie Zhou, Jun Pan, Ming Yang, Hongquan Ding, Xin Zhu, Qingsong Wu, "Image Motion Blur Mechanism-Based Measurement Method for Low-Frequency Vibration Amplitude and Direction", IEEE Sensors Journal, vol.24, no.24, pp.41301-41310, 2024.
2. You Li, Julien Moreau, Javier Ibanez-Guzman, "Emergent Visual Sensors for Autonomous Vehicles", IEEE Transactions on Intelligent Transportation Systems, vol.24, no.5, pp.4716-4737, 2023.
3. Manoj Purohit, Manvendra Singh, Ajay Kumar, Brajesh Kumar Kaushik, "Enhancing the Surveillance Detection Range of Image Sensors Using HDR Techniques", IEEE Sensors Journal, vol.21, no.17, pp.19516-19528, 2021.
4. You Li, Clément Le Bihan, Txomin Pourtau, Thomas Ristorcelli, Javier Ibanez-Guzman, "Coarse-to-Fine Segmentation on LiDAR Point Clouds in Spherical Coordinate and Beyond", IEEE Transactions on Vehicular Technology, vol.69, no.12, pp.14588-14601, 2020.
5. You Li, Clément Le Bihan, Txomin Pourtau, Thomas Ristorcelli, "InsClustering: Instantly Clustering LiDAR Range Measures for Autonomous Vehicle", 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), pp.1-6, 2020.
6. Seiichi Mita, Xu Yuquan, Kazuhisa Ishimaru, Sakiko Nishino, "Robust 3D Perception for any Environment and any Weather Condition using Thermal Stereo", 2019 IEEE Intelligent Vehicles Symposium (IV), pp.2569-2574, 2019.
7. Nabeel A. Riza, Mohsin A. Mazhar, "177 dB Linear Dynamic Range Pixels of Interest DSLR CAOS Camera", IEEE Photonics Journal, vol.11, no.3, pp.1-10, 2019.
8. Ilija Popadić, "HDR-Like Imaging", 2018 26th Telecommunications Forum (TELFOR), pp.1-8, 2018.
9. Jia-Li Yin, Bo-Hao Chen, Kuo-Hua Robert Lai, Ying Li, "Automatic Dangerous Driving Intensity Analysis for Advanced Driver Assistance Systems From Multimodal Driving Signals", IEEE Sensors Journal, vol.18, no.12, pp.4785-4794, 2018.
10. Nurbaity Sabri, Zaidah Ibrahim, Mastura Md. Saad, Nur Nabilah Abu Mangshor, Nursuriati Jamil, "Human detection in video surveillance using texture features", 2016 6th IEEE International Conference on Control System, Computing and Engineering (ICCSCE), pp.45-50, 2016.

Cites in Papers - Other Publishers (13)

1. Yung-Yao Chen, Chih-Hsien Hsia, Sin-Ye Jhong, Chin-Feng Lai, "Attention-Guided HDR Reconstruction for Enhancing Smart City Applications", Electronics, vol.12, no.22, pp.4625, 2023.
2. Xuan Pan, Jingwen Shi, Pengfei Wang, Shuang Wang, Chen Pan, Wentao Yu, Bin Cheng, Shi-Jun Liang, Feng Miao, "Parallel perception of visual motion using light-tunable memory matrix", Science Advances, vol.9, no.39, 2023.
3. Hongwei Tan, Sebastiaan van Dijken, "Dynamic machine vision with retinomorphic photomemristor-reservoir computing", Nature Communications, vol.14, no.1, 2023.
4. JongBae Kim, "Detection of Road Images Containing a Counterlight Using Multilevel Analysis", Symmetry, vol.13, no.11, pp.2210, 2021.
5. Mohamed Sejai, Anass Mansouri, Saad Bennani Dosse, Yassine Ruichek, Proceedings of the 2nd International Conference on Electronic Engineering and Renewable Energy Systems, vol.681, pp.105, 2021.
6. Bernardino Gonzalez, Francisco J. Jimenez, Jose De Frutos, "A Virtual Instrument for Road Vehicle Classification Based on Piezoelectric Transducers", Sensors, vol.20, no.16, pp.4597, 2020.
7. Masahiro Kobayashi, Hiroshi Sekine, Takafumi Miki, Takashi Muto, Toshiki Tsuboi, Yusuke Onuki, Yasushi Matsuno, Hidekazu Takahashi, Takeshi Ichikawa, Shunsuke Inoue, "A 3.4 μm pixel pitch global shutter CMOS image sensor with dual in-pixel charge domain memory", Japanese Journal of Applied Physics, vol.58, no.SB, pp.SBBL02, 2019.
8. Jing He, Linfan Liu, Changfan Zhang, Kaihui Zhao, Jian Sun, Peng Li, "Deep Denoising Autoencoding Method for Feature Extraction and Recognition of Vehicle Adhesion Status", Journal of Sensors, vol.2018, pp.1, 2018.
9. Tiezhu Qiao, Lulu Chen, Yusong Pang, Gaowei Yan, "Integrative Multi-Spectral Sensor Device for Far-Infrared and Visible Light Fusion", Photonic Sensors, 2018.
10. Nabeel A. Riza, "The CAOS camera platform: ushering in a paradigm change in extreme dynamic range imager design", Emerging Digital Micromirror Device Based Systems and Applications IX, vol.10117, pp.101170L, 2017.
11. Tiezhu Qiao, Lulu Chen, Yusong Pang, Gaowei Yan, Changyun Miao, "Integrative binocular vision detection method based on infrared and visible light fusion for conveyor belts longitudinal tear", Measurement, vol.110, pp.192, 2017.
12. Arjuna Marzuki, Developing and Applying Optoelectronics in Machine Vision, pp.38, 2017.
13. Tiezhu Qiao, Weili Liu, Yusong Pang, Gaowei Yan, "Research on visible light and infrared vision real-time detection system for conveyor belt longitudinal tear", IET Science, Measurement & Technology, vol.10, no.6, pp.577-584, 2016.
