I. Motivation
In 2020, the number of road fatalities in Germany decreased to 2,719, the first noticeable drop in more than ten years [1]. Before that, the figure had stagnated at around 3,200 fatal accidents on German roads. The same holds for the EU as a whole, where the number remains at around 19,800 [2]. Even disregarding the temporary drop caused by reduced traffic during the Covid-19 lockdowns, it can be stated that vehicle safety on European roads is not evolving as planned in the Vision Zero strategy. The reason is that today most accidents are caused by human failure rather than technical issues [3]. This can be explained by the natural limitations of human perception and reaction time, as well as by distraction, drunk driving, and speeding [4].

With the help of advanced driver assistance systems (ADAS), many such critical scenarios can already be addressed and both their number and severity reduced [5]. Modern ADAS operate in the field of active safety (avoiding an accident) and intervene automatically when the driver is unable to react in time, in order to avoid a collision with another traffic participant [6]. However, not every accident is avoidable, e.g., collisions with wild animals. Therefore, to achieve maximum vehicle safety, a combination of active safety systems with passive safety systems (reducing the severity of an accident) is crucial. These so-called integrated safety systems use the information from the environmental sensors radar, camera, and LiDAR to deploy irreversible safety actuators such as airbags before an inevitable collision occurs. In addition, early recognition of an impending accident enables enhanced restraint systems, in particular larger and less aggressive airbags [7].

For this reason, the environmental sensors must be extremely reliable in detecting and tracking objects in the surrounding area. A crucial role is played by the camera, which is primarily responsible for detecting vehicles and larger objects as well as their dimensions. Yet it is susceptible to disturbances from rapid changes in exposure, dark conditions with low contrast, or severe overexposure. Optimizing cameras for such scenarios requires precise knowledge of the lighting conditions, wavelengths, color spectra, and intensities under which objects are not detected or are lost again. To this end, this paper introduces an approach for analyzing these criteria and extracting parameters to improve camera-based object detection and tracking in challenging illumination scenarios.