I. Introduction
The rapid growth of autonomous technology has spurred the development of dependable and efficient 3D object detection systems. Such systems are essential for safe navigation in challenging environments, as well as for perception and inference. Object detection is a critical component of any autonomous driving system, since it allows an autonomous vehicle to identify and track items in real time, such as pedestrians, other vehicles, and obstacles.

YOLOv3, with its strong real-time speed and accuracy, is one of the most widely used object detection frameworks. It is a well-known single-shot detector: a CNN processes the input image and directly predicts bounding-box positions and class probabilities. It has been shown to outperform traditional multi-stage object detection algorithms, which first generate candidate regions and then classify them in a second pass. YOLOv3's architecture adds extra convolutional blocks, batch normalisation, and skip connections with leaky ReLU activations, improving the model's ability to extract spatial information from the input image. Extensions of YOLOv3 for autonomous vehicles adapt this framework to 3D object detection; by making predictions at multiple scales, the detector can distinguish targets of varying sizes [1]–[3].
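The building blocks mentioned above (batch normalisation, leaky ReLU, and skip connections) can be sketched in a minimal, framework-free example. This is an illustrative NumPy sketch of the conv-block pattern, not YOLOv3's actual implementation; the `residual_block` helper and the toy linear map standing in for a learned convolution are our own simplifications.

```python
import numpy as np

def leaky_relu(x, alpha=0.1):
    # Leaky ReLU: small non-zero slope for negative inputs (YOLOv3 uses 0.1)
    return np.where(x > 0, x, alpha * x)

def batch_norm(x, eps=1e-5):
    # Simplified batch normalisation over the batch axis
    # (learned scale/shift parameters omitted for brevity)
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def residual_block(x, conv):
    # Skip connection: the block's input is added back to its output,
    # the pattern YOLOv3's backbone borrows from ResNet
    return x + leaky_relu(batch_norm(conv(x)))

# Toy "convolution": a fixed linear map standing in for a learned conv layer
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8)) * 0.1
x = rng.normal(size=(4, 8))            # batch of 4 feature vectors
y = residual_block(x, lambda v: v @ w)
print(y.shape)                         # skip connection preserves the shape: (4, 8)
```

Because the skip connection adds the input back to the transformed features, the block's output shape must match its input shape, which is why such blocks stack easily into deep backbones.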