I. Introduction
Scannerless 3D scene reconstruction based on the indirect time-of-flight (ToF) principle relies on measuring the time elapsed between the moment at which a light signal, actively modulated and widened by a special diffusing lens to cover the entire scene, is emitted by a light source, and the moment at which, after being reflected by the different objects in the scene, it impinges on a photosensor usually located next to this emitting light source. The photosensor is operated synchronously with the emission of the light pulse, which enables the evaluation of the exact delay of the returned pulsed signal impinging on each individual pixel and thus the determination of the distances of the different objects in the illuminated scene. Owing to its lack of ambiguity, this measurement principle allows operating ranges from a few centimeters to tens of meters [1]. The indirect ToF measurement relies on the integration of the photogenerated charge during an ultra-short shutter time [2]. To reach centimeter accuracy, however, nanosecond time-discrimination capability, high detection speed, low noise, and a high signal-to-noise ratio (SNR) are required [3]. The maximum unambiguous distance is given by Eq. (1) [2], $${d_{max}}={c \over 2} \cdot {T_{pulse}},{\hbox{(1)}}$$ where $d_{max}$ is the maximum distance between the sensor and an object in the scene, $c$ is the velocity of light, and $T_{pulse}$ is the length of the pulse emitted by the light source. For example, a maximum distance of 4.5 m requires a pulse width of 30 ns. The problem with such short time scales is the short integration time of the photogenerated charge, which demands short transit times, low noise, and fast readout in each pixel [4], [5], [6], [7].

[Figure] Technology cross-section of the LDPD used in the presented 3D image sensor (not to scale).
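The range-pulse-width trade-off of Eq. (1) can be sketched numerically; the function names below are illustrative rather than taken from the paper, and the speed of light is rounded to 3×10⁸ m/s, consistent with the 4.5 m / 30 ns example in the text.

```python
# Sketch of Eq. (1): unambiguous maximum range of a pulsed
# indirect-ToF sensor as a function of the emitted pulse width.
# Function names are illustrative, not from the paper.

C = 3.0e8  # speed of light in m/s (rounded, as in the 4.5 m example)

def max_range(t_pulse_s: float) -> float:
    """d_max = (c / 2) * T_pulse, Eq. (1)."""
    return 0.5 * C * t_pulse_s

def required_pulse_width(d_max_m: float) -> float:
    """Inverse of Eq. (1): pulse width needed for a given maximum range."""
    return 2.0 * d_max_m / C

# A 30 ns pulse yields the 4.5 m maximum range quoted in the text.
print(max_range(30e-9))            # → 4.5
print(required_pulse_width(4.5))   # → 3e-08
```

The factor of 1/2 accounts for the round trip of the light pulse from the source to the object and back to the sensor.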