Introduction
Camera-based measurement systems are used in a wide range of application fields. In indoor localization systems, the use of cameras is becoming widespread, e.g., [1], [2]. Vision systems are extensively used in robotics and industrial applications, e.g., to identify and locate objects, provide guidance, avoid obstacles, and increase safety [3]. In space technology, the position and orientation of space targets (e.g., satellites) can be estimated using cameras deployed on robot arms [4]. The speed of objects can be measured using image sequences taken with very low exposure time, to provide sharp images [5], or using a single image by extracting the properties of the image blur [6]. Fringe projection profilometry uses cameras, as sensors, to provide 3-D reconstruction of physical objects [7]. In order to provide precise measurements with camera-based systems, several applications require the calibration of the cameras: for matrix and line cameras, measurement methods were proposed in [8] and [9], respectively, while a self-calibration method was proposed for visual odometry systems [10].
Most of today’s handheld mobile devices are equipped with cameras, fostering the rapid development of optical camera communication (CamCom) systems. The IEEE standardization group 802.15.7 developed a standard for optical wireless communication [11], e.g., using blinking LED transmitters and cameras [12]. Such communication systems are utilized as services in many applications, e.g., wireless broadcast systems using LED luminaries [13] or indoor localization using LED beacons [2].
The control of the exposure time (often called shutter time or shutter speed) has a central role in several applications. In marker-based optical positioning, the exposure time has a direct effect on blurring and thus on accuracy [14]. In fringe projection profilometry, the exposure time must be carefully set in order to get accurate estimates [7]. In particle image velocimetry, cameras with extremely low exposure times are utilized [15]. In high dynamic range (HDR) imaging, multiple exposure time synthesis techniques are used to produce high-quality images, utilizing various fusion methods, e.g., gradient-based techniques [16] or multiscale edge-preserving smoothing [17]. CamCom methods may also be sensitive to the exact value of the exposure time, as was pointed out in [18], and thus, this camera parameter is an important design factor in various CamCom protocols [19].
Although the exposure time can be set in most cameras, the real shutter speed may (sometimes significantly) differ from the nominal value, and thus, the measurement of the real shutter speed may be necessary in demanding applications [20]. In some (mainly lower end) cameras, the shutter speed is unknown, and in this case, it must be measured.
Several solutions have been proposed to measure the timing properties of cameras. Standard ISO 516 defines the methods for shutter speed measurements, specifically for manufacturing testing and quality control [21]. These methods are suitable for cameras equipped with either mechanical or nonmechanical shutters but require the disassembly of the camera so that the focal plane can be accessed. The principle of the measurement is straightforward: a constant illumination is provided in front of the lens, while the light intensity is measured behind the shutter (e.g., using a photodiode or phototransistor and an oscilloscope), as shown in Fig. 1(a). When the shutter is open, a high-intensity peak is detected, the width of which provides an estimate of the exposure time, with reasonable accuracy.
Traditional methods to measure exposure time of cameras. (a) Direct method. (b)–(d) Indirect methods by taking photographs of a moving target. (b) Ad hoc solution using a record player. (c) Ad hoc solution using a CRT screen. (d) Dedicated instrument using an LED array.
Other solutions use the photographs taken by the camera in normal operating conditions [see Fig. 1(b)–(d)]. Most of these methods use a moving object with known speed. The covered distance during the exposure time can be determined from the photograph, and thus, the exposure time can be calculated. A classical method uses a turntable, on which a line is placed in the radial direction, as shown in Fig. 1(b). From the angle swept by the line on the photograph and the rotational speed of the turntable, the exposure time can be calculated [23]. The idea was further improved in [24], where the moving object was replaced by a moving image on a computer screen, the speed of which was controlled by the generating software. With these methods, the achievable accuracy is moderate.
Other solutions use moving light sources instead of physical objects. In cathode ray tube (CRT) monitors, an electron beam sweeps across the screen, the refresh rate of which is known. A photograph taken of the screen contains a lighter area, which was covered by the electron beam during the exposure time, while the total size of the screen corresponds to the refresh time. From the ratio of these areas and the refresh rate, the exposure time can be calculated [23], with an accuracy of 1%–10% [22]. This method is shown in Fig. 1(c). A very similar approach uses an oscilloscope with dc input and automatic triggering mode to generate a sweeping light dot (seen as a horizontal line) on the scope’s screen. The speed of the dot is controlled by the horizontal sweep setting of the oscilloscope. On the photograph taken by the camera, the moving dot creates a line segment, the length of which is proportional to the exposure time [25].
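The arithmetic of the CRT-based method can be sketched as follows; this is an illustrative calculation, and the function name and the numeric values are assumptions, not data from the article:

```python
def crt_exposure_time(lit_rows, total_rows, refresh_rate_hz):
    """CRT method: exposure time = (fraction of the screen swept by the
    beam while the shutter was open) * refresh period."""
    return (lit_rows / total_rows) / refresh_rate_hz

# example: 120 of 600 rows lit on a 60 Hz CRT -> 1/300 s exposure
print(crt_exposure_time(120, 600, 60.0))  # approximately 0.00333 s
```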
In the above ad hoc solutions, the speed of the moving object is given and can be configured either in a very limited range (rotational speed of the turntable) or not at all (refresh rate of the monitor), and thus, the range of measurable exposure times is rather limited (e.g., 1/125–2 s for the turntable and 1/10,000–1/125 s for the CRT [22]). To provide more flexible measurements, special equipment was designed to measure the timing properties of cameras. The principle of taking a photograph of moving objects remains the same, but the role of the moving source is played by an array of blinking LEDs, as shown in Fig. 1(d). The array may have different forms: the equipment proposed in [26] utilizes five LED stripes, each containing 100 LEDs, while in the commercial equipment mentioned in [27], a 10
A new accurate and simple solution was proposed in [28], which requires minimal hardware support: only a signal generator is required, which drives an LED with a 50% duty-cycle square wave, to provide input for the camera. The camera is used in video mode, where a series of images of the blinking LED is recorded using equivalent sampling [29]. The exposure time is determined from these images using the known frequency of the signal generator. The measurement method is shown in Fig. 2.
In this article, a novel automatic estimation method is proposed to complement [28], based on accurate estimates of the measured signal’s segment boundaries, obtained by linear regression. A detailed error analysis of the proposed estimate is provided. In addition, multiple upgrades are proposed to improve the accuracy of the estimates.
The outline of this article is given as follows. In Section II, the proposed method is reviewed. First, the sampling model of the camera is discussed, followed by the introduction of the measurement method using equivalent sampling. A novel automatic estimation procedure is proposed, along with methods for improving the accuracy. Section III contains the error analysis of the proposed method. In Section IV, measurement results validate the proposed method.
Exposure Time Measurement
A. Camera Sampling Model
The camera sampling model is shown in Fig. 3. The sampling process of the camera can be modeled as a combination of integral sampling [30] and nonlinear saturation. The input light intensity is denoted by x(t), the output sample at time t_{k} by x_{s}(t_{k}), the gain of the camera by \alpha, the exposure time by S, and the sensor nonlinearity by \Gamma.
Ideally, the nonlinearity \Gamma is linear over the whole operating range of the sensor; in practice, however, the sensor saturates at high intensities and may behave nonlinearly at low intensities as well.
Using the notations of Fig. 3, the general operation model of the camera is the following:\begin{equation*} x_{s}\left ({t_{k} }\right) = \Gamma \left ({\alpha \int _{t_{k}-\frac {S}{2}}^{t_{k}+\frac {S}{2}} {x\left ({\tau }\right)d\tau } }\right)\tag{1}\end{equation*}
For cameras that are linear from 0 up to the saturation level A_{\mathrm {max}}, the output can be written as \begin{equation*} x_{s}\left ({t_{k} }\right)=\min \left ({\alpha \int _{t_{k}-\frac {S}{2}}^{t_{k}+\frac {S}{2}} {x\left ({\tau }\right)d\tau },A_{\mathrm {max}} }\right).\tag{2}\end{equation*}
Finally, if the camera is operated in its linear range, the output simplifies to \begin{equation*} x_{s}\left ({t_{k} }\right)=\alpha _{0}+\alpha ^{\prime }\int _{t_{k}-\frac {S}{2}}^{t_{k}+\frac {S}{2}} {x\left ({\tau }\right)d\tau }\tag{3}\end{equation*} where \alpha _{0} and \alpha ^{\prime } are constants of the linear model.
In the sequel, let us assume that the camera is operated in this linear range, so that model (3) applies.
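The camera model (1) and (2) can be illustrated numerically with a minimal sketch; the function name, parameter values, and grid size below are assumptions for illustration, not part of the article:

```python
import numpy as np

def camera_sample(x, t_k, S, alpha=1.0, A_max=1.0, n_grid=1000):
    """Integral sampling per (1) with the saturation of (2): average the
    light intensity over the exposure window, apply the gain, clip at A_max."""
    tau = np.linspace(t_k - S / 2, t_k + S / 2, n_grid)
    integral = np.mean(x(tau)) * S       # simple numerical integration
    return min(alpha * integral, A_max)  # saturation nonlinearity

# constant unit intensity, 10 ms exposure, gain 50 -> output 0.5 (linear range)
linear = camera_sample(lambda t: np.ones_like(t), 0.0, 0.01, alpha=50.0)
# gain 500 would give 5.0, but the output saturates at A_max = 1.0
saturated = camera_sample(lambda t: np.ones_like(t), 0.0, 0.01, alpha=500.0)
print(linear, saturated)
```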
B. Equivalent Sampling-Based Measurement
Let the input signal be periodic with period P; the sampled output is then also periodic \begin{equation*} x_{s}\left ({t+P }\right)=x_{s}\left ({t }\right).\tag{4}\end{equation*}
Let the camera’s sampling period be T_{S}, expressed as \begin{equation*} T_{S}=nP+\Delta t\tag{5}\end{equation*} where n is a nonnegative integer and 0\le \Delta t<P. Then
\begin{equation*} x_{s}\left ({t+T_{S} }\right)=x_{s}\left ({t+nP+\Delta t }\right)=x_{S}\left ({t+\Delta t }\right).\tag{6}\end{equation*}
According to (6), each sample is taken \Delta t later within the signal’s period than the previous one. Thus, the periodic signal is effectively scanned with the equivalent sampling time \Delta t, which can be much smaller than the camera’s sampling period T_{S}.
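Equivalent sampling according to (5) and (6) can be demonstrated with a short sketch; the signal and timing values below are illustrative, and times are kept as integer microseconds so that the modular arithmetic is exact:

```python
# times in integer microseconds so that the modular arithmetic is exact
P = 10_000             # signal period: 10 ms
n, dt = 3, 100         # camera period T_S = n*P + dt = 30.1 ms
T_S = n * P + dt

def square(t_us):
    """50% duty-cycle square wave with period P (ON in the first half)."""
    return 1 if (t_us % P) < P // 2 else 0

frames = [square(k * T_S) for k in range(200)]  # what the camera records
scan = [square(k * dt) for k in range(200)]     # slow scan with step dt
print(frames == scan)  # True: the camera effectively samples with step dt
```

The recorded sequence is identical to a slow scan of one signal period with step \Delta t, with N_P = P/\Delta t = 100 samples per equivalent period.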
C. Estimation of the Exposure Time
Let us express the number of equivalent samples covering the exposure time as \begin{equation*} N_{S}=\left \lfloor{ \frac {S}{\Delta t} }\right \rfloor\tag{7}\end{equation*} and the number of samples covering one period as \begin{equation*} N_{P}=\left \lfloor{ \frac {P}{\Delta t} }\right \rfloor.\tag{8}\end{equation*}
From (7) and (8), the exposure time can be estimated as \begin{equation*} \hat {S}\cong N_{S}\Delta t\cong N_{S}\frac {P}{N_{P}}=P\frac {N_{S}}{N_{P}}.\tag{9}\end{equation*}
Notice that the smaller \Delta t is, the larger N_{S} and N_{P} become, and thus the finer the resolution of estimate (9).
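The counting-based estimate (9) can be sketched as follows, on an idealized noise-free trapezoidal trace; the segment layout and all numeric values are illustrative assumptions:

```python
# Sketch of estimate (9): count the equivalent samples on a rising edge
# (N_S) and in a full period (N_P), then estimate S ~ P * N_S / N_P.

P = 0.02              # generator period: 20 ms
N_P, N_S = 100, 20    # true samples per period / per edge (true S = 4 ms)

def trapezoid(k):
    """One period of the equivalent-sampled signal: rise, high, fall, low."""
    i = k % N_P
    if i < N_S:
        return i / N_S                      # rising edge
    if i < N_P // 2:
        return 1.0                          # high level
    if i < N_P // 2 + N_S:
        return 1.0 - (i - N_P // 2) / N_S   # falling edge
    return 0.0                              # low level

trace = [trapezoid(k) for k in range(N_P)]

# count samples strictly between the two levels on the rising edge
ns = sum(1 for y in trace[:N_P // 2] if 0.0 < y < 1.0)
s_hat = P * ns / N_P
print(s_hat)  # within one equivalent sampling step of the true 4 ms
```

The boundary samples fall exactly on the flat levels and are not counted, so the estimate is off by roughly one equivalent sampling step, illustrating the resolution limit noted above.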
D. Precise Estimation of Model Parameters
Estimate (9) relies on the accurate count of parameters N_{S} and N_{P}. In the presence of measurement noise, however, the exact counting of the samples is difficult, especially near the segment boundaries. A more precise estimate can be obtained if the segment boundaries are determined by linear regression, as follows.
First, let us segment the measured signal into four regions, according to Fig. 5. If the segment boundaries are not clear, ignore the samples in the uncertain regions. In each segment, let y_{i} denote the measured samples and N_{i} their number. The amplitude of the lower horizontal part is estimated as \begin{equation*} A_{1}=\frac {1}{N_{1}}\sum _{i=1}^{N_{1}} y_{i}.\tag{10}\end{equation*}
Input square wave with period P and the resulting sampled intensity signal.
Similarly, the amplitude of the higher horizontal part (see Fig. 5) is estimated as \begin{equation*} A_{2}=\frac {1}{N_{2}}\sum _{i=1}^{N_{2}} y_{i}.\tag{11}\end{equation*}
The slopes are estimated using linear regression [31]. The line fitted to the rising edge is \begin{equation*} \hat {y}=m_{1}x+b_{1}.\tag{12}\end{equation*}
Similarly, the line fitted to the falling edge is \begin{equation*} \hat {y}=m_{2}x+b_{2}.\tag{13}\end{equation*}
Let us calculate the intersection points (i.e., the segments’ boundaries). Boundary X_{1} is the intersection of the rising line with level A_{1}: \begin{equation*} X_{1}=\frac {A_{1}}{m_{1}}-\frac {b_{1}}{m_{1}}.\tag{14}\end{equation*}
Similarly, X_{2} is the intersection of the rising line with level A_{2}, while X_{3} is the intersection of the falling line with level A_{2}: \begin{align*} X_{2}=&\frac {A_{2}}{m_{1}}-\frac {b_{1}}{m_{1}} \tag{15}\\ X_{3}=&\frac {A_{2}}{m_{2}}-\frac {b_{2}}{m_{2}}.\tag{16}\end{align*}
Notice that X_{2}-X_{1} corresponds to the exposure time S, while X_{3}-X_{1} corresponds to the half period P/2. Thus, the linear regression (LR)-based estimate of the exposure time is \begin{equation*} \hat {S}_{\mathrm {LR}}=\frac {X_{2}-X_{1}}{2\left ({X_{3}-X_{1} }\right)}P.\tag{17}\end{equation*}
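A minimal sketch of the LR estimator (10)–(17), applied to a synthetic noisy trapezoid; for simplicity the segment boundaries are assumed known here (in practice they must be located first), and all numeric values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
P, N_P, N_S = 0.02, 100, 20     # true exposure: S = P * N_S / N_P = 4 ms
k = np.arange(N_P)
clean = np.interp(k, [0, N_S, N_P // 2, N_P // 2 + N_S, N_P - 1],
                  [0.0, 1.0, 1.0, 0.0, 0.0])
y = clean + rng.normal(0.0, 0.01, N_P)          # noisy trapezoid trace

A1 = y[N_P // 2 + N_S + 2 : N_P - 1].mean()     # lower level, cf. (10)
A2 = y[N_S + 2 : N_P // 2 - 1].mean()           # upper level, cf. (11)

m1, b1 = np.polyfit(k[1:N_S], y[1:N_S], 1)      # rising edge line (12)
m2, b2 = np.polyfit(k[N_P // 2 + 1 : N_P // 2 + N_S],
                    y[N_P // 2 + 1 : N_P // 2 + N_S], 1)  # falling line (13)

X1 = (A1 - b1) / m1     # (14)
X2 = (A2 - b1) / m1     # (15)
X3 = (A2 - b2) / m2     # (16)
S_LR = (X2 - X1) / (2 * (X3 - X1)) * P          # (17)
print(S_LR)             # close to the true 0.004 s
```

Because the boundaries are fitted from many samples, the estimate is no longer limited to integer multiples of \Delta t.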
E. Measurement Procedure
The proposed measurement method is summarized as follows.
Step 1: Create a square wave with period length according to (5). Drive the LED with the square wave. Adjust the camera settings (aperture) or the gain of the LED driver so that the photograph of the LED does not saturate the camera. Stabilize both the camera and the LED so that the image of the LED does not move on the photograph.

Step 2: Observe the output video stream of the camera. The LED on the video should blink with low frequency. Adjust the generator frequency to provide as low a blinking frequency on the image as possible (the equivalent blinking period may be as high as several tens of seconds, resulting in several hundreds of samples in a period). Read the generator frequency f_{\mathrm {GEN}}=1/P.

Step 3: Record the video stream. The record should contain at least one period (notice that the period length was already observed in Step 2).

Step 4: Extract the light intensity function x_{s}(k) from the video stream, using the same pixel in each frame, located in the center of the LED’s image.

Step 5: Count N_{S} and N_{P}. Estimate the exposure time using (9).

Step 6: Observe the rising and falling edges in the record. If they are fairly linear, calculate X_{1}, X_{2}, and X_{3} using linear regression, and use the LR estimator (17).
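Step 4 can be sketched as follows; in a real measurement, `frames` would be the decoded video frames, while here a synthetic stack stands in for them (all names, sizes, and values are assumptions):

```python
import numpy as np

def extract_series(frames, row, col):
    """Time-intensity function x_s(k): the same pixel read in every frame."""
    return np.array([f[row, col] for f in frames])

# synthetic 480x640 grayscale frames of a blinking LED (OFF, ON, ON, OFF)
frames = [np.full((480, 640), v, dtype=np.uint8) for v in (10, 200, 200, 10)]
x_s = extract_series(frames, row=240, col=320)
print(x_s.tolist())  # [10, 200, 200, 10]
```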
F. Improving Measurement Accuracy
The measured intensity signal contains noise (originating mainly from the sensor noise). The measurement noise can be decreased if the outputs of multiple pixels are averaged. If the measured camera has a global shutter, pixels from any region of the photograph can be selected (e.g., the region where the image of the LED is located). In the case of a rolling-shutter camera, pixels from a single row must be selected, since the exposures of different rows are shifted in time.
A larger LED image allows the averaging of a larger number of pixels. To provide a larger image, a diffusor can be placed between the LED and the camera.
Certain cameras show strong nonlinearity in the low-intensity region. Thus, it is advisable to use an input signal where the OFF state is not completely dark but produces significant sensor output (e.g., 10% of the full scale). Similarly, care must be taken to avoid saturation of the sensor (e.g., the ON state should produce approximately 90% of the full scale). These rules apply to all pixels if the averaging process is applied.
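The averaging described above can be sketched as follows; the frame size, region indices, and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
frame = 100.0 + rng.normal(0.0, 5.0, size=(480, 640))   # noisy synthetic frame

# global shutter: any 2-D region over the LED image may be averaged
global_value = frame[200:210, 300:310].mean()           # 10 x 10 pixels
# rolling shutter: one row only, since rows are exposed at shifted times
rolling_value = frame[205, 300:400].mean()              # 100 pixels, one row

print(global_value, rolling_value)  # both close to 100, noise reduced ~10x
```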
Error Analysis
Values X_{1}, X_{2}, and X_{3} are computed from noisy estimates and are thus random variables. Using a first-order approximation of (14), the variation of X_{1} is \begin{equation*} \Delta X_{1}\cong -X_{1}\frac {\Delta m_{1}}{m_{1}}-\frac {\Delta b_{1}}{m_{1}}+\frac {\Delta A_{1}}{m_{1}}.\tag{18}\end{equation*}
Similarly, the variation of X_{2} is \begin{equation*} \Delta X_{2}\cong -X_{2}\frac {\Delta m_{1}}{m_{1}}-\frac {\Delta b_{1}}{m_{1}}+\frac {\Delta A_{2}}{m_{1}}.\tag{19}\end{equation*}
For the sake of simplicity, but without loss of generality, let us set the coordinate system so that X_{1}=0, and thus X_{2}\cong S. Then \begin{align*} \Delta X_{1}\cong&- \frac {\Delta b_{1}}{m_{1}}+\frac {\Delta A_{1}}{m_{1}} \tag{20}\\ \Delta X_{2}\cong&- S\frac {\Delta m_{1}}{m_{1}}-\frac {\Delta b_{1}}{m_{1}}+\frac {\Delta A_{2}}{m_{1}}.\tag{21}\end{align*}
Since the noise is zero mean, the variance of X_{1} is \begin{align*} \mathrm {var}\,X_{1}=&E\left \{{\Delta X_{1}^{2} }\right \}\cong E\left \{{\left ({-\frac {\Delta b_{1}}{m}+\frac {\Delta A_{1}}{m} }\right)^{2} }\right \} \\=&\frac {1}{m^{2}}\mathrm {var}\,b_{1}+\frac {1}{m^{2}}\mathrm {var}\,A_{1}-\frac {2}{m^{2}}\mathrm {cov}\left ({b_{1},A_{1} }\right).\tag{22}\end{align*}
Since the amplitude estimates A_{1} and A_{2} and the regression parameters are calculated from disjoint sets of samples, they are independent, and the corresponding covariances vanish. The variances of the amplitude estimates are \begin{equation*} \mathrm {var}\,A_{i}=\frac {s_{i}^{2}}{N_{i}},\quad i =1, 2\tag{23}\end{equation*} where s_{i}^{2} is the noise variance and N_{i} is the number of samples in segment i.
Substituting (58) for \mathrm {var}\,b_{1}, where s_{3}^{2} and N_{3} denote the noise variance and the number of samples on the edge, \begin{equation*} \mathrm {var}\,X_{1}\cong \frac {4s_{3}^{2}}{m^{2}N_{3}}+ \frac {s_{1}^{2}}{m^{2}N_{1}}.\tag{24}\end{equation*}
Since b_{1} and m are correlated, the variance of X_{2} is derived from (21), using (57)–(59): \begin{align*} \mathrm {var}\,X_{2}=&E\left \{{\Delta X_{2}^{2} }\right \}=E\left \{{\left ({-S\frac {\Delta m}{m}-\frac {\Delta b_{1}}{m}+\frac {\Delta A_{2}}{m} }\right)^{2} }\right \} \\=&\frac {S^{2}\mathrm {var}\,m}{m^{2}}+\frac {\mathrm {var}\,b_{1}}{m^{2}}+\frac {\mathrm {var}\,A_{2}}{m^{2}}+\frac {2S}{m^{2}}\mathrm { cov}\left ({b_{1},m }\right) \\\cong&\frac {12s_{3}^{2}}{m^{2}N_{3}}+\frac {4s_{3}^{2}}{m^{2}N_{3}} +\frac {s_{2}^{2}}{m^{2}N_{2}} -\frac {12s_{3}^{2}}{m^{2}N_{3}} \\=&\frac {4s_{3}^{2}}{m^{2}N_{3}} +\frac {s_{2}^{2}}{m^{2}N_{2}}.\tag{25}\end{align*}
The variance of X_{3} is obtained in the same way: \begin{equation*} \mathrm {var}\,X_{3}\cong \frac {4s_{3}^{2}}{m^{2}N_{3}}+ \frac {s_{2}^{2}}{{m^{2}N}_{2}}.\tag{26}\end{equation*}
Since X_{1} and X_{2} share the estimates b_{1} and m_{1}, while X_{2} and X_{3} share A_{2}, the boundary estimates are correlated: \begin{align*}&\hspace {-2pc}\mathrm {cov}\left ({X_{1},X_{2} }\right) \\=&E\left \{{\Delta X_{1}\Delta X_{2} }\right \} \\=&E\left \{{\left ({-\frac {\Delta b_{1}}{m}+\frac {\Delta A_{1}}{m} }\right)\left ({-S\frac {\Delta m_{1}}{m}-\frac {\Delta b_{1}}{m}+\frac {\Delta A_{2}}{m} }\right) }\right \} \\=&\frac {S}{m^{2}}\mathrm {cov}\left ({b_{1},m_{1} }\right)+\frac {1}{m^{2}}\mathrm {var}\,b_{1} \\\cong&\frac {-6s_{3}^{2}}{m^{2}N_{3}}+ \frac {4s_{3}^{2}}{m^{2}N_{3}}= \frac {-2s_{3}^{2}}{m^{2}N_{3}} \tag{27}\\&\hspace {-2pc}\mathrm {cov}\left ({X_{2},X_{3} }\right) \\=&E\left \{{\Delta X_{2}\Delta X_{3} }\right \} \\=&E\left \{{\left ({-S\frac {\Delta m_{1}}{m}-\frac {\Delta b_{1}}{m}+\frac {\Delta A_{2}}{m} }\right)\left ({-\frac {\Delta b_{2}}{m}+\frac {\Delta A_{2}}{m} }\right) }\right \} \\=&\frac {1}{m^{2}}\mathrm {var}\,A_{2}\cong \frac {s_{2}^{2}}{m^{2}N_{2}}.\tag{28}\end{align*}
Since X_{1} and X_{3} have no common parameters \begin{equation*} \mathrm {cov}\left ({X_{1},X_{3} }\right)=0.\tag{29}\end{equation*}
The exposure time estimate is given by (17); thus, the uncertainty of \hat {S}_{\mathrm {LR}} can be approximated as \begin{align*} \Delta \hat {S}_{\mathrm {LR}}\cong&\frac {\partial \hat {S}}{\partial X_{1}}\Delta X_{1}+\frac {\partial \hat {S}}{\partial X_{2}}\Delta X_{2}+\frac {\partial \hat {S}}{\partial X_{3}}\Delta X_{3}+\frac {\partial \hat {S}}{\partial P}\Delta P \\=&- \frac {P}{2}\frac {X_{3}-X_{2}}{\left ({X_{3}-X_{1} }\right)^{2}}\Delta X_{1}+\frac {P}{2}\frac {X_{3}-X_{1}}{\left ({X_{3}-X_{1} }\right)^{2}}\Delta X_{2} \\&-\,\frac {P}{2}\frac {X_{2}-X_{1}}{\left ({X_{3}-X_{1} }\right)^{2}}\Delta X_{3}+\frac {S}{P}\Delta P.\tag{30}\end{align*}
Using the notations \begin{equation*} A=\frac {X_{3}-X_{2}}{\left ({X_{3}-X_{1} }\right)^{2}},B=\frac {X_{3}-X_{1}}{\left ({X_{3}-X_{1} }\right)^{2}},C=\frac {X_{2}-X_{1}}{\left ({X_{3}-X_{1} }\right)^{2}}\tag{31}\end{equation*}
the variance of the estimate becomes \begin{align*}&\hspace {-1.8pc}\mathrm {var}\,\hat {S}_{\mathrm {LR}} \\=&E\left \{{{\Delta \hat {S}_{\mathrm {LR}}}^{2} }\right \} \\=&E\left \{{\left ({-\frac {PA}{2}\Delta X_{1}+\frac {PB}{2}\Delta X_{2}-\frac {PC}{2}\Delta X_{3}+\frac {S}{P}\Delta P }\right)^{2} }\right \} \\=&\frac {P^{2}A^{2}}{4}\mathrm {var}\,X_{1}+\frac {P^{2}B^{2}}{4}\mathrm {var}\,X_{2}+\frac {P^{2}C^{2}}{4}\mathrm {var}\,X_{3} \\&+\,\frac {S^{2}}{P^{2}}\mathrm {var}\,P-\frac {ABP^{2}}{2}\mathrm {cov}\left ({X_{1},X_{2} }\right) \\&-\,\frac {BCP^{2}}{2}\mathrm {cov}\left ({X_{2},X_{3} }\right)+\frac {ACP^{2}}{2}\mathrm {cov}\left ({X_{1},X_{3} }\right) \\&+\,S\left ({-A\mathrm {cov}\left ({X_{1},P }\right)+B\mathrm {cov}\left ({X_{2},P }\right)-C\mathrm {cov}\left ({X_{3},P }\right) }\right). \\\tag{32}\end{align*}
Since the estimates of X_{1}, X_{2}, and X_{3} are independent of the estimate of P, the covariance terms containing P vanish. Substituting (24)–(29) into (32) yields \begin{align*}&\hspace {-2pc}\mathrm {var}\,\hat {S}_{\mathrm {LR}} \\=&\frac {s_{1}^{2}P^{2}}{4}\frac {A^{2}}{N_{1}m^{2}} +\frac {s_{2}^{2}P^{2}}{4}\frac {\left ({B +C }\right)^{2}}{N_{2}m^{2}} \\&+\,s_{3}^{2}P^{2} \frac {A^{2}+B^{2}+C^{2}+AB}{N_{3}m^{2}}+s_{p}^{2}\frac {S^{2}}{P^{2}}.\tag{33}\end{align*}
In (33), parameter m can be expressed with the equivalent sampling time \Delta t as \begin{equation*} m \cong \frac {A_{2}-A_{1}}{S}\cong \frac {A_{2}-A_{1}}{N_{2}}\frac {1}{\Delta t}=m^{\prime }\frac {1}{\Delta t}\tag{34}\end{equation*}
\begin{align*} A=&\frac {X_{3}-X_{2}}{\left ({X_{3}-X_{1} }\right)^{2}}\cong \frac {N_{3}\Delta t}{\left ({N_{2}\Delta t +N_{3}\Delta t }\right)^{2}} \\=&\frac {1}{\Delta t}\frac {N_{3}}{\left ({N_{2}+N_{3} }\right)^{2}}=\frac {A^{\prime }}{\Delta t}.\tag{35}\end{align*}
\begin{equation*} B \cong \frac {B^{\prime }}{\Delta t},\quad C \cong \frac {C^{\prime }}{\Delta t}\tag{36}\end{equation*}
where \begin{align*} A^{\prime }=&\frac {N_{3}}{\left ({N_{2}+N_{3} }\right)^{2}},\quad B^{\prime }=\frac {1}{N_{2}+N_{3}} \\ C^{\prime }=&\frac {N_{2}}{\left ({N_{2}+N_{3} }\right)^{2}},\quad m^{\prime }=\frac {A_{2}-A_{1}}{N_{2}}.\tag{37}\end{align*}
Finally, the ratio S/P can be written as \begin{equation*} \frac {S}{P}\cong \frac {N_{2}}{2\left ({N_{2}+N_{3} }\right)}=\frac {N_{2}}{2}B^{\prime }.\tag{38}\end{equation*}
With these notations, (33) becomes \begin{align*} \mathrm {var}\,\hat {S}_{\mathrm {LR}}\cong&\frac {s_{1}^{2}P^{2}}{4}\frac {A^{\prime ^{2}}} {N_{1}m^{\prime ^{2}}}+\frac {s_{2}^{2} P^{2}}{4}\frac {\left ({B^{\prime }+C^{\prime } }\right)^{2}}{N_{2}m^{\prime ^{2}}} \\&+\,s_{3}^{2}P^{2}\frac {A^{\prime ^{2}} +B^{\prime ^{2}}+C^{\prime ^{2}} +A^{\prime }B^{\prime }}{N_{3}m^{\prime ^{2}}} +s_{p}^{2}\frac {B^{\prime ^{2}}N_{2}^{2}} {4}\tag{39}\end{align*}
where the noise variances are estimated from the measured samples as \begin{align*} s_{1}^{2}\cong&\frac {1}{N_{1}-1}\sum _{i =1}^{N_{1}} \left ({y_{i}-A_{1} }\right)^{2} \tag{40}\\ s_{2}^{2}\cong&\frac {1}{N_{2}-1}\sum _{i =1}^{N_{2}} \left ({y_{i}-A_{2} }\right)^{2} \tag{41}\\ s_{3}^{2}\cong&\frac {1}{N_{3}-2}\sum _{i =1}^{N_{3}} \left ({y_{i}-\hat {y}_{i} }\right)^{2}.\tag{42}\end{align*}
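The theoretical standard deviation (39) can be evaluated numerically; the sketch below implements (37) and (39) directly, with placeholder input values rather than measured data:

```python
import numpy as np

def var_S_LR(P, N1, N2, N3, s1, s2, s3, sp, A1, A2):
    """Evaluate (39) using the primed parameters of (37)."""
    Ap = N3 / (N2 + N3) ** 2          # A'
    Bp = 1.0 / (N2 + N3)              # B'
    Cp = N2 / (N2 + N3) ** 2          # C'
    mp = (A2 - A1) / N2               # m', cf. (34)
    return (s1 ** 2 * P ** 2 / 4 * Ap ** 2 / (N1 * mp ** 2)
            + s2 ** 2 * P ** 2 / 4 * (Bp + Cp) ** 2 / (N2 * mp ** 2)
            + s3 ** 2 * P ** 2
              * (Ap ** 2 + Bp ** 2 + Cp ** 2 + Ap * Bp) / (N3 * mp ** 2)
            + sp ** 2 * Bp ** 2 * N2 ** 2 / 4)

# illustrative inputs: 20 ms period, assumed segment sizes and noise levels
std = float(np.sqrt(var_S_LR(P=0.02, N1=30, N2=20, N3=30,
                             s1=0.01, s2=0.01, s3=0.01, sp=0.0,
                             A1=0.1, A2=0.9)))
print(std)  # theoretical standard deviation of the estimate, in seconds
```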
Experiments
A. Measurement Setup
The measurement hardware is shown in Fig. 6. The LED source was attached to the camera through a 3-D-printed enclosure, and thus, external disturbances were eliminated during the experiments and the stable relative positioning of the camera and the light source was guaranteed. The signal generator was implemented on an Arduino Due board. The blinking frequency was tunable in steps of approximately 5
Measurement setup: a camera with the attached LED source, and the signal generator implemented on an Arduino Due microcontroller unit.
The generator’s frequency was chosen according to (5). In our tests, 30- and 60-fps sampling frequencies were used, and thus, the signal generator’s frequency was chosen to be close to an integer multiple of the frame rate.
B. Target Cameras
During the measurements, two cameras were used. Camera C1 was a high-quality industrial machine vision camera GS3-U3-23S6M produced by FLIR [32]. The camera’s software is able to report the exact actual exposure time, with possible values ranging from 5
C. Reference Measurements
A reference running LED measurement setup was used to provide measured exposure values with known accuracy. The method is similar to the one illustrated in Fig. 1(d) but uses multiple timers to provide higher accuracy [22]. The resolution of our device was 1
In our solution, the LEDs were used in the binary mode: an LED was considered in the ON state if its detected light intensity on the photograph was higher than the maximal noise level of the OFF state; otherwise, it was considered in the OFF state. (The value of the detected intensity was not used to improve the accuracy.) Thus, the reference measurements’ uncertainty, resulting from the binary measurements and the resolution of the device, is bounded by
D. Measurement Results
An example measurement can be seen in Fig. 7, where the exposure time of camera C1 was set to 98
Measurement using C1 with nominal exposure time of 98
In the inset of Fig. 7, the average of 100 pixels is also shown in green, as proposed in Section II-F (the signal is shifted vertically, for better visibility). The noise level is clearly much lower in this case. From this signal,
Camera C1 was tested using exposure times starting from 8
The uncertainty of the reference measurements was maximum
Interestingly, the reported and measured exposure times show a constant bias of
Various forms of the equivalent sampling-based estimates were compared, as shown in Table II.
The results clearly show that all of the estimates perform very well, but the accuracy of
Comparing the results of the single and multi-LED measurements in Table II, the following conclusions can be drawn: using multiple pixels improves the accuracy of estimate
Camera C2 was tested with 60 fps and exposure time settings between −1 and −12 (setting 0 did not work with 60 fps). Similar to C1, the mean and standard deviation of the estimates were calculated from ten measurements. The test results are shown in Table III. The uncertainty of all the reference measurements was again the same, i.e., less than
Taking into account the test results in Tables I–III, the accuracy of the proposed linear regression-based method can be summarized as follows: the absolute error below 1 ms was maximum
The last column of Table III contains the theoretical standard deviation values as well. These values were calculated using (39), where the noise parameters
In the above measurements, care was taken to operate the cameras in their linear operating range (see Fig. 7). The undesired effects of nonlinear behavior are shown in Fig. 8. The measurements were made using camera C2, with 60 fps and shutter speed setting −12. The line labeled “single point—good” shows a measurement, which is in the linear operating range of the camera. Measurement “single point—too high” was saturated, and its effect is clearly visible: the rising edge ends sooner and the falling edge starts later, and thus, the edges are measured to be shorter. This camera showed nonlinear behavior at the low-intensity region, too: measurement “single point—too low” shows that the sensor did not react to low light intensities. In this case, the rising edge starts later, and the falling edge ends sooner; thus, the edges seem to be shorter again.
Effect of nonlinear behavior of C2. Measurements were made using exposure time setting −12 (approximately 32
Fig. 8 also shows the nonlinear effects when a set of pixels (a box of size 3
Summary
In this article, a novel method was proposed to measure the exposure time of digital cameras. During the measurement, a sequence of photographs (a video stream) is recorded, while the target image is a blinking LED. The frequency of the LED is chosen so that the resulting equivalent sampling allows good temporal resolution. If the blinking frequency is known, then the exposure time can be determined from the recorded time-intensity function of a single pixel or the average of a set of pixels. The measurement procedure and the estimation method of the exposure time were introduced in detail, along with methods to increase the accuracy of the measurement procedure. A linear regression-based automatic estimate was also proposed, increasing both the resolution and the precision of the estimate. Further advantages of the proposed method are its simplicity, compared to the previous least-squares (LS) estimate [28], and the fact that the behavior of the estimate can be analyzed. The error analysis of the method was also presented in detail.
The applicability of the proposed measurement method was illustrated through measurement examples, where a high-end industrial machine vision camera and an inexpensive camera were tested. The proposed technique was compared to a well-known method where a photograph is taken of an array of blinking LEDs, using a device similar to [26] and [27]. Since the accuracy of the reference method was known, the uncertainty of the proposed method could be determined. According to the tests, the uncertainty of the proposed method was maximum
Appendix
Variances and Covariance of the Linear Regression Coefficients
The variances and covariance of the linear regression coefficients can be derived as follows. Let the measured points be \begin{equation*} y_{i}=b+mx_{i}+n_{i}\tag{43}\end{equation*} where n_{i} is zero-mean measurement noise with variance s^{2}.
The LR estimates of the slope and the intercept are \begin{align*} \hat {m}=&\frac {\sum _{i =1}^{N} {\left ({x_{i}-\bar {x} }\right)\left ({y_{i}-\bar {y} }\right)} }{\sum _{i =1}^{N} \left ({x_{i}-\bar {x} }\right)^{2}} \\=&\frac {\sum _{i =1}^{N} {\left ({x_{i}-\bar {x} }\right)y_{i}}}{\sum _{i =1}^{N} \left ({x_{i}-\bar {x} }\right)^{2} }=\sum _{i =1}^{N} {c_{i}y_{i}} \tag{44}\\ \hat {b}=&\bar {y}-\hat {m}\bar {x}\tag{45}\end{align*}
where \begin{align*} c_{i}=&\frac {x_{i}-\bar {x}}{S_{xx}} \tag{46}\\ S_{xx}=&\sum _{i} \left ({x_{i}-\bar {x} }\right)^{2}.\tag{47}\end{align*}
The variance of \hat {m} is \begin{equation*} \mathrm {var}\,\hat {m}=\frac {s^{2}}{S_{xx}}.\tag{48}\end{equation*}
From (45), the variance of \hat {b} is \begin{equation*} \mathrm {var}\,\hat {b}=\text {var}\,\bar {y}+\bar {x}^{2}\mathrm {var}\,\hat {m}-2\bar {x}\mathrm {cov}\left ({\bar {y},\hat {m} }\right).\tag{49}\end{equation*}
Using (44) and (46), the term \mathrm {cov}\left ({\bar {y},\hat {m} }\right) can be expressed as \begin{align*} \mathrm {cov}\left ({\bar {y},\hat {m} }\right)=E\left \{{ \Delta \bar {y}\Delta \hat {m} }\right \}=E\left \{{ \left ({\frac {1}{N}\sum _{i =1}^{N} n_{i} }\right)\left ({\sum _{j =1}^{N} {c_{j}n_{j}} }\right) }\right \}. \\\tag{50}\end{align*}
If the measurement noise is uncorrelated (i.e., E\left \{{n_{i}n_{j} }\right \}=0 for i\ne j), then, since \sum _{i}c_{i}=0, \begin{equation*} \mathrm {cov}\left ({\bar {y},\hat {m} }\right)=\frac {1}{N}\sum _{i =1}^{N} c_{i} E\left \{{n_{i}^{2} }\right \}=\frac {s^{2}}{N}\sum _{i =1}^{N} c_{i} = 0.\tag{51}\end{equation*}
Thus, (49) becomes \begin{equation*} \mathrm {var}\,\hat {b}=\frac {s^{2}}{N}+\bar {x}^{2}\mathrm {var}\,\hat {m}=\frac {s^{2}}{N}+\bar {x}^{2}\frac {s^{2}}{S_{xx}}.\tag{52}\end{equation*}
The covariance of \hat {m} and \hat {b} is \begin{align*} \mathrm {cov}\left ({\hat {m}, \hat {b} }\right)=&E\left \{{\Delta \hat {m}\Delta \hat {b} }\right \} \\=&E\left \{{\Delta \hat {m} (\Delta \bar {y}-\bar {x}\Delta \hat {m}) }\right \} \\=&\mathrm {cov}\left ({\bar {y},\hat {m} }\right)-\bar {x}\mathrm {var}\,\hat {m} = -\bar {x}\mathrm {var}\,\hat {m}.\tag{53}\end{align*}
Notice that S_{xx} can be approximated by assuming that the N samples of the edge are placed equidistantly along an interval of length S, so that \bar {x}\cong S/2 and x_{i}\cong iS/N: \begin{align*} S_{xx}=&\sum _{i =1}^{N} \left ({x_{i}-\bar {x} }\right)^{2} =\sum _{i =1}^{N} x_{i}^{2} +N\bar {x}^{2}-2\bar {x}\sum _{i =1}^{N} x_{i} \\=&\sum _{i =1}^{N} x_{i}^{2} +N\bar {x}^{2}-2\bar {x}N\bar {x}\cong \sum _{i =1}^{N} x_{i}^{2} -\frac {NS^{2}}{4} \\\cong&\sum _{i =1}^{N} \left ({i\frac {S}{N} }\right)^{2} -\frac {NS^{2}}{4}=\frac {S^{2}}{N^{2}}\sum _{i =1}^{N} i^{2} -\frac {NS^{2}}{4}.\tag{54}\end{align*}
Using the identity \begin{equation*} \sum _{i =1}^{N} i^{2} =\frac {N(N + 1)(2 N + 1) }{6}\tag{55}\end{equation*}
we obtain \begin{equation*} S_{xx}\cong \frac {S^{2}}{N^{2}}\frac {N\left ({N + 1 }\right)\left ({2N + 1 }\right)}{6}-\frac {NS^{2}}{4}\cong \frac {NS^{2}}{12}.\tag{56}\end{equation*}
The approximation is valid if N is sufficiently large.
Using approximation (56), the variances and covariance are simplified as follows:\begin{align*} \mathrm {var}\,m\cong&\frac {12s^{2}}{NS^{2}} \tag{57}\\ \mathrm {var}\,b\cong&\frac {4s^{2}}{N} \tag{58}\\ \mathrm {cov}\left ({m,b }\right)\cong&- \frac {6s^{2}}{NS}.\tag{59}\end{align*}
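Approximations (57)–(59) can be checked by a small Monte Carlo sketch with equidistant sample points; all parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
N, S, s, reps = 200, 1.0, 0.1, 20000   # points per edge, length, noise std
x = np.arange(1, N + 1) * (S / N)      # equidistant abscissas over length S
xc = x - x.mean()
Sxx = (xc ** 2).sum()

noise = rng.normal(0.0, s, size=(reps, N))
y = 0.5 + 2.0 * x + noise              # true b = 0.5, m = 2.0
# slope per (44); the y-mean term drops out since sum(xc) = 0
m_hat = (y * xc).sum(axis=1) / Sxx
b_hat = y.mean(axis=1) - m_hat * x.mean()   # intercept per (45)

print(m_hat.var(), 12 * s ** 2 / (N * S ** 2))         # empirical vs. (57)
print(b_hat.var(), 4 * s ** 2 / N)                     # empirical vs. (58)
print(np.cov(m_hat, b_hat)[0, 1], -6 * s ** 2 / (N * S))  # vs. (59)
```

The empirical moments agree with (57)–(59) to within the Monte Carlo sampling error and the O(1/N) terms neglected in (56).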