
Automatic Measurement of Digital Cameras’ Exposure Time Using Equivalent Sampling


Abstract:

This article presents a novel method to measure the exposure time of digital cameras. The measurement relies on a sequence of images taken by the camera in video mode, using the effect of equivalent sampling. The hardware requirements of the proposed method are low: a signal generator drives an LED source and the blinking LED is recorded by the camera under test. The exposure time is determined from the recorded images and the frequency of the generator. The detailed error analysis of the proposed method is provided, and its performance is validated by real measurements. The measurements indicate that the proposed solution is able to provide estimates with uncertainties in the low microsecond range.
Article Sequence Number: 5015110
Date of Publication: 27 June 2022

SECTION I.

Introduction

Camera-based measurement systems are used in a wide range of application fields. In indoor localization systems, the use of cameras is becoming widespread, e.g., [1], [2]. Vision systems are extensively used in robotics and industrial applications, e.g., to identify and locate objects, provide guidance, avoid obstacles, and increase safety [3]. In space technology, the position and orientation of space targets (e.g., satellites) can be estimated using cameras deployed on robot arms [4]. The speed of objects can be measured using image sequences taken with very low exposure time, to provide sharp images [5], or using a single image by extracting the properties of the image blur [6]. Fringe projection profilometry uses cameras, as sensors, to provide 3-D reconstruction of physical objects [7]. In order to provide precise measurements with camera-based systems, several applications require the calibration of the cameras: for matrix and line cameras, measurement methods were proposed in [8] and [9], respectively, while a self-calibration method was proposed for visual odometry systems [10].

Most of today’s handheld mobile devices are equipped with cameras, fostering the rapid development of optical camera communication (CamCom) systems. The IEEE standardization group 802.15.7 developed a standard for optical wireless communication [11], e.g., using blinking LED transmitters and cameras [12]. Such communication systems are utilized as services in many applications, e.g., wireless broadcast systems using LED luminaries [13] or indoor localization using LED beacons [2].

The control of the exposure time (often called shutter time or shutter speed) has a central role in several applications. In marker-based optical positioning, the exposure time has a direct effect on blurring and thus on accuracy [14]. In fringe projection profilometry, the exposure time must be carefully set in order to get accurate estimates [7]. In particle image velocimetry, cameras with extremely low exposure times are utilized [15]. In high dynamic range (HDR) imaging, multiple exposure time synthesis techniques are used to produce high-quality images, utilizing various fusion methods, e.g., gradient-based techniques [16] or multiscale edge-preserving smoothing [17]. CamCom methods may also be sensitive to the exact value of the exposure time, as was pointed out in [18], and thus, this camera parameter is an important design factor in various CamCom protocols [19].

Although the exposure time can be set in most cameras, the real shutter speed may (sometimes significantly) differ from the nominal value, and thus, the measurement of the real shutter speed may be necessary in demanding applications [20]. In some (mainly lower end) cameras, the shutter speed is unknown, and in this case, it must be measured.

Several solutions have been proposed to measure the timing properties of cameras. Standard ISO 516 defines the methods for shutter speed measurements, specifically for manufacturing testing and quality control [21]. These methods are suitable for cameras equipped with either mechanical or nonmechanical shutters but require the disassembly of the camera so that the focal plane is accessible. The principle of the measurement is straightforward: a constant illumination is provided in front of the lens, while the light intensity is measured behind the shutter (e.g., using a photodiode or phototransistor and an oscilloscope), as shown in Fig. 1(a). When the shutter is open, a high-intensity peak is detected, the width of which provides an estimate for the exposure time, with reasonable ($\approx 1\%$) accuracy [22]. Unfortunately, most digital cameras do not provide access to the focal plane, so the standard methods can only be used during manufacturing but cannot be used by the users.

Fig. 1. Traditional methods to measure exposure time of cameras. (a) Direct method. (b)–(d) Indirect methods by taking photographs of a moving target. (b) Ad hoc solution using a record player. (c) Ad hoc solution using a CRT screen. (d) Dedicated instrument using an LED array.

Other solutions use the photographs taken by the camera in normal operating conditions [see Fig. 1(b)–(d)]. Most of these methods use a moving object with known speed. The distance covered during the exposure time can be determined from the photograph, and thus, the exposure time can be calculated. A classical method uses a turntable, on which a line is placed in the radial direction, as shown in Fig. 1(b). From the angle swept by the line on the photograph and the rotational speed of the turntable, the exposure time can be calculated [23]. The idea was further improved in [24], where the moving object was replaced by a moving image on a computer screen, the speed of which was controlled by the generating software. With these methods, the achievable accuracy is moderate (1%–10%) [22].

Other solutions use moving light sources instead of physical objects. In cathode ray tube (CRT) monitors, an electron beam sweeps across the screen with a known refresh rate. A photograph taken of the screen contains a lighter area, which was covered by the electron beam during the exposure time, while the total size of the screen corresponds to the refresh time. From the ratio of these areas and the refresh rate, the exposure time can be calculated [23], with an accuracy of 1%–10% [22]. This method is shown in Fig. 1(c). A very similar approach uses an oscilloscope with dc input and automatic triggering mode to generate a sweeping light dot (seen as a horizontal line) on the scope's screen. The speed of the dot is controlled by the horizontal sweep setting of the oscilloscope. On the photograph taken by the camera, the moving dot creates a line segment, the length of which is proportional to the exposure time [25].

In the above ad hoc solutions, the speed of the moving object is given and can be configured either in a very limited range (radial speed of the turntable) or not at all (refresh rate of the monitor), and thus, the range of measurable exposure times is rather limited (e.g., 1/125 s to 2 s for the turntable and 1/10,000 s to 1/125 s for the CRT [22]). To provide more flexible measurements, special equipment was designed to measure the timing properties of cameras. The principle of taking a photograph of moving objects remains the same, but the role of the moving source is played by an array of blinking LEDs, as shown in Fig. 1(d). The array may have different forms: the equipment proposed in [26] utilizes five LED stripes, each containing 100 LEDs, while in the commercial equipment mentioned in [27], a $10\times10$ array of LEDs is used. Such equipment provides a wide measurement range with an accuracy of around 1% [22].

A new accurate and simple solution was proposed in [28], which requires minimal hardware support: only a signal generator is required, which drives an LED with a 50% duty-cycle square wave, to provide input for the camera. The camera is used in video mode, where a series of images of the blinking LED is recorded using equivalent sampling [29]. The exposure time is determined from these images using the known frequency of the signal generator. The measurement method is shown in Fig. 2.

Fig. 2. Blinking LED measurement method.

In this article, a novel automatic estimation method is proposed to complement [28], which estimates the measured signal's segment boundaries accurately using linear regression. A detailed error analysis of the proposed estimate is provided. In addition, several enhancements are proposed to improve the accuracy of the estimates.

The outline of this article is given as follows. In Section II, the proposed method is reviewed. First, the sampling model of the camera is discussed, followed by the introduction of the measurement method using equivalent sampling. A novel automatic estimation procedure is proposed, along with methods for improving the accuracy. Section III contains the error analysis of the proposed method. In Section IV, measurement results validate the proposed method.

SECTION II.

Exposure Time Measurement

A. Camera Sampling Model

The camera sampling model is shown in Fig. 3. The sampling process of the camera can be modeled as a combination of integral sampling [30] and nonlinear saturation. The input light intensity is denoted by $x(t)$, which is integrated by the sensor, while the shutter is open, for time $S$. The gain factor $\alpha$ represents the aggregate of various camera parameters (e.g., aperture and sensitivity). The integrated signal passes through a static nonlinearity $\Gamma$, saturating at $A_{\mathrm{max}}$, the maximum value the sensor can represent. Finally, the signal is sampled by pulse sampling.

Fig. 3. Camera sampling model.

Ideally, the nonlinearity $\Gamma$ consists of a linear ramp from 0 to $A_{\mathrm{max}}$ and a flat line above $A_{\mathrm{max}}$: this is the case for cameras that are linear before saturation. Some cameras may contain other nonlinearities as well (e.g., gamma distortion). The automatic estimator, proposed in Section II-D, can be applied only where the camera has a linear operating range, while the manual solution, proposed in Section II-C, can be used for any $\Gamma$.

Using the notations of Fig. 3, the general operation model of the camera is the following:

$$x_{s}(t_{k}) = \Gamma\left(\alpha \int_{t_{k}-\frac{S}{2}}^{t_{k}+\frac{S}{2}} x(\tau)\,d\tau\right) \tag{1}$$

where $S$ is the exposure time. Notice that in (1), for the sake of simplicity, the exposure instant $t_{k}$ is placed in the center of the exposure interval.

For cameras that are linear from 0 to $A_{\mathrm{max}}$, the output is the following:

$$x_{s}(t_{k}) = \min\left(\alpha \int_{t_{k}-\frac{S}{2}}^{t_{k}+\frac{S}{2}} x(\tau)\,d\tau,\; A_{\mathrm{max}}\right) \tag{2}$$

i.e., the integrated intensity is clipped so that the output cannot exceed $A_{\mathrm{max}}$.

Finally, if the camera is operated in any linear range, the output simplifies to

$$x_{s}(t_{k}) = \alpha_{0} + \alpha' \int_{t_{k}-\frac{S}{2}}^{t_{k}+\frac{S}{2}} x(\tau)\,d\tau \tag{3}$$

where $\alpha_{0}$ and $\alpha'$ describe the linear operating section of $\Gamma$.

Let $x(t)$ be a square wave signal with period $P$ and a duty cycle of 50%. This signal is produced by the blinking LED of Fig. 2. If the constraint $P/2 > S$ is fulfilled, then from (3), it follows that $x_{s}$ is a periodic trapezoid signal with period $P$, and the lengths of the rising and falling edges of $x_{s}$ are $S$, as shown in Fig. 4. For the sake of simplicity, in the following, we will refer to the rising edge, but because of the symmetry, either the rising or the falling edge could be used.
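To illustrate the model, the following Python sketch (added for illustration; the parameter values are assumptions, not taken from the original work) simulates the integral sampling of (3) on a 50% duty-cycle square wave and reproduces the trapezoid shape of Fig. 4:

```python
import numpy as np

# Illustrative parameters (assumed values); they satisfy P/2 > S.
P = 1e-3         # blinking period of the LED [s]
S = 2e-4         # exposure time [s]
fs = 1e7         # resolution of the "continuous" time grid [Hz]

t = np.arange(0.0, 2 * P, 1 / fs)
x = (np.mod(t, P) < P / 2).astype(float)      # square wave, 50% duty cycle

def integral_sample(t_k):
    """Model (3) with alpha_0 = 0, alpha' = 1: integrate x(t) over the
    exposure window [t_k - S/2, t_k + S/2]."""
    i0 = int((t_k - S / 2) * fs)
    i1 = int((t_k + S / 2) * fs)
    return x[i0:i1].sum() / fs

# Sampling one period densely shows the trapezoid of Fig. 4: flat bottom,
# linear rising edge of length S, flat top, linear falling edge of length S.
tk = np.linspace(S, 2 * P - S, 400)
xs = np.array([integral_sample(k) for k in tk])
```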

Fig. 4. Equivalent sampling of a periodic trapezoid signal.

B. Equivalent Sampling-Based Measurement

Let $x_{s}(t)$ be periodic with period length $P$:

$$x_{s}(t+P) = x_{s}(t). \tag{4}$$

Let the camera's sampling period be $T_{S}$ and let $n$ be a positive integer such that

$$T_{S} = nP + \Delta t \tag{5}$$

where $\Delta t$ can be much smaller than $T_{S}$. Let us express $x_{s}(t+T_{S})$, using (4) and (5):

$$x_{s}(t+T_{S}) = x_{s}(t+nP+\Delta t) = x_{s}(t+\Delta t). \tag{6}$$

According to (6), the sample $x_{s}(t+T_{S})$ is the same as $x_{s}(t+\Delta t)$, and thus, it appears as if $x_{s}(t)$ were sampled with a sampling period of $\Delta t$. The parameter $\Delta t$ is the equivalent sampling interval. The effect of equivalent sampling is shown in Fig. 4, for $n=1$.
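As a minimal numeric sketch of (5) (the values below are assumptions chosen for illustration): with a 30-fps camera and a generator tuned slightly above an integer multiple of the frame rate, $\Delta t$ falls in the microsecond range.

```python
T_S = 1 / 30.0     # camera sampling period [s] (30 fps)
f_gen = 30.025     # generator frequency [Hz], slightly above 1 x 30 Hz
P = 1 / f_gen      # blinking period of the LED [s]
n = 1              # integer number of blinking periods per frame, Eq. (5)
dt = T_S - n * P   # equivalent sampling interval: about 27.7 microseconds
N_P = P / dt       # about 1200 equivalent samples per blinking period
```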

C. Estimation of the Exposure Time

Let us express the number $N_{S}$ of samples on the rising edge as follows:

$$N_{S} = \left\lfloor \frac{S}{\Delta t} \right\rfloor \tag{7}$$

where $\lfloor a \rfloor$ is the integer part of $a$. Similarly, the number $N_{P}$ of samples in one period is the following:

$$N_{P} = \left\lfloor \frac{P}{\Delta t} \right\rfloor. \tag{8}$$

From (7) and (8), the exposure time can be estimated as

$$\hat{S} \cong N_{S}\,\Delta t \cong N_{S}\frac{P}{N_{P}} = P\frac{N_{S}}{N_{P}}. \tag{9}$$

Notice that the smaller $\Delta t$ (and the higher $N_{S}$ and $N_{P}$), the better the approximation in (9); thus, the blinking period $P$ is chosen so that $\Delta t$ in (5) is small, as will be detailed in Section II-E, Step 2.
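A sketch of the simple estimate (9) is given below; it assumes the intensity trace of exactly one period has already been extracted, and the 5% noise margin used to separate the flat levels from the edges is an assumption of this sketch, not part of the method:

```python
import numpy as np

def simple_exposure_estimate(xs, P):
    """Estimate the exposure time via (9) from one trapezoid period xs."""
    lo, hi = xs.min(), xs.max()
    margin = 0.05 * (hi - lo)          # assumed noise margin
    # Samples strictly between the two flat levels lie on the rising and
    # falling edges; by symmetry, half of them belong to the rising edge.
    on_edges = np.sum((xs > lo + margin) & (xs < hi - margin))
    N_S = on_edges / 2.0
    N_P = len(xs)
    return P * N_S / N_P               # estimate (9)
```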

D. Precise Estimation of Model Parameters

Estimate (9) relies on the accurate count of parameters $N_{S}$ and $N_{P}$, which is not straightforward when the measurement is noisy. Also, the lengths of these intervals may not be an integer number of samples. Thus, a more accurate automatic solution is proposed.

First, let us segment the measured signal into four regions, according to Fig. 5. If the segment boundaries are not clear, ignore the samples in the uncertain regions. In each segment $i$, the measured values are $y_{i}$ and their number is $N_{i}$. Let us estimate parameter $A_{1}$ (the amplitude of the lower horizontal part of the trapezoid signal, as shown in Fig. 5) with the mean of the measured data in region 1 as follows:

$$A_{1} = \frac{1}{N_{1}} \sum_{i=1}^{N_{1}} y_{i}. \tag{10}$$

Fig. 5. Input square wave with period $P$, sampled by the camera, with equivalent sampling interval of $\Delta t$.

Similarly, the amplitude of the higher horizontal part (see Fig. 5) is estimated as parameter $A_{2}$, using the mean of the $N_{2}$ measured values of the upper flat region:

$$A_{2} = \frac{1}{N_{2}} \sum_{i=1}^{N_{2}} y_{i}. \tag{11}$$

The slopes are estimated using linear regression [31]. Line $f_{1}$ is approximated in region 2, using parameters $b_{1}$ and $m_{1}$, as follows:

$$\hat{y} = m_{1}x + b_{1}. \tag{12}$$

Similarly, for line $f_{2}$:

$$\hat{y} = m_{2}x + b_{2}. \tag{13}$$

Let us calculate the intersection points (i.e., the segments' boundaries) $X_{1}$, $X_{2}$, and $X_{3}$, as shown in Fig. 5, as follows:

$$X_{1} = \frac{A_{1}}{m_{1}} - \frac{b_{1}}{m_{1}}. \tag{14}$$

Similarly, for $X_{2}$ and $X_{3}$:

$$X_{2} = \frac{A_{2}}{m_{1}} - \frac{b_{1}}{m_{1}} \tag{15}$$

$$X_{3} = \frac{A_{2}}{m_{2}} - \frac{b_{2}}{m_{2}}. \tag{16}$$

Notice that $X_{2}-X_{1}$ and $2(X_{3}-X_{1})$ are the numbers of samples in the rising edge and in the full period, respectively (see Fig. 5). Thus, similar to (9), the exposure time estimate, using linear regression, is the following:

$$\hat{S}_{\mathrm{LR}} = \frac{X_{2}-X_{1}}{2(X_{3}-X_{1})}P. \tag{17}$$
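The following sketch implements (10)–(17) directly. It assumes the four regions of Fig. 5 have already been segmented (the index arrays are inputs); a practical implementation would automate this step, e.g., with the thresholding used above:

```python
import numpy as np

def lr_exposure_estimate(y, low, rise, high, fall, P):
    """Linear regression-based exposure time estimate (17).

    y    : measured intensity trace (one period, equivalent sampling)
    low  : sample indices of the lower flat region  -> A_1, Eq. (10)
    rise : sample indices of the rising edge        -> line f_1, Eq. (12)
    high : sample indices of the upper flat region  -> A_2, Eq. (11)
    fall : sample indices of the falling edge       -> line f_2, Eq. (13)
    P    : blinking period of the LED [s]
    """
    A1 = y[low].mean()                         # (10)
    A2 = y[high].mean()                        # (11)
    m1, b1 = np.polyfit(rise, y[rise], 1)      # (12): slope first, then offset
    m2, b2 = np.polyfit(fall, y[fall], 1)      # (13)
    X1 = (A1 - b1) / m1                        # (14)
    X2 = (A2 - b1) / m1                        # (15)
    X3 = (A2 - b2) / m2                        # (16)
    return P * (X2 - X1) / (2.0 * (X3 - X1))   # (17)
```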

E. Measurement Procedure

The proposed measurement method is summarized as follows.

  • Step 1:

    Create a square wave with period length according to (5). Drive the LED with the square wave. Adjust the camera settings (aperture) or the gain of the LED driver so that the photograph of the LED does not saturate the camera. Stabilize both the camera and the LED so that the image of the LED does not move on the photograph.

  • Step 2:

    Observe the output video stream of the camera. The LED on the video should blink with a low frequency. Adjust the generator frequency to provide as low a blinking frequency on the image as possible (the equivalent blinking period may be as long as several tens of seconds, resulting in several hundred samples per period). Read the generator frequency $f_{\mathrm{GEN}} = 1/P$.

  • Step 3:

    Record the video stream. The recording should contain at least one period (notice that the period length was already observed in Step 2).

  • Step 4:

    Extract the light intensity function $x_{s}(k)$ from the video stream, using the same pixel in each frame, located at the center of the LED's image (a minimal extraction sketch is given after this list).

  • Step 5:

    Count $N_{S}$ and $N_{P}$. Estimate the exposure time using (9).

  • Step 6:

    Observe the rising and falling edges in the recording. If they are fairly linear, calculate $X_{1}$, $X_{2}$, and $X_{3}$, using linear regression, and use the LR estimator (17).
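As an illustration of Steps 3–5, a minimal extraction sketch using OpenCV is given below; the file name and pixel coordinates are placeholders, and the original work does not prescribe a particular library:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("blinking_led.avi")    # placeholder file name
row, col = 240, 320                           # assumed center of the LED image
trace = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    trace.append(float(gray[row, col]))       # Step 4: one pixel per frame
cap.release()

xs = np.array(trace)   # input for the estimators of Sections II-C and II-D
```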

F. Improving Measurement Accuracy

The measured intensity signal contains noise (originating mainly from the sensor). The measurement noise can be decreased if the outputs of multiple pixels are averaged. If the measured camera has a global shutter, pixels from any region of the photograph can be selected (e.g., the region where the image of the LED is located). In the case of a rolling shutter camera, pixels from a single row must be selected since the exposures of different rows are shifted in time.

A larger LED image allows the averaging of a larger number of pixels. To provide a larger image, a diffusor can be placed between the LED and the camera.

Certain cameras show strong nonlinearity in the low-intensity region. Thus, it is advisable to use an input signal whose OFF state is not completely dark but produces a significant sensor output (e.g., 10% of the full scale). Similarly, care must be taken to avoid saturation of the sensor (e.g., the ON state should produce approximately 90% of the full scale). These rules apply to all pixels if the averaging process is applied.
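A sketch of the averaging and level checks described above (the region-of-interest coordinates and the 8-bit full scale are assumptions):

```python
import numpy as np

def roi_intensity(gray, r0, r1, c0, c1, full_scale=255.0):
    """Average a region of interest of one frame and warn when pixels leave
    the assumed linear operating range (roughly 10%-90% of full scale).

    For a rolling shutter camera, the region should span a single row
    (r1 = r0 + 1), since different rows are exposed at different times.
    """
    roi = gray[r0:r1, c0:c1].astype(float)
    if roi.max() > 0.9 * full_scale:
        print("warning: pixels close to saturation (ON level too high)")
    if roi.min() < 0.1 * full_scale:
        print("warning: pixels close to cutoff (OFF level too low)")
    return roi.mean()
```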

SECTION III.

Error Analysis

Values $X_{1}$, $X_{2}$, and $X_{3}$ are calculated according to (14)–(16). Using the partial derivatives $\partial X_{1}/\partial A_{1}$, $\partial X_{1}/\partial m_{1}$, and $\partial X_{1}/\partial b_{1}$, the variation $\Delta X_{1}$ of $X_{1}$, as a function of the variations $\Delta A_{1}$, $\Delta m_{1}$, and $\Delta b_{1}$, can be expressed as

$$\Delta X_{1} \cong -X_{1}\frac{\Delta m_{1}}{m_{1}} - \frac{\Delta b_{1}}{m_{1}} + \frac{\Delta A_{1}}{m_{1}}. \tag{18}$$

Similarly, the variation of $X_{2}$ is the following:

$$\Delta X_{2} \cong -X_{2}\frac{\Delta m_{1}}{m_{1}} - \frac{\Delta b_{1}}{m_{1}} + \frac{\Delta A_{2}}{m_{1}}. \tag{19}$$

For the sake of simplicity, but without loss of generality, let us set the coordinate system $K_{1}$ for $f_{1}$ and $e_{1}$ such that the first sample along $f_{1}$ corresponds to $x=0$, as shown in Fig. 5. In this case, $X_{1} \cong 0$ and $X_{2} \cong S$. The uncertainties can thus be simplified as follows:

$$\Delta X_{1} \cong -\frac{\Delta b_{1}}{m_{1}} + \frac{\Delta A_{1}}{m_{1}} \tag{20}$$

$$\Delta X_{2} \cong -S\frac{\Delta m_{1}}{m_{1}} - \frac{\Delta b_{1}}{m_{1}} + \frac{\Delta A_{2}}{m_{1}}. \tag{21}$$

Since $m_{1} \cong -m_{2} = m$, the corresponding variances and covariances can be expressed as follows:

$$\begin{aligned} \mathrm{var}\,X_{1} &= E\{\Delta X_{1}^{2}\} \cong E\left\{\left(-\frac{\Delta b_{1}}{m} + \frac{\Delta A_{1}}{m}\right)^{2}\right\} \\ &= \frac{1}{m^{2}}\mathrm{var}\,b_{1} + \frac{1}{m^{2}}\mathrm{var}\,A_{1} - \frac{2}{m^{2}}\mathrm{cov}(b_{1},A_{1}). \end{aligned} \tag{22}$$

Since the estimates $b_{1}$ and $A_{1}$ are independent, the last term is zero. Trivially

$$\mathrm{var}\,A_{i} = \frac{s_{i}^{2}}{N_{i}}, \quad i = 1, 2 \tag{23}$$

where $s_{i}$ and $N_{i}$ denote the standard deviation of the measurement noise and the number of samples, respectively, in region $i$. Using (23) and (58)

$$\mathrm{var}\,X_{1} \cong \frac{4 s_{3}^{2}}{m^{2} N_{3}} + \frac{s_{1}^{2}}{m^{2} N_{1}}. \tag{24}$$

Since $A_{2}$ is independent of both $m_{1}$ and $b_{1}$, the corresponding covariances are zero. Using (23) and the variances and covariance of the linear regression coefficients (57)–(59), the following result can be obtained for $\mathrm{var}\,X_{2}$:

$$\begin{aligned} \mathrm{var}\,X_{2} &= E\{\Delta X_{2}^{2}\} = E\left\{\left(-S\frac{\Delta m}{m} - \frac{\Delta b_{1}}{m} + \frac{\Delta A_{2}}{m}\right)^{2}\right\} \\ &= \frac{S^{2}\,\mathrm{var}\,m}{m^{2}} + \frac{\mathrm{var}\,b_{1}}{m^{2}} + \frac{\mathrm{var}\,A_{2}}{m^{2}} + \frac{2S}{m^{2}}\,\mathrm{cov}(b_{1},m) \\ &\cong \frac{12 s_{3}^{2}}{m^{2} N_{3}} + \frac{4 s_{3}^{2}}{m^{2} N_{3}} + \frac{s_{2}^{2}}{m^{2} N_{2}} - \frac{12 s_{3}^{2}}{m^{2} N_{3}} = \frac{4 s_{3}^{2}}{m^{2} N_{3}} + \frac{s_{2}^{2}}{m^{2} N_{2}}. \end{aligned} \tag{25}$$

The variance of $X_{3}$ can be derived similarly to $\mathrm{var}\,X_{1}$ as follows:

$$\mathrm{var}\,X_{3} \cong \frac{4 s_{3}^{2}}{m^{2} N_{3}} + \frac{s_{2}^{2}}{m^{2} N_{2}}. \tag{26}$$

Since $A_{1}$ and $A_{2}$ are independent of each other and of the linear regression coefficients, and the two linear regressions are also independent, the covariances are the following:

$$\begin{aligned} \mathrm{cov}(X_{1},X_{2}) &= E\{\Delta X_{1}\,\Delta X_{2}\} \\ &= E\left\{\left(-\frac{\Delta b_{1}}{m} + \frac{\Delta A_{1}}{m}\right)\left(-S\frac{\Delta m_{1}}{m} - \frac{\Delta b_{1}}{m} + \frac{\Delta A_{2}}{m}\right)\right\} \\ &= \frac{S}{m^{2}}\mathrm{cov}(b_{1},m_{1}) + \frac{1}{m^{2}}\mathrm{var}\,b_{1} \\ &\cong \frac{-6 s_{3}^{2}}{m^{2} N_{3}} + \frac{4 s_{3}^{2}}{m^{2} N_{3}} = \frac{-2 s_{3}^{2}}{m^{2} N_{3}} \end{aligned} \tag{27}$$

$$\begin{aligned} \mathrm{cov}(X_{2},X_{3}) &= E\{\Delta X_{2}\,\Delta X_{3}\} \\ &= E\left\{\left(-S\frac{\Delta m_{1}}{m} - \frac{\Delta b_{1}}{m} + \frac{\Delta A_{2}}{m}\right)\left(\frac{\Delta b_{2}}{m} - \frac{\Delta A_{2}}{m}\right)\right\} \\ &= -\frac{1}{m^{2}}\mathrm{var}\,A_{2} \cong -\frac{s_{2}^{2}}{m^{2} N_{2}} \end{aligned} \tag{28}$$

where, in $\Delta X_{3}$, $1/m_{2} \cong -1/m$ was used; the negative sign of (28) is what makes the $s_{2}$ terms of (32) combine into the $(B+C)^{2}$ coefficient of (33).

Since $X_{1}$ and $X_{3}$ are estimated independently

$$\mathrm{cov}(X_{1},X_{3}) = 0. \tag{29}$$

The exposure time estimate is (17), and thus, the uncertainty of $\hat{S}_{\mathrm{LR}}$ can be estimated as follows:

$$\begin{aligned} \Delta\hat{S}_{\mathrm{LR}} &\cong \frac{\partial\hat{S}}{\partial X_{1}}\Delta X_{1} + \frac{\partial\hat{S}}{\partial X_{2}}\Delta X_{2} + \frac{\partial\hat{S}}{\partial X_{3}}\Delta X_{3} + \frac{\partial\hat{S}}{\partial P}\Delta P \\ &= -\frac{P}{2}\frac{X_{3}-X_{2}}{(X_{3}-X_{1})^{2}}\Delta X_{1} + \frac{P}{2}\frac{X_{3}-X_{1}}{(X_{3}-X_{1})^{2}}\Delta X_{2} \\ &\quad - \frac{P}{2}\frac{X_{2}-X_{1}}{(X_{3}-X_{1})^{2}}\Delta X_{3} + \frac{S}{P}\Delta P. \end{aligned} \tag{30}$$

Using the notations

$$A = \frac{X_{3}-X_{2}}{(X_{3}-X_{1})^{2}}, \quad B = \frac{X_{3}-X_{1}}{(X_{3}-X_{1})^{2}}, \quad C = \frac{X_{2}-X_{1}}{(X_{3}-X_{1})^{2}} \tag{31}$$

the variance of $\hat{S}$ can be estimated as follows:

$$\begin{aligned} \mathrm{var}\,\hat{S}_{\mathrm{LR}} &= E\{\Delta\hat{S}_{\mathrm{LR}}^{2}\} \\ &= E\left\{\left(-\frac{PA}{2}\Delta X_{1} + \frac{PB}{2}\Delta X_{2} - \frac{PC}{2}\Delta X_{3} + \frac{S}{P}\Delta P\right)^{2}\right\} \\ &= \frac{P^{2}A^{2}}{4}\mathrm{var}\,X_{1} + \frac{P^{2}B^{2}}{4}\mathrm{var}\,X_{2} + \frac{P^{2}C^{2}}{4}\mathrm{var}\,X_{3} \\ &\quad + \frac{S^{2}}{P^{2}}\mathrm{var}\,P - \frac{ABP^{2}}{2}\mathrm{cov}(X_{1},X_{2}) - \frac{BCP^{2}}{2}\mathrm{cov}(X_{2},X_{3}) \\ &\quad + \frac{ACP^{2}}{2}\mathrm{cov}(X_{1},X_{3}) \\ &\quad + S\left(-A\,\mathrm{cov}(X_{1},P) + B\,\mathrm{cov}(X_{2},P) - C\,\mathrm{cov}(X_{3},P)\right). \end{aligned} \tag{32}$$

Since the estimates of $X_{i}$ and $P$ are independent, $\mathrm{cov}(X_{i},P) = 0$ for all $i$. Substituting (24)–(29) into (32), the variance of $\hat{S}$ becomes the following:

$$\begin{aligned} \mathrm{var}\,\hat{S}_{\mathrm{LR}} &= \frac{s_{1}^{2}P^{2}}{4}\frac{A^{2}}{N_{1}m^{2}} + \frac{s_{2}^{2}P^{2}}{4}\frac{(B+C)^{2}}{N_{2}m^{2}} \\ &\quad + s_{3}^{2}P^{2}\frac{A^{2}+B^{2}+C^{2}+AB}{N_{3}m^{2}} + s_{p}^{2}\frac{S^{2}}{P^{2}}. \end{aligned} \tag{33}$$

In (33), parameter $m$ can be estimated as follows:

$$m \cong \frac{A_{2}-A_{1}}{S} \cong \frac{A_{2}-A_{1}}{N_{2}}\frac{1}{\Delta t} = m'\frac{1}{\Delta t} \tag{34}$$

where $\Delta t$ is the equivalent sampling interval (see Fig. 5). According to Fig. 5, variable $A$ in (31) can be expressed as follows:

$$A = \frac{X_{3}-X_{2}}{(X_{3}-X_{1})^{2}} \cong \frac{N_{3}\Delta t}{(N_{2}\Delta t + N_{3}\Delta t)^{2}} = \frac{1}{\Delta t}\frac{N_{3}}{(N_{2}+N_{3})^{2}} = \frac{A'}{\Delta t}. \tag{35}$$

Similarly

$$B \cong \frac{B'}{\Delta t}, \quad C \cong \frac{C'}{\Delta t} \tag{36}$$

with

$$A' = \frac{N_{3}}{(N_{2}+N_{3})^{2}}, \quad B' = \frac{1}{N_{2}+N_{3}}, \quad C' = \frac{N_{2}}{(N_{2}+N_{3})^{2}}, \quad m' = \frac{A_{2}-A_{1}}{N_{2}}. \tag{37}$$

Trivially

$$\frac{S}{P} \cong \frac{N_{2}}{2(N_{2}+N_{3})} = \frac{N_{2}}{2}B'. \tag{38}$$

Using (37) and (38), the variance estimate (33) can be transformed as follows:

$$\begin{aligned} \mathrm{var}\,\hat{S}_{\mathrm{LR}} &\cong \frac{s_{1}^{2}P^{2}}{4}\frac{A'^{2}}{N_{1}m'^{2}} + \frac{s_{2}^{2}P^{2}}{4}\frac{(B'+C')^{2}}{N_{2}m'^{2}} \\ &\quad + s_{3}^{2}P^{2}\frac{A'^{2}+B'^{2}+C'^{2}+A'B'}{N_{3}m'^{2}} + s_{p}^{2}\frac{B'^{2}N_{2}^{2}}{4} \end{aligned} \tag{39}$$

where the parameters $A'$, $B'$, $C'$, and $m'$ are easily computable from the record lengths $N_{1}$, $N_{2}$, and $N_{3}$. The noise parameters are estimated as follows [31]:

$$s_{1}^{2} \cong \frac{1}{N_{1}-1}\sum_{i=1}^{N_{1}}(y_{i}-A_{1})^{2} \tag{40}$$

$$s_{2}^{2} \cong \frac{1}{N_{2}-1}\sum_{i=1}^{N_{2}}(y_{i}-A_{2})^{2} \tag{41}$$

$$s_{3}^{2} \cong \frac{1}{N_{3}-2}\sum_{i=1}^{N_{3}}(y_{i}-\hat{y}_{i})^{2}. \tag{42}$$
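A sketch of the variance prediction (39)–(42) is given below. The indexing of the inputs follows the equations as printed ($s_{1}$/$N_{1}$ from the lower flat region, $s_{2}$/$N_{2}$ from the upper flat region, $s_{3}$/$N_{3}$ from the edge regression); the function signature itself is an assumption of this sketch:

```python
import numpy as np

def predicted_std_S_LR(y, low, high, edge, P, s_p):
    """Theoretical standard deviation of S_LR according to (39)-(42).

    low  : indices of the lower flat region -> s_1, N_1, Eq. (40)
    high : indices of the upper flat region -> s_2, N_2, Eq. (41)
    edge : indices of the rising edge       -> s_3, N_3, Eq. (42)
    s_p  : standard deviation of the measured blinking period P
    """
    N1, N2, N3 = len(low), len(high), len(edge)
    A1, A2 = y[low].mean(), y[high].mean()
    m, b = np.polyfit(edge, y[edge], 1)
    s1 = y[low].std(ddof=1)                          # (40)
    s2 = y[high].std(ddof=1)                         # (41)
    resid = y[edge] - (m * np.asarray(edge) + b)
    s3 = np.sqrt(np.sum(resid ** 2) / (N3 - 2))      # (42)
    Ap = N3 / (N2 + N3) ** 2                         # (37)
    Bp = 1.0 / (N2 + N3)
    Cp = N2 / (N2 + N3) ** 2
    mp = (A2 - A1) / N2
    var_S = (s1 ** 2 * P ** 2 / 4 * Ap ** 2 / (N1 * mp ** 2)
             + s2 ** 2 * P ** 2 / 4 * (Bp + Cp) ** 2 / (N2 * mp ** 2)
             + s3 ** 2 * P ** 2 * (Ap**2 + Bp**2 + Cp**2 + Ap*Bp) / (N3 * mp**2)
             + s_p ** 2 * Bp ** 2 * N2 ** 2 / 4)     # (39)
    return np.sqrt(var_S)
```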

SECTION IV.

Experiments

A. Measurement Setup

The measurement hardware is shown in Fig. 6. The LED source was attached to the camera through a 3-D-printed enclosure; thus, external disturbances were eliminated during the experiments, and the stable relative positioning of the camera and the light source was guaranteed. The signal generator was implemented on an Arduino Due board. The blinking frequency was tunable in steps of approximately $5\times10^{-3}$ Hz.

Fig. 6. Measurement setup: a camera with the attached LED source, and the signal generator implemented on an Arduino Due microcontroller unit.

The generator's frequency was chosen according to (5). In our tests, 30- and 60-fps sampling frequencies were used; thus, the signal generator's frequency was chosen to be close to $n\times30$ Hz and $n\times60$ Hz, respectively. The integer parameter $n$ has no effect on the estimate, but for smaller exposure times, it is advisable to use a higher blinking frequency (higher $n$): in this way, the values $N_{S}$ and $N_{P}$ can be kept in the same order of magnitude, and thus, long recording times can be avoided.

B. Target Cameras

During the measurements, two cameras were used. Camera C1 was a high-quality industrial machine vision camera (GS3-U3-23S6M, produced by FLIR [32]). The camera's software is able to report the exact actual exposure time, with possible values ranging from 5 $\mu$s to 31.9 s. Camera C2 was an inexpensive camera of type ELP-USBGS720P02, which came with practically no documentation. Its shutter speed can be set in 13 discrete steps (0 to −12), but the corresponding shutter speed values are unknown (undocumented).

C. Reference Measurements

A running-LED reference measurement setup was used to provide measured exposure values with known accuracy. The method is similar to the one illustrated in Fig. 1(d) but uses multiple timers to provide higher accuracy [22]. The resolution of our device was 1 $\mu$s. The timing accuracy of the device was 70 ppm, resulting in a timing uncertainty below 0.5 $\mu$s in all of the measurements used (up to 7 ms).

In our solution, the LEDs were used in binary mode: an LED was considered to be in the ON state if its detected light intensity on the photograph was higher than the maximal noise level of the OFF state; otherwise, it was considered to be in the OFF state. (The value of the detected intensity was not used to improve the accuracy.) Thus, the reference measurements' uncertainty, resulting from the binary measurements and the resolution of the device, is bounded by $\pm 0.5 \pm 1~\mu\text{s} = \pm 1.5~\mu\text{s}$.

D. Measurement Results

An example measurement can be seen in Fig. 7, where the exposure time of camera C1 was set to 98 $\mu$s, the blinking frequency was tuned to 1201.098 Hz, and the camera sampling frequency was 30 fps. The measured signal of a single pixel is shown in blue. The values $N_{S} \cong 151$ and $N_{P} \cong 1199$ were determined; thus, using (9), the estimate of the exposure time is $\hat{S} = 104.9~\mu$s. The linear fit is shown in Fig. 7 in red, for which the linear regression-based estimate, according to (17), was $\hat{S}_{\mathrm{LR}} = 103.6~\mu$s.

Fig. 7. Measurement using C1 with nominal exposure time of 98 $\mu$s and $P = 1/1201.098$ s.

In the inset of Fig. 7, the average of 100 pixels is also shown in green, as proposed in Section II-F (the signal is shifted vertically for better visibility). The noise level is clearly much lower in this case. From this signal, $N_{S} \cong 150$ and $N_{P} \cong 1201$ were determined, resulting in the estimate $\hat{S} = 104.0~\mu$s. This value is very close to the linear regression-based estimate.

Camera C1 was tested using exposure times from 8 $\mu$s to 1 ms, using a sampling frequency of 30 fps. Notice that the exposure time can be set in predefined steps; thus, the set values are not always round numbers. Table I presents the test measurement results. Columns $S$, $\hat{S}_{\mathrm{ref}}$, mean $\hat{S}_{\mathrm{LR}}$, and std $\hat{S}_{\mathrm{LR}}$ contain the nominal value, the reference measurement result, the mean value of the proposed linear regression-based estimator, and its sample standard deviation, respectively. The mean $\hat{S}_{\mathrm{LR}}$ and std $\hat{S}_{\mathrm{LR}}$ values were calculated from ten independent estimates, obtained from ten consecutive measured periods.

TABLE I. Nominal and Measured Exposure Times for C1

The uncertainty of the reference measurements was at most $\pm 1.5~\mu$s. The differences between the reference and LR estimates were also bounded by $\pm 1.5~\mu$s, according to Table I. The uncertainty of the proposed method is estimated as the sum of the two uncertainties, resulting in $\pm 3.0~\mu$s.

Interestingly, the reported and measured exposure times show a constant bias of 6–7 $\mu$s (see column $\hat{S}_{\mathrm{ref}}-S$ in Table I). Similar behavior was observed for other camera types from the same manufacturer [28]. The technical reason for this systematic bias is not known.

Various forms of the equivalent sampling-based estimates were compared, as shown in Table II. $\hat{S}$ is the simple estimate, according to (9), $\hat{S}_{\mathrm{LR}}$ is the linear regression estimate of (17), and $\hat{S}_{\mathrm{LS}}$ is the least squares (LS) estimate proposed in [28]. Each method was used to produce ten estimates from ten independent periods of the measurement record, and the differences between the estimates and the reference value $\hat{S}_{\mathrm{ref}}$ were calculated. The mean and the standard deviation of the error are shown in Table II. For each nominal exposure time, two measurements are presented: the first value (1 pix) corresponds to the single-pixel measurements, and the second (100 pix) shows the effect of the multiple-pixel measurement proposed in Section II-F. Here, the measurements were taken as the average of a $10\times10$ pixel region at the center of the LED image.

TABLE II. Mean and Std of Measured Exposure Times for C1

The results clearly show that all of the estimates perform very well, but in accuracy, $\hat{S}_{\mathrm{LR}}$ slightly outperforms both $\hat{S}$ and $\hat{S}_{\mathrm{LS}}$. The LS method provided the smallest standard deviation, followed by the LR method. (For more measurement results on $\hat{S}$ and $\hat{S}_{\mathrm{LS}}$, refer to [22] and [28].)

Comparing the results of the single-pixel and multipixel measurements in Table II, the following conclusions can be drawn: using multiple pixels improves the accuracy of estimate $\hat{S}$ and also decreases its standard deviation. In the case of estimator $\hat{S}_{\mathrm{LS}}$, the accuracy did not change significantly, but the variation of the results decreased. In the case of $\hat{S}_{\mathrm{LR}}$, the improvement in accuracy can be observed in the higher time range, while the variation of the estimates did not change significantly.

Camera C2 was tested at 60 fps, with exposure time settings between −1 and −12 (setting 0 did not work at 60 fps). Similar to C1, the mean and standard deviation of the estimates were calculated from ten measurements. The test results are shown in Table III. The uncertainty of all the reference measurements was again the same, i.e., less than $\pm 1.5~\mu$s. The differences between the reference and the linear regression-based estimates were less than $\pm 1.7~\mu$s for exposure times below 1 ms, indicating a maximum uncertainty of $\pm 3.2~\mu$s. For exposure times between 1 and 7 ms, the difference $\hat{S}_{\mathrm{LR}}-S_{\mathrm{ref}}$ increases, reaching 7.0 $\mu$s; for these measurement ranges, the accuracy of the estimator can be estimated as $\pm 8.5~\mu$s.

TABLE III. Measured Exposure Times for C2

Taking into account the test results in Tables I–III, the accuracy of the proposed linear regression-based method can be summarized as follows: the absolute error below 1 ms was at most $\pm 3.2~\mu$s, while above 1 ms, it was at most $\pm 8.5~\mu$s. Notice that above 1 ms, the relative error was less than 0.2%, showing the remarkable accuracy of the method.

The last column of Table III contains the theoretical standard deviation values as well. These values were calculated using (39), where the noise parameters $s_{1}$, $s_{2}$, and $s_{3}$ were estimated according to (40)–(42), and $s_{p}$ was estimated from ten measurements for each blinking frequency. The measured sample standard deviations and the theoretical standard deviation values show very good agreement.

In the above measurements, care was taken to operate the cameras in their linear operating range (see Fig. 7). The undesired effects of nonlinear behavior are shown in Fig. 8. The measurements were made using camera C2, at 60 fps and with shutter speed setting −12. The line labeled "single point—good" shows a measurement that is in the linear operating range of the camera. Measurement "single point—too high" was saturated, and its effect is clearly visible: the rising edge ends sooner and the falling edge starts later; thus, the edges are measured to be shorter. This camera showed nonlinear behavior in the low-intensity region, too: measurement "single point—too low" shows that the sensor did not react to low light intensities. In this case, the rising edge starts later and the falling edge ends sooner; thus, the edges again appear shorter.

Fig. 8. Effect of nonlinear behavior of C2. Measurements were made using exposure time setting −12 (approximately 32 $\mu$s) and $P = 1/1196.172$ s.

Fig. 8 also shows the nonlinear effects when a set of pixels (a box of size $3\times10$ pixels) was used in the measurement. The detected pixel intensities were averaged. Measurement "box—good" illustrates a case where all of the pixels were in the linear operating region. This measurement corresponds well with measurement "single point—good." Although the averaged measurement line "box—too high" does not directly show saturation, some of the pixels in the box were saturated. The effect is clearly visible: the upper ends of the edges became rounded, making it difficult to determine the end of the edges. Similarly, in measurement "box—too low," some pixels (near the periphery of the LED image) did not respond to the low-intensity signal; thus, the averaged signal became rounded at the lower ends of the edges, hindering the detection of the edge boundaries.

SECTION V.

Summary

In this article, a novel method was proposed to measure the exposure time of digital cameras. During the measurement, a sequence of photographs (a video stream) is recorded, while the target is a blinking LED. The frequency of the LED is chosen so that the resulting equivalent sampling allows good temporal resolution. If the blinking frequency is known, then the exposure time can be determined from the recorded time-intensity function of a single pixel or the average of a set of pixels. The measurement procedure and the estimation method of the exposure time were introduced in detail, along with methods to increase the accuracy of the measurement procedure. A linear regression-based automatic estimate was also proposed, increasing both the resolution and the precision of the estimate. Further advantages of the proposed method are its simplicity, compared to the previous LS estimate [28], and the fact that the behavior of the estimate can be analyzed. The error analysis of the method was also presented in detail.

The applicability of the proposed measurement method was illustrated through measurement examples, where a high-end industrial machine vision camera and an inexpensive camera were tested. The proposed technique was compared to a well-known method in which a photograph is taken of an array of blinking LEDs, using a device similar to those of [26] and [27]. Since the accuracy of the reference method was known, the uncertainty of the proposed method could be determined. According to the tests, the uncertainty of the proposed method was at most $\pm 3.2~\mu$s in measurement ranges below 1 ms, while above an exposure time of 1 ms, the relative error was less than 0.2%. This uncertainty is smaller than, but comparable to, that of professional equipment [27], achieved using simple and inexpensive tools.

Appendix

Variances and Covariance of the Linear Regression Coefficients

The variances and covariance of the linear regression coefficients can be derived as follows. Let the measured points be $y_{i}$ at time instants $x_{i}$, $i = 1, 2, \ldots, N$. The relationship between the $y$ and $x$ values is assumed to be linear as follows:

$$y_{i} = b + m x_{i} + n_{i} \tag{43}$$

where $n_{i}$ is the measurement noise with standard deviation $s$. The coefficients are estimated as follows [31]:

$$\hat{m} = \frac{\sum_{i=1}^{N}(x_{i}-\bar{x})(y_{i}-\bar{y})}{\sum_{i=1}^{N}(x_{i}-\bar{x})^{2}} = \frac{\sum_{i=1}^{N}(x_{i}-\bar{x})y_{i}}{\sum_{i=1}^{N}(x_{i}-\bar{x})^{2}} = \sum_{i=1}^{N} c_{i} y_{i} \tag{44}$$

$$\hat{b} = \bar{y} - \hat{m}\bar{x} \tag{45}$$

where

$$c_{i} = \frac{x_{i}-\bar{x}}{S_{xx}} \tag{46}$$

$$S_{xx} = \sum_{i}(x_{i}-\bar{x})^{2}. \tag{47}$$

The variance of $\hat{m}$ is the following [31]:

$$\mathrm{var}\,\hat{m} = \frac{s^{2}}{S_{xx}}. \tag{48}$$

From (45), the variance of $\hat{b}$ can be derived as follows:

$$\mathrm{var}\,\hat{b} = \mathrm{var}\,\bar{y} + \bar{x}^{2}\,\mathrm{var}\,\hat{m} - 2\bar{x}\,\mathrm{cov}(\bar{y},\hat{m}). \tag{49}$$

Using (44), the term $\mathrm{cov}(\bar{y},\hat{m})$ can be expressed as follows:

$$\mathrm{cov}(\bar{y},\hat{m}) = E\{\Delta\bar{y}\,\Delta\hat{m}\} = E\left\{\left(\frac{1}{N}\sum_{i=1}^{N} n_{i}\right)\left(\sum_{j=1}^{N} c_{j} n_{j}\right)\right\}. \tag{50}$$

If the measurement noise is uncorrelated (i.e., $E\{n_{i}n_{j}\} = 0$ if $i \ne j$), then (50) can be simplified as follows:

$$\mathrm{cov}(\bar{y},\hat{m}) = \frac{1}{N}\sum_{i=1}^{N} c_{i}\,E\{n_{i}^{2}\} = \frac{s^{2}}{N}\sum_{i=1}^{N} c_{i} = 0. \tag{51}$$

Thus, (49) becomes

$$\mathrm{var}\,\hat{b} = \frac{s^{2}}{N} + \bar{x}^{2}\,\mathrm{var}\,\hat{m} = \frac{s^{2}}{N} + \bar{x}^{2}\frac{s^{2}}{S_{xx}}. \tag{52}$$

The covariance of the regression coefficients can be expressed, starting from (44) and (45) and using (51), as follows:

$$\begin{aligned} \mathrm{cov}(\hat{m},\hat{b}) &= E\{\Delta\hat{m}\,\Delta\hat{b}\} = E\{\Delta\hat{m}(\Delta\bar{y} - \bar{x}\Delta\hat{m})\} \\ &= \mathrm{cov}(\bar{y},\hat{m}) - \bar{x}\,\mathrm{var}\,\hat{m} = -\bar{x}\,\mathrm{var}\,\hat{m}. \end{aligned} \tag{53}$$

Notice that $S_{xx}$, according to (47), contains the measurement points $x_{i}$, which are distributed equidistantly between 0 and $S$ (see Fig. 5). Using $\bar{x} \cong S/2$ and $x_{i} \cong i(S/N)$, (47) can be expressed as follows:

$$\begin{aligned} S_{xx} &= \sum_{i=1}^{N}(x_{i}-\bar{x})^{2} = \sum_{i=1}^{N} x_{i}^{2} + N\bar{x}^{2} - 2\bar{x}\sum_{i=1}^{N} x_{i} \\ &= \sum_{i=1}^{N} x_{i}^{2} + N\bar{x}^{2} - 2\bar{x}N\bar{x} \cong \sum_{i=1}^{N} x_{i}^{2} - \frac{NS^{2}}{4} \\ &\cong \sum_{i=1}^{N}\left(i\frac{S}{N}\right)^{2} - \frac{NS^{2}}{4} = \frac{S^{2}}{N^{2}}\sum_{i=1}^{N} i^{2} - \frac{NS^{2}}{4}. \end{aligned} \tag{54}$$

Using

$$\sum_{i=1}^{N} i^{2} = \frac{N(N+1)(2N+1)}{6} \tag{55}$$

$S_{xx}$ becomes the following:

$$S_{xx} \cong \frac{S^{2}}{N^{2}}\frac{N(N+1)(2N+1)}{6} - \frac{NS^{2}}{4} \cong \frac{NS^{2}}{12}. \tag{56}$$

The approximation is valid if $N \gg 1$.

Using approximation (56), the variances and covariance are simplified as follows:

$$\mathrm{var}\,m \cong \frac{12 s^{2}}{N S^{2}} \tag{57}$$

$$\mathrm{var}\,b \cong \frac{4 s^{2}}{N} \tag{58}$$

$$\mathrm{cov}(m,b) \cong -\frac{6 s^{2}}{N S}. \tag{59}$$
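As a quick check of the approximations (57)–(59), the following Monte Carlo sketch (parameter values are arbitrary assumptions) compares the empirical variances and covariance of the regression coefficients to the closed forms:

```python
import numpy as np

rng = np.random.default_rng(0)
N, S, m_true, b_true, s = 200, 1.0, 2.0, 0.5, 0.05
x = np.arange(1, N + 1) * S / N      # equidistant points on (0, S], N >> 1

ms, bs = [], []
for _ in range(20000):
    y = b_true + m_true * x + rng.normal(0.0, s, N)
    m_hat, b_hat = np.polyfit(x, y, 1)
    ms.append(m_hat)
    bs.append(b_hat)
ms, bs = np.array(ms), np.array(bs)

print(ms.var(ddof=1), 12 * s**2 / (N * S**2))            # Eq. (57)
print(bs.var(ddof=1), 4 * s**2 / N)                      # Eq. (58)
print(np.cov(ms, bs)[0, 1], -6 * s**2 / (N * S))         # Eq. (59)
```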
