Introduction
With the development of unmanned aerial vehicles (UAVs) and miniaturized hyperspectral imaging sensors, UAV-borne hyperspectral remote sensing technology has been widely used in agriculture, forestry, and other fields [1], [2], [3]. However, random noise is generated by the atmospheric environment, the UAV flight platform, the imaging optics, and other factors when the UAV hyperspectral payload is imaging, so the quality of the resulting UAV hyperspectral images is uneven [4], [5], [6]. The signal-to-noise ratio (SNR) [7] is an important radiometric parameter for hyperspectral remote sensing image quality assessment [8] and one of the important indexes for measuring the performance of remote sensing sensors [9]. Accurately evaluating the SNR of UAV hyperspectral images helps to evaluate the performance of hyperspectral remote sensing systems and to measure image quality, and it is also of great significance to the application of UAV-borne hyperspectral remote sensing data [10].
The traditional SNR estimation methods for optical remote sensing images are mainly based on spatial characteristics. Generally, a homogeneous region is selected in the image, the standard deviation of this region is taken as the noise estimate, the mean of this region is taken as the signal estimate, and the ratio of the mean to the standard deviation is the SNR of the image. For example, Wrigley et al. [11] proposed the homogeneous area method, in which more than four homogeneous regions are selected manually, their standard deviations and means are calculated, and the ratio of the mean to the standard deviation gives the image SNR. Curran and Dungan [12] proposed the geo-statistical method, which computes the noise from the correlation changes of pixels in the image spatial domain: the semivariogram of narrow pixel strips in a homogeneous area is used as the noise estimate, and the ratio of the mean to this noise estimate is the image SNR. When applying these two methods, users need to select the homogeneous areas manually, so the degree of automation is low and the results are strongly influenced by the land cover type. In addition, the preselected homogeneous regions cannot represent the SNR of the whole remote sensing image. To make up for these shortcomings, Gao [13] proposed the local mean and local standard deviation method, in which the image is split into sub-blocks of equal size, the local standard deviation (LSD) of each sub-block is calculated, the LSD values are grouped into intervals, and the interval containing the most sub-blocks is selected. The average LSD of the sub-blocks in this interval is taken as the noise estimate, and the SNR of the whole remote sensing image is the average mean value of the sub-blocks in this interval divided by this average LSD. Although the SNR of the whole remote sensing image can be calculated automatically by this method, the estimated SNR is still affected by the land cover type and by how homogeneous the image sub-blocks are. In addition, it is worth noting that all of the abovementioned methods are particularly inaccurate for ultrahigh spatial resolution remote sensing images.
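For reference, a minimal sketch of this local mean and local standard deviation idea is given below (Python with NumPy); the block size, bin count, and function name are illustrative assumptions rather than values taken from [13].

```python
import numpy as np

def lmlsd_snr(band, block=8, n_bins=100):
    """Sketch of the local mean / local standard deviation idea: split one band
    into small blocks, bin the block standard deviations, and treat the most
    populated bin as representing the homogeneous blocks."""
    h, w = band.shape
    means, stds = [], []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            sub = band[i:i + block, j:j + block].astype(float)
            means.append(sub.mean())
            stds.append(sub.std(ddof=1))
    means, stds = np.array(means), np.array(stds)
    # Histogram the local standard deviations and pick the most populated bin.
    counts, edges = np.histogram(stds, bins=n_bins)
    k = np.argmax(counts)
    in_bin = (stds >= edges[k]) & (stds <= edges[k + 1])
    noise = stds[in_bin].mean()      # noise estimate: average LSD in that bin
    signal = means[in_bin].mean()    # signal estimate: average mean of those blocks
    return signal / noise
```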
The other SNR estimation methods are mainly designed for hyperspectral images. Taking advantage of the strong correlation of hyperspectral images in the spectral dimension, the highly correlated signal is removed from homogeneous sub-blocks of the hyperspectral image through multiple linear regression (MLR) [14], and the residual after regression is taken as the estimate of the image noise. For example, Roger and Arnold [15] proposed the spatial and spectral decorrelation method (SSDC), in which the target pixel and its adjacent pixels are used to calculate residuals by MLR in the spectral and spatial dimensions. In practice, the residuals are calculated in image sub-blocks obtained by regular segmentation of the image. However, these image sub-blocks are not completely homogeneous and contain the boundary and texture information of ground objects, so spatial texture information is introduced into the MLR and the image noise estimates become inaccurate. In particular, the UAV hyperspectral remote sensing image has a high spatial resolution and this heterogeneity is more obvious, since the spectral and spatial correlations of heterogeneous blocks cannot be removed by simple multiple linear regression. In order to reduce the influence of texture features on noise estimation, Gao et al. [16] proposed the homogeneous regions division and spectral decorrelation method (HRDSDC). Furthermore, in order to find homogeneous sub-blocks without edge and texture information of ground objects in hyperspectral images more accurately, Zhang et al. [17] proposed an optimized spatial and spectral decorrelation method (OSSDC) for assessing hyperspectral image noise. These two methods are based on a continuous segmentation algorithm: continuous segmentation of the image is carried out using the spectral angle distance or the combination of spectral angle and Euclidean distance, and sub-blocks with a continuously segmented area of more than 50 pixels are selected to compute the noise and signal values. However, on the one hand, because of the high spatial resolution of UAV hyperspectral images, it is difficult to achieve continuous segmentation in images with complex texture. On the other hand, these two methods do not eliminate outliers when calculating the SNR (e.g., by using a box counting method to determine the SNR aggregation interval), which lowers the precision of the algorithms.
In this article, a new method called pure pixel extraction and spectral decorrelation (PPESDC) is proposed to automatically estimate the SNR of UAV hyperspectral images. In this method, the Euclidean distance, the spectral angle distance, and their combination are used to extract pure pixels from the hyperspectral image, where a pure pixel is a pixel that contains only one ground object [18], [19]. Then, the noise residuals of these pure pixels are calculated by MLR, and the SNR of each pure pixel is its signal value divided by its residual value. Finally, the clustering interval of the SNRs of all pure pixels is obtained by a box counting procedure, and the average SNR within this interval is taken as the SNR of the whole hyperspectral image. In addition, to test the robustness and accuracy of this method, we compared it with the SSDC, HRDSDC, and OSSDC methods and applied it to multiple types of hyperspectral images (UAV frame Rikola hyperspectral radiance images of different scenes, reflectance images, and images with added noise). We also tested it on a UAV pushbroom Nano-Hyperspec image.
Methods
A. Pure Pixel Extraction
1) Judging Criterion
Euclidean Distance (ED): According to ED, each pixel of the hyperspectral image is taken as a multivariate random variable, and the similarity of two pixels is measured by calculating the distance between them [20]. ED mainly reflects the brightness difference between spectral vectors, i.e., the total contribution of the brightness differences over the n bands. The calculation formula is as follows:
\begin{equation*}\text{ED} = \sqrt {\sum\limits_{i = 1}^n {{{\left( {{x}_i - {y}_i} \right)}}^2} } \tag{1} \end{equation*}
where x and y are two spectral curves, n is the length of the spectral curve (the total number of bands), x = (x1, x2, x3, …, xn), and y = (y1, y2, y3, …, yn). If the ED of two adjacent pixels is smaller, their similarity is higher.
Spectral angle distance (SAD): According to SAD, two spectral curves are regarded as vectors in an n-dimensional space, and their similarity is characterized by calculating the generalized angle between them [21]. For hyperspectral images, SAD is related to the shape of the spectrum and is not sensitive to light intensity. SAD can be expressed as
\begin{equation*}\text{SAD}\left( {x,y} \right) = \arccos \frac{{\sum\nolimits_{i = 1}^n {{x}_i{y}_i} }}{{\sqrt {\sum\nolimits_{i = 1}^n {x_i^2} } \sqrt {\sum\nolimits_{i = 1}^n {y_i^2} } }}. \tag{2} \end{equation*}
If the SAD between the two adjacent pixels is smaller, their similarity is higher.
Euclidean distance and spectral angle distance (ED-SAD): As shown in Fig. 1(a), the vectors $\vec{a}$, $\vec{b}$, and $\vec{c}$ are the spectral vectors generated after the spectral curves A, B, and C are projected into the 2-D space. The angle between vectors $\vec{a}$ and $\vec{b}$ is the same as the angle between vectors $\vec{a}$ and $\vec{c}$, i.e., the spectral angle is the same, but $\text{ED}( {\vec{a},\vec{b}} )$ is smaller than $\text{ED}( {\vec{a},\vec{c}} )$. Therefore, the spectral curve B is closer to the spectral curve A in shape and value, and the two curves should be regarded as the same ground object. As shown in Fig. 1(b), $\text{ED}( {\vec{a},\vec{b}} )$ and $\text{ED}( {\vec{a},\vec{c}} )$ are equal, but the angle α between vectors $\vec{a}$ and $\vec{c}$ is greater than the angle β between vectors $\vec{a}$ and $\vec{b}$. In this case as well, the spectral curve B is closer to the spectral curve A in shape and value and should be regarded as the same ground object.
Fig. 1. Spectral angle and Euclidean distance between spectral vectors. (a) Same spectral angle. (b) Same Euclidean distance.
It can be seen that the difference between spectral vectors cannot be accurately reflected by ED or SAD alone, whereas combining ED and SAD describes the distance between spectral vectors more accurately. As a result, the combined Euclidean distance and spectral angle distance (ED-SAD) measure was proposed [20], whose formula is
\begin{equation*}
{D}_{{\rm{ED - SAD}}} \!=\! \sqrt {\sum\limits_{i = 1}^n {{{\left( {{x}_i - {y}_i} \right)}}^2} } \left( {1 \!-\! \frac{{\sum\nolimits_{i = 1}^n {{x}_i{y}_i} }}{{\sqrt {\sum\nolimits_{i = 1}^n {x_i^2} } \sqrt {\sum\nolimits_{i = 1}^n {y_i^2} } }}} \right)\!. \tag{3}
\end{equation*}
If the ED-SAD between the two adjacent pixels is smaller, their similarity is higher.
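To make the three judging criteria concrete, a minimal sketch of (1)–(3) in Python with NumPy is given below; it assumes the two spectra are 1-D arrays of equal length, and the function names are illustrative.

```python
import numpy as np

def euclidean_distance(x, y):
    # Eq. (1): brightness difference between two spectral vectors.
    return np.sqrt(np.sum((x - y) ** 2))

def spectral_angle_distance(x, y):
    # Eq. (2): generalized angle between two spectral vectors (radians).
    cos_theta = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

def ed_sad(x, y):
    # Eq. (3): Euclidean distance weighted by (1 - cosine of the spectral angle).
    cos_theta = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return euclidean_distance(x, y) * (1.0 - cos_theta)
```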
2) Pure Pixel Extraction
In most cases, when the ground objects are larger than the size of a pixel, their distribution in remote sensing images can be considered continuous. This means that a central pixel and its surrounding pixels are likely to belong to the same object [23], [24]. This characteristic is adopted to extract pure pixels from the image. To explain the pure pixel extraction algorithm clearly, a hyperspectral image with m rows and n columns of pixels is used as an example.
In this hyperspectral image, S(i, j) denotes the spectrum at position (i, j), where 1 ≤ i ≤ m and 1 ≤ j ≤ n. As shown in Fig. 2, according to the decision rule, the center pixel xi,j is a pure pixel if the average distance between the center pixel and its 8 adjacent pixels is less than c, where c is a prespecified threshold and the distance D is the ED, SAD, or ED-SAD value. The general searching sequence of the pure pixel extraction is from left to right and from top to bottom. The searching process is as follows.
If the pixels are at the positions of the image edge (i = 1 or i = m or j = 1 or j = n), bypass them.
If the pixels are not at the aforementioned positions (i ≠ 1, i ≠ m, j ≠ 1, and j ≠ n): if t ≤ c, the pixel xi,j is a pure pixel; otherwise, xi,j is not a pure pixel and the search continues, where t = mean (D(Si,j, Si-1,j-1), D(Si,j, Si-1,j), D(Si,j, Si-1,j+1), D(Si,j, Si,j-1), D(Si,j, Si,j+1), D(Si,j, Si+1,j-1), D(Si,j, Si+1,j), D(Si,j, Si+1,j+1)), i = 2, 3, …, m−1, j = 2, 3, …, n−1.
The threshold c should be determined according to the image to be evaluated. Generally, two adjacent pixels in a homogeneous region of the image are selected, and the ED and SAD between their spectra are calculated to guide the choice of c. The chosen threshold is then tested on the image, and the number of extracted pure pixels is compared with the total number of pixels in the image to check that it is reasonable. In this article, the thresholds for ED, SAD, and ED-SAD were 20–30, 0.04–0.08 rad, and 0.2–0.3, respectively.
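As an illustration only, the following sketch implements this neighborhood test in Python with NumPy, assuming the hyperspectral cube is stored as an array of shape (rows, columns, bands) and reusing one of the distance functions sketched above; the helper name and the threshold value in the example are hypothetical.

```python
import numpy as np

def extract_pure_pixels(cube, dist, c):
    """Return a boolean mask of pure pixels: a non-edge pixel is pure when the
    mean distance between its spectrum and its 8 neighbors is at most c."""
    m, n, _ = cube.shape
    pure = np.zeros((m, n), dtype=bool)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for i in range(1, m - 1):          # image-edge pixels are bypassed
        for j in range(1, n - 1):
            center = cube[i, j, :]
            t = np.mean([dist(center, cube[i + di, j + dj, :])
                         for di, dj in offsets])
            pure[i, j] = t <= c
    return pure

# Example usage (threshold value is illustrative):
# mask = extract_pure_pixels(cube, ed_sad, c=0.25)
```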
B. Estimating the SNR
The pure pixel and its 8 adjacent pixels belong to the same homogeneous region, so they have similar correlations between bands. When evaluating the band noise of an extracted pure pixel, we regard the pure pixel and its 8 adjacent pixels as a pure pixel block (3 × 3 pixels). After the pure pixel block is extracted from the image, the spectral correlation within the block is removed using multiple linear regression. The standard deviation of the residuals obtained by this procedure is used to estimate the band noise of the pure pixel.
The SNR estimate for each pure pixel is computed according to the following procedure.
Let xi,j,k be the extracted pure pixel in band k at position (i, j) within the hyperspectral image. The residual ri,j,k is computed using
\begin{equation*}
{r}_{i,j,k} = {x}_{i,j,k} - {\hat{x}}_{i,j,k} \tag{4}
\end{equation*}
\begin{equation*}
{\hat{x}}_{i,j,k} = a{x}_{i,j,k - 1} + b{x}_{i,j,k + 1} + c \tag{5}
\end{equation*}
The residuals of the 8 pixels adjacent to the pure pixel are obtained in the same way, namely, ri−1,j−1,k, ri−1,j,k, ri−1,j+1,k, ri,j−1,k, ri,j+1,k, ri+1,j−1,k, ri+1,j,k, and ri+1,j+1,k. The sum of the squares of these residuals is given by
\begin{align*}
S_{i,j,k}^2 =& r_{i,j,k}^2 + r_{i - 1,j - 1,k}^2 + r_{i - 1,j,k}^2 + r_{i - 1,j + 1,k}^2 + r_{i,j - 1,k}^2\\
& + r_{i,j + 1,k}^2 + r_{i + 1,j - 1,k}^2 + r_{i + 1,j,k}^2 + r_{i + 1,j + 1,k}^2 . \tag{6}
\end{align*}
For this pure pixel block, we use the unbiased estimate of the standard deviation, namely, \begin{equation*}
{\sigma }_{i,j,k} = \sqrt {\frac{1}{{\left( {W - 3} \right)}}S_{i,j,k}^2} . \tag{7}
\end{equation*}
In this estimate, the degrees of freedom are reduced from W to W−3 because three parameters are used in the regression, where W is the number of pixels in the pure pixel block (here, W = 9).
The signal value of the pure pixel is the mean value of the pixels in the pure pixel block, i.e.,
\begin{align*}
{{\bar{x}}}_{i,j,k} =& ({x}_{i,j,k} + {{x}_{i -1,j - 1,k}} + {{x}_{i -1,j,k}}\\
&+ {{x}_{i - 1,j + 1,k}} + \ {{x}_{i,j - 1,k}} + {{x}_{i,j + 1,k}}\\
& + {{x}_{i + 1,j - 1,k}} + {{x}_{i + 1,j,k}} + {x}_{{i + 1,j + 1,k}})/9. \tag{8}
\end{align*}
The SNR of pure pixel is the signal value divided by the standard deviation, i.e.,
\begin{equation*}
\text{SN}{\mathrm{R}}_{i,j,k} = \frac{{{{\bar{x}}}_{i,j,k}}}{{{\sigma }_{i,j,k}}}. \tag{9}
\end{equation*}
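A minimal sketch of (4)–(9) for a single pure pixel block, written in Python with NumPy, is given below. Fitting the coefficients a, b, and c of (5) per band over the nine pixels of the block by least squares is one reasonable reading of the procedure, and the function name is illustrative.

```python
import numpy as np

def pure_pixel_snr(cube, i, j):
    """SNR spectrum of one pure pixel: for each interior band, the 9 pixels of
    the 3x3 block are regressed on their two neighboring bands plus a constant
    (Eq. 5), and the residual scatter is taken as the noise (Eqs. 4, 6, 7)."""
    block = cube[i - 1:i + 2, j - 1:j + 2, :].reshape(9, -1).astype(float)
    W, n_bands = block.shape                         # W = 9 pixels per block
    snr = np.full(n_bands, np.nan)
    for k in range(1, n_bands - 1):
        # Design matrix: previous band, next band, constant term.
        A = np.column_stack([block[:, k - 1], block[:, k + 1], np.ones(W)])
        coef, *_ = np.linalg.lstsq(A, block[:, k], rcond=None)
        r = block[:, k] - A @ coef                   # residuals, Eq. (4)
        sigma = np.sqrt(np.sum(r ** 2) / (W - 3))    # Eqs. (6)-(7)
        signal = block[:, k].mean()                  # Eq. (8)
        snr[k] = signal / sigma                      # Eq. (9)
    return snr
```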
Finally, a box counting procedure is used to obtain the SNR estimate of the entire hyperspectral image: 100 intervals are divided between the minimum value and 1.2 times the mean value of the SNRs of all pure pixels, and each pure pixel is assigned to the corresponding interval according to its SNR value. The SNR of the entire UAV hyperspectral image is the mean SNR of the pure pixels in the interval containing the largest number of pure pixels.
When the number of division intervals is varied between 20 and 150, the accumulated SNR results over the 45 bands differ by roughly 20; after testing, dividing the range into 100 intervals proved most reasonable. Moreover, compared with directly averaging the SNRs of all pure pixels, this procedure increases the accuracy of the SNR estimate of the entire hyperspectral image by around 5%. Therefore, adopting a box counting approach to filter the SNR aggregation interval of the pure pixels is important and effective.
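A sketch of this box counting step in Python with NumPy is shown below, assuming snr_values holds the per-pure-pixel SNRs of one band; the 100-interval setting follows the text, while the function name is illustrative. In practice it would be applied band by band to the SNR spectra of all extracted pure pixels.

```python
import numpy as np

def box_counting_snr(snr_values, n_intervals=100):
    """Image-level SNR for one band: histogram the per-pixel SNRs between the
    minimum and 1.2 x mean, then average the SNRs in the most populated bin."""
    snr_values = np.asarray(snr_values, dtype=float)
    lo, hi = snr_values.min(), 1.2 * snr_values.mean()
    counts, edges = np.histogram(snr_values, bins=n_intervals, range=(lo, hi))
    k = np.argmax(counts)
    in_bin = (snr_values >= edges[k]) & (snr_values <= edges[k + 1])
    return snr_values[in_bin].mean()
```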
Data
A. UAV Hyperspectral Images
In this article, a Rikola hyperspectral frame camera based on Fabry–Perot interferometer (FPI) technology was used. It is manufactured by Senop Ltd., Oulu, Finland, and was carried on a DJI Matrice 600 Pro UAV through a DJI Ronin-MX gimbal (Shenzhen, China; http://www.dji.com/matrice600). The Rikola hyperspectral image size is 1010 × 1010 pixels with a pixel size of 5.5 μm (an image resolution of 6.5 cm at a 100 m flight altitude). The adjustable FPI filter allows users to select the spectral bands of the hyperspectral cubes according to the requirements of the application; during airborne imaging, the camera can collect hyperspectral images with up to 65 bands in the 503–908 nm range, with a spectral resolution ranging from 3 to 10 nm (full width at half maximum, FWHM).
The preprocessing steps of the Rikola hyperspectral images mainly include [25], [26]: 1) format conversion and dark current correction (using the Hyperspectral Imager software); 2) band registration (using the RegMosaic software); and 3) image mosaicking (using the Agisoft Photoscan Professional software).
In general, the area covered by a mosaicked hyperspectral image from one takeoff and landing of the UAV is about 180 × 100 m (at a flight altitude of 100 m above ground level and a flight time of 13 min), i.e., the hyperspectral image contains 2700 × 1500 pixels. In this article, in order to test the accuracy and stability of the new SNR estimation method, and to ensure that the environmental factors were the same when the UAV hyperspectral images were acquired, it was necessary to use UAV hyperspectral images with different texture complexity from a single aerial survey task for horizontal comparison. Therefore, eight UAV-borne Rikola hyperspectral images were selected, comprising four pairs of hyperspectral images with complex and simple texture. As shown in Fig. 3, the resolution of the hyperspectral images is between 0.05 and 0.13 m. All hyperspectral images were cut to a size of 950 × 500 pixels, and the DN values range from 0 to 255.
Rikola radiance hyperspectral images for SNR estimation. (All images are displayed as false-color composites; the wavelengths used for the composites are 853 nm, 692 nm, and 503 nm.)
Fig. 3(a) shows a mixed forest with many species and complex texture features. Fig. 3(b) shows jujube trees, with only one type of ground object, so the texture features are relatively simple. Fig. 3(a) and (b) were not obtained from the same takeoff and landing of the UAV, but they can be used to compare the influence of the complexity of land cover types on the accuracy of SNR estimation, as well as the differences among the judging criteria for pure pixel extraction. The building [see Fig. 3(c)] and water [see Fig. 3(d)] images were obtained from one takeoff and landing of the UAV. The cotton [see Fig. 3(e)] and zucchini [see Fig. 3(f)] images were obtained from one takeoff and landing of the UAV. The wheat [see Fig. 3(g)] and bare soil [see Fig. 3(h)] images were obtained from one takeoff and landing of the UAV. The SNR within each of these pairs should therefore be the same. Compared with the building, cotton (seedling stage), and wheat images, the water, zucchini (initial flowering stage), and bare soil images are characterized by large homogeneous regions and simple texture features.
B. Noise Processing Added to Hyperspectral Image
The noise of hyperspectral images is generally divided into random noise and periodic noise [27], [28]. Periodic noise has a fixed pattern and can be removed through appropriate processing, whereas random noise is unpredictable and difficult to remove completely. Therefore, random noise is the main factor affecting the quality of hyperspectral images [29]. Random noise in UAV hyperspectral images is generally considered to be additive noise unrelated to the signal in the image [15], [30], [31], and it usually follows a normal distribution [29]. Therefore, in this article, a random process with zero mean and a Gaussian probability density function was used to simulate hyperspectral noise.
In order to test the accuracy and stability of the SNR estimation algorithm, four hyperspectral images [see Fig. 3(c), (d), (g), and (h)] were selected, and Gaussian random noise corresponding to SNRs of 20, 30, and 40 was added. Meanwhile, the statistical indicators mean absolute error (MAE) and standard deviation of absolute error (SDAE) were used to evaluate the SNR estimation results. The MAE is the average of the absolute deviations between the estimated and reference values; because these errors cannot offset each other, it reflects the actual prediction error accurately. The SDAE indicates the spread of the error across the bands of the whole image. The smaller the values of MAE and SDAE, the closer the predicted value is to the true value. Their calculation formulas are as follows:
\begin{align*}
&\text{MAE} = \frac{1}{K}\sum\limits_{k = 1}^K {\left| {\text{SN}{\mathrm{R}}_k - \text{SN}{{{\rm{\hat{R}}}}}_k} \right|} \tag{10}\\
&\text{SDAE} = {\left( {\frac{1}{K}\sum\limits_{k = 1}^K {{{\left( {\left| {\text{SN}{\mathrm{R}}_k - \text{SN}{{{\rm{\hat{R}}}}}_k} \right| - \text{MAE}} \right)}}^2} } \right)}^{1/2} \tag{11}
\end{align*}
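For illustration, the following Python/NumPy sketch shows one simple way to add zero-mean Gaussian noise at a prescribed SNR level and to compute the MAE and SDAE of (10) and (11). Setting the noise standard deviation per band as the band mean divided by the target SNR is an assumption of this sketch, and the function names are hypothetical.

```python
import numpy as np

def add_gaussian_noise(cube, target_snr, rng=None):
    """Add zero-mean Gaussian noise band by band so that mean(band)/sigma equals
    the target SNR (one simple way to realize SNR levels of 20, 30, or 40)."""
    rng = np.random.default_rng(0) if rng is None else rng
    noisy = cube.astype(float).copy()
    for k in range(cube.shape[2]):
        sigma = noisy[:, :, k].mean() / target_snr
        noisy[:, :, k] += rng.normal(0.0, sigma, size=noisy[:, :, k].shape)
    return noisy

def mae_sdae(snr_est, snr_ref):
    # Eqs. (10)-(11): mean and standard deviation of the per-band absolute error.
    abs_err = np.abs(np.asarray(snr_est) - np.asarray(snr_ref))
    mae = abs_err.mean()
    sdae = np.sqrt(np.mean((abs_err - mae) ** 2))
    return mae, sdae
```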
Results
A. SNR Estimation for UAV Hyperspectral Images of Different Scenarios
The SNR estimation results for the eight hyperspectral images in Fig. 3 obtained with SSDC, HRDSDC, OSSDC, and PPESDC are shown in Fig. 4. As shown in Fig. 4(d), (f), and (h), when the image is homogeneous, the SNR estimated by PPESDC is almost the same as that estimated by HRDSDC and OSSDC. As shown in Fig. 4(c), (e), and (g), when the image is heterogeneous, the SNR estimated by PPESDC is more stable than that estimated by HRDSDC and OSSDC. In addition, as shown in Fig. 4(a) and (b), the complexity of land cover types affects the SNR estimates of all six methods, with the SNR estimated by SSDC showing the largest differences. Moreover, comparing the SNR estimation results of HRDSDC, OSSDC, and PPESDC (ED, SAD, and ED-SAD) shows that the effect of the different judging criteria (ED, SAD, and ED-SAD) on the extraction of pure pixels from complex ground objects is not obvious.
SNR estimation results of radiance hyperspectral images in Fig. 3 using SSDC, HRDSDC, OSSDC, and PPESDC.
By comparing the SNR values in Fig. 4(c), (e), and (g) with those in the corresponding Fig. 4(d), (f), and (h), we can see that the estimation results of SSDC differ considerably and are the most affected by the texture and spectral characteristics of the ground objects. The PPESDC method is the least susceptible to the influence of land cover types and is therefore better suited than SSDC, HRDSDC, and OSSDC for SNR estimation of UAV hyperspectral images.
Through comparing the SNR results of PPESDC(ED), PPESDC(SAD), and PPESDC(ED-SAD) in Fig. 4, we can see that the SNRs estimated by the three judging criteria (ED, SAD, and ED-SAD) are almost the same no matter whether the image is homogeneous or heterogeneous, and regardless of the complexity of the image of land cover types.
B. Efficiency Evaluation of the Method
To evaluate the efficiency of the method, the UAV hyperspectral images were divided into complex texture images [see Fig. 3(a), (c), (e), and (g)] and simple texture images [see Fig. 3(b), (d), (f), and (h)], and the average time spent on each of the two types was calculated. The average time spent by the six SNR methods is shown in Table I. The time spent by HRDSDC and OSSDC is almost the same, and the time spent by PPESDC(ED), PPESDC(SAD), and PPESDC(ED-SAD) is also almost the same. However, HRDSDC and OSSDC run the fastest and PPESDC runs the slowest; the time spent by PPESDC is about three times that of HRDSDC and OSSDC. In addition, for HRDSDC, OSSDC, and PPESDC, the time spent evaluating the SNR of complex texture images is less than that spent on simple texture images.
C. Stability Evaluation of the Method
In order to test the accuracy and stability of the method for SNR estimation of hyperspectral images with different noise levels, the hyperspectral images of buildings, water, cotton, zucchini, wheat, and bare soil in Fig. 3 [see Fig. 3(c)–(h)] were chosen, and different levels of noise were added. The SNR results of water and bare soil were taken as the corresponding true values. Then, the MAE of the SNR estimation results was calculated and normalized to the same level. The results are shown in Table II. The MAE and SDAE of the SNR estimation results change only within a small range as the proportion of added noise increases, which shows that the six methods are stable for SNR estimation of complex texture images with various levels of noise. Additionally, the MAE of the PPESDC results is the smallest at each added noise level, demonstrating that PPESDC estimates the SNR more accurately than the SSDC, HRDSDC, and OSSDC methods.
Discussion
A. SNR Estimation for UAV Hyperspectral Images of Different Sensor Type
In addition to the hyperspectral images of cotton at the bud stage obtained by the UAV frame Rikola hyperspectral sensor, a UAV pushbroom Nano-Hyperspec sensor (Headwall Photonics, Boston, USA) was also used to obtain hyperspectral images of cotton at the same stage. Hyperspectral images of cotton in the bud stage obtained by the two hyperspectral imagers are shown in Fig. 5 (left). The hyperspectral images were cut to a size of 950 × 500 pixels, and the grayscale range was adjusted to 0–255. As shown in Fig. 5 (right), correlation analysis of the two hyperspectral images shows that the correlation between the bands of both image types is strong, but it is worth noting that the band correlation is poor at the red edge [32] of the vegetation spectrum. Moreover, if the band spacing is too large, the SNR estimation may become inaccurate, so it is necessary to explore the influence of band spacing on the SNR estimation results. Besides, in some applications, scientists prefer to work with hyperspectral reflectance images, from which the effects of the solar radiation spectrum and the atmosphere have been removed by radiation correction. Therefore, the applicability of the method to hyperspectral reflectance images was also tested.
(Left) (a) Rikola hyperspectral image of cotton in the bud stage, 42 bands (spectral range 500–900 nm). (b) Nano hyperspectral image of cotton in the bud stage, 227 bands (spectral range 500–1000 nm). (Right) Correlation between bands of two hyperspectral images.
1) SNR Estimation for UAV Hyperspectral Images of Different Band Spacing
As shown in Fig. 6, for both the Rikola and Nano hyperspectral images, there is no abrupt change in the SNR estimation results at the red edge of the vegetation spectrum. With reference to the SNR estimation results in Fig. 4, the SNR result of PPESDC is reliable at these band spacings. The band spacings of the two hyperspectral imagers are 9.5 nm and 2.2 nm, respectively, and hyperspectral imaging is generally defined by a spectral resolution finer than 10 nm. Therefore, this method is generally applicable to hyperspectral images.
2) SNR Estimation for UAV Hyperspectral Reflectance Images
The two radiance hyperspectral images in Fig. 5 were corrected into reflectance hyperspectral images using diffuse reference panels and an empirical linear model [33]. The SNR estimation results for the reflectance hyperspectral images are shown in Fig. 7. Compared with Fig. 6, the value range of the SNR and the shape of the SNR curve of the radiance hyperspectral images are basically consistent with those of the reflectance hyperspectral images, which shows that PPESDC is suitable for both radiance and reflectance images. Moreover, compared with the other SNR estimation methods, PPESDC is more stable and reliable.
In addition, comparing the SNR estimation results of PPESDC(ED), PPESDC(SAD), and PPESDC(ED-SAD) in Figs. 6 and 7 shows that the estimation results of the three judging criteria are almost the same. It is difficult to judge which judging criterion is better, and this may be related to the land cover type. This also confirms the necessity and accuracy of using the interval division (box counting) method to screen the SNR clustering interval of all pure pixels in the PPESDC method.
From these tests (see Figs. 4, 6, and 7), we can see that when the band spacing is large, SSDC is particularly vulnerable to the influence of land cover type, whereas PPESDC is the least vulnerable. Furthermore, comparing Figs. 6 and 7, the SNR of the Rikola hyperspectral sensor at band 34 (822 nm) and band 27 (782 nm) and the SNR of the Nano-Hyperspec sensor at band 121 (765 nm) show larger fluctuations. This fluctuation is related to the performance of the sensors themselves, but the fluctuation amplitude of the SNR curve of the reflectance hyperspectral image is significantly smaller than that of the radiance hyperspectral image. This variation can be considered to be linked to the atmospheric environment, such as oxygen absorption in the atmosphere. In addition, by observing the hyperspectral images and spectral curves, we found that the image radiance values in these bands decreased as a whole and the spectral curves were obviously abnormal, whereas this anomaly is no longer visible in the reflectance images after radiation correction. It can be seen that although radiation correction is a linear affine transformation, it effectively reduces the influence of the atmosphere on the spectral accuracy of hyperspectral images. Therefore, radiation correction is also a necessary preprocessing step for low-altitude UAV hyperspectral remote sensing images.
B. Discussion About Algorithm Design
1) The predictor we have used to decorrelate the hyperspectral image is quite simple and assumes that the hyperspectral image noise is white; the residual images it produces show weak or no spatial features. Nevertheless, the residuals themselves still show some correlation, so for hyperspectral images with correlated noise we can try more sophisticated predictors, e.g.,
\begin{align*}
{\hat{x}}_{i,j,k} =& a{x}_{i,j,k - 1} + b{x}_{i,j,k + 1} + c{x}_{i + 1,j,k} + d \tag{12}\\
{\hat{x}}_{i,j,k} =& a{x}_{i,j,k - 1} + b{x}_{i,j,k + 1} + c{x}_{i,j - 1,k} + d{x}_{i,j + 1,k}\\
& + e{x}_{i - 1,j,k} + f{x}_{i + 1,j,k} + g. \tag{13}
\end{align*}
These regressions include one and four spatially adjacent pixel values of the extracted pure pixel, respectively. The multivariate linear regression equations for the pure pixel and its eight adjacent pixels are
\begin{align*}
{{\hat{x}}}_{i,j,k} =& a{x}_{i,j,k - 1} + b{x}_{i,j,k + 1} + c{x}_{i + 1,j,k} + d\\
{{\hat{x}}}_{i - 1,j - 1,k} =& a{x}_{i - 1,j - 1,k - 1} + b{x}_{i - 1,j - 1,k + 1} + c{x}_{i,j,k} + d\\
{{\hat{x}}}_{i - 1,j,k} =& a{x}_{i - 1,j,k - 1} + b{x}_{i - 1,j,k + 1} + c{x}_{i,j,k} + d\\
{{\hat{x}}}_{i - 1,j + 1,k} =& a{x}_{i - 1,j + 1,k - 1} + b{x}_{i - 1,j + 1,k + 1} + c{x}_{i,j,k} + d\\
{{\hat{x}}}_{i,j - 1,k} =& a{x}_{i,j - 1,k - 1} + b{x}_{i,j - 1,k + 1} + c{x}_{i,j,k} + d\\
{{\hat{x}}}_{i,j + 1,k} =& a{x}_{i,j + 1,k - 1} + b{x}_{i,j + 1,k + 1} + c{x}_{i,j,k} + d\\
{{\hat{x}}}_{i + 1,j - 1,k} =& a{x}_{i + 1,j - 1,k - 1} + b{x}_{i + 1,j - 1,k + 1} + c{x}_{i,j,k} + d\\
{{\hat{x}}}_{i + 1,j,k} =& a{x}_{i + 1,j,k - 1} + b{x}_{i + 1,j,k + 1} + c{x}_{i,j,k} + d\\
{{\hat{x}}}_{i + 1,j + 1,k} =& a{x}_{i + 1,j + 1,k - 1} + b{x}_{i + 1,j + 1,k + 1} + c{x}_{i,j,k} + d. \tag{14}
\end{align*}
In our tests, there was no clear gain compared with the regression using only the pure pixel itself. Moreover, these predictors are more complicated to implement quickly and make little difference to the overall results. Whether such more sophisticated processing can eliminate noise correlation and scene effects remains an open question worth investigating.
2) When calculating the clustering interval of the pure pixel SNRs, in addition to the method in this article, the method of [34] can also be used. First, 100 intervals are divided between the minimum SNR and 1.2 times the average SNR of all pure pixels, and the number of pure pixels contained in each interval is counted. Second, the statistical curve of the pure pixel counts is transformed by the Fourier transform; in the frequency domain, an ideal low-pass filter is used to remove the high-frequency components, and the low-frequency waveform of the count curve is then obtained by the inverse Fourier transform. Finally, in the extracted low-frequency waveform, the average SNR of the interval corresponding to the first waveform vertex is used as the optimal estimate of the image SNR.
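A rough sketch of this alternative interval selection in Python with NumPy is given below; the cutoff of 10 frequency samples for the ideal low-pass filter is an illustrative assumption, as is the function name.

```python
import numpy as np

def fft_lowpass_interval_snr(snr_values, n_intervals=100, cutoff=10):
    """Alternative to the box counting step: low-pass filter the pure-pixel
    count curve with an ideal filter in the frequency domain and average the
    SNRs in the interval at the first peak of the smoothed curve."""
    snr_values = np.asarray(snr_values, dtype=float)
    lo, hi = snr_values.min(), 1.2 * snr_values.mean()
    counts, edges = np.histogram(snr_values, bins=n_intervals, range=(lo, hi))
    spectrum = np.fft.rfft(counts)
    spectrum[cutoff:] = 0.0                      # ideal low-pass filter
    smooth = np.fft.irfft(spectrum, n=n_intervals)
    # First local maximum (vertex of the first waveform) of the smoothed curve.
    peaks = [k for k in range(1, n_intervals - 1)
             if smooth[k] >= smooth[k - 1] and smooth[k] >= smooth[k + 1]]
    k = peaks[0] if peaks else int(np.argmax(smooth))
    in_bin = (snr_values >= edges[k]) & (snr_values <= edges[k + 1])
    return snr_values[in_bin].mean() if in_bin.any() else np.nan
```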
3) In terms of algorithm efficiency, when the image size is small, the difference in speed is not apparent. When the hyperspectral remote sensing image is large or a sophisticated predictor is used, PPESDC is significantly slower than the other algorithms (e.g., the results shown in Table I). Considering that large-scale hyperspectral images contain a large number of pure pixels and that PPESDC uses the box counting method to filter the SNR clustering interval, the accuracy should not be reduced by subsampling. Therefore, we propose increasing the extraction step size between pure pixels to improve the traversal efficiency of the algorithm. The hyperspectral images shown in Fig. 3 were tested; the results show that with a step size of 3 pixels the speed increases by 3–5 times, while the relative error of the SNR estimation result is less than 1. Therefore, the PPESDC method proposed in this article also has certain advantages in efficiency compared with the timings in Table I.
Conclusion
The SNR estimation of remote sensing images is of great significance to the performance evaluation of remote sensing systems and the application of remote sensing data. In this article, a spectral-dimension decorrelation SNR estimation algorithm based on pure pixel extraction is proposed for the SNR estimation of high-resolution UAV hyperspectral images. The algorithm uses the high spatial resolution of UAV hyperspectral images to accurately extract pure pixels. Compared with traditional hyperspectral image SNR estimation algorithms, it reduces the influence of object edge and texture information on the SNR estimation results, and it avoids the problem that some remote sensing images cannot be continuously segmented in continuous-segmentation SNR estimation algorithms (e.g., HRDSDC and OSSDC). The main conclusions are as follows.
No matter whether the image texture features are simple or rich, the judging criteria ED, SAD, and ED-SAD have little effect on the SNR estimation results. However, determining the ED threshold requires repeated testing, and with SAD the threshold value and the number of pure pixels obtained vary significantly. Therefore, theoretically, we prefer ED-SAD.
In contrast, PPESDC demonstrates better reliability and adaptive abilities, as can be seen in all the images used in this article.
In summary, the PPESDC method has good applicability to UAV hyperspectral images with different ground cover types, and its SNR estimation results are more accurate. The method can be used for all low-altitude UAV hyperspectral images, including radiance and reflectance images with different band spacings, acquired by either frame or pushbroom imaging. This article provides theoretical and methodological guidance for the selection of SNR estimation methods for hyperspectral images, and it also provides a measurement basis for the radiation correction of specific bands of UAV hyperspectral images.
ACKNOWLEDGMENT
The authors would like to thank Shihezi University and the Research Center for Space Information Engineering Technology for their support of this article, and also thank the Chinese Academy of Sciences, Q. Zhang, P. Jiang, and P. Yuan for their assistance in collecting the UAV data.