
Subpixel Change Detection Based on Improved Abundance Values for Remote Sensing Images



Abstract:

To achieve land cover change detection (LCCD) with both fine spatial and temporal resolutions from remote sensing images, subpixel mapping-based approaches have been widely studied in recent years. The fine spatial but coarse temporal resolution image and the coarse spatial but fine temporal resolution image are used to accomplish LCCD by combining their advantages. However, the performance of subpixel mapping is easily affected by the accuracy of spectral unmixing, thereby reducing the reliability of LCCD. In this article, a novel subpixel change detection scheme based on improved abundance values is proposed to tackle the aforementioned problem, in which the spatial distribution of the fine spatial resolution image is borrowed to improve the accuracy of spectral unmixing. First, the coarse spatial resolution image is used to generate the original abundance image by the spectral unmixing method. Second, the spatial distribution information of the fine spatial resolution image is incorporated into the original abundance image to obtain improved abundance values. Third, the fine spatial resolution subpixel map is generated by the subpixel mapping method using the improved abundance values. Finally, the fine resolution change map is obtained by comparing the subpixel map with the fine spatial resolution image. Experiments are conducted on a simulated dataset based on Landsat-7 images and two real datasets based on Landsat-8 and MODIS images. The results on the real datasets show that the proposed method effectively improves the performance of LCCD, with overall accuracy gains of approximately 1.26% and 0.79% over the existing methods.
Topic: Integrated Crowdsourcing and GeoAI for Land Use, Land Cover and Change Detection
Page(s): 10073 - 10086
Date of Publication: 23 November 2022

SECTION I.

Introduction

In remote sensing, land cover change detection (LCCD) identifies land cover changes from remote sensing images acquired at different times [1], [2], [3], [4]. A significant number of LCCD approaches have been investigated during decades of development, and many of these methods have been successfully applied in domains such as landslide detection [5], [6], [7], flood mapping [8], [9], and urban expansion monitoring [10], [11], [12]. At present, LCCD using remote sensing images remains an active research area.

After decades of research and development, LCCD using remote sensing images has matured considerably, and a large number of change detection algorithms have been proposed and widely used. The commonly used algorithms mainly include threshold-based methods [13], [14], clustering-based methods [15], [16], object-oriented methods [17], [18], and deep-learning-based methods [19], [20], [21], [22]. Threshold-based and clustering-based methods were the earliest to be proposed and applied; they are relatively simple and fast, but their false detection and missed detection rates are relatively high. Object-oriented methods perform change detection on segmented remote sensing images and have achieved good results for high spatial resolution imagery. In recent years, with the extensive research and application of deep learning, many studies have employed convolutional neural networks for change detection. Such methods provide end-to-end change information output and achieve higher change detection accuracy by exploiting deep image features.

With the advancement of sensor platforms, the acquisition of remote sensing images has become more convenient, and the available image types are more abundant. The spatial resolution of remote sensing images has improved from approximately 100 m to 1 m or even finer, and their temporal resolution has also greatly improved. Against this background, achieving LCCD with both fine spatial and temporal resolutions becomes possible, which will contribute to the monitoring of disasters such as floods and forest fires. However, due to hardware limitations of sensors, it is difficult to obtain images with both fine resolutions simultaneously. In other words, if an image has a fine spatial resolution, its revisit period is generally long, and if an image has a short revisit period, its spatial resolution is generally coarse. Although a few remote sensing systems can achieve both fine resolutions by multisatellite grouping, their high cost hinders their application in large-scale monitoring. The multispectral bands of Landsat images, for example, have a 30 m spatial resolution and a temporal resolution of 16 days; MODIS images, on the other hand, have a 500 m spatial resolution but a temporal resolution of 1 day. However, for such bitemporal inputs, traditional LCCD methods have difficulty extracting changes with both fine spatial and temporal resolutions. To combine the advantages of the aforementioned two image types, it is essential to develop new LCCD techniques based on them.

To address these problems, researchers have made numerous attempts in recent years, including spatiotemporal fusion-based methods [23], [24], [25], [26], spectral unmixing-based methods [27], [28], [29], [30], and subpixel mapping-based methods [31], [32], [33], [34]. These three categories of LCCD methods are briefly introduced below.

A. Spatiotemporal Fusion-Based LCCD Methods

Spatiotemporal fusion is a technology that can generate images with both fine spatial and temporal resolutions [25]. Based on this technology, the generated images can be used to obtain satisfactory LCCD results. Xi et al. [23] presented a new spatiotemporal cube model and a spatiotemporal multiresolution segmentation method to analyze intra-annual seasonal changes in land cover. In [24], an improved flexible spatiotemporal fusion method was proposed for change detection with spatial details. To detect and predict the land cover changes in Hefei over the past 30 years, a novel change detection method based on spatiotemporal fusion and the cellular automata-Markov model was proposed in [25]. In [26], a new forest change detection approach based on a spatiotemporal inpainting mechanism was proposed to detect forest cover changes. Although change detection methods based on spatiotemporal fusion can extract change information with fine spatial and temporal resolutions, these methods usually require images with both fine spatial and fine temporal resolutions acquired synchronously. This stringent requirement restricts the use of spatiotemporal fusion-based LCCD approaches in real-world problems.

B. Spectral Unmixing-Based LCCD Methods

Spectral unmixing is a technique applied to the mixed pixels of remote sensing images to obtain the abundance of each component using a certain mathematical model. As is known, there are many mixed pixels in remote sensing images, especially in coarse spatial resolution images. The abundance of each component within a mixed pixel can be acquired using spectral unmixing techniques, from which subpixel-scale change information can be obtained. In [27], a novel multitemporal spectral unmixing approach was presented to handle the challenging LCCD problem by investigating the spectral–temporal variations at a subpixel level. Wu et al. [28] developed a new LCCD approach based on spectral unmixing of stacked multitemporal remote sensing images with variable endmembers. For monitoring coastal wetlands, a subpixel-level LCCD approach via collaborative coupled unmixing using spatial and spectral information was presented in [29]. In [30], a new LCCD method based on convolutional sparse analysis and temporal spectral unmixing was proposed to combine the advantages of pixel- and subpixel-level change detection. However, spectral unmixing-based methods can only obtain the proportion of change within a mixed pixel; it is difficult for them to locate the specific position of the change.

C. Subpixel Mapping-Based LCCD Methods

Using a coarse spatial resolution image, subpixel mapping can extract the spatial distribution information of ground objects at a subpixel scale. For the coarse spatial resolution image, subpixel mapping is implemented to generate a land cover map with fine spatial resolution, and the land cover changes can then be obtained by comparing it with the fine spatial resolution image. Different from spatiotemporal fusion and spectral unmixing-based approaches, subpixel mapping-based methods do not require image pairs with both fine resolutions in the same period. In addition, the specific location of the change information can be obtained by this kind of method. Therefore, subpixel mapping-based methods have been widely studied and applied to LCCD with both fine spatial and temporal resolutions. Ling et al. [31] presented an LCCD algorithm to obtain the subpixel-level spatial pattern of land cover changes. In [32], a Hopfield neural network (HNN)-based method was proposed to detect changes at subpixel resolution by borrowing information from a known fine spatial resolution (FSR) land cover map. To fully utilize the fine resolution image information, a novel supervised subpixel LCCD approach via a back-propagation neural network (BPNN) was presented in [33]. He et al. [34] developed a new subpixel mapping method using maximum a posteriori estimation with joint spectral–spatial–temporal information for LCCD.

Although subpixel mapping-based LCCD approaches have shown tremendous promise in detecting changes with both fine resolutions, there are still issues to be addressed. In the subpixel mapping procedure, the abundance image generated by spectral unmixing is the input of the subpixel mapping method. Hence, the accuracy of the abundance image is very important because it directly influences the subpixel mapping accuracy, which in turn affects the LCCD results. Although researchers have proposed many spectral unmixing methods to improve the abundance image, the uncertainty of abundance values is inevitable, because the transformation from a mixed pixel value to the abundance values of each class is an ill-conditioned process. Hence, it is difficult to obtain sufficiently accurate abundance values by only improving the mathematical model used for spectral unmixing.

According to the analysis described above, for detecting changes with fine spatial and temporal resolutions, a subpixel change detection method based on improved abundance values is proposed in this article. Specifically, the main contributions are as follows:

  1. A novel subpixel change detection framework based on improved abundance values is proposed to detect land cover changes at both fine spatial and temporal resolutions, in which the spatial distribution information from the fine spatial but coarse temporal image is borrowed to improve spectral unmixing.

  2. The improved abundance value is generated by the abundance image difference measure, which can be used to upscale the coarse spatial but fine temporal resolution image to a fine spatial resolution subpixel map.

The remainder of this article is organized as follows. Section II illustrates the current problem of subpixel mapping-based LCCD methods and describes the methodology of the proposed scheme. Section III presents the experimental settings and analysis based on the simulated and real datasets. Section IV discusses the results, and Section V presents the conclusions.

SECTION II.

Methodology

A. Problem Formulation

As mentioned before, to acquire land cover change information with both fine spatial and temporal resolutions, subpixel mapping-based LCCD methods have been widely studied. Suppose the bitemporal input images at t1 and t2 are the fine spatial but coarse temporal resolution image and the coarse spatial but fine temporal resolution image, respectively. As shown in Fig. 1, the general framework of subpixel mapping-based LCCD methods is as follows. First, using the image at t1 as input, the fine spatial resolution thematic map is obtained by image interpretation. Second, for the image at t2, the subpixel mapping algorithm is applied to produce the subpixel map with fine spatial resolution. Finally, the LCCD result is generated by comparing the thematic map with the subpixel map.

Fig. 1. General framework of subpixel mapping-based LCCD methods.

Although many subpixel mapping-based LCCD methods have been proposed, few articles have considered the uncertainty of spectral unmixing on subpixel maps and LCCD. In fact, as an input to subpixel mapping methods, the spectral unmixing result plays an important role in subpixel mapping-based LCCD methods. We note that spectral unmixing is adopted to generate the abundance images for the coarse spatial resolution image using mathematical models. The mathematical models that are used are often based on certain assumptions or data fitting. Hence, the transformation from coarse spatial resolution image to fine spatial resolution abundance image of each class is an ill-conditioned process because of the complexity of spectral imaging. As a result, the uncertainty of abundance values inevitably arises during spectral unmixing. Moreover, this uncertainty will propagate step by step and affect the accuracy of the LCCD results in the end.

B. Proposed Subpixel Mapping-Based LCCD Method

Considering the above problems, a subpixel mapping-based LCCD framework built on improved abundance values is proposed in this article. Different from the traditional methods, the abundance values are not generated directly by spectral unmixing alone; the fine spatial distribution information of the image at t1 is also used to improve the abundance values. The framework of the proposed LCCD scheme is shown in Fig. 2. The proposed method mainly includes three steps, namely, image interpretation and spectral unmixing, the incorporation process, and subpixel mapping and change detection. Through the first step, the fine spatial resolution thematic map and the original abundance image are obtained, and through the second and third steps, the improved abundance image and the final change map are generated. The details of the scheme are as follows.

Fig. 2. Framework of the proposed LCCD scheme.

1) Image Interpretation and Spectral Unmixing

As mentioned before, the bitemporal input images at t1 and t2 are the fine spatial but coarse temporal resolution image and the coarse spatial but fine temporal resolution image. Let the two input images be two coregistered images collected at two different times over the same area, and let S (S > 1) denote the zoom factor between the two images. For example, if the images at t1 and t2 have spatial resolutions of 10 and 20 m, respectively, then S = 2.

For the image at t1, an image interpretation process is conducted to obtain the fine spatial resolution thematic map. In this article, manual visual interpretation is applied to produce the thematic map for accuracy. For the image at t2, the spectral unmixing procedure is utilized to generate the abundance image. Specifically, the pixel purity index method [35], chosen for its simplicity and practicability, is used to extract the endmembers, where the number of endmembers is set to the number of classes in the fine thematic map. The abundance image is then generated using the fully constrained linear spectral mixture model [36].
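To make the unmixing step concrete, the sketch below applies the classic augmented-NNLS approximation of fully constrained least squares (nonnegative abundances that approximately sum to one), assuming the endmembers have already been extracted. The function name, the toy data, and the delta weighting are illustrative assumptions, not values from this article.

    import numpy as np
    from scipy.optimize import nnls

    def fcls_unmix(E, X, delta=1e3):
        """Fully constrained (nonnegative, approximately sum-to-one) linear
        unmixing via the augmented-NNLS trick.
        E : (bands, q) endmember matrix; X : (pixels, bands) spectra."""
        bands, q = E.shape
        E_aug = np.vstack([E, delta * np.ones(q)])   # weighted sum-to-one row
        abundances = np.empty((X.shape[0], q))
        for i, x in enumerate(X):
            abundances[i], _ = nnls(E_aug, np.append(x, delta))
        return abundances

    # toy check: recover a known 3-endmember mixture of a 4-band pixel
    E = np.random.rand(4, 3)
    x = E @ np.array([0.6, 0.3, 0.1])
    print(fcls_unmix(E, x[None, :]).round(3))        # ~ [0.6, 0.3, 0.1]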

2) Incorporation Process

To obtain accurate spectral unmixing results, an incorporation process is proposed in this article, in which the spatial distribution of the fine spatial resolution thematic map is borrowed to improve the accuracy of the abundance image. As shown in Fig. 3, the proposed incorporation process is as follows.

Fig. 3. Incorporation procedure.

First, for the fine spatial resolution thematic map at t1, a degradation process based on an S × S mean filter is used to produce the abundance image at t1. Fig. 4 gives an example to illustrate the degradation process. Suppose the zoom factor S = 6. The image on the left of Fig. 4 represents the fine thematic map with two classes (i.e., Class 1 and Class 2) and 18 × 18 pixels. The image on the right represents the corresponding abundance image with 3 × 3 pixels. According to the specific spatial distribution of the objects in the left image, the proportions of Class 1 and Class 2 at the corresponding location are 0.44 and 0.56, respectively. In this way, the abundance image at t1 can be obtained.
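The degradation step amounts to block-averaging each class's indicator map; the following is a minimal sketch under that reading (the function name and the toy thematic map are illustrative, so the printed fractions will not match Fig. 4 exactly).

    import numpy as np

    def degrade_to_abundance(thematic, classes, S):
        """S x S block-average each class indicator map of a fine thematic
        map (H, W) into a coarse abundance cube (H//S, W//S, q)."""
        H, W = thematic.shape
        h, w = H // S, W // S
        cube = np.empty((h, w, len(classes)))
        for k, c in enumerate(classes):
            ind = (thematic == c).astype(float)
            cube[:, :, k] = ind[:h * S, :w * S].reshape(h, S, w, S).mean(axis=(1, 3))
        return cube

    # illustrative 18 x 18 map with S = 6, as in the Fig. 4 setting
    tm = np.ones((18, 18), int)   # Class 1 everywhere...
    tm[:6, :3] = 2                # ...with a small Class 2 patch
    print(degrade_to_abundance(tm, classes=[1, 2], S=6)[0, 0])  # class fractions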

Fig. 4. Example to illustrate the degradation process.

Second, an abundance image difference measure [37] is used to separate the changed and unchanged pixels between the two abundance images. Let ${P}_i$ be the ith pixel of the coarse spatial resolution image at t2, and let ${F}_{k\_c}({{P}_i})$ denote the abundance value of class k for this pixel. Accordingly, ${F}_{k\_f}({{P}_i})$ denotes the corresponding abundance value of the fine spatial resolution image at t1. Let ${D}_F({{P}_i})$ be the value of the abundance difference image within the extent of ${P}_i$. It is formulated as follows:
\begin{equation*} {D}_F\left({{P}_i}\right) = \sqrt{\sum\limits_{k = 1}^{q} {\left|{F}_{k\_c}\left({{P}_i}\right) - {F}_{k\_f}\left({{P}_i}\right)\right|}^{2}} \tag{1} \end{equation*}
where q is the number of classes.

Using this measure, the abundance difference image can be classified into three categories [i.e., totally unchanged pixels (TUP), partly changed pixels (PCP), and totally changed pixels (TCP)] using two thresholds ${\mu }_1$ and ${\mu }_2$ (${\mu }_1 < {\mu }_2$). For a particular ${P}_i$, if ${D}_F({{P}_i}) \leq {\mu }_1$, it belongs to TUP; if ${D}_F({{P}_i}) \geq {\mu }_2$, it belongs to TCP; and if ${\mu }_1 < {D}_F({{P}_i}) < {\mu }_2$, it belongs to PCP.
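A minimal sketch of the difference measure in (1) and the three-way labeling, assuming the two abundance images are aligned (h, w, q) arrays and that the thresholds are already known (their automatic estimation is described below); the function names are illustrative.

    import numpy as np

    def abundance_difference(F_c, F_f):
        """Eq. (1): per-pixel root-sum-of-squares abundance difference.
        F_c : (h, w, q) abundance at t2; F_f : (h, w, q) degraded t1 map."""
        return np.sqrt(((F_c - F_f) ** 2).sum(axis=-1))

    def categorize(D_F, mu1, mu2):
        """Label each coarse pixel: 0 = TUP, 1 = PCP, 2 = TCP."""
        labels = np.ones(D_F.shape, dtype=int)   # default: partly changed
        labels[D_F <= mu1] = 0                   # totally unchanged
        labels[D_F >= mu2] = 2                   # totally changed
        return labels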

Finally, the TUP and TCP can be used to generate the improved abundance image as follows. For TUP, the land cover within them is deemed to be unchanged. Hence, the values of the two temporal abundance images should be consistent in theory; in practice, any difference between them is caused by spectral unmixing error. Considering that the abundance image at t1 is accurate because of its fine spatial resolution, the improved abundance image can be generated by replacing the corresponding values in the abundance image at t2 with those in the abundance image at t1. For TCP, the pixels are considered to have changed completely between the two dates. Since many kinds of change can produce this situation, only pixels that are pure at both dates are considered for simplicity in this article. Because of spectral unmixing error, the abundance value of a pure pixel belonging to TCP is usually not equal to 1. Hence, the abundance value at t1 is used to amend the original abundance value at t2.
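The replacement rules can be sketched as follows; treating TCP pixels as pure and snapping the dominant class at t2 to an abundance of 1 is one plausible reading of the text, not necessarily the authors' exact implementation.

    import numpy as np

    def improve_abundance(F_c, F_f, labels):
        """Apply the TUP/TCP correction rules to the t2 abundance cube.
        labels : (h, w) with 0 = TUP, 1 = PCP, 2 = TCP (see categorize)."""
        F_imp = F_c.copy()
        F_imp[labels == 0] = F_f[labels == 0]    # TUP: adopt the t1 values
        tcp = labels == 2                        # TCP: treat as pure pixels
        dominant = F_c[tcp].argmax(axis=-1)      # assumed pure class at t2
        F_imp[tcp] = 0.0
        F_imp[tcp, dominant] = 1.0
        return F_imp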

As previously stated, the two thresholds ${\mu }_1$ and ${\mu }_2$ play a vital part in generating the improved abundance image. Therefore, accurately determining the two thresholds becomes a key problem. In this article, the expectation–maximization (EM)-based thresholding approach [38], chosen for its robustness and practicability, is applied to determine the two thresholds automatically. Specifically, the abundance difference image is modeled as a Gaussian mixture distribution consisting of two components: one component represents the distribution of the changed pixels, and the other represents the distribution of the unchanged pixels. The associated probability density function is described as
\begin{equation*} p\left({{D}_F/{W}_r}\right) = \frac{1}{{\sigma }_r\sqrt{2\pi}}\exp\left[-\frac{{\left({{D}_F - {\mu }_r}\right)}^2}{2\sigma_r^2}\right] \tag{2} \end{equation*}
where $p({{D}_F/{W}_r})$ is the probability density function of pixel class ${W}_r$ $({r \in \{1,2\}})$. Specifically, ${W}_1$ denotes the unchanged pixels, ${W}_2$ denotes the changed pixels, and ${\mu }_r$ and $\sigma_r^2$ are the mean and variance of ${W}_r$. According to the principle of EM, ${\mu }_r$ can be adopted as the needed threshold because it is the mean of the changed/unchanged pixels. Thus, the key problem reduces from resolving two thresholds to calculating ${\mu }_r$. In this article, ${\mu }_r$ is calculated in the following three steps:

  1. Before the iteration process begins, ${\mu }_r$, ${\sigma }_r,$ and the a priori probabilities $P({{W}_r})$ need to be initialized. Here, the k-means clustering algorithm is applied for this purpose. It separates the pixels of ${D}_F$ into two classes, namely, changed and unchanged classes, and the initial values are calculated from these two classes of pixels.

  2. After the initialization, the expectation step is carried out. The a posteriori probability $P({{W}_r/{D}_F})$ is evaluated as
\begin{equation*} P\left({{W}_r/D_F^i}\right) = \frac{P\left({{W}_r}\right)p\left({D_F^i/{W}_r}\right)}{P\left({D_F^i}\right)} \tag{3} \end{equation*}
where $D_F^i$ is the ith pixel of ${D}_F$ and $p({D_F^i/{W}_r})$ is obtained by (2).

  3. The maximization step is then carried out by iteration. Using the aforementioned values, the following iteration equations are utilized to update ${\mu }_r$, ${\sigma }_r,$ and $P({{W}_r})$:
\begin{align*} {P}^{t + 1}\left({{W}_r}\right) &= \frac{\sum\nolimits_{i = 1}^N {P}^t\left({{W}_r/D_F^i}\right)}{N} \tag{4}\\ \mu_r^{t + 1} &= \frac{\sum\nolimits_{i = 1}^N {P}^t\left({{W}_r/D_F^i}\right) D_F^i}{\sum\nolimits_{i = 1}^N {P}^t\left({{W}_r/D_F^i}\right)} \tag{5}\\ {\left({\sigma_r^2}\right)}^{t + 1} &= \frac{\sum\nolimits_{i = 1}^N {P}^t\left({{W}_r/D_F^i}\right){\left({D_F^i - {\mu }_r}\right)}^2}{\sum\nolimits_{i = 1}^N {P}^t\left({{W}_r/D_F^i}\right)}. \tag{6} \end{align*}

The termination condition is set as either a maximum number of iterations or a minimum difference between two consecutive estimates. Steps 2 and 3 are repeated until the condition is satisfied. In this way, the two thresholds can be obtained automatically.
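A compact sketch of this EM procedure on the flattened difference image; for brevity, the initialization below splits at the overall mean rather than running full k-means, and the tolerance and iteration cap are illustrative.

    import numpy as np

    def em_thresholds(D_F, max_iter=100, tol=1e-6):
        """Estimate (mu_1, mu_2) by fitting a two-component Gaussian
        mixture to the abundance difference image, Eqs. (2)-(6)."""
        d = D_F.ravel().astype(float)
        # initialization: split at the overall mean (k-means in the article)
        hi = d > d.mean()
        mu = np.array([d[~hi].mean(), d[hi].mean()])
        var = np.array([d[~hi].var(), d[hi].var()]) + 1e-12
        pw = np.array([(~hi).mean(), hi.mean()])
        for _ in range(max_iter):
            # E-step: posterior P(W_r | D_F^i), Eqs. (2)-(3)
            lik = pw / np.sqrt(2 * np.pi * var) \
                  * np.exp(-(d[:, None] - mu) ** 2 / (2 * var))
            post = lik / lik.sum(axis=1, keepdims=True)
            # M-step: update priors, means, and variances, Eqs. (4)-(6)
            pw = post.mean(axis=0)
            mu_new = (post * d[:, None]).sum(axis=0) / post.sum(axis=0)
            var = (post * (d[:, None] - mu_new) ** 2).sum(axis=0) \
                  / post.sum(axis=0) + 1e-12
            if np.abs(mu_new - mu).max() < tol:
                mu = mu_new
                break
            mu = mu_new
        return np.sort(mu)  # mu_1 < mu_2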

3) Subpixel Mapping and Change Detection

Using the improved abundance image as input, the subpixel mapping method is applied to generate the advanced subpixel map. In this article, a recently developed method, namely, the soft-then-hard subpixel mapping method [39], is used for its effectiveness. As shown in Fig. 5, the main procedure of the soft-then-hard subpixel mapping method consists of the following two steps:

  1. Subpixel sharpening. Subpixel sharpening produces a fine thematic map with soft values from the abundance image with coarse spatial resolution.

  2. Class allocation. The soft value of each subpixel is converted into a unique hard class value, namely, the class label, according to the membership relationship.

Fig. 5. Flowchart of the soft-then-hard subpixel mapping method.

The two steps of the soft-then-hard subpixel mapping method are described in detail as follows.

For subpixel sharpening, the radial basis function (RBF) [40] is applied in this article. Suppose ${p}_{i,j}$ ($j = 1, 2, \ldots, S^2$) is the jth subpixel of coarse pixel ${P}_i$, and the kth soft value is denoted as ${V}_{k\_c}({{p}_{i,j}})$. The purpose of subpixel sharpening is to predict this soft value. For subpixel sharpening based on the RBF, the following function is used:
\begin{equation*} {V}_{k\_c}\left({{p}_{i,j}}\right) = \sum\limits_{g = 1}^G {\lambda }_k\left({{P}_g}\right)\phi\left({{P}_g,{p}_{i,j}}\right) \tag{7} \end{equation*}
where ${P}_g$ is a neighboring pixel around ${P}_i$, ${\lambda }_k({{P}_g})$ is the kth class coefficient for ${P}_g$, and $\phi({{P}_g,{p}_{i,j}})$ is a basis function that reflects the spatial relationship between pixels. Here, the Gaussian form is utilized to calculate the basis function
\begin{equation*} \phi\left({{P}_g,{p}_{i,j}}\right) = {e}^{-{d}^2\left({{P}_g,{p}_{i,j}}\right)/{a}^2} \tag{8} \end{equation*}
where $d({{P}_g,{p}_{i,j}})$ represents the Euclidean distance between ${P}_g$ and ${p}_{i,j}$, and $a$ is a constant. The coefficients ${\lambda }_k({{P}_g})$ can be calculated by
\begin{align*} \left[ {\begin{array}{cccc} {\phi\left({{P}_1,{P}_1}\right)}&{\phi\left({{P}_2,{P}_1}\right)}& \cdots &{\phi\left({{P}_G,{P}_1}\right)}\\ {\phi\left({{P}_1,{P}_2}\right)}&{\phi\left({{P}_2,{P}_2}\right)}& \cdots &{\phi\left({{P}_G,{P}_2}\right)}\\ \vdots & \vdots & \ddots & \vdots \\ {\phi\left({{P}_1,{P}_G}\right)}&{\phi\left({{P}_2,{P}_G}\right)}& \cdots &{\phi\left({{P}_G,{P}_G}\right)} \end{array}} \right] \left[ {\begin{array}{c} {{\lambda }_k\left({{P}_1}\right)}\\ {{\lambda }_k\left({{P}_2}\right)}\\ \vdots \\ {{\lambda }_k\left({{P}_G}\right)} \end{array}} \right] = \left[ {\begin{array}{c} {{V}_{k\_c}\left({{P}_1}\right)}\\ {{V}_{k\_c}\left({{P}_2}\right)}\\ \vdots \\ {{V}_{k\_c}\left({{P}_G}\right)} \end{array}} \right]. \tag{9} \end{align*}

Then, ${V}_{k\_c}({{p}_{i,j}})$ can be predicted by (7)–(9), and the soft class value estimation is completed.
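The sketch below implements (7)–(9) for a single class. For simplicity it treats the whole coarse grid as the neighborhood G, whereas the experiments in Section III use a local window of size 5; the grid geometry is an assumption, and a = 10 follows the parameter setting reported there.

    import numpy as np

    def rbf_sharpen(abundance_k, S, a=10.0):
        """Soft-value prediction for one class via RBF interpolation,
        Eqs. (7)-(9). abundance_k : (h, w) coarse abundance map."""
        h, w = abundance_k.shape
        # coarse-pixel centers in subpixel index coordinates
        cy, cx = np.meshgrid(np.arange(h) * S + (S - 1) / 2,
                             np.arange(w) * S + (S - 1) / 2, indexing="ij")
        centers = np.column_stack([cy.ravel(), cx.ravel()])          # (G, 2)
        # Gaussian basis matrix between coarse centers; solve Eq. (9)
        d2 = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        lam = np.linalg.solve(np.exp(-d2 / a ** 2), abundance_k.ravel())
        # evaluate Eq. (7) at every subpixel center
        sy, sx = np.meshgrid(np.arange(h * S), np.arange(w * S), indexing="ij")
        sub = np.column_stack([sy.ravel(), sx.ravel()])
        d2s = ((sub[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        soft = np.exp(-d2s / a ** 2) @ lam
        return soft.reshape(h * S, w * S)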

For the class allocation, the units-of-class method [41] is adopted to predict the hard class values because of its speed and accuracy. The hard class value represents the real class for each subpixel. A detailed description of this selected method can be found in [41].
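For illustration, the following sketches a units-of-class style allocation inside one coarse pixel: the improved abundance fixes how many of the S² subpixels each class receives, and each class takes its highest-scoring unassigned subpixels. The greedy ordering and rounding here are simplifying assumptions; the tie-breaking details of [41] are omitted.

    import numpy as np

    def allocate_units(soft_block, abundance):
        """Hard class allocation inside one coarse pixel.
        soft_block : (S, S, q) soft values for the pixel's subpixels.
        abundance  : (q,) improved abundance; units per class ~ frac * S^2."""
        S, _, q = soft_block.shape
        units = np.round(abundance * S * S).astype(int)
        units[-1] = S * S - units[:-1].sum()     # make the counts add up
        labels = -np.ones((S, S), int)
        flat = soft_block.reshape(-1, q)
        free = np.ones(S * S, bool)
        for k in np.argsort(-abundance):         # largest class first (greedy)
            best = np.argsort(-flat[:, k])       # subpixels by soft value
            take = best[free[best]][:units[k]]
            labels.ravel()[take] = k
            free[take] = False
        return labels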

After the generation of the advanced subpixel map at t2, the LCCD result can be obtained by comparing it with the fine spatial resolution thematic map at t1.
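This final comparison is a per-subpixel test; a minimal sketch, where the "from–to" coding mirrors the change types (e.g., "C3 to C1") analyzed in Section III:

    import numpy as np

    def change_map(thematic_t1, subpixel_t2):
        """Binary change map: 1 where the class label differs between dates."""
        return (thematic_t1 != subpixel_t2).astype(np.uint8)

    def transition_map(thematic_t1, subpixel_t2, q):
        """'From-to' codes (c1 * q + c2), one code per change type."""
        return thematic_t1.astype(int) * q + subpixel_t2.astype(int)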

SECTION III.

Experiments

In this section, three datasets, including one simulated dataset and two real datasets, are applied to test the performance of the proposed subpixel mapping-based LCCD method. For comparison, four conventional subpixel mapping-based LCCD methods (i.e., bicubic interpolation [42], bilinear interpolation [43], subpixel spatial attraction model (SPSAM) [44], and the original RBF-based method [40]) are also employed. For reliability, the parameters related to the aforementioned LCCD methods are set to be consistent with those in [45]: a = 10 and neighborhood window size = 5.

A. Simulated Dataset

Two Landsat-7 ETM+ remote sensing images are used in this experiment to simulate the bitemporal images for validating the proposed method. Specifically, the fine spatial but coarse temporal resolution image is the Landsat-7 image taken in 2001, and the coarse spatial but fine temporal resolution image is generated from the Landsat-7 image acquired in 2002 via an S × S mean filter. The two Landsat-7 images cover an area of Liaoning Province, China, with a 30 m resolution. The images, each 200 × 200 pixels in size, depict countryside with three types of crops (labeled C1, C2, and C3 for simplicity). To explore the performance of the proposed algorithm under different zoom factors, S is set to 4, 5, 8, 10, and 20. These values are selected to avoid the influence of resampling errors on the LCCD results. The original Landsat-7 images and the generated coarse spatial resolution images are shown in Fig. 6.

Fig. 6. Landsat-7 datasets and the generated coarse images. (a) Landsat image from 2001. (b) Landsat image from 2002. The synthetic coarse spatial resolution images at 2002: (c) S = 4, (d) S = 5, (e) S = 8, (f) S = 10, and (g) S = 20.

The reference change map is created by comparing the two temporal thematic maps, which are generated by manual visual interpretation of the two original Landsat images for accuracy. The bitemporal thematic maps are presented in Fig. 7(a) and (b), and the reference change map is shown in Fig. 7(c).

Fig. 7. Two temporal land cover maps and the real LCCD map. (a) Thematic map for the Landsat image in 2001. (b) Thematic map for the Landsat image in 2002. (c) Real change map.

1) Abundance Image Comparison Between the Original and Proposed Methods

According to the approach described in Section II, the five generated remote sensing images [shown in Fig. 6(c)–(g)] based on different zoom factors are used to produce the corresponding abundance images by the spectral unmixing method, which are labeled as the original abundance images. Then, the improved abundance images are generated by the proposed method. In addition, the real abundance images, produced by degrading the thematic map of 2002, are also listed for comparison. The three kinds of abundance images with five zoom factors are shown in Fig. 8.

Fig. 8. Abundance images based on the original method, the proposed method, and the real abundance images with S = 4, 5, 8, 10, and 20 (shown in lines 1–5).

By comparing the original abundance images and the improved abundance images with the real abundance images, it is found that the improved abundance images are more similar to the real abundance images. This indicates that the improved abundance images are more accurate than the original ones. For instance, as seen in the first line of Fig. 8, the C1 image in the first column includes more linear artifacts than the C1 image in the fourth column when compared with the corresponding C1 image in the seventh column. This suggests that the C1 image obtained by the proposed method is closer to the real C1 image. Thus, the improved abundance image-based method can effectively improve the performance of the spectral unmixing procedure.

Fig. 9. Subpixel mapping results based on the four existing methods and the proposed method with S = 4, 5, 8, 10, and 20 (shown in lines 1–5).

2) Subpixel Mapping Results Comparison Between the Original and Proposed Methods

Based on the generated abundance images, the subpixel mapping results of the four existing methods and the proposed method with five zoom factors are shown in Fig. 9. Specifically, the rows of Fig. 9 represent the subpixel mapping results with S = 4, 5, 8, 10, and 20, and the columns represent the results obtained by the four existing methods and the proposed method. As seen in Fig. 9, the results of the original four approaches contain numerous isolated pixels, whereas the results based on the proposed method are substantially cleaner. Comparing these results with the reference image displayed in Fig. 7(b), the subpixel mapping results achieved by the proposed technique are closer to the reference than those of the original methods. For instance, many isolated pixels in the results of the original four methods are incorrectly identified as the C3 class when compared with the reference image. Another observation is that the difference between the subpixel mapping results and the reference image increases as the zoom factor increases. Overall, the proposed method produces more accurate results than the original methods.

3) LCCD Results Comparison Between the Original and Proposed Methods

To visually evaluate the accuracy of the generated LCCD results, the LCCD results of the four original methods and the proposed method with different zoom factors, shown in Fig. 10, are compared with the reference change map shown in Fig. 7(c). In the LCCD results based on the four original methods, many unchanged pixels are incorrectly identified as changed pixels. Correspondingly, in the LCCD results based on the proposed method, such false identifications are much rarer. For example, for the "C3 to C1" change type shown in Fig. 10, which is labeled in red, there are many incorrectly identified pixels in the results of the original methods, but this phenomenon rarely appears in the results of the proposed method. By visual comparison, the LCCD results based on the proposed method are closer to the reference map than those of the original methods. This indicates that the results of the proposed LCCD method are more accurate than those of the original four methods, which verifies the effectiveness of the proposed method.

Fig. 10. LCCD results based on the four existing methods and the proposed method with S = 4, 5, 8, 10, and 20 (shown in lines 1–5).

For quantitative evaluation of the LCCD results, the overall accuracies (OA), calculated from the full-transition error matrix, of the original methods and the proposed method are shown in Table I. The proposed method achieves better results than the original four methods under all five zoom factors. The proposed method obtains the highest OA of 85.47% when S = 4, and the lowest OA of 70.63% is produced by the SPSAM when S = 20. Specifically, for the original four methods, the OA values are approximately 82.3% when S = 4, whereas for the proposed method, the OA is approximately 85.5%. The LCCD results based on the proposed method are thus significantly improved, with an increase of approximately 3.2% when S = 4. This demonstrates that the proposed method can effectively improve LCCD performance.
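For reference, OA is simply the trace of the transition error matrix over its total; a minimal sketch, assuming the change maps are encoded as integer transition codes (function name illustrative):

    import numpy as np

    def overall_accuracy(pred_codes, ref_codes, n_codes):
        """OA from the full-transition error matrix: trace over total."""
        cm = np.zeros((n_codes, n_codes), dtype=int)
        np.add.at(cm, (ref_codes.ravel(), pred_codes.ravel()), 1)
        return np.trace(cm) / cm.sum()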

TABLE I Comparisons of the Eight LCCD Methods With the Simulated Dataset Based on OA

Another observation is that the OA gain of the proposed method decreases as the zoom factor increases. Fig. 11 shows the OA trends of the five LCCD methods for S = 4, 5, 8, 10, and 20. As shown in Fig. 11, the OA values of all five methods decrease as the zoom factor increases, and so does the gain of the proposed method. For example, the OA gain of the proposed method over RBF is 3.12% for S = 4 but only 0.94% for S = 20. This indicates that LCCD accuracy is difficult to guarantee when the difference in spatial resolution between the two images is large.

Fig. 11. Changing trend of OA based on the five LCCD methods with respect to different zoom factors. (a) Bicubic. (b) Bilinear. (c) SPSAM. (d) RBF. (e) Proposed.

B. Real Datasets

1) Datasets

In this experiment, two real datasets, namely, two pairs of Landsat-8 and MODIS images, were applied to validate the proposed scheme. Specifically, the multispectral bands of Landsat-8 OLI images and MODIS images derived from the MOD09A1 product were selected; the two types of images have spatial resolutions of 30 and 500 m, respectively. Accordingly, the Landsat-8 image is utilized as the fine spatial but coarse temporal resolution image, while the MODIS image serves as the coarse spatial but fine temporal resolution image. In particular, to obtain an integer zoom factor, the nearest neighbor method is applied to resample the two original MODIS images to 480 m (i.e., S = 16). In addition, two Landsat images were used as the corresponding fine reference images for accuracy evaluation. It is worth noting that the Landsat and MODIS image acquisition times should be the same or similar.

The research site was in Hefei, Anhui Province, China. Specifically, for the two datasets, the Landsat images were taken in 2014, and the MODIS images, in 2018. For the first region, the Landsat image is 784 × 400 pixels in size, whereas the MODIS image is 49 × 25 pixels. For the second region, the corresponding sizes are 784 × 784 pixels and 49 × 49 pixels, respectively. Fig. 12 shows the two datasets of the two regions and the corresponding fine reference images.

Fig. 12. Landsat-MODIS datasets and the reference images. (a) Landsat-8 image from 2014. (b) MODIS image from 2018. (c) Corresponding fine reference image from 2018 for the first region. (d) Landsat-8 image from 2014. (e) MODIS image from 2018. (f) Corresponding fine reference image from 2018 for the second region.

The Landsat images were classified by manual visual interpretation to generate the thematic maps with two classes (i.e., water and nonwater), as shown in Fig. 13(a), (b), (d), and (e). The MODIS images were unmixed by the spectral unmixing method to produce the abundance images at t2. It is worth mentioning that coregistration and radiometric correction were implemented before the aforementioned processing. Then, the original four methods and the proposed method were applied to the generated abundance images to recreate the fine land cover maps. For the original methods, the generated thematic map at t1 and the recreated fine land cover maps at t2 were compared to produce the LCCD results. For the proposed method, further processing was conducted, in which the generated thematic map was degraded to produce the abundance image at t1. Then, the two abundance images were used to generate the improved abundance image at t2, which served as the input to produce the advanced subpixel map and the corresponding LCCD result.

Fig. 13. Bitemporal land cover maps and the reference LCCD maps for the two regions. (a) Thematic map from 2014. (b) Thematic map from 2018. (c) Reference change map for the first region. (d) Thematic map from 2014. (e) Thematic map from 2018. (f) Reference change map for the second region.

2) Results

Fig. 14 gives the abundance images at t2 of the two regions, including the original abundance images based on the spectral unmixing method and the improved abundance images generated by the proposed method. In addition, as a contrast, the real abundance images generated from the land cover maps at t2 are also listed in Fig. 14 and can be regarded as the reference abundance images. As seen in the first two columns of Fig. 14, the boundaries between water and nonwater generated by the original method are not clear, whereas the boundaries generated by the proposed method are easily identified. Compared with the original abundance images, the improved abundance images are much closer to the real images. This demonstrates that the proposed method is effective in decreasing the error of spectral unmixing and improving the accuracy of the generated abundance image.

Fig. 14. Abundance images based on the original method, the proposed method, and the real abundance images for the first region (line 1) and the second region (line 2).

Fig. 15 shows the subpixel mapping results of the two regions based on the original four methods and the proposed method. As can be observed, the four subpixel mapping results based on the existing methods include more linear artifacts than the advanced results, and many isolated pixels also exist in the former results. Correspondingly, the boundaries of the advanced results are smoother than those of the original subpixel mapping results. In particular, when compared with the reference maps illustrated in Fig. 13(b) and (e), the advanced results are found to be very close to the reference. This indicates that the proposed method obtains more accurate results than the original methods.

Fig. 15. Subpixel mapping results based on the existing methods and the proposed method for the first region (line 1) and the second region (line 2).

Fig. 16 gives the LCCD results of the two regions based on the original four methods and the proposed method. As seen from the LCCD results of the original methods shown in the first four columns of Fig. 16, the four LCCD results contain many isolated pixels and linear artifacts. In contrast, the LCCD results of the advanced method, shown in the last column of Fig. 16, contain relatively little discrete noise. Clearly, the LCCD results generated by the existing methods have more incorrectly identified pixels, including both missed changed pixels and falsely detected unchanged pixels. When comparing the change maps with the reference maps shown in Fig. 13(c) and (f), the LCCD results produced by the proposed method are much closer to the reference change maps than those generated by the existing methods. Hence, the generated LCCD maps confirm the benefit of the proposed method.

Fig. 16. LCCD results based on the existing methods and the proposed method for the first region (line 1) and the second region (line 2).

Table II shows the quantitative change detection evaluations, in terms of OA, of the two regions based on the existing four methods and the proposed method. As shown in the table, the proposed method outperforms the existing four methods. Specifically, for the first region, the OA of the proposed method improves by approximately 1.26% over the existing four methods; for the second region, the increase is approximately 0.79%. These quantitative evaluations indicate that the proposed method is effective in improving the performance of LCCD.

TABLE II Comparisons of the Three LCCD Methods With the Real Datasets Based on OA

SECTION IV.

Discussion

The above three experiments, including one experiment based on a simulated dataset and two based on real datasets, verify the effectiveness of the method proposed in this article. In the experiments, the fine spatial but coarse temporal resolution image and the coarse spatial but fine temporal resolution image were used as the input images. Through the subpixel mapping-based change detection method, the characteristics and advantages of the two input images were fully utilized to obtain land cover change maps with both fine spatial and temporal resolutions, thus providing technical support for emergency disaster response and related applications.

In subpixel mapping-based LCCD methods, the accuracy of spectral unmixing has a great impact on the change detection results. The improved change detection algorithm proposed in this article can effectively reduce the influence of spectral unmixing error and improve the reliability of the change detection results. In the first experiment, using the simulated dataset, the accuracies of the change detection results obtained by the proposed algorithm are significantly higher than those of the four existing algorithms for all zoom factors. However, this advantage decreases as the zoom factor increases, which shows that when the spatial resolution difference between the two input images is too large, the accuracy of the change detection results is difficult to ensure. The two real dataset experiments, in which the proposed algorithm is compared with the four existing algorithms, further demonstrate that the proposed method can reduce the influence of spectral unmixing error and thus obtain more accurate change detection results.

Although the proposed LCCD method is a significant improvement over the existing algorithms, there is still considerable room for improvement in change detection with fine spatial–temporal resolutions. In the first experiment, the accuracy of the change detection results obtained by the four existing algorithms is between 70% and 82%, while that of the proposed method is improved to 71%–85%. In Fig. 8, the comparison of the three kinds of abundance images (i.e., the original, the improved, and the real images) also shows that there are still some differences between the improved abundance images obtained by the proposed algorithm and the real ones. In the two real dataset experiments, the change detection accuracies of the proposed algorithm are about 80% and 87%, respectively, so there is also room for further improvement. By further improving the spectral unmixing and subpixel mapping accuracy, more accurate change detection results can be obtained.

SECTION V.

Conclusion

In this article, a novel subpixel change detection scheme based on improved abundance values is proposed and implemented for detecting changes at fine spatial and temporal resolutions. The proposed method borrows the spatial distribution of the fine spatial resolution image to obtain improved abundance values and thus improve the accuracy of LCCD. The proposed method combines the advantages of images with different spatial and temporal resolutions and provides technical support for LCCD at high spatial and temporal resolutions. Three datasets, including one simulated dataset based on Landsat-7 images and two real datasets based on Landsat-MODIS images, were adopted to assess the performance of the proposed scheme. The three experimental results show that the proposed technique reduces spectral unmixing error and improves the accuracy of LCCD. Compared with four existing subpixel change detection methods, the proposed scheme obtains the most accurate LCCD results, and the increase in accuracy becomes more prominent when the spatial resolutions of the bitemporal images are close. Although the proposed method is effective for fine spatial and temporal resolution LCCD, there is still much room for improvement in subpixel mapping accuracy. In future work, more advanced spectral unmixing methods and new strategies for abundance image generation will be developed.
