Introduction
As the global population grows, the demand for vegetables and food grows accordingly [1]. Paradoxically, the cultivated land on which vegetable and food production depends has been increasingly occupied by ever more dynamic settlement expansion during the past few decades [2]–[4]. This trend is particularly pronounced in developing countries [5]–[7]. The growing demand for food, however, expedites scientific and technological progress and innovation. One consequence was the invention of the plastic greenhouse, which has transformed traditional farming into industrial farming all over the world [8], [9]. It has been estimated that more than 1.1 million acres of plastic greenhouses for commercial use existed in 2016 across 130 countries [10]. These plastic greenhouses are primarily distributed in Europe, Africa, and China [11]; more than 80% of them are located in China [12].
The semifinished material used for plastic greenhouse production has negative effects on the environment and human health [13]: it contains more than 60% phthalates by weight [14], which contribute to secondary salinization of the soil [15]. Moreover, excessive fertilization can cause soil acidification and nutrient imbalances [16]. The plastic greenhouse, however, increases vegetable productivity. The necessity of this agricultural technique therefore demands reasonable planning and strict monitoring, for which knowledge of the locations and quantities of plastic greenhouses is a prerequisite. Remote sensing is one data source that allows plastic greenhouses to be mapped in a consistent manner, independent of other data sources.
A plastic greenhouse is usually supported by a skeleton of steel, bamboo, or other materials, and its roof is covered by a transparent plastic membrane [17]. According to the ceiling shape, plastic greenhouses can be classified into an arch type and a roof type; according to their connection mode, they can be classified into a single type and a multispan type (see Fig. 1). Plastic greenhouses therefore usually display regular geometric patterns in space [18], [19]. In addition, the plastic ceiling of greenhouses increases the reflectivity of visible light [20], [21]. These evident spectral and geometric characteristics of plastic greenhouses are the linchpin of greenhouse mapping based on multiresolution remote sensing images.
The plastic greenhouse is a kind of “high-precision land use type.” Naturally, very high spatial resolution (VHR) satellite images are more suitable for plastic greenhouse mapping in terms of monitoring accuracy [22], and a considerable number of existing studies are based on these data. For instance, Agüera et al. [23] proposed a pixel-based mapping method for detecting new plastic greenhouses based on QuickBird (0.61 m) imagery; subsequently, they [24] improved the pixel-based method by integrating texture features from VHR images, reaching 96.89% accuracy; in addition, Koc-San [25] compared the performance of different supervised classification techniques for greenhouse detection on WorldView-2 (0.5 m) imagery. However, VHR images contain few multispectral bands, usually four, namely, blue, green, red, and near-infrared [26]. Because of this limited spectral resolution, different materials with the same spectrum are challenging to separate when classifying land use/cover with VHR images [27]. Therefore, object-based mapping approaches have gradually been considered and developed for monitoring plastic greenhouses. For instance, Aguilar et al. [28] proposed an object-based mapping method using GeoEye-1 (0.5 m) and WorldView-2 imagery, reaching 90% accuracy; later, they additionally proposed an optimized multiresolution segmentation (MRS) based on WorldView-3 (0.31 m) imagery for improving the accuracy [19]. The first and pivotal procedure in object-based mapping is the segmentation of the images [6], [29]. However, setting the segmentation parameters through more or less systematic trial-and-error is challenging, and accuracies measured by visual interpretation are unstable [30], [31]. To improve the segmentation accuracy, a few studies have used manual digitization for image segmentation [28], [32]; the workload of this process, however, is extensive.
The high costs, limited spatial extents, and short availability of historical data of VHR images have so far prevented studies from mapping plastic greenhouses on a large scale over long time sequences [33], [34]. Consequently, medium resolution (30 m) data that are free of charge and available for long historical periods, such as the Landsat series, are preferred [35], [36]. Spectral unmixing combined with textural features has been shown to be a prerequisite for high mapping accuracies of plastic greenhouses on medium resolution images, reaching 91.2% [21]. However, for assessing the performance of classification products based on medium resolution images [37], VHR images are always needed to verify and correct the mapping accuracies [38], [39]. Although medium resolution images seem more economical, the reported accuracies are significantly lower. At these resolutions, for example, pixels with small plastic greenhouse fractions are undetectable; in other words, medium resolution images are mainly suitable for plastic greenhouse mapping in areas of large, concentrated greenhouse structures. In addition, the mapping accuracy is strongly affected by phenological changes and by the complexity of the surrounding surfaces.
In the present study, we propose a new method for mapping plastic greenhouses using VHR satellite images; in our case, we use data from the GaoFen-2 (GF-2) satellite. We propose a three-step hierarchical procedure: first, we develop a new metric, the “Double Coefficient Vegetation Sieving Index” (DCVSI), to explicitly distinguish plastic greenhouses and vegetation from other land surfaces by enhancing the vegetation information; second, we develop a new metric, the “High-Density Vegetation Inhibition Index” (HDVII), to explicitly eliminate high-density vegetation by inhibiting its spectral signature; and third, we adopt the commonly used Normalized Difference Vegetation Index (NDVI) to finally separate the plastic greenhouses.
Study Area and Dataset
A. Study Area
The study area for the development of the proposed method is located very close to the urban area of Yucheng, Dezhou, Shandong Province, China [red square in Fig. 2(a)] (116°39′55″E, 36°55′53″N). It is depicted by a standard false color composite of a GF-2 satellite image and covers an area of 100 km² [see Fig. 2(b)]. Some representative areas of plastic greenhouses are selected and exhibited [see Fig. 2(c)]. The study area has a warm temperate continental monsoon climate with long and dry winters. The exhibited plastic greenhouses belong to the arch and single types, which are the main types in China for planting various vegetables and typically feature a size of 400–1200 m² [40]. In other regions, such as Spain, even larger footprints (around 10 000 m²) are characteristic [37]. Tomato, pimento, tabasco, cucumber, muskmelon, and watermelon are the most representative crops under the plastic greenhouses in this area. For testing the stability and universality of our mapping method across space, we transfer it to a large area with different conditions. The area used for the transferability test is located further to the north with a coverage of 2025 km² (a whole GF-2 scene) [yellow square in Fig. 2(a)]. In contrast to the original study area, the plastic greenhouses in this area exhibit a dispersed distribution and a different composition of the surrounding land cover.
(a) Location of study area and testing area in Dezhou. (b) Standard false color composite of GF-2 satellite image of study area. (c) Representation of plastic greenhouses in the satellite data.
B. GF-2 Data and Preprocessing
GF-2 is the first civil, optical, VHR Chinese satellite. It was launched in August 2014 and carries two panchromatic/multispectral charge-coupled device (CCD) camera sensors. The sensor parameters and spectral band information are listed in Table I [41].
Remote sensing images from mid or late April are considered suitable for plastic greenhouse mapping in China [40]. In this period, the crops under the plastic greenhouses are mainly fruit vegetables (e.g., tomato, cucumber, and luffa) with large plants and leaf areas showing typical vegetation reflectance; the plastic membrane on the roof of the greenhouses increases the spectral reflectance significantly; as a consequence, the spectral signature of plastic greenhouses differs from that of other land surface types. However, the temporal availability of satellite data is limited. Thus, in this study, we rely on a GF-2 image from June 10, 2015 for mapping plastic greenhouses in our study area. This date marks the beginning of the winter wheat harvest in North China, with parts of the winter wheat already harvested at that time. The result is a very high complexity of land surface types around the plastic greenhouses, comprising vegetation, soil, water, and man-made surfaces. This complex spatial and spectral pattern is one major challenge for mapping plastic greenhouses. The crops under the plastic greenhouses are still mainly fruit vegetables in June.
For testing the transferability of the developed method, we rely on another GF-2 image from a different season (January 13, 2016) with a different phenological stage and different surrounding patterns. At that time, the crops under the plastic greenhouses are mainly leaf vegetables (e.g., spinach, coriander, and garland chrysanthemum) with small plants and leaf areas. The typical reflective properties of vegetation are much less pronounced than during the fruit-vegetable period in June.
The following three preprocessing steps are carried out for the GF-2 images: first, we convert the GF-2 image to top-of-atmosphere (TOA) radiance using the radiometric calibration coefficients; second, we process the calibrated radiance with the FLAASH module to obtain atmospherically corrected surface reflectance; and third, we perform geometric correction to eliminate the spatial mismatch between the multispectral bands and the panchromatic band. We use the FLAASH module for atmospheric correction in terrestrial applications [42], as the corrected reflectance is generally within ±15% of ground-based measurements [43], [44]. For high-precision classification, the geometric correction error for the same feature point is required not to exceed two pixels [45].
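As a minimal illustration of the first preprocessing step, the following sketch converts digital numbers (DNs) to TOA radiance with a linear gain/offset calibration; the gain and offset values below are placeholders rather than the official GF-2 calibration coefficients, and the band order is an assumption.

```python
import numpy as np

def dn_to_toa_radiance(dn, gain, offset):
    """Linear radiometric calibration, L = gain * DN + offset (per band)."""
    return gain * dn.astype(np.float64) + offset

# Hypothetical example: a 4-band (blue, green, red, NIR) DN cube of shape (4, rows, cols).
dn_cube = np.random.randint(0, 1024, size=(4, 100, 100))
gains = np.array([0.16, 0.17, 0.18, 0.19]).reshape(4, 1, 1)  # placeholder gains, not official GF-2 coefficients
offsets = np.zeros((4, 1, 1))                                # placeholder offsets
radiance = dn_to_toa_radiance(dn_cube, gains, offsets)
```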
Methods
A. Spectral Characteristic Analysis of Different Land Covers
Because of the spectral limitations of GF-2 images, the well-documented problems of “same spectrum from different materials” and “same material with different spectra” arise [46]. We select 25 representative samples to investigate the spectral characteristics of different land surface types; the samples are evenly distributed over the land surface types (i.e., vegetation, soil, water, man-made surfaces, and plastic greenhouses) and displayed on the composite image of the GF-2 multispectral bands (see Fig. 3). Afterward, ten pixels per sample (250 pixels in total) are selected manually from the corresponding enclosed irregular region for the quantitative analysis of spectral reflectance. By sampling the pixels manually, we aim to ensure that all 250 pixels are pure. The spectral reflectance curve for each sample is drawn using the mean reflectance of the ten pixels as support points (see Fig. 4). When we compare the average spectral reflectance curves of these land cover types, we find the following:
differences of spectral reflectance within the same land cover type are evident;
differences of spectral reflectance across different land cover types are partly indistinct;
the spectral reflectance interval between the land cover types is largest in the near-infrared band, followed by the red, green, and blue bands.
(a) Vegetation with different moisture contents and densities. (b) Soil with different moisture contents and reflectance. (c) Man-made surface with different materials. (d) Water with different impurity contents. (e) Plastic greenhouse with different homogeneities and vegetation densities.
Spectral characteristics of vegetation, soil, man-made surface, water, and plastic greenhouses.
To understand the spectral characteristics quantitatively, a statistical summary of the reflectance of all land cover types in the four multispectral bands is derived from all manually sampled pixels (see Table II). According to Fig. 4(f), the spectral reflectance curves of plastic greenhouses are nearly parallel to those of vegetation, but the reflectance of plastic greenhouses is measured higher. This indicates that the plastic membrane on the roof of the greenhouses enhances the surface reflectance uniformly across the four multispectral bands. The gap in spectral reflectance between vegetation and plastic greenhouses appears significant; thus, discriminating plastic greenhouses from vegetation seems viable. However, when the density of vegetation decreases, the spectral reflectance in the blue, green, and red bands increases [see Fig. 4(a)]. Correspondingly, the spectral reflectance intervals of vegetation in the blue, green, and red bands also span a wide range, measured only slightly lower than those of man-made surfaces, and the same holds for the standard deviation. In addition, changes in the moisture content of vegetation strongly affect the spectral reflectance in the near-infrared band. The spectral reflectance curves of subclasses of vegetation and plastic greenhouses therefore overlap. On top of this, the “same spectrum from different materials” problem complicates the situation further. For instance, the spectral characteristics of colored steel sheeting (Caigang tile, a kind of man-made surface) [sample c4 in Fig. 3(c) and Fig. 4(c)] in the red and near-infrared bands are similar to those of vegetation and plastic greenhouses; however, c4 is not vegetation but a man-made surface.
With so many interlocking spectral characteristics, discriminating plastic greenhouses from the “background” (understood as the complex pattern of land cover types around plastic greenhouses) in a single, straightforward classification step is challenging. The basic methodological idea of our study is to eliminate the background step by step until only the plastic greenhouses are left. The whole procedure relies on enhancing the spectral characteristics of the four multispectral bands of the GF-2 images. The aim is to narrow the differences in spectral reflectance within the same land cover type and to enlarge the differences across different land cover types. In consequence, we develop a consecutive three-step procedure to overcome these spectral challenges.
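To make this hierarchy concrete, the following minimal sketch expresses the successive elimination as three boolean masking steps, assuming the index images defined in the next subsections have already been computed; the threshold variables are placeholders whose derivation is described later in this section.

```python
import numpy as np

def map_greenhouses(dcvsi, hdvii, ndvi, t_dcvsi, t_hdvii_low, t_hdvii_high, t_ndvi):
    """Hierarchical elimination of the background (a sketch with placeholder thresholds).

    dcvsi, hdvii, ndvi : 2-D index images computed from the four GF-2 bands.
    """
    # Step 1: keep vegetation and plastic greenhouses, drop soil, water, man-made surfaces.
    kept = dcvsi > t_dcvsi
    # Step 2: suppress high-density open vegetation; greenhouses fall between the two thresholds.
    kept &= (hdvii > t_hdvii_low) & (hdvii < t_hdvii_high)
    # Step 3: keep the remaining pixels whose NDVI indicates vegetation under a membrane.
    kept &= ndvi > t_ndvi
    return kept  # boolean mask of mapped plastic greenhouse pixels
```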
B. Methodology Development and Theoretical Basis
1) Preclassification Using the DCVSI
The first step in our three-step procedure aims at discriminating vegetation and plastic greenhouses from other land cover types. To do so, we develop the DCVSI. The equation is as follows:
\begin{align*}
{\rm{DCVSI}} &= \frac{{\left({{R_g} - {R_b}} \right)}}{{\left| {{R_g} - {R_b}} \right|}} \times \frac{{\left({{R_g} - {R_r}} \right)}}{{\left| {{R_g} - {R_r}} \right|}} \\
&\quad \times \frac{{{R_{{\rm{nir}}}}\left| {{R_{{\rm{nir}}}} - {R_r}} \right|}}{{1 - {R_b}}} \times {10^4}\tag{1}
\end{align*}
The DCVSI is designed for the following three reasons.
For soil and man-made surfaces, the reflectance characteristics generally reveal that $R_b < R_g < R_r$; for water, they generally reveal that $R_b > R_g > R_r$; for vegetation and plastic greenhouses, there exists a moderate reflection peak in the green band, which indicates that $R_g > R_b$ and $R_g > R_r$.
There exists a reflection peak of vegetation and plastic greenhouses in the near-infrared band and a moderate reflection valley in the red band; it indicates that $R_{\rm nir}$ and $(R_{\rm nir} - R_r)$ of vegetation and plastic greenhouses are significantly higher than those of other land cover types.
The standard deviations of $R_b$ between different land cover types and within a single land cover type are both relatively low compared with the other spectral bands; it indicates that the interval of $R_b$ is narrow and convergent.
According to the first reason, the two directivity coefficients $(R_g - R_b)/|R_g - R_b|$ and $(R_g - R_r)/|R_g - R_r|$ in (1) are designed: their product is +1 for vegetation and plastic greenhouses and −1 for soil, man-made surfaces, and water, so the sign of the DCVSI already separates the two groups. According to the second reason, the magnitude term $R_{\rm nir}|R_{\rm nir} - R_r|$ is markedly larger for vegetation and plastic greenhouses than for other land cover types, which widens the gap further; according to the third reason, $R_b$ is narrow and convergent, so the denominator $1 - R_b$ introduces little additional variability.
(a) Gray-level histogram image of DCVSI. (b) Binary image of the result of DCVSI. (c) False color image of the result of DCVSI.
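A minimal numpy sketch of (1) is given below; the band arrays are assumed to hold surface reflectance, and the small epsilon added to the absolute values (to avoid division by zero where two bands are exactly equal) is an implementation detail not specified above.

```python
import numpy as np

def dcvsi(r_b, r_g, r_r, r_nir, eps=1e-12):
    """Double Coefficient Vegetation Sieving Index, (1).

    The two sign terms are +1 for vegetation/greenhouses (green peak) and
    -1 for soil, man-made surfaces, and water; the last factor amplifies
    pixels with a strong near-infrared response.
    """
    sign_gb = (r_g - r_b) / (np.abs(r_g - r_b) + eps)
    sign_gr = (r_g - r_r) / (np.abs(r_g - r_r) + eps)
    magnitude = r_nir * np.abs(r_nir - r_r) / (1.0 - r_b)
    return sign_gb * sign_gr * magnitude * 1e4
```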
2) Classification Refinement by the HDVII
In a second step, we aim to restrain the spectral information of vegetation in order to enlarge the difference in spectral characteristics between plastic greenhouses and open vegetation. To do so, we develop the HDVII. The equation is as follows:
\begin{equation*}
{\rm{HDVII}} = \frac{{{R_{{\rm{nir}}}} \times {R_g}}}{{{R_{{\rm{nir}}}} + {R_g}}} \times {10^4}\tag{2}
\end{equation*}
After the first classification step using the DCVSI, the pixels covered by vegetation and plastic greenhouses are retained, and the pixels covered by other land cover types are eliminated. The essential difference in spectral characteristics between open vegetation and plastic greenhouses is that the transparent plastic membrane on the roof of the greenhouses increases the reflectivity in all four multispectral bands. If we can restrain the spectral information of vegetation, the difference in spectral characteristics between plastic greenhouses and open vegetation is enlarged.
The green and NIR bands are selected for restraining the spectral information of vegetation for the following reasons: on the one side, vegetation has two reflection peaks, in the green and in the NIR spectrum; the former is gentle and the latter is pronounced, so there is an obvious difference between the two peaks; on the other side, the product-over-sum form of (2) is dominated by the smaller of the two reflectances, so the low green reflectance of dense vegetation keeps the HDVII low even where the NIR reflectance is high.
To verify this behavior, we calculate the HDVII for the sampled vegetation and plastic greenhouse pixels.
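A corresponding numpy sketch of (2), under the same assumptions as the DCVSI snippet:

```python
import numpy as np

def hdvii(r_g, r_nir, eps=1e-12):
    """High-Density Vegetation Inhibition Index, (2): a product-over-sum of
    the green and NIR reflectances, scaled by 10^4."""
    return r_nir * r_g / (r_nir + r_g + eps) * 1e4
```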
3) Final Classification Using the NDVI
In a third step, we apply the widely used NDVI to finally distinguish plastic greenhouses from low-density vegetation. The equation is as follows:
\begin{equation*}
{\rm{NDVI}} = \frac{{{R_{{\rm{nir}}}} - {R_r}}}{{{R_{{\rm{nir}}}} + {R_r}}}\tag{3}
\end{equation*}
After the refinement of the classification by the HDVII, the pixels covered by low-density vegetation and plastic greenhouses remain. The acquisition time of the GF-2 image for our first test is June, i.e., the vegetation under the greenhouses is flourishing. Although the plastic membrane on the roof of the greenhouses enhances the surface reflectance in the four multispectral bands, the spectral characteristics of the vegetation are preserved. Therefore, we find the NDVI capable of distinguishing greenhouses from low-density vegetation.
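For completeness, the third index in the same style; the epsilon is again only a numerical safeguard:

```python
import numpy as np

def ndvi(r_r, r_nir, eps=1e-12):
    """Normalized Difference Vegetation Index, (3)."""
    return (r_nir - r_r) / (r_nir + r_r + eps)
```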
C. Thresholds Setting for the Final Classification
The three-step procedure is designed to reduce the spectral reflectance differences within the same land cover type and to enlarge the spectral reflectance differences between different land cover types; it is specifically designed to extract plastic greenhouses from complex surrounding patterns of various land cover types. After each step, a new gray-level image is generated in which the nongreenhouse background is reduced and the remaining pixels have a high probability of representing plastic greenhouses. In each case, the aim is to define appropriate thresholds on the histogram. Thresholds can be well determined when the histogram of a gray-level image containing two categories of objects is bimodal or nearly bimodal with a deep valley between the two peaks [47]. The gray levels of the two objects are concentrated around the two peaks, and the deep valley is the best threshold [48]. Therefore, searching for a valley between two peaks is our first choice for setting the threshold; of the two peaks, one belongs to the target object and the other to the background. If there is no clear deep valley, we adopt a fast and effective threshold setting method, the Otsu algorithm. This method was presented by Nobuyuki Otsu in 1979 and is now widely adopted as a classic method for threshold setting and image segmentation [49]. The Otsu algorithm classifies the image into target and background according to the rule of minimal error probability [50]: the threshold is determined by calculating the between-class variance of target and background, and the value that maximizes this variance is the optimal threshold [51]. Based on this procedure, we have an unambiguous and transparent procedure for determining the thresholds independent of particular locations or spectral situations.
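The following compact histogram-based Otsu sketch shows one way to obtain such a threshold from an index image; it is a sketch rather than the implementation used in this study, and the bin count is an assumption.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Return the threshold that maximizes the between-class variance."""
    hist, edges = np.histogram(values[np.isfinite(values)], bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = hist / hist.sum()              # probability per gray-level bin
    w0 = np.cumsum(p)                  # cumulative weight of the first class
    w1 = 1.0 - w0                      # weight of the second class
    m = np.cumsum(p * centers)         # cumulative mean
    m_total = m[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (m_total * w0 - m) ** 2 / (w0 * w1)
    between[~np.isfinite(between)] = 0.0
    return centers[np.argmax(between)]
```

Applied to the DCVSI, HDVII, or NDVI gray-level image, the returned value plays the role of the Otsu-derived thresholds described above.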
D. Greenhouse Extraction Accuracy Assessment
In this study, we focus on mapping plastic greenhouses from VHR optical satellite data by means of the introduced three-step procedure, excluding nongreenhouse pixels after each step of processing. The excluded background contains all land cover types except plastic greenhouses. Each pixel is labeled as “plastic greenhouse” or “other” both by our automated three-step procedure and by a manual extraction based on visual inspection. For the accuracy assessment, we adopt the indices defined by McKeown [52] and recommended by other studies [19], [53]. We use the following four measures:
true positive (TP), i.e., pixels are labeled as plastic greenhouses by both automated and manual methods;
true negative (TN), i.e., pixels are labeled as “other” by both methods;
false positive (FP), i.e., pixels are labeled as plastic greenhouse by our automated three-step procedure and labeled as “other” by the manual method;
false negative (FN), i.e., pixels are labeled as plastic greenhouses by manual image interpretation and labeled as “other” by the developed automated classification.
We calculate the following four metrics:
branching factor (BF) for measuring the incorrect rate of greenhouse labeling, with BF = FP/TP;
miss factor (MF) for measuring the omission rate of greenhouse labeling, with MF = FN/TP;
greenhouse detection percentage (DP) for measuring the correct rate of greenhouse labeling by the automated approach, with DP = 100TP/(TP+FP);
quality percentage (QP) for measuring the likelihood of greenhouses being correctly labeled, with QP = 100TP/(TP+FP+FN).
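These four metrics can be computed directly from the pixel counts; a minimal sketch with illustrative names:

```python
def greenhouse_metrics(tp, fp, fn):
    """Branching factor, miss factor, detection percentage, and quality
    percentage as defined above (TN is not used)."""
    bf = fp / tp                       # incorrect labeling rate
    mf = fn / tp                       # omission rate
    dp = 100.0 * tp / (tp + fp)        # correct labeling rate of the automated approach
    qp = 100.0 * tp / (tp + fp + fn)   # likelihood of correct labeling
    return bf, mf, dp, qp
```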
The plastic greenhouse pixels extracted by the developed automated method are obtained after the introduced three-step procedure. Of these, 500 pixels are selected in a spatially even distribution for the accuracy assessment. The panchromatic image with 0.81-m geometric resolution is used for the manual interpretation. Google Earth VHR data and field investigations are additionally used to verify and ensure the accuracy of the manual interpretation. Owing to the different geometric resolutions, one pixel of the multispectral image covers the area of 16 pixels (a 4 × 4 matrix) of the panchromatic image. Therefore, we select 500 spatially evenly distributed pixel sets (4 × 4 matrices) that are labeled as plastic greenhouses in the manual interpretation of the panchromatic image. The 500 selected pixels of the multispectral image are cross-checked against the panchromatic image, and the 500 selected pixel sets of the panchromatic image are cross-checked against the multispectral image. Ultimately, all four metrics (BF, MF, DP, and QP) are calculated for the selected pixels or pixel sets.
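As a small indexing sketch of this correspondence, assuming the panchromatic and multispectral grids are co-registered with an exact 4:1 resolution ratio:

```python
def pan_block_for_ms_pixel(row, col, ratio=4):
    """Return the slice covering the 4 x 4 panchromatic block that corresponds
    to multispectral pixel (row, col) under a co-registered 4:1 grid."""
    return (slice(row * ratio, (row + 1) * ratio),
            slice(col * ratio, (col + 1) * ratio))

# Example: pan_image[pan_block_for_ms_pixel(10, 25)] selects the matching 4 x 4 patch.
```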
Results
A. Result of the Preclassification Using DCVSI
After the preclassification using the DCVSI, we obtain a gray-level image. The multimodal histogram of this gray-level image is illustrated in Fig. 5(a). The horizontal axis shows the DCVSI values, ranging from −2276 to 1778; the vertical axis shows the pixel counts, ranging from 1 to 2.67 × 10⁵. There are three evident peaks and two deep valleys in this histogram. Following the threshold setting method for bimodal histograms, we take the values of the two valleys as the best thresholds for the current distribution. The Otsu algorithm is also adopted to obtain the exact values of the two valleys, as this algorithm was designed for threshold setting between two objects (target and background). Besides the two valley thresholds, we define one additional value near the first valley. The three defined thresholds (in our specific case, −797, −18, and 188) divide the histogram into four intervals.
We extract the pixels of each interval and interpret them thematically. We find that the pixels in interval I are entirely caused by mirror reflection on plastic greenhouses; the quantity of such pixels is small. The pixels in interval II are covered by soil and man-made surfaces; they form one category because of the high similarity of their reflectance characteristics. The pixels in interval III are covered by water. Our target lies in interval IV, as it contains all vegetation as well as the plastic greenhouses. Therefore, we consider that the pixels in interval IV contain our target and the pixels in intervals I–III contain background. Subsequently, we binarize the gray-level image [see Fig. 5(b)] and continue with the pixels in interval IV [see Fig. 5(c)].
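As an illustration, the interval assignment with the thresholds reported above can be written directly on the DCVSI image; the use of np.digitize is our own shorthand, not necessarily the original implementation:

```python
import numpy as np

# Thresholds from the study-area histogram (Fig. 5): -797, -18, and 188.
thresholds = np.array([-797.0, -18.0, 188.0])

def dcvsi_intervals(dcvsi_img):
    """0 = interval I (mirror reflection), 1 = II (soil/man-made),
    2 = III (water), 3 = IV (vegetation and plastic greenhouses)."""
    return np.digitize(dcvsi_img, thresholds)

def step1_mask(dcvsi_img):
    """Boolean mask of interval IV, the pixels carried into the HDVII step."""
    return dcvsi_intervals(dcvsi_img) == 3
```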
B. Result of the Classification Refinement Using HDVII
In a second step, we refine the classification based on the HDVII. It is carried out on the image extracted by the DCVSI, and a gray-level image of the HDVII is generated [see Fig. 6(a)]. The horizontal axis shows the HDVII values, ranging from 722 to 1643; the vertical axis shows the pixel counts, ranging from 1 to 7.56 × 10⁴. There are two peaks in the histogram: the first one is steep and evident and the second one is small and low; the size of a peak is determined by the quantity of the corresponding objects. Although the valley between the two peaks is not very deep, it is clear enough, and we again take the value of the valley as the best threshold for the current histogram. In addition, we define another threshold by the Otsu algorithm. The two thresholds (in our sample case, 1148 and 1436) divide the histogram into three intervals.
(a) Gray-level histogram image of HDVII. (b) Binary image of the result of HDVII. (c) False color image of the result of HDVII.
After thematic interpretation of the pixels of each interval, we find that the pixels in interval I are completely covered by vegetation; furthermore, the pixels covered by vigorous vegetation are concentrated around the top of the first peak. The pixels in interval II are mostly covered by plastic greenhouses; however, a small amount of very low-density vegetation remains. The pixels in interval III are covered by man-made surfaces in very small quantities. Therefore, the pixels in interval II contain our target. A binary image is generated to extract the pixels in interval II [see Fig. 6(b)], and the corresponding pixels of the four bands are extracted subsequently [see Fig. 6(c)].
C. Result of the Final Classification Using the NDVI
In the third and last step, the final classification is produced using the NDVI. It is carried out on the image extracted by the HDVII, and a gray-level image of the NDVI is generated [see Fig. 7(a)]. The horizontal axis shows the NDVI values, ranging from −0.09 to 0.54; the vertical axis shows the pixel counts, ranging from 1 to 8276. This time there is only one peak in the histogram. The remaining pixels consist predominantly of plastic greenhouses, and only a few contain low-density vegetation and other land cover types. We divide the histogram into two intervals by a threshold identified with the Otsu algorithm; in our particular case, the threshold is 0.26, located at the bottom of the peak. After thematic interpretation, we find that the pixels in interval II are completely covered by plastic greenhouses. A binary image is generated to extract the plastic greenhouses based on interval II [see Fig. 7(b)], and the corresponding pixels of the four bands are extracted subsequently [see Fig. 7(c)].
(a) Gray-level histogram image of NDVI. (b) Binary image of the result of NDVI. (c) False color image of the result of NDVI.
D. Greenhouse Mapping in the Study Area
The great majority of pixels covered by plastic greenhouses are extracted by the three-step procedure, which was designed according to the idea of hierarchical decision-making. However, a small number of pixels covered by plastic greenhouses are missed by the classification approach. We find that the plastic membrane on greenhouse roofs with very smooth surfaces produces mirror reflection of visible light at particular orientations. In this circumstance, the reflectance characteristics of plastic greenhouses become spectrally concordant with other land cover types.
We calculate the average reflectance of all sampled pixels featuring plastic greenhouses (50 pixels) and derive the reflectance curve; all those sampled pixels were covered by normal plastic greenhouses. For comparison, ten pixels covered by mirror-reflection plastic greenhouses are selected, their average reflectance is calculated, and the corresponding reflectance curve is derived [see Fig. 8(a)]. We find that the reflectance of mirror-reflection plastic greenhouses is higher than that of normal plastic greenhouses, but the significant difference in reflectance between the red and near-infrared bands is maintained. Besides, the reflectance of mirror-reflection plastic greenhouses increases progressively across the four bands.
(a) Spectral reflectance of normal greenhouses and mirror-reflection greenhouses. (b) Spectral characteristics of mirror-reflection greenhouses. (c) Spatial distribution of plastic greenhouses in the study area.
E. Mapping Accuracy of Plastic Greenhouses
The accuracy assessment of the final result is carried out with the four metrics (BF, MF, DP, and QP) based on the manual interpretation of 500 pixel sets (4 × 4 matrices) and 500 pixels classified by our three-step procedure. The number of selected pixels or pixel sets accounts for 0.2% of all mapped plastic greenhouse pixels. The 500 automatically extracted pixels are selected from the presented plastic greenhouse classification result: we convert all adjacent pixels into patches of different sizes, and the number of pixels selected in each patch is determined by its area proportion of all patches. The 500 manually interpreted pixel sets are selected in a different way: for a balanced spatial distribution, we divide the study site into 100 grid cells and classify plastic greenhouses manually per cell. If there is no plastic greenhouse in a cell, no pixel set is selected; otherwise, the number of pixel sets selected in a cell is determined by the area proportion of the manually drawn greenhouses in that cell. All 500 automatically extracted pixels are displayed in Fig. 9(a) and the 500 manually interpreted pixel sets in Fig. 9(b). TP, FP, and FN pixels are distinguished with supplementary data (i.e., Google Earth VHR data and a topographic map); TN pixels are not taken into consideration. Ultimately, the four metrics for the accuracy assessment of plastic greenhouses are calculated (see Table III). The correct rate of our mapping method reaches 97.34%, the likelihood of greenhouses being correctly labeled is 95.20%, and the proportions of FP and FN pixels are both less than 3%.
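One possible way to implement the area-proportional selection of the automatically extracted pixels is sketched below; the use of connected-component labeling and a random generator are assumptions consistent with, but not prescribed by, the text:

```python
import numpy as np
from scipy import ndimage

def sample_pixels_by_patch_area(greenhouse_mask, n_samples=500, rng=None):
    """Draw pixels from the classified greenhouse mask so that each patch
    contributes in proportion to its area."""
    rng = np.random.default_rng() if rng is None else rng
    labels, n_patches = ndimage.label(greenhouse_mask)   # connected greenhouse patches
    rows, cols = np.nonzero(labels)
    patch_ids = labels[rows, cols]
    sampled = []
    for patch in range(1, n_patches + 1):
        idx = np.nonzero(patch_ids == patch)[0]
        # Patch quota proportional to its share of all greenhouse pixels.
        quota = int(round(n_samples * idx.size / rows.size))
        if quota > 0:
            chosen = rng.choice(idx, size=min(quota, idx.size), replace=False)
            sampled.extend(zip(rows[chosen], cols[chosen]))
    return sampled  # list of (row, col) pixel coordinates
```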
(a) 500 pixels derived from the plastic greenhouse mapping result. (b) 500 manually interpreted pixel sets of plastic greenhouses.
F. Transfer of the Plastic Greenhouse Mapping Method
To test the robustness of the suggested plastic greenhouse mapping method, we transfer it to a larger area (2025 km²) with a different, i.e., dispersed, spatial distribution pattern of plastic greenhouses and a high vegetation fraction. In addition, the acquisition time of this GF-2 image is winter, i.e., a different phenological stage. All these features are considered a challenge for the transfer of the approach. The plastic greenhouses mapped by the transferred three-step procedure are displayed as yellow patches in Fig. 10. The total area of plastic greenhouses is 65.68 ha, and the proportion of mirror-reflection greenhouses is 6.36%; the lower rate in this image might be caused by the different solar elevation. Despite these challenges, the classification accuracy for plastic greenhouse mapping in the testing area decreases only slightly: the likelihood of plastic greenhouses being correctly labeled reaches 96%, and the misclassification rate remains below 3%. This satisfactory result demonstrates the transferability of the approach.
Discussion
With the rapid development of modern agriculture, plastic greenhouses have increased remarkably during the past few decades, and so have the mapping methods based on VHR satellite images. Pixel-based mapping methods using maximum likelihood, random forest (RF), and support vector machines (SVM) achieve accuracies of up to 93% [25]; with the integration of texture, the mapping accuracy even reached 97% [20], [24]. On the other side, object-oriented mapping methods were reported to obtain accuracies of over 90% [28]; with an extensive amount of training data, the mapping accuracy was reported to reach 98% [38]. By comparison, our suggested plastic greenhouse mapping method shows very competitive accuracies without texture feature integration and without requiring a large amount of prior knowledge for model training.
On the other hand, we also carried out the most widely used object-oriented mapping approach to compare its accuracy with our pixel-based results. The three metrics (i.e., scale, shape, and compactness) for MRS suggested by influential existing studies are implemented [54], [55]. Even after a tedious trial-and-error process [39], the optimal segmentation still performs poorly on the two GF-2 images, particularly on the image acquired in winter, because the number of plastic greenhouses is small and the spectral differences across plastic greenhouses are large. Subsequently, the well-established classification methods (RF and SVM) [56] are unable to obtain accuracies exceeding 90%, which we attribute mainly to the poor segmentation. This shows that the phenological stage has a great impact on plastic greenhouse mapping accuracies; thus, remote sensing images from a particular phase of the year (mid or late April) are typically required [40]. Our mapping method, however, performs well at two completely different phenological stages (summer and winter).
However, challenges remain for mapping plastic greenhouses. Although the proportion of mirror-reflection greenhouses is not high, it greatly influences the classification accuracy on VHR images: the accuracy drops from the current 97.34% (the DP value) to 92.68% if mirror-reflection greenhouses are not taken into consideration. Therefore, accounting for mirror-reflection greenhouses through the suggested DCVSI and threshold setting is a necessity.
In this study, 2.73% of the pixels were labeled as plastic greenhouses by our classification but as other land cover types by visual interpretation. We found that all those misclassified pixels were covered by the same land cover type: water plants [see Fig. 11(a)]. The spectral characteristics of water plants are very similar to those of plastic greenhouses in the false color composite image [see Fig. 11(b)], and Fig. 11(c) shows the small spectral reflectance differences between water plants and greenhouses.
(a) Spectral characteristics of water plants. (b) Spectral characteristics of a normal greenhouse. (c) Spectral reflectance of water plants and normal greenhouses.
On the other side, 2.31% of the pixels were labeled as plastic greenhouses by manual interpretation but as other land cover types by the developed method. We find that all the pixels missed by our classification are covered by greenhouses with no or very sparse vegetation inside. Our plastic greenhouse mapping method relies on the characteristics of both the vegetation and the plastic membrane, and the vegetation attribute of plastic greenhouses is a prerequisite. By enhancing the spectral features of vegetation, greenhouses with low-density vegetation inside can also be detected [see Fig. 12(a)]. Greenhouses without vegetation inside or with a very low density of vegetation [see Fig. 12(b)], however, are not detectable with our approach focusing on spectral features; the spectral reflectance of nonvegetation greenhouses is very similar to that of soil [see Fig. 12(c)].
(a) Spectral characteristics of a plastic greenhouse with low-density vegetation inside. (b) Spectral characteristics of a plastic greenhouse with no vegetation inside. (c) Spectral reflectance of soil, a normal greenhouse, and a nonvegetation greenhouse.
Conclusion
Intensive agricultural practices are developing across the globe to meet the growing demands of increasing populations. Plastic greenhouse agriculture is one practice for improving the production of vegetables and food; however, its development has considerable effects on the environment. Plastic greenhouses are considered cultivated land in traditional land use classifications, and despite the fast growth of plastic greenhouse agriculture, inventories of the amount and distribution of plastic greenhouses are widely absent.
This study presents a new plastic greenhouse mapping method based on a three-step procedure applied to GF-2 satellite images. The proposed method achieves competitive mapping accuracies under adverse operational conditions. The transfer to another GF-2 image with a different season and phenological stage and with different land use configurations in the surroundings of the plastic greenhouses proves the stability and universality of the method. The successful applicability of our method to images with higher spatial resolutions and to areas where plastic greenhouses are even larger is therefore very likely. The approach proves its feasibility on GF-2 satellite images. Although these data lack long historical availability, the large scene footprint (45 × 45 km), the lower costs compared with the highest resolution satellite data, and the high accuracies achieved by the presented method make it possible to take spatial inventories of plastic greenhouses over large areas.