
Temporal Resolution Enhancement of COMS Satellite Using Geo-Kompsat-2A Satellite Through Data-to-Data Translation



Abstract:

This study introduces a data-to-data (D2D) translation approach that utilizes a conditional adversarial learning framework to generate hypothetical data for the meteorological imager (MI) sensor on the communication, ocean, and meteorological satellite (COMS). The proposed D2D model produces virtual 10-min data from actual 30-min data by exploiting the 10-min temporal resolution (TR) of the advanced MI (AMI) on GEO-KOMPSAT-2A during the overlapping observation period of the two satellites. Specifically, the D2D model uses one visible (VIS) channel at 0.64 μm and four infrared channels (3.8, 6.9, 10.8, and 12.3 μm) of actual 30-min AMI data from April 2020 to April 2022 to train and test the model. Subsequently, the D2D model is applied to simulate hypothetical 10-min-TR COMS data from the 30-min COMS observation data of September 2019 to March 2020, during the coexistence period of the two satellites. Regression-calibrated COMS data alleviated the spectral response function differences between the MI and AMI sensors. The proposed D2D method exhibits excellent statistical performance, with an average root-mean-square error of 0.056 for the VIS channel and 3.237 K, 1.005 K, 3.251 K, and 3.184 K for the 3.8 μm, 6.9 μm, 10.8 μm, and 12.2 μm channels, respectively. The findings of this study are expected to facilitate various types of remote sensing research and applications using long-term data that combine AMI and AMI-like past MI data.
Page(s): 9759 - 9771
Date of Publication: 06 May 2024


SECTION I.

Introduction

Satellite observation data have been critical in various research and application fields, including weather forecasting, environmental monitoring, disaster management, and national security. Satellites equipped with advanced sensors can capture the Earth's surface and track fluctuations in atmospheric and marine conditions [1], [2]. The recently operating advanced geostationary weather satellites provide higher spatiotemporal resolution than previous generations of satellites, enabling them to provide essential insights into natural phenomena, such as wildfires and hurricanes [3], [4], [5]. Enhanced temporal resolution (TR) for satellite observation is one of the crucial factors in satellite remote sensing data collection. Fine TRs offer numerous advantages, including improving the temporal analysis of global surface changes, weather phenomena, and human activities [6], [7], detecting subtle changes at a small scale, improving the accuracy of remote-sensed data analysis [8], [9], and facilitating quick assessment of damage and impact analysis during natural disasters [10]. In particular, forecasting local heavy rain poses challenges due to its intense and short lifetime. Most precipitation products offer data at 5- or 10-min intervals [11], [12] to address the need for real-time monitoring of rapid precipitation changes. Moreover, numerous studies leverage high-TR data to enhance spatial resolution, accuracy, and predictive capabilities in precipitation products [13], [14].

The latest generation of geostationary weather satellites, such as Meteosat Third Generation and GEO-KOMPSAT-2A/B (GK2A/B), are equipped with advanced sensors that provide higher TR [15], [16] than earlier satellites. However, their relatively short observation period compared with previous generations of geostationary weather satellites, such as the communication, ocean, and meteorological satellite (COMS), limits the available data. A practical approach to address the limited availability of short-term GK2A data is therefore to leverage data collected from previous COMS observations. Both the GK2A and COMS satellites operate with similar central wavelengths in five channels, consisting of one visible (VIS) and four infrared (IR) channels. However, they differ in their TRs, with GK2A observing at a 10-min interval and COMS at a 30-min interval. Combining the COMS and GK2A observations into a continuous time series with a 10-min TR would therefore be advantageous for monitoring severe weather events and conducting long-term analyses.

The potential improvements in the TR can be explored using various video frame interpolation techniques. For instance, the optical flow-based method in computer vision may be useful as it can track object motion within a video and utilize it to interpolate intermediate frames by linearly combining optical flow maps [17], [18], [19], [20]. Another method is data fusion technology, which integrates data from multiple sources to estimate fast-changing phenomena [21], [22]. Recently, machine learning and deep learning (DL) technologies have been developed and applied to TR enhancement studies, such as DL algorithms that learn patterns from high-resolution satellite data and apply them to low-resolution data, advancing their performance [23], [24], [25], [26], [27]. However, the methods utilized in previous studies have yet to be applied in satellite remote sensing.

This research aims to enhance the TR of the earlier COMS data to match that of the GK2A data. This enhancement will facilitate the integration of the past COMS data with the current GK2A data, enabling more comprehensive long-term studies. For this purpose, this study utilized a data-to-data (D2D) method [28], [29], [30], [31]. The D2D method consists of pre- and postprocessing steps that convert satellite observations with different units and ranges into the input and output data used for model learning, together with a conditional generative adversarial network (GAN) structure [32], implemented using the Pix2Pix framework [33], for adversarial learning between two different sensors, namely the meteorological imager (MI) and the advanced MI (AMI).

Pix2Pix, which uses the discrete digital numbers (DNs) of conventional satellite images ranging from 0 to 255, has demonstrated remarkable translation from one image to another in satellite remote sensing applications [34], [35], [36]. The D2D framework translates one original dataset into another using normalization as preprocessing and denormalization as postprocessing, converting between the original satellite-observed albedo or brightness temperature (TB) values and a numerical array before and after adversarial learning. The proposed TR enhancement D2D model used the GK2A/AMI data sequence as the paired input data. This study investigated and presents the optimal input sequence length for obtaining the best results. Notably, the constructed D2D TR model used regression-calibrated COMS/MI data to simulate the AMI-like COMS/MI data with 10-min TR because of the different spectral response functions (SRFs) of the COMS/MI and GK2A/AMI sensors.

SECTION II.

Data and Study Area

A. Study Area

The study area was delimited within latitudes of 31–40.5°N and longitudes of 121.5–132.5°E, encompassing the Korean peninsula and its adjacent seas. This selection was made because processing full-disk or East Asia coverage of COMS and GK2A data with DL algorithms requires a substantial amount of memory, exceeding the GPU's capacity to handle such a large number of pixels. Fig. 1 illustrates the study area, including the Korean peninsula and surrounding seas, on 21 February 2020 at 05:30 UTC.

Fig. 1. Study area encompassing the Korean peninsula and its adjacent seas.

B. COMS Data

From June 2010 to March 2020, the COMS, Korea's first geostationary meteorological satellite, was successfully operated by the National Meteorological Satellite Center (NMSC) of the Korea Meteorological Administration (KMA). The COMS carried two Earth-observing instruments, the MI and the geostationary ocean color imager, as well as one experimental communication system, the communication payload system, operating in the Ka-band radio frequencies [37]. During its successful 10-year operation, the COMS provided 16 retrieved products for monitoring various weather events, such as tropical storms, deep convective clouds, fronts, fog, and Asian dust, in addition to supporting numerical weather forecasting models and aerosol monitoring [38]. The KMA produced near real-time weather products using the observations at the five channels of the COMS/MI, including a VIS channel with a 1 km spatial resolution and shortwave IR (SWIR), water vapor (WV), and IR channels (10.8 and 12.0 μm) with a 4 km spatial resolution. The MI scans the East Asian region, centered on the Korean Peninsula, every 15 min and the full-disk coverage of the Earth every 3 h [38]. Table I summarizes the characteristics of the five channels of the MI sensor.

TABLE I. Channel Specifications of the COMS/MI Sensor

C. GK2A Data

The GK2A satellite, equipped with the AMI sensor as a successor of the COMS/MI, was launched on 5 December 2018 to continue satellite-based weather observations. Following an 8-month orbital test, the GK2A began its official observation on 25 July 2019. The GK2A/AMI sensor comprises 16 channels and offers high spatiotemporal resolution, with spatial resolutions of 0.5–2 km and observations every 2 min over the Korean Peninsula region and every 10 min over East Asia and the full disk [39]. Compared with the COMS/MI, the GK2A/AMI provides significantly more observation data due to its advanced temporal, spatial, and spectral resolution capabilities. Notably, the shorter observation intervals enable the identification and tracking of rapidly changing meteorological phenomena, leading to more accurate quantitative products [40]. Table II provides details on the characteristics of the AMI sensor's 16 channels.

TABLE II. Channel Specifications of the GK2A/AMI Sensor

D. COMS and AMI Data Collocation

This study used the COMS and GK2A data as input for the D2D model for adversarial learning. The COMS/MI and GK2A/AMI sensors have different SRFs. Fig. 2 illustrates the SRFs of the five overlapping channels between the two sensors.

Fig. 2. SRFs of COMS/MI and GK2A/AMI at (a) VIS, (b) SWIR, (c) WV, (d) IR1, and (e) IR2 channels.

First, we obtained the albedo and TB values of the AMI and COMS observations using conversion tables provided by the NMSC that establish the relationship between digital counts and albedo or TB values for the AMI and MI sensors. Second, we upscaled the GK2A data to the MI grid using a nearest-neighbor interpolation technique, from 2 to 4 km for the AMI's SWIR, WV, and IR channels and from 0.5 to 1 km for the AMI's VIS channel, so that the paired numerical arrays had the same size for adversarial learning. Third, this study adjusted the original COMS data to fit the AMI data for effective training of the D2D model. Linear regression functions were used for the VIS, WV, IR1, and IR2 channels, while a second-order regression was applied for the COMS SWIR channel due to differences between the COMS and GK2A data at low and high TB. As illustrated in Fig. 2(d), the central wavelength of the COMS/MI 10.8 μm (IR1) channel, unlike those of the other COMS/MI channels, aligns with the average of the GK2A/AMI 10.5 μm (#13) and 11.2 μm (#14) channels. Consequently, we used the mean TB values of the GK2A/AMI 10.5 μm and 11.2 μm channels as the input dataset for generating the COMS/MI 10.8 μm channel. The regression coefficients between the AMI and MI sensors were derived by averaging the coefficients calculated over the entire dataset.
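A minimal Python sketch of these collocation steps is given below; the subsampling step, array shapes, and the use of numpy.polyfit for the channel-wise regression are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def to_mi_grid(ami: np.ndarray, step: int = 2) -> np.ndarray:
    """Nearest-neighbor resampling of AMI data onto the coarser MI grid
    (2 km -> 4 km for SWIR/WV/IR, 0.5 km -> 1 km for VIS) by simple subsampling."""
    return ami[::step, ::step]

def fit_calibration(mi: np.ndarray, ami: np.ndarray, order: int = 1) -> np.ndarray:
    """Fit regression coefficients mapping MI values toward AMI-like values:
    order=1 (linear) for VIS/WV/IR1/IR2, order=2 for the SWIR channel."""
    return np.polyfit(mi.ravel(), ami.ravel(), order)

def calibrate(mi: np.ndarray, coeffs: np.ndarray) -> np.ndarray:
    """Apply the fitted polynomial to produce regression-calibrated MI data."""
    return np.polyval(coeffs, mi)

# For the MI 10.8 um (IR1) channel, the AMI counterpart is the mean brightness
# temperature of the AMI 10.5 um (#13) and 11.2 um (#14) channels:
# ami_ir1_ref = 0.5 * (ami_ch13 + ami_ch14)
```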

Fig. 3 shows scatterplots of the original and corrected COMS data against the GK2A data on 5 January 2020 at 04:00 UTC. After correction, the COMS VIS, SWIR, and WV channels aligned more symmetrically with the corresponding AMI channels. Notably, the IR1 and IR2 channels of the two sensors showed greater similarity than the other three channels. Table III summarizes the coefficients obtained through regression analysis between the COMS/MI channels and their corresponding GK2A/AMI channels.

Fig. 3. Scatterplots of (left column) original and (right column) corrected COMS and GK2A data for five common channels (VIS, SWIR, WV, IR1, and IR2) on 5 January 2020 at 04:00 UTC.

TABLE III. Summary of Regression Coefficients for MI Channel Data Calibration via AMI Data

SECTION III.

Methods

A. Preprocessing for D2D Generation

The MI and AMI VIS channels provide albedo values ranging from 0 to 1. Conversely, the SWIR, WV, IR1, and IR2 channels provide TB values typically within the range of approximately 190–400 K. Generally, a DL model may exhibit bias toward target data with larger values, particularly when the ranges of the data are distributed differently [41], [42], [43]. Thus, appropriate data normalization is required to expedite training and to aid the identification of optimal local minima [42]. Therefore, we normalized the albedo and TB values into the range from −1 to 1 for the D2D model as follows: \begin{equation*} \widehat{Y_i} = \frac{Y_i - Y_{\min}}{Y_{\max} - Y_{\min}} \times 2 - 1 \tag{1} \end{equation*}

where \widehat{Y_i} represents the normalized albedo or TB value of the COMS/MI or GK2A/AMI data; the subscript i denotes the pixel index; and Y_{\max} and Y_{\min} represent the maximum and minimum values of each channel.
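As a minimal illustration of (1) and its inverse used for pre- and postprocessing, the sketch below normalizes albedo or TB values into [−1, 1] and recovers them afterwards; the min/max values shown are placeholders, not the channel limits used in the paper.

```python
import numpy as np

def normalize(y: np.ndarray, y_min: float, y_max: float) -> np.ndarray:
    """Map physical values (albedo or TB in K) into [-1, 1] as in (1)."""
    return (y - y_min) / (y_max - y_min) * 2.0 - 1.0

def denormalize(y_hat: np.ndarray, y_min: float, y_max: float) -> np.ndarray:
    """Invert (1) to recover albedo or TB from generator outputs."""
    return (y_hat + 1.0) / 2.0 * (y_max - y_min) + y_min

# Illustrative TB range only (placeholder values):
tb = np.array([[210.0, 250.0], [290.0, 320.0]])
tb_hat = normalize(tb, y_min=190.0, y_max=400.0)
assert np.allclose(tb, denormalize(tb_hat, y_min=190.0, y_max=400.0))
```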

B. Adversarial Learning Using Pix2Pix for D2D Model

The D2D method in the present study includes an adversarial learning structure using the Pix2Pix framework [33] based on GANs [44], comprising a generator (G) and a discriminator (D). G generates a simulated virtual image y_v = G(x) from the paired dataset X = \{x\}, while D assesses the similarity of y in Y = \{y\} to the generated image y_v on a scale from 0 to 1.

The Pix2pix method utilizes the U-net architecture [45] in G, which comprises encoder–decoder layers to generate virtual data between observation intervals using skip connections to exchange low-level information across the bottleneck linking the encoder and decoder. In contrast, D employs the PatchGAN structure [46] to divide the time-series data of the predicted outputs with the enhanced TR into patches to verify whether each patch is real or virtual and to aggregate the patch results to arrive at a final decision.

In this study, we assigned the normalized time series of AMI and MI data at 30-min intervals to the dataset X and the normalized time series of AMI data at 10-min intervals to the dataset Y.

Compared with Pix2Pix for image-to-image translation using DN values, the D2D method in this study leverages time-series data of physical values, such as normalized albedo or TB values. Thus, the D2D approach has the advantage of translating data of one variable into another while preserving the original features of the satellite observations. During training, D's weights are updated through the adversarial loss (L_a) from the previous training step, while G's weights are updated through the reconstruction loss (L_1). Fig. 4 illustrates the adversarial learning structure of the D2D model using the paired satellite data for generating hypothetical TR-enhanced COMS data.
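A hedged PyTorch-style sketch of this loss wiring is shown below; the discriminator scores (condition, target) pairs in a PatchGAN fashion, and the weight lambda_l1 and tensor layout are assumptions rather than the paper's settings.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()   # adversarial loss on PatchGAN logits
l1 = nn.L1Loss()               # reconstruction loss

def d_loss(D, x, y, y_v):
    """Discriminator loss L_a: score (condition, target) pairs as real vs. generated."""
    real = D(torch.cat([x, y], dim=1))
    fake = D(torch.cat([x, y_v.detach()], dim=1))
    return bce(real, torch.ones_like(real)) + bce(fake, torch.zeros_like(fake))

def g_loss(D, x, y, y_v, lambda_l1=100.0):
    """Generator loss: fool D on the generated frames plus L1 closeness to the observed 10-min sequence."""
    fake = D(torch.cat([x, y_v], dim=1))
    return bce(fake, torch.ones_like(fake)) + lambda_l1 * l1(y_v, y)
```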

Fig. 4. D2D-based TR enhancement framework in this study.

C. Experiments

In this study, a D2D model was developed to enhance the TR of COMS data from 30-min to 10-min intervals using the five GK2A/AMI channels with corresponding central wavelengths and 10-min TR as paired data.

This study conducted experiments to determine the optimal input sequence length using the IR1 (10.5 μm) channel of GK2A/AMI with five configurations, denoted as Methods 1 to 5. Method 1 used the time t data as input and generated the data 10 min before and after time t. In contrast, Methods 2–5 used time-series data at 30-min intervals as input and created virtual data at 10-min intervals to fill the missing temporal gaps. Method 2 used two frames of a 30-min time series to generate four frames, Method 3 used three frames of a 60-min time series to generate seven frames, Method 4 used five frames of a 120-min time series to generate 13 frames, and Method 5 used 13 frames of a 360-min time series to generate 37 frames (see the sketch below). All data composition methods were applied to each D2D model to generate 360 min of data. Finally, the five D2D models with different learning strategies were compared against the AMI-observed data as true values. Fig. 5 illustrates the schematic diagrams of Methods 1–5 for determining the optimal time-series length for enhancing the TR of COMS data.
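The following sketch (an illustrative assumption about indexing, not the authors' code) shows how the 30-min input frames and 10-min target frames of Methods 2–5 line up.

```python
def method_times(n_input_frames: int):
    """Input frames every 30 min and target frames every 10 min over the same span.
    n_input_frames = 2, 3, 5, 13 correspond to Methods 2-5 (Method 1 instead takes
    a single time t and generates the frames 10 min before and after t)."""
    span = (n_input_frames - 1) * 30            # total span in minutes
    inputs = list(range(0, span + 1, 30))       # e.g. 5 frames -> 0, 30, ..., 120
    targets = list(range(0, span + 1, 10))      # e.g. 13 frames at 10-min steps
    return inputs, targets

for n in (2, 3, 5, 13):                         # Methods 2, 3, 4, 5
    ins, outs = method_times(n)
    print(f"{len(ins)} input frames -> {len(outs)} target frames over {ins[-1]} min")
# 2 -> 4, 3 -> 7, 5 -> 13, and 13 -> 37 frames, matching the description above.
```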

Fig. 5. Schematic diagrams of Methods 1–5 for determining the optimal time-series length for the TR enhancement of the COMS data.

This study used a training dataset of 3600 data points from September 2019 to December 2020 and a separate test dataset of 400 data points from January 2021 to December 2021. After selecting the optimal D2D model among the five methods, the constructed D2D model was applied to generate 10-min-TR COMS data for the VIS, SWIR, WV, IR1, and IR2 channels from September 2019 to March 2020. The study employed the TensorFlow and PyTorch versions of the D2D models to determine the optimal learning methodology and to enhance the TR of all channels for the GK2A and COMS satellites, respectively. Notably, this study used only daytime MI and AMI data because of solar effects on the VIS and SWIR channels. Table IV presents the composition and amount of data used in the D2D model for all datasets.

TABLE IV. Datasets for D2D Model Training, Validation, and Application

D. Evaluation

The pixel-by-pixel statistical verification between the observed and the D2D-generated time-series data was quantitatively evaluated through the Pearson correlation coefficient (CC), mean absolute error (MAE), bias, and root-mean-square error (RMSE) as follows [47], [48], [49]: \begin{align*} \text{CC} &= \frac{\sum_{i=1}^{N} \left( Y_{D,i} - \overline{Y_D} \right)\left( Y_{O,i} - \overline{Y_O} \right)}{\sqrt{\sum_{i=1}^{N} \left( Y_{D,i} - \overline{Y_D} \right)^2}\,\sqrt{\sum_{i=1}^{N} \left( Y_{O,i} - \overline{Y_O} \right)^2}} \tag{2}\\ \text{Bias} &= \frac{1}{N}\sum_{i=1}^{N} \left( Y_{D,i} - Y_{O,i} \right) \tag{3}\\ \text{MAE} &= \frac{1}{N}\sum_{i=1}^{N} \left| Y_{D,i} - Y_{O,i} \right| \tag{4}\\ \text{RMSE} &= \sqrt{\frac{1}{N}\sum_{i=1}^{N} \left( Y_{D,i} - Y_{O,i} \right)^2} \tag{5} \end{align*}

where N represents the total number of pixels in the observed data; i indicates the pixel index from 1 to N; Y_{O,i} is the value (albedo or K) of the ith pixel in the observed data; Y_{D,i} denotes the value of the ith pixel in the D2D-generated data; \overline{Y_O} is the mean value of the observed data; and \overline{Y_D} denotes the mean value of the D2D-generated data.
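A direct NumPy transcription of (2)–(5), offered as a sketch rather than the authors' evaluation code, could look as follows.

```python
import numpy as np

def cc(y_d: np.ndarray, y_o: np.ndarray) -> float:
    """Pearson correlation coefficient, Eq. (2)."""
    d = y_d.ravel() - y_d.mean()
    o = y_o.ravel() - y_o.mean()
    return float((d * o).sum() / (np.sqrt((d ** 2).sum()) * np.sqrt((o ** 2).sum())))

def bias(y_d: np.ndarray, y_o: np.ndarray) -> float:
    """Mean difference between generated and observed values, Eq. (3)."""
    return float((y_d - y_o).mean())

def mae(y_d: np.ndarray, y_o: np.ndarray) -> float:
    """Mean absolute error, Eq. (4)."""
    return float(np.abs(y_d - y_o).mean())

def rmse(y_d: np.ndarray, y_o: np.ndarray) -> float:
    """Root-mean-square error, Eq. (5)."""
    return float(np.sqrt(((y_d - y_o) ** 2).mean()))
```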

Fig. 6 presents a comprehensive process and essential steps of the D2D model development for TR enhancement of COMS data, which includes data preprocessing, model training, postprocessing, and validation.

Fig. 6. Schematic flowchart of enhancing the TR of COMS data using the D2D model.

SECTION IV.

Results

A. Optimal Method for D2D Model Development

Fig. 7 shows the results of the models trained for the five methods using 400 test cases. The curves in the figure, drawn using cubic spline interpolation [50], show the average values of each D2D model for the five methods. The differences among the five methods were insignificant. All five methods showed CC values higher than 0.9, and the biases ranged from −0.5 to 0.5 K. The MAE and RMSE values were less than 4 K and 6 K, respectively.

Fig. 7. Scatterplots and averaged CC, bias, MAE, and RMSE values for Methods 1–5 for the total validation dataset.

Methods 2 and 4 showed overall the highest CC and the lowest MAE and RMSE values. Method 2 showed the best results from May to September, while Method 4 was the best for the other months.

Fig. 8 presents seasonal box-and-whisker plots [51] of CC, bias, MAE, and RMSE for Methods 1–5 on the test dataset to determine the optimal learning method based on the highest CC and the lowest MAE and RMSE values. Table V summarizes the seasonal mean values of the statistical indices. For spring and winter, Method 4 yielded the highest CC values of 0.959 and 0.941 and the lowest MAE values of 2.147 K and 1.922 K, respectively. For summer and fall, Method 2 showed the highest CC values of 0.966 and 0.957 and the lowest MAE values of 2.898 K and 2.192 K, respectively. The degraded results of all methods during summer and fall may be attributed to the regional meteorological characteristics of the study area: the Korean Peninsula usually experiences extreme weather events, such as tropical low-pressure systems and localized heavy rainfall, in summer and fall, leading to rapid changes in cloud and atmospheric conditions [52]. Overall, Method 2 yielded the best MAE value of 2.379 K, while Method 4 yielded the best CC value of 0.957.

Fig. 8. Box-and-whisker plots of seasonal-averaged values for Methods 1–5.

TABLE V. Summary of Seasonal Statistical Results

We determined the more suitable approach between Methods 2 and 4 by comparing their stability, computed as the case-by-case variance of each evaluation metric over the 360-min generation period.

Fig. 9 displays box-and-whisker plots of the variance over the full generation period for seasonal cases, calculated for each statistical metric. Method 4 showed consistently lower variance values for CC, MAE, and RMSE than Method 2, irrespective of the season. Thus, this study selected Method 4 as the optimal method for D2D model construction; notably, the other methods also showed excellent performance. The five methods were evaluated using the test dataset consisting of 200 data points for the VIS channel and 400 data points for the four IR channels, covering the common observation period of GK2A and COMS from September 2019 to March 2020.
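A rough sketch of this stability check, under an assumed data layout rather than the authors' code, is shown below: for every case, the chosen metric is computed for each generated frame of the 360-min window, and its variance summarizes the method's stability.

```python
import numpy as np

def case_variances(generated: np.ndarray, observed: np.ndarray) -> np.ndarray:
    """generated, observed: arrays of shape (cases, frames, H, W).
    Returns the per-case variance of frame-wise RMSE over the 360-min window."""
    frame_rmse = np.sqrt(((generated - observed) ** 2).mean(axis=(2, 3)))  # (cases, frames)
    return frame_rmse.var(axis=1)                                          # (cases,)

# Lower variances across cases (as found for Method 4) indicate a more stable
# D2D configuration over the full generation period.
```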

Fig. 9. Box-and-whisker plots of the CC, bias, MAE, and RMSE variances for Methods 2 and 4.

B. D2D-Simulated AMI Data

Fig. 10 shows a comparison between the AMI observation data and the D2D-generated data using the 30-min interval AMI data for the case on 21 February 2020 at 06:00 UTC. The D2D-generated AMI data for all five channels qualitatively depicted cloud patterns and clear skies similar to the observed AMI data. However, there were differences in the cloud boundary areas between the D2D-simulated and observed AMI data at the VIS, SWIR, IR1, and IR2 channels. The CC values from the VIS to IR2 channels were very high at 0.931, 0.984, 0.989, 0.989, and 0.990, respectively, indicating a high correlation between the observed and D2D-generated AMI data. Additionally, the MAE values for the five AMI channels trained using the D2D method were very low at 0.034 (albedo), 2.185 K, 0.889 K, 2.199 K, and 2.743 K, respectively, suggesting that the D2D method can simulate AMI observation data at 10-min intervals with high accuracy.

Fig. 10. Examples of (a) observed and (b) D2D-generated GK2A/AMI data, and (c) the difference between (a) and (b). The D2D model used the 30-min interval GK2A data. The time was 21 February 2020 at 06:00 UTC.

C. D2D-Simulated AMI-Like COMS/MI Data

Fig. 11 presents five channel-specific images of GK2A observations and the D2D-generated 10-min-TR COMS/MI data, as well as the difference between the two, on 21 February 2020 at 06:00 UTC. The proposed D2D method successfully generated 10-min virtual COMS/MI data with a temporal resolution similar to that of the AMI observations. However, the D2D-generated 10-min MI data showed larger differences than the D2D-generated 10-min AMI data (compare Figs. 10 and 11). The CC values of the VIS, SWIR, WV, IR1, and IR2 channels of the D2D-generated 10-min MI data were 0.933, 0.982, 0.982, 0.990, and 0.990, respectively, slightly lower than those of the D2D-generated 10-min AMI data shown in Fig. 10. Additionally, the MAE values were slightly higher, at 0.034 (albedo), 2.598 K, 1.984 K, 2.262 K, and 2.417 K, compared with the results in Fig. 10. Therefore, the proposed D2D method was able to generate 10-min virtual COMS/MI data that closely resembled AMI observations. Furthermore, it should be emphasized that calibrated COMS/MI data were used as input data for this D2D model.

Fig. 11. Examples of (a) observed AMI data, (b) D2D-generated AMI-like COMS data, and (c) the difference between (a) and (b). The time was the same as in Fig. 10.

Table VI presents the statistical average values of the D2D-simulated AMI and AMI-like COMS/MI data compared with AMI observation over the entire areas and cloud areas for the entire application dataset. The AMI-like COMS/MI data show consistent results with D2D-simulated AMI data for all five bands. The proposed D2D method generated AMI-like COMS/MI data with the 10-min TR with a CC value of 0.88 and MAE and RMSE values less than 0.06 for the VIS channel over the entire area. The D2D model exhibits outstanding CC values > 0.94, MAE < 2.8 K, and RMSE < 3.9 K for both AMI and MI sensors' SWIR, WV, IR1, and IR2 channels.

TABLE VI. Statistical Results for D2D-Generated AMI and AMI-Like COMS/MI Data Using the Total Validation Datasets

Clouds identified using the GK2A's cloud mask data in Table VI exhibit pronounced temporal variations. Despite the dynamic nature of clouds within the 30-min interval, both the D2D-simulated AMI and the AMI-like COMS/MI data consistently show superior average CC values, surpassing those between the AMI and MI sensors. This result underscores the effectiveness of the D2D method in enhancing TR and represents a significant advancement for cloud analysis.

Fig. 12 illustrates a 30-min time series of observed COMS/MI data and a 10-min time series of AMI-like D2D-generated COMS/MI data for Typhoon Bolaven [53] on 28 August 2012, from 03:00 to 03:30 UTC, during which the typhoon had a central pressure of 910 hPa and significantly impacted the Korean Peninsula. Notably, only COMS data were available for this period. The 10-min time series of the AMI-like D2D-generated COMS/MI data provides detailed insights into the evolution of Typhoon Bolaven over the Korean peninsula. This example shows the potential of the proposed D2D-based TR enhancement method to contribute significantly to disaster monitoring and prevention, particularly for rapidly changing phenomena such as typhoons or localized heavy rainfall with very short lifetimes.

Fig. 12. Case study of Typhoon Bolaven on 28 August 2012, from 03:00 to 03:30 UTC, examining the COMS VIS, SWIR, WV, IR1, and IR2 channels at 10-min intervals. The observed COMS data portray a 30-min time series of observed COMS/MI data, whereas the D2D-generated COMS data show a 10-min time series of AMI-like D2D-generated COMS/MI data.

SECTION V.

Discussion

This article proposes a novel TR enhancement method utilizing D2D translation with adversarial DL techniques. The trained D2D model enhanced the TR of COMS/MI data from 30 to 10 min using data from the current GK2A/AMI sensor, which has a 10-min TR, as the paired dataset.

Most previous satellite remote sensing studies on TR enhancement, including video frame interpolation methods, interpolate entire video frames sequentially and with arbitrarily chosen sequence lengths. In contrast, this study determined the input time length (120 min) through various experiments to achieve the most accurate and stable TR enhancement without arbitrary decisions. Unlike previous research [23], [24], [54], this study investigated various combinations of datasets and constructed the learning datasets based on the optimal time length for TR enhancement. During the optimal time length selection process, Methods 2 and 4 yielded the most accurate results based on the overall dataset statistics. Method 4 exhibited superior stability in data generation compared with Method 2. Notably, Method 2 outperformed Method 4 in most cases during the summer and fall seasons, indicating the effectiveness of Method 2 for the rapidly changing weather conditions in East Asia [55], [56], [57], [58]. Thus, Method 2 could be advantageous for shorter time scales due to its ability to capture these changes effectively. The details of Method 2 should be investigated in future studies.

This study also showed the necessity of calibration in TR enhancement studies that use two sensors with different but similar SRFs on two satellites. The performance of our D2D model improved when we applied the calibrated COMS/MI data to the constructed model, compared with using the original COMS/MI data. In terms of RMSE, the overall statistical scores using the calibrated COMS/MI data decreased by 2.94%, 11.62%, and 63.19% at the VIS, SWIR, and WV channels, respectively, compared with using the original COMS/MI data. However, the COMS IR1 and IR2 channels exhibited results similar to the GK2A observation data irrespective of calibration. This result warrants future study because the SRF differences of the IR1 and IR2 channels between the two sensors are comparable to those of the other three channels.

One limitation of this study lies in upscaling the spatial resolution of AMI data to match that of the COMS MI. The different spatial resolutions may affect the statistical results of the proposed D2D model because the errors due to the spatial interpolation were included. Thus, future studies will explore the effects of different spatial and spectral resolutions.

SECTION VI.

Summary and Conclusion

This study proposed a DL-based D2D translation technique using an adversarial learning structure for enhancing the TR of COMS/MI data at the VIS, SWIR, WV, and IR channels from long-interval (30-min) data to short-interval (10-min) data. To this end, the study used the previous COMS/MI data and data from the GK2A/AMI, a successor of the MI, whose observation periods overlapped.

In this study, we conducted experiments to find the optimal learning datasets for enhancing the COMS TR. The D2D model was trained using the GK2A/AMI observation data at the five channels, and the constructed model was validated using the AMI test dataset with high CC and low RMSE values, proving that the D2D-based method is applicable for enhancing the TR of AMI data. Method 4, which uses a 120-min time series of data as input and output for the D2D model, showed the highest accuracy and the most stable results.

Additionally, the constructed D2D model generated the AMI-like COMS/MI data with 10 min of TR, which were compared with the results of the AMI observation and D2D-generated AMI data during the two sensors' common observation period. We used the calibrated COMS/MI data as input data instead of the original COMS/MI data. As a result, the AMI-like COMS/MI data showed similar CC, MAE, and RMSE values to the D2D-simulated AMI data.

The proposed D2D method demonstrated that TR can be enhanced using paired datasets and an adversarial DL technique when two satellites have similar channels but different TRs. Thus, this study contributes to TR enhancement research for satellite remote sensing. The D2D-generated data can help investigate and analyze the details of past weather conditions, such as typhoons and heavy rainfall, as if past events had been observed by the current advanced satellite. In addition, consistent long-term data combining current AMI and past AMI-like data could be helpful for climate studies.

ACKNOWLEDGMENT

The authors would like to thank the anonymous reviewers for their helpful and constructive comments on the article.

References
[1] S. Businger et al., "The promise of GPS in atmospheric monitoring," Bull. Amer. Meteorol. Soc., vol. 77, no. 1, pp. 5-18, 1996.
[2] R. Stumpf et al., "Monitoring Karenia brevis blooms in the Gulf of Mexico using satellite ocean color imagery and other data," Harmful Algae, vol. 2, no. 2, pp. 147-160, 2003.
[3] R. K. Jaiswal, S. Mukherjee, K. D. Raju and R. Saxena, "Forest fire risk zone mapping from satellite imagery and GIS," Int. J. Appl. Earth Observ. Geoinf., vol. 4, no. 1, pp. 1-10, 2002.
[4] T. McNally, M. Bonavita and J.-N. Thépaut, "The role of satellite data in the forecasting of hurricane Sandy," Monthly Weather Rev., vol. 142, no. 2, pp. 634-646, 2014.
[5] A. A. Tronin, M. Hayakawa and O. A. Molchanov, "Thermal IR satellite data application for earthquake research in Japan and China," J. Geodyn., vol. 33, no. 4/5, pp. 519-534, 2002.
[6] X. Zhu and E. H. Helmer, "An automatic method for screening clouds and cloud shadows in optical satellite image time series in cloudy regions," Remote Sens. Environ., vol. 214, pp. 135-153, 2018.
[7] G. Chaudhuri, K. P. Mainali and N. B. Mishra, "Analyzing the dynamics of urbanization in Delhi national capital region in India using satellite image time-series analysis," Environ. Plan. B Urban Analytics City Sci., vol. 49, no. 1, pp. 368-384, 2022.
[8] D. B. Lobell, "The use of satellite data for crop yield gap analysis," Field Crops Res., vol. 143, pp. 56-64, 2013.
[9] J. Reiche, E. Hamunyela, J. Verbesselt, D. Hoekman and M. Herold, "Improving near-real time deforestation monitoring in tropical dry forests by combining dense Sentinel-1 time series with Landsat and ALOS-2 PALSAR-2," Remote Sens. Environ., vol. 204, pp. 147-161, 2018.
[10] K. Uddin, M. A. Matin and F. J. Meyer, "Operational flood mapping using multi-temporal Sentinel-1 SAR images: A case study from Bangladesh," Remote Sens., vol. 11, no. 13, pp. 1581-1600, 2019.
[11] H. C. Lee et al., "McGill algorithm for precipitation nowcasting by Lagrangian extrapolation (MAPLE) applied to the South Korean radar network—Part II: Real-time verification for the summer season," Asia-Pacific J. Atmos. Sci., vol. 46, no. 3, pp. 383-391, 2010.
[12] S. Kwon, S.-H. Jung and G. Lee, "Inter-comparison of radar rainfall rate using constant altitude plan position indicator and hybrid surface rainfall maps," J. Hydrol., vol. 531, pp. 234-247, 2015.
[13] Y. Kim and S. Hong, "Very short-term prediction of weather radar-based rainfall distribution and intensity over the Korean peninsula using convolutional long short-term memory network," Asia-Pacific J. Atmos. Sci., vol. 58, pp. 489-506, 2022.
[14] Y. Kim and S. Hong, "Very short-term rainfall prediction using ground radar observations and conditional generative adversarial networks," IEEE Trans. Geosci. Remote Sens., vol. 60, Sep. 2021.
[15] K. Holmlund et al., "Meteosat third generation (MTG): Continuation and innovation of observations from geostationary orbit," Bull. Amer. Meteorol. Soc., vol. 102, no. 5, pp. E990-E1015, 2021.
[16] H. Lim, J. Park and D. Kim, "The GK2A/2B ground system after the COMS," Proc. Int. Conf. Space Oper., pp. 1-6, 2016.
[17] W. Bao, W.-S. Lai, C. Ma, X. Zhang, Z. Gao and M.-H. Yang, "Depth-aware video frame interpolation," Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., pp. 3703-3712, 2019.
[18] W. Bao, W.-S. Lai, X. Zhang, Z. Gao and M.-H. Yang, "MEMC-net: Motion estimation and motion compensation driven neural network for video interpolation and enhancement," IEEE Trans. Pattern Anal. Mach. Intell., vol. 43, no. 3, pp. 933-948, Mar. 2021.
[19] H. Jiang, D. Sun, V. Jampani, M.-H. Yang, E. Learned-Miller and J. Kautz, "Super SloMo: High quality estimation of multiple intermediate frames for video interpolation," Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 9000-9008, 2018.
[20] S. Niklaus and F. Liu, "Context-aware synthesis for video frame interpolation," Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., pp. 1701-1710, 2018.
[21] P. Wu, H. Shen, L. Zhang and F.-M. Göttsche, "Integrated fusion of multi-scale polar-orbiting and geostationary satellite observations for the mapping of high spatial and temporal resolution land surface temperature," Remote Sens. Environ., vol. 156, pp. 169-181, 2015.
[22] Y. Zhao and B. Huang, "A hybrid image fusion model for generating high spatial-temporal-spectral resolution data using OLI-MODIS-hyperion satellite imagery," Int. J. Geol. Environ. Eng., vol. 11, no. 9, pp. 869-874, 2017.
[23] Y. Xiao et al., "Space-time super-resolution for satellite video: A joint framework based on multi-scale spatial-temporal transformer," Int. J. Appl. Earth Observ. Geoinf., vol. 108, 2022.
[24] L. F. Cruz, P. T. M. Saito and P. H. Bugatti, "DeepCloud: An investigation of geostationary satellite imagery frame interpolation for improved temporal resolution," Proc. Int. Conf. Artif. Intell. Soft Comput., pp. 50-59, 2020.
[25] J. Kang, Y. Jo, S. W. Oh, P. Vajda and S. J. Kim, "Deep space-time video upsampling networks," Proc. Eur. Conf. Comput. Vis., pp. 701-717, 2020.
[26] Z. Shi et al., "Learning for unconstrained space-time video super-resolution," IEEE Trans. Broadcast., vol. 68, no. 2, pp. 345-358, Jun. 2022.
[27] G. Xu, J. Xu, Z. Li, L. Wang, X. Sun and M.-M. Cheng, "Temporal modulation network for controllable space-time video super-resolution," Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., pp. 6388-6397, 2021.
[28] K.-H. Han, J.-C. Jang, S. Ryu, E.-H. Sohn and S. Hong, "Hypothetical visible bands of advanced meteorological imager onboard the geostationary Korea multi-purpose satellite-2A using data-to-data translation," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 15, pp. 8378-8388, 2022.
[29] Y. Kim, H.-S. Ryu and S. Hong, "Data-to-data translation-based nowcasting of specific sea fog using geostationary weather satellite observation," Atmos. Res., vol. 290, 2023.
[30] Y. Kim and S. Hong, "Hypothetical ground radar-like rain rate generation of geostationary weather satellite using data-to-data translation," IEEE Trans. Geosci. Remote Sens., vol. 61, pp. 1-14, 2023.
