Introduction
In the past decade, increasing focus has been given to the development of new synthetic aperture radar (SAR) concepts capable of delivering both high resolution and wide coverage [1]–[5]. In order to overcome the fundamental limitation imposed by the direct relation between swath width and azimuth resolution, the solutions usually consider multiple elevation beams in combination with SCan-On-REceive (SCORE) or the acquisition by multiple subapertures in the along-track direction [2], [4], [6]–[11].
The authors in [5] demonstrated the staggered SAR concept, which together with SCORE allows for the coverage of a continuous wide swath with high resolution. To achieve that, the pulse repetition interval (PRI) is continuously varied during the acquisition causing the blockage—i.e., the instants during transmission when the radar cannot receive the backscattered echoes—to move along the swath.
Several schemes can be used to control the variation of the PRI, e.g., pseudorandom variation, slow linear variation, or fast linear variation. In [5], it was shown that optimum performance in terms of azimuth-ambiguity-to-signal ratio (AASR) is achieved by combining a few (e.g., seven) fast linearly varying PRI sequences. Such a scheme is usually able to distribute the blockage along the swath in such a way that no consecutive samples are missed in azimuth. If enough oversampling is available, the missing data can be interpolated and a performance similar to the constant pulse repetition frequency (PRF) case can be achieved. Moreover, in this case, the staggered operation has the further advantage of leading to smeared azimuth ambiguities.
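As an illustration of the fast linear variation, the following minimal Python sketch generates the transmit instants of a periodically repeated, linearly decreasing PRI sequence. The function name and all numeric values are illustrative assumptions and do not reproduce the optimized designs of [5].

```python
import numpy as np

def fast_linear_pri_sequence(pri_max, pri_min, n_pris, n_repeats):
    # One fast linear sweep from pri_max down to pri_min, repeated
    # periodically; the cumulative sum of the PRIs gives the transmit
    # instants of the staggered acquisition.
    pris = np.linspace(pri_max, pri_min, n_pris)
    sequence = np.tile(pris, n_repeats)
    t_tx = np.concatenate(([0.0], np.cumsum(sequence)[:-1]))
    return t_tx, sequence

# Illustrative values only: mean PRF on transmit close to 1200 Hz
t_tx, seq = fast_linear_pri_sequence(1 / 1150.0, 1 / 1250.0, 7, 100)
print("mean PRF on transmit: %.1f Hz" % (1.0 / seq.mean()))
```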
Given the periodicity of the nonuniform sampling pattern, an alternative for the processing of the staggered data is the use of multichannel reconstruction approaches [1], [12]. However, such methods are impacted by noise scaling and by the back-folding of the nonlimited spectrum, especially for long PRI sequences [13]. In order to efficiently perform the SAR focusing, the staggered data can be interpolated to a uniform grid, allowing for the use of conventional frequency-domain SAR processing techniques. This resampling can be performed, e.g., with the best linear unbiased estimator (BLU), as suggested in [5], or with a nonuniform cardinal sine (sinc) kernel, as described in [14]. Alternatively, the data can be focused directly from the nonuniform grid, e.g., considering the nonuniform discrete Fourier transform [15] or employing time-domain back-projection. In all cases, the presence of the blockage potentially degrades the quality of the focused data, especially if acquiring with a low average oversampling ratio.
Staggered SAR is currently the baseline acquisition mode of the Tandem-L concept [16]. Tandem-L's goal is to acquire single/dual-polarimetric data over a 350 km ground swath and fully polarimetric data over a 175 km ground swath, with an azimuth resolution of around 7.5 m. Optimum oversampling factors are calculated considering the ambiguity level requirements and the designed PRI sequence, and amount to 2.3 and 1.9 for the dual-pol and quad-pol modes, respectively. Although an experimental quad-pol mode with a 350 km swath is desirable, due to range ambiguity constraints its mean PRF on transmit has to be limited to 1200 Hz per channel. This corresponds to an oversampling ratio of only 1.1, in which case the standard processing solution based on BLU resampling suggested in [5] cannot be used.
Another mission which can benefit from the staggered operation mode is the NASA-ISRO SAR (NISAR) mission [17]. NISAR aims to acquire data over a 240 km wide swath with 6 m azimuth resolution. As described in [18], if employing a constant PRF for transmission, the gaps amount to 10% of the dual-pol swath (ascending-only or descending-only) and can be mitigated if a coarser range resolution is acceptable. If continuous swath coverage in each individual pass is required, the staggered mode can be used [19]. In order to fulfill the ambiguity requirements, the staggered operation would require a mean PRF on transmit of around 2400 Hz, which is much higher than the current maximum value of 1650 Hz imposed by limited downlink capacity. Nevertheless, for some applications the degradation of the ambiguity level caused by acquiring with a reduced PRF may be acceptable.
Both the aforementioned staggered NISAR and experimental Tandem-L modes employ low average oversampling ratios (see the parameter summary in Table I). Hence, in both cases the focused SAR image might contain nonnegligible artifacts, especially for areas presenting high contrast and containing strong point-like targets. A first solution for the imaging of low-oversampled staggered data was proposed in [20]. The method consists of two steps: first, the recovery of the blockage using a spectral estimator (SE) for nonuniformly sampled data [21], [22]; and second, the resampling of the data to the uniform grid by minimizing the ambiguity energy in a multichannel reconstruction scheme [23]. While the approach is able to considerably reduce the artifacts of strong targets, it is suboptimal in the sense that it does not consider the target characteristics during the data recovery.
In [24], we first suggested a few modifications to the approach in [20], intended to improve the reconstruction of point-like targets while avoiding the degradation of distributed scatterers (DSs). In this article, the suggested approach for the processing of low-oversampled staggered data is discussed in depth, with the aid of an extensive analysis of the blockage recovery in Section II. Section III includes a detailed description of the complete approach, including the outline of optional steps and a discussion of when they should be used. Finally, in Section IV, we validate the proposed methodology with simulated staggered SAR data and provide first examples of the impact of the staggered operation on interferometric applications. The conclusion is drawn in Section V.
Blockage Recovery
Standard PRI design for staggered SAR acquisitions ensures that consecutive azimuth samples are not lost in either the raw or the range-compressed data domain, depending on the adopted processing approach [25]. However, even when employing such an optimum design strategy, if the mean effective PRF on transmit is close to the Doppler bandwidth, the signal can be locally undersampled. In this case, the blockage will introduce large gaps in the signal (in terms of the signal bandwidth), and the recovery of this missing data becomes a major challenge for the handling of the staggered data.
The main contribution of [20] to the processing of low-oversampled staggered data is precisely the dedicated treatment given to the blockage. Instead of directly interpolating the available data onto a uniform grid, as conventionally done in the high-PRF case [25], the authors handle the reconstruction in two steps: first, the missing blockage samples are recovered, still on the nonuniform grid; and second, the full data vector is resampled to the uniform grid. Hence, if the recovery of the missing data is successful, the resampling is performed on a properly sampled data vector.
If a time-domain back-projection approach is employed for the SAR focusing, the resampling to the uniform grid can be avoided. However, the prior recovery of the blockage may still be required depending on the local sampling characteristics (i.e., its average oversampling and deviation from the uniform grid [26], [27]). In the remainder of this section, we address the significance of this recovery for the resulting azimuth side-lobe level (see Section II-A). Moreover, we show the performance of the iterative adaptive approach for missing data (MIAA) and of BLU for the blockage recovery considering different simulation scenarios (in Section II-B).
A. On the Necessity of the Blockage Interpolation
The necessity of the blockage interpolation can be evaluated by considering a simplified case where samples from a uniformly sampled signal are periodically missed. If the blockage is neither recovered nor accounted for, the back-projection integral for a single scatterer can be approximated as
\begin{align*}
s_{\mathrm{B}}[n] &= \mathop{\sum}\limits_{m=-\mathrm{LSA}/2}^{+\mathrm{LSA}/2}\left(s_{\mathrm{rc}}[m]-s_{\mathrm{rc}}[m]\Pi [m;T_{\mathrm{B}}]\right)h_{\mathrm{az}}[m-n] \\
&= s_{\mathrm{NB}}[n]-s_{\mathrm{rc}}[n]\Pi [n;T_{\mathrm{B}}]\ast h_{\mathrm{az}}[n] \tag{1}
\end{align*}
Alternatively, the back-projection integral can be performed over the available samples only, i.e., considering the new nonuniformly sampled grid described by the time instants
\begin{equation*}
t[n]=n\Delta t+g[n]=n\Delta t +\left(\mathop{\sum}\limits_{i=0}^{\left\lfloor \frac{N-1}{T_{B}} \right\rfloor }i\chi _{[iT_{B},(i+1)T_{B})}\right)[n] \tag{2}
\end{equation*}
\begin{equation*}
s_{\mathrm{NU}}[n]=\frac{1}{\Delta t}\mathop{\sum}\limits_{m=-\mathrm{LSA}/2}^{+\mathrm{LSA}/2}s_{\mathrm{rc,avl}}[m]h_{\mathrm{az,avl}}[m-n]\delta [m] \tag{3}
\end{equation*}
\begin{equation*}
\delta [n]=t[n+1]-t[n]=\Delta t+\Pi [n;T_{B}-1]. \tag{4}
\end{equation*}
If the blockage is interpolated, the focused signal is given by
\begin{align*}
s_{\mathrm{I}}[n] &= \mathop{\sum}\limits_{m=-\mathrm{LSA}/2}^{+\mathrm{LSA}/2}s_{\mathrm{rc,i}}[m]h_{\mathrm{az}}[m-n] \\
& = s_{\mathrm{NB}}[n]+\psi [n]\Pi [n;T_{\mathrm{B}}]\ast h_{\mathrm{az}}[n] \tag{5}
\end{align*}
Assuming
\begin{align*}
\sigma _{B}^{2} &= \mathop{\int}\nolimits_{\!\!-B_\mathrm{az}/2}^{B_\mathrm{az}/2}\left|S_{\mathrm{NB}}\left(e^{j\Omega}\right)\ast \Pi \left(e^{j\Omega}\right)\right|^{2}\left|H_{\mathrm{az}}\left(e^{j\Omega}\right)\right|^{2}\mathrm{d}\Omega \tag{6}\\
\sigma _{\mathrm{NU}}^{2} &= \mathop{\int}\nolimits_{\!\!-B_{\mathrm{az}}/2}^{B_{\mathrm{az}}/2}\left|S_{\mathrm{NB}}\left(e^{j\Omega}\right)\ast \Delta \left(e^{j\Omega}\right)\right|^{2}\left|H_{\mathrm{az}}\left(e^{j\Omega}\right)\right|^{2}\mathrm{d}\Omega \tag{7}
\end{align*}
\begin{align*}
\sigma _{\mathrm{I}}^{2}=\mathop{\int}\nolimits_{\!\!-B_{\mathrm{az}}/2}^{B_{\mathrm{az}}/2}\left|\varPsi\left(e^{j\Omega}\right)\ast \Pi \left(e^{j\Omega}\right)\right|^{2}\left|H_{\mathrm{az}}\left(e^{j\Omega}\right)\right|^{2}\mathrm{d}\Omega \tag{8}
\end{align*}
Fig. 1 shows simulation results considering an ideal point target and a staggered SAR acquisition with the parameters described in the third column of Table I (staggered NISAR example). The plot shows the mean AASR over the swath obtained after back-projecting from the nonuniform grid. The AASR is computed here as the difference between the staggered SAR integrated-side-lobe ratio (ISLR) and the ISLR of a constant-PRI SAR system with a PRF equal to the mean staggered PRF on transmit, the same values for the other system and processing parameters, and an azimuth antenna pattern equal to zero outside the interval
Fig. 1. Mean AASR obtained after back-projecting from the nonuniform grid. Five cases are shown: considering only the valid samples for the back-projection (equivalent to (3), in black), recovering the blockage with a nearest-neighbor interpolator (in red), recovering the blockage with BLU (in green), recovering the blockage with an SE (in blue), and when no blockage is present (in turquoise). The curves show the behavior of the AASR for increasing mean PRF on transmit.
The use of back-projection directly from the nonuniform grid (i.e., without resampling) can potentially diminish the propagation of interpolation errors and noise scaling. However, this is not necessarily the case for the modes examined in this article, since the reconstruction of the low-oversampled staggered signal from its nonuniform samples is not ideal [26], [27]. For the simulation shown in Fig. 2, the parameters in the third column of Table I were also considered, and the goal was to compare the focusing when back-projecting directly from the nonuniform grid and when performing an additional resampling step before the integration. The plots in the first and second columns show zooms of the impulse response function (IRF) obtained when recovering with the SE and back-projecting from the nonuniform grid (in black), and when considering an additional resampling step (in red). The plots in the third and fourth columns show the variation of the AASR over the swath when using MIAA and BLU, respectively. The top row corresponds to a mean PRF on transmit of 1650 Hz, whereas the bottom one corresponds to a relaxed case with a mean PRF of 2000 Hz. Note that in both PRF scenarios, the resampling to the uniform grid leads to artifacts related to the propagation of blockage recovery errors. These are recognizable as periodic spurious lobes in the second IRF zooms (second column). Moreover, in the results obtained after the resampling, the side-lobe energy concentrates near the main lobe, while it is spread when back-projecting from the nonuniform grid. The resulting integrated side-lobe energy using both processing strategies is similar. In fact, for the 1650 Hz and MIAA recovery case, there is actually a slight degradation of the ISLR when back-projecting from the nonuniform grid due to a small increase of the overall side-lobe energy caused by the nonuniformity (see second column) [26]. The same is not true when recovering the data with BLU, since in this case the recovery errors are considerably larger and their propagation during the resampling dominates.
Fig. 2. Plots in the first and second columns show zooms of the IRF obtained when recovering with the SE and back-projecting from the nonuniform grid (solid black), and when considering an additional resampling step (solid red). The plots in the third and fourth columns show the variation of the ISLR over the swath when using MIAA and BLU, respectively. The top row corresponds to a mean PRF on transmit of 1650 Hz, whereas the bottom one corresponds to a relaxed case with a mean PRF of 2000 Hz.
Finally, we make a note on the discrete implementation of the back-projection integral. The discrete back-projection integral for the complete staggered signal can also be described by (3) and (4). In (4), we considered the left-Riemann approximation (up to a multiplicative constant). However, the discretization (or, equivalently, the tapering of the staggered signal) can be done in different ways [29], [30]. The use of a scheme that averages time increments, e.g., the trapezoidal rule, will potentially decrease artifacts caused by the strong nonuniformity of the elaborate staggered sampling pattern used here, as illustrated by the sketch below. This can be observed in the IRFs shown in Fig. 3, also corresponding to the parameters in the third column of Table I. Here, however, an ideal case with no blockage was considered to avoid the propagation of recovery errors. Both results were focused by back-projecting the signal from the nonuniform grid. The curves in black consider the left-Riemann sum, while the ones in red correspond to the trapezoidal rule.
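The following minimal sketch contrasts the two discretization rules for a 1-D signal on a nonuniform azimuth grid. The function names are hypothetical, and the snippet simplifies (3) to a single output sample with a generic azimuth reference.

```python
import numpy as np

def bp_weights(t, rule="riemann"):
    # Integration weights for a discrete back-projection sum over the
    # nonuniform instants t. "riemann" uses the forward increment of (4);
    # "trapezoid" averages adjacent increments, which softens artifacts
    # caused by strong sampling nonuniformity.
    dt = np.diff(t)
    if rule == "riemann":
        return np.append(dt, dt[-1])
    w = np.empty_like(t)
    w[1:-1] = 0.5 * (dt[:-1] + dt[1:])
    w[0], w[-1] = 0.5 * dt[0], 0.5 * dt[-1]
    return w

def focus_sample(s_rc, t, h_az, rule="riemann"):
    # One azimuth output sample: weighted correlation of the
    # range-compressed samples with the azimuth reference h_az(t).
    return np.sum(s_rc * np.conj(h_az(t)) * bp_weights(t, rule))
```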
Fig. 3. Zooms of the obtained IRF using back-projection from the nonuniform grid, considering the left-Riemann sum and the trapezoidal rule.
B. Performance of Super-Resolution SEs
The reconstruction of signals from nonuniform samples has been extensively studied by the signal processing community [26], [31]–[33]. For example, Yao and Thomas showed that perfect reconstruction of band-limited signals is possible using Lagrange interpolation functions if the nonuniform sampling instants do not deviate from the uniform grid by more than a quarter of the uniform sampling interval [32]. In the case of staggered SAR acquisitions, not only is the spectrum nonlimited, but the deviation from the uniform grid can also be much larger than this bound. For example, for the experimental 350 km swath/quad-pol mode of the Tandem-L concept described in the second column of Table I, the deviation is around 2.6 times larger than the Yao and Thomas condition allows, whereas for a staggered NISAR acquisition such as the one described in the third column of Table I, it is about 2.7 times (for a segment size given by the correlation length [5]). Hence, more sophisticated reconstruction approaches are required.
The use of parametric and nonparametric SEs for the recovery of interrupted SAR data has been demonstrated in [34], [35]. SEs have also been suggested for the recovery of the blockage in staggered SAR acquisitions in [20]. More specifically, the authors employed the nonuniform MIAA for their recovery step [21], [22].
In the aforementioned studies, the SEs are applied to all missing data, i.e., no distinction concerning the characteristics of the imaged target is made. However, such algorithms are derived for line spectra and, hence, are especially suitable for the reconstruction of raw data from point targets. In fact, experiments with TerraSAR-X data in [36] showed that the performance of different SEs is not satisfactory when recovering gapped data from distributed scatterers. Note that in that case, the data were missing on a uniform grid due to the synchronization link between the TerraSAR-X and TanDEM-X satellites [37]. In the context of staggered SAR, the performance of the data recovery is further impaired by the strong nonuniformity of the sampling. In fact, the distribution of the nonuniform samples and the choice of the spectral grid used for the reconstruction are known to impact the performance of the spectral estimation [21], [26], [27].
In order to recover the blockage of a staggered SAR dataset in an optimum way, it is necessary to understand the behavior of the recovery methods. This is the main goal of the remainder of this section. For that, we include a brief recap of the super-resolution SE of choice and provide an analysis of the performance of the blockage recovery with respect to different aspects (e.g., the type of data being recovered, the available signal-to-clutter ratio (SCR), the sequence design, and the chosen spectral grid). All the analysis results presented in the following were obtained through one-dimensional (1-D) simulations considering the experimental quad-pol/350 km swath mode of the Tandem-L concept, unless otherwise specified (see the second column of Table I).
1) MIAA for Data Recovery
Methods for data recovery based on super-resolution spectral estimation generally model data segments as composed of available (denoted by subscript g in the following) and missing (subscript m) samples, the latter being estimated from a spectral fit to the former.
As in [20], MIAA (specifically, MIAA-t [22]) is the SE of choice in this article. This is motivated by its simplicity, its direct applicability to the nonuniform sampling case, and its good performance for the recovery of point-like targets, which are the main source of artifacts in the low-oversampled staggered SAR case (see the remainder of this section for its performance and limitations). In the following, a brief recap of MIAA is provided to aid in the discussion presented in this section. Please refer to [22] for a detailed description of the algorithm.
The complete data segment of length N is modeled as a superposition of K complex exponentials evaluated at the sampling instants, i.e.,
\begin{equation*}
y_{N\times 1}=\mathbf {A}\alpha, \quad \mathbf {A}=\left[\begin{array}{ccc}e^{j\omega _{0}t_{0}} & \cdots & e^{j\omega _{K-1}t_{0}}\\
\vdots & \ddots & \vdots \\
e^{j\omega _{0}t_{N-1}} & \cdots & e^{j\omega _{K-1}t_{N-1}} \end{array}\right] \tag{9}
\end{equation*}
In each iteration, the complex amplitude at each spectral grid frequency is estimated from the available samples as
\begin{equation*}
\hat{\alpha }\left[\omega _{k}\right]_{i}=\frac{a_{g}^{H}\left[\omega _{k}\right]\hat{\boldsymbol{R_{g}}}_{i-1}^{-1}y_{g}}{a_{g}^{H}\left[\omega _{k}\right]\hat{\boldsymbol{R_{g}}}_{i-1}^{-1}a_{g}\left[\omega _{k}\right]} \tag{10}
\end{equation*}
\begin{equation*}
a_{g}\left[\omega _{k}\right]=\left[e^{j\omega _{k}t_{g,0}}\cdots e^{j\omega _{k}t_{g,G-1}}\right] \tag{11}
\end{equation*}
\begin{equation*}
\hat{\boldsymbol{R_{g}}}_{i} = \left\lbrace \begin{array}{lc} \mathop{\sum}\limits^{K-1}_{k=0} \left|\hat{\alpha }\left[\omega _{k}\right]\right|^{2}a_{g}\left[\omega _{k}\right]a_{g}^{H}\left[\omega _{k}\right], & i\ne 0\\
\boldsymbol{I}_{G}, & i=0 \end{array}\right. \tag{12}
\end{equation*}
\begin{equation*}
\hat{y}_{m}=\underset{k=0}{\overset{K-1}{\sum }}\left|\hat{\alpha }\left[\omega _{k}\right]\right|^{2}a_{g}^{H}\left[\omega _{k}\right]\hat{\boldsymbol{R_{g}}}_{i-1}^{-1}y_{g}a_{m}\left[\omega _{k}\right] \tag{13}
\end{equation*}
\begin{equation*}
a_{m}\left[\omega _{k}\right]=\left[e^{j\omega _{k}t_{m,0}}\cdots e^{j\omega _{k}t_{m,M-1}}\right] \tag{14}
\end{equation*}
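To make the recap concrete, the following Python sketch implements (10)–(14) for one data segment. It is a simplified illustration under stated assumptions (fixed iteration count, small diagonal loading for invertibility) and not the exact implementation used in this article; see [22] for the complete algorithm.

```python
import numpy as np

def miaa_recover(y_g, t_g, t_m, omega, n_iter=15):
    # y_g: available samples at instants t_g; t_m: missing instants;
    # omega: spectral grid in rad/s (cf. (19)-(21)).
    A_g = np.exp(1j * np.outer(t_g, omega))   # columns are a_g[omega_k], (11)
    A_m = np.exp(1j * np.outer(t_m, omega))   # steering to missing instants, (14)
    G = len(t_g)
    R_inv = np.eye(G, dtype=complex)          # initialization of (12), i = 0

    def amplitudes(R_inv):
        Ry = R_inv @ y_g
        num = (np.conj(A_g) * Ry[:, None]).sum(axis=0)
        den = (np.conj(A_g) * (R_inv @ A_g)).sum(axis=0)
        return num / den                      # (10) for all omega_k at once

    for _ in range(n_iter):
        alpha = amplitudes(R_inv)
        R = (A_g * np.abs(alpha) ** 2) @ A_g.conj().T    # (12)
        R += 1e-9 * np.trace(R).real / G * np.eye(G)     # loading; cf. Section II-B6
        R_inv = np.linalg.inv(R)

    alpha = amplitudes(R_inv)
    # (13): estimate of the missing samples from the converged spectral fit
    w = np.abs(alpha) ** 2 * (np.conj(A_g) * (R_inv @ y_g)[:, None]).sum(axis=0)
    return A_m @ w
```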
2) Impact of the Type of Target
Fig. 4 shows the performance of the blockage reconstruction as a function of the mean oversampling factor for two different types of targets. At the top right, the normalized root mean square error (NRMSE) for an ideal point target after data recovery, resampling, and azimuth compression is shown. The NRMSE for simulated distributed scatterers appears at the bottom left, and the corresponding coherence degradation is shown at the bottom right.
Fig. 4. Performance of the blockage recovery as a function of the oversampling for different recovery methods and processing approaches. (Top left) Percentage of missing samples in the echo. (Top right) NRMSE after reconstruction, resampling, and compression for an ideal point target. (Bottom left) NRMSE for distributed scatterers. (Bottom right) Corresponding coherence between the reconstructed data and the data with no blockage.
Different oversampling cases were simulated by varying the mean PRF on transmit and considering a fixed set of parameters, namely, chirp duration, azimuth bandwidth, swath, and range position. In all cases, we adopted the optimum PRI sequence design described in [5], ensuring that no consecutive azimuth samples are missing in the raw-data domain. For the parameters considered (see the second column of Table I), the maximum oversampling ratio that allows for this condition to be met was around 2.1. Two missing-sample patterns were considered: one corresponding to the actual blockage in the raw-data domain, and the other corresponding to the extended blockage in the range-compressed domain (i.e., imposing full range resolution [25]). The obtained data loss percentage for both cases is shown in Fig. 4, at the top left. The adopted reconstruction strategy follows the one in [36], where small segments around each missing event (one or more missing samples) are treated separately.
Note that even for oversampling factors of 1.1, the SE yields good results for point targets, regardless of the blockage percentage. On the other hand, the increased amount of missing data in the range-compressed case considerably decreases the quality of the BLU reconstruction for the oversampling ratios considered. As the oversampling increases, the reconstruction error with BLU approaches the one with the SE. In fact, for oversampling ratios larger than 1.9 (e.g., as is the case for the standard Tandem-L modes), the use of MIAA does not improve the overall performance in comparison to applying BLU in the raw-data domain.
In the case of pure distributed scatterers, the reconstruction considering the raw-domain blockage is better than the one considering the range-compressed blockage, regardless of the recovery method. This is because the autocorrelation of distributed scatterers decays faster than that of point targets, and the recovery is more impacted by the overall increased amount of missing samples (and possibly adjacent blockage) in the range-compressed domain. The coherence degradation caused by applying MIAA to raw data is small in comparison with the one using BLU, especially for larger oversampling ratios. On the other hand, the performance degradation caused by applying MIAA to range-compressed data in comparison to applying BLU to the raw data is significant. For example, for an oversampling ratio of around 1.09 (i.e., the experimental Tandem-L case), the coherence goes from around 0.97 using BLU in the raw-data domain to 0.9 using MIAA in the range-compressed domain. The performance of all strategies improves with increasing oversampling rates, but at a lower rate for distributed scatterers than for point targets.
A lower bound for the coherence degradation over distributed scatters can be obtained considering the case where the blockage samples are set to zero. In this case, the azimuth-dependent coherence modulation can be approximated as
\begin{equation*}
\gamma _{\mathrm{stag}}\left[n\right]=\frac{\mathop{\sum}\nolimits_{-\mathrm{LSA}/2}^{+\mathrm{LSA}/2}\left(\Pi _{\mathrm{m}}G_{\mathrm{m}}\right)\left(\Pi _{\mathrm{s}}G_{\mathrm{s}}\right)}{\sqrt{\mathop{\sum}\nolimits_{-\mathrm{LSA}/2}^{+\mathrm{LSA}/2}\left(\Pi _{\mathrm{m}}G_{\mathrm{m}}\right)^{2}\mathop{\sum}\nolimits_{-\mathrm{LSA}/2}^{+\mathrm{LSA}/2}\left(\Pi _{\mathrm{s}}G_{\mathrm{s}}\right)^{2}}} \tag{15}
\end{equation*}
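A minimal sketch of how the bound of (15) can be evaluated for one azimuth position is given below; the array-based interface is an assumption. The masks are 1 where samples are available (0 inside the blockage), and for a nonzero along-track baseline the slave mask and gain are shifted versions of the master ones.

```python
import numpy as np

def staggered_coherence_bound(mask_m, mask_s, g_m, g_s):
    # Coherence modulation of (15) when blocked samples are zeroed:
    # mask_m/mask_s flag available master/slave samples over the
    # synthetic aperture; g_m/g_s are the corresponding azimuth gains.
    pm, ps = mask_m * g_m, mask_s * g_s
    return np.sum(pm * ps) / np.sqrt(np.sum(pm ** 2) * np.sum(ps ** 2))
```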
The expected coherence modulation as a function of the along-track baseline for the staggered NISAR case is shown on the left of Fig. 5. On the right, the obtained coherence values for different levels of signal-to-noise ratio (SNR) and different interpolation methods are shown for an along-track baseline of 10 m. When using MIAA, the obtained coherences are very close to the bounds, i.e., the recovery does not improve the data quality. Note that the actual decorrelation depends on the spectral power of the interpolation error, which is itself a function of the SNR, since a lower SNR results in larger errors.
Fig. 5. (Left) Decorrelation factor due to the staggered operation for a varying along-track baseline. (Right) Estimated coherence from simulated distributed scatterers for varying SNR and an along-track baseline of 10 m.
3) Impact of SCR
The point-target simulation results in Fig. 4 consider the reconstruction of a pure point target, i.e., neither noise nor clutter is present. In this case, the reconstruction in the raw-data domain has better performance than in the range-compressed domain, where the amount of missing data due to blockage is larger. However, in real images, strong targets appear superimposed on noise and clutter. In this case, the signal-to-clutter gain obtained through range compression can benefit the reconstruction of point targets. This can be observed in the simulation results presented in Fig. 6, where the NRMSE after reconstruction, resampling, and compression for a point target under noise is shown. The figure on the left shows the performance as a function of the SCR and of the amount of missing samples. For this particular simulation, the data were randomly missed, i.e., the optimum PRI sequence design of [5] was not employed, since the goal was to evaluate the effect of different amounts of missing data for a given acquisition scenario. An oversampling ratio of 1.09 was considered. Note that a combination of higher SCR and a higher amount of missing samples can yield better performance than a lower missing-sample rate at a lower SCR (e.g., see the two red crosses on the left plot of Fig. 6). The plot on the right shows a simulation considering the Tandem-L experimental quad-pol/350 km swath mode, allowing for a variable mean PRF on transmit and considering the expected compression gain in the range-compressed domain. The performance as a function of the oversampling factor considering the reconstruction in the raw-data domain is given by the solid black curve, whereas the one in the range-compressed domain appears in solid red. Regardless of the oversampling rate, the recovery in the range-compressed domain provides better results. For DSs, no gain is obtained with the range compression, and the increasing amount of missing data will degrade the reconstruction in this domain (see Figs. 4 and 5).
Fig. 6. NRMSE after reconstruction, resampling, and compression for a point target under noise. (Left) Performance for an increasing amount of randomly missed samples and varying SCR. (Right) Performance for the Tandem-L experimental scenario (see Table I) for a varying mean PRF on transmit and a reference SNR value of 15 dB.
4) Range-Compressed Versus Raw Data Design
The results in the previous section show that the recovery of point targets using high-resolution SEs profits from increased SCR, even if consecutive missing samples occur. In principle, the PRI sequence design could also be constrained to ensure that no consecutive azimuth samples are missed at the range-compressed level [5]. However, such a design leads to a faster variation of the PRI subsequences and a larger maximum PRI, which might cause performance degradation despite the thinner gaps. In fact, the simulation results in Fig. 7 show that the range-compressed design does not generally improve the reconstruction. The simulation considered the experimental Tandem-L case, allowing for a varying mean PRF on transmit. The presented curves correspond to two PRI sequence designs: in black, ensuring no consecutive azimuth loss in the raw-data domain, and in red, ensuring no consecutive azimuth loss in the range-compressed domain. The plot on the top left shows the designed PRI sequences (for an oversampling of around 10%); the plot on the top right shows the percentage of missing samples; the plot on the bottom left shows the NRMSE for an ideal point target reconstructed in the range-compressed domain with MIAA; and the plot on the bottom right shows the NRMSE for a distributed scatterer reconstructed in the raw-data domain with BLU. Not only is there no considerable performance gain for point targets, but there is also a decrease in quality over distributed scatterers for several of the considered oversampling factors.
Fig. 7. (Top left) PRI sequences for a 10% oversampling factor. (Top right) Percentage of missing samples. (Bottom left) NRMSE for an ideal point target reconstructed in the range-compressed domain with MIAA. (Bottom right) NRMSE for a distributed scatterer reconstructed in the raw-data domain with BLU. In all plots, the curve in black corresponds to the design for no consecutive loss in the raw-data domain, while the curve in red corresponds to the design for no consecutive loss in the range-compressed domain.
5) Spectral Grid Characterization
In the case of nonuniform sampling, the time vector in (9) is given by
\begin{equation*}
t_{n}=\left(n+r_{n}\right)\Delta t \tag{16}
\end{equation*}
In [20], the authors define the spectral frequencies as
\begin{equation*}
\omega _{k}=2\pi k\frac{\Omega_{\mathrm{max}}}{K},\quad k=0,\ldots,K-1 \tag{17}
\end{equation*}
The focusing of complex SAR data acquired at equidistant time intervals considers a spectral support centered around zero. Shifting the grid in (17) by half of the support leads to steering elements of the form
\begin{equation*}
A\left[\omega _{k\pm K/2},t_{n}\right]=e^{j\frac{2\pi }{K}k\left(n+r_{n}\right)}e^{\pm j\pi \left(n+r_{n}\right)}. \tag{18}
\end{equation*}
\begin{equation*}
\omega _{k}=2\pi \left(k-\frac{K}{2}\right)\frac{\Omega_{\mathrm{max}}}{K},\quad k=0,\ldots,K-1 \tag{19}
\end{equation*}
As suggested in [21], the spectral sampling can be chosen as a fraction of the resolution of the periodogram, i.e.,
\begin{equation*}
\Delta \omega =\frac{1}{\left(t_{N-1}-t_{0}\right)p} \tag{20}
\end{equation*}
\begin{equation*}
K=\left\lfloor \mathrm{PRF_{\mathrm{mean}}}/\Delta \omega \right\rfloor \tag{21}
\end{equation*}
Note that the spectral support does not have to be limited to the mean PRF. The adequacy of a given grid to the available sampling can be assessed by means of the spectral window
\begin{equation*}
W\left[\omega _{k}\right]=\left|\underset{n=0}{\overset{N_{g}-1}{\sum }}e^{j\omega _{k}t_{g,n}}\right| \tag{22}
\end{equation*}
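A minimal sketch of the grid construction of (19)–(21) and the spectral window of (22) follows; taking Ω_max equal to the mean PRF and p = 4 are assumptions for illustration.

```python
import numpy as np

def spectral_grid(t_g, prf_mean, p=4):
    # Centered spectral grid of (19)-(21) for a segment with available
    # instants t_g; p oversamples the periodogram resolution.
    d_omega = 1.0 / ((t_g[-1] - t_g[0]) * p)        # (20)
    K = int(np.floor(prf_mean / d_omega))           # (21)
    k = np.arange(K)
    return 2 * np.pi * (k - K / 2) * prf_mean / K   # (19)

def spectral_window(omega, t_g):
    # Spectral window of (22); low values flag grid frequencies poorly
    # supported by the available sampling.
    return np.abs(np.exp(1j * np.outer(omega, t_g)).sum(axis=1))
```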
Fig. 8 shows the obtained NRMSE for a point target after recovery and focusing using different spectral supports for MIAA. The curve in black corresponds to the result obtained with
Fig. 8. NRMSE for a point target after reconstruction and focusing considering different spectral grid choices, with and without regularization: The curve in black corresponds to the result obtained with
6) Model Regularization
Depending on the nonuniformity pattern and missing data location, the covariance matrix estimated from the available samples can become rank deficient [40], [41]. In fact, this is often the case in the staggered SAR scenario due to the strong variation of the sampling in the segment.
As a regularization alternative, we suggest directly using the scheme proposed in [41] for IAA, but now considering the missing-samples scenario, i.e.,
\begin{equation*}
\hat{\boldsymbol{R}}=\mathop{\sum}\limits_{k\in \xi }\left|\hat{\alpha }\left[\omega _{k}\right]\right|^{2}a_{g}\left[\omega _{k}\right]a_{g}^{H}\left[\omega _{k}\right]+\mathop{\sum}\limits_{k\in [1,K]\setminus \xi }\left|\hat{\alpha }\left[\omega _{k}\right]\right|^{2}\boldsymbol{I}_{G} \tag{23}
\end{equation*}
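A sketch of (23) is shown below; the interface and the representation of the retained component set ξ as an index array "support" are assumptions for illustration.

```python
import numpy as np

def regularized_covariance(A_g, alpha, support):
    # Covariance of (23): components inside the signal support contribute
    # rank-one terms; the remaining power is mapped to a diagonal
    # (noise-like) term, keeping the matrix invertible.
    G, K = A_g.shape
    p = np.abs(alpha) ** 2
    inside = np.zeros(K, dtype=bool)
    inside[support] = True
    R = (A_g[:, inside] * p[inside]) @ A_g[:, inside].conj().T
    return R + p[~inside].sum() * np.eye(G)
```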
The blue curve in Fig. 8 shows the obtained NRMSE when considering the regularization. For the very low oversampling scenario, the regularization brings a small improvement and mitigates sampling-related asymmetry in the recovered IRF.
7) Choice of Segment Size
Large kernels can be helpful for data recovery with conventional algorithms due to noise suppression. However, in the recovery of staggered data with super-resolution SEs, this is not necessarily the case. This is due to the fact that large residuals in (16) often result in pathological samplings, leading to ill-conditioned covariance matrices. As discussed in the previous section, regularization approaches can be used to prevent quality degradation due to this effect. However, the use of smaller segments can also aid in the inversion. Small segments are also preferred from a computational-cost perspective, since algorithms such as MIAA are very demanding due to the operations with large matrices.
Neglecting noise, a heuristic to select the segment size for the data recovery is to evaluate the maximum deviation of the actual sampling from its best uniform sampling approximation. For example, a good segment size would be the largest size for which the maximum deviation from this uniform grid is smaller than half of the uniform sampling step, as sketched below. Fig. 9 shows the NRMSE for a point target after reconstruction and focusing considering different segment sizes. The recovery was performed considering the regularization described in the previous section. The plot on the left corresponds to the raw-data blockage pattern, while the one on the right corresponds to the range-compressed one. The vertical lines in red indicate the sizes obtained with the heuristic described above. In both cases, there is a good agreement between the obtained sizes and the minimum NRMSE.
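A minimal sketch of this heuristic is given below; anchoring the uniform approximation at the segment end points is a simplification of the best uniform fit.

```python
import numpy as np

def segment_size(t, min_size=4, max_size=40):
    # Grow the segment while the sampling stays within half a uniform
    # step of its (end-point anchored) uniform approximation.
    for n in range(min_size, max_size + 1):
        ts = t[:n]
        dt = (ts[-1] - ts[0]) / (n - 1)     # uniform step estimate
        uniform = ts[0] + dt * np.arange(n)
        if np.max(np.abs(ts - uniform)) > 0.5 * dt:
            return n - 1
    return max_size
```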
Fig. 9. NRMSE for a point target after reconstruction and focusing considering different segment sizes. The plot on the left corresponds to the raw-data blockage pattern, while the one on the right corresponds to the range-compressed one. The vertical red lines indicate the sizes obtained with the heuristic described in this section.
Modified Two-Step Reconstruction for Low-Oversampled Staggered SAR Data
From the performance analysis presented in the previous section, it is clear that the reconstruction of point targets and that of DSs have conflicting characteristics: while the former generally benefits from the range compression, the latter performs better if carried out in the raw-data domain. Moreover, super-resolution SEs have a positive impact for point-like targets only, although their performance degradation over distributed scatterers can be acceptable if the recovery is performed in the raw-data domain, depending on the performance requirements and system characteristics (e.g., oversampling rate and chirp duration).
In order to accommodate these somewhat conflicting requirements, we propose a modified strategy for the handling of low-oversampled staggered data. As in [20], blockage recovery and resampling to the uniform grid (when necessary) are performed in independent steps. However, our strategy contains the following particularities.
The recovery of the blockage is performed twice: first at raw-data level and then at range-compressed level. While the second recovery step employs a high-resolution SE and focuses on the recovery of point-like targets, the first one is performed with BLU.
A validity test is performed to accept or reject the result of the SE, in order to avoid degradation over distributed scatterers.
The spectral-estimation based recovery is applied over small segments rather than over the complete synthetic aperture.
Our proposed approach for the handling of low-oversampled staggered data is summarized in the block diagram shown in Fig. 10. In the following, the processing steps are discussed.
8) Blockage Recovery at Raw-Data Level: As discussed in Section II-A, even if considering a back-projection kernel for the focusing, the recovery of the missing data from the blocked instants is required in the low-oversampled staggered SAR scenario. Accordingly, the first step of the proposed approach is the recovery of all the blockage at raw-data level. As indicated in Fig. 10, the result of this first recovery is later used to aid in the validation of the high-resolution spectral estimation. We suggest the use of the BLU interpolator for this step, since it has optimum performance for distributed scatterers and can be implemented efficiently considering the periodicity of the PRI variation [5]. Moreover, it is known to have a better performance in terms of noise scaling when compared to multichannel strategies, such as the one used in [20].
9) Extension of Blockage Matrix: Since the reconstruction of strong targets is performed in the range-compressed domain, the missing-data matrix (a binary matrix indicating the positions of the blockage) has to be dilated in range in order to account for partially available echoes, which are treated here as invalid, i.e.,
\begin{equation*}
M_{\mathrm{block_{RC}}}=M_{\mathrm{block_{raw}}}\oplus S \tag{24}
\end{equation*}
\begin{equation*}
S=\left[\begin{array}{ccc}1 & \cdots & 1\end{array}\right]_{1 \times N_{\mathrm{chirp}}} \tag{25}
\end{equation*}
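The dilation of (24) and (25) maps directly to a morphological operation, as in the following sketch (the SciPy call is one possible implementation).

```python
import numpy as np
from scipy.ndimage import binary_dilation

def extend_blockage(mask_raw, n_chirp):
    # (24)-(25): every raw-data gap invalidates all range-compressed
    # samples whose chirp support overlaps it. mask_raw is True at the
    # blocked positions, with range along the last axis.
    s = np.ones((1, n_chirp), dtype=bool)   # structuring element S of (25)
    return binary_dilation(mask_raw, structure=s)
```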
10) Blockage Recovery at Range-Compressed-Data Level: After the range compression and the retrieval of the extended blockage mask, the second recovery step is performed by means of a high-resolution SE (in this study, MIAA). As mentioned in Section II-B, we perform the estimation for each range bin separately, and the 1-D signal is divided into small segments whose sizes are determined using the heuristic described in Section II-B7 (typically between 12 and 20 samples). The computation of the spectrum follows on the grid defined by (19)–(21), with
Note that the block diagram includes the optional use of a "Bright scatterers mask" (dashed box). The main purpose of this mask is to diminish the overall computational burden by avoiding the second recovery step over segments with very low backscatter. Note that, for this kind of target, the result based on spectral estimation techniques is likely to be invalid. Assuming that bright targets do not dominate the scene content, this mask can be created by applying a simple outlier detector to the amplitude after range compression. The mask is then dilated in both range and azimuth directions in order to diminish missed detections and to "close" dark areas corresponding to data poorly interpolated with BLU.
11) Validity Test: From the characteristics of the spectrum estimated in the previous step, we can attempt to distinguish valid from invalid recoveries. This can be accomplished, e.g., by using the Bayesian information criterion (BIC). The BIC rule is defined as [21]
\begin{align*}
&\mathrm{BIC}[M]= \\
&N\ln\!\! \left(\!\mathop{\sum}\limits_{{n}=1}^{N}\!\left|y\left[t_{n}\right]\!-\!\!\underset{k=1}{\overset{M}{\sum }}\hat{\alpha }_{\mathrm{ordered}}\left[\omega _{k}\right]e^{j\omega _{k,\mathrm{ordered}}t_{n}}\!\right|^{2}\right)\!\! + 4M\ln N \tag{26}
\end{align*}
Fig. 11. (Top) Spectrum estimated with MIAA at its first iteration (in black) and upon convergence (in red). (Bottom) BIC criterion as a function of the number of spectral components. From left to right: pure point target; point target plus noise (SNR = 12 dB); point target plus noise (SNR = 3 dB) and clutter.
Specifically, we consider the recovery based on spectral estimation techniques of a segment to be invalid if the following conditions are met
\begin{equation*}
\arg \underset{M}{\min }\;\mathrm{BIC[M]}=0 \tag{27}
\end{equation*}
\begin{align*}
&\frac{1}{N-1}\mathop{\sum}\limits_{{n}=1}^{N}\left|y_{\mathrm{RAWest}}\left[t_{n}\right]-\bar{y}_{\mathrm{RAWest}}\right|^{2} \\
&\quad\geq \frac{1}{N-1}\mathop{\sum}\limits_{{n}=1}^{N}\left|y_{\mathrm{RCest}}\left[t_{n}\right]-\bar{y}_{\mathrm{RCest}}\right|^{2} \tag{28}
\end{align*}
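A sketch of the validity test of (26)–(28) for one segment is given below. The interfaces are hypothetical: "alpha" and "omega" come from the converged spectral estimate, while y_raw_est and y_rc_est are the raw-domain (BLU) and range-compressed (SE) recoveries of the same samples.

```python
import numpy as np

def se_recovery_is_valid(y, t, omega, alpha, y_raw_est, y_rc_est, m_max=10):
    # (26): BIC over models built from the M strongest spectral components.
    order = np.argsort(np.abs(alpha))[::-1]
    N = len(y)
    bic = np.empty(m_max + 1)
    for M in range(m_max + 1):
        sel = order[:M]
        model = np.exp(1j * np.outer(t, omega[sel])) @ alpha[sel]
        bic[M] = N * np.log(np.sum(np.abs(y - model) ** 2)) + 4 * M * np.log(N)
    cond_order = np.argmin(bic) == 0                  # (27): no component selected
    cond_var = np.var(y_raw_est) >= np.var(y_rc_est)  # (28): variance comparison
    return not (cond_order and cond_var)              # True -> accept SE recovery
```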
As a final remark, note that, as is the case in [20], the quality of the recovered data using the strategy proposed here is bounded by that of staggered data acquired with no blockage, i.e., artifacts related to the nonlimited azimuth spectrum and to the nonuniformity or eventual undersampling of the nonblocked sampling pattern will still be present.
Results With Simulated Staggered SAR Data
In this section, we present results obtained with synthetic data from the DLR-HR end-to-end simulator [42], [43]. The simulator used an input L-band reflectivity map retrieved from an ALOS-2 acquisition over Mexico City, and was used to generate data emulating both the Tandem-L experimental mode (parameters in the second column of Table I) and the NISAR staggered scenario (parameters in the third column of Table I).
A. NISAR Staggered Scenario
Fig. 12 shows the results obtained for the NISAR staggered scenario using different blockage recovery strategies. The recovery methods were, from left to right: BLU, MIAA in the raw-data domain (equivalent to the blockage recovery proposed in [20]), MIAA in the range-compressed domain, and the proposed hybrid approach. The plots on the top row show the normalized amplitude images, the plots in the middle show the difference in dB between the recovered images and the reference (i.e., the staggered SAR image with no blockage), and the images in the bottom row show the coherence between the recovered images and the nonblocked reference. In this simulation, three additional point targets with an SCR above 50 dB were introduced at near-, mid-, and far-range, and they are clearly visible in the images where the reconstruction occurs at raw-data level (see the red lines along azimuth in the difference plots in the first and second columns). When recovering with MIAA in the range-compressed domain (third column), the artifacts of strong targets are suppressed at the expense of an overall coherence reduction. Finally, the proposed two-step recovery is able to suppress the strongest artifacts, while maintaining the decorrelation at the same level obtained with BLU. The computation times of the four methods used for the data recovery of the results shown in Fig. 12 are given in Table II. The computations were performed on an Intel(R) Xeon(R) CPU X7560 @ 2.27 GHz machine with 32 CPUs and 398 GB of RAM. Each method was limited to 12 parallel threads, and the complete data matrix had 11 k × 11 k samples. In this particular example, although the amount of missing samples is larger in the range-compressed domain (around 15% of the data amount, against 8% in the raw-data domain), the computation times of the third and fourth approaches are smaller than that of the second approach. This is because MIAA tends to converge faster for targets with increased SCR, i.e., it is potentially faster in the range-compressed domain. Finally, note that the implementation of MIAA was not optimized, and the computation time of the hybrid approach can be reduced by using a mask to detect strong targets, as mentioned in Section III-10.
Fig. 12. (Top) Amplitude, (middle) difference in dB between the recovered data and the nonblocked reference, and (bottom) coherence with respect to the reference. The following blockage interpolation methods were used, from left to right: BLU, MIAA in the raw-data domain (equivalent to the blockage recovery in [20]), MIAA in the range-compressed domain, and the proposed approach.
In order to evaluate the impact on interferometry, the DLR-HR end-to-end simulator was used to generate a stack with 20 images and along-track baselines uniformly distributed in the interval between
Fig. 13. Residual interferometric phases for along-track baselines of (left) 10 m, (middle) 50 m, and (right) 110 m. Results with (top row) BLU and (bottom row) the proposed approach.
Fig. 14. Phase histograms over a region of interest containing the strongest artifacts at the center of the scene. The plot on the left corresponds to the 10 m baseline case, the one in the middle to the 50 m baseline case, and the one on the right to the 110 m baseline case. The curves in black show the results using BLU, whereas the ones in red show the results with the proposed approach.
Persistent scatterers (PSs) were detected at full resolution considering an amplitude dispersion threshold of 0.2. The PSs were then processed, and the mean differential deformation velocity was estimated. Arcs with a model coherence greater than 0.85 were considered valid and, after integration, a verification was performed to detect inconsistencies in the integrated mean deformation velocity. The estimated mean velocities are shown in Fig. 15 (BLU on the left, MIAA in the middle). The difference between the mean velocity maps appears on the right (for the common points). It is possible to see that, despite the strong targets, the mean deformation velocity is mainly well estimated in both cases. This is because most side-lobes do not overlap in the different slaves and the effects cancel on average. Nevertheless, the BLU map contains a few residual biases (see red ovals). The larger the baseline diversity, the less likely such biases become. In both estimations, it is possible to see that (residual) artifacts reduced the number of valid points detected in the middle stripe, where the strong point targets dominate. This effect will be reduced if temporal coherence instead of amplitude dispersion (and DSs instead of PSs) is used. Finally, note that both maps contain residual noise mainly due to the staggered operation. To quantify this noise, a stack without any deformation or atmosphere was simulated. The resulting standard deviation of the estimated mean deformation velocity was around 0.045 cm/month for BLU and 0.048 cm/month for the spectral estimation approach. For the same configuration, the standard deviation obtained using a reference stack formed by images with a constant PRF was around 0.02 cm/month.
Fig. 15. Estimated mean deformation velocity with (left) BLU and (middle) the proposed approach. The difference map for common PSs appears on the right, over the reflectivity image obtained with BLU.
B. Tandem-L Experimental Quad-Pol Scenario
The bandwidth of the transmitted signal in the Tandem-L quad-pol experimental scenario is larger than in the NISAR one, which results in the strong artifacts being more smeared after range cell migration correction [44]. Nevertheless, SEs can still improve the recovery of point-like targets, as suggested by the performance analysis in Section II-B.
Unlike NISAR, Tandem-L is envisioned as a single-pass bistatic interferometer, and the decorrelation caused by the low-oversampled staggered operation will be more visible in the single-pass interferograms. A cross-platform approach to recover the blockage in the Tandem-L quad-pol experimental scenario—similar to what we proposed in [36] for the retrieval of the missing data caused by the synchronization link in TanDEM-X data—could be an option to maximize the bistatic coherence while minimizing phase noise and/or artifacts. Since the current plan for Tandem-L is to have continuous acquisition by the slave system (i.e., the slave image would have no missing data), the bistatic data could always be used to interpolate the monostatic one, regardless of the along-track baseline (naturally, with varying performance according to the spectral overlap). The cross-platform interpolation requires the compensation of the response of one system with respect to the other, e.g., the compensation of different system gains or antenna patterns, among others. However, the effects of the lack of precise calibration information and of changes in the backscatter of semitransparent media due to the different geometries have to be further investigated.
A suboptimal approach which can decrease the decorrelation caused by the staggered operation is to match the processing filters. Specifically, blockage can be forced on the originally nonblocked, coregistered bistatic slave at the same positions where the blockage of the monostatic master is expected to be. This will degrade the quality of the bistatic image, but can reduce the coherence loss. An example considering such a strategy is shown in Fig. 16. The figures on the top show the bistatic slave amplitudes, and the ones on the bottom show the single-pass interferometric coherences. An along-track baseline of 100 m and a zero across-track baseline were considered. For the results in the left column, the proposed approach was employed to correct the blockage of the master, while the slave has no blockage. For the results in the right column, the master blockage was forced on the slave raw data and the proposed approach was applied to both acquisitions. Although such an approach can improve the performance of coherence-based applications, it can also lead to phase biases, depending on the blockage distribution within the synthetic aperture and on the quality of the interpolation. Moreover, it is clearly not indicated for amplitude-based applications, due to the inherent quality loss of the slave image. As stated earlier, better performance can potentially be obtained if we interpolate the master blockage using information from the bistatic slave. This is the topic of a follow-on research work.
Fig. 16. (Top) Bistatic slave amplitude and (bottom) single-pass interferometric coherence, for an along-track baseline of 100 m and a zero across-track baseline. For the results in the left column, the proposed approach was employed to correct the blockage of the master and the slave has no blockage. For the results in the right column, the master blockage was forced on the slave and the proposed approach was applied to both images.
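A minimal sketch of the filter-matching step, under the assumption that the master and slave rasters are coregistered sample by sample, is shown below; the recovery of Section III is then applied to both channels.

```python
import numpy as np

def match_blockage(slave_raw, mask_master):
    # Impose the master blockage pattern on the bistatic slave so that
    # both images share the same gaps (here by zeroing the samples,
    # which are subsequently treated as missing).
    out = slave_raw.copy()
    out[mask_master] = 0.0
    return out
```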
Conclusion
In this article, we proposed an alternative strategy for the handling of low-oversampled staggered SAR data. The approach relies on the recovery of the blockage data using spectral estimation techniques applied to data in the range-compressed domain, and on the discrimination between valid and invalid recovery results based on the characteristics of the estimated spectra. We validated the methodology with simulations considering the experimental Tandem-L fully polarimetric 350 km swath mode and an eventual staggered NISAR scenario with a chirp duration of 47 μs.
Although the proposed approach relies on the use of BLU and MIAA as the interpolators in the raw-data and range-compressed domains, respectively, these can be replaced by other interpolators according to availability. For example, SEs that consider a certain spectral extent [45] or newly developed SEs for mixed spectra [46] could be evaluated as alternatives to MIAA, at the possible expense of increased computational complexity.