Microwave power transmission (MPT) technology wirelessly transfers electrical energy from a transmitting antenna array to a receiving antenna array via a microwave beam [1], [2]. It can be used to power high-altitude airships, unmanned aerial vehicles, and other platforms [3], [4]. Beam capture efficiency (BCE), the ratio of the power captured by the receiving array to the total radiated microwave power, is a key factor in the overall efficiency of an MPT system.
The highest dc-to-dc (dc-dc) overall efficiency in the history of MPT, 54%, was demonstrated by Brown in 1974 [5]. However, this record is difficult to reproduce in a long-range MPT system. In 1975, an S-band MPT experiment over a range of 1.54 km was demonstrated at the Venus Site of JPL’s Goldstone Facility. The rectifying efficiency reached 80%, whereas the dc-dc efficiency was only 7% owing to a poor BCE of 11.7% [6]. Another MPT experiment was carried out by John Mankins in 2008; the obtained BCE was less than 1/1000th of 1% because the transmitting and receiving arrays were too small for efficient transfer over the 148 km distance [7]. Therefore, the transmitting array should be designed to improve BCE, which is of great importance to the efficient operation of MPT systems.
In recent years, many efforts have been devoted to the transmitting array synthesis for optimal MPT without considering excitation errors or position errors [8]–[17]. The theoretical optimal BCE ($BCE^{\mathrm {opt}}$
) and the corresponding optimal distribution across the transmitting array can be obtained by exploiting the discrete prolate spheroidal sequences [8] or by solving a generalized eigenvalue problem [9], [10]. To simplify the feed network while maintaining a high BCE, several weighting techniques have been proposed for the transmitting array, including the isosceles trapezoidal distribution (ITD) [11], ITD with unequal spacing [12], stepped amplitude distribution [13], and uniform amplitude distribution with unequal spacing [14]. Unconventional arrays can also simplify the feed network through a clustered excitation strategy [15]. Sparsification of the transmitting array while preserving a high BCE was also discussed in [15]–[17] via compressive sensing (CS), convex programming (CP), and the combination of the two methods, respectively. Unfortunately, random errors are inevitable owing to manufacturing accuracy, and they cause the achieved BCE to deviate from the designed value. However, [18] and [19] analyzed only the tolerance of BCE to excitation phase errors and position errors, respectively. To the best of the authors’ knowledge, previous array synthesis works for optimal MPT have focused on the ideal, error-free situation.
In this work, we describe a synthesis method for the transmitting array for optimal MPT using a stochastic optimization algorithm in the presence of excitation errors, including amplitude errors and phase errors. Toward this purpose, a statistical analysis (SA) method is also presented to analyze the tolerance of BCE against excitation errors. Because of the high dimensionality of this optimization problem, the cooperatively coevolving differential evolution (CCDE) algorithm is adopted [20]. The remainder of this paper is organized as follows. Section II describes the SA method for tolerance analysis of BCE against excitation errors. Section III introduces the synthesis model and the optimization procedure of the CCDE algorithm, and Section IV presents the numerical results. Finally, Section V gives the concluding remarks.
SECTION II.
Tolerance Analysis of $BCE$
in the Presence of Excitation Errors
The formulas of BCE are derived by using the SA method while considering excitation errors. Based on these formulas, the upper and lower bounds of BCE are then obtained.
A. Formulas of $BCE$
Considering Excitation Errors
As shown in Fig. 1, the transmitting array can be an arbitrarily shaped array located in the XOY plane and consists of $N$
elements. With the effect of mutual coupling among the elements ignored, the ideal array factor is \begin{equation} AF=\sum \limits _{n=1}^{N} {w_{n} \exp \left [{ {jk\left ({{ux_{n} +vy_{n} } }\right)} }\right]} \end{equation}
where $w_{n}$
and ($x_{n},y_{n}$
) are, respectively, the complex excitation weight and position of the $n$
th element. $k=2\pi /\lambda $
denotes the wave number, $u\!=\!\sin \theta \!\cos \varphi $
, and $v=\sin \theta \sin \varphi $
. Assuming that the receiving array is in the far-field region, BCE can be expressed as \begin{equation} BCE=\frac {\int _\Psi {\left |{ {\textrm {AF}} }\right |^{2}\mathrm {d}\Psi }}{\int _\Omega {\left |{ {\mathrm {AF}} }\right |^{2}\mathrm {d}\Omega }}=\frac {{\mathbf {wRw}}^{H}}{{\mathbf {wTw}}^{H}} \end{equation}
where $\Psi $
is the receiving region, $\Omega $
is the whole visible range of transmitting array, $\mathbf {w}\!\!=\!\![w_{1},w_{2},\ldots,w_{N}]$
, R and T are both $N\!\times \! N$
matrices, and superscript “$^{H}$
” denotes the conjugate transpose. The elements of R and T are calculated as discussed in [9].\begin{align} R_{mn}=&\int _\Psi {\exp \left [{ {jk\left ({{u\Delta x_{mn} +v\Delta y_{mn}} }\right)} }\right]} \mathrm {d}\Psi \\ T_{mn}=&4\pi \,\mathrm {sinc}\left ({{k\sqrt {\Delta x_{mn}^{2} +\Delta y_{mn}^{2}}} }\right) \end{align}
where $\Delta x_{mn}=x_{m}-x_{n}$
, and $\Delta y_{mn}=y_{m}-y_{n}$
. $BCE^{\mathrm {opt}}$
is the maximum eigenvalue of the generalized eigenvalue problem [9] \begin{equation} {\mathbf {Rw}}^{\mathrm {opt}}=BCE^{\mathrm {opt}}{\mathbf {Tw}}^{\mathrm {opt}} \end{equation}
in which $\mathbf{w}^{\mathrm {opt}}$
is the corresponding eigenvector. Considering the excitation amplitude and phase errors caused by mechanical and electrical errors, the array factor becomes \begin{equation} AF=\sum \limits _{n=1}^{N} {A_{n} \left ({{1+\delta _{n}} }\right)\exp \left [{ {j\varphi _{n} +jk\left ({{ux_{n} +vy_{n}} }\right)+j\phi _{n}} }\right]}\quad \end{equation}
where $A_{n}$
and $\varphi _{n}$
are, respectively, the amplitude and phase of complex excitation $w_{n}$
. The symbol $\delta _{n}$
denotes the relative amplitude error, and $\phi _{n}$
represents the phase error. Therefore, the microwave power flowing through the angular region $S=\{\Psi,\Omega \}$
is \begin{align} P^{S}=&\sum \limits _{m=1}^{N} {\sum \limits _{n=1}^{N} {a_{mn} \delta _{mn} s_{mn} \exp \left [{ {j\Delta \phi _{mn}} }\right]}} \\ s_{mn}=&\int _{S} {\exp \left [{ {jk\left ({{u\Delta x_{mn} +v\Delta y_{mn}} }\right)+j\Delta \varphi _{mn}} }\right]} \mathrm {d}S \end{align}
where $a_{mn}=A_{m}A_{n}$
, $\delta _{mn}=(1+\delta _{m})(1+\delta _{n})$
, $\Delta \phi _{mn}=\phi _{m}\!-\!\phi _{n}$
, and $\Delta \varphi _{mn} =\varphi _{m}\!-\!\varphi _{n}$
. Clearly, $s_{mn} = s^\ast _{nm}$, where “*” denotes the complex conjugate. Assume that $\delta _{n}$ and $\phi _{n}$ are statistically independent and normally distributed with zero mean and standard deviations $\sigma _{\delta }$ and $\sigma _{\Phi }$, respectively. It then turns out that (see Appendix A) \begin{equation} P^{S}=\sum \limits _{m=1}^{N} {\sum \limits _{n=1}^{N} {a_{mn} \tau _{mn}^{S} \left ({{1+2\delta _{m} +\delta _{m} \delta _{n}} }\right)}} \end{equation}
where $\tau ^{S}_{mn}=s^{r}_{mn}\cos (\Delta \phi _{mn})-s^{i}_{mn}\sin (\Delta \phi _{mn})$
, $s^{r}_{mn}$
and $s^{i}_{mn}$
are the real and imaginary part of $s_{mn}$
, respectively, and $\tau ^{S}_{mn}=\tau ^{S}_{nm}$
. According to Taylor polynomial approximation, we can get $\cos (\Delta \phi _{mn})\approx 1-\phi ^{2}_{m}/2-\phi ^{2}_{n}/2+\phi _{m}\phi _{n}$
and $\sin(\Delta \phi _{mn})\approx \phi _{m}-\phi _{n}$
. By substituting the above two approximations into (9), $P^{S}$
turns out to be the sum of $P^{S}_{\textrm {A}}$
and $P^{S}_{\textrm {B}}$
, the expressions of which are (see Appendix B) \begin{align} P_{\mathrm {A}}^{S} =\sum \limits _{m=1}^{N} {P_{\mathrm {A}m}^{\mathrm {S}}} \\ P_{\mathrm {B}}^{S} =\sum \limits _{m=1}^{N} {P_{\mathrm {B}m}^{\mathrm {S}}} \end{align}
where $P^{S}_{\mathrm {A}m}=c^{S}_{\mathrm {A}m}(1+2\delta _{m})-\phi _{m}(1+\delta _{m})(c^{S}_{\mathrm {B}m}\phi _{m}+2c^{S}_{\mathrm {C}m})+c^{S}_{\mathrm {D}m}\delta ^ {2}_ {m}$
, and $P^{S}_{\mathrm {B}m}=\delta _{m} v^{S}_{\mathrm {A}m}+\phi _{m}v^{S}_{\mathrm {B}m}+\delta _{m}\phi _{m}v^{S}_{\mathrm {C}m}+\phi ^ {2}_{m}v^{S}_{\mathrm {D}m}$
. The coefficients in $P^{S}_{\mathrm {A}m}$
are given as \begin{align} c_{\mathrm {A}m}^{S}=&\sum \limits _{n=1}^{N} {a_{mn} s_{mn}^{r}} \\ c_{\mathrm {B}m}^{S}=&\sum \limits _{\substack { n=1 \\ n\ne m \\ }}^{N} {a_{mn} s_{mn}^{r}} \\ c_{\mathrm {C}m}^{S}=&\sum \limits _{\substack { n=1 \\ n\ne m \\ }}^{N} {a_{mn} s_{mn}^{i}} \\ c_{\mathrm {D}m}^{S}=&a_{mm} s_{mm}^{r} \end{align}
and the coefficients in $P^{S}_{\mathrm {B}m}$
are \begin{align} v_{\mathrm {A}m}^{S}=&\sum \limits _{\substack { n=1 \\ n\ne m \\ }}^{N} {\left [{ {a_{mn} s_{mn}^{r} \delta _{n} \left ({{1-\phi _{n}^{2}} }\right)} }\right.\left.{ {+a_{mn} s_{mn}^{i} 2\phi _{n} \left ({{1+\delta _{n}} }\right)} }\right]}\notag \\ \\ v_{\mathrm {B}m}^{S}=&\sum \limits _{\substack { n=1 \\ n\ne m \\ }}^{N} {a_{mn} s_{mn}^{r} \phi _{n}} \\ v_{\mathrm {C}m}^{S}=&\sum \limits _{\substack { n=1 \\ n\ne m \\ }}^{N} {a_{mn} s_{mn}^{r} \left ({{2+\delta _{n}} }\right)\phi _{n} } \\ v_{\mathrm {D}m}^{S}=&-\sum \limits _{\substack { n=1 \\ n\ne m \\ }}^{N} {a_{mn} s_{mn}^{r} \delta _{n}} \end{align}
Thus BCE can be expressed as $\textit {BCE}=(\eta +P^\Psi _{\mathrm {B}}/P^\Omega _{\mathrm {A}})/ (1+P^\Omega _{\mathrm {B}}/P^\Omega _{\mathrm {A}})$
, in which $P^{\Psi }$
is the receiving power, $P^{\Omega }$
is the total transmitting power, and $\eta =P^\Psi _{\mathrm {A}}/P^\Omega _{\mathrm {A}}$
. In order to get the upper and lower bounds of BCE, the bounds of $\eta $
, $P^{S}_{\mathrm {A}}$
and $P^{S}_{\mathrm {B}}$
should be obtained in advance.
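The quantities above can be evaluated numerically. The following sketch assumes an illustrative 4×4 half-wavelength lattice and a circular receiving region of radius $r_{0}=2/L_{n}$; the grid resolution and error levels are our own choices, not values from the paper.

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative geometry (assumed): 4x4 half-wavelength square lattice.
lam = 1.0
k = 2 * np.pi / lam
Ln = 4
gx = np.arange(Ln) * 0.5 * lam
x, y = [a.ravel() for a in np.meshgrid(gx, gx)]
N = x.size
dx = x[:, None] - x[None, :]                 # Delta x_mn
dy = y[:, None] - y[None, :]                 # Delta y_mn

# Eq. (4): T_mn = 4*pi*sinc(k r_mn), with sinc(t) = sin(t)/t (unnormalized).
r = k * np.hypot(dx, dy)
T = 4 * np.pi * np.sinc(r / np.pi)           # np.sinc(t) = sin(pi t)/(pi t)

# Eq. (3): R_mn integrated over the circle u^2 + v^2 <= r0^2 on a grid.
r0 = 2 / Ln
g = np.linspace(-r0, r0, 81)
uu, vv = np.meshgrid(g, g)
mask = uu**2 + vv**2 <= r0**2
u, v = uu[mask], vv[mask]
cell = (g[1] - g[0]) ** 2
R = (np.exp(1j * k * (u[:, None, None] * dx + v[:, None, None] * dy))
     .sum(axis=0) * cell).real               # symmetric region -> real matrix

def bce(w):
    """Eq. (2): BCE = (w R w^H) / (w T w^H); both quadratic forms are real."""
    return float((w @ R @ w.conj()).real / (w @ T @ w.conj()).real)

# Eq. (5): BCE^opt is the largest generalized eigenvalue of R w = BCE T w.
vals, vecs = eigh(R, T)
bce_opt, w_opt = vals[-1], vecs[:, -1].astype(complex)

# Eq. (6): perturb the optimal excitations with random errors.
rng = np.random.default_rng(0)
sd, sp = 0.05, np.deg2rad(5.0)               # sigma_delta = 5%, sigma_phi = 5 deg
w_err = w_opt * (1 + rng.normal(0, sd, N)) * np.exp(1j * rng.normal(0, sp, N))
print(bce_opt, bce(w_err))
```

Any perturbed excitation yields a Rayleigh quotient no larger than the maximum eigenvalue, so the error-free $BCE^{\mathrm {opt}}$ is always an upper bound on the perturbed BCE here.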
B. Bounds of $P^{S}_{\mathrm {A}}\,\,(S=\{\Psi,\Omega \})$ and $\eta $
The mean
of $P^{S}_{\mathrm {A}m}$
is $u^{S}_{\mathrm {A}m}=c^{S}_{\mathrm {A}m}-\sigma ^{2}_\Phi c^{S}_{\mathrm {B}m}+\sigma ^{2}_\delta c^{S}_{\mathrm {D}m}$
, and the variance is $(\sigma ^ {S}_{\mathrm {A}m})^{2}=\sigma ^{2}_\delta (2c^{S}_{\mathrm {A}m}-\sigma ^{2}_{\Phi } c^{S}_{\mathrm {B}m})^{2}+2\sigma ^ {2}_\Phi (1+\sigma ^{2}_\delta)[\sigma ^{2}_\Phi (c^{S}_{\mathrm {B}m})^{2}+2(c^{S}_{\mathrm {C}m})^{2}]+2\sigma ^{4}_\delta (c^{S}_{\mathrm {D}m})^{2}$
. Since $P^{S}_{\mathrm {A}n}$
and $P^{S}_{\mathrm {A}m}~(m\ne n)$
are statistically independent, the mean and variance of $P^{S}_{\mathrm {A}}$
are obtained as \begin{align} u_{\mathrm {A}}^{S}=&\sum \limits _{m=1}^{N} {u_{\mathrm {A}m}^{S}} \\ \left ({{\sigma _{\mathrm {A}}^{S}} }\right)^{2}=&\sum \limits _{m=1}^{N} {\left ({{\sigma _{\mathrm {A}m}^{S}} }\right)^{2}} \end{align}
and the correlation coefficient between $P^\Psi _{\mathrm {A}}$
and $P^\Omega _{\mathrm {A}}$
is \begin{equation} \rho =\sum \limits _{m=1}^{N} {\rho _{m}} \end{equation}
where $\rho _{m}=[\sigma ^{2}_\delta (2c^\Psi _{\mathrm {A}m}-\sigma ^{2}_\Phi c^\Psi _{\mathrm {B}m})(2c^\Omega _{\mathrm {A}m}-\sigma ^{2}_\Phi c^\Omega _{\mathrm {B}m})+2\sigma ^{2}_\Phi (1+\sigma ^{2}_\delta)(\sigma ^{2}_\Phi c^\Psi _{\mathrm {B}m}c^\Omega _{\mathrm {B}m}+2c^\Psi _{\mathrm {C}m}c^\Omega _{\mathrm {C}m})+2\sigma ^{4}_\delta c^\Psi _{\mathrm {D}m}c^\Omega _{\mathrm {D}m}]/(\sigma ^\Psi _{\mathrm {A}}\sigma ^\Omega _{\mathrm {A}})$
. We define $L^{S}$
as \begin{equation} L^{S}=\frac {1}{\left ({{\sigma _{\mathrm {A}}^{S}} }\right)^{4}}\sum \limits _{m=1}^{N} {\textrm {E}\left [{ {\left |{ {P_{\mathrm {A}m}^{S} -u_{\mathrm {A}m}^{S}} }\right |^{4}} }\right]} \end{equation}
where E() denotes the expected value. The values of $L^{S}$
are shown in Fig. 2. In this calculation, the MPT system has a square transmitting array of $L_{n}\times L_{n}$ elements on a regular lattice
and a circular receiving region ($u^{2}+v^{2}\le r^{2}_{0}$
and $r_{0}=2/L_{n}$
). The complex excitations are set as $\mathbf{w}^{\mathrm {opt}}$
in [9]. As a result, the limit of $L^{S}$
as $N$
approaches infinity is zero. According to the Lyapunov central limit theorem (CLT), $P^{S}_{\mathrm {A}}$
is asymptotically normally distributed. For a confidence level $\gamma $
, the confidence interval is $[u^{S}_{\mathrm {A}}-\beta _{1}\sigma ^ {S}_{\mathrm {A}},u^{S}_{\mathrm {A}}+\beta _{1}\sigma ^{S}_{\mathrm {A}}]$
. Namely, the probability that $P^{S}_{\mathrm {A}}$
lies within this interval equals $\gamma $
. For example, $\beta _{1} =1.96$
when $\gamma =95\%$
. Thus the lower bound $(P^{S}_{\mathrm {A}})^{\mathrm {L}}$
is $u^{\mathrm {S}}_{\mathrm {A}}-\beta _{1}\sigma ^{\mathrm {S}}_{\mathrm {A}}$
and the upper bound $(P^{S}_{\mathrm {A}})^{\mathrm {U}}$
is $u^{\mathrm {S}}_{\mathrm {A}}+\beta _{1}\sigma ^{\mathrm {S}}_{A}$
.
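The factor $\beta _{1}$ is simply the two-sided quantile of the standard normal distribution. A brief check using SciPy (the 99.9% level used later in Section IV is included for reference):

```python
from scipy.stats import norm

def beta1(gamma):
    """Two-sided normal confidence factor: P(|Z| <= beta1) = gamma."""
    return float(norm.ppf((1 + gamma) / 2))

print(beta1(0.95))    # ~1.96, matching the example in the text
print(beta1(0.999))   # factor for the 99.9% confidence level
```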
The bounds of $\eta $
are related to the correlation coefficient. If $\rho =\pm 1$
, which indicates a linear relationship between $P^\Psi _{\mathrm {A}}$
and $P^\Omega _{\mathrm {A}}$
, it turns out that $P^\Psi _{\mathrm {A}} = aP^\Omega _{\mathrm {A}}+b$ with $a=\rho \sigma ^\Psi _{\mathrm {A}}/\sigma ^{\Omega }_{\mathrm {A}}$ and $b= u^\Psi _{\mathrm {A}}-au^\Omega _{\mathrm {A}}$. Then $\eta $ can be rewritten as $\eta =a+b/P^\Omega _{\mathrm {A}}$. With the bounds of $P^{S}_{\mathrm {A}}$, we obtain $\eta _{1}=a+b/(P^\Omega _{\mathrm {A}})^{\mathrm {L}}$ and $\eta _{2}=a+b/(P^\Omega _{\mathrm {A}})^{\mathrm {U}}$, so the lower bound is $\eta ^{\mathrm {L}}=\min (\eta _{1}, \eta _{2})$ and the upper bound is $\eta ^{\mathrm {U}}=\max (\eta _{1}, \eta _{2})$. If $\rho \ne \pm $
1, the joint probability density function of $P^\Psi _{\mathrm {A}}$
and $P^\Omega _{\mathrm {A}}$
is \begin{align} f_{1} \left ({{P_{\mathrm {A}}^{\Psi },P_{\mathrm {A}}^{\Omega }} }\right)=&\frac {1}{2\pi \sigma _{\mathrm {A}}^{\Psi } \sigma _{\mathrm {A}}^{\Omega } \sqrt {1-\rho ^{2}}}\exp \Biggl \{{{-\frac {1}{2\left ({{1-\rho ^{2}} }\right)}} } \notag \\&\cdot \Biggl [{ {\frac {\left ({{P_{\mathrm {A}}^{\Psi } -u_{\mathrm {A}}^{\Psi }} }\right)^{2}}{2\left ({{\sigma _{\mathrm {A}}^{\Psi } } }\right)^{2}}+\frac {\left ({{P_{\mathrm {A}}^{\Omega } -u_{\mathrm {A}}^{\Omega }} }\right)^{2}}{2\left ({{\sigma _{\mathrm {A}}^{\Omega }} }\right)^{2}}} } \notag \\&{ {{ {-2\,\rho \frac {\left ({{P_{\mathrm {A}}^{\Psi } -u_{\mathrm {A}}^{\Psi }} }\right)\left ({{P_{\mathrm {A}}^{\Omega } -u_{\mathrm {A}}^{\Omega }} }\right)}{\sigma _{\mathrm {A}}^{\Psi } \sigma _{\mathrm {A}}^{\Omega }}} }\Biggr]} }\Biggr \} \end{align}
and the probability density function of $\eta $
is \begin{equation} f_{2} \left ({\eta }\right)=\int _{0}^{+\infty } {f_{1} \left ({{P_{\mathrm {A}}^{\Omega } \eta ,P_{\mathrm {A}}^{\Omega }} }\right)} P_{\mathrm {A}}^{\Omega } \mathrm {d}\left ({{P_{\mathrm {A}}^{\Omega }} }\right) \end{equation}
For the same confidence level $\gamma $
, the bounds of $\eta $
can be expressed as [$\eta _{0}-\beta _{2}$
, $\eta _{0}+\beta _{2}$
], in which $\eta _{0}=\int _{0}^{1} {f_{2}(\eta)\,\eta \,\mathrm {d}\eta }$ and $\beta _{2}$ is the solution of $\gamma =\int _{\eta _{0}-\beta _{2}}^{\eta _{0}+\beta _{2}} {f_{2}(\eta)\,\mathrm {d}\eta }$.
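The ratio density and its moments can be checked by one-dimensional quadrature. The sketch below uses illustrative moments for $P^\Psi _{\mathrm {A}}$ and $P^\Omega _{\mathrm {A}}$ (our own assumed values, not from the paper) and verifies that $f_{2}$ integrates to one with mean close to $u^\Psi _{\mathrm {A}}/u^\Omega _{\mathrm {A}}$.

```python
from scipy.integrate import quad
from scipy.stats import multivariate_normal

# Assumed illustrative moments of P_A^Psi, P_A^Omega and their correlation.
u_psi, u_om = 0.90, 1.00
s_psi, s_om = 0.02, 0.02
rho = 0.8
f1 = multivariate_normal(mean=[u_psi, u_om],
                         cov=[[s_psi**2, rho * s_psi * s_om],
                              [rho * s_psi * s_om, s_om**2]]).pdf  # bivariate pdf

def f2(eta):
    """Density of eta = P_A^Psi / P_A^Omega via the ratio-distribution integral.
    Finite limits suffice because f1 is negligible far from u_om."""
    val, _ = quad(lambda p: f1([p * eta, p]) * p, 0.8, 1.2)
    return val

mass, _ = quad(f2, 0.8, 1.0, limit=200)            # total probability, ~1
eta0, _ = quad(lambda e: f2(e) * e, 0.8, 1.0, limit=200)
print(mass, eta0)
```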
C. Bounds of $P^{S}_{\mathrm {B}}\,\,(S=\{\Psi,\Omega \})$
By a discussion similar to that for $P^{S}_{\mathrm {A}}$
, variables $v^{S}_{\mathrm {A}m}$
, $v^{S}_{\mathrm {B}m}$
, $v^{S}_{\mathrm {C}m}$
, and $v^{S}_{\mathrm {D}m}$
are all found to be normally distributed with zero mean. Their
variances are, respectively, $(\sigma ^ {S}_{1m})^{2}=\sigma ^{2}_{\delta } (3\sigma ^{4}_{\Phi }-2\sigma ^{2}_{\Phi }+1)(\kappa ^{S}_{m})^{2}+4(1+\sigma ^{2}_\delta)\sigma ^{2}_\Phi (\vartheta ^{S}_{m})^{2}$
, $(\sigma ^{S}_{2m})^{2}=\sigma ^{2}_\Phi (\kappa ^{S}_{m})^{2}$
, $(\sigma ^{S}_{3m})^{2}=\sigma ^{2}_\Phi (4+\sigma ^{2}_\delta)$
$(\kappa ^{S}_{m})^{2}$
, and
$(\sigma ^{S}_{4m})^{2}= \sigma ^{2}_{\delta } (\kappa ^{S}_{m})^{2}$
, in which \begin{align} \kappa _{m}^{\mathrm {S}}=&\sqrt {\sum \limits _{\substack {n=1 \\ n\ne m \\ }}^{N} {a_{mn}^{2} \left ({{s_{mn}^{r}} }\right)^{2}}} \\ \vartheta _{m}^{\mathrm {S}}=&\sqrt {\sum \limits _{\substack {n=1 \\ n\ne m}}^{N} {a_{mn}^{2} \left ({{s_{mn}^{i}} }\right)^{2}}} \end{align}
Therefore, the confidence intervals are $[-\beta _{1}\sigma ^{S}_{qm}, \beta _{1}\sigma ^{S}_{qm}]$
$(q \,=\,1,2,3,4)$
for the confidence level $\gamma $
. Define $p^{S}_{\mathrm {B}}$
as \begin{equation} p_{\mathrm {B}}^{\mathrm {S}} =\beta _{1} \sum \limits _{m=1}^{N} {p_{\mathrm {B}m}^{\mathrm {S}}} \end{equation}
where $p^{S}_{\mathrm {B}m}=\delta _{m}\sigma ^{S}_{1m}+\phi _{m}\sigma ^{S}_{2m}+\delta _{m}\phi _{m}\sigma ^{S}_{3m}+\phi ^{2} _{m}\sigma ^{S}_{4m}$
, and it is obvious that $\vert P^{S}_{\mathrm {B}} \vert \le \vert p^{S}_{\mathrm {B}}\vert $
. By using the Lyapunov CLT again, $p^{S}_{\mathrm {B}}$
is also normally distributed, with mean and variance given, respectively, by \begin{align} u_{\mathrm {B}}^{\mathrm {S}}=&\beta _{1} \sigma _{\delta } \sigma _{\phi }^{2} \sum \limits _{m=1}^{N} {\kappa _{m}^{\mathrm {S}}} \\ \left ({{\sigma _{\mathrm {B}}^{\mathrm {S}}} }\right)^{2}=&\beta _{1}^{2} \sum \limits _{m=1}^{N} {\Biggl [{ {\sigma _{\delta }^{2} \left ({{\sigma _{\mathrm {B1}m}^{\mathrm {S}}} }\right)^{2}+\sigma _{\phi }^{2} \left ({{\sigma _{\mathrm {B}2m}^{\mathrm {S}}} }\right)^{2}} }} \notag \\&{ {+\,\sigma _{\delta }^{2} \sigma _{\phi }^{2} \left ({{\sigma _{\mathrm {B3}m}^{\mathrm {S}}} }\right)^{2}+2\sigma _{\phi }^{4} \left ({{\sigma _{\mathrm {B4}m}^{\mathrm {S}}} }\right)^{2}} }\Biggr] \end{align}
For the confidence level $\gamma $
, we can get $\vert p^{S}_{\mathrm {B}}\vert \le u^{S}_{\mathrm {B}}+\beta _{1}\sigma ^{S}_{\mathrm {B}}$
. Considering
the condition $\vert P^{S}_{\mathrm {B}} \vert \le \vert p^{S}_{\mathrm {B}}\vert $
, the lower and upper bounds of $P^{S}_{\mathrm {B}}$
are $-u^{S}_{\mathrm {B}}-\beta _{1}\sigma ^{S}_{\mathrm {B}}$
and $u^{S}_{\mathrm {B}}+\beta _{1}\sigma ^{S}_{\mathrm {B}}$
, respectively.
D. Bounds of $BCE$
Based on
the above sections, we can get the upper bound of BCE as $\varsigma ^{\mathrm {U}}=[\eta ^{\mathrm {U}}+(P^\Psi _{\mathrm {B}})^{\mathrm {U}}/(P^\Omega _{\mathrm {A}})^{\mathrm {L}}]/ [1+(P^\Omega _{\mathrm {B}})^{\mathrm {L}}/(P^\Omega _{\mathrm {A}})^{\mathrm {L}}]$
and the
lower bound as $\varsigma ^{\mathrm {L}}=[\eta ^{\mathrm {L}}+(P^\Psi _{\mathrm {B}})^{\mathrm {L}}/(P^\Omega _{\mathrm {A}})^{\mathrm {L}}]/ [1+(P^\Omega _{\mathrm {B}})^{\mathrm {U}}/(P^\Omega _{\mathrm {A}})^{\mathrm {L}}]$
. Considering
the practical situation, the bounds of BCE should be modified as $BCE^{\mathrm {U}} =\min (\varsigma ^{\mathrm {U}}, BCE^{\mathrm {opt}})$
and $BCE^{\mathrm {L}} =\max (\varsigma ^{\mathrm {L}},0)$
, respectively, where $BCE^{\mathrm {opt}}$
is the optimal BCE.
E. Inclusion Property of SA-Based Bounds
It should be noted that the proposed SA method is not fully inclusive; that is, not all possible BCE values are analytically included in the SA-based bounds. However, a tighter interval can be obtained because of this property, as the following example illustrates.
Random variable $X_{n}(n=1,2,\ldots,N)$
is supposed to be distributed normally with mean $u_{n}$
and variance $\sigma ^{2}_{n}$
, and $X_{m}$
and $X_{n}$
($m\ne n$
) are statistically independent. Then $Y=X_{1}+X_{2}+\ldots + X_{N}$
is also normally distributed with mean $u_{Y}=u_{1}+\ldots +u_{N}$
and variance $\sigma ^{2}_{Y}=\sigma ^{2}_{1}+\ldots +\sigma ^{2}_{N}$
For the confidence level $\gamma $
, the confidence interval of $X_{n}-u_{n}$
is $-\beta \sigma _{n} \le X_{n}-u_{n} \le \beta \sigma _{n}$
. Therefore, we can get $-\beta \sigma _{p} \le Y-u_{Y} \le \beta \sigma _{p}$
by inequality rules and $-\beta \sigma _{Y} \le Y-u_{Y} \le \beta \sigma _{Y}$
by the SA method, in which $\sigma _{p}=\sigma _{1}+\ldots +\sigma _{N}$
and $\sigma _{Y}<\sigma _{p}$
. Hence the SA method yields a shorter interval.
Next, consider the probability that $Y-u_{Y}=\pm \beta \sigma _{p}$
, denoted by $p_{e}$
. If $\sigma _{1}\,=\,\ldots \,=\,\sigma _{N}\,=\,\sigma $
, $p_{e} =\exp (-N\beta ^{2}/2)/\sqrt {2N\pi \sigma ^{2}}$
. We give some numerical results with $\sigma \,=\,1$
and $\beta \,=\,3$
for different $N$
. For $N \,=\,10$
, $p_{e} \,=\, 3.6\times 10^{-21}$
; for $N \,=\,20$
, $p_{e} \,=\, 7.3\times 10^{-41}$
Therefore, $Y-u_{Y}$
is practically unable to reach $\pm \beta \sigma _{p}$
, and these extreme values are excluded from the SA-based bounds. As a result, the SA-based bounds are tighter than the inequality-based bounds.
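The figures quoted in this example follow directly from the density expression:

```python
import math

def p_e(N, beta, sigma=1.0):
    """Density value at Y - u_Y = +-beta*sigma_p for equal sigmas:
    p_e = exp(-N beta^2 / 2) / sqrt(2 N pi sigma^2)."""
    return math.exp(-N * beta**2 / 2) / math.sqrt(2 * N * math.pi * sigma**2)

print(p_e(10, 3))   # ~3.6e-21, as quoted in the text
print(p_e(20, 3))   # ~7.3e-41
```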
SECTION III.
Array Synthesis in the Presence of Excitation Errors
Array synthesis has been studied in many works without considering random excitation errors, which are inevitable in practice and degrade the performance of MPT applications. It can be seen from (8) that $s^{i}_{mn}$
will be zero when all elements are excited in-phase for a regular receiving region, such as a circular one. This feature helps decrease the variances of $P^{S}_{\mathrm {A}}$
and $P^{S}_{\mathrm {B}}$
, and thus the deviation of BCE from the designed value is reduced. Therefore, the antenna elements are assumed to be excited in-phase in the following discussion.
In this paper, the positions and the nominal excitation amplitudes of the antenna elements are optimized simultaneously by the CCDE algorithm to improve the worst-case performance $BCE^{\mathrm {L}}$
of the transmitting array based on the proposed SA method. The positions and excitation amplitudes are assumed to be symmetric about the x-axis and the y-axis, which reduces the complexity of the feed network and the problem dimensionality. The optimization model can be established as \begin{align}&\mathrm {Find}~[{x_{1},\ldots,x_{N_{1}},y_{1},\ldots,y_{N_{1}},A_{1},\ldots,A_{N_{1}}}] \\&\mathrm {Max}\cdot f=BCE^{L} \end{align}
where $N_{1}=N/4$
. Variables $x_{n}$
, $y_{n}$
and $A_{n}(n=1,2,\ldots,N_{1})$
are the $x$
position, $y$
position, and excitation amplitude of the $n$
th element in the first quadrant. The element spacing is constrained by $\Delta x^{2}_{mn}+\Delta y^{2}_{mn}\ge d^{2}_{\min }(m\ne n)$
, $x_{n}\ge d_{\mathrm {min}}/2$
and $y_{n}\ge d_{\mathrm {min}}/2$
, in which $d_{\mathrm {min}}$
is the minimum spacing between adjacent elements. Moreover, all elements are confined on a $D_{x}\times D_{y}$
aperture, which can be guaranteed by $x_{n}\le D_{x}/2$
and $y_{n}\le D_{y}/2$
. To handle the minimum element spacing constraint in the computer program, three steps are carried out. First, random positions are generated for each element by the evolutionary algorithm. Second, the pairwise distances between element positions are computed and all infeasible distances smaller than $d_{\mathrm {min}}$
are identified. Third, a penalty based on these infeasible distances is added to the fitness function (32).
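The three steps above can be sketched as a penalty function. The penalty weight and the quadratic form of the penalty are our own assumptions; the paper only states that a penalty proportional to the violations is applied.

```python
import numpy as np

def spacing_penalty(x, y, d_min, weight=1e3):
    """Penalty for element pairs closer than d_min (steps 2-3 in the text).

    x, y: first-quadrant element coordinates. `weight` is an assumed
    penalty coefficient, not a value given in the paper.
    """
    dx = x[:, None] - x[None, :]
    dy = y[:, None] - y[None, :]
    d = np.sqrt(dx**2 + dy**2)
    iu = np.triu_indices(len(x), k=1)            # count each pair once
    viol = np.maximum(0.0, d_min - d[iu])        # infeasible shortfalls
    # Quadrant symmetry also requires x_n, y_n >= d_min / 2.
    viol_edge = np.maximum(0.0, d_min / 2 - np.concatenate([x, y]))
    return weight * (np.sum(viol**2) + np.sum(viol_edge**2))

# Example: two elements only 0.3 apart with d_min = 0.5 incur a penalty.
pen = spacing_penalty(np.array([0.3, 0.6]), np.array([0.3, 0.3]), d_min=0.5)
print(pen)   # positive, since the pair violates the minimum spacing
```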
The CCDE algorithm, namely the combination of a cooperative coevolution framework and the differential evolution algorithm, is used to solve this large-scale optimization problem. The $S_{K}$
strategy in [21] is considered. As shown in Fig. 3, the $3N_{1}$
dimensional population with $I$
individuals is decomposed into $J$
subpopulations. The $i$
th individual is denoted as $p_{i}$
($i=1,2,\ldots,I$
), and the $j$
th part of $p_{i}$
is denoted as $p_{ij}(i=1,2,\ldots,I$
, and $j=1,2,\ldots,J$
), which is $s_{j}$
dimensional. In most cases, $s_{1}=s_{2}=\ldots =s_{J-1}=s$
and $0<s_{J}<s\,\,(3N_{1}=(J-1)s+s_{J})$
.
The CCDE algorithm will work better if interacting variables are placed within the same subpopulation. However, it is not always known in advance how these $3N_{1}$
variables are related. To alleviate this problem, we adopt the random grouping strategy proposed in [22]. By randomly decomposing the $3N_{1}$-dimensional population into $J$ subpopulations at each iteration, the probability that two interacting variables are placed in the same subpopulation at least once increases with the number of iterations. The pseudocode of the CCDE algorithm is given as
SECTION Algorithm 1
Create and Initialize $J$
Subpopulations
while termination criterion is not met do
for each subpopulation $j=1,2,\ldots,J$
do
for each individual $i=1,2,\ldots,I $
do
if $f(\mathrm {com}(p_{ij}, p^{\mathrm {best}}_{i})) > f(p^{\mathrm {best}}_{i})$
then
replace the $j$
th part of $p^{\mathrm {best}}_{i}$
by $p_{ij}$
;
if $f(p^{\mathrm {best}}_{i}) > f(p^{\mathrm {best}})$
then
replace $p^{\mathrm {best}}$
by $p^{\mathrm {best}}_{i}$
;
where $p^{\mathrm {best}}_{i}=(p^{\mathrm {best}}_{i1},\ldots,p^{\mathrm {best}}_{ij},\ldots,p^{\mathrm {best}}_ {iJ})$
is the optimal individual over the history of $p_{i}$
, $p^{\mathrm {best}}_{ij}$
is the $j$
th part of $p^{\mathrm {best}}_{i}$
, and $p^{\mathrm {best}}$
is the optimal individual of the population over the iterations. The operator com($p_{ij}$
, $p^{\mathrm {best}}_{i}$
) returns $(p^{\mathrm {best}}_{i1},\ldots, p^{\mathrm {best}}_{i(j-1)},p_{ij},p^{\mathrm {best}}_{i(j+1)},\ldots,p^{\mathrm {best}}_{iJ})$
, and $f$
is the fitness function given as (32).
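A minimal sketch of the random grouping and the com(·,·) context-vector evaluation follows; the fitness here is a stand-in toy function, not the paper's $BCE^{\mathrm {L}}$ objective.

```python
import numpy as np

rng = np.random.default_rng(1)

def com(p_best_i, p_ij, group):
    """Context-vector evaluation: insert the j-th subcomponent p_ij into a
    copy of the best individual p_best_i (the com(.,.) operator above)."""
    trial = p_best_i.copy()
    trial[group] = p_ij
    return trial

def random_grouping(dim, J):
    """Randomly partition the dim variable indices into J groups (per [22])."""
    idx = rng.permutation(dim)
    return np.array_split(idx, J)

# Toy illustration with an assumed fitness to be maximized.
f = lambda v: -np.sum(v**2)
dim, J = 12, 3
groups = random_grouping(dim, J)
p_best = rng.uniform(-1, 1, dim)
p_ij = np.zeros(len(groups[0]))                  # candidate subcomponent
if f(com(p_best, p_ij, groups[0])) > f(p_best):  # Algorithm 1 acceptance test
    p_best[groups[0]] = p_ij                     # keep the improved part
print(f(p_best))
```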
Here, we adopt the DE/rand/1 strategy in the mutation operation. A mutated individual $v_{i}$
can be generated as \begin{equation} v_{i} ({t+1})=p_{r_{1}} (t)+F [{p_{r_{2}} (t)-p_{r_{3}} (t)}] \end{equation}
where $t$
is the current iteration index, $r_{1}$
, $r_{2}$
and $r_{3}$
are three random integers selected from $\{1,2,\ldots,I\}$
, and $r_{1}\ne r_{2}\ne r_{3}$
. The parameter $F$
is a scaling factor within [0, 1]. Then the trial individual $u_{i}$
is generated by \begin{equation} u_{ij} ({t+1})= \begin{cases} v_{ij} ({t+1}),\quad \mathrm {if~rand()}\le C_{R} \\ p_{ij} (t),\quad \mathrm {otherwise} \\ \end{cases} \end{equation}
where rand() returns a random decimal between 0 and 1, and $C_{R}$
is the crossover probability. The unchanged time of $p^{\mathrm {best}} $
is used as a convergence criterion. When $p^{\mathrm {best}} $
is not changed above 100 iterations, we should stop the CCDE algorithm.
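The mutation, crossover, and stopping steps above can be sketched as follows. This is a minimal illustration assuming the population is a list of NumPy vectors; excluding $i$ itself from $\{r_{1},r_{2},r_{3}\}$ is a common DE convention, not stated in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutate(pop, i, F=0.5):
    """DE/rand/1 mutation: v_i = p_r1 + F * (p_r2 - p_r3), distinct indices."""
    candidates = [k for k in range(len(pop)) if k != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def crossover(p_i, v_i, CR=0.9):
    """Binomial crossover: take the mutant component where rand() <= C_R."""
    mask = rng.random(p_i.shape) <= CR
    return np.where(mask, v_i, p_i)

def converged(stall_count, limit=100):
    """Stop once p_best has not improved for more than `limit` iterations."""
    return stall_count > limit
```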
SECTION IV.
Numerical Results
In this section, we first analyze the tolerance of BCE against excitation errors through numerical simulation. We then present synthesis results for an unequally spaced transmitting array.
A. Tolerance Analysis
First, the validity of the proposed SA method is verified, and the minimum required confidence level $\gamma $ is discussed. Then, a set of numerical results is provided for different excitation errors and for transmitting arrays of different sizes. Without loss of generality, the MPT system is assumed to have a square transmitting array of $L_{n}\times L_{n}$ positions and a square receiving region ($-u_{0}\le u\le u_{0}$, $-v_{0}\le v\le v_{0}$). The excitation weights across the transmitting array are set to $\mathbf{w}^{\mathrm {opt}}$ [9], which corresponds to $BCE^{\mathrm {opt}}$.
Provided that $u_{0}=v_{0}=0.2$ and $L_{n}=10$, $BCE^{\mathrm {opt}}$ is 95.4%, which agrees well with the result achieved in [9] ($BCE=96.45\%$). The following two error cases are considered: $(\sigma _{\delta }, \sigma _{\Phi })=(0.05,5^{\circ})$ and $(\sigma _{\delta },\sigma _{\Phi })=(0.1,10^{\circ})$. For a preliminary verification, $Q=10^{5}$ different random excitation errors corresponding to each $(\sigma _{\delta }, \sigma _{\Phi })$ have been generated, and every $BCE^{q}$ ($q=1,2,\ldots,Q$) is calculated. When $\gamma =99.9\%$, the SA-based bounds of BCE are [92.0%, 95.4%] and [81.7%, 95.4%], respectively. As shown in Fig. 4, the fact that all $BCE^{q}$ lie within the SA-based bounds fully confirms the validity of the proposed SA method.
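The Monte Carlo verification described above can be sketched as follows. This is a minimal illustration under the error model of the paper (multiplicative amplitude error $\delta_{n}$ and phase error drawn per element); `bce_fn` is a placeholder for the actual BCE model, and the zero-mean Gaussian error distributions are an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def perturbed_bce(bce_fn, w_opt, sigma_d, sigma_p, Q=10**5):
    """Sample BCE under random excitation errors.

    Each nominal weight w_n is perturbed to w_n*(1 + delta_n)*exp(1j*phi_n),
    with delta_n ~ N(0, sigma_d^2) and phi_n ~ N(0, sigma_p^2) in radians
    (assumed distributions). bce_fn maps a weight vector to its BCE.
    """
    samples = np.empty(Q)
    for q in range(Q):
        delta = rng.normal(0.0, sigma_d, size=w_opt.shape)
        phi = rng.normal(0.0, sigma_p, size=w_opt.shape)
        samples[q] = bce_fn(w_opt * (1.0 + delta) * np.exp(1j * phi))
    return samples

def coverage(samples, lower, upper):
    """Fraction of Monte Carlo BCE samples inside the SA-based bounds."""
    return float(np.mean((samples >= lower) & (samples <= upper)))
```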
The bounds of BCE are directly related to the confidence level $\gamma $. To determine the minimum $\gamma $, the width of the BCE bounds is denoted as $\Delta BCE=BCE^{\mathrm {U}}-BCE^{\mathrm {L}}$, and the probability that $BCE^{q}$ lies within the SA-based bounds is denoted as $p_{\mathrm {in}}$. Numerical results for different $\gamma $ are shown in Fig. 5 for $(\sigma _{\delta }, \sigma _{\Phi })=(0.1,10^{\circ})$. As $\gamma $ increases, $BCE^{\mathrm {U}}$ increases and $BCE^{\mathrm {L}}$ decreases; hence, $p_{\mathrm {in}}$ gradually increases due to the larger $\Delta BCE$. The results show that $\gamma $ should be larger than 97% to guarantee $p_{\mathrm {in}}\ge 99.9\%$.
Numerical results for different excitation errors are shown in Fig. 6 with $\gamma =97\%$. The maximum deviation of BCE from the optimal one $BCE^{\mathrm {opt}}$ is defined as $d_{BCE}=BCE^{\mathrm {L}}-BCE^{\mathrm {opt}}$, which indicates the worst-case performance of the transmitting array. Obviously, $BCE^{\mathrm {L}}$ and $BCE^{\mathrm {U}}$ both decrease when $\sigma _{\delta }$ or $\sigma _{\Phi }$ increases; as a result, $d_{BCE}$ increases to 12.6%. With these results, some preliminary predictions can be made. For example, $\sigma _{\delta }$ should not be larger than 0.07 for the case of $\sigma _{\Phi }=5^{\circ}$ if the deviation is confined by $d_{BCE}<3\%$.
The next example concerns arrays with different element numbers (corresponding to $L_{n}$). In order to eliminate the effects of other factors, $BCE^{\mathrm {opt}}$ is constrained to be 95%; the receiving region parameters $u_{0}$ and $v_{0}$ are determined by a bisection method to guarantee this BCE of 95%. With $(\sigma _{\delta },\sigma _{\Phi })$ fixed at $(0.1,10^{\circ})$, numerical results are shown in Fig. 7. It can be seen that the deviation of BCE decreases from 11.8% to 8.7% when the element number varies from 36 to 400. Consequently, the impact of random excitation errors can be reduced by increasing the element number of the transmitting array.
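The bisection on the receiving-region half-width can be sketched as follows. This is a minimal illustration: `bce_of_u0` stands in for the actual BCE model (with $v_{0}=u_{0}$), which is assumed to be monotonically increasing in $u_{0}$ since a larger receiving region captures more power.

```python
def bisect_receiving_region(bce_of_u0, target=0.95, lo=0.01, hi=1.0, tol=1e-4):
    """Find the half-width u0 (= v0) at which the optimal BCE hits `target`.

    bce_of_u0 must be monotonically increasing in u0; [lo, hi] must bracket
    the target value. Returns the midpoint of the final bracket.
    """
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bce_of_u0(mid) < target:
            lo = mid  # region too small: BCE below target
        else:
            hi = mid  # region large enough: shrink from above
    return 0.5 * (lo + hi)
```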
B. Array Synthesis
The unequally spaced planar array (100 elements) in [14] is considered as a reference array because it has a BCE of 89.96%, which is 3.5% higher than the optimal one in [9] ($BCE^{\mathrm {opt}}=86.48\%$). The receiving region is circular ($u^{2}+v^{2}\le r^{2}_{0}$, $r_{0}=0.2$). However, $BCE^{\mathrm {L}}$ is 81.1% for the excitation errors $(\sigma _{\delta },\sigma _{\Phi })=(0.1,10^{\circ})$, which is 8.9% lower than the designed value. Based on the SA-CCDE algorithm, the positions and nominal excitation amplitudes are optimized simultaneously to improve $BCE^{\mathrm {L}}$. The minimum spacing between adjacent elements $d_{\mathrm {min}}$ is $0.4\lambda $, and the maximum aperture size is $4.5\lambda \times 4.5\lambda $. As a result, $BCE^{\mathrm {L}}$ is improved by 3.8%, from 81.1% to 84.9%; that is, a BCE of 84.9% can be guaranteed in the presence of excitation errors $(\sigma _{\delta },\sigma _{\Phi })=(0.1,10^{\circ})$. The corresponding positions and nominal excitation amplitudes are shown in Fig. 8, where the circles indicate element positions and the values in the circles denote the nominal excitation amplitudes.
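The geometric constraints of this synthesis (minimum element spacing $d_{\mathrm {min}}=0.4\lambda$ and a $4.5\lambda \times 4.5\lambda$ aperture) can be checked with a helper like the following sketch; positions in wavelengths and an aperture centred at the origin are assumptions of the illustration.

```python
import numpy as np

def feasible(xy, d_min=0.4, half_aperture=2.25):
    """Check synthesis constraints for element positions.

    xy is an (N, 2) array of positions in wavelengths; half_aperture=2.25
    corresponds to a 4.5-wavelength square aperture centred at the origin.
    """
    # Aperture constraint: every coordinate within the square aperture.
    if np.any(np.abs(xy) > half_aperture):
        return False
    # Minimum-spacing constraint: all pairwise distances at least d_min.
    diff = xy[:, None, :] - xy[None, :, :]
    dist = np.sqrt(np.sum(diff**2, axis=-1))
    np.fill_diagonal(dist, np.inf)  # ignore self-distances
    return bool(np.min(dist) >= d_min)
```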
In the next example, the elements are confined to a $6\lambda \times 6\lambda $ aperture, while the element number, the minimum element spacing, and the receiving region are unchanged. The optimized positions and nominal excitation amplitudes are shown in Fig. 9. $BCE^{\mathrm {L}}$ is improved by 4%, from 81.1% to 85.1%, which is close to that of the $4.5\lambda \times 4.5\lambda $ transmitting aperture. From the two optimized arrays and the numerical results of the tolerance analysis, we find that $BCE^{\mathrm {L}}$ is sensitive to the element number rather than to the aperture size.
SECTION V.
Conclusion
In this paper, an SA method is presented to evaluate the achievable BCE in the presence of excitation errors. Based on the worst-case BCE obtained by the SA method, a synthesis method of the transmitting array for optimal MPT is then proposed by using the CCDE algorithm. The tolerance of BCE against random excitation errors is simulated numerically, and the positions and nominal excitation amplitudes are simultaneously optimized to improve the worst-case BCE. Numerical results indicate the validity of the SA method and show that the worst-case BCE is improved by about 4%, from 81% to 85%, by the SA-CCDE synthesis method for the excitation errors $(\sigma _{\delta },\sigma _{\Phi })=(0.1,10^{\circ})$.
APPENDIX A
DERIVATION OF (9)
With $a_{mn}=a_{nm}$, $\delta _{mn}=\delta _{nm}$, $\Delta \phi _{mn}=-\Delta \phi _{nm}$, and $s_{mn}=s^{\ast }_{nm}$, (7) can be rewritten as \begin{align} P^{S}=&\sum \limits _{m=1}^{N} \sum \limits _{n=m+1}^{N} a_{mn} \delta _{mn} \left [{s_{mn} \exp \left ({j\Delta \phi _{mn}}\right)+s_{mn}^{\ast } \exp \left ({-j\Delta \phi _{mn}}\right)}\right ]+\sum \limits _{m=1}^{N} a_{mm} \delta _{mm} s_{mm} \notag \\=&2\sum \limits _{m=1}^{N} \sum \limits _{n=m+1}^{N} a_{mn} \delta _{mn} \left [{s_{mn}^{r} \cos \left ({\Delta \phi _{mn}}\right)-s_{mn}^{i} \sin \left ({\Delta \phi _{mn}}\right)}\right ]+\sum \limits _{m=1}^{N} a_{mm} \delta _{mm} s_{mm} \end{align}
Due to $s_{mm}=s^{r}_{mm}$, $P^{S}$ can be expressed as \begin{align} P^{S}=\sum \limits _{m=1}^{N} \sum \limits _{n=1}^{N} a_{mn} \delta _{mn} \left [{s_{mn}^{r} \cos \left ({\Delta \phi _{mn}}\right)-s_{mn}^{i} \sin \left ({\Delta \phi _{mn}}\right)}\right ] \end{align}
so \begin{equation} P^{S}=\sum \limits _{m=1}^{N} \sum \limits _{n=1}^{N} a_{mn} \tau _{mn}^{S} \left ({1+\delta _{m} +\delta _{n} +\delta _{m} \delta _{n}}\right) \end{equation}
With $\tau ^{S}_{mn}=\tau ^{S}_{nm}$, the third term of $P^{S}$ can be transformed as \begin{align} \sum \limits _{m=1}^{N} \sum \limits _{n=1}^{N} a_{mn} \tau _{mn}^{S} \delta _{n} =\sum \limits _{m=1}^{N} \sum \limits _{n=1}^{N} a_{nm} \tau _{nm}^{S} \delta _{m} =\sum \limits _{m=1}^{N} \sum \limits _{n=1}^{N} a_{mn} \tau _{mn}^{S} \delta _{m} \end{align}
This transformation will be used repeatedly in Appendix B. Substituting (38) into (37), (9) is obtained.
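The index swap in (38) relies only on the symmetry $a_{mn}=a_{nm}$ and $\tau ^{S}_{mn}=\tau ^{S}_{nm}$; as a quick numerical sanity check of the identity (using random symmetric matrices, which are an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 6

# Symmetric couplings, as in the appendix: a_mn = a_nm and tau_mn = tau_nm.
a = rng.normal(size=(N, N)); a = 0.5 * (a + a.T)
tau = rng.normal(size=(N, N)); tau = 0.5 * (tau + tau.T)
delta = rng.normal(size=N)

# Left side of (38): sum over m, n of a_mn * tau_mn * delta_n.
lhs = np.sum(a * tau * delta[None, :])
# Right side: the same sum with the roles of m and n swapped.
rhs = np.sum(a * tau * delta[:, None])
```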