Signal processing using statistical methods enjoys unflagging interest. New tools for determining the parameters of observed processes are therefore being designed, and new techniques for generating processes with given model parameters are being created. Statistical signal processing is used in more and more research areas. One distribution of particular interest to researchers is the generalized Gaussian distribution (GGD). GGD has been widely used in various engineering applications. It has been extended to various types of random variables, as well as to multivariate random variables, so different types of signals and processes can be modeled with GGD. Many methods of determining the parameters of this distribution have been developed as well.
This distribution can be found under other names in the literature:
GND – the generalized normal distribution,
the Subbotin distribution,
GED – the generalized error distribution,
EPD – the exponential power distribution (also called the Box–Tiao distribution [1]).
The generalized Gaussian probability density function for a continuous random variable is given by [2], [3] \begin{equation*}f(x)=\frac {\lambda \cdot p}{2 \cdot \Gamma \left ({{\frac {1}{p}}}\right )}e^{-[\lambda \cdot |x|]^{p}}, \tag {1}\end{equation*}
where $\Gamma (\cdot )$ is the gamma function $\Gamma (z)=\int _{0}^{\infty }t^{z-1}e^{-t}dt, z\gt 0$ [4], p is the shape parameter and $\lambda $ depends on p and the standard deviation of the distribution [5].
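For reference, a minimal numerical sketch of (1) is given below; the function name ggd_pdf and the use of NumPy/SciPy are illustrative assumptions, not part of the cited works.

```python
# Minimal sketch (illustrative names): evaluate the univariate GGD density of Eq. (1).
import numpy as np
from scipy.special import gamma

def ggd_pdf(x, lam, p):
    # f(x) = lam*p / (2*Gamma(1/p)) * exp(-(lam*|x|)^p)
    return lam * p / (2.0 * gamma(1.0 / p)) * np.exp(-(lam * np.abs(x)) ** p)
```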
This distribution owes its popularity to the fact that it encompasses various other distributions, from super–Gaussian through Gaussian to sub–Gaussian. Depending on the value of the shape parameter p, special cases of GGD are obtained, which are well-known distributions:
$p=1$: the Laplacian distribution (LD) (also called the double exponential distribution),
$p=2$: the Gaussian distribution (GD),
$p \rightarrow \infty $: a uniform distribution,
$p \rightarrow 0$: an impulse function,
$p=1/2$: introduced in [6],
$p=1/3$: presented in [7],
$p=1/m$ for $m=2,3,\ldots $: covered in [8].
Based on special cases, the analysis and calculations can be simplified and equations can be determined in closed form.
The most popular method for determining distribution parameters is the maximum likelihood (ML) method. The ML method for determining the shape parameter p of the one-dimensional GGD distribution described by formula (1) was given by Du [3] \begin{align*} & \hspace {-3pc}\frac {\Psi \left ({{1+\frac {1}{\hat {p}}}}\right )+\log (\hat {p})}{\hat {p}^{2}}+ \frac {1}{\hat {p}^{2}}\log \left ({{\frac {1}{N}\sum _{i=1}^{N}|x_{i}|^{\hat {p}} }}\right ) \\ & \qquad \qquad -\frac {\sum _{i=1}^{N}|x_{i}|^{\hat {p}} \log (|x_{i}|)}{\hat {p} \sum _{i=1}^{N}|x_{i}|^{\hat {p}}} =0, \tag {2}\end{align*}
where N denotes the number of observed variables, and $\{x_{1},x_{2},\ldots ,x_{N}\}$ is the collection of N i.i.d. zero-mean random variables and where \begin{equation*}\Psi (\tau )=-\gamma + \int _{0}^{1} (1-t^{\tau -1})(1-t)^{-1}dt, \tag {3}\end{equation*}
and $\gamma =0.577\ldots $ denotes the Euler constant. Equation (2) must be solved numerically for $\hat {p}$. The $\lambda $ parameter is also determined by the ML method for the known shape parameter p found from the previous equation (2) \begin{equation*}\lambda =\left ({{ \frac {N}{p\cdot \sum _{i=1}^{N}|x_{i}|^{p}}}}\right )^{\frac {1}{p}}. \tag {4}\end{equation*}
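As a minimal sketch of how (2)–(4) can be used in practice, the code below solves (2) with a bracketing root finder and then evaluates (4); the bracketing interval, the function names, and the use of SciPy's digamma for $\Psi $ are assumptions made only for illustration.

```python
# Minimal sketch (illustrative names and bracket): ML estimation of the GGD
# shape parameter p from Eq. (2) and the scale lambda from Eq. (4).
import numpy as np
from scipy.special import digamma          # Psi in Eq. (3) is the digamma function
from scipy.optimize import brentq

def du_score(p, x):
    """Left-hand side of Eq. (2) for zero-mean samples x (no exact zeros assumed)."""
    ax = np.abs(x)
    s = np.sum(ax ** p)
    return (digamma(1.0 + 1.0 / p) + np.log(p)) / p**2 \
        + np.log(np.mean(ax ** p)) / p**2 \
        - np.sum(ax ** p * np.log(ax)) / (p * s)

def estimate_ggd(x, p_lo=0.1, p_hi=5.0):
    # Assumed bracket [p_lo, p_hi]; Eq. (2) is solved numerically for p-hat.
    p_hat = brentq(lambda p: du_score(p, x), p_lo, p_hi)
    # Eq. (4): lambda for the estimated shape parameter.
    lam_hat = (len(x) / (p_hat * np.sum(np.abs(x) ** p_hat))) ** (1.0 / p_hat)
    return p_hat, lam_hat

# Example on synthetic Gaussian data, where p close to 2 is expected:
rng = np.random.default_rng(0)
print(estimate_ggd(rng.normal(size=10_000)))
```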
Many articles have been published presenting the applications of GGD in many research areas, e.g., in image segmentation [9], stereoscopic images [10], image synthesis [11], the binarization of historical degraded document images [12], image compression [7], [13], [14], in testing adaptive filters [15], and in the vibroacoustic method of detecting damage to the power transformer core [16]. In [17] a probabilistic function built as a mixture of generalized Gaussian distributions was used to predict the failure of rotating machines; thus, the generalized Gaussian hidden Markov model (GGHMM) was created for machinery prognostic purposes. A new statistical image watermarking scheme in the nonsubsampled shearlet transform (NSST) domain was proposed by Wang et al. [18]. It was based on a hidden Markov tree (HMT) built on the bounded generalized Gaussian mixture model (BGGMM); BGGMM itself was introduced in [19]. Discrete wavelet coefficients were modeled using GGD in image watermarking techniques [20], texture retrieval [21], [22], fingerprint image compression [23], image denoising [24], and texture classification [25]. Allili [26] proposed a new statistical framework based on finite mixtures of generalized Gaussian (MoGG) distributions to represent the marginal distribution of the wavelet coefficients and described its application to texture discrimination and retrieval. Multiresolution image denoising schemes in the wavelet domain were analyzed in [27]. GGD has been used in a variety of other applications, e.g., in probabilistic classifiers [28] and in automatic change detection in multitemporal SAR images [29]. GGD for modeling multiple access interference in ultra-wide bandwidth (UWB) systems was applied in [30].
The Gaussian, the generalized Gaussian and all the compound Gaussian distributions are encompassed by the elliptically symmetric (ES) distributions [31]. An extension of the matrix Slepian–Bangs (SB) formula to ES distributions was considered and the closed-form expression of a simple corrective coefficient for GGD was given in [31].
GGD is also a special case of the generalized Gamma distribution (G$\Gamma $D) [22]. These two distributions were compared by Song [32].
The complex generalized Gaussian distribution (CGGD) was extensively discussed in [33]. The paper presented a procedure for generating complex random variables for this distribution. The shape and covariance parameters in the complex domain were estimated by the maximum likelihood estimation (MLE) method. Finally, CGGD was applied to actual radar data. Testing the circularity of CGGD can be found in [34].
CGGD belongs to the wide family of complex elliptically symmetric (CES) distributions [35], [36], [37]. The Fisher information matrix (FIM) for the estimation of the shape and scale parameters, and the normalized covariance matrix, also called scatter matrix, for CGGD were derived in [35]. CGGD was applied in a wide variety of signal processing applications, e.g., in medical imaging [38]. A detector based on convolutional neural networks (CNNs) for spectrum sensing (SS) problems under various noise models (among others, isometric CGGD) was considered in [39].
In order to model multidimensional signals, the multivariate generalized Gaussian distribution (MGGD) was used (also called the multivariate power–exponential distribution [40], [41]). The maximum likelihood method for estimating the parameters of MGGD was addressed in [42], whereas a Riemannian averaged Fixed-Point algorithm was introduced in [43].
In [44] an unsupervised classifier based on a finite mixture model using the multivariate generalized Gaussian distribution was introduced. This classifier was applied to a dataset of weld defect radiographic images. In order to achieve timely and accurate detection of downhole faults, a systematic fault detection method was proposed based on MGGD and the Kullback–Leibler divergence (KLD) [45], [46].
The origin of quaternions is attributed to Hamilton [47]. Quaternions are a noncommutative extension of complex numbers [48] and allow signals to be modeled in three– and four–dimensional space. $\mathbb {H}$ denotes the algebra of quaternions. The concept of $\mathbb {H}$-properness was introduced in [49] as the invariance of the probability density function (pdf) of a quaternion–valued variable q under an arbitrary axis and angle of rotation $\phi $: a variable q is said to be proper if $pdf(q) = pdf(e^{\eta \phi }q)$ for any pure unit quaternion $\eta $. Random variables and processes with a vanishing pseudo–covariance are called proper [50], [51]. Properness in the quaternion domain, also denoted as $\mathbb {H}$-properness, is based on the vanishing of three different complementary covariance matrices [51].
Took et al. [52] discussed augmented quaternion statistics extensively. Augmented quaternion second–order statistics have been used in several applications, among others:
the nonlinear minimum mean–squared error (MMSE) estimation problem [53],
the independent component analysis (ICA) [54],
the quaternion–valued echo state networks (QESNs) for nonlinear adaptive filtering [55],
the quaternion widely linear (QWL) processing [51], [52], [56], [57],
the widely linear quaternion-valued Kalman filter (WL–QKF) and widely linear quaternion-extended Kalman filter (WL–QEKF) [58],
the adaptive widely linear quaternion least mean square (WL–QLMS) algorithm for the modeling and forecasting of three dimensional wind field [59],
the widely linear quaternion multiple-model adaptive estimation (WL–QMMAE) algorithm based on the widely linear quaternion Kalman filter and Bayesian inference [60],
the augmented quaternion extreme learning machine (QELM) models for quaternion signal processing [61],
and the augmented online sequential quaternion extreme learning machine (OS–QELM) for the real-time learning of feedforward neural networks [62].
A variational autoencoder (QVAE) in the quaternion domain $\mathbb {H}$ leveraging the augmented second–order statistics of $\mathbb {H}$-proper signals was analyzed in [63]. Augmented quaternions were used for remaining useful life (RUL) estimation of rolling bearings [64] and for degradation prognostics of rolling bearings [65]. A generic reproducing kernel Hilbert space (RKHS) framework for the statistical analysis of augmented quaternion random vectors was presented in [66].
Real-world seismic signals cannot be completely described using existing Gaussian models [67], hence the generalized Gaussian distribution with an augmented quaternion variable (QGGD) was used in [68]. QGGD parameterized variations in the vector–quaternion during the presence and absence of a polarized source (human footsteps). The data from the three orthogonal channels was treated as a pure quaternion. In order to quantify the inter–channel correlation of the tri–axial geophone, the augmented covariance matrix of QGGD was used. In the absence of footsteps, random Gaussian noise with comparable power in all three channels of the geophone was observed. Since there exists no correlation between the orthogonal axes, an unpolarized $\mathbb {Q}$-proper diagonal structure was obtained. Footstep signals were expected to be $\mathbb {Q}$-improper due to the presence of elliptically polarized signals. Venkatraman et al. [68] iteratively searched values in the $\lt 0.1,2\gt $ range to estimate the shape parameter as the best chi-square fit.
The 3D generalized Gaussian distribution (3D GGD) parameterized by the shape parameter p and the covariance matrix C was revised in [69]. Based on that, the alternative QGGD was derived for an augmented quaternion valued random variable. The procedure for generating the augmented quaternion valued random variables was defined for this distribution. In the case of pure quaternions, this probability density function becomes a limiting case. Therefore, for this case, the dedicated generalized Gaussian distribution of an augmented quaternion random variable for pure quaternions was introduced in [70].
As has been shown, GGD is widely used and various variants of GGD appear in the literature. It has also been shown that augmented quaternions are used in many research areas. Therefore, there is a need to model processes using new models. Accordingly, a new GGD will be given in this article.
The article presents a 4D GGD with a random variable consisting of four components, corresponding to the four components of a full quaternion. The 4D GGD is the basis for creating a GGD with an augmented full quaternion random variable. In previous works, GGD with an augmented quaternion random variable was based on 3D GGD.
The rest of the paper is organized as follows. In Section II-A, augmented quaternion statistics is recalled. Then, Section II-B describes MGGD. 3D GGD and QGGD are presented in Section II-C. Then, 4D GGD based on MGGD for a full quaternion from Section III is extended for an augmented quaternion random variable in Section IV and then simplified for an $\mathbb {H}$-proper quaternion random variable in Section V. Section V-A describes the ML equations for GGD with an $\mathbb {H}$-proper quaternion random variable. Section VI presents numerical experiments to show the performance of the ML estimators.
A. Augmented Quaternion Statistics
Augmented quaternion statistics is recalled on the basis of the articles [52], [69], [70] and will be used in later sections.
A full quaternion can be represented by real numbers $q_{a}$, $q_{b}$, $q_{c}$, and $q_{d}$ and three axes $\imath , \jmath , \kappa $ as: $q=q_{a} + \imath q_{b} + \jmath q_{c} + \kappa q_{d}$. A full quaternion $q=q_{a} + q_{p}$ consists of a real part $q_{a}$ and a vector part $q_{p}=\imath q_{b} + \jmath q_{c} + \kappa q_{d}$ (also called a pure quaternion or a vector quaternion).
The orthogonal unit vectors $\imath $, $\jmath $, and $\kappa $ satisfy the following relations [52], [69]\begin{align*} \imath \jmath & =\kappa \qquad \jmath \kappa =\imath \qquad \kappa \imath =\jmath , \\ \imath \jmath \kappa & =\imath ^{2}=\jmath ^{2}=\kappa ^{2}=-1. \tag {5}\end{align*}
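For illustration, the multiplication rules (5) can be encoded as a Hamilton product on component vectors; this small sketch (with assumed names) is not taken from the cited works.

```python
# Sketch: Hamilton product following the rules of Eq. (5),
# with quaternions stored as component vectors [qa, qb, qc, qd].
import numpy as np

def qmul(q, r):
    qa, qb, qc, qd = q
    ra, rb, rc, rd = r
    return np.array([
        qa * ra - qb * rb - qc * rc - qd * rd,   # real part
        qa * rb + qb * ra + qc * rd - qd * rc,   # i component
        qa * rc - qb * rd + qc * ra + qd * rb,   # j component
        qa * rd + qb * rc - qc * rb + qd * ra,   # k component
    ])

# e.g. i * j = k:
print(qmul([0, 1, 0, 0], [0, 0, 1, 0]))   # -> [0, 0, 0, 1]
```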
Quaternions can be represented in vector form: a full quaternion as a real valued quadrivariate vector $\overrightarrow {q}=[q_{a},q_{b},q_{c},q_{d}]^{T}$ and a pure quaternion as a real valued trivariate vector $\overrightarrow {q_{p}}=[q_{b},q_{c},q_{d}]^{T}$.
The augmented quaternion can be represented in vector form [49], [52]: $\overrightarrow {q^{a}}=[q,q^{\imath },q^{\jmath },q^{\kappa }]^{T}$, where three perpendicular quaternion involutions (self-inverse mappings) are given by [52] and [69] \begin{align*} q^{\imath }& =-\imath q \imath = q_{a} + \imath q_{b} - \jmath q_{c} - \kappa q_{d}, \\ q^{\jmath }& =-\jmath q \jmath = q_{a} - \imath q_{b} + \jmath q_{c} - \kappa q_{d}, \\ q^{\kappa }& =-\kappa q \kappa = q_{a} - \imath q_{b} - \jmath q_{c} + \kappa q_{d}. \tag {6}\end{align*}
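A small sketch of (6) in the same component-vector convention (assumed helper name) is given below; each involution only flips the signs of two vector components.

```python
# Sketch: the three involutions of Eq. (6) acting on [qa, qb, qc, qd].
import numpy as np

def involutions(q):
    qa, qb, qc, qd = q
    q_i = np.array([qa,  qb, -qc, -qd])   # q^i = -i q i
    q_j = np.array([qa, -qb,  qc, -qd])   # q^j = -j q j
    q_k = np.array([qa, -qb, -qc,  qd])   # q^k = -k q k
    return q_i, q_j, q_k
```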
The dependence of the augmented quaternion on a real valued quadrivariate vector can be represented by a transformation [52], [69] \begin{align*} \begin{bmatrix} q \\ q^{\imath } \\ q^{\jmath } \\ q^{\kappa } \end{bmatrix} = \begin{bmatrix} 1 & \quad \imath & \quad \jmath & \quad \kappa \\ 1 & \quad \imath & \quad -\jmath & \quad -\kappa \\ 1 & \quad -\imath & \quad \jmath & \quad -\kappa \\ 1 & \quad -\imath & \quad -\jmath & \quad \kappa \end{bmatrix} \begin{bmatrix} q_{a} \\ q_{b} \\ q_{c} \\ q_{d} \end{bmatrix}, \tag {7}\end{align*}
and in abbreviated form \begin{equation*}\overrightarrow {q^{a}}=A\cdot \overrightarrow {q}. \tag {8}\end{equation*}
The inverse of the matrix A is [52] and [69] \begin{equation*}A^{-1}=\frac {1}{4}A^{H}, \tag {9}\end{equation*}
where $(\cdot )^{H}$ denotes a quaternion conjugate transpose operator. Given equation (9), the quaternion conjugate transpose of the inverse matrix $A^{-1}$ is \begin{equation*}A^{-H}=\left ({{A^{-1}}}\right )^{H}=\left ({{\frac {1}{4}A^{H}}}\right )^{H}=\frac {1}{4}A. \tag {10}\end{equation*}
The inverse of the transformation (8) where a real valued quadrivariate vector depends on an augmented quaternion can be represented by [52] and [69]\begin{equation*} \overrightarrow {q} =\frac {1}{4}A^{H} \cdot \overrightarrow {q^{a}}. \tag {11}\end{equation*}
The determinant of A is [52] \begin{equation*}|A|=16. \tag {12}\end{equation*}
It can be found from the LU decomposition as the product of the diagonal entries, or as the product of its singular values.
For a full quaternion as a random variable of the form $\overrightarrow {Q}=[Q_{a},Q_{b},Q_{c},Q_{d}]^{T}$, the covariance matrix C can be determined as the expected value $E\{\cdot \}$ of the product $\overrightarrow {Q}\cdot \overrightarrow {Q}^{T}$. The real valued quadrivariate covariance matrix C is then [69]\begin{align*} C& =E\{\overrightarrow {Q}\cdot \overrightarrow {Q}^{T}\} =E\{\overrightarrow {Q}\cdot \overrightarrow {Q}^{H}\} \\ & =\begin{bmatrix} \sigma ^{2}_{Q_{a}} & \quad \sigma _{Q_{a} Q_{b}} & \quad \sigma _{Q_{a} Q_{c}} & \quad \sigma _{Q_{a} Q_{d}} \\ \sigma _{Q_{b} Q_{a}} & \quad \sigma ^{2}_{Q_{b}} & \quad \sigma _{Q_{b} Q_{c}} & \quad \sigma _{Q_{b} Q_{d}} \\ \sigma _{Q_{c} Q_{a}} & \quad \sigma _{Q_{c} Q_{b}} & \quad \sigma ^{2}_{Q_{c}} & \quad \sigma _{Q_{c} Q_{d}} \\ \sigma _{Q_{d} Q_{a}} & \quad \sigma _{Q_{d} Q_{b}} & \quad \sigma _{Q_{d} Q_{c}} & \quad \sigma ^{2}_{Q_{d}} \end{bmatrix}, \tag {13}\end{align*}
where $\sigma _{XY}$ denotes the covariance between the scalar components of the vector and $\sigma ^{2}_{X}$ denotes the variance of the vector component.
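As a numerical illustration of (13), the real quadrivariate covariance matrix can be estimated from zero-mean samples as follows; the synthetic data and variable names are assumptions made for illustration.

```python
# Sketch: sample estimate of the real quadrivariate covariance matrix C of
# Eq. (13) from N zero-mean draws stored row-wise in Q (shape N x 4).
import numpy as np

rng = np.random.default_rng(1)
Q = rng.normal(size=(10_000, 4))     # stand-in samples of [Qa, Qb, Qc, Qd]
C_hat = Q.T @ Q / Q.shape[0]         # E{Q Q^T} estimated by the sample mean
print(np.round(C_hat, 2))            # close to the identity for this synthetic data
```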
For an augmented quaternion as a random variable of the form $\overrightarrow {Q^{a}}=[Q,Q^{\imath },Q^{\jmath },Q^{\kappa }]^{T}$, the covariance matrix $C^{a}$ can be determined as the expected value $E\{\cdot \}$ of the product $\overrightarrow {Q^{a}}\cdot \overrightarrow {Q^{a}}^{H}$. The augmented quaternion valued covariance matrix $C^{a}$ is [52], [69] \begin{align*} C^{a}& =E\{\overrightarrow {Q^{a}}\cdot \overrightarrow {Q^{a}}^{H}\} =E\{A\cdot \overrightarrow {Q}\cdot \overrightarrow {Q}^{H} \cdot A^{H}\} \\ & =A\cdot E\{\overrightarrow {Q}\cdot \overrightarrow {Q}^{H} \}\cdot A^{H} =A\cdot C \cdot A^{H}. \tag {14}\end{align*}
It can be noticed that the matrix $C^{a}$ depends on the matrix C.
The inverse of the transformation (14) where the real valued quadrivariate covariance matrix depends on the augmented quaternion valued covariance matrix can be represented by [52] and [69] \begin{equation*}C=A^{-1}\cdot C^{a} \cdot A^{-H} \tag {15}\end{equation*}
and taking into account (9) and (10), (15) can be rewritten as [52] \begin{equation*}C=\frac {1}{16}A^{H}\cdot C^{a} \cdot A. \tag {16}\end{equation*}
The inverse matrix $C^{-1}$ is found from (15) \begin{equation*}C^{-1}=A^{H}\cdot \left ({{ C^{a}}}\right )^{-1} \cdot A. \tag {17}\end{equation*}
Based on (17), the inverse of the augmented quaternion valued covariance matrix expressed in terms of the real valued quadrivariate covariance matrix is \begin{equation*}\left ({{ C^{a}}}\right )^{-1}=A^{-H}\cdot C^{-1} \cdot A^{-1} \tag {18}\end{equation*}
and taking into account (9) and (10), (18) can be rewritten as \begin{equation*} \left ({{ C^{a}}}\right )^{-1}=\frac {1}{16}A\cdot C^{-1} \cdot A^{H}. \tag {19}\end{equation*}
The determinant of C can be expressed as a function of $C^{a}$ from (15) as [52] and [69] \begin{equation*}|C|=|A^{-1}\cdot C^{a} \cdot A^{-H}|=|A^{-1}|\cdot |C^{a}| \cdot |A^{-H}| \tag {20}\end{equation*}
and since $|A^{-1}|=|A|^{-1}$, and $A^{-H}$ is given in (10), the above expression can be further simplified to \begin{align*} |C|& = |A|^{-1}\cdot |C^{a}| \cdot \left |{{\frac {1}{4}A}}\right | \\ & =|A|^{-1}\cdot |C^{a}| \cdot \left ({{\frac {1}{4}}}\right )^{4}\cdot |A| \\ & =\left ({{\frac {1}{16}}}\right )^{2} \cdot |C^{a}|, \tag {21}\end{align*}
where $|A|$ is given in (12).
The structure of the augmented covariance matrix $C^{a}$ can be written in the form [52] \begin{align*} C^{a} = \begin{bmatrix} \sigma _{Q Q^{*}} & \quad \sigma _{Q Q^{\imath *}} & \quad \sigma _{Q Q^{\jmath *}} & \quad \sigma _{Q Q^{\kappa *}} \\ \sigma _{Q^{\imath } Q^{*}} & \quad \sigma _{Q^{\imath } Q^{\imath *}} & \quad \sigma _{Q^{\imath } Q^{\jmath *}} & \quad \sigma _{Q^{\imath } Q^{\kappa *}} \\ \sigma _{Q^{\jmath } Q^{*}} & \quad \sigma _{Q^{\jmath } Q^{\imath *}} & \quad \sigma _{Q^{\jmath } Q^{\jmath *}} & \quad \sigma _{Q^{\jmath } Q^{\kappa *}} \\ \sigma _{Q^{\kappa } Q^{*}} & \quad \sigma _{Q^{\kappa } Q^{\imath *}} & \quad \sigma _{Q^{\kappa } Q^{\jmath *}} & \quad \sigma _{Q^{\kappa } Q^{\kappa *}} \end{bmatrix}, \tag {22}\end{align*}
where $(\cdot )^{*}$ denotes a quaternion conjugate operator and where $\sigma _{XY}$ denotes the quaternion-valued covariance between the quaternion components of the vector $\overrightarrow {Q^{a}}$ or the conjugate quaternion components of the vector $\overrightarrow {Q^{a}}$. Observe that the real and imaginary parts of each component of $C^{a}$ (23)–(38) are linear functions of the real-valued variance and covariance between the scalar components of the vector $\overrightarrow {Q}=[Q_{a},Q_{b},Q_{c},Q_{d}]^{T}$
[52].\begin{align*} \sigma _{Q Q^{*}}& =(\sigma ^{2}_{Q_{a}} + \sigma ^{2}_{Q_{b}} + \sigma ^{2}_{Q_{c}} + \sigma ^{2}_{Q_{d}}) \\ & \quad +(\sigma _{Q_{b} Q_{a}} - \sigma _{Q_{a} Q_{b}} - \sigma _{Q_{c} Q_{d}} + \sigma _{Q_{d} Q_{c}})\cdot \imath \\ & \quad +(\sigma _{Q_{b} Q_{d}} - \sigma _{Q_{a} Q_{c}} + \sigma _{Q_{c} Q_{a}} - \sigma _{Q_{d} Q_{b}})\cdot \jmath \\ & \quad +(\sigma _{Q_{c} Q_{b}} - \sigma _{Q_{b} Q_{c}} - \sigma _{Q_{a} Q_{d}} + \sigma _{Q_{d} Q_{a}})\cdot \kappa \tag {23}\\ \sigma _{Q Q^{\imath *}}& =(\sigma ^{2}_{Q_{a}} + \sigma ^{2}_{Q_{b}} - \sigma ^{2}_{Q_{c}} - \sigma ^{2}_{Q_{d}}) \\ & \quad +(\sigma _{Q_{b} Q_{a}} - \sigma _{Q_{a} Q_{b}} + \sigma _{Q_{c} Q_{d}} - \sigma _{Q_{d} Q_{c}})\cdot \imath \\ & \quad +(\sigma _{Q_{a} Q_{c}} - \sigma _{Q_{b} Q_{d}} + \sigma _{Q_{c} Q_{a}} - \sigma _{Q_{d} Q_{b}})\cdot \jmath \\ & \quad +(\sigma _{Q_{a} Q_{d}} + \sigma _{Q_{b} Q_{c}} + \sigma _{Q_{c} Q_{b}} + \sigma _{Q_{d} Q_{a}})\cdot \kappa \tag {24}\\ \sigma _{Q Q^{\jmath *}}& =(\sigma ^{2}_{Q_{a}} - \sigma ^{2}_{Q_{b}} + \sigma ^{2}_{Q_{c}} - \sigma ^{2}_{Q_{d}}) \\ & \quad +(\sigma _{Q_{a} Q_{b}} + \sigma _{Q_{b} Q_{a}} + \sigma _{Q_{c} Q_{d}} + \sigma _{Q_{d} Q_{c}})\cdot \imath \\ & \quad +(\sigma _{Q_{c} Q_{a}} - \sigma _{Q_{b} Q_{d}} - \sigma _{Q_{a} Q_{c}} + \sigma _{Q_{d} Q_{b}})\cdot \jmath \\ & \quad +(\sigma _{Q_{a} Q_{d}} - \sigma _{Q_{b} Q_{c}} - \sigma _{Q_{c} Q_{b}} + \sigma _{Q_{d} Q_{a}})\cdot \kappa \tag {25}\\ \sigma _{Q Q^{\kappa *}}& =(\sigma ^{2}_{Q_{a}} - \sigma ^{2}_{Q_{b}} - \sigma ^{2}_{Q_{c}} + \sigma ^{2}_{Q_{d}}) \\ & \quad +(\sigma _{Q_{a} Q_{b}} + \sigma _{Q_{b} Q_{a}} - \sigma _{Q_{c} Q_{d}} - \sigma _{Q_{d} Q_{c}})\cdot \imath \\ & \quad +(\sigma _{Q_{a} Q_{c}} + \sigma _{Q_{b} Q_{d}} + \sigma _{Q_{c} Q_{a}} + \sigma _{Q_{d} Q_{b}})\cdot \jmath \\ & \quad +(\sigma _{Q_{b} Q_{c}} - \sigma _{Q_{a} Q_{d}} - \sigma _{Q_{c} Q_{b}} + \sigma _{Q_{d} Q_{a}})\cdot \kappa \tag {26}\\ \sigma _{Q^{\imath } Q^{*}}& =(\sigma ^{2}_{Q_{a}} + \sigma ^{2}_{Q_{b}} - \sigma ^{2}_{Q_{c}} - \sigma ^{2}_{Q_{d}}) \\ & \quad +(\sigma _{Q_{b} Q_{a}} - \sigma _{Q_{a} Q_{b}} + \sigma _{Q_{c} Q_{d}} - \sigma _{Q_{d} Q_{c}})\cdot \imath \\ & \quad +(\sigma _{Q_{b} Q_{d}} - \sigma _{Q_{a} Q_{c}} - \sigma _{Q_{c} Q_{a}} + \sigma _{Q_{d} Q_{b}})\cdot \jmath \\ & \quad +(- \sigma _{Q_{a} Q_{d}} - \sigma _{Q_{b} Q_{c}} - \sigma _{Q_{c} Q_{b}} - \sigma _{Q_{d} Q_{a}})\cdot \kappa \tag {27}\\ \sigma _{Q^{\imath } Q^{\imath *}}& =(\sigma ^{2}_{Q_{a}} + \sigma ^{2}_{Q_{b}} + \sigma ^{2}_{Q_{c}} + \sigma ^{2}_{Q_{d}}) \\ & \quad +(\sigma _{Q_{b} Q_{a}} - \sigma _{Q_{a} Q_{b}} - \sigma _{Q_{c} Q_{d}} + \sigma _{Q_{d} Q_{c}})\cdot \imath \\ & \quad +(\sigma _{Q_{a} Q_{c}} - \sigma _{Q_{b} Q_{d}} - \sigma _{Q_{c} Q_{a}} + \sigma _{Q_{d} Q_{b}})\cdot \jmath \\ & \quad +(\sigma _{Q_{a} Q_{d}} + \sigma _{Q_{b} Q_{c}} - \sigma _{Q_{c} Q_{b}} - \sigma _{Q_{d} Q_{a}})\cdot \kappa \tag {28}\\ \sigma _{Q^{\imath } Q^{\jmath *}}& =(\sigma ^{2}_{Q_{a}} - \sigma ^{2}_{Q_{b}} - \sigma ^{2}_{Q_{c}} + \sigma ^{2}_{Q_{d}}) \\ & \quad +(\sigma _{Q_{a} Q_{b}} + \sigma _{Q_{b} Q_{a}} - \sigma _{Q_{c} Q_{d}} - \sigma _{Q_{d} Q_{c}})\cdot \imath \\ & \quad +(- \sigma _{Q_{a} Q_{c}} - \sigma _{Q_{b} Q_{d}} - \sigma _{Q_{c} Q_{a}} - \sigma _{Q_{d} Q_{b}})\cdot \jmath \\ & \quad +(\sigma _{Q_{a} Q_{d}} - \sigma _{Q_{b} Q_{c}} + \sigma _{Q_{c} Q_{b}} - \sigma _{Q_{d} Q_{a}})\cdot \kappa \tag {29}\\ \sigma _{Q^{\imath } Q^{\kappa *}}& =(\sigma ^{2}_{Q_{a}} - \sigma ^{2}_{Q_{b}} + \sigma ^{2}_{Q_{c}} - 
\sigma ^{2}_{Q_{d}}) \\ & \quad +(\sigma _{Q_{a} Q_{b}} + \sigma _{Q_{b} Q_{a}} + \sigma _{Q_{c} Q_{d}} + \sigma _{Q_{d} Q_{c}})\cdot \imath \\ & \quad +(\sigma _{Q_{a} Q_{c}} + \sigma _{Q_{b} Q_{d}} - \sigma _{Q_{c} Q_{a}} - \sigma _{Q_{d} Q_{b}})\cdot \jmath \\ & \quad +(\sigma _{Q_{b} Q_{c}} - \sigma _{Q_{a} Q_{d}} + \sigma _{Q_{c} Q_{b}} - \sigma _{Q_{d} Q_{a}})\cdot \kappa \tag {30}\\ \sigma _{Q^{\jmath } Q^{*}}& =(\sigma ^{2}_{Q_{a}} - \sigma ^{2}_{Q_{b}} + \sigma ^{2}_{Q_{c}} - \sigma ^{2}_{Q_{d}}) \\ & \quad +(- \sigma _{Q_{a} Q_{b}} - \sigma _{Q_{b} Q_{a}} - \sigma _{Q_{c} Q_{d}} - \sigma _{Q_{d} Q_{c}})\cdot \imath \\ & \quad +(\sigma _{Q_{c} Q_{a}} - \sigma _{Q_{b} Q_{d}} - \sigma _{Q_{a} Q_{c}} + \sigma _{Q_{d} Q_{b}})\cdot \jmath \\ & \quad +(\sigma _{Q_{b} Q_{c}} - \sigma _{Q_{a} Q_{d}} + \sigma _{Q_{c} Q_{b}} - \sigma _{Q_{d} Q_{a}})\cdot \kappa \tag {31}\end{align*}
\begin{align*} \sigma _{Q^{\jmath } Q^{\imath *}}& =(\sigma ^{2}_{Q_{a}} - \sigma ^{2}_{Q_{b}} - \sigma ^{2}_{Q_{c}} + \sigma ^{2}_{Q_{d}}) \\ & \quad +(\sigma _{Q_{c} Q_{d}} - \sigma _{Q_{b} Q_{a}} - \sigma _{Q_{a} Q_{b}} + \sigma _{Q_{d} Q_{c}})\cdot \imath \\ & \quad +(\sigma _{Q_{a} Q_{c}} + \sigma _{Q_{b} Q_{d}} + \sigma _{Q_{c} Q_{a}} + \sigma _{Q_{d} Q_{b}})\cdot \jmath \\ & \quad +(\sigma _{Q_{a} Q_{d}} - \sigma _{Q_{b} Q_{c}} + \sigma _{Q_{c} Q_{b}} - \sigma _{Q_{d} Q_{a}})\cdot \kappa \tag {32}\\ \sigma _{Q^{\jmath } Q^{\jmath *}}& =(\sigma ^{2}_{Q_{a}} + \sigma ^{2}_{Q_{b}} + \sigma ^{2}_{Q_{c}} + \sigma ^{2}_{Q_{d}}) \\ & \quad +(\sigma _{Q_{a} Q_{b}} - \sigma _{Q_{b} Q_{a}} + \sigma _{Q_{c} Q_{d}} - \sigma _{Q_{d} Q_{c}})\cdot \imath \\ & \quad +(\sigma _{Q_{b} Q_{d}} - \sigma _{Q_{a} Q_{c}} + \sigma _{Q_{c} Q_{a}} - \sigma _{Q_{d} Q_{b}})\cdot \jmath \\ & \quad +(\sigma _{Q_{a} Q_{d}} + \sigma _{Q_{b} Q_{c}} - \sigma _{Q_{c} Q_{b}} - \sigma _{Q_{d} Q_{a}})\cdot \kappa \tag {33}\end{align*}
\begin{align*} \sigma _{Q^{\jmath } Q^{\kappa *}}& =(\sigma ^{2}_{Q_{a}} + \sigma ^{2}_{Q_{b}} - \sigma ^{2}_{Q_{c}} - \sigma ^{2}_{Q_{d}}) \\ & \quad +(\sigma _{Q_{a} Q_{b}} - \sigma _{Q_{b} Q_{a}} - \sigma _{Q_{c} Q_{d}} + \sigma _{Q_{d} Q_{c}})\cdot \imath \\ & \quad +(\sigma _{Q_{a} Q_{c}} - \sigma _{Q_{b} Q_{d}} + \sigma _{Q_{c} Q_{a}} - \sigma _{Q_{d} Q_{b}})\cdot \jmath \\ & \quad +(- \sigma _{Q_{a} Q_{d}} - \sigma _{Q_{b} Q_{c}} - \sigma _{Q_{c} Q_{b}} - \sigma _{Q_{d} Q_{a}})\cdot \kappa \tag {34}\\ \sigma _{Q^{\kappa } Q^{*}}& =(\sigma ^{2}_{Q_{a}} - \sigma ^{2}_{Q_{b}} - \sigma ^{2}_{Q_{c}} + \sigma ^{2}_{Q_{d}}) \\ & \quad +(\sigma _{Q_{c} Q_{d}} - \sigma _{Q_{b} Q_{a}} - \sigma _{Q_{a} Q_{b}} + \sigma _{Q_{d} Q_{c}})\cdot \imath \\ & \quad +(- \sigma _{Q_{a} Q_{c}} - \sigma _{Q_{b} Q_{d}} - \sigma _{Q_{c} Q_{a}} - \sigma _{Q_{d} Q_{b}})\cdot \jmath \\ & \quad +(\sigma _{Q_{b} Q_{c}} - \sigma _{Q_{a} Q_{d}} - \sigma _{Q_{c} Q_{b}} + \sigma _{Q_{d} Q_{a}})\cdot \kappa \tag {35}\\ \sigma _{Q^{\kappa } Q^{\imath *}}& =(\sigma ^{2}_{Q_{a}} - \sigma ^{2}_{Q_{b}} + \sigma ^{2}_{Q_{c}} - \sigma ^{2}_{Q_{d}}) \\ & \quad +(- \sigma _{Q_{a} Q_{b}} - \sigma _{Q_{b} Q_{a}} - \sigma _{Q_{c} Q_{d}} - \sigma _{Q_{d} Q_{c}})\cdot \imath \\ & \quad +(\sigma _{Q_{a} Q_{c}} + \sigma _{Q_{b} Q_{d}} - \sigma _{Q_{c} Q_{a}} - \sigma _{Q_{d} Q_{b}})\cdot \jmath \\ & \quad +(\sigma _{Q_{a} Q_{d}} - \sigma _{Q_{b} Q_{c}} - \sigma _{Q_{c} Q_{b}} + \sigma _{Q_{d} Q_{a}})\cdot \kappa \tag {36}\\ \sigma _{Q^{\kappa } Q^{\jmath *}}& =(\sigma ^{2}_{Q_{a}} + \sigma ^{2}_{Q_{b}} - \sigma ^{2}_{Q_{c}} - \sigma ^{2}_{Q_{d}}) \\ & \quad +(\sigma _{Q_{a} Q_{b}} - \sigma _{Q_{b} Q_{a}} - \sigma _{Q_{c} Q_{d}} + \sigma _{Q_{d} Q_{c}})\cdot \imath \\ & \quad +(\sigma _{Q_{b} Q_{d}} - \sigma _{Q_{a} Q_{c}} - \sigma _{Q_{c} Q_{a}} + \sigma _{Q_{d} Q_{b}})\cdot \jmath \\ & \quad +(\sigma _{Q_{a} Q_{d}} + \sigma _{Q_{b} Q_{c}} + \sigma _{Q_{c} Q_{b}} + \sigma _{Q_{d} Q_{a}})\cdot \kappa \tag {37}\\ \sigma _{Q^{\kappa } Q^{\kappa *}}& =(\sigma ^{2}_{Q_{a}} + \sigma ^{2}_{Q_{b}} + \sigma ^{2}_{Q_{c}} + \sigma ^{2}_{Q_{d}}) \\ & \quad +(\sigma _{Q_{a} Q_{b}} - \sigma _{Q_{b} Q_{a}} + \sigma _{Q_{c} Q_{d}} - \sigma _{Q_{d} Q_{c}})\cdot \imath \\ & \quad +(\sigma _{Q_{a} Q_{c}} - \sigma _{Q_{b} Q_{d}} - \sigma _{Q_{c} Q_{a}} + \sigma _{Q_{d} Q_{b}})\cdot \jmath \\ & \quad +(\sigma _{Q_{c} Q_{b}} - \sigma _{Q_{b} Q_{c}} - \sigma _{Q_{a} Q_{d}} + \sigma _{Q_{d} Q_{a}})\cdot \kappa \tag {38}\end{align*}
The quaternion-valued covariance components of the augmented covariance matrix $C^{a}$ satisfy the following relationships \begin{align*} \sigma _{Q^{\imath } Q^{*}}& =\left ({{\sigma _{Q Q^{\imath *}}}}\right )^{\imath }, \tag {39}\\ \sigma _{Q^{\jmath } Q^{*}}& =\left ({{\sigma _{Q Q^{\jmath *}}}}\right )^{\jmath }, \tag {40}\\ \sigma _{Q^{\kappa } Q^{*}}& =\left ({{\sigma _{Q Q^{\kappa *}}}}\right )^{\kappa }, \tag {41}\\ \sigma _{Q^{\jmath } Q^{\imath *}}& =\left ({{\sigma _{Q^{\imath } Q^{\jmath *}}}}\right )^{\imath \jmath }, \tag {42}\\ \sigma _{Q^{\kappa } Q^{\imath *}}& =\left ({{\sigma _{Q^{\imath } Q^{\kappa *}}}}\right )^{\imath \kappa }, \tag {43}\\ \sigma _{Q^{\kappa } Q^{\jmath *}}& =\left ({{\sigma _{Q^{\jmath } Q^{\kappa *}}}}\right )^{\jmath \kappa }. \tag {44}\end{align*}
If the covariance between the scalar components of the vector $\overrightarrow {Q}$ satisfies the condition $\sigma _{XY}=\sigma _{YX}$, where $X,Y \in \{Q_{a},Q_{b},Q_{c},Q_{d}\}$ and $X\neq Y$, then the structure of $C^{a}$
is reduced and (23)–(38) can be rewritten as (45)–(60).\begin{align*} \sigma _{Q Q^{*}}& =\sigma ^{2}_{Q_{a}} + \sigma ^{2}_{Q_{b}} + \sigma ^{2}_{Q_{c}} + \sigma ^{2}_{Q_{d}} \tag {45}\\ \sigma _{Q Q^{\imath *}}& =(\sigma ^{2}_{Q_{a}} + \sigma ^{2}_{Q_{b}} - \sigma ^{2}_{Q_{c}} - \sigma ^{2}_{Q_{d}}) \\ & \quad +(2\cdot \sigma _{Q_{a} Q_{c}} - 2\cdot \sigma _{Q_{b} Q_{d}})\cdot \jmath \\ & \quad +(2\cdot \sigma _{Q_{a} Q_{d}} + 2\cdot \sigma _{Q_{b} Q_{c}})\cdot \kappa \tag {46}\\ \sigma _{Q Q^{\jmath *}}& =(\sigma ^{2}_{Q_{a}} - \sigma ^{2}_{Q_{b}} + \sigma ^{2}_{Q_{c}} - \sigma ^{2}_{Q_{d}}) \\ & \quad +(2\cdot \sigma _{Q_{a} Q_{b}} + 2\cdot \sigma _{Q_{c} Q_{d}})\cdot \imath \\ & \quad +(2\cdot \sigma _{Q_{a} Q_{d}} - 2\cdot \sigma _{Q_{b} Q_{c}})\cdot \kappa \tag {47}\\ \sigma _{Q Q^{\kappa *}}& =(\sigma ^{2}_{Q_{a}} - \sigma ^{2}_{Q_{b}} - \sigma ^{2}_{Q_{c}} + \sigma ^{2}_{Q_{d}}) \\ & \quad +(2\cdot \sigma _{Q_{a} Q_{b}} - 2\cdot \sigma _{Q_{c} Q_{d}})\cdot \imath \\ & \quad +(2\cdot \sigma _{Q_{a} Q_{c}} + 2\cdot \sigma _{Q_{b} Q_{d}})\cdot \jmath \tag {48}\\ \sigma _{Q^{\imath } Q^{*}}& =(\sigma ^{2}_{Q_{a}} + \sigma ^{2}_{Q_{b}} - \sigma ^{2}_{Q_{c}} - \sigma ^{2}_{Q_{d}}) \\ & \quad +(2\cdot \sigma _{Q_{b} Q_{d}} - 2\cdot \sigma _{Q_{a} Q_{c}})\cdot \jmath \\ & \quad +(- 2\cdot \sigma _{Q_{a} Q_{d}} - 2\cdot \sigma _{Q_{b} Q_{c}})\cdot \kappa \tag {49}\\ \sigma _{Q^{\imath } Q^{\imath *}}& =\sigma ^{2}_{Q_{a}} + \sigma ^{2}_{Q_{b}} + \sigma ^{2}_{Q_{c}} + \sigma ^{2}_{Q_{d}} \tag {50}\\ \sigma _{Q^{\imath } Q^{\jmath *}}& =(\sigma ^{2}_{Q_{a}} - \sigma ^{2}_{Q_{b}} - \sigma ^{2}_{Q_{c}} + \sigma ^{2}_{Q_{d}}) \\ & \quad +(2\cdot \sigma _{Q_{a} Q_{b}} - 2\cdot \sigma _{Q_{c} Q_{d}})\cdot \imath \\ & \quad +(- 2\cdot \sigma _{Q_{a} Q_{c}} - 2\cdot \sigma _{Q_{b} Q_{d}})\cdot \jmath \tag {51}\\ \sigma _{Q^{\imath } Q^{\kappa *}}& =(\sigma ^{2}_{Q_{a}} - \sigma ^{2}_{Q_{b}} + \sigma ^{2}_{Q_{c}} - \sigma ^{2}_{Q_{d}}) \\ & \quad +(2\cdot \sigma _{Q_{a} Q_{b}} + 2\cdot \sigma _{Q_{c} Q_{d}})\cdot \imath \\ & \quad +(2\cdot \sigma _{Q_{b} Q_{c}} - 2\cdot \sigma _{Q_{a} Q_{d}})\cdot \kappa \tag {52}\\ \sigma _{Q^{\jmath } Q^{*}}& =(\sigma ^{2}_{Q_{a}} - \sigma ^{2}_{Q_{b}} + \sigma ^{2}_{Q_{c}} - \sigma ^{2}_{Q_{d}}) \\ & \quad +(- 2\cdot \sigma _{Q_{a} Q_{b}} - 2\cdot \sigma _{Q_{c} Q_{d}})\cdot \imath \\ & \quad +(2\cdot \sigma _{Q_{b} Q_{c}} - 2\cdot \sigma _{Q_{a} Q_{d}})\cdot \kappa \tag {53}\\ \sigma _{Q^{\jmath } Q^{\imath *}}& =(\sigma ^{2}_{Q_{a}} - \sigma ^{2}_{Q_{b}} - \sigma ^{2}_{Q_{c}} + \sigma ^{2}_{Q_{d}}) \\ & \quad +(2\cdot \sigma _{Q_{c} Q_{d}} - 2\cdot \sigma _{Q_{a} Q_{b}})\cdot \imath \\ & \quad +(2\cdot \sigma _{Q_{a} Q_{c}} + 2\cdot \sigma _{Q_{b} Q_{d}})\cdot \jmath \tag {54}\\ \sigma _{Q^{\jmath } Q^{\jmath *}}& =\sigma ^{2}_{Q_{a}} + \sigma ^{2}_{Q_{b}} + \sigma ^{2}_{Q_{c}} + \sigma ^{2}_{Q_{d}} \tag {55}\\ \sigma _{Q^{\jmath } Q^{\kappa *}}& =(\sigma ^{2}_{Q_{a}} + \sigma ^{2}_{Q_{b}} - \sigma ^{2}_{Q_{c}} - \sigma ^{2}_{Q_{d}}) \\ & \quad +(2\cdot \sigma _{Q_{a} Q_{c}} - 2\cdot \sigma _{Q_{b} Q_{d}})\cdot \jmath \\ & \quad +(- 2\cdot \sigma _{Q_{a} Q_{d}} - 2\cdot \sigma _{Q_{b} Q_{c}})\cdot \kappa \tag {56}\\ \sigma _{Q^{\kappa } Q^{*}}& =(\sigma ^{2}_{Q_{a}} - \sigma ^{2}_{Q_{b}} - \sigma ^{2}_{Q_{c}} + \sigma ^{2}_{Q_{d}}) \\ & \quad +(2\cdot \sigma _{Q_{c} Q_{d}} - 2\cdot \sigma _{Q_{a} Q_{b}})\cdot \imath \\ & \quad +(- 2\cdot \sigma _{Q_{a} Q_{c}} - 2\cdot \sigma _{Q_{b} Q_{d}})\cdot \jmath \tag {57}\\ \sigma _{Q^{\kappa } Q^{\imath *}}& =(\sigma 
^{2}_{Q_{a}} - \sigma ^{2}_{Q_{b}} + \sigma ^{2}_{Q_{c}} - \sigma ^{2}_{Q_{d}}) \\ & \quad +(- 2\cdot \sigma _{Q_{a} Q_{b}} - 2\cdot \sigma _{Q_{c} Q_{d}})\cdot \imath \\ & \quad +(2\cdot \sigma _{Q_{a} Q_{d}} - 2\cdot \sigma _{Q_{b} Q_{c}})\cdot \kappa \tag {58}\\ \sigma _{Q^{\kappa } Q^{\jmath *}}& =(\sigma ^{2}_{Q_{a}} + \sigma ^{2}_{Q_{b}} - \sigma ^{2}_{Q_{c}} - \sigma ^{2}_{Q_{d}}) \\ & \quad +(2\cdot \sigma _{Q_{b} Q_{d}} - 2\cdot \sigma _{Q_{a} Q_{c}})\cdot \jmath \\ & \quad +(2\cdot \sigma _{Q_{a} Q_{d}} + 2\cdot \sigma _{Q_{b} Q_{c}})\cdot \kappa \tag {59}\\ \sigma _{Q^{\kappa } Q^{\kappa *}}& =\sigma ^{2}_{Q_{a}} + \sigma ^{2}_{Q_{b}} + \sigma ^{2}_{Q_{c}} + \sigma ^{2}_{Q_{d}} \tag {60}\end{align*}
For (45)–(60), the augmented covariance matrix satisfies the condition $C^{a}=(C^{a})^{H}$.
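As a sketch of how (45)–(48) can be assembled in code, the snippet below returns the first row of $C^{a}$ for a symmetric C, with each quaternion-valued entry stored as a component vector [real, $\imath $, $\jmath $, $\kappa $]; the function name and the storage convention are assumptions.

```python
# Sketch: first row of the augmented covariance matrix C^a for a symmetric C,
# following Eqs. (45)-(48); entries are returned as [re, i, j, k] vectors.
import numpy as np

def augmented_first_row(C):
    va, vb, vc, vd = np.diag(C)                      # variances of Qa, Qb, Qc, Qd
    s = lambda i, j: C[i, j]                         # covariance sigma_{Qx Qy}
    sQQ  = np.array([va + vb + vc + vd, 0.0, 0.0, 0.0])                       # Eq. (45)
    sQQi = np.array([va + vb - vc - vd, 0.0,
                     2*s(0, 2) - 2*s(1, 3), 2*s(0, 3) + 2*s(1, 2)])           # Eq. (46)
    sQQj = np.array([va - vb + vc - vd, 2*s(0, 1) + 2*s(2, 3),
                     0.0, 2*s(0, 3) - 2*s(1, 2)])                             # Eq. (47)
    sQQk = np.array([va - vb - vc + vd, 2*s(0, 1) - 2*s(2, 3),
                     2*s(0, 2) + 2*s(1, 3), 0.0])                             # Eq. (48)
    return sQQ, sQQi, sQQj, sQQk
```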
The real-valued variance of each single component $Q_{a}$, $Q_{b}$, $Q_{c}$ and $Q_{d}$ of the random quaternion $\overrightarrow {Q}$ and the real-valued covariance between each component $Q_{a}$, $Q_{b}$, $Q_{c}$ and $Q_{d}$
can be expressed in terms of the quaternion-valued covariance [52], that is \begin{align*} \sigma ^{2}_{Q_{a}}& =\frac {1}{4}\Re \left \{{{\sigma _{Q Q^{*}} + \sigma _{Q Q^{\imath *}} + \sigma _{Q Q^{\jmath *}} + \sigma _{Q Q^{\kappa *}}}}\right \}, \tag {61}\\ \sigma _{Q_{b} Q_{a}}& =\frac {1}{4}\Im _{\imath }\left \{{{\sigma _{Q Q^{*}} + \sigma _{Q Q^{\imath *}} + \sigma _{Q Q^{\jmath *}} + \sigma _{Q Q^{\kappa *}}}}\right \}, \tag {62}\\ \sigma _{Q_{c} Q_{a}}& =\frac {1}{4}\Im _{\jmath }\left \{{{\sigma _{Q Q^{*}} + \sigma _{Q Q^{\imath *}} + \sigma _{Q Q^{\jmath *}} + \sigma _{Q Q^{\kappa *}}}}\right \}, \tag {63}\\ \sigma _{Q_{d} Q_{a}}& =\frac {1}{4}\Im _{\kappa }\left \{{{\sigma _{Q Q^{*}} + \sigma _{Q Q^{\imath *}} + \sigma _{Q Q^{\jmath *}} + \sigma _{Q Q^{\kappa *}}}}\right \}, \tag {64}\\ \sigma ^{2}_{Q_{b}}& =\frac {1}{4}\Re \left \{{{\sigma _{Q Q^{*}} + \sigma _{Q Q^{\imath *}} - \sigma _{Q Q^{\jmath *}} - \sigma _{Q Q^{\kappa *}}}}\right \}, \tag {65}\\ \sigma _{Q_{a} Q_{b}}& =-\frac {1}{4}\Im _{\imath }\left \{{{\sigma _{Q Q^{*}} + \sigma _{Q Q^{\imath *}} - \sigma _{Q Q^{\jmath *}} - \sigma _{Q Q^{\kappa *}}}}\right \}, \tag {66}\\ \sigma _{Q_{d} Q_{b}}& =-\frac {1}{4}\Im _{\jmath }\left \{{{\sigma _{Q Q^{*}} + \sigma _{Q Q^{\imath *}} - \sigma _{Q Q^{\jmath *}} - \sigma _{Q Q^{\kappa *}}}}\right \}, \tag {67}\\ \sigma _{Q_{c} Q_{b}}& =\frac {1}{4}\Im _{\kappa }\left \{{{\sigma _{Q Q^{*}} + \sigma _{Q Q^{\imath *}} - \sigma _{Q Q^{\jmath *}} - \sigma _{Q Q^{\kappa *}}}}\right \}, \tag {68}\\ \sigma ^{2}_{Q_{c}}& =\frac {1}{4}\Re \left \{{{\sigma _{Q Q^{*}} - \sigma _{Q Q^{\imath *}} + \sigma _{Q Q^{\jmath *}} - \sigma _{Q Q^{\kappa *}}}}\right \}, \tag {69}\\ \sigma _{Q_{d} Q_{c}}& =\frac {1}{4}\Im _{\imath }\left \{{{\sigma _{Q Q^{*}} - \sigma _{Q Q^{\imath *}} + \sigma _{Q Q^{\jmath *}} - \sigma _{Q Q^{\kappa *}}}}\right \}, \tag {70}\\ \sigma _{Q_{a} Q_{c}}& =-\frac {1}{4}\Im _{\jmath }\left \{{{\sigma _{Q Q^{*}} - \sigma _{Q Q^{\imath *}} + \sigma _{Q Q^{\jmath *}} - \sigma _{Q Q^{\kappa *}}}}\right \}, \tag {71}\\ \sigma _{Q_{b} Q_{c}}& =-\frac {1}{4}\Im _{\kappa }\left \{{{\sigma _{Q Q^{*}} - \sigma _{Q Q^{\imath *}} + \sigma _{Q Q^{\jmath *}} - \sigma _{Q Q^{\kappa *}}}}\right \}, \tag {72}\\ \sigma ^{2}_{Q_{d}}& =\frac {1}{4}\Re \left \{{{\sigma _{Q Q^{*}} - \sigma _{Q Q^{\imath *}} - \sigma _{Q Q^{\jmath *}} + \sigma _{Q Q^{\kappa *}}}}\right \}, \tag {73}\\ \sigma _{Q_{c} Q_{d}}& =-\frac {1}{4}\Im _{\imath }\left \{{{\sigma _{Q Q^{*}} - \sigma _{Q Q^{\imath *}} - \sigma _{Q Q^{\jmath *}} + \sigma _{Q Q^{\kappa *}}}}\right \}, \tag {74}\\ \sigma _{Q_{b} Q_{d}}& =\frac {1}{4}\Im _{\jmath }\left \{{{\sigma _{Q Q^{*}} - \sigma _{Q Q^{\imath *}} - \sigma _{Q Q^{\jmath *}} + \sigma _{Q Q^{\kappa *}}}}\right \}, \tag {75}\\ \sigma _{Q_{a} Q_{d}}& =-\frac {1}{4}\Im _{\kappa }\left \{{{\sigma _{Q Q^{*}} - \sigma _{Q Q^{\imath *}} - \sigma _{Q Q^{\jmath *}} + \sigma _{Q Q^{\kappa *}}}}\right \}, \tag {76}\end{align*}
where $\Im _{\imath ,\jmath ,\kappa }\{\cdot \}$
denotes the $\imath $
-, $\jmath $
-, $\kappa $
-component of the vector imaginary part of the quaternion and $\Re \{\cdot \}$
denotes the scalar real part $q_{a}$
of the quaternion.
B. MGGD
The probability density function of a zero-mean MGGD in $\mathbb {R}^{d}$
is defined by [45] and [71] \begin{align*} f(\overrightarrow {x}|d,M,p)& = \frac {p\cdot \Gamma \left ({{d/2}}\right )} {\pi ^{d/2}\cdot \Gamma \left ({{d/(2p)}}\right )\cdot 2^{d/(2p)}\cdot \sqrt {|M|}}\cdot \\ & \qquad \cdot exp\left \{{{-\frac {1}{2}\cdot \left ({{\overrightarrow {x}^{T} M^{-1} \overrightarrow {x}}}\right )^{p}}}\right \} \tag {77}\end{align*}
and \begin{equation*}M=\frac {d\cdot \Gamma \left ({{d/(2p)}}\right )}{2^{1/p}\cdot \Gamma \left ({{(d+2)/(2p)}}\right )}\cdot C \tag {78}\end{equation*}
for any $\overrightarrow {x}\in \mathbb {R}^{d}$
, where C is a $d \times d$
covariance matrix, d denotes the dimension of the probability space, $\overrightarrow {x}^{T}$
is the transpose of the vector $\overrightarrow {x}$
, p is the shape parameter of MGGD, $\Gamma (\cdot )$
is the gamma function $\Gamma (z)=\int _{0}^{\infty }t^{z-1}e^{-t}dt, z\gt 0$
[4].
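For illustration, (77) together with (78) can be evaluated directly; the following sketch (Python with NumPy/SciPy, function names chosen for this example only) computes the scatter matrix M from a given covariance C and returns the density value.
```python
import numpy as np
from scipy.special import gammaln

def mggd_pdf(x, C, p):
    """Zero-mean MGGD density (77) with the scatter matrix M obtained from C via (78)."""
    x = np.asarray(x, dtype=float)
    d = x.shape[-1]
    scale = d * np.exp(gammaln(d / (2*p)) - gammaln((d + 2) / (2*p))) / 2**(1/p)
    M = scale * C                                                    # equation (78)
    quad = np.einsum('...i,ij,...j->...', x, np.linalg.inv(M), x)    # x^T M^{-1} x
    log_norm = (np.log(p) + gammaln(d / 2) - (d / 2) * np.log(np.pi)
                - gammaln(d / (2*p)) - (d / (2*p)) * np.log(2)
                - 0.5 * np.linalg.slogdet(M)[1])
    return np.exp(log_norm - 0.5 * quad**p)

# sanity check: for p = 1 the MGGD reduces to the Gaussian, so f(0) = 1/(2*pi) in 2D
print(mggd_pdf([0.0, 0.0], np.eye(2), p=1.0))
```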
The special cases with the exponents $p=1$
and $p=0.5$
cover the multivariate Gaussian distribution (MGD) and the multivariate Laplacian distribution (MLD), respectively. For $p \rightarrow \infty $
, the MGGD density function becomes a multivariate uniform distribution [40], [42].
The p parameter of the MGGD distribution can be determined using the ML method [42], [43], [72] \begin{align*} & \hspace {-1pc}\frac {d}{2}\cdot \frac {\sum _{i=1}^{N} u_{i}^{p}\cdot \log (u_{i})}{\sum _{i=1}^{N} u_{i}^{p}} \\ & \qquad \quad -\frac {d}{2p}\cdot \left [{{\Psi \left ({{\frac {d}{2p}}}\right ) + \log (2)}}\right ]-1 \\ & \qquad \quad -\frac {d}{2p}\cdot \log \left ({{\frac {p}{d\cdot N} \sum _{i=1}^{N} u_{i}^{p} }}\right ) =0, \tag {79}\end{align*}
where $u_{i}=\overrightarrow {x_{i}}^{T} C^{-1} \overrightarrow {x_{i}}$
and $\Psi (x)=\frac {d}{dx}\log \bigl (\Gamma (x)\bigr )$ is the digamma function [4], and $\{\overrightarrow {x_{1}},\overrightarrow {x_{2}},\ldots ,\overrightarrow {x_{N}}\}$ is a random sample of N observation vectors of dimension d. The nonlinear equation (79) must be solved numerically for p, assuming that C is known.
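A minimal sketch of such a numerical solution is given below, assuming C is known; the quadratic forms $u_{i}$ are precomputed and the root of (79) is bracketed with SciPy (the bracket $[0.05, 20]$ is an arbitrary illustrative choice).
```python
import numpy as np
from scipy.special import digamma
from scipy.optimize import brentq

def ml_shape_equation(p, u, d):
    """Left-hand side of (79) as a function of the shape parameter p."""
    up = u**p
    s = up.sum()
    return ((d / 2) * (up * np.log(u)).sum() / s
            - (d / (2*p)) * (digamma(d / (2*p)) + np.log(2))
            - 1.0
            - (d / (2*p)) * np.log(p * s / (d * len(u))))

def estimate_p(X, C, bracket=(0.05, 20.0)):
    """ML estimate of p for samples X of shape (N, d), with C assumed known."""
    u = np.einsum('ni,ij,nj->n', X, np.linalg.inv(C), X)   # u_i = x_i^T C^{-1} x_i
    # brentq requires a sign change over the bracket; widen it if necessary
    return brentq(lambda p: ml_shape_equation(p, u, X.shape[1]), *bracket)
```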
In turn, the ML estimate $\hat {C}$
is the solution of the following equation [42], [43] \begin{equation*} C= \sum _{i=1}^{N} \frac {d}{u_{i}+u_{i}^{1-p} \sum _{j\neq i} u_{j}^{p}} \overrightarrow {x_{i}}\overrightarrow {x_{i}}^{T} \tag {80}\end{equation*}
for unknown C, assuming that p is known. In the general case, (79) and (80) must be solved simultaneously to determine both the shape parameter p and the matrix C.
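One common way to handle this coupling is to alternate the two updates: iterate the fixed-point relation (80) for C with p held fixed, then re-estimate p from (79), and repeat. The sketch below reuses estimate_p from the previous snippet; the iteration counts are illustrative and convergence is not guaranteed in general.
```python
import numpy as np

def update_C(X, C, p, inner_iters=10):
    """Fixed-point iteration of (80) for C with the shape parameter p fixed."""
    N, d = X.shape
    for _ in range(inner_iters):
        u = np.einsum('ni,ij,nj->n', X, np.linalg.inv(C), X)
        up_sum = (u**p).sum()
        # weight of sample i: d / (u_i + u_i^{1-p} * sum_{j != i} u_j^p)
        w = d / (u + u**(1 - p) * (up_sum - u**p))
        C = np.einsum('n,ni,nj->ij', w, X, X)
    return C

def estimate_mggd(X, p0=1.0, outer_iters=20):
    """Alternate (80) and (79), starting from p0 and the sample covariance."""
    C, p = np.cov(X, rowvar=False), p0
    for _ in range(outer_iters):
        C = update_C(X, C, p)
        p = estimate_p(X, C)     # defined in the previous sketch
    return p, C
```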
Equation (79) for $d=1$
can, after appropriate transformations, be reduced to the form of (2).
C. 3D GGD
The probability density function of a zero-mean 3D GGD is [69] \begin{equation*} f_{X}(\overrightarrow {x})= \frac {p}{2\pi \cdot \sqrt {|C|}\cdot s^{\frac {3}{2p}} \Gamma \left ({{\frac {3}{2p}}}\right )}\cdot e^{-\frac {1}{s}\cdot \left ({{\overrightarrow {x}^{T}\cdot C^{-1} \cdot \overrightarrow {x}}}\right )^{p}} \tag {81}\end{equation*}
parameterized by the shape parameter p and the covariance matrix C, where the normalizing term is [69] \begin{equation*}s=\left ({{\frac {3\cdot \Gamma \left ({{\frac {3}{2p}}}\right )}{ \Gamma \left ({{\frac {5}{2p}}}\right )}}}\right )^{p}. \tag {82}\end{equation*}
Equation (81) is equal to (77) for $d=3$. The random variable $\overrightarrow {X}=[X_{0},X_{1},X_{2}]^{T}$ consists of three random variables $X_{0}$, $X_{1}$, and $X_{2}$ ($\overrightarrow {x}\in \mathbb {R}^{3}$) and corresponds to a pure quaternion random variable $\overrightarrow {Q_{p}}=[Q_{b},Q_{c},Q_{d}]^{T}$, whose real part is $Q_{a}=0$.
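A quick numerical check of this equivalence, reusing mggd_pdf from the MGGD sketch above (the test point, covariance, and shape value are arbitrary):
```python
import numpy as np
from scipy.special import gamma

def ggd3_pdf(x, C, p):
    """Zero-mean 3D GGD density (81) with the normalizing term s from (82)."""
    s = (3 * gamma(3 / (2*p)) / gamma(5 / (2*p)))**p
    quad = x @ np.linalg.inv(C) @ x
    norm = p / (2 * np.pi * np.sqrt(np.linalg.det(C)) * s**(3 / (2*p)) * gamma(3 / (2*p)))
    return norm * np.exp(-quad**p / s)

C = np.diag([1.0, 2.0, 0.5])
x = np.array([0.3, -0.2, 0.1])
print(ggd3_pdf(x, C, p=0.8), mggd_pdf(x, C, p=0.8))   # the two values should agree
```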
This distribution was the starting point for creating GGD with an augmented quaternion random variable [69] \begin{equation*}f(\overrightarrow {q^{a}})=\frac {8p}{\pi \cdot \sqrt {|C^{a}|}\cdot s^{\frac {3}{2p}} \Gamma \left ({{\frac {3}{2p}}}\right )}\cdot e^{-\frac {1}{s}\cdot \left ({{\overrightarrow {q^{a}}^{H}\cdot \left ({{ C^{a}}}\right )^{-1} \cdot \overrightarrow {q^{a}}}}\right )^{p}}. \tag {83}\end{equation*}
This probability density function is parameterized by the shape parameter p and the augmented covariance matrix $C^{a}$
. The connection of the augmented covariance matrix $C^{a}$
with the original covariance matrix C is described by (16).
In the case of the random variable $\overrightarrow {Q}=[0,Q_{b},Q_{c},Q_{d}]^{T}$, which corresponds to a pure quaternion, a limiting case is obtained. The real-valued quadrivariate covariance matrix C (13) is then equal to [69], [70] \begin{align*} C= \begin{bmatrix} 0 & \quad 0 & \quad 0 & \quad 0 \\ 0 & \quad \sigma ^{2}_{Q_{b}} & \quad \sigma _{Q_{b} Q_{c}} & \quad \sigma _{Q_{b} Q_{d}} \\ 0 & \quad \sigma _{Q_{c} Q_{b}} & \quad \sigma ^{2}_{Q_{c}} & \quad \sigma _{Q_{c} Q_{d}} \\ 0 & \quad \sigma _{Q_{d} Q_{b}} & \quad \sigma _{Q_{d} Q_{c}} & \quad \sigma ^{2}_{Q_{d}} \end{bmatrix}, \tag {84}\end{align*}
so the determinant of the matrix C (84) is $|C|=0$ and, according to (14), $|C^{a}|=0$ as well. Since this determinant appears in the denominator of (83), a limiting case of QGGD results. Therefore, the GGD for an augmented pure quaternion random variable is [70] \begin{align*} f(\overrightarrow {q_{p}^{a}})& = \frac {32p}{\pi \cdot \sqrt {|C_{A}|}\cdot s^{\frac {3}{2p}} \Gamma \left ({{\frac {3}{2p}}}\right )} \\ & \qquad \qquad \cdot e^{-\frac {1}{s}\cdot \left ({{ \overrightarrow {q_{p}^{a}}^{H}\cdot A_{p}\cdot C_{A}^{-1} \cdot A_{p}^{H} \cdot \overrightarrow {q_{p}^{a}} }}\right )^{p}} \tag {85}\end{align*}
that is parameterized by the shape parameter p, the term s (82), and the matrix $C_{A}$ (86), which depends on the augmented covariance matrix $C_{p}^{a}$: \begin{equation*} C_{A}=A_{p}^{H}\cdot C_{p}^{a} \cdot A_{p}, \tag {86}\end{equation*}
where \begin{align*} C_{p}^{a}& =A_{p}\cdot C_{p} \cdot A_{p}^{H}, \tag {87}\\ C_{p}& = \begin{bmatrix} \sigma ^{2}_{Q_{b}} & \quad \sigma _{Q_{b} Q_{c}} & \quad \sigma _{Q_{b} Q_{d}} \\ \sigma _{Q_{c} Q_{b}} & \quad \sigma ^{2}_{Q_{c}} & \quad \sigma _{Q_{c} Q_{d}} \\ \sigma _{Q_{d} Q_{b}} & \quad \sigma _{Q_{d} Q_{c}} & \quad \sigma ^{2}_{Q_{d}} \end{bmatrix}, \tag {88}\\ A_{p}& = \begin{bmatrix} \imath & \quad \jmath & \quad \kappa \\ \imath & \quad -\jmath & \quad -\kappa \\ -\imath & \quad \jmath & \quad -\kappa \\ -\imath & \quad -\jmath & \quad \kappa \end{bmatrix}. \tag {89}\end{align*}
The direct relationship between $C_{A}$
and $C_{p}$
is given in [70] as $C_{A}=16\cdot C_{p}$
. The original covariance matrix C for the GGD with an augmented pure quaternion random variable is represented by the matrix $C_{p}$ of size $3\times 3$, whereas the augmented covariance matrix $C_{p}^{a}$ has dimension $4\times 4$.
The probability density function (85) is defined for a pure quaternion random variable $\overrightarrow {Q_{p}}=[Q_{b},Q_{c},Q_{d}]^{T}$
, where \begin{equation*} \overrightarrow {q_{p}^{a}}=A_{p}\cdot \overrightarrow {q_{p}}. \tag {90}\end{equation*}
SECTION III.
Full Quaternion GGD
Since a full quaternion has four components, 4D GGD will be used instead of 3D GGD (81) to create GGD for a full quaternion. Based on (77) and (78) for $d=4$
, a zero-mean GGD can be given for a full quaternion random variable $\overrightarrow {Q}=[Q_{a},Q_{b},Q_{c},Q_{d}]^{T}$
as \begin{align*} f(\overrightarrow {q})& = \frac {p\cdot \Gamma ^{2}\left ({{3/p}}\right )} {16\pi ^{2}\cdot \Gamma ^{3}\left ({{2/p}}\right )\cdot \sqrt {|C|}} \cdot \\ & \qquad \quad \cdot exp\left \{{{ -\left ({{\frac {\Gamma \left ({{3/p}}\right )}{4\Gamma \left ({{2/p}}\right )}}}\right )^{p} \cdot \left ({{\overrightarrow {q}^{T} C^{-1} \overrightarrow {q}}}\right )^{p} }}\right \} \tag {91}\end{align*}
This probability density function is the basis for creating GGD with an augmented full quaternion random variable. In previous works, QGGD was based on 3D GGD.
The parameters of this distribution (91) can be determined using (79) and (80) for $d=4$
. Equation (79) simplifies to \begin{align*} & \hspace {-1pc}\frac {1}{p}\cdot \Psi \left ({{\frac {2}{p}}}\right ) +\frac {1}{2} +\frac {1}{p}\cdot \log \left ({{\frac {p}{2\cdot N} \sum _{i=1}^{N} u_{i}^{p} }}\right ) \\ & \qquad \quad -\frac {\sum _{i=1}^{N} u_{i}^{p}\cdot \log (u_{i})}{\sum _{i=1}^{N} u_{i}^{p}} =0, \tag {92}\end{align*}
where $u_{i}=\overrightarrow {q_{i}}^{T} C^{-1} \overrightarrow {q_{i}}$
and $\{\overrightarrow {q_{1}},\overrightarrow {q_{2}},\ldots ,\overrightarrow {q_{N}}\}$
is a random sample of N observation vectors of dimension 4. Both (92) and (80) must be solved simultaneously to determine the shape parameter p and the matrix C.
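Treating the quaternion samples as real 4-vectors, the alternating scheme sketched earlier for the MGGD applies directly; a hypothetical usage example (with Gaussian test data, for which $\hat {p}\approx 1$ is expected):
```python
import numpy as np

rng = np.random.default_rng(1)
C_true = np.diag([1.0, 0.5, 2.0, 1.5])
# Gaussian data (p = 1), stored as an (N, 4) array of quaternion components [a, b, c, d]
Q = rng.multivariate_normal(np.zeros(4), C_true, size=50_000)

p_hat, C_hat = estimate_mggd(Q)   # reuses the MGGD sketch; d = 4 is inferred from Q
print(p_hat)                      # expected to be close to 1 for Gaussian data
```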
SECTION IV.
GGD with an Augmented Full Quaternion Random Variable
In (91), the dependence on C and $\overrightarrow {q}$ should be replaced by a dependence on $C^{a}$ and $\overrightarrow {q^{a}}$
. The quadratic function $\overrightarrow {q}^{T}\cdot C^{-1} \cdot \overrightarrow {q}$
, taking into account (8) and (17), can be written as [52] and [69]\begin{align*} \overrightarrow {q}^{T}\cdot C^{-1}\cdot \overrightarrow {q} & =\overrightarrow {q}^{H}\cdot C^{-1}\cdot \overrightarrow {q} \\ & =\overrightarrow {q}^{H}\cdot A^{H}\cdot \left ({{C^{a}}}\right )^{-1} \cdot A \cdot \overrightarrow {q} \\ & =\overrightarrow {q^{a}}^{H}\cdot \left ({{ C^{a}}}\right )^{-1} \cdot \overrightarrow {q^{a}}. \tag {93}\end{align*}
The equation for the generalized Gaussian probability density function for an augmented full quaternion random variable is obtained by substituting (21) and (93) into (91) \begin{align*} f(\overrightarrow {q^{a}})& = \frac {p\cdot \Gamma ^{2}\left ({{3/p}}\right )} {\pi ^{2}\cdot \Gamma ^{3}\left ({{2/p}}\right )\cdot \sqrt {|C^{a}|}} \cdot \\ & \quad \cdot exp\left \{{{ -\left ({{\frac {\Gamma \left ({{3/p}}\right )}{4\Gamma \left ({{2/p}}\right )}}}\right )^{p} \cdot \left ({{\overrightarrow {q^{a}}^{H}\cdot \left ({{ C^{a}}}\right )^{-1} \cdot \overrightarrow {q^{a}}}}\right )^{p} }}\right \} \tag {94}\end{align*}
that is parameterized by the shape parameter p and the augmented covariance matrix $C^{a}$
(14).
SECTION V.
GGD with an $\mathbb {H}$
-Proper Quaternion Random Variable
Definition 1:
($\mathbb {H}$
-properness) [49] A quaternion random variable q is said to be $\mathbb {H}$
-proper if:\begin{equation*} q\overset {d}{=}e^{\eta \phi }q, \forall \phi \tag {95}\end{equation*}
and for any pure unit quaternion $\eta $
.
An $\mathbb {H}$
-proper quaternion random variable has a distribution that is invariant under a left Clifford translation of axis $\eta $ by any angle $\phi $.
In this case, the quaternion representations of the augmented covariance matrix have the following structure [49]:\begin{equation*}C^{a}=4\sigma _{q}^{2} I_{4}, \tag {96}\end{equation*}
where $I_{4}$
is the $4\times 4$
identity matrix and \begin{equation*} E\{Q_{a}^{2}\}=E\{Q_{b}^{2}\}=E\{Q_{c}^{2}\}=E\{Q_{d}^{2}\}=\sigma _{q}^{2}. \tag {97}\end{equation*}
An $\mathbb {H}$
-proper signal is uncorrelated with its perpendicular involutions, and its components $Q_{a}$, $Q_{b}$, $Q_{c}$, and $Q_{d}$ are mutually uncorrelated with equal variance $\sigma _{q}^{2}$. The distribution of an $\mathbb {H}$-proper quaternion random variable is invariant under any four-dimensional isometric transformation.
The form of (96) can also be obtained by substituting (97) into (45)–(60).
Given (96), the determinant of the augmented quaternion-valued covariance matrix $C^{a}$
of an augmented quaternion random variable can be simplified to \begin{equation*}|C^{a}|=\left ({{4\sigma _{q}^{2}}}\right )^{4}. \tag {98}\end{equation*}
After rearranging (96), the inverse of the augmented quaternion-valued covariance matrix is \begin{equation*}\left ({{ C^{a}}}\right )^{-1}=\frac {1}{4\sigma _{q}^{2}}I_{4}. \tag {99}\end{equation*}
For an $\mathbb {H}$
-proper random variable, it can be shown (using (98) and (99)) that QGGD (94) simplifies to \begin{align*} f(\overrightarrow {q^{a}}& =[q,q^{\imath },q^{\jmath },q^{\kappa }]^{T}) \\ & =\frac {p\cdot \Gamma ^{2}\left ({{3/p}}\right )} {16\pi ^{2}\cdot \Gamma ^{3}\left ({{2/p}}\right )\cdot \sigma _{q}^{4}} \\ & \qquad \quad \cdot exp\left \{{{ -\left ({{\frac {\Gamma \left ({{3/p}}\right )}{4\sigma _{q}^{2}\Gamma \left ({{2/p}}\right )}}}\right )^{p} \cdot \left ({{\overrightarrow {q}^{T} \overrightarrow {q}}}\right )^{p} }}\right \} \tag {100}\end{align*}
that is parameterized by the shape parameter p and the parameter $\sigma _{q}$. It should be noted that $\sigma _{q}$ is not the standard deviation of the probability density function (100).
A. Maximum Likelihood Estimators
The maximum likelihood estimator of the shape parameter p is obtained by differentiating the log-likelihood function of (100) with respect to p. The resulting equation, which has to be solved numerically, is given by \begin{align*} & \hspace {-1pc}\frac {1}{p}\cdot \Psi \left ({{\frac {2}{p}}}\right ) +\frac {1}{2} +\frac {1}{p}\cdot \log \left [{{\frac {p}{2\cdot N} \sum _{i=1}^{N} \left ({{\overrightarrow {q_{i}}^{T} \overrightarrow {q_{i}}}}\right )^{p} }}\right ] \\ & \qquad \quad -\frac {\sum _{i=1}^{N} \left ({{\overrightarrow {q_{i}}^{T} \overrightarrow {q_{i}}}}\right )^{p}\cdot \log \left ({{\overrightarrow {q_{i}}^{T} \overrightarrow {q_{i}}}}\right )} {\sum _{i=1}^{N} \left ({{\overrightarrow {q_{i}}^{T} \overrightarrow {q_{i}}}}\right )^{p}} =0. \tag {101}\end{align*}
The solution of (101) is the estimate $\hat {p}$. Note the similarity to (92), from which the dependence on the covariance matrix C has been removed.
The maximum likelihood estimator of the $\sigma _{q}$
parameter is obtained by differentiating the log-likelihood function of (100) with respect to $\sigma _{q}$
. The resulting equation is given by \begin{equation*} \hat {\sigma _{q}}= 0.5 \left ({{ \frac {\Gamma \left ({{3/p}}\right )} {\Gamma \left ({{2/p}}\right )} }}\right )^{0.5} \left [{{ \frac {p}{2N} \sum _{i=1}^{N} \left ({{\overrightarrow {q_{i}}^{T} \overrightarrow {q_{i}}}}\right )^{p} }}\right ]^{1/(2p)}, \tag {102}\end{equation*}
where $\hat {\sigma _{q}}$
denotes the estimated value. The value of $\hat {p}$ found from (101) is substituted into (102) to determine $\hat {\sigma _{q}}$.
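A minimal sketch of this two-step procedure, assuming the quaternion samples are stored as an (N, 4) real array and SciPy is available (the root-search bracket is an arbitrary illustrative choice):
```python
import numpy as np
from scipy.special import digamma, gammaln
from scipy.optimize import brentq

def hproper_shape_equation(p, t):
    """Left-hand side of (101); t_i = q_i^T q_i are the squared quaternion norms."""
    tp = t**p
    s = tp.sum()
    return (digamma(2 / p) / p + 0.5
            + np.log(p * s / (2 * len(t))) / p
            - (tp * np.log(t)).sum() / s)

def estimate_hproper_qggd(Q, bracket=(0.05, 20.0)):
    """ML estimates (p_hat, sigma_q_hat) from H-proper quaternion samples Q (N x 4)."""
    t = np.einsum('ni,ni->n', Q, Q)                     # q_i^T q_i
    p_hat = brentq(lambda p: hproper_shape_equation(p, t), *bracket)
    # (102): closed-form sigma_q_hat once p_hat is known
    tp_mean = (p_hat / (2 * len(t))) * (t**p_hat).sum()
    sigma_q_hat = (0.5 * np.exp(0.5 * (gammaln(3 / p_hat) - gammaln(2 / p_hat)))
                   * tp_mean**(1 / (2 * p_hat)))
    return p_hat, sigma_q_hat
```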
An equation of the same form as (101) can be obtained using (92) and the fact that the covariance matrix has the following structure [49] \begin{equation*} C=\sigma _{q}^{2} I_{4} \tag {103}\end{equation*}
for an $\mathbb {H}$
-proper quaternion random variable.
The relative mean square error (RMSE) was used to evaluate the performance of the introduced estimators. RMSE was calculated from the equation \begin{equation*} RMSE=\frac {1}{M}\sum _{i=1}^{M}\frac {(\hat {x}_{i}-x)^{2}}{x^{2}}, \tag {104}\end{equation*}
where $\hat {x}_{i}$
is the value estimated by the ML method ($\hat {p}$ or $\hat {\sigma _{q}}$) and x is the true value (p or $\sigma _{q}$). M denotes the number of repetitions and was set to $M=10^{4}$
for all experiments.
First, the value of the shape parameter $\hat {p}$
was determined from (101). This value was then substituted into (102) to obtain $\hat {\sigma _{q}}$. The experiment was repeated M times and, for the resulting values $\hat {p}_{i}$
and $\hat {\sigma _{q}}_{i}$
, p RMSE and $\sigma _{q}$
RMSE were calculated, respectively.
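The Monte Carlo loop behind (104) can be sketched as follows; estimate_hproper_qggd is the estimator sketched above, while sample_hproper_qggd stands for whichever generator supplies the quaternion samples (one possible implementation is outlined further below).
```python
import numpy as np

def rmse(estimates, true_value):
    """Relative mean square error (104)."""
    e = np.asarray(estimates, dtype=float)
    return np.mean((e - true_value)**2 / true_value**2)

def run_experiment(p_true, sigma_true, N, M, sample_hproper_qggd):
    """Repeat the two-step estimation M times and return (p RMSE, sigma_q RMSE)."""
    p_hats, s_hats = [], []
    for _ in range(M):
        Q = sample_hproper_qggd(p_true, sigma_true, N)   # (N, 4) quaternion samples
        p_hat, s_hat = estimate_hproper_qggd(Q)
        p_hats.append(p_hat)
        s_hats.append(s_hat)
    return rmse(p_hats, p_true), rmse(s_hats, sigma_true)
```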
Equations (101) and (102) were validated with the MGGD generator [40], [42] for a fixed parameter $\sigma _{q}=0.5$ or $\sigma _{q}=5$ and a shape parameter varying in the range $p \in \langle 0.3,8\rangle $. This range of p was chosen because it is the most common range in the literature when modeling signals with GGD. The vector size N was set to $5\cdot 10^{4}$
. Since $\sigma ^{2}_{q}$
corresponds to the variance of the $Q_{a}$
, $Q_{b}$
, $Q_{c}$
, and $Q_{d}$
components, Figs. 1 and 2 have been prepared for the lower variance $\sigma ^{2}_{q}=0.5^{2}$ and for the relatively higher variance $\sigma ^{2}_{q}=5^{2}$.
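One common way to draw MGGD samples (which may or may not match the implementation of the cited generator [40], [42]) is the stochastic representation $\overrightarrow {x}=R\cdot M^{1/2}\overrightarrow {u}$, where $\overrightarrow {u}$ is uniform on the unit sphere and $R^{2p}$ is gamma distributed with shape $d/(2p)$ and scale 2. The sketch below follows this representation and specializes it to $\mathbb {H}$-proper data via (103).
```python
import numpy as np
from scipy.special import gamma

def sample_mggd(N, p, C, rng=None):
    """Draw N samples from the zero-mean MGGD (77) with covariance C and shape p."""
    rng = np.random.default_rng() if rng is None else rng
    d = C.shape[0]
    # scatter matrix M from the covariance C, cf. (78)
    M = d * gamma(d / (2*p)) / (2**(1/p) * gamma((d + 2) / (2*p))) * C
    L = np.linalg.cholesky(M)
    u = rng.standard_normal((N, d))
    u /= np.linalg.norm(u, axis=1, keepdims=True)                    # uniform directions
    r = rng.gamma(shape=d / (2*p), scale=2.0, size=N)**(1 / (2*p))   # radius R
    return r[:, None] * (u @ L.T)

def sample_hproper_qggd(p, sigma_q, N, rng=None):
    """H-proper quaternion samples: 4D MGGD with covariance sigma_q^2 * I_4, cf. (103)."""
    return sample_mggd(N, p, sigma_q**2 * np.eye(4), rng=rng)
```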
Fig. 1 shows that as the value of the shape parameter p increases, the RMSE of the shape parameter estimator also increases. However, the choice of $\sigma _{q}=0.5$ or $\sigma _{q}=5$ does not significantly affect the p RMSE.
In the case of the $\sigma _{q}$ parameter, the RMSE of its estimator decreases as the value of the shape parameter p increases (Fig. 2). Also in this case, the choice of $\sigma _{q}=0.5$ or $\sigma _{q}=5$ does not significantly affect the $\sigma _{q}$ RMSE.
In the next step, the influence of the sample size N on the RMSE value was examined. The sample size N was varied from $5\cdot 10^{2}$ to $5\cdot 10^{4}$, and the parameter $\sigma _{q}$ was fixed at 1. As expected, the accuracy of the ML estimators decreases with decreasing sample size; therefore, large sample sizes are recommended. The results are shown in Figs. 3 and 4.
According to Fig. 3, for small sample sizes the ML estimator of p has a smaller RMSE for the more impulsive distribution ($p=0.3$) than for the distribution with $p=2.5$. The opposite behavior is observed for the RMSE of the $\sigma _{q}$ estimator (Fig. 4): the $\sigma _{q}$ RMSE values for the more impulsive distribution ($p=0.3$) are larger than those for the distribution with $p=2.5$.
Additionally, (101) and (102) were validated with the MGGD generator with the fixed shape parameter $p=0.3$ or $p=2.5$ and the $\sigma _{q}$ parameter varying in the range $\sigma _{q} \in \langle 0.1,10\rangle $. It has already been observed in Figs. 1 and 2 that the p RMSE and $\sigma _{q}$ RMSE values are comparable for $\sigma _{q}=0.5$ and $\sigma _{q}=5$ at the same value of p. This can also be seen in Figs. 5 and 6, where, for $\sigma _{q}$ varying in the range $\langle 0.1,10\rangle $, the p RMSE and $\sigma _{q}$ RMSE values remain approximately constant on average.
Therefore, the p RMSE and $\sigma _{q}$
RMSE values are more influenced by p than by $\sigma _{q}$
for the considered values.
SECTION VI.
Conclusion
GGD and augmented quaternions have been used in many engineering applications. GGDs for different types of random variables can be found in the literature, which makes it possible to model different processes and to test different systems. The currently known GGDs with an augmented quaternion random variable are based on 3D GGD, that is, on random variables with three components. This paper uses 4D GGD with four random components, corresponding to the four components of a full quaternion. Based on this distribution, the probability density function for GGD with an augmented quaternion random variable has been given. Then the definition of $\mathbb {H}$-properness was recalled and this QGGD was simplified to GGD for an $\mathbb {H}$-proper quaternion random variable. For the latter distribution, the estimators have been derived using the ML method. In the experimental part, the performance of the estimators has been checked. It was shown that, under the RMSE criterion, a relatively large random sample is recommended.