I. Introduction
The least mean square (LMS) algorithm is arguably the most widely used adaptive signal processing algorithm, owing to its simple structure and robustness. However, the mean square error (MSE) convergence of LMS depends strongly on the conditioning of the input correlation matrix: large eigenvalues impose an upper bound on the step size required for stability, while small eigenvalues give rise to slow convergence modes in the MSE [1], [2]. The best operating condition for LMS is therefore when all the eigenvalues are identical, as for a white input; for correlated inputs, the data must be pre-processed by an appropriate whitening transform. The Newton-LMS and self-orthogonalizing transform-domain LMS algorithms proposed in [3]–[6] are whitening transforms of this kind, improving the convergence of LMS by preconditioning the negative gradient with the inverse of the estimated input autocorrelation matrix. Newton-LMS algorithms, however, suffer from the high computational cost of computing the inverse correlation matrix, together with numerical ill-conditioning when the input signals are highly correlated. From the perspective of implementation complexity, transform-domain LMS algorithms are more attractive: after an orthogonal transform, the input autocorrelation matrix is approximately diagonalized in an asymptotic sense, so that a simple power normalization of each transformed coefficient approximately whitens the input.
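To make the contrast concrete, the following minimal sketch (our own illustration, not an algorithm from the cited works; the AR(1) input model, step sizes, and the choice of the DCT as the orthogonal transform are assumptions made for demonstration) compares standard LMS with a DCT-based transform-domain LMS whose per-coefficient step sizes are normalized by running power estimates.

```python
# Illustrative sketch (assumed setup, not from the paper): time-domain LMS
# versus a DCT-based transform-domain LMS with per-bin power normalization.
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(0)
N, L, mu = 5000, 8, 0.01

# Colored input: AR(1) process, giving a large eigenvalue spread.
x = np.zeros(N)
for n in range(1, N):
    x[n] = 0.9 * x[n - 1] + rng.standard_normal()
h = rng.standard_normal(L)            # unknown system to identify
d = np.convolve(x, h)[:N]             # desired signal

w_lms = np.zeros(L)                   # time-domain LMS weights
w_td = np.zeros(L)                    # transform-domain LMS weights
p = np.ones(L)                        # per-bin power estimates
beta, eps = 0.99, 1e-6                # smoothing factor, regularizer

for n in range(L, N):
    u = x[n - L + 1:n + 1][::-1]      # current input vector

    # Standard LMS: w <- w + mu * e * u
    e = d[n] - w_lms @ u
    w_lms += mu * e * u

    # Transform-domain LMS: orthogonal transform of the input, then
    # normalize each coefficient's step by its running power estimate.
    z = dct(u, norm='ortho')
    p = beta * p + (1 - beta) * z**2
    e_t = d[n] - w_td @ z
    w_td += (mu / (p + eps)) * e_t * z
```

On colored inputs such as the AR(1) process above, the power-normalized transform-domain update typically converges considerably faster in MSE than plain LMS, since the DCT approximately decorrelates the input and the normalization equalizes the modal time constants.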