Introduction
In real-world engineering, people often encounter a large number of nonlinear problems. Over the last few decades, a variety of nonlinear adaptive filters have been established. Among them, the Volterra filter is one of the most commonly used models in system identification, chaos prediction, image processing, spread-spectrum communications and other fields [1]–[10]. Jian et al. [11] proposed a method combining recursive least squares (RLS) and least mean squares (LMS) to compensate the nonlinear and memory distortion of orthogonal frequency division multiplexing (OFDM) power amplifiers. Many strategies based on the RLS and LMS algorithms have been designed for system identification with the Volterra model [12]–[15]. Batista and Rui [16] combined the LMS and normalized LMS (NLMS) algorithms to update the coefficients of the sparse-interpolated Volterra structure. In summary, the LMS, NLMS and RLS algorithms have been widely applied to the Volterra model. However, neither the LMS algorithm nor the RLS algorithm converges efficiently enough, while the improvement in convergence rate offered by the NLMS algorithm comes at the expense of computational complexity. Besides, from an application perspective, all three algorithms involve parameter selection issues, which may cause instability.
The artificial bee colony (ABC) algorithm, a stochastic optimization algorithm inspired by the foraging behavior of honey bees, was introduced by Karaboga and Basturk [17]. The algorithm has the advantages of good stability, few control parameters and simple implementation. However, the ABC algorithm also suffers from several problems, such as weak local search ability and a tendency to fall into evolutionary stagnation. To deal with these problems, many scholars have carried out experiments and studied modification strategies. One finding was that the position-updating equation of the ABC algorithm yields good exploration capability at the expense of relatively poor exploitation [18]. Thus, many modified ABC algorithms have been proposed. For example, Gao et al. [19] introduced differential evolution to obtain the ABC/best/1 algorithm. An improved ABC algorithm named gbest-guided ABC (GABC) was proposed by Zhu and Kwong [20]. These improved ABC algorithms raise performance, and the candidate solutions they generate are very close to the current global optimal solution, but the exploitation and exploration of the ABC algorithm remain unbalanced.
Based on the above analysis, an adaptive global-best ABC (AGABC) optimization algorithm is proposed in this paper, which represents a novel approach to the Volterra filter by modifying the solution search equation. To avoid solutions becoming trapped in a local optimum early on, a variable weight coefficient and neighbor information are introduced into the original search equation. Then, candidate solutions based on the current best solution are generated to find the global optimum. Besides, with the aim of enhancing global convergence, a chaotic model is added to initialize the population. The AGABC algorithm was proposed by Liu et al. [21], where it demonstrated good performance. Building on this previous work, the AGABC optimization algorithm is applied to the second-order Volterra filter (SOVF) to solve for the kernel coefficients. We propose a novel AGABC-SOVF prediction model with an explicit configuration for speech signals. In experiments, we apply the AGABC-SOVF model to multi-step prediction of speech signals. For the Lorenz chaotic time series and real measured speech signal series, simulations of multi-step prediction employing the AGABC-SOVF model are performed, and the results are compared with those of the ABC-SOVF model. Then, under given error bounds, the prediction lengths of multi-step prediction with the ABC-SOVF and AGABC-SOVF models are compared using the Lorenz chaotic time series and the phonetic symbol [g].
This paper is organized as follows. Section II introduces the Volterra filter model. Section III describes the original ABC algorithm. Section IV presents the AGABC-SOVF model. Section V shows and discusses numerical simulation results, and finally, Section VI offers conclusions.
Volterra Filter Model
Modeling and identification tools for nonlinear systems are often used in signal processing. One such widely utilized modeling tool is the Volterra filter. The salient features of the second-order truncated Volterra filter are briefly described in this section. A second-order truncated Volterra filter [22], used in the modeling and analysis of nonlinear systems, is defined as follows:\begin{align*} \hat{y}(n)=h_{0}+\sum\limits_{i=0}^{m-1}h_{1}(i;n-1)x(n-i)+\sum\limits_{i=0}^{m-1}\sum\limits_{j=0}^{m-1}h_{2}(i,j;n-1)x(n-i)x(n-j)\tag{1}\end{align*} where x(n) is the input, m is the memory length, h0 is a constant offset, and h1(i; n-1) and h2(i, j; n-1) are the first- and second-order Volterra kernel coefficients at time n-1.
Collecting the kernel coefficients and the corresponding input products into vectors gives \begin{align*} H(n-1)=&[h_{1}(0;n-1),h_{1}(1;n-1),\cdots,h_{1}(m-1;n-1), \\ &\;h_{2}(0,0;n-1),h_{2}(0,1;n-1),\cdots,h_{2}(m-1,m-1;n-1)]^{T} \\ X(n)=&[x(n),x(n-1),\cdots,x(n-(m-1)), \\ &\;x^{2}(n),x(n)x(n-1),\cdots,x^{2}(n-(m-1))]^{T}\end{align*}
so that the filter output can be written compactly as \begin{equation*} \hat{y}(n)=H^{T}(n-1)X(n).\tag{2}\end{equation*}
The squared prediction error is \begin{equation*} e^{2}(n)=(y(n)-\hat{y}(n))^{2},\tag{3}\end{equation*} where y(n) is the desired output,
and the LMS update of the kernel vector is \begin{equation*} H(n)=H(n-1)+2\mu e(n)X(n),\tag{4}\end{equation*} where μ denotes the step size.
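To make the model concrete, the following Python sketch (our own illustration, not the authors' code) builds the input vector X(n) of Equation (2) and performs the LMS kernel update of Equation (4); the constant offset h0 is omitted, matching the coefficient vector H(n-1):

```python
import numpy as np

def sovf_input_vector(x, n, m):
    """Build X(n) of Eq. (2): m linear taps followed by all m*m
    second-order products x(n-i)x(n-j), i, j = 0, ..., m-1."""
    taps = np.array([x[n - i] for i in range(m)])
    return np.concatenate([taps, np.outer(taps, taps).ravel()])

def lms_sovf_step(H, x, y, n, m, mu):
    """One LMS step on the kernel vector H, following Eqs. (2)-(4)."""
    X = sovf_input_vector(x, n, m)
    e = y[n] - H @ X                  # prediction error e(n)
    return H + 2.0 * mu * e * X, e    # Eq. (4)
```

The kernel vector H has length m + m², and running lms_sovf_step over successive n adapts H toward the minimum of the squared error in Equation (3).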
Conventional ABC Algorithm
The ABC algorithm, stimulated by the foraging behavior of bees, is a kind of swarm intelligence algorithm. In the ABC algorithm, the extremum of the function to be optimized is modeled by food sources, which are evaluated by a fitness function. The solutions generated during the optimization process are modeled as bees that search within the solution space of the function. According to their division of labor, there are three types of bees: employed bees, onlooker bees and scout bees. Employed bees randomly select food sources at the initial stage and search their neighborhoods, using greedy selection to improve their selections. Onlooker bees choose which employed bees to follow according to feedback on food source quality and transform into employed bees for the neighborhood search. Once the selection of a food source has not been updated for a certain number of trials, the corresponding employed bee transforms into a scout bee, and the scout bee continues to seek new food sources. Through transformations among the three types, the bees cooperate to find the best food source, which represents the optimal extremum of the function. The detailed algorithm is as follows [25].
First, in the search space, initial food sources are generated randomly and spread uniformly. This process can be defined as follows:\begin{equation*} x_{ij}=x_{j}^{l}+rand(0,1)\cdot(x_{j}^{u}-x_{j}^{l})\tag{5}\end{equation*} where rand(0,1) is a uniform random number and x_j^l and x_j^u are the lower and upper bounds of the j-th dimension of the search space.
A. Employed Bees Phase
Each employed bee searches the neighborhood of its food source. A new candidate food source is generated as \begin{equation*} v_{ij}=x_{ij}+\varphi_{ij}\cdot(x_{ij}-x_{neighbor})\tag{6}\end{equation*} where φ_ij is a random number in [−1, 1] and x_neighbor is the corresponding component of a randomly selected neighboring food source.
B. Onlooker Bees Phase
The food sources shared by the employed bees are selected by each onlooker bee, and the selection depends on the probability derived from their fitness values, which can be expressed as \begin{equation*} p_{i}=\frac{fit(x_{i})}{\sum\nolimits_{i=1}^{SN}{fit(x_{i})}}\tag{7}\end{equation*} where fit(x_i) denotes the fitness value of the i-th food source and SN is the number of food sources.
C. Scout Bees Phase
If a food source has not been updated after a certain number of visits, this source will be abandoned and the corresponding employed bee will be transformed into a scout bee. The scout bee randomly selects a new food source in the global scope to continue the neighborhood search. The generation of the new food source follows Equation (5).
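The three phases above can be summarized in the following minimal Python sketch. It is an illustration under common ABC conventions rather than a reference implementation; in particular, the fitness transform fit = 1/(1 + f) for non-negative objectives and the default maximum cycle count are assumptions, since the text does not spell them out:

```python
import numpy as np

def abc_minimize(f, lb, ub, sn=25, limit=100, max_cycle=500, seed=0):
    """Conventional ABC for minimizing f over the box [lb, ub] (Eqs. (5)-(7))."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    d = lb.size
    x = lb + rng.random((sn, d)) * (ub - lb)         # Eq. (5): initial sources
    cost = np.apply_along_axis(f, 1, x)
    trials = np.zeros(sn, dtype=int)

    def fitness(c):                                  # assumed fitness transform
        return 1.0 / (1.0 + c) if c >= 0 else 1.0 + abs(c)

    def neighbor_search(i):
        """Eq. (6): perturb one coordinate toward/away from a random neighbor."""
        j = rng.integers(d)
        k = (i + rng.integers(1, sn)) % sn           # any source except i
        v = x[i].copy()
        v[j] += rng.uniform(-1.0, 1.0) * (x[i, j] - x[k, j])
        return np.clip(v, lb, ub)

    def try_improve(i):
        v = neighbor_search(i)
        cv = f(v)
        if cv < cost[i]:                             # greedy selection
            x[i], cost[i], trials[i] = v, cv, 0
        else:
            trials[i] += 1

    for _ in range(max_cycle):
        for i in range(sn):                          # employed bees phase
            try_improve(i)
        p = np.array([fitness(c) for c in cost])     # onlooker bees phase
        p /= p.sum()
        for i in rng.choice(sn, size=sn, p=p):       # selection by Eq. (7)
            try_improve(i)
        for i in np.flatnonzero(trials > limit):     # scout bees phase
            x[i] = lb + rng.random(d) * (ub - lb)    # Eq. (5) again
            cost[i], trials[i] = f(x[i]), 0
    best = int(cost.argmin())
    return x[best], cost[best]
```

For example, abc_minimize(lambda v: float(np.sum(v ** 2)), [-5.0] * 10, [5.0] * 10) searches a 10-dimensional sphere function; the defaults sn = 25 and limit = 100 match the control parameters used in Section V.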
The AGABC-SOVF Model
A. Proposed Model
The ABC algorithm, popular for its simplicity, has strong robustness and good operability, so it can be used to solve for the Volterra kernels. However, the search formula of the ABC algorithm is adept at exploration but less effective in exploitation: the algorithm has an excellent capability for finding the optimal extremum of a function in the global scope, but is relatively weak at seeking the extremum with higher precision in a local scope. For this reason, we modify the solution search equation in the onlooker bees' phase of the ABC algorithm to better derive the Volterra kernels.
The nonlinear single-input single-output system can be described by Equation (2). In this linear-in-the-parameters form, the kernel coefficients of the SOVF model are represented as a vector H_i, which plays the role of a food source in the algorithm. Each candidate kernel vector is evaluated by the mean squared prediction error over the training data:\begin{equation*} F(H_{i})=\frac{1}{L}\sum\limits_{t=1}^{L}{(y(t)-H_{i}^{T}\cdot X(t))^{2}}\tag{8}\end{equation*} where L is the number of training samples, y(t) is the desired output and X(t) is the input vector of Equation (2).
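A direct Python transcription of this fitness function might look as follows (a sketch under the input-vector layout of Section II; the index offset of m - 1 merely guarantees that every tap exists):

```python
import numpy as np

def kernel_fitness(H, x, y, m, L):
    """Eq. (8): mean squared prediction error of a candidate kernel vector H,
    with X(t) built as in Eq. (2) (m linear taps plus m*m products)."""
    err = 0.0
    for t in range(m - 1, m - 1 + L):
        taps = np.array([x[t - i] for i in range(m)])
        X = np.concatenate([taps, np.outer(taps, taps).ravel()])
        err += (y[t] - H @ X) ** 2
    return err / L
```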
1) Initial Colony Phase
The initial colony of SN candidate kernel vectors H_i is generated by the chaotic initialization described in Section IV-B, which spreads the initial solutions more uniformly over the search space than purely random initialization.
2) Employed Bees Phase
A neighbor node is randomly generated at the current food source, and the fitness values of the two positions are compared; if the neighbor has a better fitness value, it replaces the current position. The neighborhood search takes the form \begin{equation*} H_{ij}=H_{ij}+\varphi_{ij}\cdot(H_{ij}-H_{neighbor})\tag{9}\end{equation*} where φ_ij is a random number in [−1, 1] and H_neighbor is the corresponding component of a randomly selected neighboring solution.
3) Onlooker Bees Phase
In this stage, a new search equation is proposed to avoid biasing the optimal solution during initialization, which can obtain information from the global optimum and from the neighbors. This equation is inspired by the converge-onlookers ABC (COABC) algorithm, in which the new solution is generated based on the current global optimal food source [26]. Simulation results of that algorithm show that sharing global information is beneficial for improving exploitation performance. The current positions of the food sources are sorted according to their fitness values, and the best position is denoted as H_best. The new solution is generated by \begin{align*} H_{ij}=\omega\cdot H_{best}+c_{1}\cdot\phi\cdot(H_{best}-H_{ij})+c_{2}\cdot\varphi\cdot(H_{neighbor}-H_{ij})\tag{10}\end{align*} where ϕ and φ are random factors, c_1 and c_2 are acceleration coefficients, H_neighbor is a randomly selected neighboring solution, and ω is a variable weight coefficient given by
\begin{align*} \omega=&\omega_{\min}+\rho\cdot(\omega_{\max}-\omega_{\min})\tag{11}\\ \rho=&\left(\left(1-\cos\left(\frac{cycle}{maxcycle}\cdot\pi\right)\right)\cdot\frac{1}{2}\right)^{a}\tag{12}\end{align*} where ω_min and ω_max are the lower and upper bounds of the weight, cycle is the current iteration number, maxcycle is the maximum number of iterations, and the exponent a shapes the schedule.
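The following sketch shows Equations (10)-(12) in Python. The ranges of the random factors (ϕ in [0, 1] for the best-guided term, φ in [−1, 1] for the neighbor term) are assumptions, since the excerpt does not state them:

```python
import numpy as np

def adaptive_weight(cycle, max_cycle, w_min, w_max, a):
    """Eqs. (11)-(12): weight rises from w_min to w_max on a cosine-shaped
    schedule raised to the power a."""
    rho = ((1.0 - np.cos(np.pi * cycle / max_cycle)) * 0.5) ** a
    return w_min + rho * (w_max - w_min)

def onlooker_candidate(H_i, H_best, H_neighbor, w, c1, c2, rng):
    """Eq. (10): draw a candidate toward the global best and a neighbor."""
    phi = rng.random(H_i.shape)               # assumed range [0, 1]
    psi = rng.uniform(-1.0, 1.0, H_i.shape)   # assumed range [-1, 1]
    return (w * H_best
            + c1 * phi * (H_best - H_i)
            + c2 * psi * (H_neighbor - H_i))
```

Early in the run the small weight keeps candidates spread out; as cycle approaches max_cycle, ω grows toward ω_max and the search concentrates around H_best.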
4) Scout Bees Phase
It is then judged whether the accumulated trial counter NP has reached the preset limit. If it exceeds the limit, the current food source is abandoned and randomly reassigned; otherwise, the next iteration continues. The reassignment can be written as \begin{equation*} H_{ij}=H_{j}^{l}+rand(0,1)\cdot(H_{j}^{u}-H_{j}^{l})\tag{13}\end{equation*} where H_j^l and H_j^u are the lower and upper bounds of the j-th kernel coefficient.
5) Judgement
In the iterations of the AGABC algorithm, each cycle checks whether the current step count is still less than or equal to the preset maximum number of iterations and whether the minimum error requirement has been met. When the stopping criterion is satisfied, the algorithm outputs the kernel vector \begin{equation*} H_{best}=\arg\min\nolimits_{1\le i\le SN}[F(H_{i})]\tag{14}\end{equation*} otherwise the next iteration begins.
B. Chaos Initialization
Population initialization is an indispensable and vital part of a bionic evolutionary algorithm. It not only has a significant impact on the convergence speed of the algorithm, but also imposes certain constraints on the quality of the final solution. In the traditional ABC algorithm, initialization of the honeybee population is mainly achieved through a uniformly distributed random sequence generated by a random function. Because such a random sequence exhibits periodic repetition, which risks degrading the individual diversity of the population, this method of random initialization has clear limitations. It has been shown in the literature that chaotic time series, with their unique characteristics, bring significant improvements over random series [27]. Therefore, a chaotic series can be used to promote the diversity of the population and further enhance the convergence rate of the algorithm. The chaotic map used here is the logistic map \begin{equation*} ch_{i+1}=4\cdot ch_{i}\cdot(1-ch_{i}),\quad 1\le i\le K\tag{15}\end{equation*} where ch_i ∈ (0, 1) is the i-th chaotic variable and K is the length of the chaotic sequence; the sequence is mapped into the search range to produce the initial food sources.
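A typical way to apply Equation (15) is sketched below; the exact scaling into the search range is our assumption, as the excerpt does not show it:

```python
import numpy as np

def chaotic_init(sn, d, lb, ub, seed=0):
    """Generate sn initial food sources in d dimensions by iterating the
    logistic map of Eq. (15) and scaling each value into [lb, ub]."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    ch = rng.uniform(0.01, 0.99, d)      # initial chaotic variables in (0, 1)
    pop = np.empty((sn, d))
    for i in range(sn):
        ch = 4.0 * ch * (1.0 - ch)       # Eq. (15)
        pop[i] = lb + ch * (ub - lb)     # map chaotic values to the search range
    return pop
```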
Performance Evaluation of the Proposed Method
A. Performance Comparisons of Original ABC, ABC/BEST/1, GABC and AGABC Algorithms
Some classical benchmark functions, presented in Table 1, were used to evaluate the performance of the proposed algorithm. The results of the AGABC algorithm are compared with those of the original ABC, ABC/best/1 and GABC algorithms. Each experiment was repeated 30 times with different random seeds, and an experiment stops when the number of iterations reaches the preset maximum.
The common control parameters of the algorithms are given as follows. The population size is 25, and the limit is 100. The AGABC algorithm has a few additional parameters: the acceleration coefficients c_1 and c_2, the weight bounds ω_min and ω_max, and the exponent a in Equation (12).
Figure 2 shows the iteration convergence of the four algorithms, from which the comparisons between the algorithms can be clearly seen. The curves in the figure represent the changes of the function values over the iterations; when a curve flattens into a horizontal line, the algorithm has reached its optimum. In the optimization of the unimodal function, the effect of AGABC is slightly worse than ABC/best/1, but better than the other two algorithms. However, on the three multi-modal functions the AGABC algorithm has obvious performance advantages, being far superior to the other three algorithms in both convergence speed and solution accuracy.
B. Analysis of Multi-Step Prediction of Lorenz Chaotic Series
In the experiments, the chaotic system is integrated directly from its initial values. The integration step size and initial condition for the Lorenz series are taken as 0.01 and [−1 0 1], respectively. A simulation series with 1300 data points is computed using the fourth-order Runge-Kutta integration method; the first 300 data points are used for training and the remaining 1000 for testing. The Lorenz system is defined as follows \begin{equation*} \begin{cases} \dot{q}=a\left({r-q}\right) \\ \dot{r}=-qs+cq-r \\ \dot{s}=qr-bs \\ \end{cases}\tag{16}\end{equation*}
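The series can be generated as follows; the parameter values a = 10, b = 8/3, c = 28 are the standard Lorenz choices and are an assumption, since the excerpt does not state them:

```python
import numpy as np

def lorenz_series(n=1300, dt=0.01, init=(-1.0, 0.0, 1.0),
                  a=10.0, b=8.0 / 3.0, c=28.0):
    """Integrate the Lorenz system of Eq. (16) with fourth-order Runge-Kutta;
    a, b, c default to the standard parameter values (an assumption here)."""
    def deriv(v):
        q, r, s = v
        return np.array([a * (r - q), -q * s + c * q - r, q * r - b * s])

    v = np.array(init, dtype=float)
    out = np.empty((n, 3))
    for i in range(n):
        k1 = deriv(v)
        k2 = deriv(v + 0.5 * dt * k1)
        k3 = deriv(v + 0.5 * dt * k2)
        k4 = deriv(v + dt * k3)
        v = v + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        out[i] = v
    # One coordinate (e.g. q = out[:, 0]) serves as the scalar series:
    # the first 300 samples for training, the remaining 1000 for testing.
    return out
```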
Before the multi-step prediction, the series is normalized according to \begin{equation*} y\left({i}\right)=\frac{2\left({x\left({i}\right)-\min x\left({i}\right)}\right)}{\max x\left({i}\right)-\min x\left({i}\right)},\quad i=1,2,\cdots,n,\end{equation*} where the minimum and maximum are taken over the whole series, and the multi-step prediction errors of the two models are computed on the normalized data.
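A direct transcription of this scaling is given below (note that, as written, the formula maps the series into [0, 2]; keeping the minimum and maximum allows predictions to be mapped back):

```python
import numpy as np

def normalize(x):
    """Apply the scaling above; also return (min, max) so that predicted
    values can be mapped back to the original range."""
    lo, hi = float(x.min()), float(x.max())
    return 2.0 * (x - lo) / (hi - lo), (lo, hi)

def denormalize(y, lo, hi):
    """Invert the normalization for comparison against the raw signal."""
    return y * (hi - lo) / 2.0 + lo
```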
Figure 3 and Figure 4 show that the performance of AGABC-SOVF is more prominent, especially for the prediction of the first 100 points, which is almost identical to the original series. The predicted values of both models reflect the original signal values intuitively, but in the parts with large fluctuations the predicted values of the AGABC-SOVF model are closer to the original values than those of the ABC-SOVF model. When a spike occurs in the original series, the predicted results of both models show errors, but the error of the AGABC-SOVF model is far smaller than that of the ABC-SOVF model. From Figure 5 we can see that the absolute error of the AGABC algorithm is much lower than that of the ABC algorithm, especially at larger sample numbers. Based on the above analysis, it can be concluded that the AGABC-SOVF model performs well in prediction and has an obvious advantage over the ABC-SOVF model.
C. Comparisons of Prediction Length of Multi-Step Prediction
This experiment is mainly used to test the prediction length of multi-step prediction with the AGABC-SOVF model. The first 200 points of a speech signal of a certain length are used for training to configure the model; the model then predicts the points after the first 200, and prediction is terminated once the error exceeds 0.05.
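One plausible reading of this procedure is sketched below, assuming one-step-ahead SOVF prediction with each predicted sample fed back into the input window; the exact windowing is our assumption:

```python
import numpy as np

def prediction_length(H, x, m, start=200, tol=0.05):
    """Multi-step prediction with an SOVF kernel vector H: each new sample is
    predicted from the previous m values and fed back into the window;
    prediction stops once the absolute error exceeds tol."""
    buf = list(x[:start])                    # seed with known samples
    for n in range(start, len(x)):
        taps = np.array(buf[-m:][::-1])      # x(n-1), ..., x(n-m)
        X = np.concatenate([taps, np.outer(taps, taps).ravel()])
        pred = float(H @ X)
        if abs(pred - x[n]) > tol:           # terminate on large error
            return n - start                 # achieved prediction length
        buf.append(pred)                     # feed the prediction back
    return len(x) - start
```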
Figure 6 and Figure 7 show the predictions of the phonetic symbol [g] and the Lorenz series using the AGABC-SOVF and ABC-SOVF models, respectively. We note that the prediction results of both models are basically consistent with the original series for about 600 steps. Beyond that, the AGABC-SOVF model is able to continue the prediction for roughly 300 further points with very small error. The AGABC-SOVF model is therefore clearly better than the ABC-SOVF model in prediction length.
Figure 6. Predictive length comparison of multi-step prediction for phonetic symbol [g] when using the AGABC-SOVF and ABC-SOVF models.
Figure 7. Predictive length comparison of multi-step prediction for the Lorenz series when using the AGABC-SOVF and ABC-SOVF models.
D. Predictions of Real Measured Speech Signal Series Using AGABC-SOVF and ABC-SOVF Models
In this experiment, the validity of the proposed nonlinear prediction model is verified on multi-frame speech signals. The experimental data contain four phonetic symbols [b], [d], [g], [o] and a sentence (long time no see). To verify the validity and generalization ability of our method, the AGABC-SOVF model is also applied to the prediction of the sentence sample. The pre-processing of the signal is the same as above, and the ABC-SOVF and AGABC-SOVF models are used for prediction and compared in terms of waveform, absolute error and MSE. The experimental results for the four phonetic symbols and the sentence are shown in Table 4, Figure 8 and Figure 9, respectively.
From Figure 8 and Figure 9, it can be seen that the predicted values of both models intuitively reproduce the original signal, but in the intermediate part with larger fluctuations, the values predicted by the AGABC-SOVF model are closer to the original signals. For each frame of data, Table 4 shows that the AGABC-SOVF model has much smaller MSE values than the ABC-SOVF model, which indicates that the AGABC-SOVF prediction model proposed in this paper gives better prediction results for multi-frame speech signals.
Conclusion
In this paper, the AGABC algorithm, based on a modified search equation in the onlooker stage, is proposed. In the algorithm, we study a new search equation, derived from the search equation of the original ABC algorithm, that combines a variable weight coefficient with neighbor information. Experimental results show that the AGABC algorithm guarantees a good balance between exploitation and exploration in global optimization. The AGABC algorithm is then applied to the SOVF model to estimate the kernel coefficients. The performance of the AGABC algorithm was compared with the original ABC algorithm and two improved algorithms, ABC/best/1 and GABC; the results show that the AGABC algorithm achieves improved performance on multi-modal functions. For the Lorenz series and real measured speech signal series, when MSE is used as the evaluation criterion, the multi-step prediction accuracy of the proposed AGABC-SOVF model is better than that of the ABC-SOVF model. Within a given error tolerance, the AGABC-SOVF model also achieves a longer prediction length than the ABC-SOVF model. Meanwhile, the AGABC-SOVF model better reflects the trends and regularities of speech signal series and fully meets the requirements of speech signal prediction. The proposed model provides a nonlinear analysis and a more valuable model structure for speech signal series, and opens up a new way toward speech signal reconstruction and compression coding.