
Kernel Estimation of Volterra Using an Adaptive Artificial Bee Colony Optimization and Its Application to Speech Signal Multi-Step Prediction




Abstract:

In order to solve the parameter selection problem that arises when the recursive least squares (RLS), least mean square (LMS), or normalized LMS (NLMS) algorithm is used to estimate the kernels of a second-order Volterra filter (SOVF), a novel adaptive gbest-guided artificial bee colony (AGABC) optimization algorithm is used to derive the Volterra kernels; that is, an AGABC-SOVF prediction model with an explicit configuration for speech signals is proposed. The AGABC algorithm modifies the solution search equation of the ABC algorithm by combining the best solution at the current iteration with neighborhood information, which not only preserves the exploration ability of the global optimization algorithm but also improves its exploitation. The AGABC-SOVF model is used to predict speech signal series of given English phonemes and sentences, as well as chaotic time series. Simulation results on benchmark functions show that the AGABC algorithm converges faster and achieves higher-quality solutions than the original ABC and other improved ABC algorithms. Prediction results of applying the AGABC-SOVF model to multi-step prediction of the Lorenz time series reveal its stability and convergence properties. For measured multi-frame speech signals, the prediction accuracy and multi-step prediction length of the AGABC-SOVF model are better than those of the ABC-SOVF model. The AGABC-SOVF model can thus better predict both chaotic time series and real measured speech signal series.
Topic: Collaboration for Internet of Things
The predictive comparison of the proposed AGABC-SOVF model for Lorenz time series.
Published in: IEEE Access ( Volume: 7)
Page(s): 49048 - 49058
Date of Publication: 06 December 2018
Electronic ISSN: 2169-3536

SECTION I.

Introduction

In real-world engineering, a large number of nonlinear problems arise. Over the last few decades, a variety of nonlinear adaptive filters have been established. Among them, the Volterra filter is one of the most commonly used models in system identification, chaos prediction, image processing, spread-spectrum communications, and other fields [1]–[10]. Jian et al. [11] proposed a method combining recursive least squares (RLS) and least mean square (LMS) to compensate for the nonlinear, memory-bearing distortion of an orthogonal frequency division multiplexing (OFDM) power amplifier. Many strategies based on the RLS and LMS algorithms have been designed for system identification with the Volterra model [12]–[15]. Batista and Rui [16] combined the LMS and normalized LMS (NLMS) algorithms to update the coefficients of a sparse-interpolated Volterra structure. In summary, the LMS, NLMS, and RLS algorithms have been widely applied to the Volterra model. However, neither the LMS algorithm nor the RLS algorithm converges efficiently enough, while the improved convergence rate of the NLMS algorithm comes at the expense of computational complexity. Moreover, from an application perspective, all three algorithms suffer from parameter selection issues, which may cause instability.

The artificial bee colony (ABC) algorithm, a stochastic optimization algorithm inspired by the behavior of honey bees, was introduced by Karaboga and Basturk [17]. The algorithm has the advantages of good stability, few control parameters, and simple implementation. However, the ABC algorithm also suffers from problems such as weak local search ability and a tendency to fall into evolutionary stagnation. To address these problems, many scholars have experimented with various modification strategies. One finding was that the position updating equation of the ABC algorithm yields good exploration at the expense of relatively poor exploitation [18]. Thus, many modified ABC algorithms have been proposed. For example, Gao et al. [19] introduced differential evolution to obtain the ABC/best/1 algorithm, and Zhu and Kwong [20] proposed an improved ABC algorithm named gbest-guided ABC (GABC). These improved ABC algorithms raise performance by generating candidate solutions close to the current global optimum, but the exploitation and exploration of the ABC algorithm remain unbalanced.

Based on the above analysis, an adaptive global-best ABC (AGABC) optimization algorithm is proposed in this paper, which provides a novel approach to the Volterra filter by modifying the solution search equation. To avoid solutions becoming trapped in local optima early on, a variable weight coefficient and neighbor information are introduced into the original search equation. Candidate solutions based on the current best solution are then generated to find the global optimum. Besides, to enhance global convergence, a chaotic map is used to initialize the population. The AGABC algorithm was proposed by Liu et al. [21] and demonstrated good performance. Building on this previous work, the AGABC optimization algorithm is applied to the second-order Volterra filter (SOVF) to solve for the kernel coefficients, and a novel AGABC-SOVF prediction model with an explicit configuration for speech signals is proposed. In experiments, the AGABC-SOVF model is applied to multi-step prediction of speech signals. Simulations of multi-step prediction with the AGABC-SOVF model are performed on the Lorenz chaotic time series and on real measured speech signal series, and the results are compared with those of the ABC-SOVF model. Finally, under a given error bound, the multi-step prediction lengths of the ABC-SOVF and AGABC-SOVF models are compared on the Lorenz chaotic time series and the phonetic symbol [g].

This paper is organized as follows. Section II introduces the Volterra filter model. Section III describes the original ABC algorithm. Section IV presents the AGABC-SOVF model. Section V presents and discusses numerical simulation results, and finally, Section VI offers conclusions.

SECTION II.

Volterra Filter Model

Modeling and identification tools for nonlinear systems are widely used in signal processing, and the Volterra filter is one such tool. The salient features of the second-order truncated Volterra filter are briefly described in this section. A second-order truncated Volterra filter [22], used in the modeling and analysis of nonlinear systems, is defined as follows:
\begin{align*}
\hat{y}(n)=h_{0}&+\sum_{i=0}^{m-1}h_{1}(i;n-1)x(n-i)\\
&+\sum_{i=0}^{m-1}\sum_{j=0}^{m-1}h_{2}(i,j;n-1)x(n-i)x(n-j)\tag{1}
\end{align*}
where $x(n)$ and $\hat{y}(n)$ are the input signal and the predicted output signal at time instant $n$, respectively, $h_{0}$ is a constant (for simplicity, $h_{0}=0$), and $m$ is the memory length of the filter. $h_{1}(i;n-1)$ and $h_{2}(i,j;n-1)$ are the filter kernel coefficients to be updated at time instant $n$: the linear and quadratic coefficients, respectively. Let
\begin{align*}
H(n-1)=&[h_{1}(0;n-1),h_{1}(1;n-1),\cdots,h_{1}(m-1;n-1),\\
&h_{2}(0,0;n-1),h_{2}(0,1;n-1),\cdots,h_{2}(m-1,m-1;n-1)]^{T}\\
X(n)=&[x(n),x(n-1),\cdots,x(n-(m-1)),\\
&x^{2}(n),x(n)x(n-1),\cdots,x^{2}(n-(m-1))]^{T}
\end{align*}
where $[\cdot]^{T}$ denotes vector transpose. Equation (1) can then be written in the vector form
\begin{equation*}
\hat{y}(n)=H^{T}(n-1)X(n).\tag{2}
\end{equation*}
Let $e(n)=y(n)-\hat{y}(n)$ be the a priori error signal, where $y(n)$ is the desired output at time instant $n$. The squared error at time instant $n$ is then
\begin{equation*}
e^{2}(n)=(y(n)-\hat{y}(n))^{2}.\tag{3}
\end{equation*}
For the second-order Volterra model of Equation (2), the iterative update of the standard LMS algorithm is
\begin{equation*}
H(n)=H(n-1)+2\mu e(n)X(n),\tag{4}
\end{equation*}
where $\mu$ is a convergence factor that controls the convergence speed and stability of the LMS algorithm. To ensure convergence, $0<\mu<1/\lambda_{\max}$, where $\lambda_{\max}$ is the largest eigenvalue of the autocorrelation matrix of the input signal vector $X(n)$ [23], [24]. Hence, when the largest eigenvalue is relatively large, $\mu$ has to take a relatively small value, which slows convergence.
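
To make the model concrete, the following minimal Python sketch (our own illustration, not the authors' code) builds the regressor $X(n)$ of Equation (2) from a scalar input sequence and performs one LMS kernel update per Equation (4); the array `x` and memory length `m` are assumptions of the sketch.

```python
import numpy as np

def sovf_regressor(x, n, m):
    """Build the input vector X(n) of the second-order Volterra filter.

    Sketch of Equations (1)-(2): the linear part stacks the m most recent
    samples; the quadratic part stacks every product x(n-i)*x(n-j) for
    i, j = 0..m-1, matching the double sum in Eq. (1). Requires n >= m-1."""
    lin = np.array([x[n - i] for i in range(m)])
    quad = np.array([x[n - i] * x[n - j]
                     for i in range(m) for j in range(m)])
    return np.concatenate([lin, quad])

def lms_step(H, X, y_n, mu):
    """One standard LMS update of the kernel vector H, Equations (2)-(4)."""
    e = y_n - H @ X              # a priori error e(n) = y(n) - H^T X(n)
    return H + 2.0 * mu * e * X  # Equation (4)
```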

SECTION III.

Conventional ABC Algorithm

The ABC algorithm, inspired by the foraging behavior of bees, is a swarm intelligence algorithm. In the ABC algorithm, the function extremum to be found is modeled as a food source and evaluated by a fitness function, while the solutions generated during optimization are modeled as bees searching the solution space of the function. There are three types of bees according to their division of labor: employed bees, onlooker bees, and scout bees. Employed bees randomly select food sources at the initial stage and search their neighborhoods, keeping improvements by greedy selection. Onlooker bees choose which employed bees to follow according to the feedback on food source quality and then also perform neighborhood search. Once a food source has not been improved for a certain number of trials, the corresponding employed bee transforms into a scout bee, which seeks a new food source. Through these transformations among the three types, the bees cooperate to find the best food source, which represents the optimal extremum of the function. The detailed algorithm is as follows [25].

First, initial food sources are generated randomly and spread uniformly over the search space:
\begin{equation*}
x_{ij}=x_{j}^{l}+rand(0,1)\cdot(x_{j}^{u}-x_{j}^{l})\tag{5}
\end{equation*}
where $i\in\{1,2,\cdots,SN\}$, $SN$ is the colony size, $j\in\{1,2,\cdots,n\}$, and $x_{j}^{l}$ and $x_{j}^{u}$ are the lower and upper bounds of the $j$th dimension, respectively.

A. Employed Bees Phase

Each employed bee searches the neighborhood of its initial food source. A new candidate food source $v_{ij}$ is generated in this process, updated using information from a neighbor of the present position $x_{ij}$:
\begin{equation*}
v_{ij}=x_{ij}+\varphi_{ij}\cdot(x_{ij}-x_{neighbor})\tag{6}
\end{equation*}
where $neighbor\in\{1,2,\cdots,SN\}$ and $j\in\{1,2,\cdots,n\}$ are chosen randomly with $neighbor\ne i$, and $\varphi_{ij}$ is a uniform random number in $[-1,1]$ that controls the search range. The employed bee chooses between $v_{ij}$ and $x_{ij}$ by greedy selection: if the new source is better, i.e., $fit(v_{ij})\ge fit(x_{ij})$, then $x_{ij}$ is replaced by $v_{ij}$; otherwise $x_{ij}$ is retained.

B. Onlooker Bees Phase

Each onlooker bee selects a food source shared by the employed bees with probability proportional to its fitness value:
\begin{equation*}
p_{i}=\frac{fit(x_{i})}{\sum\nolimits_{i=1}^{SN}fit(x_{i})}\tag{7}
\end{equation*}
where $fit(x_{i})$ is the fitness value of the $i$th food source. Food sources with high fitness attract more onlooker bees to search around them, which improves the convergence speed of the algorithm.

C. Scout Bees Phase

If a food source has not been improved after a certain number of trials, it is abandoned and the corresponding employed bee transforms into a scout bee. The scout bee randomly selects a new food source anywhere in the search space, generated as in Equation (5), and continues the neighborhood search.
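
As a summary of the three phases, here is a compact Python sketch of the conventional ABC loop (our own naming and structure, not the authors' code); taking fitness as $1/(1+f)$ for a non-negative objective $f$ is an assumption of the sketch rather than something fixed by the paper.

```python
import numpy as np

def abc_minimize(f, lb, ub, sn=25, limit=100, max_cycle=1000, seed=0):
    """Sketch of the conventional ABC algorithm minimizing f on [lb, ub].

    Assumes f(x) >= 0 so that fitness 1/(1+f) ranks smaller objectives
    higher; lb and ub are arrays giving per-dimension bounds."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    n = len(lb)
    X = lb + rng.random((sn, n)) * (ub - lb)             # Equation (5)

    def fit(x):
        return 1.0 / (1.0 + f(x))

    F = np.array([fit(x) for x in X])
    trials = np.zeros(sn, dtype=int)

    def neighbor_search(i):
        """Greedy neighbor search shared by employed and onlooker bees."""
        k = rng.choice([s for s in range(sn) if s != i])  # neighbor != i
        j = rng.integers(n)
        v = X[i].copy()
        v[j] = X[i, j] + rng.uniform(-1, 1) * (X[i, j] - X[k, j])  # Eq. (6)
        fv = fit(v)
        if fv >= F[i]:                                    # greedy selection
            X[i], F[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(max_cycle):
        for i in range(sn):                               # employed bees
            neighbor_search(i)
        for _o in range(sn):                              # onlooker bees
            i = rng.choice(sn, p=F / F.sum())             # Equation (7)
            neighbor_search(i)
        for i in range(sn):                               # scout bees
            if trials[i] > limit:
                X[i] = lb + rng.random(n) * (ub - lb)     # re-init, Eq. (5)
                F[i], trials[i] = fit(X[i]), 0
    return X[np.argmax(F)]
```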

SECTION IV.

The AGABC-SOVF Model

A. Proposed Model

The ABC algorithm, popular for its simplicity, is robust and easy to operate, and can be used to solve for the Volterra kernels. However, the search equation of the ABC algorithm is adept at exploration but less effective at exploitation: the algorithm is excellent at locating the function's optimal extremum globally but relatively weak at refining the extremum to high precision locally. For this reason, we modify the solution search equation in the onlooker bee phase of the ABC algorithm to better derive the Volterra kernels.

The nonlinear single-input single-output system is described by Equation (2). The kernel coefficients of the SOVF model are collected in a vector $H$, which plays the role of a food source in the AGABC algorithm. In the calculation, the evaluation function for each food source is defined as
\begin{equation*}
F(H_{i})=\frac{1}{L}\sum_{t=1}^{L}(y(t)-H_{i}^{T}\cdot X(t))^{2}\tag{8}
\end{equation*}
where $i=1,2,3,\cdots,SN$ and $L$ is the length of the signal data. Training seeks the kernel vector that iteratively minimizes this objective function. The algorithm proceeds as follows.
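
As an illustration (our own arrangement, not the authors' code), Equation (8) can be evaluated by stacking the regressors $X(t)$ of Section II row-wise into a matrix:

```python
import numpy as np

def kernel_cost(H, X_train, y_train):
    """Evaluation function F(H_i) of Equation (8): the mean squared error
    of the SOVF prediction H^T X(t) over the L training samples.

    X_train is an (L, len(H)) matrix whose t-th row is the Volterra
    regressor X(t) built as in Section II; y_train holds the L targets."""
    residuals = y_train - X_train @ H
    return float(np.mean(residuals ** 2))
```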

1) Initial Colony Phase

The $SN$ employed bees are distributed evenly and randomly over the whole search space to ensure diversity of solutions; each position in the search space corresponds to a food source, i.e., to a kernel vector $H$ of the SOVF model.

2) Employed Bees Phase

A neighbor node is randomly generated from the current food source, and the fitness values of the two positions are compared. If the neighbor is better, i.e., $F(V_{i})<F(H_{i})$, the current position is replaced by the neighbor node; otherwise an accumulative counter NP is incremented. The original search equation is
\begin{equation*}
V_{ij}=H_{ij}+\varphi_{ij}\cdot(H_{ij}-H_{neighbor})\tag{9}
\end{equation*}

3) Onlooker Bees Phase

In this stage, a new search equation is proposed to avoid bias toward the initial best solution; it draws information from both the global optimum and the neighbors. The equation is inspired by the converge-onlookers ABC (COABC) algorithm, in which new solutions are generated from the current global best food source [26]; simulation results for that algorithm show that sharing global information improves exploitation. The current food source positions are sorted by fitness, the best position is denoted $H_{best}$, and a neighbor food source is selected at random. The search equation is
\begin{align*}
H_{ij}=\omega\cdot H_{best}&+c_{1}\cdot\phi\cdot(H_{best}-H_{ij})\\
&+c_{2}\cdot\varphi\cdot(H_{neighbor}-H_{ij})\tag{10}
\end{align*}
where $\omega$ is the inertia weight that controls the impact of the global best solution at the current iteration, $c_{1}$ and $c_{2}$ are positive constants, $\phi$ and $\varphi$ are uniform random numbers in $[-1,1]$, and $j\in\{1,2,\cdots,n\}$. In the proposed method, $\omega$ is updated adaptively:
\begin{align*}
\omega=&\,\omega_{\min}+\rho\cdot(\omega_{\max}-\omega_{\min})\tag{11}\\
\rho=&\left(\left(-\cos\frac{cycle}{maxcycle}\cdot\pi+1\right)\cdot\frac{1}{2}\right)^{a}\tag{12}
\end{align*}
where $\omega_{\min}$ and $\omega_{\max}$ are the lower and upper bounds of $\omega$, $maxcycle$ is the maximum number of iterations, and $cycle$ is the current iteration. $\rho$ changes dynamically as the iterations increase, as shown in Figure 1: it relaxes the search early on so the space can be searched entirely, while later the information from the best solution and the neighbor steers the search away from stagnation around local minima.
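
A minimal sketch of this update follows, using the parameter values quoted later in Section V; the exponent $a$ in Equation (12) is not given a numerical value in this excerpt, so `a=1.0` below is an assumption.

```python
import numpy as np

def agabc_onlooker_update(H, H_best, H_neighbor, cycle, max_cycle,
                          w_min=0.3, w_max=1.0, c1=0.02, c2=0.02, a=1.0,
                          rng=None):
    """Onlooker-phase update of Equations (10)-(12).

    Sketch only: w_min, w_max, c1, c2 follow Section V; the exponent a in
    Eq. (12) has no numerical value in this excerpt, so a=1.0 is assumed."""
    rng = np.random.default_rng() if rng is None else rng
    rho = (0.5 * (1.0 - np.cos(np.pi * cycle / max_cycle))) ** a  # Eq. (12)
    w = w_min + rho * (w_max - w_min)                             # Eq. (11)
    phi = rng.uniform(-1.0, 1.0, size=np.shape(H))
    psi = rng.uniform(-1.0, 1.0, size=np.shape(H))
    return (w * H_best + c1 * phi * (H_best - H)                  # Eq. (10)
            + c2 * psi * (H_neighbor - H))
```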

FIGURE 1. The relationship between $cycle$ and $\rho$.

4) Scout Bees Phase

The algorithm then checks whether the accumulative counter NP has reached the preset limit. If it has, the current food position is reassigned randomly; otherwise, the next iteration proceeds. The reassignment equation is
\begin{equation*}
H_{ij}=H_{j}^{l}+rand(0,1)\cdot(H_{j}^{u}-H_{j}^{l})\tag{13}
\end{equation*}

5) Judgement

In each iteration of the AGABC algorithm, it is checked whether the current number of steps has reached the preset maximum number of iterations or whether the minimum error requirement has been met. If so, the algorithm terminates and outputs the vector $H$; otherwise, the next iteration proceeds. The optimal kernel coefficient vector of the Volterra model is
\begin{equation*}
H_{best}=\arg\min_{1\le i\le SN}F(H_{i})\tag{14}
\end{equation*}

B. Chaos Initialization

Population initialization is an indispensable part of bionic evolutionary algorithms: it strongly affects the convergence speed of the algorithm and constrains the quality of the final solution. In the traditional ABC algorithm, the bee population is initialized with uniformly distributed random sequences produced by a random-number generator. Because such sequences can repeat periodically, risking degraded population diversity, random initialization has clear limitations. It has been shown in the literature that chaotic time series, with their unique characteristics, improve significantly over random series [27]. Chaotic series can therefore be used to diversify the population and further enhance the convergence rate of the algorithm. The chaotic map $ch_{i}$ is generated by
\begin{equation*}
ch_{i+1}=4\cdot ch_{i}\cdot(1-ch_{i}),\quad 1\le i\le K\tag{15}
\end{equation*}
where $K$ is the length of the chaotic series and $ch_{0}\in(0,1)$ is a random number. $ch_{i}$ then replaces $rand(0,1)$ in Equation (13).
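
A sketch of the logistic-map generator (our own code); note that $ch_{0}$ should avoid the values 0, 0.25, 0.5, 0.75, and 1, for which the map of Equation (15) collapses onto a fixed point instead of remaining chaotic.

```python
import numpy as np

def chaotic_sequence(k, ch0=None, rng=None):
    """Logistic-map sequence of Equation (15), used in place of rand(0,1)
    when (re-)initializing food sources.

    ch0 is drawn from (0,1); values 0, 0.25, 0.5, 0.75, 1 are avoided
    because they lead the map into a fixed point."""
    rng = np.random.default_rng() if rng is None else rng
    ch = rng.uniform(0.01, 0.99) if ch0 is None else ch0
    seq = np.empty(k)
    for i in range(k):
        ch = 4.0 * ch * (1.0 - ch)     # Equation (15)
        seq[i] = ch
    return seq
```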

SECTION V.

Performance Evaluation of the Proposed Method

A. Performance Comparisons of the Original ABC, ABC/best/1, GABC and AGABC Algorithms

The classical benchmark functions presented in Table 1 were used to evaluate the performance of the proposed algorithm. The results of the AGABC algorithm were compared with those of the original ABC, ABC/best/1, and GABC algorithms. Each experiment was repeated 30 times with different random seeds, stopping when the number of iterations reached $maxcycle$.

TABLE 1. Numerical benchmark functions.

The common control parameters of the algorithms are as follows: the population size is 25 and the limit is 100. The AGABC algorithm has a few additional parameters: $\omega_{\min}=0.3$, $\omega_{\max}=1.0$, $c_{1}=0.02$, $c_{2}=0.02$ [21]. The best value (Best), mean value (Mean), and standard deviation (SD) are recorded for different dimensions and iteration counts; the maximum number of iterations is 1000 for dimension 20 and 1500 for dimension 30. As Table 2 and Table 3 show, the Mean values of the AGABC algorithm are smaller than those of the original ABC algorithm on all test functions. Moreover, when minimizing the multi-modal functions, the AGABC algorithm outperforms the other algorithms in both Mean value and convergence iteration. In short, the AGABC algorithm has better optimization performance.

TABLE 2. Performance comparisons of AGABC with other algorithms when the maximum number of iterations is 1000.

TABLE 3. Performance comparisons of AGABC with other algorithms when the maximum number of iterations is 1500.

Figure 2 shows the iteration convergence of the four algorithms, from which they can be compared directly. Each curve traces the function value over the iterations; when a curve flattens, the algorithm has reached its optimum. On the unimodal function, AGABC performs slightly worse than ABC/best/1 but better than the other two algorithms. On the three multi-modal functions, however, the AGABC algorithm has a clear performance advantage and is far superior to the other three algorithms in both convergence speed and solution accuracy.

FIGURE 2. Iteration convergence for the ABC, GABC, ABC/best/1 and AGABC algorithms.

B. Analysis of Multi-Step Prediction of Lorenz Chaotic Series

In the experiments, the chaotic system is iterated directly from its initial values. The integration step size and initial condition for the Lorenz series are taken as 0.01 and [−1 0 1], respectively. A simulated series of 1300 data points is computed with the fourth-order Runge-Kutta integration method; the first 300 points are used for training and the remaining 1000 for testing. The Lorenz system is defined as
\begin{equation*}
\begin{cases}
\dot{q}=a\left(r-q\right)\\
\dot{r}=cq-r-qs\\
\dot{s}=qr-bs
\end{cases}\tag{16}
\end{equation*}
where $a=16$, $b=4$, $c=45.92$, $q(0)=-1$, $r(0)=0$, $s(0)=1$.
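
For reference, the series can be reproduced with a short fourth-order Runge-Kutta sketch (our own code, under the stated parameters and initial condition):

```python
import numpy as np

def lorenz_series(n_points=1300, dt=0.01, a=16.0, b=4.0, c=45.92,
                  q0=-1.0, r0=0.0, s0=1.0):
    """Generate the Lorenz series of Equation (16) with fourth-order
    Runge-Kutta integration; returns an (n_points, 3) array of (q, r, s)."""
    def f(u):
        q, r, s = u
        return np.array([a * (r - q), c * q - r - q * s, q * r - b * s])

    u = np.array([q0, r0, s0])
    out = np.empty((n_points, 3))
    for k in range(n_points):
        k1 = f(u)
        k2 = f(u + 0.5 * dt * k1)
        k3 = f(u + 0.5 * dt * k2)
        k4 = f(u + dt * k3)
        u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)  # classic RK4 step
        out[k] = u
    return out
```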

Multi-step prediction errors for the $x$ component of the Lorenz series obtained with the ABC-SOVF model and the AGABC-SOVF model are shown in Figure 3 and Figure 4, respectively. Note that the Lorenz series is first normalized before simulation by
\begin{equation*}
y\left(i\right)=\frac{2\left(x\left(i\right)-\min x\left(i\right)\right)}{\max x\left(i\right)-\min x\left(i\right)},\quad i=1,2,\cdots,n,
\end{equation*}
where $x(i)$ is the original time series, $\max x(i)$ and $\min x(i)$ denote its maximum and minimum values, respectively, and $y(i)$ is the normalized series.
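
The same normalization, as a one-line sketch (note that, as written, the formula maps the series into [0, 2]):

```python
import numpy as np

def normalize(x):
    """Normalization applied before simulation; maps the series to [0, 2]."""
    x = np.asarray(x, dtype=float)
    return 2.0 * (x - x.min()) / (x.max() - x.min())
```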

FIGURE 3. Prediction of the Lorenz series using the ABC-SOVF model.

FIGURE 4. Prediction of the Lorenz series using the AGABC-SOVF model.

Figure 3 and Figure 4 show that the performance of the AGABC-SOVF model is more prominent: the first 100 predicted points in particular are almost identical to the original series. The predictions of both models track the original signal, but in the strongly fluctuating parts the predictions of the AGABC-SOVF model stay closer to the original values than those of the ABC-SOVF model. When a spike occurs in the original series, both models show errors, but the error of the AGABC-SOVF model is far smaller than that of the ABC-SOVF model. Figure 5 shows that the absolute error of the AGABC algorithm is much lower than that of the ABC algorithm, especially at larger sample numbers. Based on this analysis, the AGABC-SOVF model performs well in prediction and offers a clear improvement.

FIGURE 5. Comparison of the prediction errors of the two methods.

C. Comparisons of Prediction Length of Multi-Step Prediction

This experiment tests the multi-step prediction length achievable with the AGABC-SOVF model. The first 200 points of a speech signal are used as training data to configure the model, which then predicts beyond point 200; prediction is terminated when the error exceeds 0.05.
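
The stopping rule can be sketched as follows; `one_step_predict` stands for a fitted model's one-step predictor (a hypothetical helper, not from the paper):

```python
def prediction_length(y, one_step_predict, train_len=200, tol=0.05):
    """Sketch of the prediction-length test: predict forward from point
    `train_len`, feeding predictions back as inputs (multi-step prediction),
    and stop at the first absolute error above `tol`."""
    history = list(y[:train_len])
    steps = 0
    for n in range(train_len, len(y)):
        y_hat = one_step_predict(history)
        if abs(y[n] - y_hat) > tol:
            break
        history.append(y_hat)   # closed loop: feed back the prediction
        steps += 1
    return steps
```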

Figure 6 and Figure 7 show the AGABC-SOVF and ABC-SOVF models applied to prediction of the phonetic symbol [g] and the Lorenz series, respectively. The prediction results of both models are basically consistent with the original series for about 600 steps. Beyond that, the AGABC-SOVF model is able to continue predicting for roughly 300 further points with very small error. The AGABC-SOVF model is therefore clearly better than the ABC-SOVF model in prediction length.

FIGURE 6. Prediction length comparison of multi-step prediction for the phonetic symbol [g] using the AGABC-SOVF and ABC-SOVF models.

FIGURE 7. Prediction length comparison of multi-step prediction for the Lorenz series using the AGABC-SOVF and ABC-SOVF models.

D. Predictions of Real Measured Speech Signal Series Using the AGABC-SOVF and ABC-SOVF Models

In this experiment, the validity of the proposed nonlinear prediction model is verified on multi-frame speech signals. The experimental data comprise four phonetic symbols [b], [d], [g], [o] and the sentence "long time no see". To verify the validity and generalization ability of the method, the AGABC-SOVF model is also applied to prediction of the sentence sample. The pre-processing of the signal is the same as above, and the ABC-SOVF and AGABC-SOVF models are used to predict and compare the waveform, absolute error, and MSE. The experimental results for the four phonetic symbols and the sentence are shown in Table 4, Figure 8, and Figure 9, respectively.

TABLE 4. MSE comparison of different methods for phonetic symbols [b], [d], [g], and [o].

FIGURE 8. Predictive comparison of the two models for phonetic symbols [b], [d], [g] and [o].

FIGURE 9. Predictive comparison of the two models for the sentence "long time no see".

Figure 8 and Figure 9 show that the predicted values of both models follow the original signal closely. In the middle portions with larger fluctuations, however, the predictions of the AGABC-SOVF model are closer to the original signals. Moreover, Table 4 shows that for each frame the AGABC-SOVF model has much smaller MSE values than the ABC-SOVF model, confirming that the AGABC-SOVF prediction model proposed in this paper yields better predictions for multi-frame speech signals.

SECTION VI.

Conclusion

In this paper, the AGABC algorithm, based on a modified search equation in the onlooker stage, is proposed. The new search equation builds on the search equation of the original ABC algorithm and combines a variable weight coefficient with neighbor information. Experimental results show that the AGABC algorithm maintains a good balance between exploitation and exploration in global optimization. The AGABC algorithm is then applied to the SOVF model to estimate its kernel coefficients. The performance of the AGABC algorithm was compared with the original ABC and two improved algorithms, ABC/best/1 and GABC; the results show that the AGABC algorithm achieves improved performance on multi-modal functions. For the Lorenz series and real measured speech signal series, with MSE as the evaluation criterion, the multi-step prediction accuracy of the proposed AGABC-SOVF model is better than that of the ABC-SOVF model, and within a given error tolerance the AGABC-SOVF model predicts further ahead. Meanwhile, the AGABC-SOVF model better reflects the trends and regularities of speech signal series and fully meets the requirements of speech signal prediction. The proposed model provides a nonlinear analysis and a more valuable model structure for speech signal series, and opens a new way toward speech signal reconstruction and compression coding.
