
Nonsingular Gradient Descent Algorithm for Interval Type-2 Fuzzy Neural Network


Abstract:

Interval type-2 fuzzy neural networks (IT2FNNs) are widely used to model nonlinear systems. Unfortunately, gradient descent-based IT2FNNs with uncertain variances always suffer from low convergence speed due to their inherent singularity. To cope with this problem, a nonsingular gradient descent algorithm (NSGDA) is developed in this article to update the IT2FNN. First, the widths of the type-2 fuzzy rules are transformed into root inverse variances (RIVs), which always satisfy the sufficient condition of differentiability. Second, singular RIVs are reformulated by nonsingular Shapley-based matrices associated with the type-2 fuzzy rules. This reformulation averts the convergence stagnation caused by the zero derivatives of singular RIVs, thereby sustaining gradient convergence. Third, an integrated-form update strategy (IUS) is designed to obtain the derivatives of the parameters of the IT2FNN, including the RIVs, centers, weight coefficients, deviations, and proportionality coefficient. These parameters are packed into multiple subvariable matrices, which accelerate gradient convergence through parallel computation instead of sequential iteration. Finally, experiments show that the proposed NSGDA-based IT2FNN improves convergence speed through the improved learning algorithm.
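The width-to-RIV transformation described above can be illustrated with a minimal numeric sketch. The Gaussian membership form mu(x) = exp(-(x - c)^2 / (2*sigma^2)) and the symbol eta = 1/sigma for the RIV are assumptions for illustration, since the paper's exact notation is not reproduced here. The point is that the eta-parameterization is defined and differentiable for every real eta, including eta = 0, where the sigma-form is undefined:

```python
import math

def mu_width(x, c, sigma):
    # Gaussian membership parameterized by the width sigma;
    # undefined (division by zero) when sigma == 0
    return math.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def mu_riv(x, c, eta):
    # The same membership parameterized by the root inverse
    # variance eta = 1/sigma; smooth for every real eta
    return math.exp(-0.5 * (eta * (x - c)) ** 2)

def dmu_riv_deta(x, c, eta):
    # Analytic derivative with respect to eta; it vanishes at
    # eta == 0, the singular point that stalls plain gradient descent
    return -(eta * (x - c) ** 2 * mu_riv(x, c, eta))

# Both parameterizations agree wherever sigma != 0:
print(abs(mu_width(2.0, 0.0, 0.5) - mu_riv(2.0, 0.0, 2.0)) < 1e-12)  # True

# At eta == 0 the membership is the constant 1 and its derivative is zero:
print(mu_riv(2.0, 0.0, 0.0))                 # 1.0
print(dmu_riv_deta(2.0, 0.0, 0.0) == 0.0)    # True
```

The zero derivative at eta = 0 is the "zero derivatives of singular RIVs" mentioned in the abstract; in NSGDA it is handled by the nonsingular Shapley-based matrices rather than by the raw gradient.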
Page(s): 8176 - 8189
Date of Publication: 07 December 2022

PubMed ID: 37015616


I. Introduction

Interval type-2 fuzzy neural networks (IT2FNNs) possess powerful fuzzy reasoning and learning capabilities and have been increasingly used in the identification of nonlinear systems [1], [2], [3]. Generally, IT2FNNs require an optimization technique to update the type-2 fuzzy rules during application [4], [5]. The most frequently used technique is the gradient descent algorithm, owing to its ease of implementation [6], [7], [8], [9]. Nevertheless, once the variances associated with the Gaussian membership functions or the weights of an IT2FNN become infinitesimal, first-order gradient learning typically falls into extreme oscillation or a long plateau with very small change. Under such singularity phenomena, the algorithm is likely to become trapped in local minima [10], [11], [12].
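The plateau behavior described above can be reproduced with a toy one-rule example. The model form y_hat = w * exp(-0.5 * (eta * (x - c))**2) and all symbol names here are illustrative assumptions rather than the paper's notation; the point is only that the squared-error gradient with respect to the width parameter eta is proportional to eta itself, so gradient descent started near the singular point eta ~ 0 barely moves:

```python
import math

def grad_eta(x, c, eta, w, y):
    # Gradient of the squared error 0.5*(w*mu - y)**2 with respect to eta,
    # where mu = exp(-0.5*(eta*(x - c))**2). The leading factor eta makes
    # the gradient vanish as eta -> 0, producing the plateau
    mu = math.exp(-0.5 * (eta * (x - c)) ** 2)
    return -(w * mu - y) * w * mu * eta * (x - c) ** 2

x, c, w, y, lr = 1.0, 0.0, 1.0, 0.5, 0.1

for eta0 in (1e-6, 1.0):          # near-singular start vs. well-scaled start
    eta = eta0
    for _ in range(100):           # plain first-order gradient descent on eta
        eta -= lr * grad_eta(x, c, eta, w, y)
    # from eta0 = 1e-6 the parameter is still tiny after 100 steps,
    # while from eta0 = 1.0 it approaches the optimum sqrt(2*ln 2)
    print(eta0, "->", eta)
```

Escaping this stagnation without abandoning gradient descent is precisely the motivation for the nonsingular reformulation developed in this article.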

