Training matrix parameters by Particle Swarm Optimization using a fuzzy neural network for identification


Abstract:

In this article, Particle Swarm Optimization, a population-based method, is applied to train the matrix parameters, namely the centers and standard deviations, of a Radial Basis Function Fuzzy Neural Network. Least Squares and Recursive Least Squares are applied to train the weights of this fuzzy neural network. Four data sets are used to examine and demonstrate that Particle Swarm Optimization is an effective method for training these complicated matrices as antecedent-part parameters.
Date of Conference: 25-28 November 2007
Date Added to IEEE Xplore: 24 October 2008
Conference Location: Kuala Lumpur, Malaysia

1. INTRODUCTION

Fuzzy systems have proved very useful in control, pattern recognition, signal processing, and nonlinear system modeling. Their use has become popular in the soft computing area in recent years because of their similarity to human reasoning. Several modeling methods have been proposed for fuzzy neural networks in recent years [1]–[5], such as the Mamdani model, simplified reasoning, input-function reasoning, and TSK. These are employed in many such networks, for example adaptive network-based fuzzy inference systems (ANFIS) and Radial Basis Function Fuzzy Neural Networks (RBFFNN), in the search for better training strategies and results. An RBFFNN has three sets of parameters to be trained: the centers, the standard deviations, and the weights, where the weights are the values of the output membership functions. It is a kind of fuzzy neural network because if the density of the input membership functions is greater around one value, the output membership function spreads more around that value.
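As a concrete illustration of this division of labor, the sketch below shows a Gaussian RBF network whose antecedent parameters, the centers and standard deviations, are tuned by a standard global-best PSO, while the consequent weights are solved by Least Squares inside the fitness evaluation. The paper provides no code; this Python sketch is not the authors' implementation, and every function name and hyperparameter value in it is an assumption chosen for illustration.

import numpy as np

def rbf_design_matrix(X, centers, stds):
    # Gaussian basis activations: Phi[n, j] = exp(-||x_n - c_j||^2 / (2 s_j^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * stds[None, :] ** 2))

def fitness(params, X, y, n_rules, n_in):
    # Decode one particle into (centers, stds), fit the consequent weights
    # by batch Least Squares, and return the resulting mean squared error.
    centers = params[:n_rules * n_in].reshape(n_rules, n_in)
    stds = np.abs(params[n_rules * n_in:]) + 1e-6  # keep spreads positive
    Phi = rbf_design_matrix(X, centers, stds)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return float(np.mean((Phi @ w - y) ** 2))

def pso_train(X, y, n_rules=5, n_particles=20, iters=100,
              inertia=0.7, c1=1.5, c2=1.5, seed=0):
    # Global-best PSO over the flattened (centers, stds) vector.
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    dim = n_rules * n_in + n_rules
    pos = rng.uniform(-1.0, 1.0, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_f = np.array([fitness(p, X, y, n_rules, n_in) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([fitness(p, X, y, n_rules, n_in) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

The recursive variant (RLS) mentioned in the abstract would replace the batch lstsq solve with an online update. Either way, the weight fit is exact given the antecedents, so PSO only has to search the nonlinear space of centers and spreads.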
