
The Bounded Capacity of Fuzzy Neural Networks (FNNs) Via a New Fully Connected Neural Fuzzy Inference System (F-CONFIS) With Its Applications



Abstract:

In this paper, a fuzzy neural network (FNN) is transformed into an equivalent three-layer fully connected neural fuzzy inference system (F-CONFIS). The F-CONFIS is a new type of neural network whose links between the input layer and the hidden layer carry dependent and repeated weights. Several special properties of these dependent, repeated links are revealed, and a new learning algorithm that exploits them is proposed for the F-CONFIS. The F-CONFIS is then applied to find the capacity of the FNN: a new theorem proposed in this paper gives both a lower bound and an upper bound on that capacity. Several examples are illustrated with satisfactory simulation results for the capacity of the F-CONFIS (or the FNN), including “within capacity training of the FNN,” “over capacity training of the FNN,” “training by increasing the capacity of the FNN,” and “impact of the capacity of the FNN in clustering Iris data.” Knowing the capacity of the F-CONFIS, or FNN, has emerging value for all engineering applications that use fuzzy neural networks: such applications should not exceed the capacity of the FNN, in order to avoid unexpected results. The clustering of Iris data using the FNN illustrated in this paper is one of the most relevant engineering applications in this regard.
Published in: IEEE Transactions on Fuzzy Systems ( Volume: 22, Issue: 6, December 2014)
Page(s): 1373 - 1386
Date of Publication: 26 November 2013

I. Introduction

In the past decade, fuzzy neural networks (FNNs) have been widely used in many subject areas and engineering applications for problem solving, such as pattern recognition, intelligent adaptive control, regression or density estimation, and so on [1]–[6]. The FNN combines the linguistic-information handling of fuzzy systems with the learning ability of a neural network (NN) [7]–[12]. If the FNN is properly constructed, then it satisfies the universal approximation theorem (UAT), i.e., a properly constructed FNN can approximate any nonlinear function [13]–[16]. However, the universal approximation theorem does not tell us how to properly construct and tune the FNN. That is, an FNN designed for a given application by a human expert must have constraints, such as the maximum number of input–output samples it can approximate or memorize. Similar to the discussion of the capacity of multilayer NNs [17], the capacity of the FNN is thus defined as the maximum number of arbitrary distinct input samples that can be mapped to desired output samples with zero error, where the training samples are assumed to be independent. Training beyond this capacity may cause the training process of the FNN to diverge. During the past decade, the capacity of associative memories and multilayer perceptrons (MLPs) has been derived under the assumption of a fully connected NN [18]–[24].
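The capacity notion above can be made concrete with a small numerical sketch. The following is not the paper's F-CONFIS algorithm; it is a minimal, assumed illustration using a one-hidden-layer network with a fixed random hidden layer (tanh units) and a linear output layer fitted by least squares. Generically, a linear readout over H hidden features can interpolate at most H arbitrary targets exactly, so training within that capacity yields (near-)zero error, while training over it leaves a nonzero residual — mirroring the "within capacity" and "over capacity" regimes discussed here. All function and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_training_error(n_samples, n_hidden, n_inputs=4):
    """Fit arbitrary targets with a fixed random hidden layer and a
    least-squares output layer; return the worst-case training error."""
    # Arbitrary distinct inputs and arbitrary real-valued targets.
    X = rng.standard_normal((n_samples, n_inputs))
    y = rng.standard_normal(n_samples)
    # Fixed random input-to-hidden weights, tanh activations.
    W = rng.standard_normal((n_inputs, n_hidden))
    H = np.tanh(X @ W)
    # Only the linear output weights are trained (least squares).
    w_out, *_ = np.linalg.lstsq(H, y, rcond=None)
    return float(np.max(np.abs(H @ w_out - y)))

# Within capacity: samples <= hidden units, so the targets are memorized.
print("within capacity, max error:", max_training_error(10, 10))
# Over capacity: more samples than hidden units, so some error must remain.
print("over capacity,   max error:", max_training_error(40, 10))
```

The contrast between the two printed errors (one at numerical-precision level, one clearly nonzero) is the behavior the paper bounds analytically for the FNN via the F-CONFIS transformation.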

