
On-Line nonlinear systems identification of coupled tanks via fractional differential neural networks


Abstract:

A fractional differential neural network (FDNN) is a neural network extended with fractional-order operators. This paper studies on-line nonlinear system identification using an FDNN. All states of the nonlinear system are assumed to be available at the system output. Through a Lyapunov-like analysis, the parameters of the fractional neural network are adjusted, and it is proven that the identification error is bounded and tends to zero. To illustrate the applicability of the FDNN as a nonlinear identifier, two coupled tanks are considered as a case study. The simulation results are very promising.
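The paper does not state which definition of the fractional-order operator the FDNN uses; purely as an illustration, the sketch below assumes the Grünwald–Letnikov discretization, a common numerical approximation of an order-alpha derivative of a uniformly sampled signal. The function name, sampling setup, and toy usage are assumptions, not taken from the paper.

```python
import numpy as np

def gl_fractional_derivative(f_samples, alpha, h):
    """Grunwald-Letnikov approximation of the order-alpha derivative of a
    uniformly sampled signal f_samples with step h (illustrative sketch)."""
    n = len(f_samples)
    # Coefficients c_j = (-1)^j * binom(alpha, j), built by the usual recursion.
    c = np.empty(n)
    c[0] = 1.0
    for j in range(1, n):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    # D^alpha f(t_k) ~= h^(-alpha) * sum_{j=0..k} c_j * f(t_{k-j})
    d = np.empty(n)
    for k in range(n):
        d[k] = np.dot(c[:k + 1], f_samples[k::-1]) / h**alpha
    return d

# Toy check: the 0.5-order derivative of f(t) = t should approach
# 2*sqrt(t/pi) as the step size shrinks.
t = np.linspace(0.0, 1.0, 201)
d_half = gl_fractional_derivative(t, alpha=0.5, h=t[1] - t[0])
```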
Date of Conference: 17-19 June 2009
Date Added to IEEE Xplore: 07 August 2009
Conference Location: Guilin, China

1 INTRODUCTION

Artificial neural network (NN) techniques seem to be very effective for identifying a wide class of nonlinear systems, especially when no information about the complete model is available, or even when the controlled plant is treated as a black box [1]. NNs can be classified as static (feedforward) and dynamic (recurrent or differential) nets. In most recent publications [2]–[4], feedforward NNs deal with a class of global optimization problems. To do so, a learning rule is usually used to adjust the weights of a static NN by minimizing the identification error, as sketched below. Although static nets have been used successfully in many applications, the major disadvantage of their structure is a slow learning rate, since the weight updates do not exploit the information in the local NN structure. Another deficiency is that static nets have no memory, so their outputs are uniquely determined by the current inputs and weights, which leads to a high sensitivity of the function approximation to the training data.
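As a concrete illustration of the learning rule mentioned above, here is a minimal sketch of a static (memoryless) NN identifier whose weights are adjusted by gradient descent on the squared identification error. The class name, network size, learning rate, and toy plant are assumptions for illustration only, not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class StaticNNIdentifier:
    """Hypothetical single-hidden-layer static NN identifier (illustrative)."""

    def __init__(self, n_in, n_hidden, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = 0.1 * rng.standard_normal((n_hidden, n_in))
        self.W2 = 0.1 * rng.standard_normal(n_hidden)
        self.lr = lr

    def predict(self, x):
        # The output depends only on the current input and weights: no memory.
        self.h = sigmoid(self.W1 @ x)
        return self.W2 @ self.h

    def update(self, x, y_plant):
        # Learning rule: one gradient step reducing the squared
        # identification error e = y_plant - y_nn.
        y_nn = self.predict(x)
        e = y_plant - y_nn
        grad_hidden = self.W2 * self.h * (1.0 - self.h)  # backprop through sigmoid
        self.W2 += self.lr * e * self.h
        self.W1 += self.lr * e * np.outer(grad_hidden, x)
        return e

# Toy usage: identify a scalar nonlinear map from streaming samples.
ident = StaticNNIdentifier(n_in=1, n_hidden=8)
for k in range(2000):
    x = np.array([np.sin(0.01 * k)])
    y_plant = x[0] ** 2 + 0.5 * x[0]  # unknown toy plant, illustration only
    ident.update(x, y_plant)
```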
