Abstract:
The derivation of a supervised training algorithm for a neural network entails the selection of a norm criterion that gives a suitable global measure of the particular distribution of errors. The author addresses this problem and proposes a correspondence between the error distribution at the output of a layered feedforward neural network and L_p norms. The generalized delta rule is investigated to determine how its structure can be modified to perform minimization in the generic L_p norm. The particular case of the Chebyshev norm is developed and tested.
Published in: IEEE Transactions on Neural Networks ( Volume: 2, Issue: 1, January 1991)
DOI: 10.1109/72.80298
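
As an illustrative sketch (not the paper's exact formulation), the modification the abstract describes can be seen in the output-layer error term of the generalized delta rule: for an L_p cost E = (1/p) * sum_i |t_i - o_i|^p, the derivative replaces the familiar linear error (t_i - o_i) of the L_2 case with a power of its magnitude. The function below, including its name and parameters, is a hypothetical Python/NumPy example, with p = 2 recovering the standard delta rule term and large p approximating the Chebyshev (L_infinity) criterion, where the largest output error dominates the weight update.

    import numpy as np

    def lp_error_term(output, target, p=2.0):
        # Error term for the L_p cost E = (1/p) * sum(|t - o|^p):
        # dE/do_i = -|t_i - o_i|^(p-1) * sign(t_i - o_i), so the
        # (negative) gradient used in the delta rule becomes:
        e = target - output
        return np.abs(e) ** (p - 1) * np.sign(e)

    # p = 2 gives the usual (target - output) term; raising p weights
    # the largest errors more heavily, tending toward minimax behavior.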