Relating Bayesian learning to training in recurrent networks


Abstract:

It is demonstrated that a recurrent neural network relying on an error-correcting learning algorithm and a localist coding scheme is able to converge to the solution that would be expected from Bayesian learning. This is possible even without explicitly implementing Bayes' theorem and without assigning prior probabilities to the model.
Date of Conference: 20-24 July 2003
Date Added to IEEE Xplore: 26 August 2003
Print ISBN: 0-7803-7898-9
Print ISSN: 1098-7576
Conference Location: Portland, OR, USA

I. Introduction

Machine learning and statistics have many aspects in common. It will be shown that Bayes' theorem can help us understand and predict the solution to which recurrent networks with a truncated gradient-descent learning algorithm and a localist coding scheme converge. I have chosen a simple recurrent network, SRN [1], for my demonstration.
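The core idea can be illustrated with a minimal sketch (my own, not the paper's actual simulation): with a localist coding scheme, exactly one input unit is active per trial, so an error-correcting (delta) rule drives each weight toward the conditional probability of the target given that input, which is the same quantity a Bayesian learner with empirical frequencies would estimate. The generative probabilities and network size below are assumed for illustration.

```python
import random

random.seed(0)

# Assumed generative probabilities P(target = 1 | input = i); not from the paper.
p_target_given_input = {0: 0.9, 1: 0.2}

w = [0.0, 0.0]   # one weight per localist input unit
lr = 0.05        # learning rate for the delta rule

counts = {0: [0, 0.0], 1: [0, 0.0]}   # per unit: [trials, positive targets]

for _ in range(20000):
    i = random.randrange(2)                                # active localist unit
    t = 1.0 if random.random() < p_target_given_input[i] else 0.0
    y = w[i]                                               # output = weight of active unit
    w[i] += lr * (t - y)                                   # error-correcting (delta) rule
    counts[i][0] += 1
    counts[i][1] += t

for i in (0, 1):
    empirical = counts[i][1] / counts[i][0]
    print(f"unit {i}: learned weight {w[i]:.2f}, empirical P(t=1|i) {empirical:.2f}")
```

After training, each weight hovers near the empirical conditional probability for its unit, without Bayes' theorem ever being applied explicitly; the error-correcting dynamics alone produce the Bayesian-looking solution.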

