
Gaussian activation functions using Markov chains


Abstract:

We extend, in two major ways, earlier work in which sigmoidal neural nonlinearities were implemented using stochastic counters. 1) We define the signal-to-noise limitations of unipolar and bipolar stochastic arithmetic and signal processing. 2) We generalize the use of stochastic counters to include neural transfer functions employed in Gaussian mixture models. The hardware advantages of (nonlinear) stochastic signal processing (SSP) may be offset by increased processing time; we quantify these issues. The ability to realize accurate Gaussian activation functions for neurons in pulsed digital networks using simple hardware with stochastic signals is also analyzed quantitatively.
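For context on point 1), the signal-to-noise limitation follows from Bernoulli statistics: estimating a probability from an N-bit stream has standard error sqrt(p(1-p)/N), and the bipolar code pays a further factor of two in value units. The Python sketch below is illustrative only, using the standard unipolar (value = p) and bipolar (value = 2p - 1) encoding conventions; it is not code or notation from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024  # stream length (sampling interval in clock cycles)

def unipolar(value, n):
    """Unipolar code: value in [0, 1] is the bit probability p."""
    return (rng.random(n) < value).astype(np.uint8)

def bipolar(value, n):
    """Bipolar code: value in [-1, 1] maps to p = (value + 1) / 2."""
    return (rng.random(n) < (value + 1) / 2).astype(np.uint8)

# Estimate a value from the frequency of 1s over N clock cycles,
# and compare the empirical spread with the Bernoulli prediction.
x = 0.3
uni_est = [unipolar(x, N).mean() for _ in range(2000)]
bip_est = [2 * bipolar(x, N).mean() - 1 for _ in range(2000)]

p = (x + 1) / 2  # bit probability underlying the bipolar stream
print(np.std(uni_est), np.sqrt(x * (1 - x) / N))      # both about 0.014
print(np.std(bip_est), 2 * np.sqrt(p * (1 - p) / N))  # both about 0.030
```

The bipolar code covers [-1, 1] but is noisier at a given stream length, one face of the precision-versus-processing-time tradeoff the abstract refers to.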
Published in: IEEE Transactions on Neural Networks ( Volume: 13, Issue: 6, November 2002)
Page(s): 1465 - 1471
Date of Publication: 30 November 2002

PubMed ID: 18244541

I. Introduction

Signals in digital neural networks may be represented by the Bernoulli probabilities of binary random variables. These signals may be estimated by the frequency of 1s or pulses, i.e., by their pulse count distributions, taken over a sampling interval of multiple clock cycles. Signal values may be multiplied using simple logic gates and may be added or weight-averaged using (stochastic) multiplexers. Unlike the binary radix representations of conventional digital signals, the stochastic signals have unary representations, and their estimates are, therefore, relatively insensitive to imperfect pulse detection and noise. These are among the advantages of (nonlinear) stochastic signal processing (SSP), which is a method of reducing the power dissipation and the silicon area of digital circuit implementations of neural networks, while improving their error and fault tolerance and enabling variable-precision computations in fixed hardware.
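As a concrete illustration of the arithmetic described above (a minimal sketch under the standard unipolar convention, not circuitry from the paper): for independent streams, a single AND gate multiplies probabilities, and a 2-to-1 multiplexer whose select line is itself a Bernoulli stream computes a weighted average.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096

def stream(p, n):
    """Bernoulli bit stream with P(bit = 1) = p (unipolar code)."""
    return (rng.random(n) < p).astype(np.uint8)

a = stream(0.6, n)
b = stream(0.5, n)

# Multiplication: for independent streams, P(a AND b) = P(a) * P(b),
# so one AND gate per clock cycle realizes the product.
product = a & b
print(product.mean())  # about 0.6 * 0.5 = 0.30

# Weighted averaging: a 2-to-1 multiplexer with a Bernoulli(w) select
# stream outputs w * P(a) + (1 - w) * P(b).
s = stream(0.25, n)
average = np.where(s == 1, a, b)
print(average.mean())  # about 0.25 * 0.6 + 0.75 * 0.5 = 0.525
```

Note that the multiplexer computes a scaled sum rather than a true sum, which keeps the result inside the unit interval; the cost of this unary representation is the long sampling interval needed to reach a given precision.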


Cites in Papers - IEEE (3)

1. G. D. Praveenkumar, R. Nagaraj, "Intelligent Adaptive Anisotropic Diffusion Filtered Deep Neural Network With Gaussian Activation For Image Classification", 2022 6th International Conference on Computing Methodologies and Communication (ICCMC), pp. 1377-1382, 2022.
2. D. Zhang, H. Li, S. Y. Foo, "A simplified FPGA implementation of neural network algorithms integrated with stochastic theory for power electronics applications", 31st Annual Conference of IEEE Industrial Electronics Society (IECON 2005), 6 pp., 2005.
3. D. K. McNeill, H. C. Card, "Refractory pulse counting processes in stochastic neural computers", IEEE Transactions on Neural Networks, vol. 16, no. 2, pp. 505-508, 2005.

Cites in Papers - Other Publishers (1)

1. S. P. Joy Vasantha Rani, K. Aruna Prabha, "Stochastic logic computation based RBFNN with adaptive hidden layer structure", Journal of Engineering, Design and Technology, vol. 8, no. 2, pp. 206, 2010.
