Bipin Rajendran - IEEE Xplore Author Profile

Showing 1-25 of 81 results


Bayesian Neural Networks (BNNs) generate an ensemble of possible models by treating model weights as random variables. This enables them to provide superior estimates of decision uncertainty. However, implementing Bayesian inference in hardware is resource-intensive, as it requires noise sources to generate the desired model weights. In this work, we introduce Bayes2IMC, an in-memory computing (IM...
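As a rough illustration of the sampling step that such Bayesian inference implies (not the Bayes2IMC hardware scheme itself), the sketch below draws several weight instantiations from assumed Gaussian posteriors and averages the resulting predictions; the layer sizes and posterior parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed Gaussian posterior over the weights of a single linear layer
# (mean and standard deviation per weight); purely illustrative values.
w_mean = rng.normal(0.0, 0.5, size=(4, 3))    # 4 inputs -> 3 classes
w_std = np.full((4, 3), 0.1)

def predict_once(x):
    """One forward pass with a freshly sampled weight instantiation."""
    w = rng.normal(w_mean, w_std)             # sample weights ~ N(mean, std)
    logits = x @ w
    e = np.exp(logits - logits.max())
    return e / e.sum()                        # softmax class probabilities

x = np.array([0.2, -1.0, 0.5, 0.3])
samples = np.stack([predict_once(x) for _ in range(100)])
mean_prob = samples.mean(axis=0)              # ensemble prediction
std_prob = samples.std(axis=0)                # spread reflects decision uncertainty
print(mean_prob, std_prob)
```

The per-class standard deviation across samples is one simple proxy for the decision uncertainty the abstract refers to; dedicated hardware avoids running these repeated forward passes explicitly.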
Common artefacts such as baseline drift, rescaling, and noise critically limit the performance of machine learning-based automated ECG analysis and interpretation. This study proposes Derived Peak (DP) encoding, a non-parametric method that generates signed spikes corresponding to zero crossings of the signal’s first- and second-order time derivatives. Notably, DP encoding is invariant to shift and...
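A minimal reading of that description, assuming finite differences stand in for the time derivatives (the paper's exact DP encoding may differ), is sketched below; the test signal and sampling grid are arbitrary.

```python
import numpy as np

def zero_crossing_spikes(d):
    """Signed spikes where a derivative crosses zero: +1 for a -/+ crossing,
    -1 for a +/- crossing, 0 elsewhere."""
    spikes = np.zeros_like(d)
    change = np.diff(np.sign(d))
    spikes[1:][change > 0] = 1.0
    spikes[1:][change < 0] = -1.0
    return spikes

def dp_encode(x):
    """Toy DP-style encoding: spike trains from the zero crossings of the
    first and second finite differences of the signal."""
    d1 = np.diff(x, n=1, prepend=x[0])
    d2 = np.diff(x, n=2, prepend=[x[0], x[0]])
    return zero_crossing_spikes(d1), zero_crossing_spikes(d2)

# Shift/scale check: an offset and positive rescale leave the spikes unchanged.
t = np.linspace(0, 1, 500)
sig = np.sin(2 * np.pi * 5 * t)                # stand-in for an ECG trace
s1, s2 = dp_encode(sig)
s1_b, s2_b = dp_encode(3.0 * sig + 10.0)
print(np.array_equal(s1, s1_b), np.array_equal(s2, s2_b))   # expected: True True
```

Because only the signs of the derivatives matter, adding an offset (baseline drift) or applying a positive rescale does not move the zero crossings, which is the invariance the abstract highlights.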
Spiking Neural Networks (SNNs) have emerged as a promising approach to improve the energy efficiency of machine learning models, as they naturally implement event-driven computations while avoiding expensive multiplication operations. In this paper, we develop a hardware-software co-optimisation strategy to port software-trained deep neural networks (DNN) to reduced-precision spiking models demonst...
In-context learning (ICL), a property demonstrated by transformer-based sequence models, refers to the automatic inference of an input-output mapping based on examples of the mapping provided as context. ICL requires no explicit learning, i.e., no explicit updates of model weights, directly mapping context and new input to the new output. Prior work has proved the usefulness of ICL for detection i...
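The mechanics of "context plus new input, no weight updates" can be made concrete with a small sketch of how such a sequence would be assembled for a frozen sequence model; the shapes, the pair-concatenation layout, and the data below are assumptions for illustration, not the paper's detection setup.

```python
import numpy as np

def build_icl_sequence(context_x, context_y, query_x):
    """Interleave (input, output) example pairs and append the query with an
    empty output slot; a frozen transformer would consume this in one pass."""
    pairs = [np.concatenate([x, y]) for x, y in zip(context_x, context_y)]
    query = np.concatenate([query_x, np.zeros_like(context_y[0])])  # unknown y
    return np.stack(pairs + [query])          # shape: (K + 1, x_dim + y_dim)

# Example: 8 context examples of an unknown 4-dim -> 2-dim mapping, one query.
rng = np.random.default_rng(0)
ctx_x = rng.normal(size=(8, 4))
ctx_y = rng.normal(size=(8, 2))
seq = build_icl_sequence(ctx_x, ctx_y, rng.normal(size=4))
print(seq.shape)   # (9, 6); the model's prediction at the last position is the ICL output
```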
Spiking Neural Networks (SNNs) have been recently integrated into Transformer architectures due to their potential to reduce computational demands and to improve power efficiency. Yet, the implementation of the attention mechanism using spiking signals on general-purpose computing platforms remains inefficient. In this paper, we propose a novel framework leveraging stochastic computing (SC) to ef...
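The paper's SC-based attention scheme is not reproduced here, but the basic stochastic-computing primitive it builds on is simple: values encoded as random bitstreams can be multiplied with a bitwise AND. The bitstream length below is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_bitstream(p, length=4096):
    """Unipolar stochastic encoding: a value p in [0, 1] becomes a random
    bitstream whose fraction of ones approximates p."""
    return (rng.random(length) < p).astype(np.uint8)

def sc_multiply(p, q, length=4096):
    """Multiplying two independent stochastic bitstreams reduces to a bitwise
    AND, since P(a AND b) = P(a) * P(b)."""
    a, b = to_bitstream(p, length), to_bitstream(q, length)
    return (a & b).mean()

print(sc_multiply(0.6, 0.5))   # close to 0.30; accuracy improves with bitstream length
```

Replacing multipliers with single AND gates (at the cost of longer bitstreams and statistical error) is what makes SC attractive for spike-based attention hardware.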
Bayesian neural networks offer better estimates of model uncertainty compared to frequentist networks. However, inference involving Bayesian models requires multiple instantiations or sampling of the network parameters, requiring significant computational resources. Compared to traditional deep learning networks, spiking neural networks (SNNs) have the potential to reduce computational area and po...
The latest Satellite Communication (SatCom) missions are characterized by a fully reconfigurable on-board software-defined payload, capable of adapting radio resources to the temporal and spatial variations of the system traffic. As pure optimization-based solutions have been shown to be computationally tedious and to lack flexibility, Machine Learning (ML)-based methods have emerged as promising alter...
Bayesian Neural Networks (BNNs) can overcome the problem of overconfidence that plagues traditional frequentist deep neural networks, and are hence considered to be a key enabler for reliable AI systems. However, conventional hardware realizations of BNNs are resource intensive, requiring the implementation of random number generators for synaptic sampling. Owing to their inherent stochasticity d...
Brain-computer interfaces are being explored for a wide variety of therapeutic applications. Typically, this involves measuring and analyzing continuous-time electrical brain activity via techniques such as electrocorticogram (ECoG) or electroencephalography (EEG) to drive external devices. However, due to the inherent noise and variability in the measurements, the analysis of these signals is cha...
In this paper, we present a scalable digital hardware accelerator based on non-volatile memory arrays capable of realizing deep convolutional spiking neural networks (SNNs). Our design studies are conducted using a compact model for spin-transfer torque random access memory (STT-RAM) devices. Large networks are realized by tiling multiple cores which communicate by transmitting spike packets via a...
Neuromorphic data carries information in spatio-temporal patterns encoded by spikes. Accordingly, a central problem in neuromorphic computing is training spiking neural networks (SNNs) to reproduce spatio-temporal spiking patterns in response to given spiking stimuli. Most existing approaches model the input-output behavior of an SNN in a deterministic fashion by assigning each input to a specific...
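To make the deterministic-versus-probabilistic distinction concrete, the sketch below shows a generic probabilistic spiking neuron in which the spike at each step is a Bernoulli draw whose probability depends on the membrane potential; this is an assumed textbook-style model, not necessarily the one used in the paper, and the time constants and weights are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def probabilistic_neuron(input_spikes, w, tau=0.9, bias=-1.0):
    """Leaky integration of weighted input spikes followed by a Bernoulli spike
    whose probability is a sigmoid of the membrane potential. A deterministic
    neuron would instead fire exactly when the potential crosses a threshold."""
    v, out = 0.0, []
    for s_t in input_spikes:                  # s_t: binary input vector at time t
        v = tau * v + w @ s_t                 # leaky membrane integration
        p = 1.0 / (1.0 + np.exp(-(v + bias)))
        spike = rng.random() < p              # stochastic firing decision
        out.append(int(spike))
        if spike:
            v = 0.0                           # reset after a spike
    return np.array(out)

T, n_in = 50, 10
inputs = (rng.random((T, n_in)) < 0.2).astype(float)    # Poisson-like input spike trains
print(probabilistic_neuron(inputs, w=rng.normal(0.5, 0.2, n_in)))
```

Under such a model, the same stimulus can elicit different output spike patterns on different runs, so training targets a distribution over spatio-temporal patterns rather than a single fixed response.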
Spiking Neural Networks (SNNs) have recently gained popularity as machine learning models for on-device edge intelligence for applications such as mobile healthcare management and natural language processing due to their low power profile. In such highly personalized use cases, it is important for the model to be able to adapt to the unique features of an individual with only a minimal amount of t...
The cost involved in training deep neural networks (DNNs) on von Neumann architectures has motivated the development of novel solutions for efficient DNN training accelerators. We propose a hybrid in-memory computing (HIC) architecture for the training of DNNs on hardware accelerators that results in memory-efficient inference and outperforms baseline software accuracy in benchmark tasks. We intro...
Deep neural networks (DNNs) have surpassed human-level accuracy in a variety of cognitive tasks but at the cost of significant memory/time requirements in DNN training. This limits their deployment in energy and memory limited applications that require real-time learning. Matrix-vector multiplications (MVM) and vector-vector outer product (VVOP) are the two most expensive operations associated wit...
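Where MVM and VVOP appear in training is easy to see for a single dense layer; the sketch below uses arbitrary layer sizes and random data purely to label the two operations the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(0)

# One dense layer during training (illustrative sizes).
W = rng.normal(size=(256, 128))       # weight matrix
x = rng.normal(size=128)              # layer input activation
delta = rng.normal(size=256)          # error signal from the layer above

y = W @ x                             # forward pass: matrix-vector multiplication (MVM)
grad_x = W.T @ delta                  # backward pass: another MVM
grad_W = np.outer(delta, x)           # weight gradient: vector-vector outer product (VVOP)

print(y.shape, grad_x.shape, grad_W.shape)
```

Every training step repeats these three operations for every layer and every example, which is why accelerating MVM and VVOP dominates the overall training cost.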
In this work, we present a scheme for implementing learning on a digital non-volatile memory (NVM) based hardware accelerator for Spiking Neural Networks (SNNs). Our design estimates across three prominent non-volatile memories - Phase Change Memory (PCM), Resistive RAM (RRAM), and Spin Transfer Torque RAM (STT-RAM) show that the STT-RAM arrays enable at least 2× higher throughput compared to the ...
Kinetic Monte Carlo simulations of resistive memory devices have been performed by paying attention to the vacancy-interstitial generation near the Hafnia-metal electrode interface. In our model, an oxygen vacancy is generated in Hafnia near the interface, with the corresponding oxygen atom residing in the metal electrode. These oxygen atoms form a thin insulating oxide layer at the Hafnia-active ...
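The skeleton of a kinetic Monte Carlo step is generic and worth spelling out: pick an event with probability proportional to its rate, then advance time by an exponentially distributed increment. The rates below are placeholders, not the physical rates of the paper's vacancy-interstitial model.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmc_step(rates, t):
    """One kinetic Monte Carlo step: select an event proportionally to its rate
    and advance time by an exponential waiting time with mean 1 / total_rate."""
    total = rates.sum()
    event = rng.choice(len(rates), p=rates / total)
    t += rng.exponential(1.0 / total)
    return event, t

# Placeholder rates (1/s) for two competing events, e.g. vacancy-interstitial
# generation near the interface vs. recombination; values are illustrative only.
rates = np.array([1e6, 2e5])
t, counts = 0.0, np.zeros(2, dtype=int)
for _ in range(10_000):
    event, t = kmc_step(rates, t)
    counts[event] += 1
print(counts, t)   # event statistics and simulated elapsed time
```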
In this paper, we propose a Spin Transfer Torque RAM (STT-RAM) based neurosynaptic core to implement a hardware accelerator for Spiking Neural Networks (SNNs), which mimic the time-based signal encoding and processing mechanisms of the human brain. The computational core consists of a crossbar array of non-volatile STT-RAMs, read/write peripheral circuits, and digital logic for the spiking neurons...
Non-volatile analog memory devices such as phase-change memory (PCM) enable designing dedicated connectivity matrices for the hardware implementation of deep neural networks (DNN). In this in-memory computing approach, the analog conductance states of the memory device can be gradually updated to train DNNs on-chip or software trained connection strengths may be programmed one-time to the devices ...
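One common mapping in the in-memory computing literature (assumed here; not necessarily this paper's exact scheme) represents each signed weight with a differential pair of unipolar conductances, read out as w proportional to G_plus minus G_minus. The conductance range below is an illustrative assumption.

```python
import numpy as np

# Illustrative PCM conductance window (siemens).
G_MIN, G_MAX = 0.1e-6, 25e-6

def weight_to_conductances(w, w_max):
    """Map a signed weight to a differential conductance pair (G_plus, G_minus)."""
    g = np.clip(abs(w) / w_max, 0, 1) * (G_MAX - G_MIN) + G_MIN
    return (g, G_MIN) if w >= 0 else (G_MIN, g)

def conductances_to_weight(g_plus, g_minus, w_max):
    """Invert the mapping: the weight is proportional to the conductance difference."""
    return (g_plus - g_minus) / (G_MAX - G_MIN) * w_max

gp, gm = weight_to_conductances(-0.3, w_max=1.0)
print(conductances_to_weight(gp, gm, w_max=1.0))   # approximately -0.3
```

On-chip training then corresponds to small incremental updates of these conductances, while the one-time programming route writes the software-trained values once and only reads them afterwards.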
Machine learning has emerged as the dominant tool for implementing complex cognitive tasks that require supervised, unsupervised, and reinforcement learning. While the resulting machines have demonstrated in some cases even superhuman performance, their energy consumption has often proved to be prohibitive in the absence of costly supercomputers. Most state-of-the-art machine-learning solutions ar...
Artificial Neural Networks (ANNs) are currently being used as function approximators in many state-of-the-art Reinforcement Learning (RL) algorithms. Spiking Neural Networks (SNNs) have been shown to drastically reduce the energy consumption of ANNs by encoding information in sparse temporal binary spike streams, hence emulating the communication mechanism of biological neurons. Due to their low e...
In-memory computing is an emerging computing paradigm where certain computational tasks are performed in place in a computational memory unit by exploiting the physical attributes of the memory devices. Here, we present an overview of the application of in-memory computing in deep learning, a branch of machine learning that has significantly contributed to the recent explosive growth in artificial...
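The canonical "computation in place" example is analog matrix-vector multiplication on a memristive crossbar: conductances hold the matrix, row voltages encode the input, and the column currents are the result via Ohm's and Kirchhoff's laws. The idealized sketch below ignores device non-idealities (noise, drift, wire resistance), and the array size and voltage range are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

G = rng.uniform(1e-6, 25e-6, size=(64, 32))   # conductance matrix stored in the array (S)
V = rng.uniform(0.0, 0.2, size=64)            # read voltages applied to the word lines (V)

I = G.T @ V                                   # bit-line currents: the MVM result, "computed" by physics
print(I[:4])                                  # each entry is one dot product of V with a column of G
```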
Computing systems inspired by the architecture of the human brain are poised to revolutionize the engines for information processing and data analytics. However, the efficiency and performance of these platforms pale in comparison with the human brain, especially when benchmarked in terms of metrics such as intelligence per Watt per square mm. In this paper, we review some recent progress and futur...
In-memory computing with nanoscale memristive devices such as phase-change memory (PCM) has emerged as an alternative to conventional von Neumann systems to train deep neural networks (DNN) where a synaptic weight is represented by the device conductance. However, PCM devices exhibit temporal evolution of the conductance values referred to as the conductance drift, which poses challenges for maint...
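Conductance drift is commonly described by the empirical power law G(t) = G(t0) * (t / t0)^(-nu); the sketch below only illustrates how a programmed conductance (and hence the mapped weight) decays over time, with a drift exponent of typical reported magnitude assumed for the example.

```python
import numpy as np

def drifted_conductance(g0, t, t0=1.0, nu=0.05):
    """Empirical PCM drift model: G(t) = G(t0) * (t / t0) ** (-nu).
    nu ~ 0.05 is an assumed, typical order of magnitude for the drift exponent."""
    return g0 * (t / t0) ** (-nu)

g0 = 10e-6                                   # programmed conductance at t0 (siemens)
for t in (1.0, 1e2, 1e4, 1e6):               # seconds after programming
    print(f"t = {t:9.0e} s  ->  G = {drifted_conductance(g0, t):.2e} S")
```

Because all weights drift together but not identically, the effective DNN weights slowly decay and disperse after programming, which is the accuracy-maintenance challenge the abstract points to.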
In this work, we demonstrate a well-posed compact model for phase change memory (PCM) devices based on Ge2Sb2Te5 (GST) chalcogenide. This model supports all modes of simulation including transient, DC, and AC. The model is developed in Verilog-A and simulated using HSPICE. It is computationally simple and successfully captures the key high level behaviors of memory switching, including the resi...