I. Introduction
Falls are a leading cause of morbidity, disability, and mortality in older adults over 65 years of age worldwide. More than 33% of older adults fall each year globally, and about 10% of them experience multiple falls in a year [1]. Falls lead to hospitalization and to a loss of the confidence needed to live independently. Developing remote, continuous monitoring systems for fall detection in older adults is therefore crucial.

Remote sensors such as RGB and depth cameras can distinguish fall events from non-fall events, but they fail to preserve the privacy of the individuals being monitored. Radar, in contrast, is a privacy-preserving contactless sensor that can detect fall events in independent- or assisted-living environments. Radar-based fall event classification can be reduced to an image classification task, for instance by converting the received 1-D radar signals into 2-D spectrograms and non-linearly transforming the resulting spectrograms into RGB images. Research in this field has progressed from classification with hand-crafted features and traditional machine learning to automatic feature extraction with deep learning. Convolutional neural networks (CNNs), long short-term memory (LSTM) networks, and autoencoders are widely used deep-learning architectures for radar-based human fall detection [2]–[4].
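As an illustration of the signal-to-image reduction described above, the sketch below converts a 1-D signal into a spectrogram with a short-time Fourier transform, applies logarithmic compression as the non-linear transformation, and maps the result to an RGB image through a colormap. All parameters (sampling rate, window length, choice of colormap) and the synthetic chirp standing in for a radar return are illustrative assumptions, not settings taken from the cited works.

```python
import numpy as np
from scipy import signal
from matplotlib import colormaps

fs = 1000                                  # assumed sampling rate (Hz)
t = np.arange(0, 3.0, 1.0 / fs)
# Placeholder 1-D "radar return": a chirp stands in for a Doppler
# signature whose frequency rises during a fall, plus noise.
x = signal.chirp(t, f0=20.0, t1=3.0, f1=200.0) + 0.1 * np.random.randn(t.size)

# 1-D signal -> 2-D spectrogram via the short-time Fourier transform
f, tt, Sxx = signal.spectrogram(x, fs=fs, nperseg=128, noverlap=96)

# Non-linear (logarithmic) compression, then min-max normalization to [0, 1]
S_db = 10.0 * np.log10(Sxx + 1e-12)
S_norm = (S_db - S_db.min()) / (S_db.max() - S_db.min())

# A colormap turns the normalized spectrogram into an RGB image
rgb = colormaps["jet"](S_norm)[..., :3]    # shape: (n_freq, n_time, 3)
```

In practice, the resulting RGB image would typically be resized to the input resolution expected by the chosen classifier (for example, 224 x 224 for common CNN backbones) before training or inference.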