
Light-Weight Learning Model with Patch Embeddings for Radar-based Fall Event Classification: A Multi-domain Decision Fusion Approach



Abstract:

With the increasing morbidity and mortality in older adults above 65 years of age due to accidental falls, privacy-preserving radar-based fall event detection is becoming crucial. Deep learning algorithms such as vision transformers (ViT) have shown excellent fall-detection accuracy on different radar domain representations. However, such techniques are computationally very expensive and unsuitable when training datasets are small. Patch-based learning models such as the Multi-Layer Perceptron-Mixer (MLP-Mixer) and the Convolutional-Mixer (ConvMixer) have been developed as alternatives to ViT. In this work, the decision outputs of lightweight ConvMixer models, each taking a different domain representation of the radar returns as input, are fused to classify events as fall or non-fall. The proposed approach exploits the complementary information present in the different domains to enhance classification accuracy. Evaluation on a publicly available dataset shows improved performance of the multi-domain ConvMixer model over ViT and MLP-Mixer, further justifying the choice of the lightweight ConvMixer as the preferred learnable model when only limited training data are available.
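As a concrete illustration of the approach the abstract describes, below is a minimal PyTorch sketch of a lightweight ConvMixer classifier with patch embeddings, plus soft decision fusion across radar domain representations. The hyperparameters, the domain names, and the averaging fusion rule are illustrative assumptions, not the paper's reported configuration; the paper's combiner may differ (e.g., Dempster-Shafer fusion as in [17], [25]).

```python
# Minimal ConvMixer (Trockman & Kolter, "Patches are all you need?") with
# soft decision fusion across radar domains. Hyperparameters (dim, depth,
# kernel/patch sizes) and domain names are illustrative, not the paper's.
import torch
import torch.nn as nn

class Residual(nn.Module):
    def __init__(self, fn):
        super().__init__()
        self.fn = fn
    def forward(self, x):
        return self.fn(x) + x

def conv_mixer(dim=64, depth=4, kernel_size=5, patch_size=8, n_classes=2):
    return nn.Sequential(
        # Patch embedding: non-overlapping patches via a strided convolution.
        nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size),
        nn.GELU(),
        nn.BatchNorm2d(dim),
        *[nn.Sequential(
            Residual(nn.Sequential(  # depthwise conv mixes spatial locations
                nn.Conv2d(dim, dim, kernel_size, groups=dim, padding="same"),
                nn.GELU(),
                nn.BatchNorm2d(dim))),
            nn.Conv2d(dim, dim, kernel_size=1),  # pointwise conv mixes channels
            nn.GELU(),
            nn.BatchNorm2d(dim),
          ) for _ in range(depth)],
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(dim, n_classes),
    )

# One lightweight ConvMixer per radar domain representation (names hypothetical).
domains = ["spectrogram", "range_map", "range_doppler"]
models = {d: conv_mixer() for d in domains}

def fuse_decisions(images: dict) -> torch.Tensor:
    """Average the per-domain class posteriors (soft decision fusion)."""
    probs = [models[d](x).softmax(dim=-1) for d, x in images.items()]
    return torch.stack(probs).mean(dim=0)  # (batch, 2): fall vs. non-fall

# Usage with dummy 128x128 RGB domain images for a batch of 4 events.
batch = {d: torch.randn(4, 3, 128, 128) for d in domains}
print(fuse_decisions(batch).argmax(dim=-1))  # predicted class per sample
```

Averaging class posteriors is the simplest soft decision-fusion rule: each single-domain model stays small, while the ensemble still exploits the complementary information carried by the different domain representations.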
Date of Conference: 01-05 May 2023
Date Added to IEEE Xplore: 21 June 2023
Conference Location: San Antonio, TX, USA


I. Introduction

Falls are the leading cause of increased morbidity, disability, and mortality in older adults over 65 years of age worldwide. More than 33% of the elderly fall each year globally, and about 10% of them experience multiple falls in a year [1]. Falls lead to hospitalization and to a loss of confidence to live independently. Therefore, developing remote continuous-monitoring systems for fall event detection in older adults is crucial. Classifying fall events from non-fall events using remote sensors such as RGB and depth cameras fails to preserve the privacy of the individuals being monitored. Radars serve as privacy-preserving contactless sensors that can detect a fall event in an independent- or assisted-living environment for older adults. Radar-based fall event classification may be reduced to an image classification task, for instance, by converting the 1-D received radar signal into a 2-D spectrogram and non-linearly transforming the spectrogram into an RGB image, as sketched below. Research in this field has progressed from classification using hand-crafted features with traditional machine learning to automatic feature extraction through deep learning. Convolutional neural networks (CNN), long short-term memory (LSTM) networks, and auto-encoders are widely used deep-learning architectures for radar-based human-fall detection [2]–[4].
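To make this conversion concrete, the following sketch (assuming scipy and matplotlib; the sampling rate, STFT parameters, and colormap are placeholder choices, not the paper's) turns a 1-D radar return into a micro-Doppler spectrogram and then into an RGB image suitable for an image classifier.

```python
# Sketch of casting radar fall detection as image classification: an STFT of
# the 1-D radar return gives a time-frequency (micro-Doppler) map, which a
# non-linear log-compression and a colormap turn into an RGB image.
import numpy as np
from scipy import signal
from matplotlib import cm

fs = 1000                      # pulse repetition frequency (Hz), assumed
x = np.random.randn(10 * fs)   # placeholder for the received radar signal

# Short-time Fourier transform -> micro-Doppler spectrogram.
f, t, Sxx = signal.spectrogram(x, fs=fs, nperseg=256, noverlap=192)

# Log-compress (the non-linear transform) and normalize to [0, 1].
S_db = 10 * np.log10(Sxx + 1e-12)
S_norm = (S_db - S_db.min()) / (S_db.max() - S_db.min())

# Apply a colormap to obtain an RGB image for the classifier.
rgb = cm.viridis(S_norm)[..., :3]   # shape: (freq, time, 3)
print(rgb.shape)
```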

References

[1] R. Vaishya and A. Vaish, "Falls in older adults are serious," Indian Journal of Orthopaedics, vol. 54, no. 1, pp. 69-74, 2020.
[2] J. Gutiérrez, V. Rodríguez and S. Martin, "Comprehensive review of vision-based fall detection systems," Sensors, vol. 21, no. 3, 2021.
[3] P. Wang, Q. Li, P. Yin, Z. Wang, Y. Ling, R. Gravina, et al., "A convolution neural network approach for fall detection based on adaptive channel selection of UWB radar signals," Neural Computing and Applications, pp. 1433-3058, 2022.
[4] M. M. Islam, O. Tayan, M. R. Islam, M. S. Islam, S. Nooruddin, M. Nomani Kabir, et al., "Deep learning based systems developed for fall detection: A review," IEEE Access, vol. 8, pp. 166117-166137, 2020.
[5] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, et al., "An image is worth 16x16 words: Transformers for image recognition at scale," arXiv, 2020.
[6] "Five reasons to embrace transformer in computer vision," Microsoft.com.
[7] S. Chen, W. He, J. Ren and X. Jiang, "Attention-based dual-stream vision transformer for radar gait recognition," IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3668-3672, 2022.
[8] Y. Zhao, R. G. Guendel, A. Yarovoy and F. Fioranelli, "Distributed radar-based human activity recognition using vision transformer and CNNs," European Radar Conference, pp. 301-304, 2022.
[9] A. Dey, S. Rajan, G. Xiao and J. Lu, "Fall event detection using vision transformer," 2022 IEEE Sensors, pp. 1-4, 2022.
[10] S. H. Lee, S. Lee and B. C. Song, "Vision transformer for small-size datasets," arXiv, 2021.
[11] L. Melas-Kyriazi, "Do you even need attention? A stack of feed-forward layers does surprisingly well on ImageNet," arXiv, 2021.
[12] I. Tolstikhin, N. Houlsby, A. Kolesnikov, L. Beyer, X. Zhai, T. Unterthiner, et al., "MLP-Mixer: An all-MLP architecture for vision," arXiv, 2021.
[13] A. Trockman and J. Z. Kolter, "Patches are all you need?," arXiv, 2022.
[14] W. Ding, X. Guo and G. Wang, "Radar-based human activity recognition using hybrid neural network model with multidomain fusion," IEEE Transactions on Aerospace and Electronic Systems, vol. 57, no. 5, pp. 2889-2898, 2021.
[15] B. Jokanovic, M. Amin and B. Erol, "Multiple joint-variable domains recognition of human motion," IEEE Radar Conference, pp. 0948-0952, 2017.
[16] L. I. Kuncheva, "Fundamentals of pattern recognition," in Combining Pattern Classifiers: Methods and Algorithms, Wiley Publishing, pp. 1-47, 2014.
[17] M. Abdollahpour, T. Y. Rezaii, A. Farzamnia and S. Meshgini, "Sleep stage classification using Dempster-Shafer theory for classifier fusion," IEEE International Conference on Artificial Intelligence in Engineering and Technology, pp. 1-4, 2018.
[18] F. Fioranelli, S. A. Shah, H. Li, A. Shrestha, S. Yang and J. Le Kernec, "Radar signatures of human activities," distributed by University of Glasgow, 2019.
[19] S. Rahman and D. A. Robertson, "Radar micro-Doppler signatures of drones and birds at K-band and W-band," Scientific Reports, vol. 8, pp. 1-11, 2018.
[20] F. Fioranelli, S. A. Shah, H. Li, A. Shrestha, S. Yang and J. Le Kernec, "Radar sensing for healthcare," Electronics Letters, vol. 55, no. 19, pp. 1022-1024, 2019.
[21] Y. Zhao, G. Wang, C. Tang, C. Luo, W. Zeng and Z.-J. Zha, "A battle of network structures: An empirical study of CNN, transformer and MLP," arXiv, 2021.
[22] A. Le Bris, N. Chehata, W. Ouerghemmi, C. Wendl, T. Postadjian, A. Puissant, et al., "Decision fusion of remote-sensing data for land cover classification," in Multimodal Scene Understanding, Academic Press, pp. 341-382, 2019.
[23] U. G. Mangai, S. Samanta, S. Das and P. R. Chowdhury, "A survey of decision fusion and feature fusion strategies for pattern classification," IETE Technical Review, vol. 27, no. 4, pp. 293-307, 2010.
[24] S. Roheda, H. Krim, Z.-Q. Luo and T. Wu, "Decision level fusion: An event driven approach," European Signal Processing Conference, pp. 2598-2602, 2018.
[25] Q. Chen, A. Whitbrook, U. Aickelin and C. Roadknight, "Data classification using the Dempster-Shafer method," Journal of Experimental & Theoretical Artificial Intelligence, vol. 26, no. 4, pp. 493-517, 2014.