
Multimodal Fusion-AdaBoost Based Activity Recognition for Smart Home on WiFi Platform



Abstract:

With the ubiquity of commodity WiFi devices and the rapid development of the Internet of Things (IoT), a growing number of intelligent sensing applications are emerging that exploit fine-grained channel state information (CSI) from WiFi signals to realize contactless human-computer interaction (HCI) for smart homes. However, most CSI-based activity recognition approaches are vulnerable to random noise arising from indoor environments. In this paper, we present a multimodal fusion-AdaBoost based human activity recognition scheme on a WiFi platform. For this purpose, we develop two theoretical underpinnings: a sensing model and a recognition model. The sensing model is first established to investigate the impact of human activities on the propagation properties of WiFi signals; to this end, the Fresnel zone model is used to quantify the correlation between CSI dynamics and human activities. The recognition model then exploits the activity-induced signal changes to infer the underlying activity. In this process, we first construct a CSI tensor by integrating all CSI information at each WiFi receiver. A CANDECOMP/PARAFAC (CP) decomposition method is then applied to this CSI tensor to obtain representative features. Finally, based on the extracted features, human activities are recognized by combining multimodal fusion and AdaBoost. We implement the proposed scheme on a set of WiFi devices and evaluate it in both laboratory and corridor environments. The experimental results confirm that the proposed scheme achieves average recognition accuracies of 96% and 95% in the two indoor scenarios, respectively.
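The sensing model builds on the standard Fresnel zone formulation (a well-known result; the paper's exact equations may differ). For a transmitter $T$ and receiver $R$ operating at wavelength $\lambda$, the boundary of the $n$-th Fresnel zone is the ellipse of points $Q_n$ satisfying

$$|TQ_n| + |Q_nR| - |TR| = \frac{n\lambda}{2},$$

so a body part that crosses successive zone boundaries changes the reflected path length in steps of half a wavelength, alternately strengthening and weakening the received signal and thereby modulating the CSI.

The recognition pipeline can be illustrated with a minimal sketch, assuming shapes, CP rank, and a feature-level fusion strategy that are not taken from the paper: each sample at one receiver is treated as a 3-way CSI tensor (antennas x subcarriers x packets), CP decomposition supplies per-sample features, features from all receivers are concatenated, and an AdaBoost classifier is trained on the result. The helper names (cp_features, fused_features, RANK) are hypothetical; the sketch uses the tensorly and scikit-learn libraries.

import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

RANK = 3  # number of CP components (assumed hyperparameter, not from the paper)

def cp_features(csi_tensor, rank=RANK):
    """Feature vector for one CSI sample of shape (antennas, subcarriers, packets)."""
    cp = parafac(tl.tensor(csi_tensor), rank=rank, n_iter_max=100,
                 normalize_factors=True)
    weights, factors = cp.weights, cp.factors
    # Component weights plus simple statistics of each factor matrix.
    stats = [np.concatenate([f.mean(axis=0), f.std(axis=0)])
             for f in map(tl.to_numpy, factors)]
    return np.concatenate([tl.to_numpy(weights)] + stats)

def fused_features(csi_per_receiver):
    """Feature-level fusion: concatenate CP features from every receiver."""
    return np.concatenate([cp_features(t) for t in csi_per_receiver])

# Toy data standing in for real CSI amplitude measurements: 40 samples,
# 2 receivers, 3 antennas, 30 subcarriers, 100 packets, 4 activity classes.
rng = np.random.default_rng(0)
X = np.array([fused_features([rng.random((3, 30, 100)) for _ in range(2)])
              for _ in range(40)])
y = rng.integers(0, 4, size=40)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print("toy accuracy:", clf.score(X_te, y_te))

With real CSI measurements in place of the random tensors, this reproduces the structure of the scheme described above, although the paper's feature construction and fusion rule are likely more elaborate.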
Published in: IEEE Sensors Journal (Volume: 22, Issue: 5, 01 March 2022)
Page(s): 4661 - 4674
Date of Publication: 25 January 2022


I. Introduction

With the rapid development and wide application of Internet of Things (IoT) technology, human activity recognition is gaining increasing attention in the thriving research field of human-computer interaction (HCI) [1]. Activity recognition is an important component of broader context-aware services that enable communication between the cyber world and the physical world, and it can be integrated with a variety of applications that require HCI, such as smart homes, gesture control, and interactive games [2]. For instance, when a user walks into a room, a system could set the temperature to their preference; when they sit on the couch, it could switch the television to their favorite channel based on the detected activity. More importantly, precise activity recognition can also help people with hearing or speech impairments communicate.

To achieve activity recognition, many approaches based on different techniques have been proposed in recent years, which can be classified into three categories: camera-based [3]–[6], wearable sensor-based [7]–[25], and WiFi-based [26]–[43]. Camera-based imaging offers rich information and can recognize various activities accurately, enabling a wide range of useful applications, for example in elderly care, medication adherence monitoring, and hospital care. However, these approaches suffer from inherent limitations, such as high installation overhead, privacy concerns, and Line-of-Sight (LoS) requirements; most people, for instance, are uncomfortable with being filmed all the time, especially in their own home. Wearable sensor-based approaches have shown excellent performance and are widely adopted to help people improve their quality of daily life, but they rely on dedicated sensors and require users to carry or wear specialized devices at all times. A smart watch or bracelet, for example, can monitor the user's activity status and provide accurate information in real time; however, people may leave such devices on a desk or table, or forget to charge them, at which point activity recognition simply stops. In view of this, passive device-free activity recognition needs to be further developed to overcome the remaining limitations of current approaches.

