
Coupled Multimodal Emotional Feature Analysis Based on Broad-Deep Fusion Networks in Human–Robot Interaction


Abstract:

A coupled multimodal emotional feature analysis (CMEFA) method based on broad–deep fusion networks, which divides multimodal emotion recognition into two layers, is proposed. First, facial emotional features and gesture emotional features are extracted using the broad and deep learning fusion network (BDFN). Considering that the bimodal emotional features are not completely independent of each other, canonical correlation analysis (CCA) is used to analyze and extract the correlation between the emotional features, and a coupling network is established for emotion recognition of the extracted bimodal features. Both simulation and application experiments are completed. According to the simulation experiments completed on the bimodal face and body gesture database (FABO), the recognition rate of the proposed method is 1.15% higher than that of support vector machine recursive feature elimination (SVMRFE), which does not consider the unbalanced contribution of features. Moreover, the multimodal recognition rate of the proposed method is 21.22%, 2.65%, 1.61%, 1.54%, and 0.20% higher than those of the fuzzy deep neural network with sparse autoencoder (FDNNSA), ResNet-101 + GFK, C3D + MCB + DBN, the hierarchical classification fusion strategy (HCFS), and the cross-channel convolutional neural network (CCCNN), respectively. In addition, preliminary application experiments are carried out on our developed emotional social robot system, in which the robot recognizes the emotions of eight volunteers based on their facial expressions and body gestures.
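As a rough sketch of the fusion step described above, the snippet below applies canonical correlation analysis to two modality feature matrices and feeds the coupled features to a classifier. It assumes scikit-learn and NumPy; the feature dimensions, random stand-in features, label count, and the linear SVM are hypothetical placeholders for the BDFN extractors and the coupling network, which are not specified in this abstract.

```python
# Illustrative sketch only: random matrices stand in for BDFN facial/gesture
# features, and a linear SVM stands in for the coupling network.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_face, n_gesture = 200, 128, 64          # hypothetical dimensions
X_face = rng.normal(size=(n_samples, n_face))        # stand-in facial features
X_gesture = rng.normal(size=(n_samples, n_gesture))  # stand-in gesture features
y = rng.integers(0, 6, size=n_samples)               # hypothetical emotion labels

# Project both modalities onto maximally correlated canonical components.
cca = CCA(n_components=16)
Z_face, Z_gesture = cca.fit_transform(X_face, X_gesture)

# Concatenate the coupled features and classify.
Z = np.hstack([Z_face, Z_gesture])
clf = SVC(kernel="linear").fit(Z, y)
print("Training accuracy (illustrative only):", clf.score(Z, y))
```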
Page(s): 9663 - 9673
Date of Publication: 20 January 2023

PubMed ID: 37021991


I. Introduction

With the rapid development of various technologies, artificial intelligence (AI) has become a focus of academic research [1]. More and more AI products appear in human life [2], and people increasingly expect robots to exhibit emotional ability. However, current machines cannot communicate emotionally with humans in an intuitive way [3]. Among the various channels of emotional communication, facial expressions and gestures [4] can convey 70% of the information. In human–robot interaction, facial and body gesture emotion recognition is therefore of great significance. Interpersonal human–human interaction is a dynamic exchange and coordination of social signals, feelings, and emotions, usually performed through and across multiple modalities such as facial expressions, gestures, and language [5], from which the current emotional state can be inferred. Therefore, facial emotions and body gestures can provide richer cues to inner emotional states and a more accurate understanding of human emotions.

