
Automatic Facial Emotion Recognition Using Hybrid Deep Learning Approach



Abstract:

Facial emotion identification is an important task for human-computer interaction, self-driving vehicles, and a wide range of multimedia applications. This study proposes an interconnected framework for recognizing human facial emotions. The architecture includes two machine learning stages (detection and classification) that can be trained offline for real-time applications. We begin by applying a ResNet-50 model to find faces in images, and we then extract features that capture the appearance characteristics of a face. A genetic search optimization strategy is then applied as a feature-reduction stage. For emotion classification, we use a BayesNet classifier with a latent emotion state that accommodates missing or false observations. Furthermore, the proposed approach to recognizing emotion is independent of gender and facial skin color. The study proposes an enhanced hybrid deep learning strategy that analyzes facial expressions in an image to predict emotion using a convolutional neural network (CNN). The algorithm developed in this study adopts a hybrid CNN methodology to determine whether the dominant emotion in an image is happiness or sadness. The proposed algorithm was trained on the FER2013 dataset, and the results indicate that it outperforms current state-of-the-art methods for recognizing emotions from facial expressions. For the emotion labeled ‘happy’, an overall accuracy, precision, sensitivity, specificity, and AUC of 91%, 93%, 90%, 93%, and 0.95 are attained. For the emotion labeled ‘sad’, the corresponding accuracy, precision, sensitivity, specificity, and AUC are 91%, 90%, 93%, 90%, and 0.95.
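The pipeline outlined in the abstract (ResNet-50 feature extraction, a genetic search for feature reduction, and a Bayesian classifier) can be illustrated with the minimal sketch below. This is not the authors' implementation: scikit-learn's GaussianNB stands in for the paper's BayesNet classifier, and the genetic search is a deliberately simple binary-mask GA scored by cross-validated accuracy; all names and hyperparameters are assumptions.

```python
# Illustrative sketch of the described pipeline, not the authors' code:
# ResNet-50 features -> genetic feature selection -> Bayesian classifier.
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

def extract_features(images):
    """images: float array of shape (N, 224, 224, 3) in [0, 255]."""
    backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg")
    return backbone.predict(preprocess_input(images))  # shape (N, 2048)

def genetic_select(X, y, pop=20, gens=10, rng=np.random.default_rng(0)):
    """Toy genetic search over binary feature masks, scored by CV accuracy."""
    n = X.shape[1]
    masks = rng.random((pop, n)) < 0.5
    def fitness(m):
        if not m.any():
            return 0.0
        return cross_val_score(GaussianNB(), X[:, m], y, cv=3).mean()
    for _ in range(gens):
        scores = np.array([fitness(m) for m in masks])
        parents = masks[np.argsort(scores)[-pop // 2:]]   # keep the top half
        kids = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)
            child = np.concatenate([a[:cut], b[cut:]])    # one-point crossover
            kids.append(child ^ (rng.random(n) < 0.01))   # bit-flip mutation
        masks = np.vstack([parents, np.array(kids)])
    scores = np.array([fitness(m) for m in masks])
    return masks[scores.argmax()]
```

Under these assumptions, the selected mask would then be used to train the final classifier, e.g. `GaussianNB().fit(feats[:, mask], labels)` on the reduced feature set.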
Date of Conference: 21-23 November 2024
Date Added to IEEE Xplore: 11 February 2025
Conference Location: New Delhi, India

I. Introduction

The primary informal communication method that people use to convey their emotions is facial expression [1]. In recent times, facial recognition has generated a great deal of interest from a variety of fields, including communication app development and fingerprint authentication [2]. These systems aim to use facial expression analysis to identify an individual's state of mind and feelings. Humans can assess the emotional state of someone else reasonably precisely just by looking at their face. Some expressions are so clear-cut that understanding the underlying state of mind requires only a quick glance, whereas others are more difficult to interpret because of their subtle, ambiguous, or intricate presentation [3]. It is difficult, however, to teach a machine to identify the same emotions from an archived or live photograph of a human face. Computer recognition confronts challenges from a wide range of factors, including face dimensions, eye and lip positions, forehead curvature, background characteristics, and varying resolutions [4].

Scientists and experts regard facial expressions as an important cue for interpreting human emotion [5], [6]. Determining a person's psychological state from facial expression traits remains difficult, though, because of sensitivity to outside influences such as lighting and head movements. As a consequence, conventional feature extraction and machine learning algorithms for automated classification struggle to achieve high recognition rates [7], [8].

In this work, an enhanced deep learning model is presented that uses a convolutional neural network (CNN), a tool widely employed for computer vision challenges such as object detection, object tracking, image classification [9]–[15], and image segmentation, to analyze facial movements and predict emotions. The proposed methodology comprises two model components. The first is a CNN model that classifies the dominant emotion visible in an image, such as happiness or sadness, while the second recognizes emotions using a hybrid model that fuses an optimization technique with a pre-trained CNN model; a minimal sketch of the first component is given below. The relevant literature is reviewed in Section 2 and the proposed model is presented in Section 3. Section 4 discusses the research study's findings.
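To make the first model component concrete, the following is a minimal, hypothetical Keras CNN for binary happy-versus-sad classification of 48x48 grayscale face crops (the FER2013 input format). The layer sizes, optimizer, and metrics are illustrative assumptions, not the architecture reported by the paper.

```python
# Minimal sketch of the first model component: a small CNN that classifies
# a 48x48 grayscale face crop (FER2013 format) as happy vs. sad.
# Architecture and hyperparameters are assumed for illustration.
from tensorflow.keras import layers, models

def build_emotion_cnn(input_shape=(48, 48, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # P(happy); 1 - P(happy) = sad
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy", "AUC", "Precision", "Recall"])
    return model
```

The sigmoid output keeps the sketch aligned with the paper's two-class (happy/sad) formulation; extending to the full set of FER2013 emotions would mean a softmax output and categorical cross-entropy instead.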
