I. Introduction
Facial expressions are the primary informal means by which people convey their emotions [1]. In recent years, facial recognition has attracted considerable interest from a variety of fields, including communication-app development and fingerprint authentication [2]. Such systems aim to use facial expression analysis to infer an individual's state of mind and feelings. Humans can assess another person's emotional state reasonably precisely just by looking at their face: some expressions are so clear-cut that a quick glance suffices, whereas others are harder to interpret because of their subtle, ambiguous, or intricate presentation [3]. Teaching a machine to identify the same emotions from a stored or live photograph of a human face is far more difficult. Automated recognition must contend with a wide range of factors, including face geometry, eye and lip positions, forehead curvature, background characteristics, and varying image resolutions [4]. Scientists and experts agree that facial expressions are an important cue for interpreting human emotion [5], [6]. Inferring psychological state from facial expression traits nevertheless remains difficult, because these traits are sensitive to external influences such as lighting and head movement. Owing to these complexities, conventional feature extraction combined with machine learning classifiers struggles to achieve high recognition rates in automated classification [7], [8].
In this work, an improved deep learning technique based on a convolutional neural network (CNN) is used to analyze facial expressions and estimate emotions. CNNs are widely employed to address computer vision challenges such as object detection, object tracking, image classification [9]–[15], and image segmentation, and are applied here to analyze facial movements and predict the corresponding emotions. The proposed methodology comprises two model components: the first is a CNN that classifies the main emotion visible in an image, such as happiness or sadness, while the second recognizes emotions using a hybrid model that fuses an optimization technique with a pre-trained CNN. The remainder of this paper is organized as follows: Section 2 reviews the related literature, Section 3 presents the proposed model, and Section 4 discusses the study's findings.
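The architectural details of the proposed models appear later in the paper; as an illustration only, the basic CNN pipeline described above (convolution, nonlinearity, pooling, and a dense softmax classifier over emotion labels) can be sketched in plain NumPy. The label set, input size, filter, and weights below are hypothetical stand-ins, not the authors' trained model:

```python
import numpy as np

# Hypothetical emotion label set; the paper's actual classes may differ.
EMOTIONS = ["happy", "sad", "angry", "surprised", "neutral"]

def conv2d(image, kernel):
    """Valid-mode 2-D convolution (cross-correlation, as used in CNNs)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
face = rng.random((48, 48))           # a 48x48 grayscale face crop (a common size)
kernel = rng.standard_normal((3, 3))  # one 3x3 filter (random here, learned in practice)

features = np.maximum(conv2d(face, kernel), 0.0)          # conv + ReLU -> 46x46 map
pooled = features.reshape(23, 2, 23, 2).max(axis=(1, 3))  # 2x2 max pooling -> 23x23
W = rng.standard_normal((len(EMOTIONS), pooled.size))     # dense classifier weights
probs = softmax(W @ pooled.ravel())                       # class probabilities
prediction = EMOTIONS[int(np.argmax(probs))]
```

A real model would stack several such convolution/pooling stages and learn the filter and classifier weights by backpropagation; the sketch only shows the forward pass that maps a face image to a probability distribution over emotions.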