Abstract:
Sign language serves as a vital medium of communication for deaf and mute people. We present a real-time system that translates American Sign Language (ASL) into text, addressing the communication gap between deaf and mute people and hearing people. Our system leverages computer vision and machine learning techniques to recognize ASL gestures accurately. In this study, we introduce a combined convolutional neural network (CNN) and long short-term memory (LSTM) approach designed to identify hand gestures captured as camera images. Using the hand's spatial position and orientation, we construct the training and testing datasets for the CNN-LSTM model from 4680 skeleton images of the ASL alphabet, A through Z. Each hand image first undergoes a filtering step, after which a classifier assigns it a gesture class; the refined images are then used to train the CNN-LSTM model. Our system delivers accurate and reliable real-time results with 99.9% accuracy, and additionally generates new, preprocessed datasets that can be used to train any model.
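As a rough illustration of how the 4680-image, 26-class skeleton dataset described above might be organized for training and testing, the sketch below builds a stratified split. The image resolution and the 80/20 split ratio are assumptions for illustration only; the abstract does not state them.

```python
import numpy as np

NUM_CLASSES = 26          # ASL letters A-Z
IMAGES_PER_CLASS = 180    # 4680 skeleton images / 26 classes
IMG_SIZE = 64             # hypothetical image resolution (not stated in the paper)

# Placeholder arrays standing in for the preprocessed skeleton images and labels.
X = np.zeros((NUM_CLASSES * IMAGES_PER_CLASS, IMG_SIZE, IMG_SIZE), dtype=np.float32)
y = np.repeat(np.arange(NUM_CLASSES), IMAGES_PER_CLASS)

# Stratified 80/20 train/test split: shuffle each class's indices separately
# so every letter is represented in both sets in the same proportion.
rng = np.random.default_rng(seed=0)
train_idx, test_idx = [], []
for c in range(NUM_CLASSES):
    idx = rng.permutation(np.flatnonzero(y == c))
    cut = int(0.8 * len(idx))
    train_idx.extend(idx[:cut])
    test_idx.extend(idx[cut:])

X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]
```

With 180 images per class, an 80/20 split yields 144 training and 36 test images per letter (3744 and 936 images overall), keeping every class equally represented in both sets.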
Published in: 2023 International Conference on Recent Advances in Science and Engineering Technology (ICRASET)
Date of Conference: 23-24 November 2023
Date Added to IEEE Xplore: 08 February 2024
ISBN Information: