Abstract:
To provide a new interface between computers and deaf-mute users, this paper proposes a method of translating sign language into a sequence of time-frequency spectrograms based on a 24 GHz 1T-2R Doppler radar sensor. By applying time-frequency analysis to the two pairs of intermediate-frequency I/Q signals, a complete signed sentence can be captured and segmented according to its electromagnetic wave-based patterns. Instead of a traditional classifier, a convolutional neural network is used to classify the basic signs and make the complete sentence intelligible to the computer. To improve accuracy, an attention module is added to the network. The proposed method reaches an accuracy of 96% in translating short sentences such as "Yes", "No", "Thanks", and "Hello", which are among the most frequently used expressions in sign language. This work can be regarded as a supplement to current human-computer interaction, especially for the deaf-mute community.
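The first stage of the pipeline, turning a complex I/Q radar channel into a time-frequency spectrogram, can be sketched as follows. This is a minimal illustration using SciPy's short-time Fourier transform, not the authors' implementation: the sampling rate, window length, and the synthetic chirp standing in for a micro-Doppler signature are all assumed for the example.

```python
import numpy as np
from scipy import signal

fs = 2000  # assumed baseband sampling rate in Hz (not given in the abstract)

# Synthetic complex I/Q signal standing in for one radar receive channel:
# a tone with drifting frequency, mimicking a micro-Doppler signature.
t = np.arange(0, 2.0, 1 / fs)
iq = np.exp(1j * 2 * np.pi * (100 * t + 50 * t**2))

# Short-time Fourier transform. For complex input the transform must be
# two-sided so that negative Doppler shifts (motion away from the sensor)
# are preserved alongside positive ones.
f, tt, Zxx = signal.stft(iq, fs=fs, nperseg=128, noverlap=96,
                         return_onesided=False)

# Magnitude of the STFT is the time-frequency spectrogram that would be
# segmented into signs and fed to the CNN classifier.
spectrogram = np.abs(Zxx)
print(spectrogram.shape)
```

With two receive channels (1T-2R), the same transform would be applied to each I/Q pair, yielding one spectrogram per channel.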
Date of Conference: 04-10 December 2021
Date Added to IEEE Xplore: 10 February 2022
Cites in Papers - Other Publishers (2)
1. Muhammad Imran Saleem, Atif Siddiqui, Shaheena Noor, Miguel-Angel Luque-Nieto, Enrique Nava-Baro, "A Machine Learning Based Full Duplex System Supporting Multiple Sign Languages for the Deaf and Mute", Applied Sciences, vol. 13, no. 5, pp. 3114, 2023.
2. Muhammad Imran Saleem, Atif Siddiqui, Shaheena Noor, Miguel-Angel Luque-Nieto, Pablo Otero, "A Novel Machine Learning Based Two-Way Communication System for Deaf and Mute", Applied Sciences, vol. 13, no. 1, pp. 453, 2022.