
Decoding Semantic Categories from EEG Activity in Silent Speech Imagination Tasks


Abstract:

Silent Speech Brain-Computer Interfaces aim to decode imagined or silently spoken speech from brain activity. This technology holds great potential in various application domains, e.g. restoring communication abilities for people with disabilities, or in settings where overtly spoken speech is not an option due to environmental conditions, e.g. noisy industrial or aerospace settings. However, one major drawback of this technology is still the limited number of words that can be distinguished at a time. This work therefore introduces the concept of Semantic Silent Speech BCIs, which add a layer of semantic category classification prior to the actual word classification to multiply the number of classifiable words in Silent Speech BCIs many times over. We evaluated the possibility of classifying 5 different semantic categories of words during a word imagination task by comparing various feature extraction and classification methods. Our results show remarkable classification accuracies of up to 95% for the single best subject with Common Spatial Pattern (CSP) feature extraction and a Support Vector Machine (SVM) classifier, and a best average classification accuracy of 60.44% for a combination of CSP and a Random Forest (RF) classifier. Even a cross-subject analysis over the data of all subjects led to results above the chance level of 20%, with a best performance of 43.54% for a self-assembled feature vector and an RF classifier. These results clearly indicate that classifying the semantic category of an imagined word from EEG activity is possible and therefore lay the foundation for future Semantic Silent Speech BCIs.
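The paper itself does not include an implementation, but as a rough, hypothetical sketch of the pipeline the abstract describes (CSP feature extraction followed by an SVM or RF classifier on epoched EEG trials), the following Python example uses MNE-Python and scikit-learn. The data shapes, channel count, number of CSP components, and kernel choice are illustrative assumptions rather than the authors' actual setup, and random data stands in for the recorded EEG.

# Hypothetical sketch of a CSP + SVM / Random Forest pipeline for
# 5-class semantic category classification. All shapes and parameters
# are assumptions for illustration, not the authors' configuration.
import numpy as np
from mne.decoding import CSP
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Assumed shapes: X holds epoched EEG trials (n_trials, n_channels,
# n_samples); y holds one of 5 semantic-category labels per trial.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 32, 256))
y = rng.integers(0, 5, size=200)

# CSP spatially filters each trial and returns log-variance features;
# the downstream classifier then separates the 5 semantic categories.
svm_pipeline = make_pipeline(CSP(n_components=8), SVC(kernel="rbf"))
rf_pipeline = make_pipeline(CSP(n_components=8), RandomForestClassifier())

for name, pipe in [("CSP+SVM", svm_pipeline), ("CSP+RF", rf_pipeline)]:
    # Chance level for 5 balanced classes is 20%, the baseline the paper
    # compares against.
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: mean cross-validated accuracy = {scores.mean():.2%}")

On random data this should hover around the 20% chance level; on real silent-speech EEG, per-subject cross-validation of this kind is what would yield figures comparable to the accuracies reported above.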
Date of Conference: 22-24 February 2021
Date Added to IEEE Xplore: 05 April 2021
Conference Location: Gangwon, Korea (South)

I. Introduction

Silent Speech Interfaces (SSI) are defined in Human-Computer Interaction as the concept of speech communication in the absence of an audible acoustic signal [1] and have become a widely researched topic in the field of Brain-Computer Interfaces (BCIs) in recent years [2]–[4]. Studies have shown that it is possible to decode imagined words from brain activity measured invasively at the surface of the brain [5], [6], but also with non-invasive measures such as Electroencephalography (EEG) [7]–[10]. One major drawback of the existing non-invasive approaches to such an alternative communication pathway, however, is the maximum number of distinguishable words. The approach presented in [11] achieved a classification accuracy of 70% for a three-word classification problem on EEG data, which makes such approaches appear applicable even in real-world scenarios. However, three words offer only limited possibilities for communication, and as soon as the number of words increases, the classification accuracy decreases significantly: [12] reported a classification accuracy of 58.41% for 5 silently spoken words, and [13] managed to classify 12 words from EEG activity with an accuracy of around 34.2%. These results are remarkable and clearly above chance level, but far below what would be expected of a classifier applied in real-world communication.
