I. Introduction
Silent Speech Interfaces (SSIs) are defined in Human-Computer Interaction as the concept of speech communication in the absence of an audible acoustic signal [1] and have become a widely researched topic in the field of Brain-Computer Interfaces (BCIs) in recent years [2]–[4]. Studies have shown that imagined words can be decoded not only from brain activity measured invasively at the surface of the brain [5], [6], but also with non-invasive measures such as Electroencephalography (EEG) [7]–[10]. One major drawback of the existing non-invasive approaches to such an alternative communication pathway, however, is the limited number of distinguishable words. The approach presented in [11] achieved a classification accuracy of 70% on a three-word classification problem on EEG data, which makes such approaches appear applicable even in real-world scenarios. However, three words allow only very limited communication, and as the number of words increases, the classification accuracy decreases significantly: [12] reported a classification accuracy of 58.41% for 5 silently spoken words, and [13] classified as many as 12 words from EEG activity with an accuracy of around 34.2%. These results are remarkable and clearly above chance level, but far below what would be expected of a classifier deployed for real-world communication.