I. Introduction
Brain–computer interface (BCI) systems [1], [2] provide a novel communication channel through which healthy and disabled people can interact with their environment. The core idea of a BCI is to decode the mental state of a subject from his or her brain activity and to use this information to control a computer application or a robotic device such as a wheelchair. There are several ways to voluntarily induce distinct mental states; one common approach is motor imagery. In this paradigm, participants are asked to imagine movements of their hands, feet, or mouth. This alters the rhythmic activity over different locations of the sensorimotor cortex and can be measured with electroencephalography (EEG).

However, reliably decoding the mental state is a very challenging task, as the recorded EEG signal contains contributions from both task-related and task-unrelated processes. In order to enhance the task-related neural activity, i.e., to increase its signal-to-noise ratio, it is common to perform spatial filtering. A very popular method for this is common spatial patterns (CSP) (e.g., [3]–[7]). Spatial filters computed with CSP are well suited to discriminating between mental states induced by motor imagery, as they focus on the synchronization and desynchronization effects occurring over different locations of the sensorimotor cortex during motor imagery.

Although impressive improvements in BCI performance have been achieved with CSP (see, e.g., the BCI Competitions, http://www.bbci.de/competition/ [8]–[11]), current BCI systems are still far from perfect in terms of reliability and generalizability. This suboptimal performance can mainly be attributed to the low signal-to-noise ratio [4], [12], [13], the presence of artifacts in the data [14]–[16], and the nonstationary nature of the EEG signal [17]–[19].
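To make the role of CSP concrete, the following is a minimal sketch of the standard two-class CSP computation via whitening and eigendecomposition of the class covariance matrices. The function name `csp_filters`, the trial-array layout, and the choice of four filters are illustrative assumptions, not part of the system described here; a practical pipeline would also band-pass filter the trials beforehand.

```python
import numpy as np

def csp_filters(X1, X2, n_filters=4):
    """Compute CSP spatial filters for two classes of (already
    band-pass filtered) EEG trials.

    X1, X2 : arrays of shape (n_trials, n_channels, n_samples),
             one per motor-imagery class (illustrative layout).
    Returns W of shape (n_filters, n_channels); each row is one
    spatial filter.
    """
    def avg_cov(X):
        # Average trace-normalized spatial covariance over trials.
        covs = [x @ x.T / np.trace(x @ x.T) for x in X]
        return np.mean(covs, axis=0)

    C1, C2 = avg_cov(X1), avg_cov(X2)

    # Whitening transform from the composite covariance C1 + C2.
    d, U = np.linalg.eigh(C1 + C2)
    P = np.diag(1.0 / np.sqrt(d)) @ U.T

    # Diagonalize the whitened class-1 covariance: eigenvalues near
    # 1 (resp. 0) yield filters maximizing the variance for class 1
    # (resp. class 2), i.e., the discriminative (de)synchronization.
    lam, B = np.linalg.eigh(P @ C1 @ P.T)
    order = np.argsort(lam)[::-1]
    W = B[:, order].T @ P

    # Keep filters from both ends of the eigenvalue spectrum.
    idx = list(range(n_filters // 2)) + list(range(-(n_filters // 2), 0))
    return W[idx]
```

Features for a subsequent classifier are then typically the log-variances of the spatially filtered trials, `np.log(np.var(W @ x, axis=1))` for each trial `x`.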