I. Introduction
Eye-tracking technologies greatly assist the interaction and communication of paralyzed people, especially those able to control only their ocular movements (Locked-In Syndrome, LIS, as in late stages of Amyotrophic Lateral Sclerosis, ALS) [1]. However, few ocular control modalities have been explored so far, and there is a dearth of guidelines for building gaze-controlled systems [2]. In particular, most gaze commands are based on dwelling [3] (activating a UI item when the user fixates it for a certain time, the dwell time) or on eye gestures [4] (e.g., looking from left to right). Gaze control is often the sole interaction method available to people with LIS, so it is essential to make it easier, quicker, and more efficient. Repetitive saccades to command a GUI are tiring [5] and should be limited.
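To make the dwell mechanism concrete, the following is a minimal sketch (not an implementation from the cited works; class name, thresholds, and sampling rate are illustrative assumptions): a UI item is activated when consecutive gaze samples stay within a small radius for at least the dwell time, and any larger movement (a saccade) restarts the dwell.

```python
import math

class DwellDetector:
    """Illustrative dwell-based activation: the gaze must hold still
    within `radius_px` of where the dwell started for `dwell_time` seconds."""

    def __init__(self, dwell_time=0.8, radius_px=40.0):
        self.dwell_time = dwell_time  # seconds the gaze must hold still
        self.radius_px = radius_px    # max drift allowed during the dwell
        self._anchor = None           # (x, y) where the current dwell started
        self._start = None            # timestamp of the dwell start

    def update(self, x, y, t):
        """Feed one gaze sample (pixels, seconds); return True on activation."""
        if self._anchor is None:
            self._anchor, self._start = (x, y), t
            return False
        ax, ay = self._anchor
        if math.hypot(x - ax, y - ay) > self.radius_px:
            # Gaze jumped away (saccade): restart the dwell at the new point.
            self._anchor, self._start = (x, y), t
            return False
        if t - self._start >= self.dwell_time:
            # Dwell complete: reset so the item is not immediately re-triggered.
            self._anchor = self._start = None
            return True
        return False

# Example: a steady gaze sampled at 60 Hz triggers exactly one activation
# once 0.5 s have elapsed, then the detector resets.
detector = DwellDetector(dwell_time=0.5, radius_px=40.0)
events = [detector.update(100.0, 100.0, i / 60.0) for i in range(60)]
print(events.count(True))  # → 1
```

The reset-on-saccade behavior illustrates the trade-off discussed above: a shorter dwell time makes selection quicker but increases accidental activations (the "Midas touch" effect), while a longer one slows every command.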