I. Introduction
In the past decade, much progress has been made on face recognition [1]–[4]. However, computational facial expression analysis remains a challenging and attractive research topic in computer vision and intelligent human–computer interaction (HCI). Research on facial expression was launched by Ekman and Friesen [5] in the 1970s. In Ekman's early work, facial actions were described in terms of the Facial Action Coding System (FACS) [7], and facial expressions were semantically coded with respect to seven basic but "universal" categories, i.e., neutral, anger, disgust, fear, joy, sadness, and surprise. In practice, however, automatic facial expression recognition by computer did not really begin until the 1990s. At the current stage, most approaches [9]–[17] build on Ekman's theory for developing intelligent HCI, although some researchers have proposed approaches based on other emotion models, e.g., the valence/arousal dimensional model [8].