Interface
We designed our system to create emotional engagement between the virtual human and the user in order to increase learning efficiency. The information content broadcast by the digital human (the what) is independently modulated through emotion channels (the how) and delivered in a focused 3D environment (the where). This triad of what, how, and where forms a platform for a communication interface designed to enhance information absorption. We achieve this engagement by enabling our virtual human to perceive—specifically, to see, touch, and hear. Specialized vision modules detect and recognize one or more people in front of the display and analyze their facial expressions, looking for signs of their emotional states. We model these emotional states and create a mechanism for the digital human to express feelings of its own, with the purpose of modulating the user's mood.
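The perceive–model–express loop described above can be sketched in miniature. The emotion labels, the intensity threshold, and the response policy below are illustrative assumptions, not the actual implementation: the idea is only that a perceived user state is mapped, through an explicit policy, to an expression the digital human displays in order to modulate the user's mood.

```python
from dataclasses import dataclass

# Hypothetical sketch of the perception -> model -> expression loop.
# Labels, threshold, and policy are assumptions for illustration.

@dataclass
class PerceivedState:
    emotion: str      # e.g. label from a facial-expression classifier
    intensity: float  # signal strength/confidence in [0.0, 1.0]

# Response policy: counter negative user moods with calming or
# encouraging expressions, and mirror positive ones.
RESPONSE_POLICY = {
    "frustrated": "calm",
    "confused": "encouraging",
    "bored": "enthusiastic",
    "happy": "happy",
    "neutral": "neutral",
}

def choose_expression(state: PerceivedState) -> str:
    """Map the user's perceived emotional state to the digital
    human's expression; fall back to neutral for weak or unknown
    signals so the avatar does not over-react."""
    if state.intensity < 0.3:
        return "neutral"
    return RESPONSE_POLICY.get(state.emotion, "neutral")

print(choose_expression(PerceivedState("frustrated", 0.8)))  # calm
print(choose_expression(PerceivedState("happy", 0.1)))       # neutral
```

In a full system, `PerceivedState` would be produced by the vision modules each frame, and the chosen expression would drive the virtual human's facial animation channels.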