3D head pose estimation with convolutional neural network trained on synthetic images


Abstract:

In this paper, we propose a method to estimate head pose with a convolutional neural network trained on synthetic head images. We formulate head pose estimation as a regression problem, and train a convolutional neural network to learn head features and solve it. To provide annotated head poses for training, we generate a realistic head pose dataset using rendering techniques, covering variation in gender, age, race and expression. Our dataset includes 74,000 head poses rendered from 37 head models. For each head pose, an RGB image and annotated pose parameters are given. We evaluate our method on both synthetic and real data. The experiments show that our method improves the accuracy of head pose estimation.
Date of Conference: 25-28 September 2016
Date Added to IEEE Xplore: 19 August 2016
Electronic ISSN: 2381-8549
Conference Location: Phoenix, AZ, USA
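
The abstract describes the method only at a high level: a CNN maps an RGB head image to pose parameters under a regression formulation. The sketch below illustrates that formulation in PyTorch; the architecture, input size, and L2 loss are assumptions for illustration, not the authors' network, which the page does not specify.

```python
# Minimal sketch of CNN-based head pose regression (illustrative only;
# the paper's actual architecture and loss are not given on this page).
import torch
import torch.nn as nn

class HeadPoseCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Small convolutional feature extractor for a 64x64 RGB head crop.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        # Regression head: outputs the three pose angles (pitch, yaw, roll).
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 3),
        )

    def forward(self, x):
        return self.regressor(self.features(x))

model = HeadPoseCNN()
images = torch.randn(4, 3, 64, 64)   # stand-in for rendered head images
targets = torch.randn(4, 3)          # stand-in for annotated pose angles
loss = nn.functional.mse_loss(model(images), targets)  # L2 regression loss
loss.backward()
```

In this setup, training on the synthetic dataset simply means feeding rendered images and their known pose parameters through such a regression loss; the rendered annotations remove the need for manual labeling.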

1. Introduction

Head pose provides strong cues for human intention, motivation, and attention. Many applications rely on robust head pose estimation, such as human behavior analysis, human-computer interaction, and gaze estimation. In the field of computer vision, head pose is typically interpreted as the orientation of a person's head, represented by three angles: pitch, yaw and roll. The task of head pose estimation is challenging because of large variation in head appearance (expression, race and gender) and environmental factors (occlusion, noise and illumination). The techniques of head pose estimation are well summarized in [1].
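
To make the three-angle representation concrete, the sketch below builds a rotation matrix from pitch, yaw and roll. The axis assignment and rotation order (intrinsic Rz·Ry·Rx) are a common convention assumed here; the paper does not state its exact convention.

```python
# Illustrative pitch/yaw/roll-to-rotation-matrix conversion (assumed
# convention: pitch about x, yaw about y, roll about z, applied as Rz@Ry@Rx).
import numpy as np

def head_rotation(pitch, yaw, roll):
    """Build a 3x3 rotation matrix from pose angles given in radians."""
    cx, sx = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    cz, sz = np.cos(roll), np.sin(roll)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])  # pitch (x-axis)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])  # yaw   (y-axis)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])  # roll  (z-axis)
    return Rz @ Ry @ Rx

# Example: a head turned 30 degrees to the side (yaw only).
R = head_rotation(0.0, np.deg2rad(30.0), 0.0)
```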
