I. Introduction
For object detection and recognition to be useful in real-world applications such as robotics, surveillance, or video indexing, recognizers must be able to localize objects of interest in images under viewpoint changes (e.g., changes in object size or position), must be robust to complex background clutter, and should preferably be fast. The physical size of the objects is often restricted, yet they appear at multiple scales depending on the relative position of the camera and the object. Face detection is a particular example of such an object class. Although it has been studied for more than 30 years, developing a fast and robust face detection system that can handle the variations found in different faces, such as facial expressions, pose changes, illumination changes, complex backgrounds, and low resolution, remains a challenging research topic. The physical size of the face does not vary much between subjects, but the apparent face size changes with the distance from the camera. This is further complicated by facial features emerging and disappearing with distance. Knowing the distance to the object may thus provide an important cue for face size normalization.