I. Introduction
Searching for objects in the surrounding environment is a crucial precondition for the autonomous operation of mobile robots in indoor environments. Studies have shown that 83% of human learning comes from visual information [1]. In most cases, object detection on a mobile robot is realized with vision sensors. The visual data can then be used for higher-level robotic tasks, including object recognition, localization, and mapping. To simplify these higher-level tasks, segmentation is needed to eliminate useless information in the scene. The primary goal of segmentation for a mobile robot is to obtain object candidate regions. This can be difficult to achieve, since in most situations the object is interlaced with a cluttered background. Furthermore, as the robot moves, the background varies from scene to scene, so a constant background model is difficult to define for mobile robots.