I. Introduction
In recent years, distributed control within sensor networks has received wide attention in many engineering applications; representative works may be found in [1]–[7]. A distributed network-space system can monitor events occurring within it, build its own models, communicate with its inhabitants, and act on the decisions it makes. For instance, a wheeled robot may be designed to track a trajectory that is often composed of a set of line segments subject to architectural constraints, an approach that has proved practical for path planning [8]. In addition, many problems encountered with classic wheeled robots (e.g., localization [9], [10], high computational power requirements [11], the need for different software for different kinds of mobile robots [12], and interference between sensors [13]) may be resolved when these robots are cast into a distributed network-space system.

On the other hand, almost all distributed charge-coupled devices (CCDs) are fixed. Therefore, the visible region is limited, and the number of CCDs must be increased whenever the visible area is enlarged [2], [3]. We note that, although an omnidirectional vision system (ODVS) possesses a 360° view angle, its image processing is time consuming, and its estimation error (or calibration error) is large due to image distortion [1], [5], [7], [14], [15].