I. Introduction
RGB-D cameras provide depth and color information, giving robots the ability to sense work targets and the surrounding environment. As an active research area, computer vision and vision-based artificial intelligence are widely used to segment and identify objects of interest [1] and to acquire objects’ poses in three-dimensional space [2], [3]. Object detection is now employed in many real-world applications, such as autonomous driving, robot vision, and video surveillance. For a robot’s end effector to act on identified objects, the objects’ poses must be transformed from the camera coordinate frame to the robot’s base coordinate frame. In visually guided drilling [4] and grinding [5] applications, accurate hand-eye calibration is needed to precisely position the drill and the grinder. In the study by Li et al. on visually guided grinding [5], a criterion sphere is used to calculate the calibration parameters, eliminating the complicated process of tracking correspondences; in addition, a unified objective function is used to jointly model the joint parameters and pose parameters. Other areas where hand-eye calibration plays a major role include robotic surgery [6], robotic welding [7], machine vision metrology [8], robotic grasping [9], and robotic obstacle avoidance [10].
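The camera-to-base transformation mentioned above can be sketched briefly. Assuming hand-eye calibration has produced a rigid homogeneous transform from the camera frame to the robot base frame (the matrix values below are hypothetical placeholders, not a real calibration result), a detected point is mapped as follows:

```python
import numpy as np

# Hypothetical result of hand-eye calibration: the camera frame expressed
# in the robot base frame as a 4x4 homogeneous transform (rotation + translation).
T_base_cam = np.array([
    [0.0, -1.0, 0.0, 0.5],
    [1.0,  0.0, 0.0, 0.2],
    [0.0,  0.0, 1.0, 0.8],
    [0.0,  0.0, 0.0, 1.0],
])

def camera_to_base(T_base_cam, p_cam):
    """Map a 3-D point from camera coordinates to robot-base coordinates."""
    p_h = np.append(p_cam, 1.0)        # lift to homogeneous coordinates
    return (T_base_cam @ p_h)[:3]      # apply transform, drop the scale term

# An object detected 1 m in front of the camera's optical center:
p_cam = np.array([0.0, 0.0, 1.0])
p_base = camera_to_base(T_base_cam, p_cam)  # position in the robot base frame
```

The same matrix multiplication applies to full 6-DOF poses by composing `T_base_cam` with the object's pose in the camera frame; the accuracy of every downstream motion command therefore hinges on how well `T_base_cam` is calibrated.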