I. Introduction
Guidance, navigation, and control technologies for autonomous rendezvous and docking (RVD) demand accurate, real-time measurement and estimation of relative range and attitude [1]. With advances in computer science, many researchers across a wide range of fields have focused on position and attitude estimation based on computer vision techniques [1]–[45]. Compared with active sensors, vision-based measurements do not suffer from high cost, large power consumption, or a limited field of view [1]. Traditional vision-based methods for RVD determine the relative attitude and range from several luminous artificial beacons precisely installed on the target spacecraft [2]–[4]. Consequently, such methods are sensitive to illumination conditions, since optical sensors have difficulty capturing the beacons clearly under bright lighting.