I. Introduction
Image matching methods vary widely, but all involve the same two basic steps: feature extraction and feature matching [1]. Feature extraction is the foundation of image matching, and there are two main approaches: region-based extraction and feature-based extraction. The former mainly considers edge information and is widely used in medical imaging, while the latter has important applications in computer vision and pattern recognition, for example in biometrics and digital watermarking.

So far, a variety of image feature point extraction methods have been proposed, such as the SIFT, Förstner, Harris, and SUSAN algorithms. Professor David G. Lowe of the University of British Columbia proposed the SIFT algorithm in 2004 [2] [3]. The algorithm consistently produces good results under translation, rotation, scaling, brightness change, partial occlusion, and perspective transformation, and has been successfully applied to target identification [4], image restoration [5], image mosaicking [6], and other fields. The core idea of the Förstner algorithm is to compute, for each pixel, the covariance matrix of the Roberts gradient over a local area, and to select as feature points those pixels whose corresponding error ellipse is as close as possible to a circle [7]. The Harris feature point extraction algorithm is very sensitive to scale changes and affine transformations of the image, since it only detects corners at a single scale [8]; at some special corners, it also exhibits corner localization errors [9]. In 1995, Smith of Oxford University first proposed the SUSAN algorithm, a gray-scale method for edge detection and corner detection [10].
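To make the second basic step concrete, the sketch below illustrates one common form of feature matching: brute-force nearest-neighbour matching of descriptor vectors with Lowe's distance-ratio test, as popularized by SIFT. This is a minimal illustration, not the method of any cited paper; the function name `match_features` and the ratio threshold of 0.75 are assumptions chosen for the example.

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.75):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b.

    A match (i, j) is kept only if the nearest distance is clearly smaller
    than the second-nearest (Lowe's ratio test), which rejects ambiguous
    matches. desc_a, desc_b: 2-D arrays, one descriptor per row.
    """
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance from descriptor d to every descriptor in desc_b.
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

In practice the ratio test discards most false correspondences at the cost of a few true ones; a smaller ratio is stricter.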
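The Harris detector discussed above can be summarized in a few lines: it builds a 2x2 structure matrix M from smoothed gradient products and scores each pixel with R = det(M) - k * trace(M)^2, which is large only where the gradient varies strongly in two directions. The following is a minimal single-scale sketch in plain NumPy (which is exactly why the detector is scale-sensitive); the parameter names `k` and `window` are conventional defaults, not values from the source.

```python
import numpy as np

def harris_response(image, k=0.04, window=3):
    """Single-scale Harris corner response for a grayscale image.

    Illustrative sketch only: central-difference gradients and a box
    filter stand in for the Gaussian smoothing used in practice.
    """
    img = image.astype(float)
    # Image gradients: np.gradient returns derivatives along rows, cols.
    Iy, Ix = np.gradient(img)
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box_sum(a):
        # Sum each pixel's window x window neighbourhood (edge-padded).
        pad = window // 2
        p = np.pad(a, pad, mode="edge")
        out = np.zeros_like(a)
        for dy in range(window):
            for dx in range(window):
                out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out

    Sxx, Syy, Sxy = box_sum(Ixx), box_sum(Iyy), box_sum(Ixy)

    # R = det(M) - k * trace(M)^2 for the structure matrix M per pixel.
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace
```

On a synthetic white square, R is positive at the square's corners, negative along its edges, and zero in flat regions, which is the behaviour the response formula is designed to produce.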