A Binarized Multi-Resolution Feature-Based Offline Signature

Abstract:

Handwritten signature recognition is widely employed to authenticate administrative and official documents when a high level of assurance is required. Despite extensive prior research, offline signature recognition remains challenging, especially when distinguishing genuine signatures from forgeries: the difference in appearance between a genuine signature and a skilled forgery may be much smaller than the variation among genuine signatures themselves. This paper therefore outlines a new multi-scale approach to offline signature representation, designed to capture texture features over a wide range of resolutions. The representation is built from binarized statistical features of the image, computed at several scales. Pre-learned filters, derived from natural images, are applied to the signature images to reveal the signature structure and generate a discriminative image description. The reduced, relevant feature set is then assessed by a classifier for efficient offline signature recognition.
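
The excerpt does not specify the authors' exact pipeline, but the description matches a BSIF-style encoding (binarized statistical image features) computed with pre-learned filter banks of several sizes. The Python sketch below illustrates that general idea; the function names, the zero-threshold binarization, and the histogram pooling are illustrative assumptions, not the published implementation.

import numpy as np
from scipy.signal import convolve2d

def binarized_codes(image, filters):
    # Convolve a grayscale image with a bank of pre-learned linear
    # filters and binarize each response at zero: one bit per filter
    # (an assumption matching standard BSIF, not necessarily this paper).
    codes = np.zeros(image.shape, dtype=np.uint32)
    for i, f in enumerate(filters):
        response = convolve2d(image, f, mode="same", boundary="symm")
        codes |= (response > 0).astype(np.uint32) << i
    return codes

def multiscale_descriptor(image, filter_banks):
    # Concatenate normalized code histograms obtained with filter banks
    # of several sizes, capturing texture over a range of resolutions.
    histograms = []
    for filters in filter_banks:
        codes = binarized_codes(image, filters)
        n_bins = 2 ** len(filters)
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
        histograms.append(hist / max(hist.sum(), 1))
    return np.concatenate(histograms)

Here filter_banks would hold pre-learned filters derived from natural images (e.g., ICA filters estimated from natural-image patches, as in standard BSIF) at sizes such as 5x5, 7x7, and 11x11; the resulting descriptor can then be reduced and passed to a classifier.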
Date of Conference: 18-20 December 2023
Date Added to IEEE Xplore: 04 March 2024
Publisher: IEEE
Conference Location: Sousse, Tunisia

I. Introduction

Signature verification and identification are among the biometric systems used for personal identification. Because handwriting styles vary both between and within individuals, signatures can be used to authenticate a person. Compared with other biometric recognition systems, signature identification is a non-invasive process widely applied in daily life, with multiple potential applications, especially in the legal, administrative, and financial sectors [1]. Signature recognition systems fall into two categories, online and offline, depending on how the signature data are acquired. In online signature recognition, dynamic information about the signing process, including pen trajectory, pen tilt, and pen-down pressure, is mainly exploited in the feature-extraction step. In offline recognition, by contrast, the signature image is captured with an optical scanner and then supplied to the recognition system. In both categories, recognition is difficult chiefly because of low inter-class variation, substantial intra-class variation, and the small number of training samples.

In the scientific community, several methods for offline signature identification have been presented, the majority concentrating on feature extraction and the assessment of similarity metrics [2]–[4]. Several of these methods aim to increase the robustness and accuracy of recognition under uncontrolled conditions. Signature verification [5], [6] is a typical application requiring high reliability. Although earlier techniques have enabled significant advances in signature verification [7], the fine-grained characteristics of signatures, which are usually embedded in local regions, have not been sufficiently exploited [8]. Yet it is generally the particular characteristics of local regions or line sections that distinguish a skillfully forged signature from a genuine one. Accordingly, several descriptors have been proposed for feature extraction, including directional properties of signature contours [2], curvelet coefficient energy [3], and binary features [4]. In parallel, several recent efforts have been devoted to exploiting the feature-learning capability of deep neural networks (DNNs) [9], [10], specifically convolutional neural networks (CNNs) [11], [12]. The Siamese convolutional network, which takes two signature images as input, can learn visual features jointly with a similarity metric [13], [14].
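
As an illustration of the Siamese approach cited above, which learns features jointly with a similarity metric, the following is a minimal PyTorch sketch of the generic architecture; the layer sizes, embedding dimension, and contrastive loss are illustrative assumptions rather than the exact models of [13], [14].

import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseSignatureNet(nn.Module):
    # Two weight-sharing CNN branches; the distance between their
    # embeddings acts as the learned similarity metric.
    def __init__(self, embedding_dim=128):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, embedding_dim),
        )

    def forward(self, x1, x2):
        # The same parameters process both signature images.
        return self.embed(x1), self.embed(x2)

def contrastive_loss(e1, e2, label, margin=1.0):
    # label is a float tensor: 1.0 for a genuine pair, 0.0 for a forgery.
    d = F.pairwise_distance(e1, e2)
    return torch.mean(label * d.pow(2) + (1 - label) * F.relu(margin - d).pow(2))

Training on genuine/forgery pairs pulls embeddings of matching signatures together and pushes mismatched ones at least margin apart, so a simple distance threshold can serve for verification at test time.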
