1 Introduction
Edge detection has been a fundamental and core problem in computer vision since its inception [1], as part of the segmentation-then-recognition paradigm in which the image is first parsed into segments and then subjected to recognition. Under the contour-based version of this paradigm, edges are first extracted, then linked into contours, and the contours are closed to form regions. Edge detectors have evolved significantly from the initial derivative operators, the Prewitt and Sobel operators, through Marr and Hildreth's zero-crossings of the Laplacian of Gaussian, to the Canny edge detector, SUSAN, and a host of others. Aside from a brief period when the elusiveness of segmentation led to the predominance of feature-based approaches, which bypass segmentation, hundreds of approaches to edge detection have been developed. While the majority of these approaches aim at improving the performance of edge detectors [2], or deal with specialized situations such as highly noisy images [3], they generally pay comparatively little attention to finding the orientation or curvature of an edge, with few exceptions.
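As a point of reference for the discussion above, the sketch below (not part of any cited method; the function name and threshold parameter are illustrative) shows how a classical gradient operator such as Sobel exposes edge orientation only implicitly, as the arctangent of two directional derivatives, with no explicit curvature estimate.

```python
# Minimal sketch, assuming a grayscale image as a float NumPy array.
import numpy as np
from scipy import ndimage

def sobel_edges(image: np.ndarray, threshold: float = 0.1):
    """Return gradient magnitude, a binary edge map, and per-pixel orientation."""
    gx = ndimage.sobel(image, axis=1)      # horizontal derivative
    gy = ndimage.sobel(image, axis=0)      # vertical derivative
    magnitude = np.hypot(gx, gy)           # edge strength
    orientation = np.arctan2(gy, gx)       # gradient direction in radians
    edges = magnitude > threshold * magnitude.max()
    return magnitude, edges, orientation
```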