1. INTRODUCTION
Humans can determine with ease the color of an object under varying illumination. Computational color constancy aims to estimate the actual color of an object independent of the illuminant, and it is essential in many color-based computer vision applications [1]. Many algorithms have been proposed for color constancy [2], [3], [4], [5], [6], [7], [8]. The work most closely related to ours is the algorithm in [9]: the Grey-Edge hypothesis, a solution to the color constancy problem that works for any input image without prior knowledge about the scene. The methods proposed there estimate the illuminant from the Minkowski norm of the color derivatives of the image, i.e., the differences between neighboring image points. This is the first framework in which color constancy based on the derivative structure of images is investigated, and it also subsumes the known algorithms that use the original image intensities directly. Experiments show that the color constancy algorithms proposed in [9] obtain results comparable to state-of-the-art methods, with the merit of computational efficiency. Furthermore, these algorithms do not require image databases taken under known light sources for calibration, as some more elaborate color constancy methods do [9].
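To make the idea concrete, the following is a minimal sketch of a first-order Grey-Edge illuminant estimate, assuming an H×W×3 floating-point image; the function name and the choice of Minkowski parameter p are illustrative, not taken from [9], and pre-smoothing of the image is omitted for brevity.

```python
import numpy as np

def grey_edge_illuminant(image, p=6):
    """Estimate the illuminant color from the Minkowski norm of the
    per-channel spatial derivatives of the image (first-order Grey-Edge).

    image : H x W x 3 float array; p : Minkowski norm order (illustrative).
    Returns a unit-norm 3-vector of estimated illuminant color.
    """
    est = np.zeros(3)
    for c in range(3):
        # Per-channel derivatives along rows and columns.
        gy, gx = np.gradient(image[:, :, c])
        mag = np.sqrt(gx ** 2 + gy ** 2)
        # Minkowski p-norm of the gradient magnitudes.
        est[c] = np.mean(mag ** p) ** (1.0 / p)
    # Normalize: only the direction of the illuminant vector matters.
    return est / np.linalg.norm(est)

# Usage sketch: a synthetic image whose red channel varies twice as
# strongly should yield a red-dominated illuminant estimate.
rng = np.random.default_rng(0)
base = rng.random((32, 32))
img = np.stack([2 * base, base, base], axis=-1)
print(grey_edge_illuminant(img))
```

Setting p = 1 averages the derivatives uniformly, while larger p emphasizes the strongest edges; the zeroth-order analogue of this computation (norms of the intensities themselves rather than their derivatives) recovers the intensity-based algorithms the paragraph mentions.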