I. Introduction
The concept of entropy is a fundamental tool in statistical mechanics, thermodynamics, information science, and statistics. In information theory, Claude Shannon [1] was the first to introduce a measure of randomness or uncertainty in a discrete distribution, in 1948. Suppose $X$ is a discrete random variable taking values in $\{x_1, x_2, \ldots, x_n\}$ with $P(X = x_i) = p_i$, $i = 1, 2, \ldots, n$. Shannon's proposed measure, known as Shannon entropy, is given by $H(X) = -\sum_{i=1}^{n} p_i \log p_i$, where the $p_i$'s are the probabilities associated with the various realizations of $X$. Several other entropies have been proposed since Shannon's seminal work, including Rényi entropy [2], Tsallis entropy [3], Sharma-Mittal entropy [4], Cumulative Residual Entropy (CRE) [5], Havrda and Charvát entropy [6], Awad entropy [7], and their extensions.

In this paper, we propose a new entropy built on a new notion of a measure of information, one that increases as the amount of information increases and saturates once full information is known. We show analytically that the proposed entropy satisfies all the prominent axioms of an entropy measure. Through extensive experiments, we demonstrate that the proposed entropy performs efficiently in the context of image segmentation, robust statistics, and entropic clustering.
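As a brief worked illustration of this definition (a standard property of Shannon entropy, stated here for context), consider the two extreme cases. A uniform distribution with $p_i = 1/n$ for all $i$ attains the maximum entropy,
$$H(X) = -\sum_{i=1}^{n} \frac{1}{n} \log \frac{1}{n} = \log n,$$
whereas a degenerate distribution with $p_j = 1$ for some $j$ (and $p_i = 0$ otherwise, under the usual convention $0 \log 0 = 0$) yields $H(X) = 0$, the case of complete certainty.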