I. Introduction
When performing compression at relatively low bit rates, information is generally lost between the original image and the image recovered after decoding. Most compression schemes are based on minimizing the mean-square error (MSE) between the original and decoded imagery. While this is a natural choice in many applications, there are problems for which a classification decision will ultimately be made on the basis of the decoded imagery. For example, in medical-image compression for transmission or storage, an expert will often make a diagnosis based on the decoded imagery [3]. In remote sensing, one often collects very large quantities of data (e.g., infrared or synthetic-aperture-radar imagery), necessitating low-bit-rate compression. In the remote-sensing problem, humans will also often make decisions based on the decoded imagery. It is therefore desirable to encode the original imagery in a manner that accounts for the ultimate classification task, thus motivating consideration of non-MSE distortion measures and, hence, modification of the associated encoders/decoders.
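For reference, a minimal statement of the MSE distortion mentioned above is given next; the symbols x_i, \hat{x}_i, and N are illustrative notation introduced here and are not drawn from the cited works:

\[
D_{\mathrm{MSE}}(\mathbf{x},\hat{\mathbf{x}}) \;=\; \frac{1}{N}\sum_{i=1}^{N}\bigl(x_i - \hat{x}_i\bigr)^{2},
\]

where x_i and \hat{x}_i denote the i-th pixel of the original and decoded images, respectively, and N is the number of pixels. A non-MSE measure of the kind motivated above would instead weight reconstruction errors by their impact on the downstream classification decision rather than treating all pixel errors uniformly.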