I. Introduction
Hyperspectral (HS) images comprise measurements of electromagnetic energy distributed over hundreds of narrow spectral bands. Owing to the rich information in both the spectral and spatial domains, hyperspectral imagery (HSI) has a wide variety of applications, such as assessment of food quality and safety [1], [2], artwork authentication [3], and examination of drug forgeries [4]. HSI is also employed in biomedical engineering applications such as the classification of corneal epithelium injuries [5], extraction of the properties of corneal tissues [6], and gastric cancer diagnosis [7]. In addition, HSI is widely used in many remote sensing applications [8]–[12], including image classification and pattern recognition [13], [14], and spectral unmixing [15]. Unfortunately, all of these applications come at the cost of high memory and storage requirements due to the huge volume of data. Consequently, lossy and lossless compression of HS images has been an active focus of research over the last decade [12], [15]–[29].

These compression algorithms adopt a variety of approaches. Traditional 2-D image compression algorithms can be applied to each band independently to obtain a compressed version of the HS cube [27], [30], [31]. These methods provide satisfactory compression rates but fail to exploit interband correlation. To address this shortcoming, some of these methods have been extended to 3-D versions for the compression of HS images [12], [32], [33], though the extended methods inevitably suffer from high computational complexity. To reduce this complexity, sparse representations via dictionary learning have been proposed [17], [26], [34]. Matrix and tensor decomposition as well as factorization methods have also been employed in HSI compression [24], [25], [35]–[37]. Wavelet-based compression methods have likewise been developed [12], [38], [39]. Finally, with the rapid improvement in GPU technology, convolutional neural network-based schemes have been adapted to HSI compression [40].