I. Introduction
Deploying deep learning architectures for the analysis of large-scale images introduces a multitude of challenges. Chief among them is the increased computational cost of processing larger inputs: as image dimensions grow, the network must handle a corresponding increase in the number of parameters and activations, leading to longer training times and higher memory requirements [1]. This can also make it harder for the model to identify and learn salient features, potentially leading to overfitting.

In medical imaging, the need for high resolution and fine detail to support accurate diagnosis and treatment often yields very large images [2]. At the same time, real-time analysis of such images is pivotal for making prompt decisions and delivering effective treatments [4]. Strategies for improving the efficiency of deep learning include algorithmic optimizations such as quantization, pruning, compression, and approximations that streamline computation while preserving accuracy [3]. Optimizing deep learning architectures to handle large images efficiently can therefore enable real-time applications, improving diagnosis, monitoring, and intervention in critical scenarios. Moreover, large images typically encapsulate a wealth of information, and deep learning models must capture and exploit the pertinent features for accurate analysis and decision-making [5].

In this paper, we employ and advocate for the Forward-Forward algorithm [6], Knowledge Distillation [7], and Movement Pruning [8], exploring multiple scenarios in which we seek to identify the leanest network. We also compress the data using various techniques and determine which among them yields the most reliable output. Additionally, we examine the performance of the leanest model on compressed data in terms of classification and generation tasks.
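As background for the methods named above, knowledge distillation [7] trains a compact student network to match a teacher's temperature-softened output distribution rather than hard labels. The sketch below illustrates only the standard distillation objective; the function names, temperature value, and plain-Python formulation are illustrative assumptions, not the implementation used in this paper.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence from student to teacher soft targets,
    scaled by T^2 as proposed in the distillation literature [7]."""
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))
    return temperature ** 2 * kl
```

In practice this term is combined with an ordinary cross-entropy loss on the true labels, letting the smaller student inherit the teacher's inter-class similarity structure.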
Our findings contribute to the understanding of how compression affects the quality of deep learning results and provide a novel approach to addressing the challenges of analyzing large images using deep learning models.