1. Introduction
Understanding image quality is essential for many applications, yet assessing it is difficult because an ideal reference image is typically unavailable. No-Reference Image Quality Assessment (NR-IQA) addresses this issue: the objective is to develop techniques that assess an image's quality independently, without needing the original image. The importance of NR-IQA arises from its wide range of applications, including surveillance systems [1], medical imaging [2], content delivery networks [3], and image and video compression [4]. In these domains, quality must be assessed without the original reference image, and NR-IQA thus advances imaging technology and improves user experience.

Existing NR-IQA methods focus on developing novel algorithms for evaluating image quality. The Test-Time Adaptation technique for Image Quality Assessment (TTAIQA) [5], Quality-aware Pre-Trained (QPT) models [6] based on self-supervised learning, the Language-Image Quality Evaluator (LIQE), and the data-efficient image quality transformer (DEIQT) [7] represent strides in this field, alongside many methods that leverage CNNs. However, shortcomings persist. The scarcity of labeled data hinders the effectiveness of deep learning models, and CNN-based methods capture only local features while disregarding the non-local features of an image that transformers can capture. Even the largest NR-IQA dataset, FLIVE, falls short of datasets in other domains, impeding the robust training of NR-IQA models.