I. Introduction
Laparoscopic surgeries rely on highly sensitive video imaging sensors to accurately visualize and navigate the surgical site. These sensors are engineered to maintain adequate visibility even in the presence of various distortions [1]. Such distortions include uneven illumination from fluctuating light sources, blur artifacts from improper focusing, lens fogging caused by smoke from tissue cauterization, and noise introduced during channel transmission [2]. To ensure these distortions are kept within limits that comply with industry regulatory standards, the sensors undergo rigorous quality checks at every manufacturing stage [3]. Since performing these checks manually is time-consuming, expensive, and labor-intensive at large-scale production volumes, there is high demand for laparoscopic video quality assessment (LVQA) algorithms whose outputs align closely with human evaluations.
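To make the four distortion categories above concrete, the following sketch simulates each one on a synthetic grayscale frame using NumPy. This is purely illustrative: the function name `simulate_distortions` and all parameter values (gradient range, kernel size, fog blend weights, noise level) are hypothetical choices, not a model drawn from the cited works.

```python
import numpy as np

def simulate_distortions(frame, rng):
    """Apply illustrative LVQA-style distortions to a grayscale frame in [0, 1]."""
    h, w = frame.shape
    # Uneven illumination: multiplicative horizontal brightness gradient.
    gradient = np.linspace(0.6, 1.0, w)[None, :]
    uneven = frame * gradient
    # Blur: 5x5 box filter as a crude stand-in for defocus.
    k = 5
    pad = np.pad(frame, k // 2, mode="edge")
    blurred = np.zeros_like(frame)
    for dy in range(k):
        for dx in range(k):
            blurred += pad[dy:dy + h, dx:dx + w]
    blurred /= k * k
    # Fogging: blend toward a bright haze, mimicking smoke-induced contrast loss.
    fogged = 0.6 * frame + 0.4 * 0.9
    # Transmission noise: additive Gaussian noise, clipped back to [0, 1].
    noisy = np.clip(frame + rng.normal(0.0, 0.05, frame.shape), 0.0, 1.0)
    return uneven, blurred, fogged, noisy

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
uneven, blurred, fogged, noisy = simulate_distortions(frame, rng)
```

Each output retains the frame's shape but changes its statistics in a characteristic way (e.g. blur reduces variance, fog raises the minimum intensity), which is the kind of signal an LVQA algorithm must learn to score consistently with human judgments.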