I. Introduction
Nowadays, omnidirectional displays have been widely applied in Virtual Reality (VR) to provide a powerful immersive experience using 360-degree surround information, and dedicated commercial software and hardware products have been released in rapid succession. Given the availability of inexpensive and reliable standalone Head-Mounted Displays (HMDs), such as the HTC Vive and Oculus Quest 2, VR is still expected to become the next-generation consumer-grade computing platform [1]. Additionally, various capture systems (e.g., Insta360 and Facebook Surround 360) and the abundant 360-degree content offered by popular media platforms (e.g., YouTube and Netflix) further attract consumers to the visual experience of VR [2]. However, quality degradation introduced at each processing stage (e.g., immersive content acquisition, image stitching, or compression) may impair users' visual experience. It is also worth noting that although panoramic content seamlessly occupies the user's entire Field of View (FoV), the lack of depth cues may lead to insufficient stereoscopic perception [3]. Therefore, in this paper we mainly focus on the perceptual quality of Stereoscopic Omnidirectional Images (SOIs) as experienced by users.