
A Comparative Evaluation Of Temporal Pooling Methods For Blind Video Quality Assessment



Abstract:

Many objective video quality assessment (VQA) algorithms include a key step of temporal pooling of frame-level quality scores. However, less attention has been paid to studying the relative efficiencies of different pooling methods on no-reference (blind) VQA. Here we conduct a large-scale comparative evaluation to assess the capabilities and limitations of multiple temporal pooling strategies on blind VQA of user-generated videos. The study yields insights and general guidance regarding the application and selection of temporal pooling models. In addition, we also propose an ensemble pooling model built on top of high-performing temporal pooling models. Our experimental results demonstrate the relative efficacies of the evaluated temporal pooling models, using several popular VQA algorithms evaluated on two recent large-scale natural video quality databases. Finally, we provide an empirical recipe for applying temporal pooling to frame-based quality predictions.
Date of Conference: 25-28 October 2020
Date Added to IEEE Xplore: 30 September 2020
Conference Location: Abu Dhabi, United Arab Emirates

1. Introduction

Video quality assessment (VQA) models have been widely studied [1] as an increasingly important toolset used by the streaming and social media industries. While full-reference (FR) VQA research is gradually maturing and several algorithms [2, 3] are quite widely deployed, recent attention has shifted towards creating better no-reference (NR) VQA models that can be used to predict and monitor the quality of authentically distorted user-generated content (UGC) videos. UGC videos, which are typically created by amateur videographers, often suffer from unsatisfactory perceptual quality arising from imperfect capture devices, unskilled shooting, a variety of possible content processing operations, as well as compression and streaming distortions. In this regard, predicting UGC video quality is much more challenging than assessing the quality of synthetically distorted videos in traditional video databases: UGC distortions are more diverse, complicated, and commingled, and no "pristine" reference is available.
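To make the central idea concrete, a minimal sketch of temporal pooling follows: a blind VQA model emits one quality score per frame, and a pooling function collapses those scores into a single video-level prediction. The function names, the example scores, and the 20% worst-fraction parameter below are illustrative assumptions, not the paper's specific models.

```python
# Illustrative sketch of common temporal pooling strategies for
# mapping per-frame quality scores to one video-level score.
from statistics import mean


def mean_pool(scores):
    """Arithmetic mean: weights every frame equally."""
    return mean(scores)


def harmonic_mean_pool(scores):
    """Harmonic mean: penalizes low-quality frames more heavily."""
    return len(scores) / sum(1.0 / s for s in scores)


def percentile_pool(scores, worst_fraction=0.2):
    """Worst-percentile pooling: average only the lowest-scoring
    fraction of frames, reflecting that viewers judge a video
    largely by its worst moments."""
    k = max(1, int(len(scores) * worst_fraction))
    return mean(sorted(scores)[:k])


# Hypothetical frame-level quality scores in [0, 1].
frame_scores = [0.8, 0.75, 0.3, 0.9, 0.85, 0.4]
print(mean_pool(frame_scores))
print(harmonic_mean_pool(frame_scores))
print(percentile_pool(frame_scores))
```

Because each strategy responds differently to quality fluctuations over time (the mean smooths them out, while harmonic and worst-percentile pooling emphasize dips), their relative accuracy against human opinion scores is an empirical question, which is the comparison this paper undertakes.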

