
SpEED-QA: Spatial Efficient Entropic Differencing for Image and Video Quality


Abstract:

Many image and video quality assessment (I/VQA) models rely on data transformations of image/video frames, which increases their programming and computational complexity. By comparison, some of the most popular I/VQA models deploy simple spatial bandpass operations at a couple of scales, making them attractive for efficient implementation. Here we design reduced-reference image and video quality models of this type that are derived from the high-performance reduced reference entropic differencing (RRED) I/VQA models. A new family of I/VQA models, which we call the spatial efficient entropic differencing for quality assessment (SpEED-QA) model, relies on local spatial operations on image frames and frame differences to compute perceptually relevant image/video quality features in an efficient way. Software for SpEED-QA is available at: http://live.ece.utexas.edu/research/Quality/SpEED_Demo.zip.
Published in: IEEE Signal Processing Letters ( Volume: 24, Issue: 9, September 2017)
Page(s): 1333 - 1337
Date of Publication: 13 July 2017


I. Introduction

Objective image and video quality assessment (I/VQA) models aim to predict visual quality without the need to collect human subjective scores. These models often rely on statistical regularities (viz., natural scene statistics—NSS) that govern natural images and videos. NSS-derived features may be used to quantify deviations from these statistical properties that are predictive of visual impairments. There are three categories of objective I/VQA models: full-reference (FR), reduced-reference (RR), and no-reference (NR). FR models [1], [2] compare possibly distorted signals against entire reference versions of them. RR models [3]–[7] use only a subset of the reference data to predict quality, whereas NR models use only the distorted image/video to measure quality [8]–[10].
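To make the entropic-differencing idea behind the RRED/SpEED-QA family concrete, the sketch below is a simplified illustration (not the authors' implementation): each frame is divided into small blocks, locally mean-subtracted as a crude spatial bandpass, and the entropy of each block is computed under a local Gaussian model; quality degradation is then measured as the average weighted entropy difference between reference and distorted blocks. The block size, the neutralizing noise parameter `sigma_nsq`, and the log-variance weighting are illustrative assumptions, not values from the paper.

```python
import numpy as np

def local_entropies(img, block=3, sigma_nsq=0.1):
    """Block-wise entropies of a locally mean-subtracted image under a
    Gaussian model: h = 0.5 * log(2*pi*e*(local_variance + sigma_nsq)).
    Returns (entropies, weights); weights down-weight flat regions."""
    img = img.astype(np.float64)
    h, w = img.shape
    h -= h % block
    w -= w % block
    # Tile the image into non-overlapping (block x block) patches.
    patches = img[:h, :w].reshape(h // block, block, w // block, block)
    patches = patches.transpose(0, 2, 1, 3).reshape(-1, block * block)
    # Local mean subtraction acts as a crude spatial bandpass operation.
    patches = patches - patches.mean(axis=1, keepdims=True)
    var = patches.var(axis=1)
    ent = 0.5 * np.log(2 * np.pi * np.e * (var + sigma_nsq))
    weight = np.log(1.0 + var)
    return ent, weight

def entropic_difference(ref, dis):
    """Mean absolute weighted entropy difference between two frames."""
    er, wr = local_entropies(ref)
    ed, wd = local_entropies(dis)
    return float(np.mean(np.abs(wr * er - wd * ed)))
```

For a video, the same operation would also be applied to frame differences, which is where the temporal sensitivity of the VQA variant comes from; identical reference and test frames yield a difference of zero, and distortions that alter local variance statistics increase the score.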

