Waterloo IVC Code Repository

SSIM

The Structural SIMilarity (SSIM) index is a method for measuring the similarity between two images. The SSIM index can be viewed as a quality measure of one of the images being compared, provided the other image is regarded as of perfect quality. It is an improved version of the universal image quality index proposed earlier.

  • Z. Wang, A. C. Bovik, H. R. Sheikh and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, Apr. 2004.
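The SSIM index combines luminance, contrast, and structure comparisons into a single score. As a minimal illustration, the sketch below evaluates the SSIM formula on global image statistics; the published algorithm instead computes it inside an 11×11 Gaussian sliding window and averages the resulting local map, so this simplified function is an assumption-laden approximation, not the reference implementation.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Simplified SSIM using global image statistics (a sketch; the
    published index uses a local 11x11 Gaussian window and averages
    the local SSIM map over the image)."""
    C1 = (0.01 * data_range) ** 2  # stabilizes the luminance term
    C2 = (0.03 * data_range) ** 2  # stabilizes the contrast/structure term
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)) / \
           ((mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))
```

For identical images the numerator and denominator coincide, so the score is exactly 1; any distortion lowers it.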


IW-SSIM

The information content weighted structural similarity measure (IW-SSIM) extends the structural similarity (SSIM) index by incorporating information content weighted pooling.

  • Zhou Wang and Qiang Li, “Information Content Weighting for Perceptual Image Quality Assessment,” IEEE Transactions on Image Processing, vol. 20, no. 5, pp. 1185-1198, May 2011.


MS-SSIM

The multi-scale structural similarity measure (MS-SSIM) is an extension of the structural similarity (SSIM) index that provides more flexibility than single-scale methods in accounting for variations in image resolution and viewing conditions.

  • Z. Wang, E. P. Simoncelli and A. C. Bovik, “Multi-scale structural similarity for image quality assessment,” Invited Paper, IEEE Asilomar Conference on Signals, Systems and Computers, Nov. 2003.
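MS-SSIM repeatedly low-pass filters and downsamples both images, pools the contrast/structure comparison at every scale, and applies the luminance comparison only at the coarsest scale. The sketch below follows that structure but, as an assumption, uses global statistics and plain 2×2 average pooling in place of the windowed statistics and low-pass filter of the published method; the scale weights are the standard five-scale values from the paper.

```python
import numpy as np

def _downsample(img):
    # 2x2 average pooling as a stand-in for low-pass filtering
    # followed by dyadic downsampling.
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2] +
            img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def ms_ssim_global(x, y,
                   weights=(0.0448, 0.2856, 0.3001, 0.2363, 0.1333),
                   data_range=255.0):
    """Sketch of MS-SSIM using global (not windowed) statistics:
    contrast/structure terms at every scale, luminance only at the
    coarsest scale, combined as a weighted geometric mean."""
    C1 = (0.01 * data_range) ** 2
    C2 = (0.03 * data_range) ** 2
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    score = 1.0
    for i, w in enumerate(weights):
        mu_x, mu_y = x.mean(), y.mean()
        var_x, var_y = x.var(), y.var()
        cov = ((x - mu_x) * (y - mu_y)).mean()
        cs = (2 * cov + C2) / (var_x + var_y + C2)  # contrast * structure
        if i == len(weights) - 1:
            lum = (2 * mu_x * mu_y + C1) / (mu_x ** 2 + mu_y ** 2 + C1)
            score *= (lum * cs) ** w
        else:
            score *= cs ** w
            x, y = _downsample(x), _downsample(y)
    return score
```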


Universal Image Quality Index

The universal objective image quality index is easy to calculate and applicable to various image processing applications. Instead of using traditional error summation methods, the index models any image distortion as a combination of three factors: loss of correlation, luminance distortion, and contrast distortion.

  • Zhou Wang and Alan C. Bovik, “A Universal Image Quality Index,” IEEE Signal Processing Letters, vol. 9, no. 3, pp. 81-84, March 2002.
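The three factors multiply into a single score Q = correlation × luminance × contrast, which algebraically reduces to a closed form in the image means, variances, and covariance. The sketch below evaluates that closed form over the whole image; note, as an assumption, that the paper computes it in a sliding 8×8 window and averages the local values, which this simplification omits.

```python
import numpy as np

def uqi_global(x, y):
    """Simplified universal quality index over the whole image
    (the published method averages it over sliding 8x8 windows):
    Q = 4 * cov * mu_x * mu_y / ((var_x + var_y) * (mu_x^2 + mu_y^2)).
    """
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    num = 4 * cov * mu_x * mu_y
    den = (var_x + var_y) * (mu_x ** 2 + mu_y ** 2)
    return num / den
```

Unlike SSIM, there are no stabilizing constants, so the index is undefined when the denominator vanishes (e.g. two constant images); SSIM's C1 and C2 were introduced precisely to fix this.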


TMQI

Tone mapping operators (TMOs) that convert high dynamic range (HDR) to low dynamic range (LDR) images provide practically useful tools for the visualization of HDR images on standard LDR displays. Tone Mapped image Quality Index (TMQI) is an objective quality assessment algorithm for tone mapped images.

  • H. Yeganeh and Z. Wang, “Objective Quality Assessment of Tone Mapped Images,” IEEE Transactions on Image Processing, vol. 22, no. 2, pp. 657-667, Feb. 2013.


STGSM

STGSM is a video denoising algorithm based on a spatiotemporal Gaussian scale mixture model of wavelet coefficients. A noise-robust motion estimation scheme enhances the temporal correlations between wavelet coefficients, and a Bayesian least square estimation approach is then applied for video denoising, resulting in competitive performance in terms of both PSNR and SSIM evaluations.

  • G. Varghese and Z. Wang, “Video denoising based on a spatiotemporal Gaussian scale mixture model,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 20, no. 7, pp. 1032-1040, July 2010.
  • G. Varghese and Z. Wang, “Video denoising using a spatiotemporal statistical model of wavelet coefficients,” IEEE International Conference on Acoustics, Speech, & Signal Processing, Las Vegas, Nevada, Mar. 30-Apr. 4, 2008.


No-Reference Perceptual Quality Assessment of JPEG Compressed Images

This is an implementation of the algorithm for calculating the quality score of JPEG compressed images without reference images.

  • Zhou Wang, H. R. Sheikh and Alan C. Bovik, “No-Reference Perceptual Quality Assessment of JPEG Compressed Images,” IEEE International Conference on Image Processing, September 2002.


Reduced-Reference Image Quality Assessment

Reduced-reference (RR) image quality metrics provide a solution that lies between full-reference (FR) and no-reference (NR) models. They are designed to predict the perceptual quality of distorted images with only partial information about the reference images. This package provides an implementation of the RR method.

  • Z. Wang and E. P. Simoncelli, “Reduced-reference image quality assessment using a wavelet-domain natural image statistic model,” Human Vision and Electronic Imaging X, Proc. SPIE, vol. 5666, San Jose, CA, Jan. 2005.


Quality-Aware Images

A quality-aware image is created by extracting certain features of the original (high-quality) image and embedding them into the image as invisible hidden messages. Such an image can be aware of its own quality degradation because when a distorted version of the image is received, users can decode the hidden messages and use them to provide an objective measure of the quality of the distorted image.

  • Z. Wang, Guixing Wu, Hamid R. Sheikh, E. P. Simoncelli, En-Hui Yang and Alan C. Bovik, “Quality-Aware images,” IEEE Transactions on Image Processing, vol. 15, no. 6, pp. 1680-1689, June 2006.


EONSS

End-to-end Optimized deep neural Network using Synthetic Scores (EONSS) is a CNN-based blind IQA model for multiply distorted images.