Zhongling Wang, Shahrukh Athar and Zhou Wang
We propose a CNN-based approach to build a blind IQA model for multiply distorted images, namely the End-to-end Optimized deep neural Network using Synthetic Scores (EONSS).
In real-world visual content acquisition and distribution systems, the vast majority of visual content undergoes multiple distortions between the source and the end user. However, traditional image quality assessment (IQA) algorithms are usually validated, and at times trained, on image databases with a single distortion stage. Existing IQA methods for multiply distorted images remain limited in their scope and performance. In this work, we design a first-of-its-kind blind IQA model for multiply distorted visual content based on a deep end-to-end convolutional neural network. The network is trained on a newly developed dataset composed of millions of multiply distorted images annotated with synthetic quality scores. Our tests on three publicly available, subject-rated, multiply distorted image databases show that the proposed model outperforms state-of-the-art blind IQA methods in terms of both accuracy and speed.
- DEMO (testing code and pretrained model)
- Training code is coming soon.
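As a rough illustration of how a patch-based blind IQA model like EONSS is typically applied at test time, the sketch below extracts fixed-size patches from an image, scores each patch with the network, and pools the patch scores into a single image-level score. This is a minimal sketch, not the released code: the patch size, stride, pooling rule, and the stand-in `patch_score_fn` callable are illustrative assumptions, and the actual EONSS settings are in the paper and the DEMO code.

```python
import numpy as np

def extract_patches(img, patch_size=235, stride=128):
    """Collect square patches from a sliding window over the image.
    The patch size and stride here are illustrative placeholders,
    not necessarily the settings used by EONSS."""
    h, w = img.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(img[y:y + patch_size, x:x + patch_size])
    return patches

def predict_quality(img, patch_score_fn):
    """Score every patch with the model (here a stand-in callable)
    and average the patch scores into an image-level quality score.
    Average pooling is one common choice, assumed for illustration."""
    scores = [patch_score_fn(p) for p in extract_patches(img)]
    return float(np.mean(scores))

# Stand-in for the trained CNN: mean patch intensity as a dummy score.
img = np.random.rand(480, 640, 3)
score = predict_quality(img, lambda p: p.mean())
```

In practice `patch_score_fn` would be a forward pass through the pretrained network, and the image would first be preprocessed the same way as during training.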
We are making the EONSS code and pretrained model available to the research community free of charge. If you use this code in your research, we kindly ask that you reference the paper listed below:
Z. Wang, S. Athar, Z. Wang, “Blind Quality Assessment of Multiply Distorted Images Using Deep Neural Networks”, 16th International Conference on Image Analysis and Recognition, Waterloo, Ontario, Canada, August 27-29, 2019.
@InProceedings{Wang2019Blind,
author="Wang, Zhongling and Athar, Shahrukh and Wang, Zhou",
title="Blind Quality Assessment of Multiply Distorted Images Using Deep Neural Networks",
booktitle="International Conference on Image Analysis and Recognition",
year="2019"
}