What is Image Quality Assessment?


What is Image Quality Assessment? Image quality is a characteristic of an image that measures perceived image degradation. It plays an important role in many image processing applications. The goal of image quality assessment is to supply quality metrics that can predict perceived image quality automatically. There are two types of image quality assessment: objective quality assessment and subjective quality assessment.

Two Types of image quality assessment Objective (MSE, PSNR, SSIM) [1,2]: automated, low cost, but less accurate. Subjective: mean opinion score (MOS) [4]: involves human participants; expensive, but more accurate and ecologically valid.

Subjective Quality Measure The best way to judge the quality of an image is to look at it, because human eyes are the ultimate receivers. Subjective image quality assessment is concerned with how an image is perceived by a viewer, who gives his or her opinion on a particular image.

Subjective Quality Measure The mean opinion score (MOS) has been used for subjective quality assessment for many years. In standard subjective audio tests, listeners (both male and female, professional and non-professional) rate the audio quality heard over the communication medium being tested. For images, quality is rated by viewers (both male and female, professional and non-professional) on a display device or printed media. However, this approach is inconvenient, time consuming and expensive.

Example of mean opinion score (MOS) The MOS is generated by averaging the results of a set of standard subjective tests; it is an indicator of perceived image quality. A MOS of 1 denotes the worst image quality and 5 the best [4].

MOS   Quality     Impairment
5     Excellent   Imperceptible
4     Good        Perceptible but not annoying
3     Fair        Slightly annoying
2     Poor        Annoying
1     Bad         Very annoying
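The averaging step described above can be sketched in a few lines of Python. This is only an illustration: the function name `mean_opinion_score` and the eight viewer scores are hypothetical, and the 1–5 scale follows the table above.

```python
def mean_opinion_score(ratings):
    """Average a list of 1-5 subjective ratings into a MOS."""
    if not ratings:
        raise ValueError("need at least one rating")
    if any(not 1 <= r <= 5 for r in ratings):
        raise ValueError("ratings must lie on the 1-5 scale")
    return sum(ratings) / len(ratings)

# Eight hypothetical viewers rate the same test image:
print(mean_opinion_score([5, 4, 4, 3, 4, 5, 4, 3]))  # 4.0
```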

Mean Opinion Score (MOS) Subjective test

Objective Quality Measure

Objective Quality Measure Mathematical models that approximate the results of subjective quality assessment. The goal of objective evaluation is to develop quantitative measures that can predict perceived image quality.

Objective Quality Measure Objective quality measures play a variety of roles: monitoring and controlling image quality in quality-control systems; benchmarking image processing systems; optimizing algorithms and parameters; and helping home users manage their digital photos and evaluate their photographic expertise.

Objective evaluation Objective evaluation is classified into three types according to the availability of an original (reference) image with which the distorted image is compared: full reference (FR), no reference or blind (NR), and reduced reference (RR).

Full reference quality metrics MSE and PSNR: the most widely used image/video quality metrics over the last 20 years. SSIM: a newer metric (proposed in 2004) that shows better results than PSNR, with a reasonable increase in computational complexity. A great effort has been made to develop new objective quality measures for image/video that incorporate perceptual quality measures by considering the characteristics of the human visual system (HVS).

HVS – Human visual system Quality assessment (QA) algorithms predict visual quality by comparing a distorted signal against a reference, typically by modeling the human visual system. Objective image quality assessment is based on well-defined mathematical models that predict the perceived quality of a distorted image relative to a reference image. These methods consider human visual system (HVS) characteristics in an attempt to incorporate perceptual quality measures.

MSE – Mean square error MSE and PSNR are defined as:

MSE = (1 / (M·N)) Σi Σj ( x(i,j) − y(i,j) )²
PSNR = 10 · log10( L² / MSE )

where x is the original image, y is the distorted image, M and N are the width and height of the image, and L is the dynamic range of the pixel values. As the MSE decreases to zero, the pixel-by-pixel match between the images becomes perfect.
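The two formulas translate directly into NumPy. A minimal sketch: `mse` and `psnr` are illustrative names, and `L` defaults to 255 for 8-bit grayscale images.

```python
import numpy as np

def mse(x, y):
    """Pixel-wise mean squared error between two equal-size grayscale images."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    return float(np.mean((x - y) ** 2))

def psnr(x, y, L=255):
    """Peak signal-to-noise ratio in dB; L is the dynamic range of pixel values."""
    m = mse(x, y)
    if m == 0:
        return float("inf")  # identical images: the error is zero
    return 10 * np.log10(L ** 2 / m)

# A constant error of 10 at every pixel gives MSE = 100:
print(mse([[0, 0], [0, 0]], [[10, 10], [10, 10]]))   # 100.0
print(psnr([[0, 0], [0, 0]], [[10, 10], [10, 10]]))  # ~28.13 dB
```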

Same MSE but different qualities MSE for all images = 215

SSIM – Structural similarity index A recently proposed approach for image quality assessment: a metric for measuring the similarity between two images, with values lying in [0,1]. SSIM is designed to improve on traditional metrics such as PSNR and MSE, which have proved inconsistent with human visual perception. It is based on the human visual system.

SSIM measurement system

How to measure SSIM Assume we have two images x and y, with elements xi and yi, i = 1, 2, …, N. The aim is to determine how similar images x and y are. The mean and standard deviation of image x are defined as:

μx = (1/N) Σi xi
σx = [ (1/(N−1)) Σi (xi − μx)² ]^(1/2)

How to measure SSIM To compare the similarity of two images x and y, we compare their luminance l, contrast c, and structure s, and combine them in a weighted function:

S(x,y) = f( l(x,y), c(x,y), s(x,y) )

How to measure SSIM S(x,y) = f( l(x,y), c(x,y), s(x,y) ) We need to define the three functions l, c, s and the combining function f(·). The similarity measure S(x,y) should satisfy the following conditions:
1. Symmetry: S(x,y) = S(y,x)
2. Boundedness: S(x,y) ≤ 1
3. Unique maximum: S(x,y) = 1 if and only if x = y, that is, xi = yi for all i = 1, 2, …, N

How to measure SSIM The luminance comparison is defined as:

l(x,y) = (2·μx·μy + C1) / (μx² + μy² + C1)

The constant C1 = (K1·L)² avoids division by a very small (or zero) denominator, where L is the dynamic range of pixel values (255 for an 8-bit grayscale image) and K1 « 1 is a small constant.

How to measure SSIM The contrast comparison is defined as:

c(x,y) = (2·σx·σy + C2) / (σx² + σy² + C2)

The constant C2 = (K2·L)² avoids division by a very small (or zero) denominator, where L is the dynamic range of pixel values (255 for an 8-bit grayscale image) and K2 « 1 is a small constant.

How to measure SSIM The structure comparison is defined as:

s(x,y) = (σxy + C3) / (σx·σy + C3), with C3 = C2 / 2

The constant C3 avoids division by a very small (or zero) denominator. The three comparisons are combined into:

SSIM(x,y) = l(x,y)^α · c(x,y)^β · s(x,y)^γ

How to measure SSIM

SSIM(x,y) = l(x,y)^α · c(x,y)^β · s(x,y)^γ

where α > 0, β > 0 and γ > 0 are parameters used to adjust the relative importance of the three components. It is easy to verify that this definition of SSIM satisfies the three conditions:
1. Symmetry: S(x,y) = S(y,x)
2. Boundedness: S(x,y) ≤ 1
3. Unique maximum: S(x,y) = 1 if and only if x = y

How to measure SSIM To simplify the expression, set α = β = γ = 1 and C3 = C2/2, which gives:

SSIM(x,y) = (2·μx·μy + C1)(2·σxy + C2) / [ (μx² + μy² + C1)(σx² + σy² + C2) ]

SSIM ∈ [0,1]; values closer to 1 indicate higher similarity.
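The simplified formula can be evaluated over a whole image in a few lines of NumPy. This is only a single-window sketch: the reference implementation [1] applies the same formula inside an 11×11 Gaussian-weighted sliding window, and `ssim_global` is an illustrative name, not the authors' API.

```python
import numpy as np

def ssim_global(x, y, L=255, K1=0.01, K2=0.03):
    """Simplified SSIM (alpha = beta = gamma = 1) over the whole image."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    # Population variance/covariance for brevity (the paper uses the N-1 estimate):
    var_x, var_y = x.var(), y.var()
    cov_xy = float(np.mean((x - mu_x) * (y - mu_y)))
    return ((2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)) / \
           ((mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))
```

An identical pair of images scores exactly 1, and any distortion lowers the score.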

How to measure SSIM If we divide images x and y into blocks, compute the similarity of each pair of corresponding blocks, and average the per-block scores, we obtain a new measure called MSSIM:

MSSIM(x,y) = (1/M) Σi SSIM(xi, yi)

where M is the number of blocks and xi and yi are corresponding image blocks.
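The block-averaging step can be sketched as follows. Assumptions worth flagging: non-overlapping blocks are used for simplicity (the original paper instead slides an 11×11 window pixel by pixel), and `mssim` / `_ssim_patch` are illustrative names.

```python
import numpy as np

def _ssim_patch(x, y, C1, C2):
    """Simplified SSIM (alpha = beta = gamma = 1) of one pair of patches."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = np.mean((x - mu_x) * (y - mu_y))
    return ((2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)) / \
           ((mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))

def mssim(x, y, block=8, L=255, K1=0.01, K2=0.03):
    """Mean SSIM over non-overlapping block x block patches."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    scores = [
        _ssim_patch(x[i:i + block, j:j + block], y[i:i + block, j:j + block], C1, C2)
        for i in range(0, x.shape[0] - block + 1, block)
        for j in range(0, x.shape[1] - block + 1, block)
    ]
    return float(np.mean(scores))
```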

MSE vs. MSSIM Original image (MSE = 0, MSSIM = 1); distorted image (MSE = 225, MSSIM = 0.949).

MSE vs. MSSIM Original image and six distorted versions: salt & pepper noise, speckle noise, Gaussian noise, blurring, JPEG compression, and contrast stretching. Speckle noise causes artificial artifacts in digital images; it appears as bright spots with a typical intensity of about 40% of the maximum intensity.

MSE vs. MSSIM simulation result

Type of Distortion     MSE      MSSIM
Salt & Pepper Noise    228.34   0.7237
Speckle Noise          225.91   0.4992
Gaussian Noise         226.80   0.4489
Blurred                225.80   0.7136
JPEG Compressed        213.55   0.3732
Contrast Stretch       406.87   0.9100

MSE vs. MSSIM Gaussian noise: MSE = 226.80, MSSIM = 0.4489. Speckle noise: MSE = 225.91, MSSIM = 0.4992.

MSE vs. MSSIM JPEG compressed: MSE = 213.55, MSSIM = 0.3732. Blurred: MSE = 225.80, MSSIM = 0.7136.

MSE vs. MSSIM Gaussian noise: MSE = 226.80, MSSIM = 0.4489. Contrast stretch: MSE = 406.87, MSSIM = 0.9100.

Why MSE is poor? MSE and PSNR are widely used because they are simple, easy to calculate, and mathematically easy to handle for optimization purposes. However, there are a number of reasons why MSE or PSNR may not correlate well with human perception of quality.

Why MSE is poor? Digital pixel values, on which the MSE is typically computed, may not exactly represent the light stimulus entering the eye. Simple error summation, as implemented in the MSE formulation, may be markedly different from the way the HVS and the brain arrive at an assessment of the perceived distortion. Two distorted image signals with the same amount of error energy may have very different error structures, and hence different perceptual quality.
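The third point can be demonstrated numerically: a uniform brightness shift and a ±10 checkerboard noise pattern produce exactly the same MSE, yet the structure-preserving shift receives a much higher SSIM score. A sketch under stated assumptions: `ssim_global` re-implements the simplified single-window SSIM formula from the earlier slides, and the 8×8 gradient test image is hypothetical.

```python
import numpy as np

def ssim_global(x, y, L=255, K1=0.01, K2=0.03):
    """Simplified single-window SSIM (alpha = beta = gamma = 1)."""
    x, y = np.asarray(x, dtype=np.float64), np.asarray(y, dtype=np.float64)
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = np.mean((x - mu_x) * (y - mu_y))
    return ((2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)) / \
           ((mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))

x = np.arange(64, dtype=np.float64).reshape(8, 8)  # reference: smooth gradient
shift = x + 10                                      # uniform brightness shift
checker = x + 10.0 * (-1.0) ** np.add.outer(np.arange(8), np.arange(8))  # +/-10 noise

# Identical error energy ...
assert np.mean((x - shift) ** 2) == np.mean((x - checker) ** 2) == 100.0
# ... but very different structural similarity:
print(ssim_global(x, shift), ssim_global(x, checker))
```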

Conclusion We consider the proposed SSIM indexing approach as a particular implementation of the philosophy of structural similarity, from an image formation point of view.

References
[1] Z. Wang and A. C. Bovik, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Processing, vol. 13, pp. 600–612, Apr. 2004. www.ece.uwaterloo.ca/~z70wang/publications/ssim.html
[2] Z. Wang and A. C. Bovik, Modern Image Quality Assessment, Morgan & Claypool Publishers, Jan. 2006.
[3] X. Shang, “Structural similarity based image quality assessment: pooling strategies and applications to image compression and digit recognition,” M.S. thesis, EE Department, The University of Texas at Arlington, Aug. 2006.
[4] ITU-R Rec. BT.500-11, Methodology for the Subjective Assessment of the Quality of Television Pictures, June 2002.