

Impact of Fixation Time on Subjective Video Quality Metric: a New Proposal for Lossy Compression Impairment Assessment
Maria Grazia Albanesi, Riccardo Amadeo
University of Pavia, Faculty of Engineering, Computer Department
ICMVIPPA 2011: International Conference on Machine Vision, Image Processing, and Pattern Analysis
Venezia (Mestre), November 28, 2011

Outline
– The addressed problem: subjective video quality assessment for lossy compression impairment
– The tools and the experiments: eye tracking and subjective experiments
– The goals: comparison to the literature
– The results and their interpretation
– A possible application: a new protocol for no-reference video quality assessment
– Future developments

The problem
How can we measure the loss of quality due to compression?
Fields of application: TV, video services on the Internet, video for mobile applications, testing of emerging compression algorithms…
Evaluation of the quality of the multimedia user experience.
Two approaches: objective and subjective metrics.
Our goal: find objective parameters, derived from subjective experiments, which reflect the subjective video quality as perceived by a human observer.

The tools: eye tracker and subjective QA experiments
Eye tracker: it records the point and the duration of each fixation of the eye while the observer looks at a monitor. The data are subsequently analyzed from a statistical point of view (mean, standard deviation, …).
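The per-session statistics mentioned above can be sketched in a few lines. The log format (x, y, duration) and the sample values below are hypothetical, not taken from the experiment:

```python
from statistics import mean, stdev

def fixation_stats(fixations):
    """Summarize a list of (x, y, duration_ms) fixation records,
    as produced by a typical eye-tracker log."""
    durations = [d for (_x, _y, d) in fixations]
    return {
        "count": len(durations),
        "mean_ms": mean(durations),
        "std_ms": stdev(durations) if len(durations) > 1 else 0.0,
    }

# Hypothetical fixation log: (x, y, duration in ms)
log = [(120, 80, 210), (300, 150, 340), (310, 160, 180), (90, 200, 420)]
stats = fixation_stats(log)
```

The same summary is computed once per observer and per video, and the per-condition aggregates are then compared.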

The set of videos
A set of 19 videos downloaded from publicly available online libraries:
– (Video Trace Library of Arizona State University)
– ftp://ftp.tnt.uni-hannover.de/pub/svc/testsequences/ (Leibniz University Hannover video library)
–
The original files, YUV sequences (4:2:0, CIF resolution, 352x288, 30 fps), were converted to AVI and compressed with H.264 at two bitrates: 450 kbit/s and 150 kbit/s.
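A conversion step like this one could be driven, for example, with ffmpeg. The sketch below only builds the command line; the file names are hypothetical and the flags are the standard ffmpeg ones, not the exact encoder settings used by the authors:

```python
def h264_encode_cmd(yuv_in, avi_out, bitrate_kbps):
    """Build an ffmpeg command that re-encodes a raw CIF YUV 4:2:0
    sequence (352x288, 30 fps) to H.264 in an AVI container."""
    return [
        "ffmpeg",
        "-f", "rawvideo", "-pix_fmt", "yuv420p",  # raw planar input
        "-s", "352x288", "-r", "30",              # CIF geometry and frame rate
        "-i", yuv_in,
        "-c:v", "libx264", "-b:v", f"{bitrate_kbps}k",
        avi_out,
    ]

cmd = h264_encode_cmd("foreman_cif.yuv", "foreman_br450.avi", 450)
# Run with: subprocess.run(cmd, check=True)
```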

Examples

Visual behavior and impairment
Visual path for the original («best») video vs. visual path for the compressed (Br150) video

Methodology
Protocol: ACR5-HR (Absolute Category Rating with Hidden Reference)
– MOS scale with five levels
– Only one observation for each video
– The observer has no information about the unimpaired version of the video.
The subjects: 8 females and 10 males, aged from 22 to 27 years.
– Their vision was normal or corrected-to-normal.
– They had no experience in subjective video quality assessment.
– They had normal or good experience in using IT interfaces to watch videos, both online and offline.
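On the five-level ACR scale, the MOS of a clip is simply the mean of the panel's ratings. A minimal sketch, with a hypothetical panel of 18 observers:

```python
from statistics import mean

def mos(ratings):
    """Mean Opinion Score on the ACR five-point scale
    (5 = excellent ... 1 = bad)."""
    if not all(1 <= r <= 5 for r in ratings):
        raise ValueError("ACR ratings must lie in 1..5")
    return mean(ratings)

# Hypothetical ratings from a panel of 18 observers for one clip
panel = [4, 5, 4, 3, 4, 4, 5, 3, 4, 4, 4, 3, 5, 4, 4, 3, 4, 4]
score = mos(panel)
```

In the hidden-reference variant, the MOS of the unimpaired clip is collected the same way, without telling the observer which clip is the reference.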

Novelties and comparison to the literature
Our parameters are not related to the fixation points, but to the duration of the fixations.
Videos are classified according to color content relevance and movement relevance to create semantic filters.
Parameters:
– Duration of fixation time
– MOS, five-point scale
– Subjective Color Score (SCS), three-point scale
– Subjective Movement Score (SMS), three-point scale
– Removal of the «memory effect» due to the conditioning of ocular motion activity by the visual attention of preceding scenes.
Starting point: O. Le Meur, A. Ninassi, P. Le Callet, D. Barba, "Overt visual attention for free-viewing and quality assessment tasks: Impact of the regions of interest on a video quality metric", Signal Processing: Image Communication, 2010, vol. 25, pp.

Playlists to remove the memory effect
18 testers, 6 for each playlist. Each video has three versions: reference, Br450, Br150 (57 videos in total). Each observer looks at only one version of each video; no repetitions are allowed in any playlist. Number of videos used: 19.

N.   ID             Playlist A   Playlist B   Playlist C
1    Foreman        ref          Br150        Br450
2    Silent         Br150        ref          Br450
3    Flower         Br450        Br150        ref
4    Bus            Br450        ref          Br150
5    Tempete        ref          Br450        Br150
6    Bridge_close   ref          Br150        Br450
7    Ice            Br150        ref          Br450
8    Coastguard     Br450        ref          Br150
9    Mother_daug    Br150        Br450        ref
10   Football       ref          Br150        Br450
11   Crew           Br450        ref          Br150
12   Paris          Br450        Br150        ref
13   Container      ref          Br450        Br150
14   Highway        Br150        Br450        ref
15   Waterfall      Br150        ref          Br450
16   Hall           Br450        Br150        ref
17   Stefan         ref          Br450        Br150
18   News           ref          Br150        Br450
19   Mobile         Br150        Br450        ref
tot  ref            7            6            6
     Br450          6            6            7
     Br150          6            7            6
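The property that makes the design memory-effect free — each video appears in every playlist in exactly one version, and its three versions are spread over the three playlists — can be checked mechanically. The sketch below uses the first four rows of the table above:

```python
# A few rows of the playlist design: video -> (version in A, B, C)
design = {
    "Foreman": ("ref",   "Br150", "Br450"),
    "Silent":  ("Br150", "ref",   "Br450"),
    "Flower":  ("Br450", "Br150", "ref"),
    "Bus":     ("Br450", "ref",   "Br150"),
}

def memory_effect_free(design):
    """Every row must be a permutation of the three versions,
    so no observer ever sees two versions of the same content."""
    return all(sorted(row) == ["Br150", "Br450", "ref"]
               for row in design.values())

ok = memory_effect_free(design)
```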

Mean Opinion Score
The MOS really reflects the progressive loss of quality due to compression.

SCS and SMS
Color and movement are considered relevant if the score is > 2.
«Highly animated» videos: 2, 4, 7, 8, 10, 12, 14, 17, 19
«Highly coloured» videos: 3, 5, 7, 10, 15, 19

N.   ID                SCS    SMS
1    Foreman           1.67   1.50
2    Silent            1.28   2.28
3    Flower            2.33   1.61
4    Bus               1.67   2.33
5    Tempete           2.50   1.89
6    Bridge_close      1.44   1.50
7    Ice               2.11   2.33
8    Coastguard        1.72   2.33
9    Mother_daughter   1.78   1.28
10   Football          2.39   2.78
11   Crew              1.83   1.94
12   Paris             1.94   2.11
13   Container         1.83   1.67
14   Highway           1.39   2.56
15   Waterfall         2.72   1.72
16   Hall              1.72
17   Stefan            1.67   2.50
18   News              1.78   1.83
19   Mobile            2.56   2.22
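The semantic filtering amounts to thresholding SCS and SMS at 2. A sketch over a subset of rows of the table above:

```python
# (SCS, SMS) scores for a few videos, from the table above
scores = {
    "Foreman":   (1.67, 1.50),
    "Flower":    (2.33, 1.61),
    "Football":  (2.39, 2.78),
    "Waterfall": (2.72, 1.72),
}

def semantic_filter(scores, threshold=2.0):
    """Split videos into «highly coloured» (SCS > 2) and
    «highly animated» (SMS > 2) classes."""
    coloured = {v for v, (scs, _) in scores.items() if scs > threshold}
    animated = {v for v, (_, sms) in scores.items() if sms > threshold}
    return coloured, animated

coloured, animated = semantic_filter(scores)
```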

Analysis of mean fixation time (MFT)
The mean fixation time does not seem to be related to the video quality!

MFT, semantic filtering
Even when filtering by movement or colour, there is no clear relation between MFT and MOS.

Analysis of the standard deviation of FT
Even the standard deviation of the fixation time does not seem to be related to the video quality!

The solution: third-order statistics!

                                      Ref       Br450     Br150
Average SDoFT                         149.…     …         ….8378
Standard deviation of SDoFT           60.6719   88.2987   90.7023

                                      Ref       Br450     Br150
Average SDoFT, SMS>2                  155.…     …         ….5709
Standard deviation of SDoFT, SMS>2    69.7361   85.…      ….3062

                                      Ref       Br450     Br150
Average SDoFT, SCS>2                  145.…     …         ….9579
Standard deviation of SDoFT, SCS>2    69.…      ….8213    87.1634

The standard deviation of the fixation time is, on average, lower for videos of high quality. The semantic filtering shows that this behaviour is even more pronounced for highly animated videos.
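The statistic behind these tables can be sketched as follows: compute the standard deviation of fixation time (SDoFT) per viewing session, then aggregate over all sessions of one condition. The fixation durations below are invented for illustration, chosen so that the reference sessions have more regular fixation times than the Br150 ones:

```python
from statistics import mean, stdev

def sdoft(durations):
    """Standard deviation of fixation time for one viewing session."""
    return stdev(durations)

def condition_summary(sessions):
    """Average and spread of SDoFT over all sessions of one
    condition (ref, Br450 or Br150)."""
    sds = [sdoft(s) for s in sessions]
    return mean(sds), stdev(sds)

# Hypothetical sessions (fixation durations in ms)
ref_sessions   = [[200, 220, 210, 230], [190, 210, 205, 215]]
br150_sessions = [[120, 380, 200, 450], [90, 400, 150, 520]]

ref_mean, _ = condition_summary(ref_sessions)
br150_mean, _ = condition_summary(br150_sessions)
# Lower average SDoFT -> higher perceived quality in the proposed ranking.
```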

Conclusions and future research
The duration of the fixation time seems to have a more predictable behavior when the observer watches a high-quality video.
If we compute the third-order statistics on the fixation time, we can derive a ranking of a collection of videos which reflects the perceived visual quality.
The experiments confirm this behavior for degradation due to lossy compression: the ranking according to the third-order statistic reflects the loss of quality and the subjective MOS, especially for highly animated videos.
Future research:
– Tests on a greater number of quality impairment levels
– Tests on other kinds of quality impairments
– Finding a more efficient semantic filtering based on color or other criteria