Assessment of Computational Visual Attention Models on Medical Images

Varun Jampani 1, Ujjwal 1, Jayanthi Sivaswamy 1 and Vivek Vaidya 2
1 CVIT, IIIT Hyderabad, India
2 GE Global Research, Bangalore, India
Center for Visual Information Technology, International Institute of Information Technology, Hyderabad, India
Contact

OBJECTIVE: To investigate the performance of computational saliency models on medical images in the context of abnormality detection.

Visual Attention
– The process of selectively attending to an area of the visual field while ignoring the surrounding areas
– Influenced by image-dependent bottom-up features and task-dependent top-down features

Computational Visual Attention Models
– Visual attention models can be bottom-up or top-down; they produce saliency maps that predict which regions of an image are salient
– Bottom-up models used in the present study:
  – Itti-Koch model (IK) [Itti and Koch 2001]
  – Spectral residual model (SR) [Hou and Zhang 2007]
  – Graph-based visual saliency model (GBVS) [Harel et al. 2007]

Conclusion and Future Work
– Despite the importance of top-down knowledge, bottom-up knowledge plays a considerable role
– Including top-down knowledge of anatomical regions resulted in a marginal improvement in accuracy
– Saliency models can be used in the development of CAD tools for medical images
– Other types of top-down knowledge still need to be incorporated

Visual Search in Medical Images
– Feature Integration Theory [Treisman and Gelade 1980]
– The visual search paradigm involves identifying targets among surrounding distractors; a disjunctive target differs from the distractors in a single feature, whereas a conjunctive target differs only in a combination of features
– Visual attention plays an important role in finding abnormalities in medical images. [Matsumoto et al.
2011] showed that bottom-up mechanisms also play a significant role in guiding the eye movements of neurologists looking for stroke lesions on brain CT images
– Global-Focal model of visual search for tumor detection in chest x-rays [Nodine and Kundel 1987]: overall pattern recognition, then focal attention, then decision making
(Figure: eye fixations of an observer on a sample chest x-ray)

Study on Chest X-rays
– We evaluated the role of bottom-up saliency in chest x-rays of pneumoconiosis by comparing saliency maps against the eye fixations of observers of different expertise levels
– Eye movements were recorded with a remote, head-free eye tracker (SR Research EyeLink 1000); see [Jampani et al. 2011] for experimental details
– ROC analysis is done by using the saliency maps as classifiers and the eye fixations as ground truth
(Figures: sample x-ray segments showing different stages of pneumoconiosis, i.e. conjunctive targets; a sample chest x-ray and the corresponding saliency maps computed using the different saliency models)

Study on Retinal Images
– We evaluated the bottom-up saliency models by comparing saliency maps against ground truth markings by medical experts, using the publicly available DIARETDB1 dataset [Kauppi et al. 2007]
– ROC analysis is done to compare the saliency maps with the ground truth markings
(Figures: saliency maps extracted from two sample retinal images; four sample retinal images showing hard exudates, i.e. disjunctive targets; a sample retinal image and ground truth markings by four medical experts, taken from the DIARETDB1 dataset [Kauppi et al. 2007])

Results

Chest X-ray Study
– The GBVS model (Mdn AUC = 0.77) outperforms (Z = 0.0, p < .001) the SR model (Mdn AUC = 0.67) and the IK model (Mdn AUC = 0.67) in predicting the eye fixations of the observers
– An AUC of 0.77 suggests that the GBVS saliency model can predict the observers' fixations with reasonably good accuracy
– We extended the GBVS model (EGBVS) by giving more importance to the lung regions; AUCs for EGBVS (Mdn = 0.81) are significantly higher (Z = 2.0, p < .001) than those of GBVS (Mdn = 0.77)
(Figure: mean AUCs for the different saliency models over all observers in the chest x-ray study)

Retinal Image Study
– AUC values: 0.72 for IK, 0.70 for GBVS and 0.73 for SR; all three models perform roughly the same
– The AUC of the SR model rose to 0.74 with optic disk suppression
– We extended the SR model (ESR) by computing saliency maps at multiple scales and combining them; the ESR model (Mean AUC = 0.94) performed significantly better (a 28.76% increase) than the SR model (Mean AUC = 0.74)
(Figures: average ROC curves for the different saliency models in the retinal image study; the steps for extracting the EGBVS saliency map from a sample chest x-ray; the steps for deriving the ESR saliency maps; a sample input image with its Itti-Koch, GBVS and SR saliency maps)
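The spectral residual model (SR) used in the study has a particularly compact formulation: saliency is recovered from the difference between an image's log-amplitude spectrum and its local average, recombined with the original phase. A minimal NumPy sketch for a single-channel image (the original paper first resizes images to 64 pixels wide, which is omitted here; the box-blur smoothing at the end stands in for the paper's Gaussian filter):

```python
import numpy as np

def box3(x):
    """3x3 local average with circular boundary handling."""
    return sum(np.roll(np.roll(x, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

def spectral_residual_saliency(img):
    """Spectral residual saliency (Hou and Zhang 2007) for a 2-D grayscale array."""
    F = np.fft.fft2(img)
    log_amp = np.log(np.abs(F) + 1e-12)
    phase = np.angle(F)
    # spectral residual: log amplitude minus its 3x3 local average
    residual = log_amp - box3(log_amp)
    # back to the spatial domain; squared magnitude is the raw saliency map
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    # repeated box blurs approximate Gaussian smoothing for display
    for _ in range(3):
        sal = box3(sal)
    return sal / sal.max()
```

The map is normalized to [0, 1] so it can be thresholded directly in the ROC analysis described above.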
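The ROC analysis in both studies treats a saliency map as a soft classifier and the fixation map (or expert markings) as a binary ground-truth mask. A self-contained sketch of the AUC computation, assuming both classes are present in the mask (function and variable names are illustrative, not from the study's code):

```python
import numpy as np

def saliency_auc(sal, truth):
    """Area under the ROC curve of a saliency map against a binary ground-truth mask."""
    s, t = sal.ravel(), truth.ravel().astype(bool)
    thresholds = np.unique(s)[::-1]          # sweep from the highest saliency down
    tpr = np.array([(s[t] >= th).mean() for th in thresholds])
    fpr = np.array([(s[~t] >= th).mean() for th in thresholds])
    # anchor the curve at (0,0) and (1,1), then integrate with the trapezoidal rule
    fpr = np.concatenate(([0.0], fpr, [1.0]))
    tpr = np.concatenate(([0.0], tpr, [1.0]))
    return float(np.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2.0))
```

A map identical to the mask scores 1.0 and an inverted map scores 0.0, which bounds the AUC values (0.67–0.94) reported above.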
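The EGBVS extension injects top-down anatomical knowledge by giving more importance to the lung regions. The poster does not specify the weighting scheme, so the sketch below is only one plausible form: rescaling the bottom-up map with a binary region mask, with hypothetical weights w_in and w_out:

```python
import numpy as np

def region_weighted_saliency(sal, region_mask, w_in=1.0, w_out=0.3):
    """Rescale a bottom-up saliency map using a binary anatomical region mask.

    w_in and w_out are illustrative values, not the weights used in the study.
    """
    weights = np.where(region_mask, w_in, w_out)
    out = sal * weights
    return out / out.max()
```

With a lung-field segmentation as `region_mask`, saliency outside the lungs is attenuated, which is one way the reported gain (Mdn AUC 0.77 to 0.81) could be pursued.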
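The ESR extension computes saliency maps at multiple scales and combines them. A scale pyramid can be sketched generically over any single-scale saliency function; subsampling, block upsampling, and a plain average are assumptions here, since the poster does not detail the combination rule:

```python
import numpy as np

def multiscale_saliency(img, saliency_fn, scales=(1, 2, 4)):
    """Average saliency maps computed on subsampled copies of a 2-D image."""
    h, w = img.shape
    acc = np.zeros((h, w))
    for s in scales:
        sal = saliency_fn(img[::s, ::s])        # saliency at a coarser scale
        up = np.kron(sal, np.ones((s, s)))      # block-upsample back to full size
        acc += up[:h, :w]
    return acc / len(scales)
```

Any single-scale model, such as an SR implementation, can be passed as `saliency_fn`; the coarser scales let larger abnormal regions contribute to the combined map.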