Presentation transcript:

Fast Extraction of Feature Salience Maps for Rapid Video Data Analysis
Nikos P. Pitsianis and Xiaobai Sun
Computer Science Department, Duke University
HPEC, September 15, 2010

Feature Salience Maps (FSMs)
- FSMs are used in:
  - separation or integration
  - automatic or assisted visual search tasks
  - target indication, object recognition, tracking
- Salient information in multiple feature dimensions:
  - color, edge orientation, shape, texture, motion
  - selective tuning, feedback, attentional or intentional guidance
- High volume and rate of video data, processed frame by frame
- Extraction involves many filtering steps at multiple spatial scales

Mundhenk, T. N. and Itti, L. Computational modeling and exploration of contour integration for visual saliency. Biological Cybernetics (2005).
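The filtering at multiple spatial scales is typically realized as center-surround differences between levels of an image pyramid, in the style of Itti-Koch saliency models. The following is a minimal CUDA sketch of one such across-scale difference for a single feature channel; the buffer names, the nearest-neighbor sampling of the coarse (surround) level, and the scale gap `delta` are illustrative assumptions rather than the talk's exact formulation.

```cuda
#include <cuda_runtime.h>

// One across-scale center-surround difference for a single feature channel
// (e.g., intensity). `center` is a fine pyramid level of size W x H;
// `surround` is a coarser level, `delta` octaves down, sampled here with a
// simple nearest-neighbor lookup.
__global__ void centerSurround(const float* center,   // W x H fine level
                               const float* surround, // (W>>delta) x (H>>delta)
                               float* fsm,            // W x H feature contrast map
                               int W, int H, int delta)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= W || y >= H) return;

    int ws = W >> delta;                         // surround-level width
    int xs = min(x >> delta, ws - 1);            // nearest surround sample
    int ys = min(y >> delta, (H >> delta) - 1);

    // Feature contrast: absolute difference between fine and coarse responses.
    fsm[y * W + x] = fabsf(center[y * W + x] - surround[ys * ws + xs]);
}

// Example launch for one scale pair; all pointers are device buffers already
// produced by the per-channel filtering stage.
void launchCenterSurround(const float* d_center, const float* d_surround,
                          float* d_fsm, int W, int H, int delta)
{
    dim3 block(16, 16);
    dim3 grid((W + block.x - 1) / block.x, (H + block.y - 1) / block.y);
    centerSurround<<<grid, block>>>(d_center, d_surround, d_fsm, W, H, delta);
}
```

A host loop would launch this for each center/surround scale pair and accumulate the results into the channel's salience map before across-channel combination.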

Processing of Feature Maps on GPU
- Direct domain
  - Filter-centric
  - Image-centric
- Fourier domain
  - Based on the convolution theorem
- Using CUDA SDK 3.1
  - NVIDIA Tesla C GPU, 1.3 GHz, 4 GB of GDDR3
  - CUFFT CUDA FFT library
  - Asynchronous I/O and streaming
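On the Fourier-domain path, each filtering step follows the convolution theorem: forward FFT of the frame, pointwise multiplication with the precomputed filter spectrum, and inverse FFT. Below is a minimal sketch of that path with CUFFT on a CUDA stream, so frame transfers can overlap computation as noted on the slide. The function and buffer names and the per-call plan creation are illustrative assumptions; a real pipeline would create the plans once and reuse them across frames.

```cuda
#include <cufft.h>
#include <cuda_runtime.h>

// Pointwise complex multiply of the frame spectrum by the filter spectrum,
// scaled by 1/(W*H) to compensate for CUFFT's unnormalized transforms.
__global__ void pointwiseMul(cufftComplex* img, const cufftComplex* filt,
                             int n, float scale)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        cufftComplex a = img[i], b = filt[i];
        img[i].x = (a.x * b.x - a.y * b.y) * scale;
        img[i].y = (a.x * b.y + a.y * b.x) * scale;
    }
}

// Filter one W x H real frame in the Fourier domain.
// d_frame  : real frame, zero-padded to the filter support (in/out)
// d_frameF : workspace for the frame's half-spectrum, H*(W/2+1) elements
// d_filtF  : precomputed FFT of the zero-padded filter kernel
void filterFrame(cufftReal* d_frame, cufftComplex* d_frameF,
                 const cufftComplex* d_filtF, int W, int H,
                 cudaStream_t stream)
{
    cufftHandle fwd, inv;
    cufftPlan2d(&fwd, H, W, CUFFT_R2C);   // forward real-to-complex plan
    cufftPlan2d(&inv, H, W, CUFFT_C2R);   // inverse complex-to-real plan
    cufftSetStream(fwd, stream);          // run FFTs on the caller's stream
    cufftSetStream(inv, stream);

    int nSpec = H * (W / 2 + 1);          // half-spectrum size for R2C
    cufftExecR2C(fwd, d_frame, d_frameF);
    pointwiseMul<<<(nSpec + 255) / 256, 256, 0, stream>>>(
        d_frameF, d_filtF, nSpec, 1.0f / (float)(W * H));
    cufftExecC2R(inv, d_frameF, d_frame); // filtered frame back in d_frame

    cufftDestroy(fwd);
    cufftDestroy(inv);
}
```

With the plans bound to the same stream as the asynchronous frame uploads, consecutive frames can be transferred and filtered concurrently.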

Discussion
- The extraction and use of salient information from static or dynamic images are recent and active research topics.
- Computation based on an extraction model serves two purposes:
  - Test and validate the underlying neurobiological model of certain visual functions in the primate visual system.
  - Exploit the new understanding and model(s) for developing and improving artificial vision systems.
- Challenges:
  - Generation of motion features, which are much more computation intensive.
  - Visual tasks at higher levels: segmentation, object recognition, tracking of moving targets; data representation at higher levels is sparse and irregular, but still structured.
  - Efficiency of high-level processing steps on GPUs.
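On the motion-feature challenge: the cheapest motion channel is a per-pixel temporal difference between successive frames; the cost referred to above comes from the more faithful variants (oriented spatiotemporal filtering, optical flow) applied at every scale. As a point of reference only, a sketch of the cheapest variant, with illustrative names:

```cuda
#include <cuda_runtime.h>

// Simplest possible motion feature: per-pixel temporal contrast between the
// current and previous intensity frames. Real motion channels (spatiotemporal
// filters, optical flow) are far more computation intensive.
__global__ void motionFeature(const float* curr, const float* prev,
                              float* motion, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        motion[i] = fabsf(curr[i] - prev[i]);
}
```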