Computer Science Department Pacific University Artificial Intelligence -- Computer Vision

Why Computer Vision?
Vision, communication, & action

Why study Computer Vision?
- Images and video are everywhere
- Fast-growing collection of useful applications
  - matching and modifying images from digital cameras
  - film special effects and post-processing
  - building representations of the 3D world from pictures
  - medical imaging, household robots, security, traffic control, cell phone location, face finding, video game interfaces, ...
- Various deep and attractive scientific mysteries
  - What can we know from an image?
  - How does object recognition work?
- Greater understanding of human vision and the brain
  - About 25% of the human brain is devoted to vision

Why is Vision Interesting?
- Psychology
  - Roughly 50% of the cerebral cortex is involved in vision.
  - Vision is how we experience the world.
- Engineering
  - We want machines to interact with the world.
  - Digital images are everywhere.

Vision is inferential: Illumination

Applications of Computer Vision: Texture generation
[Figure: an input image, the repeated pattern extracted from it, and a new texture generated from the input compared with simple repetition]

Application: Football first-down line
Requires (1) accurate camera registration and (2) a model for distinguishing foreground from background.

Application: Augmented Reality
Application areas:
- Film production (the "match move" problem)
- Heads-up displays for cars
- Tourism
- Architecture
- Training
Technical challenges:
- Recognition of the scene
- Accurate sub-pixel 3D pose
- Real-time, low latency

Application: Medical Augmented Reality
Visually guided surgery: recognition and registration.

Application: Automobile navigation
- Lane departure warning
- Pedestrian detection
- Mobileye (see mobileye.com)
- Other applications: intelligent cruise control, lane change assist, collision mitigation
- Systems already used in trucks and high-end cars

Tracking (Comaniciu and Meer)

Understanding Action

Tracking and Understanding

Tracking

Part I: The Physics of Imaging
- How images are formed
- Cameras
  - What a camera does
  - How to tell where the camera was (pose)
- Light
  - How to measure light
  - What light does at surfaces
  - How the brightness values we see in cameras are determined
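To make the camera model above concrete, here is a minimal sketch (not from the slides) of pinhole projection: a 3D point in camera coordinates is mapped to pixel coordinates by an assumed intrinsic matrix K followed by perspective division.

```python
import numpy as np

# Assumed intrinsics: focal length 800 px, principal point at (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(point_3d, K):
    """Project a 3D point (camera coordinates, metres) to pixel coordinates
    with the pinhole model: multiply by K, then divide by depth Z."""
    x = K @ np.asarray(point_3d, dtype=float)
    return x[:2] / x[2]          # perspective division

print(project([0.1, -0.05, 2.0], K))   # -> approximately (360, 220)
```

Recovering the camera pose asked about on the slide is the inverse problem: given several such 2D-3D correspondences, solve for the rotation and translation that best explain them.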

Part II: Early Vision in One Image
- Representing local properties of the image, for three reasons:
  - Sharp changes are important in practice -- find "edges"
  - We wish to establish correspondence between points in different images, so we need to describe the neighborhood of each point
  - Representing texture by giving statistics of the different kinds of small patch present in the texture
    - Tigers have lots of bars and few spots
    - Leopards are the other way around
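As an illustration of the first point, finding sharp changes, here is a minimal edge-detection sketch (an example under assumptions, not part of the slides) that filters an image with Sobel derivatives and thresholds the gradient magnitude; the file name is a placeholder.

```python
import numpy as np
from scipy import ndimage
from imageio.v3 import imread    # any image loader would do

# "frame.png" is a placeholder for a test image.
img = imread("frame.png").astype(float)
if img.ndim == 3:
    img = img.mean(axis=2)       # collapse RGB to grayscale

gx = ndimage.sobel(img, axis=1)  # horizontal intensity changes
gy = ndimage.sobel(img, axis=0)  # vertical intensity changes
magnitude = np.hypot(gx, gy)     # gradient magnitude per pixel

edges = magnitude > 4 * magnitude.mean()   # crude threshold; tune per image
print(f"{edges.sum()} edge pixels out of {edges.size}")
```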

Part III: Vision in Multiple Images
- The geometry of multiple views
  - Given a point's location in camera 1, where could it appear in camera 2 (3, etc.)?
- Stereopsis
  - What we can know about the world from having two eyes
- Structure from motion
  - What we can know about the world from having many eyes, or, more commonly, from our eyes moving
- Correspondence
  - Which points in the images are projections of the same 3D point?
  - Solve for the positions of all cameras and points.
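For the stereopsis item, the core relation is that depth is inversely proportional to disparity. A minimal sketch, assuming a rectified stereo pair with made-up focal length and baseline:

```python
def depth_from_disparity(disparity_px, focal_px=800.0, baseline_m=0.12):
    """Depth (metres) of a point seen by a rectified stereo pair.

    disparity_px: horizontal shift of the same point between the left and
                  right images, in pixels (larger disparity = closer point).
    focal_px, baseline_m: assumed camera parameters, not from the slides.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return focal_px * baseline_m / disparity_px

# A point that shifts by 16 pixels between the views lies 6 m away
# under these assumed parameters.
print(depth_from_disparity(16.0))   # -> 6.0
```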

Part IV: High Level Vision
- Model-based vision
  - Find the position and orientation of known objects
- Using classifiers and probability to recognize objects
- Templates and classifiers
  - How to find objects that look the same from view to view with a classifier
- Relations
  - Break objects into big, simple parts, find the parts with a classifier, then reason about the relationships between the parts to find the object
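As a minimal illustration of the "templates and classifiers" idea (a sketch under assumptions, not the course's actual method), the snippet below slides a template over an image and scores each position with normalized cross-correlation; the best-scoring position is the detection.

```python
import numpy as np

def match_template(image, template):
    """Return (row, col) and score of the best normalized cross-correlation
    match of `template` inside `image` (both 2D float arrays)."""
    th, tw = template.shape
    t = template - template.mean()
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            p = image[r:r + th, c:c + tw]
            p = p - p.mean()
            denom = np.sqrt((p * p).sum() * (t * t).sum())
            if denom == 0:
                continue
            score = (p * t).sum() / denom    # correlation in [-1, 1]
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Tiny synthetic check: cut the template out of the image and recover it.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
tpl = img[5:13, 7:15].copy()
print(match_template(img, tpl))   # -> ((5, 7), ~1.0)
```

A raw template like this only finds objects that look the same from view to view, which is exactly why the later slides turn to invariant local features.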

Object and Scene Recognition
- Definition: Identify objects or scenes and determine their pose and model parameters
- Applications
  - Industrial automation and inspection
  - Mobile robots, toys, user interfaces
  - Location recognition
  - Digital camera panoramas
  - 3D scene modeling

Invariant Local Features
- Image content is transformed into local feature coordinates that are invariant to translation, rotation, scale, and other imaging parameters
- Example: SIFT features
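A minimal sketch of detecting and matching SIFT features with OpenCV (an illustration, not the slide author's code): OpenCV 4.4+ is assumed so that SIFT_create is available in the main module, and the image paths are placeholders.

```python
import cv2

# Placeholder paths; any two overlapping views of the same scene will do.
img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)   # keypoints + 128-D descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

# Match each descriptor to its two nearest neighbours and keep only the
# distinctive ones that pass Lowe's ratio test.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

print(f"{len(good)} reliable SIFT matches between the two views")
```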

Examples of view interpolation

Recognition using View Interpolation