2015-10-27 EIE426-AICV 1 Computer Vision Filename: eie426-computer-vision-0809.ppt

EIE426-AICV 2 Contents: Perception generally; Image formation; Color vision; Edge detection; Image segmentation; Visual attention; 2D → 3D; Object recognition

Perception generally Stimulus (percept) S, world W: S = g(W), e.g., g = "graphics". Can we do vision as inverse graphics, W = g⁻¹(S)? Problem: massive ambiguity (missing depth information)! EIE426-AICV 3

Better approaches Bayesian inference of world configurations: P(W|S) = P(S|W) × P(W) / P(S) = α × P(S|W) × P(W), where P(S|W) is the "graphics" model and P(W) is prior knowledge. Better still: there is no need to recover the exact scene! Just extract the information needed for navigation, manipulation, and recognition/identification. EIE426-AICV 4
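The posterior computation above can be sketched for a toy discrete case; all probability values below are made-up illustrative numbers, not from the slides:

```python
import numpy as np

# Hypothetical discrete example: three candidate world configurations W
# (e.g., possible object depths) and one observed stimulus S.
prior = np.array([0.5, 0.3, 0.2])        # P(W): prior knowledge
likelihood = np.array([0.1, 0.6, 0.3])   # P(S|W): the "graphics" model

unnormalized = likelihood * prior              # P(S|W) * P(W)
posterior = unnormalized / unnormalized.sum()  # alpha * P(S|W) * P(W)

print(posterior)           # posterior P(W|S)
print(posterior.argmax())  # most probable world configuration
```

Normalizing by the sum plays the role of α = 1/P(S), so the posterior need not evaluate P(S) explicitly.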

Vision “subsystems” EIE426-AICV 5 Vision requires combining multiple cues

Image formation P is a point in the scene, with coordinates (X, Y, Z). P' is its image on the image plane, with coordinates (x, y, z). x = -fX/Z, y = -fY/Z (by similar triangles). Scale/distance is indeterminate! EIE426-AICV 6
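The projection equations can be sketched as below; the `project` helper and the focal-length value are illustrative assumptions, not from the slides:

```python
# A minimal sketch of perspective projection x = -fX/Z, y = -fY/Z
# (pinhole model, by similar triangles). Values are illustrative only.

def project(point, f=0.05):
    """Project scene point (X, Y, Z) onto the image plane of a
    pinhole camera with focal length f."""
    X, Y, Z = point
    return (-f * X / Z, -f * Y / Z)

# Two points on the same ray through the pinhole project to the same
# image location: scale/distance is indeterminate.
p1 = project((1.0, 2.0, 4.0))
p2 = project((2.0, 4.0, 8.0))   # same ray, twice as far
print(p1, p2)                   # identical image coordinates
```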

Lens systems f: the focal length of the lens

Images EIE426-AICV 8

Images (cont.) I(x, y, t) is the intensity at (x, y) at time t. CCD camera: 4,000,000 pixels; human eyes: 240,000,000 pixels. EIE426-AICV 9

Color vision Intensity varies with frequency → an infinite-dimensional signal. The human eye has three types of color-sensitive cells; each integrates the signal → a 3-element intensity vector. EIE426-AICV 10

Color vision (cont.) EIE426-AICV 11

Edge detection Edges are straight lines or curves in the image plane across which there is a "significant" change in image brightness. The goal of edge detection is to abstract away from the messy, multi-megabyte image towards a more compact, abstract representation. EIE426-AICV 12

Edge detection (cont.) Edges in the image → discontinuities in the scene: 1) depth discontinuities; 2) surface orientation discontinuities; 3) reflectance (surface marking) discontinuities; 4) illumination discontinuities (shadows, etc.). EIE426-AICV 13

Edge detection (cont.) EIE426-AICV 14

Edge detection (cont.) Sobel operator. Other operators: Roberts (2×2), Prewitt (3×3), Isotropic (3×3). The origin of each mask is the image pixel to be processed.
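The Sobel operator can be sketched as follows. The two kernels are the standard Sobel masks; the `convolve3x3` helper and the step-edge test image are illustrative assumptions:

```python
import numpy as np

# Minimal sketch of the Sobel operator (gradient magnitude only).
Kx = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)   # horizontal gradient mask
Ky = Kx.T                                   # vertical gradient mask

def convolve3x3(img, k):
    """Valid-mode 3x3 sliding-window correlation with kernel k."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * k)
    return out

img = np.zeros((5, 6))
img[:, 3:] = 1.0                 # vertical step edge between columns 2 and 3
gx = convolve3x3(img, Kx)
gy = convolve3x3(img, Ky)
mag = np.hypot(gx, gy)           # gradient magnitude
print(mag)                       # strong response at the edge columns
```

A real pipeline (e.g., the steam-engine example on the next slide) would also smooth the image first and threshold the magnitude.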

Edge detection (cont.) EIE426-AICV 16 A color picture of a steam engine. The Sobel operator applied to that image.

Edge detection: application EIE426-AICV 17 An edge-extraction-based method to produce pen-and-ink-like drawings from photos.

EIE557-CI&IA 18 Leaf (vein pattern) characterization Edge detection: application 2

Image segmentation In computer vision, segmentation refers to the process of partitioning a digital image into multiple segments (sets of pixels). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics. EIE426-AICV 19

Image segmentation (cont.) EIE426-AICV 20

Image segmentation: the quadtree-partition-based split-and-merge algorithm (1) Split into four disjoint quadrants any region Ri for which P(Ri) = FALSE; (2) merge any adjacent regions Ri and Rk for which P(Ri ∪ Rk) = TRUE; and (3) stop when no further merging or splitting is possible. P(Ri) = TRUE if all pixels in Ri have the same intensity or are uniform in some measure. EIE426-AICV 21
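The split phase of the algorithm can be sketched as below, taking P(R) as "all pixels in R share one intensity"; the merge phase is omitted for brevity, and the 4×4 test image is made up:

```python
import numpy as np

def uniform(region):
    """P(R): TRUE if all pixels in the region share one intensity."""
    return region.min() == region.max()

def split(img, x=0, y=0, size=None, regions=None):
    """Recursively split a square image into uniform quadrants.
    Returns a list of (x, y, size) tuples for the final regions."""
    if size is None:
        size = img.shape[0]
        regions = []
    if size == 1 or uniform(img[y:y+size, x:x+size]):
        regions.append((x, y, size))
    else:
        half = size // 2
        for dy in (0, half):          # recurse into the four quadrants
            for dx in (0, half):
                split(img, x + dx, y + dy, half, regions)
    return regions

img = np.zeros((4, 4), dtype=int)
img[:2, :2] = 1                # one uniform bright quadrant
print(split(img))              # the bright quadrant stays whole
```

The merge step would then join adjacent regions whose union still satisfies P, which the quadtree alone cannot do across quadrant boundaries.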

Image segmentation: the quadtree partition based split-and-merge algorithm (cont.) EIE426-AICV 22

Visual attention Attention is the cognitive process of selectively concentrating on one aspect of the environment while ignoring others. The attention mechanism of the human vision system has been applied in machine vision systems to sample data nonuniformly and to use computational resources efficiently. EIE426-AICV 23

Visual attention (cont.) The visual attention mechanism may have at least the following basic components: (1) the selection of a region of interest in the visual field; (2) the selection of feature dimensions and values of interest; (3) the control of information flow through the network of neurons that constitutes the visual system; and (4) the shifting from one selected region to the next in time EIE426-AICV 24

Attention-driven object extraction EIE426-AICV 25 The more attentive an object/region is, the higher priority it has.

Attention-driven object extraction (cont.) EIE426-AICV 26 Objects 1, 2, …,background

Motion EIE426-AICV 27 The rate of apparent motion can tell us something about distance: a nearer object shows larger apparent motion. Object tracking.

Motion Estimation EIE426-AICV 28
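One simple way to estimate motion between two frames is block matching; the slide does not name a specific method, so this sketch, its frame sizes, and the SAD criterion are illustrative assumptions:

```python
import numpy as np

# Minimal block-matching sketch: for a block in frame 1, search frame 2
# for the displacement with the smallest sum of absolute differences
# (SAD). Frames are made-up arrays with a square that moves 2 px right.

def best_shift(f1, f2, y, x, size=4, search=3):
    block = f1[y:y+size, x:x+size]
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy <= f2.shape[0] - size and 0 <= xx <= f2.shape[1] - size:
                sad = np.abs(block - f2[yy:yy+size, xx:xx+size]).sum()
                if sad < best_sad:
                    best, best_sad = (dy, dx), sad
    return best

f1 = np.zeros((12, 12)); f1[4:8, 2:6] = 1.0
f2 = np.zeros((12, 12)); f2[4:8, 4:8] = 1.0   # moved 2 pixels right
print(best_shift(f1, f2, 4, 2))               # recovered displacement
```

The recovered (dy, dx) vector per block is exactly the motion field an object tracker would consume.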

Stereo EIE426-AICV 29 The nearest point of the pyramid is shifted to the left in the right image and to the right in the left image. Disparity (the x difference between the two images) → depth.

Disparity and depth EIE426-AICV 30

Disparity and depth (cont.) Depth is inversely proportional to disparity.
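This inverse relation, Z = fB/d for a rectified stereo pair, can be sketched numerically. The focal length and baseline below are illustrative assumptions; the disparities 73 and 60 come from the electronic-eyes example in this deck:

```python
# Minimal sketch of the depth-disparity relation Z = f * B / d:
# f is the focal length in pixels, B the baseline between the cameras.
# f and B values are made up for illustration.

def depth_from_disparity(d, f=700.0, baseline=0.1):
    """Depth from disparity d (pixels); Z is inversely proportional to d."""
    return f * baseline / d

near = depth_from_disparity(73)   # larger disparity -> nearer object
far = depth_from_disparity(60)    # smaller disparity -> farther object
print(near, far)
```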

Example: Electronic eyes for the blind EIE426-AICV 32

Example: Electronic eyes for the blind (cont.) EIE426-AICV 33 Figure: a nearer and a farther object captured by the left and right cameras; pixels are matched across the left and right captured images to calculate the disparities.

Example: Electronic eyes for the blind (cont.) EIE426-AICV 34 One object: left x = 549, right x = 476, ∆ = 73. Another object: left x = 333, right x = 273, ∆ = 60. The larger disparity corresponds to the nearer object.

Texture Texture: a spatially repeating pattern on a surface that can be sensed visually. Examples: the pattern of windows on a building, the stitches on a sweater, the spots on a leopard's skin, grass on a lawn, etc. EIE426-AICV 35

Edge and vertex types EIE426-AICV 36 "+" and "-" labels represent convex and concave edges, respectively. These are associated with surface-normal discontinuities wherein both surfaces that meet along the edge are visible. A "→" or a "←" represents an occluding convex edge: as one moves in the direction of the arrow, the (visible) surfaces are to the right. A "↠" or a "↞" represents a limb, where the surface curves smoothly around to occlude itself: as one moves in the direction of the twin arrow, the (visible) surface lies to the right.

Object recognition Simple idea: extract 3-D shapes from the image and match them against a "shape library". Problems: extracting curved surfaces from the image; representing the shape of the extracted object; representing the shape and variability of library object classes; improper segmentation, occlusion; unknown illumination, shadows, markings, noise, complexity, etc. Approaches: index into the library by measuring invariant properties of objects; align image features with projected library object features; match the image against multiple stored views (aspects) of a library object; machine learning methods based on image statistics. EIE426-AICV 37

Biometric identification Criminal investigations and access control for restricted facilities require the ability to identify unique individuals. EIE426-AICV 38

Content-based image retrieval The application of computer vision to the image retrieval problem, that is, the problem of searching for digital images in large databases. "Content-based" means that the search analyzes the actual contents of the image. The term "content" in this context might refer to colors, shapes, textures, or any other information that can be derived from the image itself. Without the ability to examine image content, searches must rely on metadata such as captions or keywords, which may be laborious or expensive to produce. EIE426-AICV 39
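One common content-based approach, sketched here as an assumption (the slide names no specific method), summarizes each image by a normalized intensity histogram and ranks matches by histogram intersection; the images are made-up grayscale arrays:

```python
import numpy as np

def histogram(img, bins=8):
    """Normalized intensity histogram: a simple content descriptor."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / h.sum()

def intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]."""
    return np.minimum(h1, h2).sum()

rng = np.random.default_rng(0)
database = {
    "dark": rng.integers(0, 64, (32, 32)),      # mostly dark pixels
    "bright": rng.integers(192, 256, (32, 32)), # mostly bright pixels
}
query = rng.integers(0, 64, (32, 32))           # a dark query image

scores = {name: intersection(histogram(query), histogram(img))
          for name, img in database.items()}
best = max(scores, key=scores.get)
print(best)   # the dark database image matches the dark query
```

A real system would use color, shape, and texture descriptors rather than a single grayscale histogram.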

Content-based image retrieval (cont.) EIE426-AICV 40

Handwritten digit recognition 3-nearest-neighbor: 2.4% error; MLP (a neural network approach): 1.6% error; LeNet: 0.9% error. EIE426-AICV 41

Summary Vision is hard: noise, ambiguity, complexity. Prior knowledge is essential to constrain the problem. Need to combine multiple cues: motion, contour, shading, texture, stereo. "Library" object representation: shape vs. aspects. Image/object matching: features, lines, regions, etc. EIE426-AICV 42