Sensing for Robotics & Control – Remote Sensors R. R. Lindeke, Ph.D.


Remote Sensing Systems:
- Radar – uses long-wavelength microwaves for point or field detection; supports speed and range analysis and trajectory analysis
- Sonar – uses high-energy, high-frequency sound waves to detect range or create images in "conductive media"
- Vision systems – operate in visible or near-visible light regimes; use structured light and high-contrast environments to control data-mapping problems

Parts of the Remote Sensors – field sensors:
- Source: a structured illumination system (high contrast)
- Receiver: a field detector – in machine vision, typically a CCD or CID
  - CCD (charge-coupled device): each detector (a phototransistor) stores charge on a capacitor that is regularly sampled/harvested through a field sweep using a "rastering" technique (at about 60 Hz)
  - CID (charge-injected device): each pixel can be randomly sampled at variable times and rates
- Image analyzers: examine the raw field image and apply enhancement and identification algorithms to the data

The vision system's issues:
- Blurring of moving objects – a result of the data capture rastering through the receiver's 2-D array; the sampling system lags the real world as it processes field by field, with parts of the information being harvested while adjacent pixels continue to change in time
- This limits speed of response, speed of objects, and system throughput rates
- Contrast enhancements are developed by examining extrema in the field of view
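The contrast-enhancement idea mentioned above can be sketched as a linear stretch of the field's observed extrema to the full intensity range. This is a minimal illustration, not the lecture's own code; the function name and sample values are mine.

```python
# Sketch of contrast enhancement from field-of-view extrema:
# rescale [min, max] of the field linearly onto [0, out_max].
def contrast_stretch(field, out_max=255):
    lo = min(min(row) for row in field)
    hi = max(max(row) for row in field)
    if hi == lo:                     # flat field: nothing to stretch
        return [[0 for _ in row] for row in field]
    return [[round((p - lo) * out_max / (hi - lo)) for p in row]
            for row in field]

field = [[50, 100], [150, 200]]      # a tiny hypothetical gray-level field
stretched = contrast_stretch(field)  # extrema 50..200 mapped to 0..255
```

After stretching, the darkest pixel becomes 0 and the brightest 255, maximizing contrast for the later thresholding step.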

Some additional issues:
- Beware of 'bloom' in the image: bloom occurs when a high-intensity pixel overflows into adjacent pixels, increasing or changing the size of an information set
- Also consider lensing and operational errors:
  - Vignetting – lenses transmit more effectively in the center than at their edges, leading to intensity variation across the field of the image even without changes in the image-field information itself
  - Blur – caused by lack of full-field focus
  - Distortion – parabolic and geometric changes due to lens-shape errors
  - Motion blur – moving images "smeared" over many pixels during capture (for a CCD system we typically sample 3 to 5 fields to build a stable image, limiting us to about 12 to 20 stable images/sec)

Data handling issues: A typical field camera (780×640 = 499,200 pixels/image) with 8-bit color means 3 separate 8-bit words (24-bit color) per pixel (32-bit color typically includes a saturation or brightness byte too).
Data in one field image as captured during each rastering sweep: 499,200 pixels × 3 bytes of color = 1,497,600 bytes/image.
In a minute: 1,497,600 bytes × 60 fr/s × 60 s/min ≈ 5,391 MBytes (raw – i.e., without compression or processing), or roughly 323 GBytes per hour of video information.
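The data-rate arithmetic above is easy to verify in a few lines (variable names are mine, using the slide's 780×640, 24-bit, 60 frames/s figures):

```python
pixels = 780 * 640                 # 499,200 pixels per field image
bytes_per_pixel = 3                # 3 separate 8-bit color words (24-bit color)
bytes_per_image = pixels * bytes_per_pixel       # bytes in one raw field image
bytes_per_minute = bytes_per_image * 60 * 60     # 60 frames/s for 60 s
gb_per_hour = bytes_per_minute * 60 / 1e9        # raw, uncompressed
```

The raw stream works out to about 1.5 MB per image and over 300 GB per hour, which motivates the data-reduction steps on the next slides.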

Helping with this 'data bloat':
- Do we really need color? If not, the data is reduced by a factor of 3.
- Do we really need "shades"? If not, the data set drops by a factor of 8 – but this requires 'thresholding' of the data field.
- Thresholding is used to construct 'bit maps': after sampling of test cases, we set a level of pixel intensity at or above which a pixel takes the value 1 ('on'), while below this level the pixel is 0 ('off'), regardless of image difficulties and material variations.
- Consideration is reduced to 1 bit rather than the 8 to 24 bits in the original field of view!
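Thresholding as described above amounts to one comparison per pixel. A minimal sketch (the function name and sample field are hypothetical):

```python
# Build a 'bit map': 1 ("on") where intensity meets the threshold level,
# 0 ("off") elsewhere -- 1 bit of consideration per pixel.
def threshold(field, level):
    return [[1 if p >= level else 0 for p in row] for row in field]

field = [[12, 200, 90], [180, 30, 250]]   # hypothetical gray levels
bitmap = threshold(field, 128)
```

The choice of `level` comes from sampling test cases, as the slide notes; a poor level either merges objects into the background or fragments them.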

Analyzing the images: Do we really need the entire field – or just the important parts? But this requires post-processing to analyze what is in the 'thresholded' image. Image processing is designed to build or "grow" field maps of the important parts of an image for identification purposes. These field maps must then be analyzed by applications that can make decisions using some form of intelligence as applied to the field data sets.

Image building:
- First we enhance the data array
- Then we digitize (threshold) the array
- Then we look for image edges – an edge is where the pixel value changes from 0 to 1 or 1 to 0!
[Figure: raw image before thresholding and image analysis]
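On a thresholded array, the edge definition above (a 0-to-1 or 1-to-0 change) can be implemented by comparing each pixel with its four neighbours. A sketch under that assumption, with hypothetical names:

```python
# Mark edge pixels in a bit map: an edge is any pixel with a
# horizontal or vertical neighbour of the opposite value.
def edges(bitmap):
    rows, cols = len(bitmap), len(bitmap[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < rows and 0 <= cc < cols
                        and bitmap[rr][cc] != bitmap[r][c]):
                    out[r][c] = 1    # value changes here: an edge
                    break
    return out

bm = [[0, 0, 0, 0],
      [0, 1, 1, 0],
      [0, 1, 1, 0],
      [0, 0, 0, 0]]
edge_map = edges(bm)
```

Every pixel on either side of the 0/1 boundary is flagged, tracing the object's outline.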

Working the array – hardware and software
[Figures: bottles for selection; the image field after reorganizing; the field image after thresholding]

After thresholding: The final image is a series of on and off pixels (the light and dark parts of the 2nd image seen on the previous slide). The image is then scanned to detect edges in the information. One popular method uses an algorithm, "GROW," that searches the data array (of 1's and 0's) to map out changes in pixel value.

Using GROW methods: We begin a directed scan – once a state-level change is discovered, we stop the directed scan and look around the changed pixel to see if it is just a bit error. If it is next to changed bits in all "new" directions, we start exploring for edges by stepping forward from the 1st bit and stepping back and forth across the change line as it "circumvents" the part. The algorithm is then said to grow shapes from full arrays – but without exhaustive enumeration!
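The slide names the algorithm but not its code, so the following is only a plausible sketch of the GROW idea: reject a changed pixel with no changed neighbours as a bit error, otherwise flood outward over the connected 1-pixels. All names and the sample array are mine.

```python
from collections import deque

def grow(bitmap, seed):
    # Sketch of region growing: check the seed is not an isolated
    # bit error, then expand over 4-connected 1-pixels.
    rows, cols = len(bitmap), len(bitmap[0])
    r0, c0 = seed
    nbrs = [(r0 + dr, c0 + dc) for dr, dc in ((0, 1), (1, 0), (0, -1), (-1, 0))
            if 0 <= r0 + dr < rows and 0 <= c0 + dc < cols]
    if not any(bitmap[r][c] == 1 for r, c in nbrs):
        return set()                     # lone changed pixel: treat as noise
    region, frontier = {seed}, deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in ((0, 1), (1, 0), (0, -1), (-1, 0)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < rows and 0 <= cc < cols
                    and bitmap[rr][cc] == 1 and (rr, cc) not in region):
                region.add((rr, cc))
                frontier.append((rr, cc))
    return region

bm = [[0, 0, 0, 0],
      [0, 1, 1, 0],
      [0, 1, 0, 1],    # the pixel at (2, 3) is an isolated bit error
      [0, 0, 0, 0]]
blob = grow(bm, (1, 1))      # grows the connected shape
noise = grow(bm, (2, 3))     # rejected: no changed neighbours
```

Only pixels actually on or near the shape are visited, which is the "no exhaustive enumeration" point the slide makes.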

So let's see if it works: [Figure: worked example of the GROW scan]

Once completed: The image must be compared to standard shapes. The image can be analyzed to find centers, sizes, or other shape information. After analysis is completed, objects can then be handled and/or sorted.
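Finding centers and sizes from a grown region is simple arithmetic over the member pixels. A minimal sketch (function name and sample region are mine):

```python
# Area (pixel count) and centroid of a grown region,
# represented as a set of (row, col) pairs.
def shape_stats(region):
    n = len(region)
    r_bar = sum(r for r, _ in region) / n
    c_bar = sum(c for _, c in region) / n
    return n, (r_bar, c_bar)

area, centre = shape_stats({(1, 1), (1, 2), (2, 1), (2, 2)})
```

Area and centroid are exactly the kind of shape information the sorting step that follows can compare against class standards.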

Sorting routines are based on conditional probabilities: P(W_i | x) is a measure of the probability that x is a member of class i (W_i), given knowledge of the probabilities that x belongs to the several other classes in the study (the W_j's).

Typically a Gaussian approximation is assumed: We perform the characteristic measurement (with the vision system), then compute the conditional probability that X fits each class j; the class with the highest value is accepted as the best fit (if the classes are each mutually exclusive).
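Under the Gaussian assumption, "pick the class with the highest conditional probability" reduces (for equal priors) to picking the class with the highest Gaussian likelihood. A sketch with hypothetical names; the means and sigma below anticipate the body-diagonal example later in the deck:

```python
import math

def gaussian_pdf(x, mean, sigma):
    # Normal density: the likelihood of measurement x under class (mean, sigma)
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def classify(x, classes):
    # classes: name -> (mean, sigma); return the most likely class
    likelihoods = {name: gaussian_pdf(x, m, s) for name, (m, s) in classes.items()}
    return max(likelihoods, key=likelihoods.get), likelihoods

best, _ = classify(3.681, {"A": (3.606, 0.024),
                           "B": (3.816, 0.024),
                           "C": (3.691, 0.024)})
```

With equal sigmas this is equivalent to choosing the class whose mean is nearest the measurement; unequal sigmas or priors would shift the decision boundaries.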

Let's examine the use of this technique:
Step 1: Present a "training set" to the camera system, including representative sizes and shapes of each type across its acceptable sizes.
Step 2: For each potential class, using its learned values, compute a mean and standard deviation.
Step 3: Present unknowns to the trained system and make measurements – compute the appropriate dimensions and the conditional probability for each potential class.
Step 4: Assign the unknown to the class having the highest conditional probability – if that value is above a threshold of acceptability.
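Step 2 above is just the sample mean and sample standard deviation of each class's training measurements. A sketch, with a hypothetical training set of body-diagonal readings:

```python
import math

def train_class(samples):
    # Mean and sample standard deviation of one class's training measurements
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, math.sqrt(var)

# Hypothetical measured diagonals for one class of parts
mean_a, std_a = train_class([3.60, 3.61, 3.59, 3.62])
```

These per-class (mean, sigma) pairs are exactly the parameters the Gaussian classification step consumes.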

Using vision to determine class & quality: A system to sort by "body diagonals" (BD) for a series of rectangular pieces:
A is 2±.01" × 3±.01"
B is 2±.01" × 3.25±.01"
C is 1.75±.01" × 3.25±.01"
Body diagonals with part dimensions at acceptable limits:
- A: √(1.99² + 2.99²) = 3.592" to √(2.01² + 3.01²) = 3.619" (mean is 3.606")
- B: √(1.99² + 3.24²) = 3.802" to √(2.01² + 3.26²) = 3.830" (mean is 3.816")
- C: √(1.74² + 3.24²) = 3.678" to √(1.76² + 3.26²) = 3.705" (mean is 3.691")
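The limit diagonals above follow directly from the Pythagorean theorem at each tolerance extreme. A quick check (names are mine):

```python
import math

def body_diagonal(w, h):
    # Diagonal of a w x h rectangle
    return math.hypot(w, h)

classes = {"A": (2.00, 3.00), "B": (2.00, 3.25), "C": (1.75, 3.25)}
tol = 0.01
diag = {}
for name, (w, h) in classes.items():
    diag[name] = (body_diagonal(w - tol, h - tol),   # lower limit
                  body_diagonal(w, h),               # nominal (mean)
                  body_diagonal(w + tol, h + tol))   # upper limit
```

Note that B and C overlap noticeably less with each other than C's band sits between A's and B's, which is what makes the later unknown ambiguous.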

Computing class variances: One can use range techniques: find the range of samples for a class, then use the statistic d2 to compute σ: σ_class_i = R_sample / d2. One can also estimate σ_class from sample standard deviations and the c4 statistic: σ_class_i = s_sample / c4. Values of c4 and d2 are available in any good engineering statistics text – see handout.
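The range method is one line of arithmetic. A sketch using the standard control-chart constant d2 = 1.128 for samples of size 2, applied to class A's body-diagonal limits from the previous slide:

```python
# Range-based sigma estimate: sigma = R / d2.
# d2 = 1.128 is the standard control-chart constant for n = 2;
# it changes with sample size (see any engineering statistics text).
D2_N2 = 1.128

def sigma_from_range(r_sample, d2=D2_N2):
    return r_sample / d2

sigma_a = sigma_from_range(3.619 - 3.592)   # class A body-diagonal range
```

All three classes have nearly the same range here, so a common sigma of about 0.024" is a reasonable working value.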

Computing variances: Here, using estimates from the ideal-value BD ranges, σ_class_i = R/d2. For a sample size of 2, d2 = 1.128, giving (for class A) σ ≈ (3.619 − 3.592)/1.128 ≈ 0.024"; d2 changes based on sample size.

Fitting an unknown: The unknown's body diagonal is measured at 3.681". We compute Z and the conditional probability for each class: Z_A = (3.681 − 3.606)/0.024 ≈ 3.13, Z_B = (3.681 − 3.816)/0.024 ≈ −5.63, Z_C = (3.681 − 3.691)/0.024 ≈ −0.42. From our analysis we would tentatively place the unknown in class C – but more likely we would place it in a hand-inspect bin!
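The Z computation for this unknown can be checked in a few lines, assuming the class means from the body-diagonal slide and the common σ ≈ 0.024" from the range estimate (both taken from this deck; variable names are mine):

```python
x = 3.681                       # measured body diagonal of the unknown part
means = {"A": 3.606, "B": 3.816, "C": 3.691}
sigma = 0.024                   # assumed common sigma from the range estimate

z = {name: (x - mean) / sigma for name, mean in means.items()}
best = min(z, key=lambda name: abs(z[name]))   # smallest |Z| = best fit
```

Class C is the tentative winner, but |Z_A| ≈ 3.1 means class A is only just implausible, which is why the slide suggests a hand-inspect bin.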