Presentation on theme: "Introduction to Machine Vision Systems"— Presentation transcript:
1 Introduction to Machine Vision Systems
Professor Nicola Ferrier, Room 3128, ECB
2 Machine Vision
To become familiar with technologies used for machine vision as a sensor for robots:
- Camera and lighting technology (obtaining a digital representation of an image)
- Software (computational techniques to process or modify the image data)
- Analysis/decisions: using the results of the processing in robot control
Additional material in CS766, ECE 533, ME 739
3 Machine Vision in Automation
Use a camera to inspect parts to:
- Guide a robot or control automated equipment
- Support statistical analysis in a computer-assisted-manufacturing (CAM) system
- Ensure quality in the manufacturing process: dimensions/alignment
- Determine if all components are present
- Other quality issues: color, placement, …
4 Why use Vision?
- Dynamic range
- Can be remotely situated
- Passive: emits no energy (cf. laser, sonar, IR)
- No contact required
- Flexibility
- Affordable
5 Why avoid Vision?
- Computation: images (arrays of pixel data) must be processed to provide information; raw data is not yet information
- Calibration
- Sensitivity to lighting conditions
Because the lighting is different, these three images appear substantially different to a computer. Humans easily adapt their perception to variations in illumination and recognize that all three images show the same object.
6 Example Application: Micro-manipulation
Micro-object handling with a micro-gripper (Postech Robotics Lab)
Micro-gripper, microscope, table
7 A machine vision system often includes the following elements:
- Image acquisition (generally from a camera placed above the production line)
- Image pre-processing (e.g. increasing the contrast, motion de-blur, etc.)
- Feature extraction (e.g. measuring a distance, checking a screw is in place, etc.)
- Decisions (i.e. is the part within tolerance, is a label in the correct position)
- Control (e.g. pass the result to a Programmable Logic Controller (PLC) or robot controller)
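The five stages above can be sketched as a chain of small functions. This is a minimal illustration, not a real vision library: all function names, the tiny synthetic image, and the area tolerance are assumptions chosen for the example.

```python
# Sketch of the five pipeline stages, assuming a grayscale image
# represented as a list of rows of 0-255 intensities.
# All names here are illustrative, not a real machine-vision API.

def acquire():
    """Image acquisition: a tiny synthetic 4x4 'camera frame'."""
    return [[10, 10, 200, 200],
            [10, 10, 200, 200],
            [10, 10,  10,  10],
            [10, 10,  10,  10]]

def preprocess(img):
    """Pre-processing: stretch contrast to the full 0-255 range."""
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    return [[(p - lo) * 255 // (hi - lo) for p in row] for row in img]

def extract_features(img):
    """Feature extraction: area (pixel count) of the bright region."""
    return sum(p > 128 for row in img for p in row)

def decide(area, expected=4, tol=1):
    """Decision: is the measured area within tolerance of expected?"""
    return abs(area - expected) <= tol

def control(ok):
    """Control: the signal that would go to a PLC / robot controller."""
    return "ACCEPT" if ok else "REJECT"

result = control(decide(extract_features(preprocess(acquire()))))
print(result)  # ACCEPT
```

In a real system each stage is far richer, but the data flow (image in, control signal out) has this shape.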
8 Image Acquisition
Transforms the visual image of a physical object into a set of digitized data:
- Illumination
- Image formation (including focusing)
- Image detection or sensing
- Formatting the camera output signal
9 Image Formation and Detection
Vision systems have an opto-electronic device that converts electromagnetic radiation from the image of the physical object into an electric signal used by the vision processing unit.
The image is formed by:
- Illumination flux from the object
- Optics (lens)
- Photosensitive detectors (photodiodes on solid-state cameras)
10 Vision – Image Formation
The image depends on:
- Shape
- Lighting
- Relative positions
- Sensor sensitivity
Same shape – very different images!
11 Lighting
- Structured lighting
- Diffuse backlighting
- Directional backlighting
- Fiber-optic/LED ring lights
12 Lighting
- Polarized lighting
- Oblique lighting
- Direct front lighting
- Cross polarization
13 Lighting
- Diffuse front lighting
- Dark-field illumination
- Fibre-optic near in-lighting
15 Digitization of Camera Signal
The analog image signal (a voltage) is sampled and quantized, often to 8 bits of greyscale or 24 bits of color.
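Quantization maps a continuous voltage onto a fixed number of integer levels (256 levels for 8 bits). A minimal sketch, assuming the sensor output spans 0 to 1 V; the function name and voltage range are illustrative choices, not from the slides.

```python
# Sketch of quantization: map an analog voltage in [0, v_max]
# to an integer grey level with 2**bits levels.

def quantize(voltage, bits=8, v_max=1.0):
    """Round a voltage to the nearest of 2**bits grey levels."""
    levels = 2 ** bits                                 # 256 for 8 bits
    level = int(voltage / v_max * (levels - 1) + 0.5)  # round to nearest
    return max(0, min(levels - 1, level))              # clamp to range

# Sample a voltage ramp at five instants and quantize each sample:
samples = [quantize(v) for v in (0.0, 0.25, 0.5, 0.75, 1.0)]
print(samples)  # [0, 64, 128, 191, 255]
```

Sampling (in time or space) and quantization (in amplitude) together turn the continuous camera signal into the pixel array the software operates on.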
16 Software: Processing the Data
The software allows the image to be processed, analyzed, and stored. Different types of software packages are available, ranging from easy-to-use packages with pre-defined tools to SDKs (software development kits) that let programmers build custom imaging applications. Matlab™ has an image processing toolbox.
- Image pre-processing
- Feature extraction
17 Image Pre-processing
What to do with the image? It may need to be pre-processed before it can be analyzed:
- Remove motion blur (ECE 533/738)
- Enhance contrast
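The contrast-enhancement step named above can be sketched as linear contrast stretching: rescale intensities so the darkest pixel maps to 0 and the brightest to 255. This is one simple technique among many; the function name and the plain list-of-lists image format are assumptions for the example.

```python
# Sketch of contrast enhancement via linear contrast stretching.
# The image is a plain list of rows of 0-255 grey levels.

def stretch_contrast(img, out_min=0, out_max=255):
    """Linearly rescale intensities so the darkest pixel maps to
    out_min and the brightest to out_max."""
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    if hi == lo:                       # flat image: nothing to stretch
        return [row[:] for row in img]
    scale = (out_max - out_min) / (hi - lo)
    return [[int(out_min + (p - lo) * scale) for p in row] for row in img]

# A low-contrast image (all values between 100 and 130) ...
dim = [[100, 110], [120, 130]]
# ... stretched across the full 0-255 range:
print(stretch_contrast(dim))  # [[0, 85], [170, 255]]
```

After stretching, a fixed threshold separates part from background much more reliably, which is why this step often precedes feature extraction.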
18 I Can See It – Why Can't the Computer?
Minimize possible problems. The human eye and brain are elaborate and versatile systems, capable of identifying objects in a wide variety of conditions. For example, we can identify familiar people even when they are wearing different clothes, and recognize familiar landmarks when driving on a foggy day. A PC-based imaging system is not as versatile; it can only perform what it has been programmed to perform. Knowing what the system can and cannot "see" is important for obtaining the results you want and reducing errors and incorrect measurements. Common variables include:
- Changes in the object's color
- Changes in surrounding lighting
- Changes in camera focus or position
- An improperly mounted camera
- Environmental vibration
A vibration-free environment with all extraneous light removed will eliminate many common problems.
19 Find the man…
Visual tasks can be made difficult!
20 Distractors
Natural systems take advantage of the fact that visual tasks can be made difficult!
21 I Can See It – Why Can't the Computer?
Minimize possible problems. Knowing what the system can and cannot "see" is important for obtaining the results you want and reducing errors and incorrect measurements.
Engineer the environment! Great examples include commercial motion-capture systems.
22 Feature Extraction/Analysis
2D geometric analysis:
- Must have high contrast to separate ("segment") the part from the background
- In practice, backlighting is often used
The silhouette is used to determine:
- Part dimensions: width, height, orientation, etc.
- Part features (e.g. number of holes)
- Relationships between parts
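The backlit-silhouette approach above can be sketched in two steps: threshold the image to isolate the dark silhouette, then measure it (here, bounding-box width and height). The threshold value 128 and the tiny test image are assumptions for illustration.

```python
# Sketch of silhouette analysis under backlighting: the backlit part
# appears as dark pixels on a bright background.

def segment(img, threshold=128):
    """Binarize: 1 where the part is (dark pixel), 0 for background."""
    return [[1 if p < threshold else 0 for p in row] for row in img]

def bounding_box(mask):
    """Width and height of the silhouette from its bounding box."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for row in mask for c, p in enumerate(row) if p]
    return (max(cols) - min(cols) + 1,   # width in pixels
            max(rows) - min(rows) + 1)   # height in pixels

image = [[255, 255, 255, 255],
         [255,  20,  30, 255],
         [255,  25,  15, 255],
         [255,  10, 255, 255]]
mask = segment(image)
print(bounding_box(mask))  # (2, 3): silhouette is 2 wide, 3 tall
```

Other silhouette features follow the same pattern: counting connected background regions inside the mask gives the number of holes, and second moments of the mask give orientation.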
24 Measurements from Images
Must have a relationship between the image "pixels" and the world.
- 2D imaging: the image plane and the "world" plane are in one-to-one correspondence
- 3D: harder
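In the simplest 2D case (camera perpendicular to a flat work plane), that one-to-one correspondence reduces to a single calibrated scale factor: millimetres per pixel, found by imaging an object of known size. The 0.2 mm/pixel figure below is an assumed calibration value, not from the slides.

```python
# Sketch of the 2D pixel-to-world correspondence as a single scale
# factor, valid when the camera looks straight down at a flat plane.

MM_PER_PIXEL = 0.2   # assumed: calibrated by imaging a known-size object

def pixels_to_mm(length_px):
    """Convert a length measured in pixels to millimetres."""
    return length_px * MM_PER_PIXEL

# A part that spans 150 pixels in the image:
print(pixels_to_mm(150))  # 30.0 mm
```

When the camera is tilted relative to the plane, a single scale no longer suffices and the full plane-to-plane (projective) mapping must be calibrated; in 3D, depth is lost entirely and extra information is needed.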
25 Goals for ME 439 and ME 739
ME 439:
- Modeling cameras: basics of the pinhole model
- Kinematics of vision: coordinate transformations
- Processing images: some simple features (sections )
- 2D problems
ME 739:
- Modeling cameras: pinhole model, projective mapping, calibration procedures
- Kinematics of vision: coordinate transformations, motion field equations
- Processing images: feature detection (lines, blobs)
- Visual servoing (eye-hand coordination) in 3D
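The pinhole model named in both course outlines can be sketched in a few lines: a 3D point (X, Y, Z) in the camera frame projects to image-plane coordinates (f·X/Z, f·Y/Z), where f is the focal length. The f = 8 mm value and the sample point are assumptions for illustration.

```python
# Sketch of the pinhole camera model: perspective projection of a
# camera-frame 3D point onto the image plane.

def pinhole_project(point, f=8.0):
    """Project a 3D point (X, Y, Z) in mm, camera frame, onto the
    image plane: (x, y) = (f*X/Z, f*Y/Z)."""
    X, Y, Z = point
    if Z <= 0:
        raise ValueError("point must lie in front of the camera")
    return (f * X / Z, f * Y / Z)

# A point 400 mm in front of the camera, 50 mm right, 25 mm up:
print(pinhole_project((50.0, 25.0, 400.0)))  # (1.0, 0.5)
```

Note the division by Z: depth scales the image, which is why a single image cannot recover 3D structure without calibration or additional views, the "3D is harder" point from the previous slide.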