
1 IMAGE ANALYSIS AND COMPUTER VISION

2 Orientation basis Definition and history Image processing basics and classification Digital image Image processing methods Equipment Software

3 Image processing basic steps 1. Image Enhancement 2. Image Restoration 3. Image Analysis 4. Image Compression 5. Image Synthesis

4 Goal of image analysis Image analysis operations are used in applications that require the measurement and classification of image information Examples: Cell recognition from a tissue sample Object recognition on a conveyor belt Zip code reading from an envelope

5

6 Group discussion List application possibilities for image analysis!

7 Image analysis The basis is a visual image whose content is to be interpreted The result is mostly non-image data The goal is to understand the image's content by classifying it

8 Example: Robot vision

9 Example: Robot vehicle

10 Example: Traffic analysis

11 Image analysis operations

12 Segmentation – operation that highlights individual objects within an image. Feature extraction – after segmentation, measure the individual features of each object. Object classification – assign each object to a particular category. (Pipeline: image → segmentation → distinct objects → feature extraction → feature space → classification → class)

13 Segmentation operations Image preprocessing Initial object discrimination Image morphological operations

14 Image Preprocessing In preprocessing, e.g., the image's contrast is changed, noise is filtered, and a distracting image background is removed Image enhancement operations are used

15

16 Initial Object Discrimination Separates image objects into rough groups with like characteristics using image enhancement operations Outlining and contrast enhancement are often used

17 Initial object discrimination - example Original Binary contrast enhanced

18 Initial object discrimination - example Sobel edge-enhancement: 1. Filter with the horizontal mask 2. Filter with the vertical mask 3. Add the results
Horizontal mask:
-1 -2 -1
 0  0  0
 1  2  1
Vertical mask:
-1 0 1
-2 0 2
-1 0 1
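The three steps above (filter with the horizontal mask, filter with the vertical mask, add the results) can be sketched in plain Python; this is an illustration on a small grayscale grid, not a library API.

```python
# Sobel edge enhancement as described on the slide: two 3x3 masks,
# then the absolute responses are added.
HORIZONTAL = [[-1, -2, -1],
              [ 0,  0,  0],
              [ 1,  2,  1]]
VERTICAL   = [[-1, 0, 1],
              [-2, 0, 2],
              [-1, 0, 1]]

def filter3x3(image, mask):
    """Apply a 3x3 mask to the image interior (border pixels stay 0)."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    acc += mask[dy + 1][dx + 1] * image[y + dy][x + dx]
            out[y][x] = acc
    return out

def sobel(image):
    gh = filter3x3(image, HORIZONTAL)
    gv = filter3x3(image, VERTICAL)
    # Step 3: add the two filtered images (absolute responses).
    return [[abs(a) + abs(b) for a, b in zip(rh, rv)]
            for rh, rv in zip(gh, gv)]

# A vertical step edge: dark left half, bright right half.
img = [[0, 0, 255, 255]] * 4
edges = sobel(img)   # strong response along the step, 0 in flat areas
```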

19 Morphological processing In the preprocessed image the boundaries are very rough, so they need to be "cleaned up" with morphological operations

20 Binary operations – Erosion and dilation – Opening and Closing – Outlining – Skeletonization Gray-scale operations – Top-Hat and Well transformations – Morphological gradient – Watershed edge detection

21 Binary morphology Focuses on two brightness values: black = 0, white = 255 Technically the same as spatial convolution: combines pixel brightness values with a structuring element, looking for a specific pattern An array of logical values (intersection = AND, union = OR, complement = NOT)

22

23 Binary morphology - equation O(x,y) = 0 or 1 (predefined) if X = I(x,y) AND X0 = I(x+1,y) AND X1 = I(x+1,y-1) AND X2 = I(x,y-1) AND X3 = I(x-1,y-1) AND X4 = I(x-1,y) AND X5 = I(x-1,y+1) AND X6 = I(x,y+1) AND X7 = I(x+1,y+1); otherwise O(x,y) = the opposite state
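A minimal sketch of the neighborhood test this equation describes, using 0/1 pixel values instead of the slide's 0/255 for brevity (plain Python, not a library API):

```python
def matches(image, x, y, element):
    """True when the 3x3 neighborhood around (x, y) equals the structuring
    element at every position - the chain of ANDs in the equation above."""
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if image[y + dy][x + dx] != element[dy + 1][dx + 1]:
                return False
    return True

# Structuring element: all ones (the erosion mask from the next slide).
ALL_ONES = [[1, 1, 1]] * 3

img = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
# matches(img, 2, 2, ALL_ONES) is a "hit"; matches(img, 1, 2, ALL_ONES) a "miss".
```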

24 Erosion Reduces the size of objects in relation to their background. Mask:
1 1 1
1 1 1
1 1 1
O(x,y) = 1 if "Hit", 0 if "Miss"

25 Example (1/2) Erosion image Original Binary contrast enhanced

26 Example (2/2) Eroded image Binary contrast enhanced

27 Dilation Uniformly expands the size of objects. Mask:
0 0 0
0 0 0
0 0 0
O(x,y) = 0 if "Hit", 1 if "Miss"

28 Example Binary contrast enhanced Dilated image

29 Opening Erosion then dilation Removes one-pixel mistakes, like erosion Object size remains Binary contrast enhanced → Erosion → Dilation

30 Closing Dilation then erosion Fills one-pixel-wide holes Object size remains Binary contrast enhanced → Dilation → Erosion
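The erosion, dilation, opening, and closing operations above can be sketched on 0/1 grids as follows; a self-contained illustration with a 3x3 all-ones structuring element, where border pixels are simply left as background:

```python
def erode(img):
    """Binary erosion: a pixel survives only if all nine pixels
    in its 3x3 neighborhood are set."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if all(img[y + dy][x + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                out[y][x] = 1
    return out

def dilate(img):
    """Binary dilation: a pixel is set if any neighbor is set."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if any(img[y + dy][x + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                out[y][x] = 1
    return out

def opening(img):
    return dilate(erode(img))   # removes isolated noise pixels

def closing(img):
    return erode(dilate(img))   # fills one-pixel-wide holes
```

Opening a solid 3x3 block returns it unchanged ("object size remains"), while an isolated noise pixel disappears; closing fills a one-pixel hole in a solid block.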

31 Cleaning Original binary contrast enhanced image → Opening → Cleaned = opened and closed

32 Outlining Forms one-pixel-wide outlines and tends to be more immune to image noise than most edge enhancement operations Implementation: subtract the eroded image from the original

33 Outlining Original Binary contrast enhanced

34 Outlining Eroded image | Original - eroded image

35 Skeletonizing Makes a "wireframe" model of the image Uses different erosion masks Analogy: a fire that burns the object from each side

36 Gray-scale operations Used when binary operations would degrade the image A gray-scale operation can be followed by a binary operation Mask terms: -255...255 or "Don't care"

37

38 Erosion and Dilation Erosion reduces the size of objects by darkening the bright areas of the image Dilation is the inverse operation

39 Erosion and Dilation example Erosion Dilation Original

40 Opening and Closing Opening = erosion + dilation Opening reduces noise pixels Closing = dilation + erosion Closing fills one-pixel-wide holes

41 Opening example

42 Opening - example 2

43 Top-Hat transformation Emphasizes the bright areas of the image Top-Hat = Original - Opened

44 Top-Hat example

45 Well transformation Opposite operation to Top-Hat Emphasizes the dark areas of the image Well = Closed - Original

46 Well Top-Hat

47 Morphological gradient Produces the image's outlines as a result Make two copies of the image; erode one and dilate the other, then subtract the eroded image from the dilated one using a dual-image point process. Original → Eroded, Dilated → Dilated - Eroded = Gradient image

48 Morphological gradient Original | Erosion | Dilation | Gradient = Dilation - Erosion
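A gray-scale sketch of the gradient described above, assuming (as is standard for gray-scale morphology) that erosion is a local 3x3 minimum and dilation a local 3x3 maximum:

```python
def local_extreme(img, pick):
    """Gray-scale erosion (pick=min) or dilation (pick=max) over a
    3x3 neighborhood; border pixels are left at 0."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = pick(img[y + dy][x + dx]
                             for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return out

def morphological_gradient(img):
    """Dilated image minus eroded image: large at edges, 0 in flat areas."""
    eroded = local_extreme(img, min)
    dilated = local_extreme(img, max)
    return [[d - e for d, e in zip(dr, er)]
            for dr, er in zip(dilated, eroded)]

step = [[0, 0, 255, 255]] * 4   # a vertical brightness step
flat = [[7] * 4] * 4            # a flat region
```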

49 Watershed edge detection Gives good edge detection where objects in an image are touching, or where few gray levels exist between different objects Pixel brightness is represented as a height coming out of the page Complement the image, so bright objects appear as pixel depressions in the image

50 Watershed 2 Imagine flooding the image with water: the depressions fill with water first, and as the water level rises, neighboring objects collect water independently. The operation marks pixels where a single object pixel separates two bodies of water.

51 Edge detection in a 1-dimensional signal with the Watershed operation

52 Edge median line detection with the Morphological gradient and Watershed operations

53 Original | Negative | Watershed | Adjust / Curves

54 Feature Extraction The operation that follows segmentation Choose the essential features and measure them from the objects The goal is to find features that make it easier to determine the object's class

55 Features Brightness and color Texture Shape Spatial moments Edge shape

56 Pornographic image analysis

57

58 Feature: Brightness and color A histogram can show: Color (sorting by colors) Brightness: average brightness, standard deviation, brightness mode, sum of all pixel brightnesses = energy (zero-order spatial moment)

59 Example: Fruit sorting Problem: boxes travel along a conveyor belt; some contain green apples (Granny Smith), some red apples (Red Delicious), and some oranges Sort the boxes onto the correct follow-on conveyor belts automatically

60 Example: Fruit sorting Solution: Capture an RGB image with a camera Convert RGB to HSL Examine the Hue component to determine which fruit it is
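The hue-based sorting above can be sketched with Python's standard colorsys module (which uses HLS ordering rather than HSL); the hue thresholds below are illustrative assumptions, not calibrated values:

```python
import colorsys

def classify_fruit(r, g, b):
    """Classify a fruit by the Hue component, as on the slide.
    RGB components are 0..255; hue thresholds are rough assumptions."""
    h, _, _ = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    hue = h * 360                      # hue in degrees, 0..360
    if hue < 15 or hue >= 345:
        return "red apple"             # reds sit near 0 degrees
    if hue < 70:
        return "orange"                # oranges around 30 degrees
    if hue < 170:
        return "green apple"           # greens around 120 degrees
    return "unknown"

classify_fruit(200, 30, 30)   # "red apple"  (hue = 0 degrees)
classify_fruit(60, 200, 60)   # "green apple" (hue = 120 degrees)
classify_fruit(255, 140, 0)   # "orange"      (hue ~ 33 degrees)
```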

61 RGB → HSL

62 Feature: Texture The image's spatial frequency indicates whether a texture is smooth (low frequency) or coarse (high frequency)

63 Texture features - measures High-pass filter Fourier transform Standard deviation of the brightnesses (large deviation = coarse texture, small deviation = smooth texture)
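The standard-deviation measure above can be computed directly with Python's statistics module; the pixel values here are made-up illustrations of the two cases:

```python
from statistics import pstdev

def texture_coarseness(pixels):
    """Population standard deviation of pixel brightnesses, used as the
    simple texture measure above: large = coarse, small = smooth."""
    return pstdev(pixels)

smooth = [100, 101, 99, 100, 102, 100]   # low deviation -> smooth texture
coarse = [0, 255, 10, 240, 5, 250]       # high deviation -> coarse texture
```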

64 Feature: Shape The most common object measurement method: the object's physical measures The goal is to use the fewest necessary measures Often standard shapes: square, circular, elliptical Even the weirdest shapes can be measured

65 Shape measures Perimeter Area (pixels) Area-to-perimeter ratio Roundness = (4π * Area) / Perimeter² Value between 0 and 1.
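The roundness measure above can be checked numerically: a perfect circle scores exactly 1, and a square scores π/4 ≈ 0.785.

```python
import math

def roundness(area, perimeter):
    """Roundness = 4*pi*Area / Perimeter^2, as defined on the slide."""
    return (4 * math.pi * area) / perimeter ** 2

# Circle of radius r: area = pi*r^2, perimeter = 2*pi*r -> roundness 1.0
r = 5.0
circle = roundness(math.pi * r ** 2, 2 * math.pi * r)

# Square of side s: area = s^2, perimeter = 4s -> roundness pi/4
s = 4.0
square = roundness(s ** 2, 4 * s)
```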

66 Shape measures Major axis (x1,y1 and x2,y2) Major axis length = sqrt((x2-x1)² + (y2-y1)²) Major axis angle

67 Shape measures Minor axis Minor axis width Minor axis width to Major axis length ratio Value between 0 (elongated) and 1 (circle)

68 Shape measures Bounding box area = Major axis length * Minor axis width Number of holes Total hole area Total hole area to object area ratio Value between 0 (no holes) and 1 (entirely a hole)

69 Feature: Spatial moments Statistical shape measures Zero-order spatial moment: sum of the object's pixel brightnesses. The same as the object's area in a binary image and the object's 'energy' in a gray-scale image

70 Spatial moment First-order spatial moment (x-moment):
x_sum = Σ (i · x_i), summed for i = 0 … m
where i = pixel's x-coordinate, x_i = pixel value at i, m = image's horizontal dimension.
y-moment:
y_sum = Σ (j · y_j), summed for j = 0 … n

71 Center of mass (= Centroid) Center of mass = first-order moment / zero-order moment For binary images: CoM_x = sum of the object's x-coordinates / number of pixels in the object, CoM_y = sum of the object's y-coordinates / number of pixels in the object
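Computed directly from the definitions above for a binary image, the centroid is the first-order moments divided by the zero-order moment (the object's pixel count):

```python
def centroid(binary):
    """Center of mass of a binary object: first-order moments
    divided by the zero-order moment (pixel count)."""
    m00 = mx = my = 0
    for y, row in enumerate(binary):
        for x, v in enumerate(row):
            if v:
                m00 += 1      # zero-order moment = object area
                mx += x       # first-order x-moment
                my += y       # first-order y-moment
    return mx / m00, my / m00

# A 2x2 block of ones spanning columns 1-2, rows 0-1:
img = [[0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
centroid(img)   # (1.5, 0.5) - the geometric center of the block
```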

72 Feature: Boundary Descriptions Explicit description Chain codes Line segment representation Fourier descriptors

73 Boundary shape, accuracy A sequential list of the boundary pixels saves the boundary (x,y) values Does not tolerate change of location, orientation, or size

74 Boundary shape, chain code The boundary can be followed and recorded using a chain code: pick a starting pixel and record the direction to the next boundary pixel (values 0..7)

75 Chain codes Absolute chain code: each code represents an absolute direction, so it can't handle geometric change Relative chain code: direction values are relative to the previous value, so it can handle geometric change
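A sketch of converting an absolute chain code into a relative one: each value becomes the turn from the previous direction, modulo 8. This shows the rotation tolerance described above, since rotating the object by a multiple of 45 degrees shifts every absolute direction but leaves the relative code unchanged.

```python
def to_relative(absolute):
    """Convert an absolute 8-direction chain code (values 0..7) into a
    relative one: each entry is the change from the previous direction."""
    rel = []
    prev = absolute[0]
    for d in absolute[1:]:
        rel.append((d - prev) % 8)
        prev = d
    return rel

# A square traced with two steps per side, and the same square
# rotated by 90 degrees (every absolute direction shifted by 2):
square = [0, 0, 2, 2, 4, 4, 6, 6]
rotated = [2, 2, 4, 4, 6, 6, 0, 0]
to_relative(square)    # [0, 2, 0, 2, 0, 2, 0]
to_relative(rotated)   # same relative code as the unrotated square
```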

76

77 Relative chain code

78 Line segment representation In some cases a reduced form of the outline may be sufficient, for example replacing individual boundary (x,y) locations with line segments

79 Line segment representation - general form Like a chain code, but segment lengths and angles can take arbitrary values

80 Variable line segment boundary representation Combines adjoining line segments with similar direction angles

81 Fourier descriptors Treats the (x,y) points of an object outline as two functions and decomposes them into values representing their frequency components. Abstract description of the boundary. Usually it’s enough to compare highest frequency components with known sets of descriptors to classify the object Inverse-transformation recreates the boundary

82 Object Classification Compare the measurements of a new object with those of a known object or other known criteria In recognition, the objects found in the image (for example letters and numbers) get labels

83 Knowledge base Image analysis requires prior knowledge of the problem area, which is saved in a knowledge base It can contain, for example, the features of different letters and numbers, or where the wanted objects may be located The knowledge base also controls the operation of the different modules

84 Object classification Comparing measures with known features Classification: pick the feature measures, pick the tolerances, create the classes

85 Object classification - example Recognize rings and spades on a conveyor belt Feature: hole or no hole Tolerance: hole size 5 mm ± 5% Classes: good if within tolerances, otherwise reject

86 Other classification methods Gray-level classification: image information is divided into classes by gray level (text, logo, background, ...) Fuzzy logic: if several features depend on each other Neural networks: unique ability to be "trained"

87 Measure invariance Measures should be independent of location, angle, and size

88 Measure invariance Instead of using end-point coordinates, use major axis length values Open up the tolerances Give measures as multiples of a unit distance

89 Measure invariance Relative chain coding and line segment coding

90 Measure invariance - example Recognition of a plane type

91 Object class Image matching A convolution-style method where an image mask is compared to the image under analysis: multiply each mask pixel value with the corresponding image pixel and sum the products Large numbers result if the target and the image mask match
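The matching sum described above (multiply mask pixels with the image pixels under them and sum; large sums indicate a match) can be sketched as a brute-force correlation search:

```python
def match_score(image, mask, ox, oy):
    """Sum of products of mask pixels with the image pixels under the
    mask when its top-left corner sits at (ox, oy)."""
    score = 0
    for my, row in enumerate(mask):
        for mx, m in enumerate(row):
            score += m * image[oy + my][ox + mx]
    return score

def best_match(image, mask):
    """Slide the mask over every position; the largest sum wins."""
    h, w = len(image), len(image[0])
    mh, mw = len(mask), len(mask[0])
    return max((match_score(image, mask, x, y), x, y)
               for y in range(h - mh + 1)
               for x in range(w - mw + 1))

# A 2x2 bright block hidden at position (2, 1):
img = [[0, 0, 0, 0],
       [0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 0, 0]]
MASK = [[1, 1],
        [1, 1]]
best_match(img, MASK)   # (4, 2, 1): max score 4 at x=2, y=1
```

For real images the raw sum is biased toward bright regions, which is why normalized correlation is usually preferred in practice; the slide's plain sum is kept here for fidelity.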

92 Object class Image matching

93 Image analysis example - text recognition Problem: reading a zip code or recognizing a car's registration number Image capture → Segmentation → Feature extraction → Class

94 Application level Basic description of the application Example: 1. Capture an image of the registration plate 2. Process and interpret the objects 3. Search the registration database for offenses

95 Image analysis Segmentation: image preprocessing, initial object discrimination, outline cleaning

96 Feature extraction Form a relative chain code from the objects with an edge detection algorithm

97 Classification Compare the text vectors with a model library and interpret them as the equivalent class Knowledge base?

98 Object detection system - planning Object localization Feature selection Classifier design Classifier training Performance evaluation

99 Pattern recognition

100 Applications

101

102 Cell analysis

103 Chromosome analysis

104 Thermo analysis

105 Micro hardness analysis

106 DNA analysis

107 Motion analysis

108 Counting trees Computer vision examples

109 Facial recognition (= there is a face)

110 Facial recognition principle

111 Face features

112 More information Home assignment: install the CVIPTools software, version 5.3G, and try out its image processing and analysis operations. I'll ask something about it in the exam. Computer Vision and Image Processing Tools (CVIPTools): http://cviptools.ece.siue.edu/index.php

