# Perception: Introduction, Pattern Recognition, Image Formation


Perception: Introduction, Pattern Recognition, Image Formation, Image Processing, Summary

Introduction Perception is initiated by sensors.
The focus here is on vision (as opposed to hearing or touch). How do we process the information provided by the sensors? What can we infer about the world from a given sequence of sensor readings?

Processing Sensor Data
Processed sensor data has several uses: manipulation, navigation, and object recognition.

Perception: Introduction, Pattern Recognition, Image Formation, Image Processing, Summary

Recognizing Patterns Definition.
Pattern recognition is the act of "taking in raw data and taking an action based on the category of the pattern" (Pattern Classification, Duda, Hart, and Stork).
[Diagram: input signal → computer performing pattern recognition → action]

A Particular Example Fish packing plant
Sort incoming fish on a belt into two classes, salmon or sea bass. Steps:
1. Preprocessing (segmentation)
2. Feature extraction (measure features or properties)
3. Classification (make the final decision)
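A minimal sketch of this pipeline in Python. The single `length` feature and the 60 cm decision threshold are illustrative assumptions, not values from the lecture:

```python
# Toy version of the fish-sorting pipeline: feature extraction followed
# by classification. The "length" feature and the 60.0 threshold are
# made-up illustrative values.

def extract_features(segmented_fish):
    # Stand-in for real feature extraction on a segmented image.
    return segmented_fish["length"]

def classify(length, threshold=60.0):
    # One-feature decision rule: long fish are labeled sea bass.
    return "sea bass" if length > threshold else "salmon"

fish = {"length": 72.0}  # pretend output of the segmentation step
print(classify(extract_features(fish)))  # -> sea bass
```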

Figure 1.1

Figure 1.2

Figure 1.3

Decision Theory Most times we assume "symmetry" in the cost
(e.g., misclassifying salmon as sea bass is as costly as the reverse). That is not always the case. Compare:
Case 1: a can of sea bass containing pieces of salmon.
Case 2: a can of salmon containing pieces of sea bass.

Decision Boundary We will normally deal with several features at a time. An object is then represented as a feature vector, e.g. x = (x1, x2). Our problem is to separate the space of feature values into a set of regions, one per class. The separating boundary is called the decision boundary.
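For two features, one of the simplest decision boundaries is a line; the sketch below classifies a feature vector by which side of the line it falls on. The weight values are illustrative assumptions:

```python
# Linear decision boundary in a two-feature space: the boundary is the
# set of points where w[0]*x1 + w[1]*x2 + b = 0. The weights and bias
# here are illustrative, not taken from the lecture.

def decide(x, w=(1.0, -1.0), b=0.0):
    score = w[0] * x[0] + w[1] * x[1] + b
    return "class A" if score >= 0 else "class B"

print(decide((3.0, 1.0)))  # -> class A (the x1 > x2 side of the line)
print(decide((1.0, 3.0)))  # -> class B
```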

Figure 1.4

Generalization The main goal of pattern classification is as follows:
to generalize, i.e., to suggest the class or action for objects as yet unseen. Some complex decision boundaries are not good at generalization; some simple boundaries are not good either. One must look for a tradeoff between performance and simplicity. This tradeoff is at the core of statistical pattern recognition.
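The tradeoff can be illustrated with a toy comparison: a "classifier" that memorizes its training data versus a simple threshold rule. All numbers below are made up for illustration:

```python
# A memorizing classifier scores perfectly on its training data but has
# no answer for unseen feature values; a simple threshold rule can label
# any input. The data and the 55.0 threshold are illustrative.

train = [(40.0, "salmon"), (45.0, "salmon"), (70.0, "sea bass")]

memorized = {length: label for length, label in train}  # overfit extreme

def simple_rule(length, threshold=55.0):
    # A simple decision boundary that generalizes to unseen inputs.
    return "salmon" if length < threshold else "sea bass"

unseen = 48.0
print(unseen in memorized)  # -> False: memorization cannot generalize
print(simple_rule(unseen))  # -> salmon
```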

Figure 1.5

Figure 1.6

Designing Pattern Recognition Systems
Components in a system:
a) Sensing devices. Often a transducer such as a camera or microphone (relevant characteristics: bandwidth, resolution, sensitivity, distortion, latency, etc.).
b) Segmentation and grouping. Patterns must be segmented (they may overlap).
c) Feature extraction. Extract features that simplify classification. Ideally, feature values are similar within a category and different across categories; that is, we need distinguishing features that are invariant to irrelevant transformations.

Designing Pattern Recognition Systems
Components in a system (continued):
d) Classification. Use the feature vector to assign the object to the right category. Ideally, determine the probability of category membership, and learn to handle noise.
e) Post-processing. Use the output of the classifier to suggest an action. How do we judge classifier performance? Error rate? Minimize the expected cost, or "risk".
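Determining the probability of category membership can be sketched with a minimal Bayes rule: pick the class that maximizes prior times likelihood. The 1-D Gaussian class models and priors below are illustrative assumptions:

```python
import math

# Minimal Bayes decision rule: choose the category c that maximizes
# P(c) * p(x | c), using assumed 1-D Gaussian class-conditional models.

def gaussian(x, mean, std):
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

# Hypothetical class models (means, spreads, and priors are made up).
models = {
    "salmon":   {"mean": 45.0, "std": 8.0,  "prior": 0.4},
    "sea bass": {"mean": 70.0, "std": 10.0, "prior": 0.6},
}

def classify(x):
    posterior = {c: m["prior"] * gaussian(x, m["mean"], m["std"])
                 for c, m in models.items()}
    return max(posterior, key=posterior.get)

print(classify(50.0))  # -> salmon (closer to the salmon model)
```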

Figure 1.7 Our focus of study today

Applying Pattern Recognition Systems
Steps:
1. Data collection: usually very time consuming.
2. Feature choice: prior knowledge is crucial.
3. Model choice: be ready to switch to new features or a new classifier.
4. Training: learn from example patterns.
5. Evaluation: avoid overfitting.

Perception: Introduction, Image Formation, Image Processing, Summary

Image Formation Consists of creating a 2-D image of a scene.
We can do this with a pinhole camera. The image is inverted under "perspective projection".

Translation of Coordinates
Let (x, y) be a point in the image plane and (X, Y, Z) a point in the scene, with f the focal length. Then
-x/f = X/Z and -y/f = Y/Z,
so
x = -fX/Z and y = -fY/Z.
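The projection equations can be evaluated directly; a small sketch with an arbitrarily chosen focal length:

```python
# Perspective projection through a pinhole: a scene point (X, Y, Z)
# maps to image coordinates x = -f*X/Z, y = -f*Y/Z (the minus signs
# reflect the inverted image). f = 1.0 is an arbitrary choice here.

def project(X, Y, Z, f=1.0):
    # Assumes Z != 0, i.e., the point is not in the pinhole's plane.
    return (-f * X / Z, -f * Y / Z)

print(project(2.0, 1.0, 4.0))  # -> (-0.5, -0.25)
```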

Lenses Real cameras use a lens. More light comes in
(but not all of it can be in sharp focus). Scene points within a certain range of depths around a distance Z0 appear in sharp focus. This range is called the depth of field.

CCD Camera The image plane is subdivided into pixels.
(typically 512x512). The signal is modeled by the variation in image brightness over time.

Fig. 24.4a

Fig. 24.4b

Photometry of Image Formation
The brightness of a pixel is proportional to the amount of light directed toward the camera. Reflected light can be of two types:
a. Diffusely reflected: the light penetrates below the surface of the object and is re-emitted.
b. Specularly reflected: the light is reflected from the outer surface of the object.

Photometry of Image Formation
Most surfaces reflect a combination of diffusely and specularly reflected light. This combination is the key to "modeling" surfaces in computer graphics.
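One common way graphics models this combination is a Phong-style sum of a diffuse term and a specular term. The coefficients kd and ks and the shininess exponent below are illustrative assumptions:

```python
# Phong-style shading sketch: brightness is a weighted sum of a diffuse
# term (Lambert's cosine law) and a specular term (reflection aligned
# with the viewing direction). kd, ks, and shininess are illustrative.

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def shade(normal, light, view, kd=0.7, ks=0.3, shininess=16):
    # All direction vectors are assumed to be unit length.
    nl = dot(normal, light)
    diffuse = kd * max(nl, 0.0)
    # Reflect the light direction about the normal: r = 2(n.l)n - l.
    reflect = tuple(2 * nl * ni - li for ni, li in zip(normal, light))
    specular = ks * max(dot(reflect, view), 0.0) ** shininess
    return diffuse + specular

# Light and camera directly above a horizontal surface:
print(shade((0, 0, 1.0), (0, 0, 1.0), (0, 0, 1.0)))  # ~1.0 (full brightness)
```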

Perception: Introduction, Image Formation, Image Processing, Summary

Image Processing One important step is “edge detection”. Motivation:
Edge contours correspond to scene contours.

Image Processing Typically there are problems: missing contours and
spurious contours caused by noise.

Edge Detection Edges are curves in the image plane where
there is a clear change of brightness. How do we detect edges? Consider the profile of image brightness along a 1-D cross-section perpendicular to an edge.

Solutions 1. Look for places where the derivative is large
(but many subsidiary peaks may show up).
2. Combine differentiation with smoothing.
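A 1-D sketch of combining differentiation with smoothing, using a small moving-average filter and finite differences. The brightness profile is made up:

```python
# Smooth a 1-D brightness profile, then differentiate it and find where
# the derivative magnitude is largest; that location marks the edge.
# The profile values below are illustrative.

def smooth(signal, radius=1):
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

def derivative(signal):
    return [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]

profile = [10, 10, 11, 10, 80, 81, 80, 80]  # a step edge plus mild noise
d = derivative(smooth(profile))
edge = max(range(len(d)), key=lambda i: abs(d[i]))
print(edge)  # index near the brightness jump (between positions 3 and 4)
```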

Extracting 3-D Information
Normally divided into three steps:
1. Segmentation.
2. Determining position and orientation (important for navigation and manipulation).
3. Determining the shape of objects.

Stereopsis The idea is similar to motion parallax.
We use two images separated in space. Superposing the images reveals a disparity in the locations of image features.
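Disparity relates directly to depth: for two cameras with focal length f separated by a baseline b, a point at depth Z produces disparity d = f·b/Z, so Z = f·b/d. A sketch with illustrative numbers:

```python
# Depth from stereo disparity: Z = f * b / d. The focal length (in
# pixels), baseline, and disparity values below are illustrative.

def depth_from_disparity(disparity, focal_length, baseline):
    # Assumes a nonzero disparity (the point is at finite depth).
    return focal_length * baseline / disparity

# f = 500 pixels, baseline = 0.1 m, disparity = 10 pixels:
print(depth_from_disparity(10.0, 500.0, 0.1))  # -> 5.0 metres
```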

Perception: Introduction, Image Formation, Image Processing, Summary

Summary We need to extract information from sensor
data for activities such as manipulation, navigation, and object recognition. The camera signal is modeled by the variation in image brightness over time. Reflected light can be diffuse or specular. Stereopsis is similar to motion parallax. What is machine learning?