Pattern Recognition Random Thoughts

Pattern Recognition Random Thoughts N.A. Graf UTeV March 2, 2000

Pattern Recognition There are many kinds of patterns. Visual, auditory, temporal, logical, … Using a broad enough interpretation, we can find pattern recognition in every intelligent activity. No single theory of pattern recognition can possibly cope with such a broad range of problems.

Overview Restrict our attention to the following three classes of pattern recognition techniques: Template Matching (global, fixed patterns), the Hough Transform (global, parameterized patterns), and the Kalman Filter (local, dynamic state following).

Suppose that we are working with visual patterns, and we know that the patterns of interest represent the 26 letters of the Roman alphabet. Then we can say that the pattern recognition problem is one of assigning the input to one of 26 classes. In general, we will limit ourselves to the problem of deciding if the input belongs to Class 1 or Class 2 or ... or Class c.

An obvious approach is to compare the input with a standard pattern for each class, and to choose the class that matches best. The obvious problem with this approach is that it doesn't say what to compare or how to measure the degree of match.

Template Matching Once digitized, one can compare images bit by bit with a matching template to classify. Works very well in specific cases, but not in general (fonts, shearing, rotation, etc.)
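
The bit-by-bit comparison can be sketched in a few lines. This is only an illustration of the idea, not anything from the slides: the tiny 3x3 "font", the `match_template` helper, and the agreement-count scoring rule are all hypothetical choices.

```python
import numpy as np

def match_template(image, templates):
    """Classify a binary image by bit-by-bit agreement with each template.

    Returns the label of the template with the most matching pixels.
    Real systems must first normalize for position, scale, and rotation.
    """
    best_label, best_score = None, -1
    for label, tmpl in templates.items():
        score = np.sum(image == tmpl)  # number of agreeing bits
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Tiny hypothetical 3x3 "font" with two classes
templates = {
    "I": np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]]),
    "L": np.array([[1, 0, 0], [1, 0, 0], [1, 1, 1]]),
}
noisy_I = np.array([[0, 1, 0], [1, 1, 0], [0, 1, 0]])  # one flipped bit
print(match_template(noisy_I, templates))  # prints I
```

Even one flipped bit still matches "I" best here, which is exactly the fragility point above: small shears or rotations quickly break this scheme.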

Template Matching in HEP In high energy physics experiments, the detectors are fixed, so template matching is a good solution for fast characterization of events. Commonly used to trigger on charged particle tracks. Use Monte Carlo simulation to build up a library of the most probable patterns.

Parametric Feature Extraction Often, one is interested in extracting topological information from “images”. Finding “edges” in pictures. Finding “tracks” in events. For patterns which can be parameterized, such as curves, features can be identified using conformal mapping techniques.

The Hough Transform Patented by Paul Hough in 1962 as a technique for detecting curves in binary image data. Determines whether edge-detected points are components of a specific type of parametric curve. Maps “image space” points into “parameter space” curves by incrementing elements of an accumulator whose array indices are the curve parameters.

The Hough Transform Developed to detect straight lines using the slope-intercept form y = mx + b. Every point in the image gives rise to a line in the accumulator. Curve parameters are identified as accumulator maxima: the maximum's location gives the parameters, and its number of entries gives the number of points contributing.

The Hough Transform Richard Duda and Peter Hart in 1972 introduced the ρ-θ parameterization.

The Hough Transform The - accumulator is incremented using values for the angle and radius that satisfy  = xcos + ysin Sinusoidal curves are produced. Intersection of the curves indicates likely location of lines in the image. Normal form is periodic, limiting the range of values for the angle and eliminating the difficulties encountered with large slopes.

Finding Straight Lines in Images Start with a digitized image, find the "edges", apply the Hough transform, and extract the features.
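
The ρ-θ voting step can be sketched as follows, assuming the edge points have already been extracted. The `hough_lines` helper and the accumulator sizes are illustrative choices, not anything prescribed by the slides.

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=200, rho_max=None):
    """Accumulate edge points into a rho-theta array (a minimal sketch).

    Each point (x, y) votes along rho = x*cos(theta) + y*sin(theta);
    accumulator maxima mark likely lines.
    """
    if rho_max is None:
        rho_max = max(np.hypot(x, y) for x, y in points)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        # map rho in [-rho_max, rho_max] onto accumulator indices
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1
    return acc, thetas, rho_max

# Ten points on the horizontal line y = 5 (theta = 90 deg, rho = 5)
pts = [(x, 5.0) for x in range(10)]
acc, thetas, rho_max = hough_lines(pts)
i, j = np.unravel_index(np.argmax(acc), acc.shape)
print(np.degrees(thetas[j]))  # ~90
```

All ten points vote into the same ρ-θ cell, so the accumulator maximum recovers the line's angle: the "democratic" voting described in the summary below.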

Other Curves The technique can be generalized to include arbitrary parametric curves. Finding charged tracks in a solenoidal field motivates the circle algorithm: F(x, y, a, b) = (x − a)² + (y − b)² − r² = 0. Simplify by fixing one of the parameters, e.g. the radius r.
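
With the radius fixed, each edge point votes for all centers (a, b) lying on a circle of radius r around itself, and the true center collects the most votes. A sketch under illustrative assumptions (the `hough_circles_fixed_r` helper, accumulator size, and test circle are all made up for the example):

```python
import numpy as np

def hough_circles_fixed_r(points, r, size=100):
    """Vote for circle centers (a, b) at a known radius r (a sketch).

    Each point (x, y) satisfies (x-a)^2 + (y-b)^2 = r^2, so it votes
    for every center on a circle of radius r around itself.
    """
    acc = np.zeros((size, size), dtype=int)
    angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    for x, y in points:
        a = np.round(x - r * np.cos(angles)).astype(int)
        b = np.round(y - r * np.sin(angles)).astype(int)
        ok = (a >= 0) & (a < size) & (b >= 0) & (b < size)
        np.add.at(acc, (a[ok], b[ok]), 1)  # count repeated indices correctly
    return acc

# Points on a circle of radius 20 centered at (50, 50)
r = 20
pts = [(50 + r * np.cos(t), 50 + r * np.sin(t))
       for t in np.linspace(0, 2 * np.pi, 24, endpoint=False)]
acc = hough_circles_fixed_r(pts, r)
a, b = np.unravel_index(np.argmax(acc), acc.shape)
print(a, b)  # near (50, 50)
```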

Charged Tracks in HEP We want to find tracks that come from the origin. Construct the line connecting each measured point to the origin; the perpendicular bisector of this line passes through the circle's center. Either fill the accumulator with each point's bisector line (a one-to-many mapping), or fill it with the intersections of the bisectors from pairs of points (a many-to-one mapping).

The intersection of the bisectors from points p1 and p2 gives the center of the circle, and hence pT and φ0.

The resolution with which one can determine the curve parameters using the Hough transform is determined by the accumulator size: larger size → better resolution → more resources. Two remedies: use the HT for pattern recognition, then fit the points which contributed to the functional form; or use the Adaptive HT: a coarse array to find regions of interest, then backmap the points to a finer-binned accumulator.

HT Summary Works very well for well-defined problems. Ideally suited to modern, digital devices. A global, "democratic" method: individual points "vote" independently. Very robust against noise and inefficiency. Can be generalized to find arbitrary parameterized curves. The AHT offers a solution to the trade-off between speed and resolution.

The Kalman Filter In 1960 Rudolf Kalman published "A New Approach to Linear Filtering and Prediction Problems" in the ASME Journal of Basic Engineering. The best estimate for the state of a system and its covariance can be obtained recursively from the previous best estimate and its covariance matrix. Essential for real-time applications with noisy data, e.g. moon landing, stock market prediction, military targeting.

Running Average Discrete measurements a_n of a constant A. Compare starting over with each new measurement, Â_n = (1/n)·Σ_{i=1..n} a_i, with the recursive formula Â_n = Â_{n−1} + (a_n − Â_{n−1})/n.
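
The two forms give identical estimates, but the recursive one never revisits old data. A small sketch with made-up measurements:

```python
measurements = [4.9, 5.2, 5.0, 4.8, 5.1]  # illustrative data only

# Batch: recompute the mean from scratch at each step.
batch = [sum(measurements[:n]) / n for n in range(1, len(measurements) + 1)]

# Recursive: update the previous estimate with each new measurement.
A, recursive = 0.0, []
for n, a in enumerate(measurements, start=1):
    A = A + (a - A) / n
    recursive.append(A)

print(batch[-1], recursive[-1])  # both ≈ 5.0
```

The recursion is the simplest possible "filter": new estimate = old estimate + gain × (measurement − old estimate), with gain 1/n. The Kalman filter below generalizes exactly this structure.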

Filtering Another class of pattern recognition involves systems for which an existing state is known and one wishes to add additional information. How does one reconcile new, perhaps noisy, information with an existing “best” estimate?

Dynamic System Description A discrete dynamic system is characterized at each time t_k by a state vector x_k, the evolution of which is given by a time-dependent transformation: x_{k+1} = f_k(x_k) + w_k. f_k: a deterministic function. w_k: random disturbance of the system (process noise).

Normally one only observes a function of the state vector, corrupted by some measurement noise: m_k = h_k(x_k) + ε_k. m_k: vector of observations at time t_k. ε_k: measurement noise.

The simplest case has both f and h linear: x_{k+1} = F_k·x_k + w_k and m_k = H_k·x_k + ε_k.

Progressive Fitting There are three basic operations in the analysis of a dynamic system: Filtering: estimation of the present state vector, based upon all the past measurements. Prediction: estimation of the state vector at a future time. Smoothing: improved estimation of the state vector at some time in the past, based upon all measurements taken up to the present time.

Prediction One assumes that at a given initial point the state vector x and its covariance matrix C are known. The parameter vector and covariance matrix are propagated to position i+1 via: x̃_{i+1} = F_i·x_i and C̃_{i+1} = F_i·C_i·F_iᵀ + Q_i, where Q_i is the process noise covariance.

Filtering At position i+1 one has a measurement mi+1 which can contain measurements of an arbitrary number of the state vector parameters. The question is how to reconcile this measurement with the existing prediction for the state vector at this position.

The Kalman Filter The Kalman Filter is the optimum solution in the sense that it minimizes the mean square estimation error. If the system is linear and the noise is Gaussian, then the Kalman Filter is the optimal filter; no nonlinear filter can do better.

Combine the noisy measurement with the prior estimate: x_{i+1} = x̃_{i+1} + K_{i+1}·(m_{i+1} − H_{i+1}·x̃_{i+1}), where K_{i+1} is the Kalman Gain matrix: K_{i+1} = C̃_{i+1}·H_{i+1}ᵀ·(V_{i+1} + H_{i+1}·C̃_{i+1}·H_{i+1}ᵀ)⁻¹, with V the measurement noise covariance.

Kalman Filter Flow Begin with a prior estimate and covariance matrix Compute the Kalman Gain Update estimate with measurement Predict ahead
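
The flow above reduces to a few lines for a scalar constant state (F = 1, H = 1, no process noise), previewing the "Estimating a Constant" example later in the talk. The `kalman_constant` helper, the true voltage 1.25, and the noise level are all invented for illustration:

```python
import random

def kalman_constant(measurements, meas_var, x0=0.0, p0=1e6):
    """Minimal scalar Kalman filter for a constant state (F=1, H=1, Q=0).

    Follows the loop above: predict ahead, compute the gain, update with
    the measurement. A large initial covariance p0 encodes a weak prior.
    """
    x, p = x0, p0
    for m in measurements:
        # Predict: state is constant and noise-free, so x and p carry over
        k = p / (p + meas_var)   # Kalman gain
        x = x + k * (m - x)      # update estimate with the measurement
        p = (1 - k) * p          # shrink the covariance
    return x, p

random.seed(0)
true_v = 1.25
zs = [true_v + random.gauss(0, 0.1) for _ in range(200)]
est, var = kalman_constant(zs, meas_var=0.1 ** 2)
print(round(est, 2))  # close to 1.25
```

With no process noise this collapses to the running average above (gain 1/n); the full KF differs only in letting F, H, and the covariances be matrices.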

Kalman Filter Advantages Combines pattern recognition with parameter estimation Number of computations increases only linearly with the number of detectors. Estimated parameters closely follow real path. Matrix size limited to number of state parameters

Relationship to Least Squares Fitting To solve the overdetermined matrix equation H·x = m, we solve x = (Hᵀ·H)⁻¹·Hᵀ·m, where x minimizes χ² = (m − H·x)ᵀ·(m − H·x).

Consider the generalized weighted sum of squared residuals: χ² = (m − H·x)ᵀ·V⁻¹·(m − H·x). To minimize, take the derivative with respect to x and set it to 0, giving x = (Hᵀ·V⁻¹·H)⁻¹·Hᵀ·V⁻¹·m.
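
The weighted least squares formula is a one-liner in practice. A sketch fitting a line y = a + b·x, with invented data and per-point errors:

```python
import numpy as np

# Minimize chi2 = (m - H x)^T V^-1 (m - H x)
#   =>  x_hat = (H^T V^-1 H)^-1 H^T V^-1 m
xs = np.array([0.0, 1.0, 2.0, 3.0])
m = np.array([1.0, 3.0, 5.0, 7.0])            # exactly y = 1 + 2x
H = np.column_stack([np.ones_like(xs), xs])    # design matrix
Vinv = np.diag(1.0 / np.array([0.1, 0.1, 0.2, 0.2]) ** 2)  # weights

cov = np.linalg.inv(H.T @ Vinv @ H)  # covariance of the fitted parameters
x_hat = cov @ H.T @ Vinv @ m
print(x_hat)  # ≈ [1, 2]
```

Because the data lie exactly on a line here, the weights do not move the answer; with noisy data they down-weight the less precise points.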

Consider the Kalman Filter solution. For no a priori knowledge about x, the initial covariance is taken to be infinite (C₀⁻¹ → 0), giving the Kalman Gain K = (Hᵀ·V⁻¹·H)⁻¹·Hᵀ·V⁻¹. The estimate for the state vector is then x̂ = K·m, the weighted least squares result.

For a constant system state vector with an overdetermined system of linear equations and no a priori information, the Kalman filter reproduces the deterministic least squares result. In most cases, however, one does have prior knowledge, and the Kalman filter's advantage is the convenient way in which it accounts for this prior knowledge via the initial conditions. It is basically a least squares best fit done sequentially rather than in batch mode.

Estimating a Constant Plots of the estimated voltage vs. iteration for three cases: assumed measurement variance greater than, less than, and equal to the true variance.

Track Fitting In the ‘80s, Billoir and Frühwirth adapted the KF to track finding and fitting in HEP. Combined pattern recognition with parameter fitting. Use track state prediction to discriminate between multiple hits in detector elements. Dynamic system accommodates physics: multiple scattering energy loss magnetic field stepping

Fitting a Straight Line in 2D State vector: x = (y, t), with position y and slope t = dy/dx. Track model: a straight line.

The next position is simply the old one plus the slope times the interval: y_{i+1} = y_i + t_i·Δx. The slope remains the same: t_{i+1} = t_i. Therefore the transformation matrix is F = [[1, Δx], [0, 1]].

Ansatz for the initial state: x_1 = (m_1, a_1) with a_1 arbitrary, and covariance C_1 = diag(σ², M) with M >> 1. Predict the next state: x̃_2 = F·x_1 = (m_1 + a_1·Δx, a_1).

The predicted position equals the measured position at the next surface, since we took the error on the predicted slope to be very large, i.e. we did not trust the prediction; the optimal solution is to use the measurement. The predicted slope is Δy/Δx, as we would expect for no a priori knowledge. The initial guess for the slope does not appear in the final result, since we had assigned the prediction a large uncertainty. We now have a good estimate for the slope and its uncertainty, and we iterate.
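
The straight-line filter described above fits in a short function: state (y, slope), transport F = [[1, Δx], [0, 1]], and only y measured (H = [1, 0]). The hit spacing, noise level, and noise-free demo track are illustrative assumptions.

```python
import numpy as np

def line_kf(ys, dx, meas_var, big=1e9):
    """Kalman filter for a straight line in 2D: state (y, slope).

    Transport: y -> y + slope*dx, slope unchanged. Initial slope
    variance is huge, matching the 'arbitrary slope, M >> 1' ansatz.
    """
    F = np.array([[1.0, dx], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    x = np.array([ys[0], 0.0])              # first hit, arbitrary slope 0
    C = np.diag([meas_var, big])
    for m in ys[1:]:
        x = F @ x                           # predict state
        C = F @ C @ F.T                     # propagate covariance (no noise)
        S = H @ C @ H.T + meas_var          # innovation covariance
        K = C @ H.T / S                     # Kalman gain (2x1)
        x = x + (K * (m - H @ x)).ravel()   # update with the measurement
        C = C - K @ H @ C
    return x

# Hits from y = 0.5 + 1.5*x at x = 0, 1, ..., 5 (noise-free for clarity)
ys = np.array([0.5 + 1.5 * x for x in range(6)])
x_est = line_kf(ys, dx=1.0, meas_var=0.01)
print(x_est)  # ≈ [8.0, 1.5] : position at the last surface, and the slope
```

After the first update the state jumps to (m_2, Δy/Δx), exactly as argued above; the remaining hits only tighten the estimate.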

Kalman Filter Summary The Kalman Filter provides an elegant formalism for reconciling measurements with an existing hypothesis. Its progressive, or iterative, nature allows the algorithm to be cleanly implemented in software. The "Extended" KF relaxes the restriction to linear systems with Gaussian noise.

Summary I have barely scratched the surface in presenting these three techniques this evening. There exists a broad spectrum of pattern recognition techniques, but these are fairly representative of the most-used ones. Go out and implement them!