ICCV 2007
Half Quadratic Analysis for Mean Shift: with Extension to A Sequential Data Mode-Seeking Method
Xiaotong Yuan, Stan Z. Li
National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
October 17, 2007

ICCV 2007 Outline
- Motivation
- Theoretical Exploration
- Algorithm Extension
- Summary

ICCV 2007 Motivation
- Put Mean Shift on proper grounds: better understand its essence, facilitate numerical study
- Fast multiple data mode-seeking: improve on exhaustive-initialization-based methods

ICCV 2007 Mean Shift as Half Quadratic Optimization

ICCV 2007 Background of Mean Shift
Mean Shift: a fixed-point iteration algorithm that finds a local maximum of the kernel density estimate
$\hat f(x) = \sum_{i=1}^{n} w_i\, k\big( (x - x_i)^\top \Sigma^{-1} (x - x_i) \big)$,
where the $w_i$ are prior weights, $(x - x_i)^\top \Sigma^{-1} (x - x_i)$ is the Mahalanobis distance, and $k(\cdot)$ is the kernel (profile) function.
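To make the fixed-point iteration concrete, here is a minimal sketch (not from the slides) of a weighted mean-shift run with a Gaussian kernel; the function name, the isotropic bandwidth h, and the tolerances are illustrative assumptions, and the general Mahalanobis metric above is simplified to an isotropic one.

import numpy as np

def mean_shift(x, X, w, h=1.0, tol=1e-6, max_iter=500):
    # One mode-seeking run started from x.
    # X: (n, d) samples, w: (n,) prior weights, h: isotropic bandwidth.
    # Gaussian profile k(t) = exp(-t/2), so g(t) = -k'(t) is Gaussian as well.
    x = np.asarray(x, dtype=float)
    for _ in range(max_iter):
        d2 = np.sum((X - x) ** 2, axis=1) / h ** 2   # squared scaled distances
        coef = w * np.exp(-0.5 * d2)                 # per-sample weights w_i * g(d_i)
        x_new = coef @ X / coef.sum()                # weighted mean: the mean-shift update
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x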

ICCV 2007 Prior Understanding
- Gradient ascent algorithm with an adaptive step size [Cheng 1995]
- Quadratic bounding optimization [Fashing et al. 2005]
- Gaussian Mean Shift is an EM algorithm [Carreira-Perpinan 2007]

ICCV 2007 Half Quadratic Optimization
- Based on the theory of convex conjugate functions
- Non-quadratic convex objective functions are optimized in a quadratic-like way
- Its convergence properties have been studied in depth [Nikolova & Ng 2005, Allain et al. 2006]

ICCV 2007 Preliminary Facts
All of the conditions we impose on the kernel are summarized below:

ICCV 2007 HQ Optimization for KDE
For a convex kernel profile $k$, the theory of convex conjugate functions gives
$k(t) = \sup_{p} \big( p\,t - k^{*}(p) \big)$,
where $p\,t$ is the quadratic term (quadratic in $x$, since $t$ is the Mahalanobis distance), $k^{*}(p)$ is the conjugate term, and $p$ is the dual variable. The supremum is reached at $p = k'(t)$.
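As a concrete worked example (added for illustration, not on the slide): for the Gaussian profile the conjugate pair is available in closed form,

$k(t) = e^{-t/2}, \qquad k^{*}(p) = -2p\,\ln(-2p) + 2p, \quad p \in [-\tfrac{1}{2}, 0),$

and indeed $k(t) = \sup_{p}\big(p\,t - k^{*}(p)\big)$, with the supremum attained at $p = k'(t) = -\tfrac{1}{2}e^{-t/2}$.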

ICCV 2007 Alternate Maximization
A new objective function on the extended domain $(x, p)$:
$\hat F(x, p) = \sum_{i=1}^{n} w_i \big( p_i\, d(x, x_i) - k^{*}(p_i) \big)$,
maximized alternately over the dual variables $p = (p_1, \dots, p_n)$ and over $x$. The resulting iteration is equivalent to Mean Shift.
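Spelling the two steps out (a reconstruction consistent with the notation above):

$\text{p-step:}\quad p_i^{(t)} = k'\!\big(d(x^{(t)}, x_i)\big), \qquad \text{x-step:}\quad x^{(t+1)} = \arg\max_x \sum_i w_i\, p_i^{(t)}\, d(x, x_i) = \frac{\sum_i w_i\, p_i^{(t)}\, x_i}{\sum_i w_i\, p_i^{(t)}}.$

Since the profile $k$ is decreasing, $p_i^{(t)} = -g\big(d(x^{(t)}, x_i)\big) \le 0$, so the x-step is exactly the classical mean-shift update with sample weights $w_i\, g\big(d(x^{(t)}, x_i)\big)$.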

ICCV 2007 Relation to Bound Optimization
Quadratic bounding optimization [Fashing et al. 2005]
HQ formulation: for a fixed point $x^{(t)}$, denote $p_i^{(t)} = k'\big(d(x^{(t)}, x_i)\big)$; then $\hat F(x, p^{(t)})$ analytically defines a quadratic lower bound for the KDE $\hat f(x)$ at each time step.

ICCV 2007 Relation to EM
Gaussian Mean Shift is an EM algorithm [Carreira-Perpinan 2007]
HQ optimization: the alternate maximization over $p$ and over $x$ is equivalent to the E-step and the M-step, respectively.

ICCV 2007 Convergence Rate
When the bandwidth matrix is isotropic, the root-convergence of convex-kernel Mean Shift is at least linear.

ICCV 2007 Adaptive Mean Shift for Sequential Data Mode-Seeking

ICCV 2007 Multiple Mode-Seeking
Exhaustive initialization: run Mean Shift in parallel from all initial points.
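For contrast with the adaptive scheme on the following slides, a sketch of this exhaustive baseline (my illustration, reusing the mean_shift sketch above; the merge tolerance is an arbitrary choice):

def exhaustive_mode_seeking(X, w, h=1.0, merge_tol=1e-3):
    # Run mean shift from every sample and merge coinciding fixed points.
    modes = []
    for x0 in X:
        m = mean_shift(x0, X, w, h=h)
        if not any(np.linalg.norm(m - m0) < merge_tol for m0 in modes):
            modes.append(m)
    return modes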

ICCV 2007 Adaptive Mean Shift
Basic idea:
- Properly estimate starting points near the significant modes
- Sequentially re-weight the samples to guide the search

ICCV 2007 Algorithm Description
Initialization:
Repeat
- Starting point estimation: global mode estimation of the re-weighted density by Annealed MS [Shen 2007]
- Local mode estimation: by traditional Mean Shift
- Sample prior re-weighting
Until an already-found mode reappears
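A rough sketch of this loop (my reading of the slide, not the authors' code): the annealed-MS starting-point step is replaced here by simply taking the sample with the largest re-weighted density, and the down-weighting rule near a found mode is an illustrative choice rather than the one from the paper.

def adaptive_mean_shift(X, w0, h=1.0, merge_tol=1e-3):
    # Sequential mode seeking with prior re-weighting; reuses mean_shift above.
    w = np.asarray(w0, dtype=float).copy()
    modes = []
    for _ in range(len(X)):
        # Starting-point estimation (simplified stand-in for annealed MS):
        # the sample with the largest re-weighted KDE value.
        dens = np.array([np.sum(w * np.exp(-0.5 * np.sum((X - x) ** 2, axis=1) / h ** 2))
                         for x in X])
        x0 = X[np.argmax(dens)]
        # Local mode estimation by traditional mean shift.
        m = mean_shift(x0, X, w, h=h)
        # Stop when an already-found mode reappears.
        if any(np.linalg.norm(m - m0) < merge_tol for m0 in modes):
            break
        modes.append(m)
        # Sample prior re-weighting: down-weight samples near the new mode
        # (illustrative rule, not the paper's exact formula).
        d2 = np.sum((X - m) ** 2, axis=1) / h ** 2
        w = w * (1.0 - np.exp(-0.5 * d2))
    return modes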

ICCV 2007 Illustration (figure): prior-weight curves across the search; only four iterations are needed.

ICCV 2007 Advantages
- Initialization invariant
- Highly efficient: complexity is linear in the number of significant modes

ICCV 2007 Numerical Test: Image Segmentation

ICCV 2007 Application: Color Constancy
- Linear render model [Manduchi 2006]: a color transformation (render vector) maps the color surfaces under a "canonical" illuminant to their appearance in the test image
- Test image: contains the color surfaces under an illumination change; each surface's pixels are modeled by a Gaussian distribution
- Problem: how to estimate the possibly existing render vectors?

ICCV 2007 Existing Method
MAP formulation [Manduchi 2006]: the possibly existing render vectors are estimated with an EM algorithm.
Limitations:
- the number of render vectors must be known a priori
- all render vectors must be properly initialized

ICCV 2007 Convex Kernel based Solution
- Compensation accuracy
- Ada-MS for sequential mode seeking
- Prior weight: defined for pixel i on training surface j

ICCV 2007 Results (figure panels): ground-truth mapping, compensated image, prior-weight images.

ICCV 2007 Summary
- Put Mean Shift on proper grounds: HQ optimization
- Connections with previous viewpoints: bound optimization and EM
- Fast multiple data mode seeking (linear complexity, initialization invariant)
- Very simple to implement
- We hope to see more applications