Radiometric Normalization Spring 2009 Ben-Gurion University of the Negev

Sensor Fusion, Spring 2009. Instructor: Dr. H. B. Mitchell

Radiometric Normalization
Radiometric normalization ensures that all input measurements use the same measurement scale. We shall concentrate on statistical relative radiometric normalization. These methods do not require exact spatial alignment, although they do assume the images are more-or-less aligned. Other methods will be discussed throughout the course.

Histogram Matching
Input: reference image A and test image B.
Normalization: transform B such that the pdf of the transformed image is the same as the pdf of A, i.e. find a function $f$ such that $p_{f(B)} = p_A$. The solution is $f = F_A^{-1} \circ F_B$, where $F_A$ and $F_B$ are the cumulative distribution functions of A and B.
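A minimal numpy sketch of this transform, assuming 8-bit single-channel images; the function name and interface are illustrative only, not part of the course material.

```python
import numpy as np

def match_histogram(B, A):
    """Map test image B onto reference image A via f = F_A^{-1} o F_B."""
    bins = np.arange(257)                     # one bin per 8-bit gray-level
    F_B = np.cumsum(np.histogram(B, bins=bins)[0]) / B.size
    F_A = np.cumsum(np.histogram(A, bins=bins)[0]) / A.size
    # For each gray-level g, take the smallest reference level whose
    # CDF reaches F_B(g); this realizes F_A^{-1}(F_B(g)).
    f = np.searchsorted(F_A, F_B).clip(0, 255).astype(np.uint8)
    return f[B]                               # apply the lookup table to B
```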

Histogram Matching
This is easy if B has distinct gray-levels. Let $h_A$ be the histogram of A. Suppose A has $h_A(0)$ pixels with gray-level 0. Then the $h_A(0)$ pixels of B with the lowest ranks are assigned gray-level 0, the next $h_A(1)$ pixels gray-level 1, and so on.

Histogram Matching
If the gray-levels are not distinct we may break ties randomly. It is better to use “exact histogram specification”.

Exact Histogram Specification
Convolve the input image with 6 masks, the first being the pixel itself and the remainder averaging over successively larger neighborhoods. Order the pixels by gray-level; resolve ties at mask k using the output of mask k+1. If no ties exist, stop; otherwise continue with the next mask.
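A hedged sketch of the resulting strict ordering, assuming the six masks are the pixel itself plus uniform averages over growing windows (an assumption; the exact masks on the original slide were not preserved):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def strict_order(img):
    """Totally order pixels: gray-level first, larger masks break ties."""
    keys = [img.astype(float)]                      # mask 1: the pixel itself
    for size in (3, 5, 7, 9, 11):                   # masks 2-6: local means
        keys.append(uniform_filter(img.astype(float), size=size))
    # np.lexsort uses the LAST key as the primary sort key, so reverse.
    return np.lexsort([k.ravel() for k in reversed(keys)])

def exact_specification(img, target_hist):
    """Impose target_hist exactly (it must sum to img.size)."""
    order = strict_order(img)                       # darkest ... brightest
    levels = np.repeat(np.arange(256, dtype=np.uint8), target_hist)
    out = np.empty(img.size, dtype=np.uint8)
    out[order] = levels
    return out.reshape(img.shape)
```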

Midway Histogram Equalization
Warp both input histograms to a common histogram. The common histogram is defined to be as similar as possible to both inputs. A solution: define the midway histogram by its inverse cumulative histogram, $F_{mid}^{-1} = \frac{1}{2}\left(F_A^{-1} + F_B^{-1}\right)$. A direct implementation is difficult; a fast algorithm, dynamic histogram warping (DHW), is available using dynamic programming.

Midway Histogram Equalization
[Figure: optical flow computed with and without midway histogram equalization.]

Midway Histogram Equalization
If the input images have unique gray-levels (use exact histogram specification to enforce this), then the midway histogram is trivial: the kth midway gray-level is $\frac{1}{2}(a_k + b_k)$, where $a_k$ and $b_k$ are the kth largest gray-levels in A and B.
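A small numpy sketch of this trivial case, assuming two equal-size images whose pixels have already been given a strict ordering:

```python
import numpy as np

def midway_unique(A, B):
    """Midway equalization when gray-levels are unique (A.size == B.size)."""
    # kth smallest midway level = average of the kth smallest levels.
    mid = 0.5 * (np.sort(A.ravel()) + np.sort(B.ravel()))
    A_out = np.empty(A.size)
    B_out = np.empty(B.size)
    A_out[np.argsort(A.ravel())] = mid      # scatter levels back by rank
    B_out[np.argsort(B.ravel())] = mid
    return A_out.reshape(A.shape), B_out.reshape(B.shape)
```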

Ranking
Ranking may also be used as a robust method of radiometric normalization. It is very effective on small images, less so on large images with many ties. Solutions: exact histogram specification, or fuzzy ranking.

Ranking. Classical
Classical ranking works as follows. Given M crisp numbers $x_1, \dots, x_M$, compare each $x_i$ with each $x_j$; the result is $c_{ij} = 1$ if $x_i \ge x_j$ and $c_{ij} = 0$ otherwise. The crisp ranks are $r_i = \sum_{j=1}^{M} c_{ij}$. Note: we may make the equations symmetrical by redefining $c_{ij} = \frac{1}{2}\left(1 + \operatorname{sign}(x_i - x_j)\right)$.

Ranking. Classical
Example: for M = 3 numbers $x = (0.7, 0.2, 0.5)$, the crisp ranks are $r = (3, 1, 2)$.
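A tiny numpy sketch of the crisp ranking just defined (the helper name is illustrative):

```python
import numpy as np

def crisp_ranks(x):
    """r_i = sum_j c_ij with c_ij = 1 if x_i >= x_j, else 0."""
    x = np.asarray(x, dtype=float)
    c = (x[:, None] >= x[None, :]).astype(float)   # all pairwise comparisons
    return c.sum(axis=1)

print(crisp_ranks([0.7, 0.2, 0.5]))                # -> [3. 1. 2.]
```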

Ranking. Fuzzy
Fuzzy ranking is a generalization of classical ranking. In place of M crisp numbers we have M membership functions $\mu_1, \dots, \mu_M$. Compare each $\mu_i$ with each $\mu_j$ using the “extended min” and “extended max” operations; the result is a degree $c_{ij} \in [0, 1]$ to which $\mu_i \ge \mu_j$. The fuzzy ranks are $r_i = \sum_{j=1}^{M} c_{ij}$.

Thresholding
Thresholding is mainly used to segment an image into background and foreground; it is also used as a normalization method. A few unsupervised thresholding algorithms: Otsu; Kittler-Illingworth; Kapur, Sahoo and Wong (KSW); etc.
Example: KSW thresholding. Consider the image as two sources, foreground (A) and background (B), split at threshold t. The optimum threshold maximizes the sum of the entropies of the two sources.
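A hedged sketch of KSW maximum-entropy thresholding for an 8-bit image (names are illustrative):

```python
import numpy as np

def ksw_threshold(img):
    """Return t maximizing H_background(t) + H_foreground(t)."""
    p = np.histogram(img, bins=np.arange(257))[0].astype(float)
    p /= p.sum()
    best_t, best_H = 0, -np.inf
    for t in range(1, 255):
        pB, pF = p[:t + 1], p[t + 1:]              # split at threshold t
        wB, wF = pB.sum(), pF.sum()
        if wB == 0 or wF == 0:
            continue
        qB, qF = pB[pB > 0] / wB, pF[pF > 0] / wF  # class-conditional pdfs
        H = -(qB * np.log(qB)).sum() - (qF * np.log(qF)).sum()
        if H > best_H:
            best_t, best_H = t, H
    return best_t
```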

Thresholding
Advantage: unsupervised thresholding methods automatically adjust to the input image.
Disadvantage: the quantization is very coarse. This may be overcome by using fuzzy thresholding.
[Figure: classical (hard) vs. fuzzy membership around the threshold t.]

Aside: Fuzzy Logic
From this viewpoint we may regard fuzzy logic as a method of normalizing an input x in M different ways. We have M membership functions which represent different physical qualities, e.g. “hot”, “cold”, “tepid”. We then represent x as three values giving the degree to which x is hot, x is cold and x is tepid.
[Figure: membership curve giving the degree to which x is regarded as hot.]
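An illustrative sketch of this view, with made-up triangular membership functions and temperature breakpoints (all numeric values here are assumptions for the demo):

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with support [a, c] and peak at b."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

def fuzzify(x):
    # Normalize x in M = 3 different ways: degrees of cold/tepid/hot.
    return {"cold":  tri(x, -10.0, 0.0, 15.0),
            "tepid": tri(x, 0.0, 15.0, 25.0),
            "hot":   tri(x, 15.0, 25.0, 40.0)}

print(fuzzify(18.0))   # e.g. {'cold': 0.0, 'tepid': 0.7, 'hot': 0.3}
```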

Likelihood
A powerful normalization is to convert the measurements to a likelihood. This is widely used for normalizing feature maps, but it requires ground truth, which may be difficult to obtain.

Likelihood. Edge Operators
Example: consider multiple edge operators: the Canny edge operator, the Sobel edge operator, the zero-crossing edge operator. The resulting feature maps all measure the same phenomenon (the presence of edges), but they have different scales and so require radiometric normalization. We can use methods such as histogram matching, but it is better to use likelihood. Why?

Likelihood. Edge and Blob Operators
Example: consider edge and blob operators. Their feature maps measure very different phenomena, so radiometric normalization is of no use. However, the theory of automatic target recognition (ATR) suggests edges and blobs are causally linked to the presence of a target. Edge and blob maps may therefore be normalized by semantically aligning them, i.e. interpreting each as giving the likelihood of the presence of a target.

Likelihood. Edge and Blob Operators
The edge map E(m,n) measures the strength of an edge at (m,n); the blob map B(m,n) measures the strength of a blob at (m,n). The edge likelihood measures the likelihood of a target existing at (m,n) given E(m,n); the blob likelihood measures the likelihood of a target existing at (m,n) given B(m,n). Calculating the likelihoods requires ground-truth data. Three different approaches to calculating the likelihoods follow.

Likelihood. Platt Calibration
Given training data (ground truth): K examples of edge values $x_1, \dots, x_K$, and K indicator flags $y_1, \dots, y_K \in \{0, 1\}$ which describe the presence or absence of a true target. Suppose the function which describes the likelihood of a target given an edge value x is sigmoid in shape: $p(\text{target} \mid x) = \frac{1}{1 + \exp(Ax + B)}$. Find the optimum values of A and B by maximum likelihood.

Likelihood. Platt Calibration
The maximum-likelihood solution minimizes the negative log-likelihood over (A, B). If too few training samples have $y_k = 0$ or $y_k = 1$ then the fit is liable to overfit. Correct for this by using the modified targets $t_+ = \frac{N_+ + 1}{N_+ + 2}$ and $t_- = \frac{1}{N_- + 2}$ in place of 1 and 0, where $N_+$ and $N_-$ are the numbers of positive and negative examples.
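A minimal sketch of the fit, assuming scipy is available; the helper name is illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def platt_fit(x, y):
    """Fit p(target | x) = 1 / (1 + exp(A*x + B)) by maximum likelihood."""
    x, y = np.asarray(x, float), np.asarray(y, int)
    N_pos, N_neg = y.sum(), len(y) - y.sum()
    # Platt's modified targets instead of hard 0/1 labels.
    t = np.where(y == 1, (N_pos + 1.0) / (N_pos + 2.0), 1.0 / (N_neg + 2.0))

    def nll(params):                      # negative log-likelihood
        A, B = params
        p = 1.0 / (1.0 + np.exp(A * x + B))
        p = np.clip(p, 1e-12, 1.0 - 1e-12)
        return -(t * np.log(p) + (1 - t) * np.log(1 - p)).sum()

    return minimize(nll, x0=[-1.0, 0.0]).x    # optimal (A, B)
```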

Likelihood. Histogram
Platt calibration assumes a likelihood function of known shape. If we do not know the shape of the function, we may simply define it as a discrete curve, or histogram. In this case we quantize the edge values and place them in histogram bins. In a given bin b we count the number of edge values n(b) which fall in the bin and the number of times a target is detected there, $n_{target}(b)$. The likelihood function is then $\hat{p}(\text{target} \mid b) = n_{target}(b) / n(b)$.
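A small sketch of this binned estimate (the bin count and names are illustrative):

```python
import numpy as np

def histogram_calibrate(x, y, n_bins=10):
    """Estimate p(target | bin) as a count ratio over quantized edge values."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    n_total = np.bincount(idx, minlength=n_bins).astype(float)
    n_target = np.bincount(idx, weights=y, minlength=n_bins)
    with np.errstate(divide="ignore", invalid="ignore"):
        return edges, n_target / n_total      # NaN marks empty bins
```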

Likelihood. Isotonic Regression
Isotonic regression assumes the likelihood curve is monotonically increasing (or decreasing). It therefore represents an intermediate case between Platt calibration and histogram calibration. A simple algorithm for isotonic curve fitting is PAV (the pair-adjacent-violators algorithm).
[Figure: monotonically increasing likelihood curve of unknown shape.]

Likelihood. Isotonic Regression
Find the monotonically increasing function f(x) which minimizes $\sum_{k=1}^{K} \left(y_k - f(x_k)\right)^2$. The PAV algorithm works iteratively as follows: arrange the $x_k$ such that $x_1 \le x_2 \le \dots \le x_K$ and initialize $f(x_k) = y_k$. If f is isotonic then $f^* = f$; stop. If f is not isotonic then there must exist a label l such that $f(x_l) > f(x_{l+1})$. Eliminate this pair by creating a single pooled entry with the (weighted) average value, which is locally isotonic; repeat until no violations remain.
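A minimal sketch of PAV for a non-decreasing fit; weights track how many entries each pool has absorbed, so pooled values are proper averages:

```python
import numpy as np

def pav(y):
    """Isotonic (non-decreasing) least-squares fit; y assumed sorted by x."""
    vals = [float(v) for v in y]
    wts = [1.0] * len(vals)
    i = 0
    while i < len(vals) - 1:
        if vals[i] > vals[i + 1]:                  # adjacent violation
            w = wts[i] + wts[i + 1]                # pool the pair into one
            v = (wts[i] * vals[i] + wts[i + 1] * vals[i + 1]) / w
            vals[i:i + 2], wts[i:i + 2] = [v], [w]
            i = max(i - 1, 0)                      # pooling may ripple backwards
        else:
            i += 1
    return np.repeat(vals, np.array(wts, dtype=int))   # expand pools

print(pav([0, 0, 1, 0, 1]))   # -> [0.  0.  0.5 0.5 1. ]
```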

Likelihood. Isotonic Regression
[Worked-example table of entry scores, initial values and iterations omitted in transcript.] In the first iteration, entries 12 and 13 are removed by pooling the two entries together and giving them the value 0.5. This introduces a new violation between entry 11 and the pooled group 12-13, which are pooled together, forming a pool of 3 entries with the value 0.33.

Likelihood. Isotonic Regression
So far we have considered two-class likelihood estimation. How can we generalize to problems with more than two classes? Project.