1 Semantic Alignment Spring 2009 Ben-Gurion University of the Negev

2 Sensor Fusion Spring 2009 Instructor Dr. H. B Mitchell email: harveymitchell@walla.co.il

3 Sensor Fusion Spring 2009 Semantic Alignment For fusion to take place, the inputs must be converted into a common representational format. Semantic alignment requires that all measurements refer to the same object or phenomenon.

4 Sensor Fusion Spring 2009 Semantic Alignment: Image Fusion Semantic alignment is generally not required if the images are captured by cameras of the same type. In fact, even sensors of different types are often regarded as measuring the same phenomenon and are therefore treated as semantically aligned. Example: infra-red and visible-light cameras.

5 Sensor Fusion Spring 2009 Semantic Alignment: Feature Map Fusion Multiple feature maps are generated by:
- The same feature operator acting on multiple input images. In this case the feature maps refer to the same phenomenon and no semantic alignment is required.
- Multiple feature operators acting on a single input image. If the feature operators all measure the same object or phenomenon (but using different algorithms), then no semantic alignment is required.
- Multiple feature operators acting on a single input image. If the feature operators measure different objects or phenomena, then semantic alignment is required.

6 Sensor Fusion Spring 2009 Semantic Alignment: Feature Map Fusion Example: Canny and Sobel edge operators acting on the same input image. Both feature operators refer to the same phenomenon, so semantic alignment is not required. However, radiometric normalization is required: the dynamic ranges of the Sobel and Canny operators are very different (see the sketch below).
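A minimal sketch of such a radiometric normalization, assuming each feature map is a NumPy array of operator responses; min-max scaling to [0, 1] is used here purely for illustration and is not prescribed by the slides.

```python
import numpy as np

def minmax_normalize(feature_map: np.ndarray) -> np.ndarray:
    """Rescale a feature map to the common range [0, 1]."""
    lo, hi = feature_map.min(), feature_map.max()
    if hi == lo:                      # constant map: avoid division by zero
        return np.zeros_like(feature_map, dtype=float)
    return (feature_map - lo) / (hi - lo)

# Hypothetical usage: bring Sobel and Canny responses onto the same
# radiometric scale before fusing them.
# sobel_norm = minmax_normalize(sobel_map)
# canny_norm = minmax_normalize(canny_map)
```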

7 Sensor Fusion Spring 2009 Semantic Alignment: Feature Map Fusion Example: an edge operator and a blob detector acting on the same input image. The feature operators refer to different phenomena, so semantic alignment is required. The theory of ATR suggests that edge and blob features are causally linked to “presence of target” scores or likelihoods. No radiometric alignment is required if we use the same semantic alignment algorithm for both.

8 Sensor Fusion Spring 2009 Semantic Alignment: Likelihood Feature map: F(m,n) = strength of the feature operator at pixel (m,n). Likelihood: L(m,n) = likelihood or probability that a target exists at pixel (m,n). If we have training data with ground truth, then we can learn the likelihood function L(m,n).

9 Sensor Fusion Spring 2009 Platt Calibration Given training data: K examples of edge values x_1, ..., x_K with corresponding ground-truth labels y_1, ..., y_K. Suppose the likelihood curve follows a sigmoid curve, L(x) = 1 / (1 + exp(A·x + B)). Find the optimum sigmoid curve (the parameters A and B) using the method of maximum likelihood.
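A minimal sketch of Platt calibration, assuming 1-D arrays of scores and binary ground-truth labels; plain gradient descent on the negative log-likelihood is used here for simplicity, and the slides do not prescribe a particular optimizer.

```python
import numpy as np

def platt_calibrate(scores, labels, lr=0.01, n_iter=5000):
    """Fit L(x) = 1 / (1 + exp(A*x + B)) to binary ground truth by
    maximum likelihood (gradient descent on the negative log-likelihood)."""
    x = np.asarray(scores, dtype=float)
    y = np.asarray(labels, dtype=float)
    A, B = 0.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(A * x + B))        # predicted likelihoods
        # gradients of the negative log-likelihood w.r.t. A and B
        grad_A = np.mean((p - y) * (-x))
        grad_B = np.mean(-(p - y))
        A -= lr * grad_A
        B -= lr * grad_B
    return A, B

# Hypothetical usage:
# A, B = platt_calibrate(edge_strengths, ground_truth)
# L = 1.0 / (1.0 + np.exp(A * new_scores + B))
```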

10 Sensor Fusion Spring 2009 Histogram Calibration Given training data: K examples of edge values x_1, ..., x_K with corresponding ground-truth labels y_1, ..., y_K. Assume no knowledge regarding the shape of the likelihood curve. Divide the edge values into bins; the likelihood for each bin is then the relative number of true (ground-truth positive) examples falling in that bin.
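A minimal sketch of histogram calibration under the same assumed score/label arrays; the bin count and equal-width binning are illustrative choices, not specified in the slides.

```python
import numpy as np

def histogram_calibrate(scores, labels, n_bins=10):
    """Estimate the likelihood per bin as the fraction of ground-truth
    positive examples among the training scores falling in that bin."""
    x = np.asarray(scores, dtype=float)
    y = np.asarray(labels, dtype=float)
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    bin_idx = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    likelihood = np.zeros(n_bins)
    for b in range(n_bins):
        in_bin = bin_idx == b
        if in_bin.any():
            likelihood[b] = y[in_bin].mean()   # fraction of positives in the bin
    return edges, likelihood
```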

11 Sensor Fusion Spring 2009 Likelihood: Isotonic Regression Isotonic regression assumes the likelihood curve is monotonically increasing (or decreasing) but of otherwise unknown shape. It therefore represents an intermediate case between Platt calibration and histogram calibration. A simple algorithm for isotonic curve fitting is PAV (the Pair-Adjacent Violators algorithm).

12 Sensor Fusion Spring 2009 Likelihood: Isotonic Regression Find the monotonically increasing function f which minimizes the squared error Σ_k (y_k − f(x_k))². Use the PAV algorithm, which works iteratively as follows: Arrange the training pairs such that x_1 ≤ x_2 ≤ ... ≤ x_K and initialize f(x_k) = y_k. If f is isotonic, then f* = f and stop. If f is not isotonic, then there must exist an index l such that f(x_{l−1}) > f(x_l). Eliminate this pair by pooling it into a single entry whose value is the average of the pooled entries, which is now isotonic, and repeat.
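A minimal sketch of the PAV algorithm under the same assumed score/label arrays; it returns the scores in sorted order together with the fitted monotone likelihood values.

```python
import numpy as np

def pav(scores, labels):
    """Pair-Adjacent Violators: fit a monotonically increasing likelihood
    by repeatedly pooling adjacent violating entries into their mean."""
    order = np.argsort(scores)
    y = np.asarray(labels, dtype=float)[order]
    blocks = [[v, 1] for v in y]          # each block: [sum of values, count]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] / blocks[i][1] > blocks[i + 1][0] / blocks[i + 1][1]:
            # adjacent violation: pool the two blocks, then re-check backwards
            blocks[i][0] += blocks[i + 1][0]
            blocks[i][1] += blocks[i + 1][1]
            del blocks[i + 1]
            i = max(i - 1, 0)
        else:
            i += 1
    # expand the blocks back to one fitted value per training example
    fitted = np.concatenate([[s / n] * n for s, n in blocks])
    return np.asarray(scores)[order], fitted
```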

13 Sensor Fusion Spring 2009 Likelihood: Isotonic Regression [Table: worked PAV example with columns #, score, initial value and successive iterations.] In the first iteration, entries 12 and 13 are removed by pooling the two entries together and giving them the value 0.5. This introduces a new violation between entry 11 and the group 12-13, which are pooled together, forming a pool of 3 entries with the value 0.33.

14 Sensor Fusion Spring 2009 Semantic Alignment: Decision Map Fusion Multiple decision maps are generated by:
- The same decision operator acting on multiple feature maps. If the feature maps are semantically equivalent, then the decision maps are semantically equivalent and no semantic alignment is required.
- If the feature maps are not semantically equivalent, then the decision maps are also not semantically equivalent and semantic alignment is required.
- Multiple decision operators acting on a single feature map. If the decision operators all refer to the same object or phenomenon, then the decision maps are semantically equivalent and no semantic alignment is required.
- If the decision operators refer to different objects or phenomena, then semantic alignment is required.

15 Sensor Fusion Spring 2009 Association Given two decision maps A and B, let a_h be the h-th label in A and b_k be the k-th label in B. Often a one-to-one relationship exists between the a_h and the b_k. Finding this relationship is often the decision fusion itself.

17 Sensor Fusion Spring 2009 Association Given a test target and a model: for each point on the boundary of the test target we wish to find the corresponding point on the boundary of the model. From these correspondences we can estimate the aligning transformation and measure the similarity between model and target.

18 Sensor Fusion Spring 2009 Association Compute the matching cost C_ij between each point i on the test target and each point j on the model using the Chi-Squared distance between their descriptor histograms. Recover the correspondences by solving the linear assignment problem with costs C_ij [Jonker & Volgenant 1987]. The result associates each point on the test target with a point on the model.
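A minimal sketch of this step, assuming each boundary point is described by a histogram (target_hists and model_hists are hypothetical arrays of such descriptors); SciPy's linear_sum_assignment is used here as a convenient stand-in for the Jonker-Volgenant solver cited on the slide.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def chi2_cost(g, h, eps=1e-9):
    """Chi-Squared distance between two descriptor histograms."""
    return 0.5 * np.sum((g - h) ** 2 / (g + h + eps))

def associate(target_hists, model_hists):
    """Build the cost matrix C_ij and solve the linear assignment problem."""
    C = np.array([[chi2_cost(g, h) for h in model_hists] for g in target_hists])
    rows, cols = linear_sum_assignment(C)   # target point rows[i] <-> model point cols[i]
    return list(zip(rows, cols)), C[rows, cols].sum()
```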

19 Sensor Fusion Spring 2009 Assignment: Key Point Association Given two images A and B, we may perform spatial alignment by matching key points in A with their corresponding key points in B.

20 Sensor Fusion Spring 2009 Image Alignment In spatial alignment the assignment problem is made difficult by the large number of key points in A which have no corresponding key point in B, and vice versa (see the sketch below).
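A minimal sketch of key-point association with OpenCV, chosen here only as a convenient illustration (the slides do not specify a detector); Lowe's ratio test is one common way of discarding key points that have no true counterpart in the other image.

```python
import cv2

def match_keypoints(img_a, img_b, ratio=0.75):
    """Detect ORB key points in both images and keep only matches that
    pass the ratio test, discarding key points with no clear counterpart."""
    orb = cv2.ORB_create()
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = []
    for pair in matcher.knnMatch(des_a, des_b, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            m = pair[0]
            good.append((kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt))
    return good   # list of ((x_a, y_a), (x_b, y_b)) corresponding points
```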

21 Sensor Fusion Spring 2009 Association: Multiple Sensor Tracking Two targets, 1 and 2, are detected by two sensors, A and B, which track them separately. Ideally we have two tracks from A (A1 and A2) and two tracks from B (B1 and B2). We may only combine them (and obtain more accurate tracks) if we can correctly assign A1 and A2 to B1 and B2. With false and missing tracks the task becomes very difficult.
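A minimal sketch of track-to-track association under assumed 2-D track position estimates; the Euclidean cost, the gating threshold, and the way over-distant pairs are dropped are illustrative choices, not prescribed by the slides.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_tracks(tracks_a, tracks_b, gate=5.0):
    """Assign sensor-A tracks to sensor-B tracks by solving an assignment
    problem on the distances between their position estimates. Pairs whose
    distance exceeds the gate are left unmatched, which is one simple way
    of coping with false or missing tracks."""
    A = np.asarray(tracks_a, dtype=float)   # shape (Na, 2): one (x, y) per track
    B = np.asarray(tracks_b, dtype=float)   # shape (Nb, 2)
    cost = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= gate]

# Hypothetical usage: A1/A2 near (0, 0) and (10, 0), B1/B2 near the same points.
# associate_tracks([(0, 0), (10, 0)], [(0.3, 0.1), (9.8, -0.2)])  ->  [(0, 0), (1, 1)]
```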

22 Sensor Fusion Spring 2009 Binary Map Fusion Given K binary maps B_1, ..., B_K which we suppose are semantically equivalent, i.e. the labels {0,1} in one map correspond to the labels {0,1} in every other map. We may fuse them together by voting: a pixel receives the label chosen by the majority of the maps. However, this tends to give an image with a broken appearance.
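A minimal sketch of majority-vote fusion, assuming the K binary maps are stacked into a single NumPy array of shape (K, H, W); the tie-breaking rule (ties go to label 1) is an arbitrary illustrative choice.

```python
import numpy as np

def vote_fusion(binary_maps):
    """Fuse K semantically equivalent binary maps by majority voting.
    binary_maps has shape (K, H, W) with values in {0, 1}."""
    maps = np.asarray(binary_maps)
    votes = maps.sum(axis=0)                          # number of maps voting 1
    return (votes >= maps.shape[0] / 2.0).astype(np.uint8)
```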

23 Sensor Fusion Spring 2009 Binary Map Fusion We may instead fuse using distance maps: d(i,j|1) = Euclidean distance from pixel (i,j) to the nearest label-1 (black) pixel; d(i,j|0) = Euclidean distance from pixel (i,j) to the nearest label-0 (white) pixel. Add the distance maps over the K inputs and then, for each pixel, choose the label with the smallest total distance.
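A minimal sketch of distance-map fusion using SciPy's Euclidean distance transform; treating label 1 as the "black" pixels follows the slide's pairing of d(i,j|1) with the nearest black pixel and is otherwise an assumption.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_map_fusion(binary_maps):
    """Fuse K binary maps: for each map compute the distance to the nearest
    label-1 pixel and to the nearest label-0 pixel, sum these distances over
    the maps, and give each pixel the label with the smaller total distance."""
    maps = np.asarray(binary_maps).astype(bool)               # shape (K, H, W)
    d_to_1 = sum(distance_transform_edt(~b) for b in maps)    # distance to nearest 1
    d_to_0 = sum(distance_transform_edt(b) for b in maps)     # distance to nearest 0
    return (d_to_1 < d_to_0).astype(np.uint8)
```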
