Published by Theodora Hardy; modified over 2 years ago.

1
Segmentation via Maximum Entropy Model

2
Goals
– Is it possible to learn the segmentation problem automatically?
– Use a model frequently applied in NLP tasks: the Maximum Entropy model.
– Find features suited to the segmentation task.

3
Segmentation as a tagging problem
– Attach to each pixel one of two tags: it is an edge between segments, or it is not.
– Use a tagged set of images to build a statistical model.
– Given a new image, find the most probable tag for each pixel under this model.

4
Maximum Entropy
– We want a statistical model that makes as few assumptions as possible.
– Entropy measures the uncertainty of a distribution: its level of "surprise".
– So we look for the statistical model with the highest possible entropy.
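The entropy notion on this slide can be made concrete with a short sketch (the example distributions are illustrative, not from the presentation):

```python
import math

def entropy(dist):
    """Shannon entropy H(p) = -sum_x p(x) * log2 p(x) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# The uniform distribution is the most "surprising" one -- highest entropy:
print(entropy([0.5, 0.5]))   # 1.0 bit
print(entropy([0.9, 0.1]))   # lower entropy: the outcome is more predictable
```

Maximizing entropy therefore means staying as close to uniform as the constraints allow, i.e. assuming nothing beyond what the training data forces.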

5
Maximum Entropy – Definitions
– Event: a pair (p, t) where p is a pixel and t is its tag.
– Feature: an elementary piece of evidence that links p to t. A feature has a real value.
– The decision about a pixel is based only on the features active at that pixel.
– We look for the conditional probability P(t | p, f).

6
Maximum Entropy – building the model
– We use a tagged training set to build the statistical distribution.
– Constraints on the distribution: the expectation of each feature under the model must equal its expectation in the training set.
– Among all distributions satisfying the constraints, we choose the one with maximum entropy.
– This yields an optimization problem that can be solved using Lagrange multipliers (the feature weights).
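Solving this constrained problem yields a log-linear model: the probability of a tag is an exponential of the weighted sum of the active features, normalized over all tags. A minimal sketch, with hypothetical feature names and weights (the presentation does not list its actual features or learned multipliers):

```python
import math

# Hypothetical learned weights (Lagrange multipliers), indexed by (feature, tag).
weights = {
    ("high_gradient", "edge"):     1.8,
    ("high_gradient", "non-edge"): -0.6,
    ("canny_fired",   "edge"):     1.2,
}

def p_tag(active_features, tag, tags=("edge", "non-edge")):
    """P(t | p) = exp(sum_i w_i * f_i(p, t)) / Z(p) -- the log-linear form
    that maximizing entropy under expectation constraints produces."""
    def score(t):
        return math.exp(sum(weights.get((f, t), 0.0) for f in active_features))
    return score(tag) / sum(score(t) for t in tags)

# A pixel where both evidence features fire is tagged "edge" with high probability:
print(p_tag({"high_gradient", "canny_fired"}, "edge"))
```

With no active features the model falls back to the uniform distribution over tags, which is exactly the maximum-entropy behavior.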

7
opennlp maxent
– A machine-learning package designed for NLP tasks, used for training maximum entropy models.
– All that is left is adapting it to work with images and finding a set of features suited to the segmentation task.

8
The Berkeley segmentation database
– Consists of 300 images and their corresponding segmentation maps.
– The segmentation maps were constructed by humans.
– The database is divided into a train set (200 images) and a test set (100 images).

9
The Berkeley segmentation database - example

10
Features
I focused on brightness and texture features (owing to technical problems with others):
– Brightness difference with surrounding pixels.
– Gradient magnitude using the Prewitt operators.
– Canny edge detection results.
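The Prewitt gradient-magnitude feature above can be sketched directly in NumPy; this is a minimal "valid"-mode implementation for illustration, not the presentation's actual code:

```python
import numpy as np

# Prewitt kernels for horizontal (PX) and vertical (PY) derivatives.
PX = np.array([[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]])
PY = PX.T

def prewitt_magnitude(img):
    """Gradient magnitude sqrt(Gx^2 + Gy^2) via 3x3 correlation ('valid' borders)."""
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * PX)
            gy[i, j] = np.sum(patch * PY)
    return np.hypot(gx, gy)

# A vertical brightness step produces a strong response along the boundary:
img = np.zeros((5, 6))
img[:, 3:] = 100.0
print(prewitt_magnitude(img))
```

High magnitudes mark candidate segment boundaries, which is why this value is a natural real-valued feature for the maxent model.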

11
Features – cont.
Local Binary Pattern measure: for each pixel p, create an 8-bit number b1 b2 b3 b4 b5 b6 b7 b8, where bi = 0 if neighbor i has a value less than or equal to p's value, and 1 otherwise. Neighbors are indexed clockwise starting from the top-left:

1 2 3
8   4
7 6 5

For example, the neighborhood

100 101 103
 40  50  80
 50  60  90

with center value 50 gives the pattern 1 1 1 1 1 1 0 0.
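The LBP computation above is a few lines of NumPy; this sketch reproduces the slide's worked example under the clockwise-from-top-left indexing:

```python
import numpy as np

# Neighbor offsets (row, col), clockwise from the top-left, matching b1..b8.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_code(img, r, c):
    """8-bit Local Binary Pattern: bit i is 1 iff neighbor i is strictly
    greater than the center pixel, 0 if less than or equal."""
    center = img[r, c]
    bits = ["1" if img[r + dr, c + dc] > center else "0" for dr, dc in OFFSETS]
    return "".join(bits)

# The slide's example neighborhood (center value 50):
patch = np.array([[100, 101, 103],
                  [ 40,  50,  80],
                  [ 50,  60,  90]])
print(lbp_code(patch, 1, 1))  # "11111100"
```

The resulting code summarizes local texture in a way that is invariant to monotonic brightness changes.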

12
Features – cont.
Laws Texture Measure. Basic 1-D texture vectors:
L5 = [ 1 4 6 4 1 ]
E5 = [ -1 -2 0 2 1 ]
S5 = [ -1 0 2 0 -1 ]
R5 = [ 1 -4 6 -4 1 ]
Creating a 9-feature vector for each pixel from the mask pairs:
L5*E5 / E5*L5, L5*R5 / R5*L5, E5*S5 / S5*E5, S5*R5 / R5*S5, L5*S5 / S5*L5, E5*R5 / R5*E5, R5*R5, S5*S5, E5*E5
For the current pixel, calculating the distance to surrounding pixels.
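A minimal sketch of the 9-dimensional Laws vector: the 5x5 masks are outer products of the 1-D vectors, symmetric pairs (e.g. L5*E5 and E5*L5) are averaged, and here the feature is taken as the absolute filter response on a single 5x5 patch. The full method usually convolves whole images and pools energy over a window; that simplification is mine, not the presentation's.

```python
import numpy as np

# Laws' 1-D basis vectors (Level, Edge, Spot, Ripple).
L5 = np.array([1, 4, 6, 4, 1])
E5 = np.array([-1, -2, 0, 2, 1])
S5 = np.array([-1, 0, 2, 0, -1])
R5 = np.array([1, -4, 6, -4, 1])

def law_mask(a, b):
    """5x5 mask: average of the two outer-product orientations a'b and b'a."""
    return (np.outer(a, b) + np.outer(b, a)) / 2.0

# The 9 masks listed on the slide.
MASKS = [law_mask(a, b) for a, b in
         [(L5, E5), (L5, R5), (E5, S5), (S5, R5), (L5, S5), (E5, R5),
          (R5, R5), (S5, S5), (E5, E5)]]

def laws_features(patch):
    """9-dimensional texture vector for one 5x5 patch: the absolute
    response of each mask (a minimal per-pixel 'energy')."""
    return np.array([abs(np.sum(patch * m)) for m in MASKS])

patch = np.random.default_rng(0).integers(0, 256, (5, 5))
print(laws_features(patch).shape)  # (9,)
```

Distances between these vectors at neighboring pixels then serve as the texture-difference feature mentioned on the slide.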

13
Results

17
Discussion
– The model is basic, but it managed to find segments to some extent.
– The best results occur where an object contrasts with its background, or where there is a radical difference in texture.
– The model succeeded in filtering out edges that are not segment boundaries. For example:

18
Discussion

19
Future Work
– More features.
– A package better suited to images.
– A search algorithm for finding the best tag sequence.
– Working with color images.
– Specialized learning: by one person, by category, etc.
