
1 Image Indexing and Retrieval using Moment Invariants Imran Ahmad School of Computer Science University of Windsor – Canada

2 Outline
Introduction
Shape-based Retrieval
Image Representation
Moment Invariants
Proposed Approach
Experimental Results
Conclusions

3 Introduction
Information characteristics and nature
  Multiple formats
  Complex types: image databases
  Time-dependent sequences: video databases

4 Introduction (contd.)
Still information
Dynamic information
  Temporal evolution of information
  Changes in features and characteristics

5 Introduction (contd.)
Image contents capture real-world information and unique features
Retrieval based on contents is therefore needed

6 Image Retrieval
Exact match
Similarity-based retrievals
  Color / texture similarity-based retrievals
  Spatial similarity-based retrievals
  Shape similarity-based retrievals

7 Shape-based Retrievals
Model-based object recognition approach
  Models based on global and local features
  An unknown object is compared against known ones
Data-driven approach
  An index is built for known shapes
  Search utilizes such indices

8 Image Representation
Images can be defined in terms of:
Global features
  Based on overall image composition
  Easier to compute
Local features
  Based on individual image components
  Incorporate spatial information
  Computationally expensive

9 Shape
How to define a shape?
A geometric property of a figure
Formal definition – independent of language
Described in terms of properties invariant under a group of coordinate transformations
Let χ(x, y) be a characteristic function such that χ(x, y) = 1 for points in the figure and χ(x, y) = 0 otherwise.

10 Shape (contd.)
Definition: Let G be a group of coordinate transformations. A function I is invariant w.r.t. G if it yields the same value for a characteristic function and for its image under every transformation in G.
Definition: A shape of a figure is a pair (χ, I), where I is invariant under the group of coordinate transformations G.

11 Moments
Can capture global information about an image
Do not require closed boundaries
Regular moments – introduced by Hu
  Invariant to translation, rotation and scaling
Algebraic moments
  Do not depend on actual values of the coefficients
Central moments
  Equivalent to regular moments of an image that has been shifted so that its centroid lies at the origin

12 Moments (contd.)
Applications
  Image reconstruction
  Shape identification (e.g., aircraft)
  Shape recognition
  Classifiers

13 Moments (contd.)
Let f(x, y) be the image intensity distribution function, and let p + q be the order of the moment (for p, q = 0, 1, 2, …). The algebraic moments of f are given as:
  m_pq = ∫∫ x^p y^q f(x, y) dx dy
For a digital image of size M x N:
  m_pq = Σ_x Σ_y x^p y^q f(x, y), with the sums taken over all M x N pixels

14 Moments (contd.)
For centralized moments, we can write:
  μ_pq = ∫∫ (x − x̄)^p (y − ȳ)^q f(x, y) dx dy, where x̄ = m10 / m00 and ȳ = m01 / m00
with its digital form as:
  μ_pq = Σ_x Σ_y (x − x̄)^p (y − ȳ)^q f(x, y)

15 Moments (contd.)
Central moments up to 2nd order are defined as:
  μ00 = m00,  μ10 = μ01 = 0
  μ11 = m11 − x̄ m01
  μ20 = m20 − x̄ m10
  μ02 = m02 − ȳ m01
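The following is a minimal C++ sketch (an illustration only, not the presentation's code) of how the discrete raw and central moments above might be accumulated for a small image; the image layout, function names, and sample values are assumptions.

```cpp
#include <cstdio>
#include <vector>

// Raw moments, centroid, and 2nd-order central moments of an M x N
// image, following the discrete definitions above (illustrative only).
struct Moments {
    double m00, m10, m01;      // raw moments
    double mu20, mu02, mu11;   // central moments
};

Moments computeMoments(const std::vector<std::vector<double>>& img) {
    Moments mo{};
    for (std::size_t y = 0; y < img.size(); ++y)
        for (std::size_t x = 0; x < img[y].size(); ++x) {
            double f = img[y][x];
            mo.m00 += f;
            mo.m10 += x * f;
            mo.m01 += y * f;
        }
    double xb = mo.m10 / mo.m00, yb = mo.m01 / mo.m00;   // centroid
    for (std::size_t y = 0; y < img.size(); ++y)
        for (std::size_t x = 0; x < img[y].size(); ++x) {
            double f = img[y][x];
            mo.mu20 += (x - xb) * (x - xb) * f;
            mo.mu02 += (y - yb) * (y - yb) * f;
            mo.mu11 += (x - xb) * (y - yb) * f;
        }
    return mo;
}

int main() {
    // Tiny hypothetical binary "image": a 2x2 block of foreground pixels.
    std::vector<std::vector<double>> img(4, std::vector<double>(4, 0.0));
    img[1][1] = img[1][2] = img[2][1] = img[2][2] = 1.0;
    Moments mo = computeMoments(img);
    std::printf("m00=%.1f mu20=%.2f mu02=%.2f mu11=%.2f\n",
                mo.m00, mo.mu20, mo.mu02, mo.mu11);
}
```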

16 Moments (contd.)

17 Moments (contd.)
Algebraic moment invariants by Hu, expressed in terms of the normalized central moments η_pq = μ_pq / μ00^(1 + (p+q)/2) for p + q ≥ 2:
  φ1 = η20 + η02
  φ2 = (η20 − η02)^2 + 4η11^2
  φ3 = (η30 − 3η12)^2 + (3η21 − η03)^2
  φ4 = (η30 + η12)^2 + (η21 + η03)^2
  φ5 = (η30 − 3η12)(η30 + η12)[(η30 + η12)^2 − 3(η21 + η03)^2] + (3η21 − η03)(η21 + η03)[3(η30 + η12)^2 − (η21 + η03)^2]
  φ6 = (η20 − η02)[(η30 + η12)^2 − (η21 + η03)^2] + 4η11(η30 + η12)(η21 + η03)
  φ7 = (3η21 − η03)(η30 + η12)[(η30 + η12)^2 − 3(η21 + η03)^2] − (η30 − 3η12)(η21 + η03)[3(η30 + η12)^2 − (η21 + η03)^2]
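A short C++ sketch, again an assumption-laden illustration rather than the author's implementation, showing how the normalized central moments and the first two invariants above could be evaluated from central moments:

```cpp
#include <cmath>
#include <cstdio>

// Normalized central moment eta_pq = mu_pq / mu00^(1 + (p+q)/2),
// then the first two Hu invariants phi1 and phi2 from the list above.
double eta(double mu_pq, double mu00, int p, int q) {
    return mu_pq / std::pow(mu00, 1.0 + (p + q) / 2.0);
}

int main() {
    // Example central moments (hypothetical values for illustration).
    double mu00 = 4.0, mu20 = 1.0, mu02 = 1.0, mu11 = 0.0;
    double n20 = eta(mu20, mu00, 2, 0);
    double n02 = eta(mu02, mu00, 0, 2);
    double n11 = eta(mu11, mu00, 1, 1);
    double phi1 = n20 + n02;                                    // phi1
    double phi2 = (n20 - n02) * (n20 - n02) + 4.0 * n11 * n11;  // phi2
    std::printf("phi1=%f phi2=%f\n", phi1, phi2);
}
```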

18 Moments (contd.)
Moment invariants: the time complexity of computing the moment invariants is directly proportional to the number of pixels in the silhouette or forming the boundary.
Let N be the perimeter of the closed boundary.
To calculate 2nd-order moments, we need 4(N−1) real additions and 3N real multiplications.
To calculate 3rd-order moments, we need 6(N−1) real additions and 12N real multiplications.
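For example, for a closed boundary of N = 100 pixels this works out to 4 x 99 = 396 real additions and 3 x 100 = 300 real multiplications for the 2nd-order moments, and 6 x 99 = 594 additions and 12 x 100 = 1200 multiplications for the 3rd-order moments.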

19 Moments (contd.)
A sample binary image and its moment invariants:
F = {0.259179343138514, 0.00801986505055, 0.012354456089699, 0.00827468547136, -0.000000750728194, -0.00005777268349, 0.00000025369430}

20 Clustering
Main categories
Hierarchical methods
  A nested sequence of partitions
  Involve multiple iterations to cluster objects
Non-hierarchical methods
  Assume the desired number of clusters at the beginning
  Data is reallocated until a particular clustering criterion is optimized
  Objects in a cluster are more similar to each other than to objects in other clusters

21 Clustering (contd.)
K-means clustering
Given a set of N objects in d-dimensional space R^d and an integer k, determine a set of k points in R^d, called centers, so as to minimize the mean squared distance from each data point to its nearest center.
This objective is also known as the squared-error distortion.
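Written out with the slide's terminology, the squared-error distortion for data points p_1, …, p_N and centers c_1, …, c_k is
  D = (1/N) Σ_{i=1..N} min_{1 ≤ j ≤ k} ||p_i − c_j||^2.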

22 K-means Clustering (contd.)
ALGORITHM: K-means clustering
1. For pattern vectors P1, P2, …, Pm, take the first k pattern vectors as the initial clusters C1 = P1, C2 = P2, C3 = P3, …, Ck = Pk, where m >= k
2. Assign each pattern vector to the nearest cluster
3. Compute the new cluster means
4. If the new cluster means equal the old cluster means, stop; else go to step 2
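A compact C++ sketch of the four steps above, assuming 2-D pattern vectors as in the later KCT example; this is an illustrative implementation, not the one used in the presentation.

```cpp
#include <cstdio>
#include <vector>

struct Pt { double x, y; };

// Plain k-means following the four steps above: seed with the first k
// patterns, assign each pattern to its nearest center, recompute the
// means, and repeat until no assignment changes.
std::vector<int> kmeans(const std::vector<Pt>& pts, int k, std::vector<Pt>& centers) {
    centers.assign(pts.begin(), pts.begin() + k);            // step 1
    std::vector<int> label(pts.size(), -1);
    bool changed = true;
    while (changed) {
        changed = false;
        for (std::size_t i = 0; i < pts.size(); ++i) {       // step 2
            int best = 0;
            double bestD = 1e300;
            for (int c = 0; c < k; ++c) {
                double dx = pts[i].x - centers[c].x;
                double dy = pts[i].y - centers[c].y;
                double d = dx * dx + dy * dy;
                if (d < bestD) { bestD = d; best = c; }
            }
            if (label[i] != best) { label[i] = best; changed = true; }
        }
        for (int c = 0; c < k; ++c) {                        // step 3
            double sx = 0, sy = 0;
            int n = 0;
            for (std::size_t i = 0; i < pts.size(); ++i)
                if (label[i] == c) { sx += pts[i].x; sy += pts[i].y; ++n; }
            if (n > 0) centers[c] = { sx / n, sy / n };
        }
    }                                                        // step 4: stop when stable
    return label;
}

int main() {
    // A few 2-D pattern vectors (drawn from the later KCT example), k = 2.
    std::vector<Pt> pts = { {-3.0, 3.0}, {-2.5, 3.0}, {-2.0, 2.0},
                            { 2.5, -0.5}, { 2.5, -1.0}, { 4.5, -1.0} };
    std::vector<Pt> centers;
    std::vector<int> label = kmeans(pts, 2, centers);
    for (std::size_t i = 0; i < pts.size(); ++i)
        std::printf("(%.1f, %.1f) -> cluster %d\n", pts[i].x, pts[i].y, label[i]);
}
```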

23 K-means Clustering (contd.)
K-means Clustering Tree (KCT)
The KCT is a hierarchical data structure resembling a combination of a binary search tree and a B+-tree.
Data pointers are stored only at the leaf nodes of the tree.
Non-leaf nodes contain weight vectors.
Non-leaf nodes have links to other nodes:
  Unidirectional links
  Left child nodes hold lesser threshold values
  Right child nodes hold greater threshold values

24 K-means Clustering (contd.)
K-means Clustering Tree (KCT) – contd.
The KCT has a single root node.
Leaf nodes of the KCT contain the features of an image shape and a pointer to the images.
Non-leaf nodes of the tree correspond to the other levels of the index.
The nodes correspond to disk pages, and the structure is designed so that a search requires visiting only a small number of pages.
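One possible node layout for the KCT as just described, in C++; this is a sketch only, with field names and types that are assumptions rather than definitions from the presentation.

```cpp
#include <string>
#include <vector>

// Illustrative node layout for the K-means Clustering Tree (KCT).
struct KCTNode {
    bool isLeaf = false;
    // Non-leaf: trained weight vector used to decide the branch, plus
    // unidirectional links to the two children (left = lesser threshold
    // values, right = greater, per the ordering described above).
    std::vector<double> weights;
    KCTNode* left = nullptr;
    KCTNode* right = nullptr;
    // Leaf: feature vectors of the stored shapes and pointers (here,
    // file names) to the corresponding images.
    std::vector<std::vector<double>> features;
    std::vector<std::string> imageRefs;
};
```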

25 K-means clustering (contd.)
KCT tree creation
Assume a 2-D feature vector for each of 19 objects:
{ {-3.0, 3.0}, {-2.5, 3.0}, {-2.0, 2.0}, {-1.5, 2.0}, {-3.5, 1.5}, {-4.0, 1.0}, {-3.0, 0.5}, {-3.0, 0.0}, {-1.5, 0.5}, { 2.5, -0.5}, { 2.5, -1.0}, { 4.5, -1.0}, { 0.5, -1.5}, { 1.0, -2.0}, { 3.0, -2.0}, { 4.0, -2.0}, { 0.5, -2.5}, { 1.0, -2.5}, { 2.0, -3.0} }

26 K-means clustering (contd.)
The same 2-dimensional numerical values; each object's value will be stored in a leaf node of the KCT.

Object:   1     2     3     4     5     6     7     8     9
x:       -3.0  -2.5  -2.0  -1.5  -3.5  -4.0  -3.0  -3.0  -1.5
y:        3.0   3.0   2.0   2.0   1.5   1.0   0.5   0.0   0.5

Object:  10    11    12    13    14    15    16    17    18    19
x:        2.5   2.5   4.5   0.5   1.0   3.0   4.0   0.5   1.0   2.0
y:       -0.5  -1.0  -1.0  -1.5  -2.0  -2.0  -2.0  -2.5  -2.5  -3.0

27 K-means clustering (contd.)
Corresponding KCT tree

28 K-means Clustering (contd.)
Insertion possibilities
1. A leaf node is full and a new object must be inserted into it. The insertion causes an overflow, so the node must split: a new non-leaf node and two new leaf nodes are constructed and linked into that part of the tree. The new non-leaf node must also be trained to obtain its weight values.
2. The node into which the object is to be inserted holds only one object. In this case, the object can simply be added to the node without any additional cost.

29 K-means Clustering (contd.)
Deletion possibilities
1. The sibling node is a full leaf node or a non-leaf node and an object is deleted from a full leaf node: delete the object from that node.
2. The sibling node is a leaf node holding one object and an object is deleted from a full leaf node: delete the object from that node, combine the two leaf nodes, delete their parent non-leaf node, and connect the remaining full leaf node to the deleted non-leaf node's parent node.

30 K-means Clustering (contd.)
Deletion possibilities – contd.
3. The sibling node is a full leaf node and an object is deleted from a leaf node holding only that object: delete the object from that node, delete its parent non-leaf node, and connect the remaining full leaf node to the deleted non-leaf node's parent node.
4. The sibling node is a non-leaf node and an object is deleted from a leaf node holding one object: delete the object from that node, delete its parent non-leaf node, and connect the deleted non-leaf node's child node to the deleted non-leaf node's parent node.

31 K-means Clustering (contd.)
Algorithm for retrieving images
// Search for an object with feature F using the KCT
1. b ← block containing the root node of the KCT
2. read block b
3. while (b is not a leaf node of the KCT) do
4.   next ← recall Backpropagation using the weights in block b
5.   b ← next
6.   read block b
7. search block b for the object most similar to feature F   // search leaf node
8. if found then
9.   read the index file block; display the images containing the object
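A hedged C++ sketch of this search loop. The presentation recalls a trained Backpropagation network at each non-leaf node; since its exact form is not given, a single-layer, perceptron-style recall is used here as a stand-in, and all names and fields are illustrative assumptions.

```cpp
#include <string>
#include <vector>

// Minimal node for this sketch (leaf vs. non-leaf), mirroring the
// earlier KCT layout; not the paper's actual structure.
struct Node {
    bool isLeaf = false;
    std::vector<double> weights;             // non-leaf: trained weights
    double bias = 0.0;
    Node* left = nullptr;
    Node* right = nullptr;
    std::vector<std::vector<double>> feats;  // leaf: stored feature vectors
    std::vector<std::string> images;         // leaf: image references
};

static double dist2(const std::vector<double>& a, const std::vector<double>& b) {
    double d = 0;
    for (std::size_t i = 0; i < a.size(); ++i) d += (a[i] - b[i]) * (a[i] - b[i]);
    return d;
}

// Descend from the root to a leaf (steps 3-6), then return the most
// similar stored image (step 7).
std::string search(Node* root, const std::vector<double>& F) {
    Node* b = root;
    while (!b->isLeaf) {
        double act = b->bias;                     // stand-in for network recall
        for (std::size_t i = 0; i < F.size(); ++i) act += b->weights[i] * F[i];
        b = (act < 0.0) ? b->left : b->right;     // branch on recalled output
    }
    std::size_t best = 0;
    for (std::size_t i = 1; i < b->feats.size(); ++i)
        if (dist2(F, b->feats[i]) < dist2(F, b->feats[best])) best = i;
    return b->images[best];
}
```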

32 Experimental Results
Experimental setup
Environment: PC with Microsoft Windows 98
Language: C / C++
Images
  Normalized to grayscale and 128 x 128
  Grayscale images converted to binary images for chain codes
Data set size
  100 original images
  5 variants involving translation, rotation and scaling
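As a small illustration of the preprocessing step (grayscale to binary before chain coding), a C++ sketch; the fixed threshold of 128 is an assumption, not a value stated in the presentation.

```cpp
#include <vector>

// Threshold a normalized 128x128 grayscale image (values 0-255) into a
// binary image prior to chain coding; the threshold is an assumption.
std::vector<std::vector<unsigned char>> binarize(
        const std::vector<std::vector<unsigned char>>& gray,
        unsigned char threshold = 128) {
    std::vector<std::vector<unsigned char>> bin(gray.size());
    for (std::size_t y = 0; y < gray.size(); ++y) {
        bin[y].resize(gray[y].size());
        for (std::size_t x = 0; x < gray[y].size(); ++x)
            bin[y][x] = (gray[y][x] >= threshold) ? 1 : 0;  // 1 = foreground
    }
    return bin;
}
```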

33 Experimental Results (contd.)
Sample image shapes (a–d) and their seven moment invariants:

Invariant        a              b              c              d
1                0.2136250052   0.2202478858   0.3404800296   0.6092586028
2                0.0007960114   0.0031965503   0.0836683672   0.2806988755
3                0.0061832619   0.0010778334   0.0009795464   0.0710062966
4                0.0000318972   0.0001001921   0.0004405216   0.0345626089
5               -0.0000000141  -0.0000000242   0.0000002893   0.0017118779
6               -0.0000008921   0.0000047618   0.0001274228   0.0183105689
7                0.0000000000   0.0000000222   0.0000000000   0.0000339550

34 Experimental Results (contd.)
A subset of grouping results using database images at the root level of the KCT. Objects in the top row occupy the left subtree, while the bottom-row objects form the right subtree.

35 Experimental Results (contd.)
Grouping results with database images at the 3rd level of the KCT. The top row of objects forms the left subtree of the KCT while the bottom row forms the right subtree.

36 Experimental Results (contd.)
Results of sample queries. The query shape is given in row 1, column 1 of each image, while the retrieved images include the query shape.

37 Experimental Results (contd.)

38 Conclusions
Presented a moment-invariant-based image indexing scheme.
Chain codes are used to reduce the size of the database.
Indexing is based on K-means clustering with k = 2 and Backpropagation to obtain the weights for each node.
Retrieval of images is based on finding the leaf node that contains similar images.
Limitation: small image collection and limited training data.

