Efficient Image Classification on Vertically Decomposed Data


1 Efficient Image Classification on Vertically Decomposed Data
Taufik Abidin, Aijuan Dong, Hongli Li, and William Perrizo
Computer Science, North Dakota State University
The 1st IEEE International Workshop on Multimedia Databases and Data Management (MDDM-06)

2 Outline
Image classification
The application of the SMART-TV algorithm to image classification
The SMART-TV algorithm
Experimental results
Summary

3 Image Classification
Why classify images?
The proliferation of digital images
The need to organize them into semantic categories for effective browsing and retrieval
Techniques for image classification: SVM, Bayesian, Neural Network, KNN

4 Image Classification Cont.
In this work, we focus on the KNN method
KNN is widely used in image classification: it is simple and easy to implement, and it gives good classification results
Problems: classification time is linear in the size of the image repository
When the repository is very large, containing millions of images, KNN is impractical
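The linear-scan cost described above can be seen in a minimal sketch of classic KNN (the feature values and labels here are made up for illustration; the distance computation touches every training image on every query):

```python
import numpy as np

def knn_classify(query, train_feats, train_labels, k=3):
    """Classic KNN: scans every training image, so cost is linear in repository size."""
    # Euclidean distance from the query to every training feature vector
    dists = np.linalg.norm(train_feats - query, axis=1)
    nearest = np.argsort(dists)[:k]          # indices of the k closest images
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)  # majority vote

# Tiny illustration with toy 2-D "features"
feats = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [0.9, 1.1]])
labels = ["beach", "beach", "tiger", "tiger"]
print(knn_classify(np.array([0.05, 0.0]), feats, labels, k=3))  # -> beach
```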

5 Our Contributions
We apply our recently developed classification algorithm, SMART-TV, to the image classification task and analyze its performance
We demonstrate that SMART-TV, a classification algorithm that uses the P-tree vertical data structure, is fast and scalable to very large image databases
We show that for Corel images a combination of color and texture features is a good way to represent the low-level content of the images

6 Image Preprocessing
We extracted color and texture features from the original pixels of the images
We created a 54-dimension color histogram in HSV (6x3x3) color space for the color features, and applied 8 multi-resolution Gabor filters (4 orientations and 2 scales) to extract the texture features of the images (see B.S. Manjunath, IEEE Trans. on Pattern Analysis and Machine Intelligence, 1996, for more detail about the filters)

7 Image Preprocessing Cont.
Color Features
Convert RGB to HSV; HSV corresponds closely to the way humans tend to perceive color
Each component is normalized to the range 0..1
Quantize the image into 54 bins, i.e. (6 x 3 x 3) bins
Record the frequency of the HSV values of the pixels in each image
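The quantization step can be sketched as follows; this is a minimal illustration of a 6x3x3 HSV histogram, not the authors' implementation (the exact bin boundaries are an assumption):

```python
import numpy as np
from colorsys import rgb_to_hsv

def hsv_histogram(rgb_image):
    """54-bin (6 x 3 x 3) HSV color histogram.

    rgb_image: H x W x 3 array with channel values in [0, 1].
    """
    hist = np.zeros(54)
    for px in rgb_image.reshape(-1, 3):
        h, s, v = rgb_to_hsv(*px)        # each HSV component lies in [0, 1]
        hb = min(int(h * 6), 5)          # 6 hue bins
        sb = min(int(s * 3), 2)          # 3 saturation bins
        vb = min(int(v * 3), 2)          # 3 value bins
        hist[hb * 9 + sb * 3 + vb] += 1  # record per-pixel bin frequency
    return hist / hist.sum()             # normalize to frequencies

img = np.random.default_rng(0).random((8, 8, 3))
feat = hsv_histogram(img)
print(feat.shape)  # -> (54,)
```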

8 Image Preprocessing Cont.
Texture Features
Transform the images into the frequency domain using the 8 generated filters (4 orientation and 2 scale parameters) and record the mean and the standard deviation of the pixels in each filtered image
This process produces 16 texture features for each image
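A rough sketch of this step is shown below. The Gabor kernel parameters (sigma, frequency, kernel size) are illustrative assumptions, not the Manjunath filter-bank design the slides cite; only the structure matters: 4 orientations x 2 scales, with mean and standard deviation of each response, giving 16 features.

```python
import numpy as np

def gabor_kernel(theta, scale, size=15):
    """A simple real-valued Gabor kernel at the given orientation and scale.
    sigma/frequency choices here are assumptions for illustration."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates by theta
    sigma, freq = 3.0 * scale, 0.25 / scale
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def gabor_features(gray):
    """Mean and std of 8 filter responses (4 orientations x 2 scales) -> 16 features."""
    feats = []
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):  # 4 orientations
        for scale in (1.0, 2.0):                            # 2 scales
            k = gabor_kernel(theta, scale)
            # convolve in the frequency domain (circular convolution via FFT)
            resp = np.abs(np.fft.ifft2(np.fft.fft2(gray) *
                                       np.fft.fft2(k, s=gray.shape)))
            feats += [resp.mean(), resp.std()]
    return np.array(feats)

gray = np.random.default_rng(1).random((32, 32))
print(gabor_features(gray).shape)  # -> (16,)
```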

9 Overview of SMART-TV
Preprocessing phase (on the large training set): compute the root counts, measure the TV of each object in each class, and store the root counts and TV values
Classifying phase (for an unclassified object): approximate the candidate set of NNs, search the k-nearest neighbors within the candidate set, and vote

10 SMART-TV Algorithm
SMART-TV: SMall Absolute diffeRence of ToTal Variation
Approximates a candidate set of nearest neighbors by examining the absolute difference between the total variation of each data object in the training set and the total variation of the unclassified object
The k-nearest neighbors are then searched within the candidate set
Computing total variation (TV): defined on the next slide

11 Total Variation
The total variation of a set X about the mean, μ, measures the total squared separation of the objects in X from μ, defined as follows:
TV(X, a) = Σ_{x∈X} (x − a)∘(x − a) = Σ_{x∈X} Σ_{i=1}^{d} (x_i − a_i)², evaluated at a = μ
[Figure: TV(X, a) plotted as a function of a, with its minimum at the mean]
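A small worked example of the definition above (toy numbers, chosen only to make the arithmetic easy to check):

```python
import numpy as np

def total_variation(X, a):
    """TV(X, a): sum of squared Euclidean separations of the rows of X from a."""
    return float(((X - a) ** 2).sum())

X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
mu = X.mean(axis=0)                 # mean = [3, 4]
# deviations are [-2,-2], [0,0], [2,2] -> 4+4+0+0+4+4 = 16
print(total_variation(X, mu))       # -> 16.0
```

As the figure on this slide suggests, TV(X, a) is minimized when a is the mean: shifting a away from mu can only increase the sum of squares.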

12 SMART-TV Algorithm

13 The Independency of RC
The root count operations are independent of the point about which the total variation is computed, which allows us to run the operations once in advance and retain the count results
In a classification task, the set of classes is known and unchanged; thus, the total variation of an object about its class can be pre-computed

14 Preprocessing Phase
Preprocessing:
Compute the root counts of each class Cj, where 1 ≤ j ≤ number of classes: O(kdb²), where k is the number of classes, d is the number of dimensions, and b is the bit-width
Compute the total variation of each training image about its class, for 1 ≤ j ≤ number of classes: O(n), where n is the number of images in the training set

15 Classifying Phase
Classifying:
For each class Cj, where 1 ≤ j ≤ number of classes, do:
a. Compute the total variation of the feature vector of the unclassified image about Cj
b. Find the hs images in Cj whose total variations have the smallest absolute differences from the total variation of the unclassified image
c. Store the IDs of these images in an array TVGapList

16 Classifying Phase (Cont.)
For each objectID_t, 1 ≤ t ≤ Len(TVGapList), where Len(TVGapList) equals hs times the total number of classes, retrieve the corresponding object features from the training set, measure the pairwise Euclidean distance between the candidate and the unclassified image, and determine the k nearest neighbors
Vote the class label for the unclassified image from the k nearest neighbors
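The two classifying-phase slides above can be sketched end to end. This is an illustrative reconstruction under assumptions: it omits the P-tree root-count machinery entirely and computes TV directly about each class mean, so it shows only the candidate-pruning logic (hs smallest TV gaps per class, then exact KNN over the candidates), not the authors' vertical-data implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy training set: two well-separated classes in a 4-D feature space
feats = np.vstack([rng.normal(0, 0.1, (20, 4)), rng.normal(1, 0.1, (20, 4))])
labels = np.array(["C1"] * 20 + ["C2"] * 20)
classes = ["C1", "C2"]
means = {c: feats[labels == c].mean(axis=0) for c in classes}

def tv(x, a):
    """Squared Euclidean separation of a single vector x from point a."""
    return float(((x - a) ** 2).sum())

# Preprocessing: TV of every training image about its own class mean
train_tv = {c: np.array([tv(x, means[c]) for x in feats[labels == c]])
            for c in classes}

def smart_tv_classify(q, hs=5, k=3):
    """Classifying phase (sketch): per class, keep the hs images whose TV
    differs least from the query's TV, then run exact KNN over candidates only."""
    cand_feats, cand_labels = [], []
    for c in classes:
        gap = np.abs(train_tv[c] - tv(q, means[c]))  # absolute TV differences
        keep = np.argsort(gap)[:hs]                  # hs smallest gaps in class c
        cand_feats.append(feats[labels == c][keep])
        cand_labels += [c] * len(keep)
    cand_feats = np.vstack(cand_feats)               # len = hs * number of classes
    dists = np.linalg.norm(cand_feats - q, axis=1)   # exact distances, candidates only
    votes = [cand_labels[i] for i in np.argsort(dists)[:k]]
    return max(set(votes), key=votes.count)          # majority vote

print(smart_tv_classify(np.full(4, 0.95)))  # -> C2
```

The point of the design is that the exact distance computation touches only hs x (number of classes) candidates instead of the whole repository.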

17 Dataset
We used Corel images (http://wang.ist.psu.edu/docs/related)
10 categories; originally, each category has 100 images
Number of feature attributes: 70 (54 from color and 16 from texture)
We randomly generated several bigger datasets to evaluate the speed and scalability of the algorithms
Testing set: 50 images, 5 from each category

18 Dataset Cont.

19 Experimental Results
Experimental setup: Intel P4 2.6 GHz CPU, 3.8 GB RAM, running Red Hat Linux
Classification accuracy comparison, per class C1–C10: SMART-TV (hs = 15, 25, 35) versus KNN, each evaluated at k = 3, 5, 7
[The per-class accuracy table did not survive extraction; the surviving values range from 0.43 to 0.96]

20 Example on Corel Dataset

21 Experimental Results Cont.
[Figures: loading time and classification time]

22 Summary
We have presented the SMART-TV algorithm, a classification algorithm that uses a vertical data structure, and applied it to the image classification task
We found that our algorithm is substantially faster than the classical KNN algorithm
Our method scales well to large image repositories, and its classification accuracy is very comparable to that of the KNN algorithm
