
1 Database Seminar The Gauss-Tree: Efficient Object Identification in Databases of Probabilistic Feature Vectors. Authors: Christian Böhm, Alexey Pryakhin, Matthias Schubert. Published in: ICDE 2006. Presenter: Chui Chun Kit. Supervisor: Dr. Kao. Date: 18th Jan 2007

2 Presentation Outline Introduction  Object identification problem  Motivation of modeling data with uncertainty The Gaussian Uncertainty Model Identification Queries  k-most-likely identification query (k-MLIQ)  Threshold identification query (TIQ) General Solution The Gauss-Tree Data Structure Experimental Results Conclusion

3 Introduction Suppose Peter is a detective …  The police force maintains a number of criminal databases; one of them stores the pictures of criminals and suspects.  One typical task for Peter is to search for suspects who look similar to a picture he took or a drawing provided by a witness. E.g. given the query image, retrieve the most similar image from the database. (Figure: a query image compared against the images in the database.)

4 Introduction Suppose Peter is a detective …  To do the similarity search, each object is pre-processed to extract its features.  Each feature describes one property of the image, e.g. the breadth of the nose is one of the features describing a facial image.  An image can be described by a set of features; a set of feature values forms a feature vector. The collection of feature vectors forms a database of images, and we work on this database rather than on the original images. A d-dimensional feature vector consists of d feature values describing the object. (Image database: Image 1 has breadth of nose 2cm and breadth of lips 3cm; Image 2 has 3cm and 6cm; the query image has 3.2cm and 7cm.)

5 Introduction Suppose Peter is a detective …  By defining a distance function (e.g. Euclidean distance) on the feature vectors, we can assume that the distance between the feature vectors corresponds to the dissimilarity of the objects. (Image database: Image 1 has breadth of nose 2cm and breadth of lips 3cm; Image 2 has 3cm and 6cm; the query image has 3.2cm and 7cm.)

6 Introduction To identify the most similar object to the query image, we could retrieve the nearest neighbor of the query image in the database.  Image 2 is regarded as the most similar object to the query image because the distance between Image 2 and the query image is the smallest. (Figure: the feature vectors of Image 1, Image 2 and the query q plotted in the breadth-of-nose / breadth-of-lips plane.)

7 Introduction In reality, the database images as well as the query image are often represented by feature vectors with a varying degree of exactness or uncertainty.  For example, most data collections of facial images do not contain only images that were taken under the same illumination or sharpness.  The observed feature values may differ from the true feature values, so two feature values describing the same object can be significantly different from each other. Here, the feature value of “breadth of lips” of Image 3 (5cm) is quite different from that of the query image (7cm). Data uncertainty can be represented by an uncertainty region: although the observed breadth of nose of the query image is 3.2cm, its actual value may vary, say from 2 to 4cm. Therefore, if we consider data uncertainty, Image 3 is more likely to be the answer than Image 2. (Image database, breadth of nose / breadth of lips: Image 1 [2cm, 3cm], Image 2 [3cm, 6cm], Image 3 [3cm, 5cm], Query [3.2cm, 7cm].)

8 Introduction The degree of similarity between observed and exact values can vary from feature to feature because some features cannot be determined as exactly as others.  E.g. it is much easier to determine the breadth of the nose than the breadth of the lips. (Why?) This kind of data uncertainty is common in face recognition, fingerprint analysis and voice recognition.  This motivates the authors to propose an uncertainty model for the feature vectors.

9 The Gaussian Uncertainty Model

10 A feature value µi in a feature vector is an observation, e.g. from a sensor.  Due to measurement error, the observed value µi may differ from the true value xi. The authors assume that the measurement error follows a Gaussian distribution around the true feature value xi (mean) with a known standard deviation δi: $N_{x_i,\delta_i}(\mu_i) = \frac{1}{\sqrt{2\pi}\,\delta_i} \exp\left(-\frac{(\mu_i - x_i)^2}{2\delta_i^2}\right)$ Graphically, a Gaussian distribution is a bell-shaped probability density function.

11 The Gaussian Uncertainty Model The probability density that the value µi is observed, given that xi is the true value, can be calculated by substituting µi into the Gaussian function centred at xi. However, we often want the opposite: the probability density of the true feature value xi given the observed feature value µi.

12 The Gaussian Uncertainty Model We often want to determine the probability density of the true feature value xi for the observed feature value µi. Fortunately, Gaussians are symmetric:

13 The Gaussian Uncertainty Model Fortunately, Gaussians are symmetric: the density of µi in the Gaussian of xi is the same as the density of xi in the Gaussian of µi. Therefore, we can use a Gaussian distribution with mean equal to the observed value µi to determine the probability density of the true feature value xi (probably coming from the query) given the observed feature value µi. (A small numerical check follows below.)
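To make the symmetry concrete, here is a minimal Python sketch (our own helper names, not code from the paper):

```python
import math

def gaussian_pdf(x, mean, std):
    """Density of a Gaussian with the given mean and standard deviation, evaluated at x."""
    return math.exp(-(x - mean) ** 2 / (2 * std ** 2)) / (math.sqrt(2 * math.pi) * std)

x_i, mu_i, delta_i = 5.0, 5.6, 0.4   # true value, observed value, known standard deviation

# Density of observing mu_i when x_i is the true value ...
p_forward = gaussian_pdf(mu_i, x_i, delta_i)
# ... equals the density of the true value x_i under a Gaussian centred at the observation mu_i.
p_backward = gaussian_pdf(x_i, mu_i, delta_i)

assert math.isclose(p_forward, p_backward)
```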

14 The Gaussian Uncertainty Model A probabilistic feature vector (pfv) is…  a vector consisting of d pairs of observed feature values µi and standard deviations δi.  Each pair of µi and δi defines a Gaussian distribution of the true feature value xi. (Example pfvs as [µ, δ] pairs: O1 = [5, 0.2], [3, 0.2]; O2 = [1, 2], [4.5, 2]; O3 = [5, 1], [1, 0.2]. Figure: a visualization of the probability densities of these three 2-dimensional probabilistic feature vectors.)

15 The Gaussian Uncertainty Model A probabilistic feature vector (pfv) is a vector consisting of d pairs of observed feature values µi and standard deviations δi; each pair defines a Gaussian distribution of the true feature value xi. For d-dimensional feature vectors, the probability density for observing a query vector q of actual values, under the condition that we already observed v for the same data object, can be calculated in the following way: $p(q|v) = \prod_{i=1}^{d} N_{\mu_i,\delta_i}(q_i)$ Here the query q is a feature vector without uncertainty, while the object v is a pfv. (A sketch of this computation follows below.)
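A minimal sketch of this product in Python (names such as p_q_given_v are our own, not the paper's):

```python
import math

def gaussian_pdf(x, mean, std):
    return math.exp(-(x - mean) ** 2 / (2 * std ** 2)) / (math.sqrt(2 * math.pi) * std)

def p_q_given_v(q, v):
    """p(q|v): q is a list of exact feature values, v a list of (mu_i, delta_i) pairs."""
    density = 1.0
    for q_i, (mu_i, delta_i) in zip(q, v):
        density *= gaussian_pdf(q_i, mu_i, delta_i)
    return density

v = [(5.0, 0.2), (3.0, 0.2)]   # a 2-dimensional pfv
q = [4.8, 3.1]                 # an exact query vector
print(p_q_given_v(q, v))
```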

16 Identification Queries

17 Queries on a database of pfv For the identification task, we would like to calculate the conditional probability that a query q belongs to a pfv v, under the condition that q belongs to one pfv of the set of all considered pfvs in the database DB. Bayes' theorem rewrites this conditional probability as $P(v|q) = \frac{p(q|v)\,P(v)}{\sum_{w \in DB} p(q|w)\,P(w)} = \frac{p(q|v)}{\sum_{w \in DB} p(q|w)}$ Here P(v) is the probability that object v is the answer to a query at all; the authors assume that P(v) is the same for every object, so it cancels in the fraction when P(v|q) is used for comparison (the second equality). p(q|w) is the conditional density of observing q under the condition that we have already observed w for the same object; recall that we have already discussed how to calculate p(q|v).
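Under the equal-prior assumption, P(v|q) is just p(q|v) normalized over the database. A self-contained sketch for an exact query (again with our own helper names):

```python
import math

def gaussian_pdf(x, mean, std):
    return math.exp(-(x - mean) ** 2 / (2 * std ** 2)) / (math.sqrt(2 * math.pi) * std)

def p_q_given_v(q, v):
    density = 1.0
    for q_i, (mu_i, delta_i) in zip(q, v):
        density *= gaussian_pdf(q_i, mu_i, delta_i)
    return density

def posterior(q, database):
    """P(v|q) for every pfv v, assuming a uniform prior P(v) that cancels out."""
    densities = [p_q_given_v(q, v) for v in database]
    total = sum(densities)
    return [d / total for d in densities]

db = [[(5.0, 0.2), (3.0, 0.2)],
      [(1.0, 2.0), (4.5, 2.0)],
      [(5.0, 1.0), (1.0, 0.2)]]
print(posterior([4.0, 3.0], db))   # the probabilities sum to 1
```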

18 Queries on a database of pfv Once we can determine the probability P(v|q), we can define two types of queries:  Threshold Identification Query (TIQ), e.g. report all persons in the database that could be shown on a given image with a probability of at least 10%.  k-Most-Likely Identification Query (k-MLIQ), e.g. report the 10 most likely persons in the database that are shown on a given image. The feature vectors of the query and of the objects can both be uncertain, i.e. they are all probabilistic feature vectors.

19 Queries on a database of pfv Given 3 facial images O1, O2 and O3 of varying quality that are stored in a database, and a query image Q. Feature F1 is particularly sensitive to the rotational angle; F2 is sensitive to illumination. Object O1 was taken under good conditions, so both features are relatively accurate. For O3 the rotation was bad but the illumination was good. For the query object the rotation was good, but the illumination was bad. (Pfvs as [µ, δ] pairs: O1 = [5, 0.2], [3, 0.2]; O2 = [1, 2], [4.5, 1]; O3 = [5, 1], [1, 0.2]; Q = [4, 0.1], [3, 2].)

20 Queries on a database of pfv We can recognize that O3 must be the object providing the highest probability of describing the same object as specified by the query Q. From the previous formulae:  P(O1|Q) = 10%  P(O2|Q) = 13%  P(O3|Q) = 77% A k-MLIQ with k = 1 would report O3 as result. A TIQ with a threshold probability ε = 12% would additionally report O2. (Pfvs: O1 = [5, 0.2], [3, 0.2]; O2 = [1, 2], [4.5, 1]; O3 = [5, 1], [1, 0.2]; Q = [4, 0.1], [3, 2].)

21 Queries on a database of pfv A similarity query using the Euclidean distance would obtain three rather similar distances:  d(Q,O1) = 1.53  d(Q,O2) = 1.97  d(Q,O3) = 1.74 Thus the NN would be O1, which is excluded when considering the variances.

22 Preliminary experiment To further illustrate that the NN approach is not suitable for uncertain data, the authors conducted a preliminary experiment to compare the precision and recall rates of  the nearest neighbor query (does not consider data uncertainty), and  the most likely identification query (considers data uncertainty). Recall is the percentage of correct answers retrieved; precision is the percentage of correct objects in the answer set.

23 Preliminary experiment Dataset (an image database):  10,987 database objects.  27-dimensional color histograms.  To describe the images as probabilistic feature vectors, the authors complemented each dimension with a randomly generated standard deviation.  100 objects are selected and modified to serve as query objects: a new observed mean value is generated w.r.t. the corresponding Gaussian.

24 Preliminary experiment MLIQ, which considers data uncertainty, achieved 98% precision and recall. The NN query, which uses the mean value only, displayed a precision and recall rate of 42%. The x axis corresponds to the size of the result set, i.e. x = 2 means retrieving the 2-NN. Even when the result size is increased, the recall rate does not increase much, while the precision rate drops significantly. Therefore, the authors conclude that the nearest neighbor method, which only uses the mean value as the true feature value, is not suitable for similarity search on uncertain data. The rates are not 100% because the query images are perturbed, and a few of them may look very different from the original image.

25 General Solution When the query is also a probabilistic feature vector

26 Queries on a database of pfv Given a query vector q with exact feature values and a database with pfvs v, we would like to determine P(v|q) for each pfv v. Here we apply Bayes' theorem; P(v) and P(w) are the same for all objects. To do so we calculate p(q|v), where d is the dimensionality: for each dimension i, the probability density for observing the exact value qi under the condition that we already observed vi for the same object. Question: what if q is itself a pfv? Then we need to calculate p(qi|vi) where the query value qi is uncertain, i.e. a Gaussian distribution.

27 Queries on a database of pfv If both the object v and the query q are pfvs, we have to consider all possible positions when calculating p(qi|vi). Recall that if the query value q is an exact value, we simply substitute q into the Gaussian of v to obtain p(q|v). However, in this case q is also a pfv with mean µq and standard deviation δq. (Figure: the probability distributions of the feature value of the object v and of the query q in dimension i.)

28 Queries on a database of pfv If both the object v and the query q are pfvs, we have to consider all possible positions x’ when calculating p(qi|vi): for every position x’, multiply the probability that x’ is the true value for the object v with the probability that x’ is the true value for the query object q; this gives the probability that both v and q have the true value x’.

29 The joint probability An interesting lemma reduces the more general case (the joint probability of a query pfv q and an object pfv v, i.e. two Gaussians) to the easier case in which one of the objects is exact and the other is a pfv: $p(q_i|v_i) = \int_{-\infty}^{\infty} N_{\mu_{q_i},\delta_{q_i}}(x)\, N_{\mu_{v_i},\delta_{v_i}}(x)\, dx = N_{\mu_{v_i},\sqrt{\delta_{q_i}^2+\delta_{v_i}^2}}(\mu_{q_i})$

30 The joint probability Instead of calculating the integral, we can calculate the joint probability p(qi|vi) by substituting the mean µqi of the query value into a single Gaussian function with mean µvi and standard deviation sqrt(δqi² + δvi²). (A numerical check follows below.)
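A quick numerical check of the lemma (a sketch of ours, not code from the paper): a brute-force Riemann sum over all possible true values matches a single Gaussian evaluation with the combined standard deviation.

```python
import math

def gaussian_pdf(x, mean, std):
    return math.exp(-(x - mean) ** 2 / (2 * std ** 2)) / (math.sqrt(2 * math.pi) * std)

mu_v, delta_v = 5.0, 1.0     # object pfv in dimension i
mu_q, delta_q = 4.0, 0.5     # query pfv in dimension i

# Closed form from the lemma: one Gaussian evaluation, no integration needed.
closed_form = gaussian_pdf(mu_q, mu_v, math.sqrt(delta_q ** 2 + delta_v ** 2))

# Brute-force Riemann sum over all plausible true values x'.
step = 0.001
integral = sum(gaussian_pdf(x, mu_q, delta_q) * gaussian_pdf(x, mu_v, delta_v) * step
               for x in (i * step for i in range(-20000, 30000)))

assert math.isclose(closed_form, integral, rel_tol=1e-3)
```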

31 Queries on a database of pfv Given a query pfv q and a database with pfvs v, we would like to calculate P(v|q) for each pfv v. As before, we calculate p(q|v), where d is the dimensionality: for each dimension i, the probability density for observing the pfv qi under the condition that we already observed vi for the same object, i.e. the joint probability between qi and vi.

32 General Solution The general solution is already a stand-alone solution operating on top of a sequential scan of the database. The authors proposed the Gauss-tree:  an index on the database objects to improve query efficiency.  The general approach is used as a refinement step.

33 The Gauss-Tree

34 The Gauss-Tree is a balanced tree from the R-tree family. It is not the space of the spatial objects (i.e. the Gaussians) that is indexed, but the parameter space:  mean  standard deviation (uncertainty value)

35 The Gauss-Tree A Gauss-tree of degree M is a search tree where the following properties hold:  The root has between 1 and M entries.  All other inner nodes have between M/2 and M entries each.  A leaf node has between M and 2M entries.  An inner node with k entries has k child nodes.

36 The Gauss-Tree A Gauss-tree of degree M is a search tree where the following properties hold:  An entry of a non-leaf node is a minimum bounding rectangle of dimensionality 2d defining upper and lower bounds for every feature value [µmin, µmax] and every uncertainty value [δmin, δmax] (standard deviation). (Figure: for 1-dimensional feature vectors, the MBR in the (µ, δ) parameter space is 2-dimensional.)

37 The Gauss-Tree (Figure: objects A, B, C, D, E plotted in the (µ, δ) parameter space, and the corresponding density curves over x.) An object A inside the node represents a Gaussian distribution with mean 7 and standard deviation 1. Object B has the same mean value as object A but a larger standard deviation. Note that the MBR only bounds the parameter space; the probability density at positions outside the mean range does NOT equal zero!

38 The Gauss-Tree We would like to derive the maximum of the probability densities of the objects stored in the subtree of a node. Graphically, this is the upper envelope of the probability densities of all objects that the node/subtree could store. Again, the MBR only bounds the parameter space; the probability density at positions outside the mean range does NOT equal zero!

39 The Gauss-Tree Which distribution corresponds to the maximum probability density in each of the colored regions? For example, the distribution of object C yields the maximum probability density in one region because C has the smallest mean and the largest variance, while the distribution of object A yields the maximum in another region because A has the smallest mean and the smallest variance.  It is obvious that the distributions corresponding to the area with x < µmin must have mean equal to µmin.  The remaining problem is to determine the standard deviation δ such that the density at x is maximized.

40 The Gauss-Tree Which distribution corresponds to the maximum probability density in each of the colored regions?  The distributions corresponding to the area with x < µmin must have mean equal to µmin.  The problem is to determine the standard deviation δ such that the density at x is maximized. We can determine the δ value which maximizes $N_{\mu,\delta}(x) = \frac{1}{\sqrt{2\pi}\,\delta} \exp\left(-\frac{(x-\mu)^2}{2\delta^2}\right)$ by setting the derivative with respect to δ to zero.

41 The Gauss-Tree The only positive solution obtains a local maximum at $\hat{\delta} = |x - \mu|$. (A small numerical check follows below.)
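A small numerical check of this maximizer (our own sketch): a grid search over δ agrees with the analytic solution δ = |x − µ|.

```python
import math

def gaussian_pdf(x, mean, std):
    return math.exp(-(x - mean) ** 2 / (2 * std ** 2)) / (math.sqrt(2 * math.pi) * std)

x, mu = 6.0, 4.0
# Grid search over delta in (0, 10] confirms the analytic maximizer delta = |x - mu| = 2.0.
best = max(gaussian_pdf(x, mu, d / 1000) for d in range(1, 10001))
assert math.isclose(best, gaussian_pdf(x, mu, abs(x - mu)), rel_tol=1e-3)
```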

42 The Gauss-Tree The only positive solution obtains a local maximum at $\hat{\delta} = |x - \mu|$. Therefore, we can determine the standard deviation for area (I), where x is smaller than µmin − δmax…

43 The Gauss-Tree Therefore, we derive that the standard deviation must be equal to δmax in area (I), since there the maximizing value |x − µmin| is larger than any δ allowed by the MBR. The other areas are handled similarly: the maximizing δ = |x − µ̂| is clamped into [δmin, δmax].

44 The Gauss-Tree With the maximizing mean µ̂ and standard deviation δ̂ defined, we can obtain the upper bound of the probability density of a query value q which is an exact value. Question: how about the upper bound of the probability density of a query value with Gaussian uncertainty? Recall the lemma for the joint probability between two pfvs qi and vi (the joint probability of two Gaussians): it reduces the uncertain query to an exact value µqi evaluated against a Gaussian with the combined standard deviation. Therefore, the upper bound of the joint probability between a pfv q and the objects in a node/subtree is obtained in the same way. (A sketch of the per-dimension bound follows below.)
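The following sketch is our reading of slides 39–44 (all names are ours, and this is not the paper's code): clamp the query mean into the node's mean range, then pick the effective standard deviation allowed by the MBR that is closest to the density's peak. In a d-dimensional tree, the product of these per-dimension bounds would bound the joint probability for the whole node.

```python
import math

def gaussian_pdf(x, mean, std):
    return math.exp(-(x - mean) ** 2 / (2 * std ** 2)) / (math.sqrt(2 * math.pi) * std)

def node_upper_bound_1d(q_mu, q_delta, mu_lo, mu_hi, delta_lo, delta_hi):
    """Upper bound of p(q_i | v_i) over every pfv the node's MBR could contain."""
    # Best mean: the point of [mu_lo, mu_hi] closest to the query mean.
    mu_hat = min(max(q_mu, mu_lo), mu_hi)
    # By the lemma, the effective std deviation is s = sqrt(q_delta^2 + v_delta^2);
    # N(q_mu; mu_hat, s) peaks at s = |q_mu - mu_hat|, so clamp that target into
    # the range of s values the MBR allows.
    s_lo = math.sqrt(q_delta ** 2 + delta_lo ** 2)
    s_hi = math.sqrt(q_delta ** 2 + delta_hi ** 2)
    s_hat = min(max(abs(q_mu - mu_hat), s_lo), s_hi)
    return gaussian_pdf(q_mu, mu_hat, s_hat)

# An exact query is the special case q_delta = 0.
bound = node_upper_bound_1d(3.0, 0.0, mu_lo=4.0, mu_hi=6.0, delta_lo=0.2, delta_hi=0.5)
```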

45 The Gauss-Tree Lower bound (skipped). Sum (skipped).  The accuracy of the approximation of the sum is bounded by a derived error bound (also skipped).

46 Using the Gauss-Tree

47 Recall that the k-most-likely identification query (k-MLIQ) reports the objects v for which the probability-based similarity P(v|q) is maximal. We briefly describe how to use the Gauss-Tree to filter nodes/objects for an MLIQ (k = 1).  Best-first search strategy: maintain a priority queue of the nodes to be visited.  Question: how to determine the priority of each tree node? We have to bound p(q|v), i.e. the joint probability, and the upper bound can be obtained using the Gauss-tree as shown above.

48 Using the Gauss-Tree Let a be a node of the Gauss-Tree; its priority attribute is defined as the product, over all dimensions i, of the per-dimension upper bound of the joint probability between the query object q and a database object v (the bound derived above). The ordering key therefore corresponds to the upper bound of the joint probability between the query q and the objects under the subtree of node a, i.e. the upper bound of p(q|v) for any v inside node a.

49 Using the Gauss-Tree Brief description of the algorithm for answering an MLIQ:  Initially, the queue contains only the root.  The algorithm runs in a loop which removes the top element from the queue, loads the corresponding node and re-inserts its children into the queue.  The algorithm keeps a candidate object in a variable: the pfv with the maximum joint probability seen so far in any of the visited leaf nodes.

50 Using the Gauss-Tree The algorithm stops when a pfv has been found whose joint probability with the query exceeds the upper bound of the top element of the queue, i.e. the maximum joint probability seen so far is at least the upper bound of the joint probability between the query object and any database object stored under the subtree of the top element. (A compact sketch of this search follows below.)
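A compact best-first sketch of the search just described, assuming a simplified node layout of our own; Python's heapq is a min-heap, so priorities are negated. The prio argument can be built from node_upper_bound_1d in the previous sketch, one factor per dimension.

```python
import heapq
import itertools
import math
from dataclasses import dataclass, field

def gaussian_pdf(x, mean, std):
    return math.exp(-(x - mean) ** 2 / (2 * std ** 2)) / (math.sqrt(2 * math.pi) * std)

def joint_density(q, v):
    """p(q|v) for two pfvs (lists of (mu, delta) pairs), one lemma factor per dimension."""
    return math.prod(gaussian_pdf(qm, vm, math.sqrt(qd ** 2 + vd ** 2))
                     for (qm, qd), (vm, vd) in zip(q, v))

@dataclass
class Node:
    is_leaf: bool
    entries: list = field(default_factory=list)   # pfvs (leaf nodes)
    children: list = field(default_factory=list)  # Nodes (inner nodes)

def mliq(root, q, prio):
    """1-MLIQ by best-first search; prio(node, q) must upper-bound p(q|v)
    for every pfv v stored below node."""
    counter = itertools.count()  # tie-breaker so the heap never compares Nodes
    heap = [(-prio(root, q), next(counter), root)]
    best_v, best_p = None, -1.0
    while heap:
        neg_bound, _, node = heapq.heappop(heap)
        if -neg_bound <= best_p:
            break  # no remaining subtree can beat the current candidate
        if node.is_leaf:
            for v in node.entries:  # refinement step: exact joint probabilities
                p = joint_density(q, v)
                if p > best_p:
                    best_v, best_p = v, p
        else:
            for child in node.children:
                heapq.heappush(heap, (-prio(child, q), next(counter), child))
    return best_v, best_p
```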

51 Using the Gauss-Tree (Figure: database objects represented by 1-dimensional probabilistic feature vectors over x ∈ [1, 10], and the Gauss-Tree used to index the pfvs. The root covers µ ∈ [1, 9], δ ∈ [0.1, 1]; its child nodes A, B, C cover µ ∈ [1, 2], δ ∈ [0.2, 0.6]; µ ∈ [6, 9], δ ∈ [0.1, 0.2]; and µ ∈ [3.5, 4], δ ∈ [0.4, 1].)

52 Using the Gauss-Tree The root node stores the upper and lower bounds of the means and standard deviations of all the objects. The maximum of the probability density functions under the root node is derived from these lower and upper bounds of µ and δ.

53 Using the Gauss-Tree The maximum of the probability density functions under node A is derived from its lower and upper bounds of µ and δ. Since the ranges of µ and δ of node A are quite small, this node is quite selective. For the node where the range of µ is large but the range of δ is small, the result is a narrow approximation in the tails. Node B is likewise not very selective, because its range of δ is large.

54 Using the Gauss-Tree Now we would like to find the object in the database which is most similar to the query q. The algorithm starts by loading the root and calculating the upper bound of the joint probability of q and the objects under the root. Priority queue: Root (priority 1).

55 Using the Gauss-Tree Then the algorithm visits the child nodes of the root and, again, calculates their priorities (the maximum joint probabilities). Priority queue: B (0.4), C (0.3), A (0.01).

56 Using the Gauss-Tree The algorithm then continues with the top element of the queue (node B), loads it, and calculates for each object pfv inside the joint probability with the query q. Priority queue: C (0.3), A (0.01).

57 Using the Gauss-Tree Now object v1 is the candidate with the highest joint probability with q, so it is stored as a result candidate in a variable (candidate v1, p(q|v1) = 0.2). Since the top element of the priority queue, node C, has an upper bound of 0.3, which is greater than 0.2, v1 is not guaranteed to be the object with the highest joint probability with q. We have to visit node C.

58 Using the Gauss-Tree Therefore, we continue to look into node C and update the priority queue. Priority queue: A (0.01). Candidate: v1, p(q|v1) = 0.2.

59 Using the Gauss-Tree Vector v2 from node C has p(q|v2) = 0.3, which is greater than the candidate's 0.2, so the candidate variable is updated (candidate v2, p(q|v2) = 0.3). Now, since the joint probability of v2 and q is greater than the upper bound of the top of the priority queue (node A, 0.01), object v2 is guaranteed to be the object with the highest P(v|q) in the database.

60 Experimental Results

61 The Gauss-tree and all compared methods are implemented in Java. Experimental setup:  1.8 GHz processor  8 GB main memory  50 MB as database cache

62 Experimental Results Dataset 1 (an image database):  10,987 database objects.  27-dimensional color histograms.  To describe the images as probabilistic feature vectors, the authors complemented each dimension with a randomly generated standard deviation.  100 objects are selected and modified to serve as query objects: a new observed mean value is generated w.r.t. the corresponding Gaussian. Dataset 2 (a synthetic dataset):  100,000 objects with probabilistic feature vectors.  10-dimensional feature space.  500 query objects for MLIQ and TIQ.

63 Experimental Results To additionally compare to a more sophisticated method, the authors used an X-tree to store a rectangular approximation of each pfv:  calculate the 95% quantiles in each dimension, i.e. determine the interval around the mean value of a Gaussian that contains a random observation with a probability of 95%. By combining these intervals into a hyper-rectangle, we generate a good approximation of the pfv.

64 Experimental Results To process a most likely identification query (MLIQ) with the X-Tree, we first calculate an approximation for the query pfv, then use the X-Tree to determine a candidate set consisting of all approximations that intersect the query approximation. Notice that this method does NOT offer exact results with respect to the Gaussian uncertainty model, because the approximations allow false dismissals. The X-tree is a spatial index; the Gauss-tree is a parametric index. (A sketch of this approximation follows below.)
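As a sketch of this competitor (a sketch of ours, not the paper's code): the 95% interval of a Gaussian is µ ± z·δ with z ≈ 1.96, and the candidate test is plain rectangle intersection; the helper names are our own.

```python
Z_95 = 1.96  # two-sided 95% quantile of the standard Gaussian

def approximate(pfv):
    """One [lo, hi] interval per dimension, combined into a hyper-rectangle."""
    return [(mu - Z_95 * delta, mu + Z_95 * delta) for mu, delta in pfv]

def intersects(rect_a, rect_b):
    return all(lo_a <= hi_b and lo_b <= hi_a
               for (lo_a, hi_a), (lo_b, hi_b) in zip(rect_a, rect_b))

# A pfv whose rectangle intersects the query rectangle becomes a candidate.
query_rect = approximate([(4.0, 0.1), (3.0, 2.0)])
is_candidate = intersects(query_rect, approximate([(5.0, 1.0), (1.0, 0.2)]))
```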

65 Experimental Results Dataset 1 (real data): the page accesses, CPU time and complete query time for a most likely identification query (1-MLIQ) and two threshold identification queries (TIQ).

66 Experimental Results Dataset 1 (Real data) The Gauss-Tree is able to reduce the page accesses as well as the CPU time by a factor of 4.2 compared to the sequential scan.

67 Experimental Results Dataset 1 (real data): the overall query time of the Gauss-Tree improved query processing by at least 46%. The overall improvement is smaller than the savings in CPU time and page accesses because the Gauss-Tree suffered from additional seeks on the hard disk.

68 Experimental Results Dataset 2 (synthetic): the Gauss-Tree achieved a speed-up of 4.3 with respect to the number of accessed pages. For the TIQ, the Gauss-Tree even improved the page accesses by a factor between 35.7 and 43.2 compared to the sequential scan.

69 Experimental Results Dataset 2 (synthetic): the X-tree approach storing rectangular approximations, on the other hand, did not offer any speed-up over the sequential scan for MLIQ. The reason is that the rectangular approximations are very large for data with high uncertainty.

70 Experimental Results Dataset 2 (synthetic): the speed-up of the overall query time was between 3.1 for the MLIQ and about 7.5 for both TIQs. Thus, the Gauss-tree offered a significant improvement in efficiency compared to the sequential scan.

71 Conclusions

72 The authors introduced the Gaussian uncertainty model for identification queries on uncertain data. Defined two types of identification queries:  k-MLIQ and TIQ. Proposed the Gauss-Tree index to speed up query processing.  Experimental results demonstrate that the Gauss-Tree is very effective in answering identification queries on uncertain data.

73 The End Thank you!

