Presentation on theme: "Semi-Supervised Learning in Gigantic Image Collections"— Presentation transcript:
1. Semi-Supervised Learning in Gigantic Image Collections. Rob Fergus (New York University), Yair Weiss (Hebrew University), Antonio Torralba (MIT).
2. Gigantic Image Collections: What does the world look like? Object recognition for large-scale image search; high-level image statistics. Our goal is to develop object recognition techniques for image search that can scale to the billions of images on the Internet.
3. Spectrum of Label Information: human annotations, noisy labels, unlabeled. One property of images on the Internet is that they carry a wide range of label information. A tiny fraction have been labeled by humans and so have reliable labels, but a much larger fraction have some kind of noisy label: there is some text associated with the image, from the image name or the surrounding HTML, that gives a cue as to what is in the image but is not very accurate. And of course, we also have a large amount of data with no labels at all. So we would like a framework that can make use of all these types of labels.
4. Semi-Supervised Learning: the classification function should be smooth with respect to the data density. (Figure panels: Data, Supervised, Semi-Supervised.) One such technique is semi-supervised learning. Consider a toy dataset with just two labeled points: the semi-supervised decision boundary follows the data density rather than just the two labels.
5. Semi-Supervised Learning using the Graph Laplacian [Zhu03, Zhou04]. W is the n x n affinity matrix (n = # of points), with entries $W_{ij} = \exp(-\|x_i - x_j\|^2 / 2\epsilon^2)$. Normalized graph Laplacian: $L = D^{-1/2}(D - W)D^{-1/2}$, where $D$ is the diagonal degree matrix with $D_{ii} = \sum_j W_{ij}$. We consider approaches of this type that are based on the graph Laplacian. Each image is a vertex in a graph, and the weight of the edge between two vertices is given by the affinity above. So for n points we have an n by n affinity matrix W, from which we compute the normalized graph Laplacian L.
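As an illustration of this construction, here is a minimal numpy sketch (not the authors' code) of the dense affinity matrix and normalized graph Laplacian; the Gaussian bandwidth eps is a free parameter, and the O(n^2) memory makes this practical only for small n.

```python
import numpy as np

def normalized_laplacian(X, eps=1.0):
    """Dense affinity W and normalized graph Laplacian L for points X (n x d).

    W_ij = exp(-||x_i - x_j||^2 / (2 eps^2)),  L = D^{-1/2} (D - W) D^{-1/2},
    where D is the diagonal degree matrix. Only feasible for small n.
    """
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / (2.0 * eps ** 2))
    d = W.sum(axis=1)                              # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = D_inv_sqrt @ (np.diag(d) - W) @ D_inv_sqrt
    return W, L
```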
6. SSL using the Graph Laplacian. We want to find the label function f that minimizes $J(f) = f^T L f + \sum_i \lambda_i (f_i - y_i)^2$, where y are the labels and $\lambda_i = \lambda$ if point i is labeled, 0 otherwise; the first term measures smoothness and the second agreement with the labels. In SSL, we solve for a label function f over the data points. The graph Laplacian measures the smoothness of f, while the second term constrains f to agree with the labels, with a weighting lambda set according to the reliability of each label. Writing the lambdas as a diagonal matrix $\Lambda$, the optimal f is the solution of the n x n linear system $(L + \Lambda) f = \Lambda y$ (n = # points).
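A matching sketch of the direct n x n solve described on this slide; the value of lambda is an illustrative choice, and the labeled/unlabeled weighting follows the description above.

```python
import numpy as np

def ssl_direct(L, y, labeled_mask, lam=100.0):
    """Solve (L + Lambda) f = Lambda y for the label function f.

    Lambda is diagonal: lam for labeled points, 0 for unlabeled ones.
    This is the n x n system that becomes infeasible for very large n.
    """
    Lam = np.where(labeled_mask, lam, 0.0)
    return np.linalg.solve(L + np.diag(Lam), Lam * y)
```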
7. Eigenvectors of the Laplacian. Smooth vectors will be linear combinations of the eigenvectors U with small eigenvalues: $f = U\alpha$ [Belkin & Niyogi 06, Schoelkopf & Smola 02, Zhu et al. 03, 08]. Instead of directly solving the n by n linear system, we model f as a linear combination of the smallest few eigenvectors of the Laplacian: U are the eigenvectors and $\alpha$ are the coefficients. These smallest eigenvectors are smooth with respect to the data density. The smallest is just a DC term, but the 2nd smallest splits the data horizontally and the 3rd splits it vertically.
8. Rewrite System. Let U = the smallest k eigenvectors of L and $\alpha$ = the coefficients, with $f = U\alpha$; k is a user parameter (typically ~100). The optimal $\alpha$ is now the solution of the k x k system $(\Sigma + U^T \Lambda U)\,\alpha = U^T \Lambda y$, where $\Sigma$ is the diagonal matrix of the k smallest eigenvalues. So if we use the k smallest eigenvectors as a basis, with k a value we select, typically 100 or so, we can rewrite the previous linear system: instead of solving an n by n system, we just need to solve a k by k system for the coefficients alpha, from which we compute the label function $f = U\alpha$.
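A sketch of this reduced solve, assuming the k x k system has the form stated above; eigsh from scipy is used for the smallest eigenpairs, which is itself only practical when L fits in memory (the point of the later slides is to avoid this step entirely).

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def ssl_reduced(L, y, labeled_mask, k=100, lam=100.0):
    """Approximate SSL in the span of the k smallest eigenvectors of L.

    alpha solves the k x k system (Sigma + U^T Lambda U) alpha = U^T Lambda y,
    and the label function is f = U alpha.
    """
    sigma, U = eigsh(L, k=k, which='SM')           # k smallest eigenpairs of L
    Lam = np.where(labeled_mask, lam, 0.0)
    A = np.diag(sigma) + U.T @ (Lam[:, None] * U)  # k x k system matrix
    b = U.T @ (Lam * y)
    alpha = np.linalg.solve(A, b)
    return U @ alpha
```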
9. Computational Bottleneck. Consider a dataset of 80 million images. Inverting L means inverting an 80 million x 80 million matrix; finding the eigenvectors of L means diagonalizing an 80 million x 80 million matrix. So if we want to scale to really large datasets, we have a problem: if we directly solve the original linear system, we need to invert an 80 million by 80 million matrix, and if we use the eigenvector basis, finding the eigenvectors themselves requires diagonalizing that same 80 million by 80 million matrix.
10. Large-Scale SSL, Related Work. Nystrom method: pick a small set of landmark points, compute exact eigenvectors on these, and interpolate the solution to the rest. Other approaches include mixture models (Zhu and Lafferty '05), sparse grids (Garcke and Griebel '05), and sparse graphs (Tsang and Kwok '06) [see Zhu '08 survey]. A number of techniques exist for large-scale semi-supervised learning. Many are similar to the Nystrom method, which computes the exact eigenvectors on a small set of landmarks and then interpolates the remaining points to give an approximate solution. A variety of other approaches adaptively group the data points and compute eigenvectors on these groupings.
11. Our Approach. Our approach takes a different route.
12. Overview of Our Approach: compute approximate eigenvectors. In Nystrom we reduce the number of data points down to a set of landmarks (reduce n; polynomial in the number of landmarks). By contrast, in our approach we consider the limit as the number of points goes to infinity (n → ∞), so that we have a continuous density. A key point is that our approach is linear in the number of data points.
13. Consider the Limit as n → ∞. Consider x to be drawn from a 2D distribution p(x). Let $L_p(F)$ be a smoothness operator on p(x), for a function F(x): $L_p(F) = \int (F(x_1) - F(x_2))^2\, W(x_1, x_2)\, p(x_1)\, p(x_2)\, dx_1\, dx_2$, where $W(x_1, x_2) = \exp(-\|x_1 - x_2\|^2 / 2\epsilon^2)$. The smoothness operator penalizes functions that vary in areas of high density; we analyze the eigenfunctions of $L_p(F)$. So consider a toy 2D dataset: as n goes to infinity, we have a 2D density. We define an operator that measures the smoothness of a continuous label function F. Notice that this is a continuous analogue of the graph Laplacian: nearby locations x1 and x2 that have high affinity will have similar values of F, if F is smooth. We now analyze the eigenfunctions of this operator $L_p(F)$.
14. Eigenvectors & Eigenfunctions. To get an intuition for what these eigenfunctions look like, we show them in the bottom row. Notice that they are continuous functions that capture the same structure as the discrete eigenvectors.
15. Key Assumption: Separability of the Input Data. Claim: if p is separable, i.e. $p(x_1, x_2) = p(x_1)\,p(x_2)$, then eigenfunctions of the marginals are also eigenfunctions of the joint density, with the same eigenvalue [Nadler et al. 06, Weiss et al. 08]. A key assumption we make in our method is that the input distribution is separable. So for our toy 2D data, we assume the joint density is modeled as the product of the two marginal distributions p(x1) and p(x2), and one can show that eigenfunctions of these marginals are eigenfunctions of the joint density.
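A brief worked sketch of why separability helps, assuming the Gaussian affinity from slide 13 (this is only an intuition for the claim, not the full argument in Nadler et al.): the affinity factorizes across dimensions, so for a function of a single coordinate the smoothness operator reduces to the corresponding 1D operator.

```latex
% Illustrative sketch: assumes the Gaussian affinity and a separable density.
W(\mathbf{x},\mathbf{x}') = e^{-\|\mathbf{x}-\mathbf{x}'\|^2/2\epsilon^2}
                          = \prod_d e^{-(x_d-x'_d)^2/2\epsilon^2},
\qquad
p(\mathbf{x}) = \prod_d p_d(x_d).
% For a function of one coordinate only, F(\mathbf{x}) = \phi(x_j):
L_p(F) = \iint \bigl(\phi(x_j)-\phi(x'_j)\bigr)^2\,
         W(\mathbf{x},\mathbf{x}')\, p(\mathbf{x})\,p(\mathbf{x}')\,
         d\mathbf{x}\, d\mathbf{x}'
       \;=\; c\, L_{p_j}(\phi),
\quad
c = \prod_{d\neq j}\iint e^{-(x_d-x'_d)^2/2\epsilon^2}
      p_d(x_d)\,p_d(x'_d)\, dx_d\, dx'_d,
% so the dimensions decouple, and (with the normalization used in the paper)
% an eigenfunction of the marginal p_j is an eigenfunction of the joint density.
```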
16. Numerical Approximation of Eigenfunctions in 1D. 300,000 points drawn from the distribution p(x); consider the marginal p(x1). Let's look at how we compute the eigenfunctions for one of these marginal distributions. Given a large set of observed data, which we assume is drawn from the density, we can form a histogram h(x1), which is an approximation to the true marginal.
17. Numerical Approximation of Eigenfunctions in 1D. Solve for the values g of the eigenfunction at a set of discrete locations (the histogram bin centers) and their associated eigenvalues $\sigma$, via a generalized eigenproblem of the form $(\tilde{D} - P\hat{W}P)\,g = \sigma\, P\hat{D}\,g$, where $\hat{W}$ is the affinity between the discrete locations, $P = \mathrm{diag}(h(x_1))$, and $\tilde{D}$, $\hat{D}$ are diagonal column-sum (degree) matrices of $P\hat{W}P$ and $P\hat{W}$. This is a B x B system (B = # histogram bins, e.g. 50). So we can solve for the eigenfunctions of the 1D distribution using the histogram: we solve for the values of the eigenfunction g and the associated eigenvalues at the bin centers using this equation, and the size of the system is given by the number of histogram bins, which is small, e.g. 50 or so.
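A numerical sketch of this step under the assumptions above (the generalized eigenproblem form, the bin count, the bandwidth, and the flooring of empty bins are illustrative choices, not the authors' exact settings):

```python
import numpy as np
from scipy.linalg import eigh

def eigenfunctions_1d(x, n_bins=50, eps=0.2, n_funcs=3):
    """Approximate 1D eigenfunctions/eigenvalues from a histogram of one dimension.

    Solves the B x B generalized eigenproblem (Dt - P W P) g = sigma * (P Dh) g,
    where W is the affinity between bin centres, P = diag(histogram density),
    and Dt / Dh hold the column sums of P W P / P W.
    """
    h, edges = np.histogram(x, bins=n_bins, density=True)
    h = np.maximum(h, 1e-8)                 # avoid empty bins making P singular
    centers = 0.5 * (edges[:-1] + edges[1:])
    W = np.exp(-(centers[:, None] - centers[None, :]) ** 2 / (2 * eps ** 2))
    P = np.diag(h)
    PWP = P @ W @ P
    Dt = np.diag(PWP.sum(axis=0))
    Dh = np.diag((P @ W).sum(axis=0))
    sigma, g = eigh(Dt - PWP, P @ Dh)       # eigenvalues in ascending order
    return centers, g[:, :n_funcs], sigma[:n_funcs]
```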
18. 1D Approximate Eigenfunctions. So if we solve this system, what do the eigenfunctions look like? Here are the smallest three eigenfunctions of the marginal of the 1st dimension (the 1st, 2nd and 3rd eigenfunctions of h(x1)). The first is fairly constant in regions of high density and then changes rapidly in the middle; the second has an extra kink, and the third has two kinks.
19. Separability over Dimensions. Build a histogram over dimension 2, h(x2), and solve for the eigenfunctions of h(x2). We do the same thing for the 2nd dimension: here the marginal is similar to a Gaussian, and the figure shows its 1st, 2nd and 3rd eigenfunctions.
20. From Eigenfunctions to Approximate Eigenvectors. Take each data point and do a 1-D interpolation in each eigenfunction; this is a very fast operation. Having obtained the eigenfunctions, how do we compute the approximate eigenvectors? For each data point we just do a 1D interpolation in the eigenfunctions (eigenfunction value as a function of histogram bin) to find the eigenvector entry. This is a very quick operation.
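A sketch of the interpolation step, reusing the bin centres and eigenfunction values returned by the hypothetical eigenfunctions_1d() helper above:

```python
import numpy as np

def interpolate_eigenvector(x_dim, centers, g_col):
    """Approximate eigenvector entries for one eigenfunction.

    x_dim: (n,) values of one rotated input dimension;
    centers: (B,) histogram bin centres; g_col: (B,) eigenfunction values.
    Linear 1D interpolation, so very fast even for large n.
    """
    return np.interp(x_dim, centers, g_col)
```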
21. Preprocessing. We need to make the data separable, so we rotate it using PCA. One important pre-processing operation is that we rotate the data to make it more separable, as assumed by our algorithm. Currently we do this with PCA, although other options are possible. (Figure: not separable before rotation, separable after.)
22. Overall Algorithm (a code sketch of these steps follows below):
1. Rotate the data to maximize separability (currently using PCA).
2. For each of the d input dimensions: construct a 1D histogram, then solve numerically for the eigenfunctions/eigenvalues.
3. Order the eigenfunctions from all dimensions by increasing eigenvalue and take the first k.
4. Interpolate the data into the k eigenfunctions, yielding approximate eigenvectors of the Laplacian.
5. Solve the k x k least-squares system to give the label function.
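A sketch of the whole pipeline tying the previous pieces together; the hypothetical eigenfunctions_1d() helper from the earlier sketch is assumed, and all constants are illustrative rather than the authors' settings.

```python
import numpy as np

def ssl_eigenfunction_pipeline(X, y, labeled_mask, k=100, lam=100.0, n_bins=50):
    """End-to-end sketch of the algorithm above; not the authors' implementation."""
    # 1. Rotate the data to make the dimensions (approximately) separable: PCA.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Xr = Xc @ Vt.T

    # 2. Per-dimension histograms and numerical eigenfunctions/eigenvalues.
    funcs = []                                   # (eigenvalue, dim, centers, g)
    for d in range(Xr.shape[1]):
        centers, g, sigma = eigenfunctions_1d(Xr[:, d], n_bins=n_bins)
        for j in range(g.shape[1]):
            funcs.append((sigma[j], d, centers, g[:, j]))

    # 3. Order eigenfunctions from all dimensions by eigenvalue; keep the first k.
    funcs.sort(key=lambda t: t[0])
    funcs = funcs[:k]

    # 4. Interpolate every point into the k eigenfunctions -> approximate U.
    U = np.stack([np.interp(Xr[:, d], c, gj) for (_, d, c, gj) in funcs], axis=1)
    sigma = np.array([s for (s, _, _, _) in funcs])

    # 5. Solve the k x k least-squares system for f = U alpha.
    Lam = np.where(labeled_mask, lam, 0.0)
    A = np.diag(sigma) + U.T @ (Lam[:, None] * U)
    alpha = np.linalg.solve(A, U.T @ (Lam * y))
    return U @ alpha
```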
24. Nystrom Comparison. With Nystrom, too few landmark points result in highly unstable eigenvectors. Here we compare Nystrom to our eigenfunction approach: when only a few landmarks are used, Nystrom fails to capture the structure of the data, but our eigenfunction approach uses all the data points and we get the correct solution.
25. Nystrom Comparison. Eigenfunctions fail when the data has significant dependencies between dimensions. If the input data has significant dependencies between dimensions, as in this example, then our eigenfunction approach fails, while Nystrom, despite the small set of landmarks, gets the correct solution.
27. Experiments. Images from 126 classes downloaded from Internet search engines, 63,000 images in total (example classes: dump truck, emu). Labels (correct/incorrect) were provided by Alex Krizhevsky, Vinod Nair & Geoff Hinton (CIFAR & U. Toronto). In our first set of experiments we use these 63,000 images from 126 classes downloaded from image search engines; human labels for these images were gathered by Alex Krizhevsky, Vinod Nair and Geoff Hinton.
28. Input Image Representation. Pixels are not a convenient representation, so we use the Gist descriptor (Oliva & Torralba, 2001); the L2 distance between Gist vectors is a rough substitute for human perceptual distance. We represent each image with a single global descriptor: the Gist descriptor of Oliva & Torralba, which applies oriented Gabor filters over different scales and averages the filter energy in each spatial bin. We use PCA to project it down to 64 dimensions.
29. Are the Dimensions Independent? Joint histograms for pairs of dimensions from the raw 384-dimensional Gist, and for pairs of dimensions after PCA to 64 dimensions. MI is the mutual information score; 0 = independent.
30. Real 1-D Eigenfunctions of PCA'd Gist Descriptors (plotted per input dimension).
31. Protocol. The task is to re-rank the images of each class (class/non-class). We use eigenfunctions computed on all 63,000 images, vary the number of labeled examples, and measure precision at 15% recall.
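A small sketch of this evaluation metric, assuming higher scores mean more confident class membership (a hypothetical helper, for illustration only):

```python
import numpy as np

def precision_at_recall(scores, labels, recall_level=0.15):
    """Precision of the ranked list at a fixed recall level (e.g. 15%).

    scores: (n,) ranking scores, higher = more confident class membership.
    labels: (n,) 1 for class, 0 for non-class; assumes at least one positive
    and that the requested recall level is reachable.
    """
    order = np.argsort(-scores)                    # best-scoring images first
    hits = labels[order].cumsum()
    recall = hits / labels.sum()
    precision = hits / (np.arange(len(labels)) + 1)
    idx = np.searchsorted(recall, recall_level)    # first rank reaching recall_level
    return precision[idx]
```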
37. Running on 80 Million Images. PCA to 32 dimensions, k = 48 eigenfunctions. For each class, labels propagate through the 80 million images. The approximate eigenvectors are precomputed (~20 GB); label propagation is then fast, < 0.1 seconds per keyword.
38. Japanese Spaniel: 3 positive and 3 negative labels from the CIFAR set.
40. Summary. A semi-supervised scheme that can scale to really large problems, linear in the number of points. Rather than sub-sampling the data, we take the limit of infinite unlabeled data. It assumes the input data distribution is separable, and can propagate labels in a graph with 80 million nodes in fractions of a second. A related paper appears in this NIPS by Nadler, Srebro & Zhou; see the spotlights on Wednesday.
44. Exact vs. Approximate Eigenvectors and Eigenvalues. For this toy dataset on the left, we can compute the exact eigenvectors of the graph Laplacian, shown in the center, and also the approximate eigenvectors using our eigenfunction approach. Note the similarity of the eigenvalues of the first 3 eigenvectors; after that, the exact eigenvectors start mixing across dimensions.
45. Are the Dimensions Independent? Joint histograms for pairs of dimensions from the raw 384-dimensional Gist, and for pairs of dimensions after PCA. MI is the mutual information score; 0 = independent.
46. Are the Dimensions Independent? Joint histograms for pairs of dimensions from the raw 384-dimensional Gist, and for pairs of dimensions after ICA. MI is the mutual information score; 0 = independent.
51. Complexity Comparison. Key: n = # data points (big, > 10^6); l = # labeled points (small, < 100); m = # landmark points; d = # input dims (~100); k = # eigenvectors (~100); b = # histogram bins (~50).
Nystrom: select m landmark points; get the smallest k eigenvectors of an m x m system; interpolate the n points into the k eigenvectors; solve a k x k linear system. Polynomial in the number of landmarks.
Eigenfunction: rotate the n points; form d 1-D histograms; solve d linear systems, each b x b; do k 1-D interpolations of the n points; solve a k x k linear system. Linear in the number of data points.
52. Semi-Supervised Learning using the Graph Laplacian [Zhu03, Zhou04]. V = data points (n in total); E = edges weighted by the n x n affinity matrix W, $W_{ij} = \exp(-\|x_i - x_j\|^2 / 2\epsilon^2)$. Normalized graph Laplacian: $L = D^{-1/2}(D - W)D^{-1/2}$, with $D_{ii} = \sum_j W_{ij}$. We consider approaches of this type that are based on the graph Laplacian: each image is a vertex in a graph, and the weight of the edge between two vertices is given by the affinity above. So for n points we have an n by n affinity matrix W, from which we compute the normalized graph Laplacian L.
54. Consider the Limit as n → ∞. Consider x to be drawn from a 2D distribution p(x). Let $L_p(F)$ be a smoothness operator on p(x), for a function F(x): $L_p(F) = \int (F(x_1) - F(x_2))^2\, W(x_1, x_2)\, p(x_1)\, p(x_2)\, dx_1\, dx_2$, where $W(x_1, x_2) = \exp(-\|x_1 - x_2\|^2 / 2\epsilon^2)$. We analyze the eigenfunctions of $L_p(F)$. As n goes to infinity we have a 2D density, and we define an operator that measures the smoothness of a continuous label function F, a continuous analogue of the graph Laplacian: nearby locations x1 and x2 that have high affinity will have similar values of F, if F is smooth. We now analyze the eigenfunctions of this operator $L_p(F)$.
55. Numerical Approximation of Eigenfunctions in 1D. Solve for the values g of the eigenfunction at a set of discrete locations (the histogram bin centers) and their associated eigenvalues $\sigma$: $(\tilde{D} - P\hat{W}P)\,g = \sigma\, P\hat{D}\,g$, a B x B system (B = # histogram bins = 50), where $P = \mathrm{diag}(h(x_1))$, $\hat{W}$ is the affinity between the discrete locations, and $\tilde{D}$, $\hat{D}$ are diagonal column-sum (degree) matrices of $P\hat{W}P$ and $P\hat{W}$. So we can solve for the eigenfunctions of the 1D distribution using the histogram: we solve for the values of the eigenfunction g and the associated eigenvalues at the bin centers using this equation; the size of the system is the number of histogram bins, which is small, e.g. 50 or so.