Determinant Preserving Sparsification of SDDM Matrices with Applications to Counting and Sampling Spanning Trees
Richard Peng, Georgia Tech
Joint work with: David Durfee, John Peebles, Anup B. Rao
Outline
Laplacians and the matrix-tree theorem
Applications of determinant-preserving sparsification
Proof of sparsification guarantees
Graph Laplacians
Matrices that correspond to undirected graphs: coordinates ↔ vertices, non-zeros ↔ edges. (Figure: a 3-vertex example graph.)
Many uses in: scientific computing, network science, combinatorial optimization.
Kirchhoff's Matrix Tree Theorem
Number / total weight of spanning trees of G = determinant of an (n−1)-sized minor of L_G, where w(T) = ∏_{e∈T} w(e).
Example (triangle with edge weights 1, 2, 3): total tree weight = 1×2 + 1×3 + 2×3 = 11, and det+ = 3 × 4 − 1 = 11.
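A minimal numpy check of the theorem, using the weighted triangle that (as best I can reconstruct from the slide's totals) has edge weights 1, 2, 3:

```python
import numpy as np

# Triangle on vertices 0, 1, 2 with edge weights 1, 2, 3.
edges = [(0, 1, 1.0), (0, 2, 2.0), (1, 2, 3.0)]
n = 3
L = np.zeros((n, n))
for u, v, w in edges:
    L[u, u] += w; L[v, v] += w
    L[u, v] -= w; L[v, u] -= w

# Matrix-tree theorem: total spanning-tree weight = determinant of any
# (n-1)-sized principal minor of L (drop one row and the matching column).
det_plus = np.linalg.det(L[1:, 1:])

# Direct enumeration: the triangle's spanning trees are the 3 edge pairs.
total = 1*2 + 1*3 + 2*3  # = 11
print(det_plus, total)   # both are 11
```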
Algorithms for det+(L)
General matrix determinant: O(n^ω), ω ≈ 2.37.
Sparse graphs:
det+(G / e): number of trees containing e
det+(G / e) / det+(G): leverage score τ_e
Linearity of expectation: Σ_e τ_e = n − 1
Cramer's rule: for e = uv, τ_e = (L_{−v}^{−1})_{uu}
Fast linear system solvers: Õ(m) time
Pick one tree, contract all its edges: Õ(nm) total
Can also use this to sample a random tree.
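These identities are easy to sanity-check numerically. A small numpy sketch on a toy graph of my choosing (real implementations estimate the inverse entries with fast Laplacian solvers rather than explicit inversion):

```python
import numpy as np

# Leverage scores via Cramer's rule: for e = uv (unit weights here),
# tau_e = (L_{-v}^{-1})_{uu}: delete row/column v, invert, read entry uu.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # 4-cycle plus a chord
n = 4
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1
    L[u, v] -= 1; L[v, u] -= 1

def leverage(u, v):
    keep = [i for i in range(n) if i != v]
    Minv = np.linalg.inv(L[np.ix_(keep, keep)])
    return Minv[keep.index(u), keep.index(u)]

taus = [leverage(u, v) for u, v in edges]
print(sum(taus))  # = n - 1 = 3, matching the linearity-of-expectation identity
```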
Dense vs. Sparse
Problem: dense (m ≈ n^2) / sparse (m ≈ n)
Determinant: O(n^2.37) / Õ(nm)
Rand. spanning tree: Õ(n^{5/3} m^{1/3}) / Õ(m^{4/3})
Max matching: Õ(m^{7/4})
Parallel shortest path: Õ(m n^{1/2})
Approx. maxflow: Õ(m)
Lx = b: Õ(m)
Missing piece: a sparsification subroutine.
Sparsification for det+(G)
O(n^{1.5}) (rescaled) edges, sampled with probabilities proportional to τ_e, give H such that det+(H) ≈ det+(G).
Reason for the n^2 running time below: the τ_e need to be estimated to error n^{−1/4}.
Applications: Õ(n^2 δ^{−2}) time algorithms for:
Estimating the determinant to error (1 ± δ)
Generating a spanning tree from a distribution with TV distance ≤ δ to the uniform distribution
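As an illustration of the sampling step, here is a minimal sketch in the generic leverage-score-sampling template (i.i.d. picks with the standard 1/(s·q_e) rescaling). This is a stand-in, not the paper's algorithm: the determinant-preserving scheme samples without replacement and uses its own rescaling, and `sparsify_by_leverage` is a hypothetical helper.

```python
import numpy as np

def sparsify_by_leverage(edges, taus, s, rng):
    """Generic leverage-score sampling sketch: pick s edges i.i.d. with
    probability q_e proportional to tau_e and rescale kept weights by
    1/(s * q_e). (The paper's determinant-preserving variant samples
    without replacement and rescales differently.)"""
    q = np.asarray(taus) / np.sum(taus)        # sampling probabilities
    picks = rng.choice(len(edges), size=s, p=q)
    H = {}
    for i in picks:
        u, v, w = edges[i]
        H[(u, v)] = H.get((u, v), 0.0) + w / (s * q[i])  # rescale
    return [(u, v, w) for (u, v), w in H.items()]

# Example call: H = sparsify_by_leverage(edges, taus, s=100,
#                                        rng=np.random.default_rng(0))
```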
Outline
Laplacians and the matrix-tree theorem
Applications of determinant-preserving sparsification
Proof of sparsification guarantees
Schur Complements
Partition V into V1 and V2:
L = [ L11  L12 ]
    [ L21  L22 ]
Schur complement: Sc(L, V2) = L22 − L21 L11^{−1} L12, the partial state of Gaussian elimination after eliminating V1 (the result lives on V2).
Key fact: Sc(L, Vi) is still a graph Laplacian, so it can be sparsified! (Figure: G split into G[V1] and G[V2].)
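A small numpy check of the key fact on a toy graph of my own: the Schur complement has zero row sums and nonpositive off-diagonals, i.e. it is again a graph Laplacian.

```python
import numpy as np

# Sc(L, V2) = L22 - L21 L11^{-1} L12 on a small weighted graph.
edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (3, 0, 3.0), (1, 3, 1.0)]
n = 4
L = np.zeros((n, n))
for u, v, w in edges:
    L[u, u] += w; L[v, v] += w
    L[u, v] -= w; L[v, u] -= w

V1, V2 = [0, 1], [2, 3]   # eliminate V1, keep V2
L11, L12 = L[np.ix_(V1, V1)], L[np.ix_(V1, V2)]
L21, L22 = L[np.ix_(V2, V1)], L[np.ix_(V2, V2)]
Sc = L22 - L21 @ np.linalg.inv(L11) @ L12

print(np.allclose(Sc.sum(axis=1), 0))               # True: zero row sums
print(np.all(Sc - np.diag(np.diag(Sc)) <= 1e-12))   # True: off-diagonals <= 0
```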
Determinant Approximation
The determinant is invariant under row/column operations (the Gaussian elimination primitives), giving
det+(L) = det(L11) · det+(Sc(L, V2)).
[KLPSS `16] / [JKPS `17]: can sample Sc(L, V2) with probabilities (n^{−1/4}-close to) τ_e in Õ(n^2) time.
Recurse and control errors: T(n, δ) = 2 T(n/2, δ/√2) + Õ(n^2 δ^{−2}) = Õ(n^2 δ^{−2}).
Requires analyzing variance instead of worst-case errors.
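The factorization itself is easy to verify numerically. A self-contained sketch on the same style of toy graph (any (n−1)-sized minor works, since all cofactors of a Laplacian agree):

```python
import numpy as np

# Verify det+(L) = det(L11) * det+(Sc(L, V2)) on a small graph.
edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (3, 0, 3.0), (1, 3, 1.0)]
n = 4
L = np.zeros((n, n))
for u, v, w in edges:
    L[u, u] += w; L[v, v] += w
    L[u, v] -= w; L[v, u] -= w

V1, V2 = [0, 1], [2, 3]
L11, L12 = L[np.ix_(V1, V1)], L[np.ix_(V1, V2)]
L21, L22 = L[np.ix_(V2, V1)], L[np.ix_(V2, V2)]
Sc = L22 - L21 @ np.linalg.inv(L11) @ L12

lhs = np.linalg.det(L[1:, 1:])                        # det+(L)
rhs = np.linalg.det(L11) * np.linalg.det(Sc[1:, 1:])  # det(L11) * det+(Sc)
print(np.isclose(lhs, rhs))  # True
```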
High-Level Role of Sc(G, V2)
det+(L) = det(L11) · det+(Sc(L, V2)): the divide step of divide-and-conquer algorithms.
(Figure: G split into G[V1] and Sc(G, V2); the Schur complement introduces new edges.)
Algorithms that Use Div-Conquer
Some of the problems below use divide-and-conquer directly; others use it in an inner loop.
(Same table as before, dense m ≈ n^2 / sparse m ≈ n: Determinant O(n^2.37) / Õ(nm); Rand. spanning tree Õ(n^{5/3} m^{1/3}) / Õ(m^{4/3}); Max matching Õ(m^{7/4}); Parallel shortest path Õ(m n^{1/2}); Approx. maxflow Õ(m); Lx = b Õ(m).)
Div-Conquer: OI Version
CTSC = Chinese (IOI) Team Selection Contest.
CTSC `13 report / CTSC `08 homework:
Div-conquer + convexity / Monge search
Augmented search trees
KMP and suffix trees/arrays
Voronoi diagrams
Div-Conquer + ? + Errors
[Kyng-Sachdeva FOCS `16] / (our result):
Determinant-preserving sparsifiers of Sc(G, V2) → sampling spanning trees in Õ(n^2) time.
[DKPRS STOC `17]: Õ(n^{5/3} m^{1/3}).
Spanning Tree Distributions
Tree distribution given by: H ← Sparsify(G), then T ← SampleTree(H).
TV distance: d_TV(p, q) = Σ_T |p(T) − q(T)|.
Bound d_TV(trees(G), trees(H = Sparsify(G))) by bounding E_{H | T ⊆ H}[det+(H)^2] for any tree T.
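For intuition, d_TV between two tree distributions can be computed exactly by brute force on tiny graphs. A sketch with a hypothetical toy pair (here H is just G with one edge weight perturbed, not the paper's sparsifier):

```python
from itertools import combinations

def tree_distribution(n, edges):
    """Map each spanning tree (tuple of edge indices) to its probability
    w(T) / sum_T w(T), by enumerating all (n-1)-edge subsets."""
    dist = {}
    for idxs in combinations(range(len(edges)), n - 1):
        parent = list(range(n))            # union-find: acyclicity test
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]; x = parent[x]
            return x
        ok, w = True, 1.0
        for i in idxs:
            u, v, wi = edges[i]
            ru, rv = find(u), find(v)
            if ru == rv:
                ok = False; break
            parent[ru] = rv
            w *= wi
        if ok:                             # n-1 acyclic edges = spanning tree
            dist[idxs] = w
    total = sum(dist.values())
    return {t: w / total for t, w in dist.items()}

G = [(0, 1, 1.0), (1, 2, 1.0), (2, 0, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
H = [(0, 1, 1.2), (1, 2, 1.0), (2, 0, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
p, q = tree_distribution(4, G), tree_distribution(4, H)
trees = set(p) | set(q)
dtv = sum(abs(p.get(t, 0) - q.get(t, 0)) for t in trees)  # slide's definition
print(dtv)
```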
Modified Algorithm
(Figure: V1 and V2, with new edges formed by the Schur complement.)
A random spanning tree in Sc(G, V1) decides all edges in G[V1].
Contract/remove the edges of G[V1] based on the tree picked.
Find another spanning tree on Sc(G', V2).
Key New Ideas
On quasi-bipartite G', there is an (efficient) bijection between trees of Sc(G', V2) and trees of G'.
Sparsify Schur complements, then recurse.
Similar to, but messier than, the determinant analysis.
Outline
Laplacians and the matrix-tree theorem
Applications of determinant-preserving sparsification
Proof of sparsification guarantees
Simplifying Assumptions
All edges have leverage score ≤ n/m. (In any G, leverage scores sum to n − 1.)
Split each edge e into m/τ_e copies, and let m → ∞.
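A quick numpy check of the splitting step on a toy graph of my own: replacing an edge of weight w by c parallel copies of weight w/c leaves the Laplacian (and hence every other leverage score) unchanged, while each copy carries leverage score τ_e/c.

```python
import numpy as np

# Splitting edge e = (0, 1) of weight w into c copies of weight w/c leaves
# L (and L^+) unchanged; each copy's score is (w/c) * b_e^T L^+ b_e = tau_e / c.
edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 0, 1.0)]
n = 3
L = np.zeros((n, n))
for u, v, w in edges:
    L[u, u] += w; L[v, v] += w
    L[u, v] -= w; L[v, u] -= w
Lp = np.linalg.pinv(L)

u, v, w = edges[0]
reff = Lp[u, u] + Lp[v, v] - 2 * Lp[u, v]  # effective resistance of (0,1)
tau, c = w * reff, 5
tau_copy = (w / c) * reff                  # leverage score of one copy
print(np.isclose(tau, c * tau_copy))       # True: the tau mass splits evenly
```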
Aside: Concentration Bounds
Matrix concentration (e.g. [RV `97], [Tropp `12]): s = Õ(n ε^{−2}) samples give L_H ≈_ε L_G.
L_H ≈_ε L_G implies all eigenvalues agree to within 1 ± ε.
det+(G) is the product of all non-zero eigenvalues of L_G, and per-eigenvalue errors compound as (1 ± ε)^n ≈ e^{±εn}: over ≈ n eigenvalues we need ε ≈ 1/n, i.e. s ≈ n^3.
Variance-based proofs instead give s ≈ n^2.
Main Motivation
[Janson `94]: a random graph with O(n^{1.5}) edges, G(n, O(n^{1.5})), has concentrated numbers of:
Spanning trees
Matchings
Hamiltonian cycles
Main insight: uniform leverage scores ≈ complete graph.
Aside: this does not work for G(n, n^{−1/2})! (In the independent-edge model, the total edge count itself fluctuates, which is already enough to spoil concentration.)
Expectation
Random subset of s > n^2 edges, picked without replacement.
Probability a single edge is picked: p = s/m.
Probability a fixed tree survives: p^{n−1} · exp(−n^2/(2s) − o(1)).
Linearity of expectation: E[T(H)] = T(G) · p^{n−1} · exp(−n^2/(2s) − o(1)).
Goal: show E[T(H)^2] is close to the square of this.
Bounding the Second Moment
Goal: show E[T(H)^2] is close to T(G)^2 · p^{2n−2} · exp(−n^2/s − o(1)).
Main steps:
1. Express E[T(H)^2] as a sum, over pairs of trees, of the probability that both survive in H.
2. Express that probability in terms of the size of the intersection.
3. Bound the number of pairs of trees with intersection size k, using bounded leverage scores + negative correlation.
E[T(H)^2]
Interpretation: the expected number of pairs of trees (T1, T2) that both survive in H.
Pr[both T1, T2 ⊆ H] depends only on k = |T1 ∩ T2| (T1 ∪ T2 has 2(n−1) − k edges, so each shared edge saves a factor of p); bound it by
p^{2n−2} · exp(−2n^2/s) · ((1/p) · (1 + 2n/s))^k.
Incorporating Leverage Scores
S: a subset of k edges.
Negative correlation between tree edges: number of trees T containing S ≤ T(G) · ∏_{e∈S} τ_e.
Uniform leverage score assumption τ_e ≤ n/m:
Number of trees containing S ≤ T(G) · (n/m)^k
Pairs of trees both containing S ≤ T(G)^2 · (n/m)^{2k}
Number of subsets of E of size k: C(m, k) ≤ m^k / k!
Total number of pairs sharing k edges: ≤ T(G)^2 · (1/k!) · (n^2/m)^k.
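The negative-correlation bound can be checked by enumeration on a tiny graph. A sketch on K4 (my own toy example), with leverage scores computed from the pseudoinverse:

```python
import numpy as np
from itertools import combinations

# Check: #{trees containing S} <= T(G) * prod_{e in S} tau_e on K4.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]
n = 4
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1
    L[u, v] -= 1; L[v, u] -= 1
Lp = np.linalg.pinv(L)
tau = [Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v in edges]

def is_tree(idxs):
    parent = list(range(n))                # union-find: acyclicity test
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]; x = parent[x]
        return x
    for i in idxs:
        ru, rv = find(edges[i][0]), find(edges[i][1])
        if ru == rv:
            return False
        parent[ru] = rv
    return True

trees = [t for t in combinations(range(len(edges)), n - 1) if is_tree(t)]
for S in combinations(range(len(edges)), 2):   # all 2-edge subsets
    count = sum(set(S) <= set(t) for t in trees)
    bound = len(trees) * tau[S[0]] * tau[S[1]]
    assert count <= bound + 1e-9
print("negative correlation bound holds on K4")
```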
Putting Things Together
E[T(H)^2] = Σ_k (# pairs T1, T2 with |T1 ∩ T2| = k) · Pr_H[H contains both T1, T2]
≤ Σ_k T(G)^2 · (1/k!) · (n^2/m)^k · p^{2n−2} · exp(−2n^2/s) · ((1/p) · (1 + 2n/s))^k.
Terms depending on k (using (n^2/m) · (1/p) = n^2/s):
Σ_k (1/k!) · ((n^2/m) · (1/p) · (1 + 2n/s))^k ≤ exp(n^2/s + O(n^3/s^2)).
Substituting in E[T(H)]: E[T(H)^2] ≤ E[T(H)]^2 · exp(O(n^3/s^2)).
Future Directions
Matrix-concentration based extensions?
Determinantal processes? ([Janson `94] also covers matchings and Hamiltonian tours.)
Getting fewer than n^{1.5} edges?
Directly work with TV distances? (skip the determinant)
Removing the n^2 factor (a consequence of needing estimates of τ_e with error n^{−1/4})
Combine with algorithms for sparse graphs? ([KM `09], [MST `15])
(Some) references:
Paper: [DKPRS `17]: [Kyng-Sachdeva `16]: [KLPSS `16]: