Stephen Alstrup (U. Copenhagen) Esben Bistrup Halvorsen (U. Copenhagen) Holger Petersen (-) Aussois – ANR DISPLEXITY - March 2015 Cyril Gavoille (LaBRI, U. Bordeaux)

Agenda
1. The Problem
2. Squashed Cube Dimension
3. Dominating Set Technique
4. Another Solution
5. Conclusion

The Distance Labeling Problem
Given a graph, find a labeling of its nodes such that the distance between any two nodes can be computed by inspecting only their labels. Subject to:
- label the nodes of every graph of a family (a scheme),
- using short labels (size measured in bits), and
- with a fast distance decoder (algorithm).

Motivation [Peleg ’99]
If a short label (say, of poly-logarithmic size) can be added to the address of the destination, then routing to any destination can be done without routing tables and with a “limited” number of messages.
[Figure: node x sends a message to y whose header carries y’s label and a hop count; x can compute dist(x,y).]

A Distributed Data Structure
- Get the labels of the nodes involved in the query
- Compute/decode the answer from the labels
- No other source of information is required

Label Size: a trivial upper bound
There is a labeling scheme using labels of O(n log n) bits for every (unweighted) n-node graph G, with constant-time decoding:
L_G(i) = (i, [dist(i,1), …, dist(i,j), …, dist(i,n)]) ➟ a distance vector
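A minimal sketch of this trivial scheme (the graph, function names, and representation below are illustrative, not from the talk): each label stores the node’s index together with its full distance vector, so decoding is a single array lookup.

```python
from collections import deque

def bfs_distances(adj, src):
    """Single-source BFS distances in an unweighted graph."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def make_labels(adj):
    """Label L_G(i) = (i, distance vector of i): O(n log n) bits per node."""
    n = len(adj)
    labels = {}
    for i in range(n):
        d = bfs_distances(adj, i)
        labels[i] = (i, [d[j] for j in range(n)])
    return labels

def decode(label_x, label_y):
    """Constant-time decoding: look up y's index in x's distance vector."""
    _, vec = label_x
    j, _ = label_y
    return vec[j]

# 4-cycle 0-1-2-3-0
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
labels = make_labels(adj)
print(decode(labels[0], labels[2]))  # 2
```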

Label Size: a trivial lower bound
No labeling scheme can guarantee labels of fewer than 0.5n bits for all n-node graphs (whatever the distance decoder complexity is).
Proof. The sequence L_G(1), …, L_G(n) together with the decoder δ(·,·) is a representation of G on n·k + O(1) bits if each label has size k: i is adjacent to j iff δ(L_G(i), L_G(j)) = 1. Hence
n·k + O(1) ≥ log₂(#graphs(n)) = n(n−1)/2.

Squashed Cube Dimension [Graham, Pollack ’71]
Labeling: word over {0,1,*}
Decoder: Hamming distance (where * = don’t care)
(graphs must be connected)
SCdim(G) ≥ max{n⁺, n⁻}, where n⁺/n⁻ = number of positive/negative eigenvalues of the distance matrix of G
SCdim(K_n) = n−1

Squashed Cube Dimension [Winkler ’83]
Theorem. Every connected n-node graph has squashed cube dimension at most n−1. Therefore, for the family of all connected n-node graphs:
- Label size: O(n) bits; in fact n·log₂3 ≈ 1.58n bits
- Decoding time: O(n/log n) in the RAM model
Rem: all graphs = connected graphs + O(log n) bits
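A tiny sketch of the squashed-cube decoder (the K₃ labels below are hand-picked for illustration and are not from the talk): the distance between two words is the number of positions where one has 0 and the other 1, with * matching everything.

```python
def squashed_hamming(a, b):
    """Hamming distance over {0,1,*}, where * is a don't-care symbol."""
    return sum(1 for x, y in zip(a, b) if {x, y} == {"0", "1"})

# An illustrative squashed-cube embedding of K3 in dimension n-1 = 2:
# every pair of distinct labels decodes to distance exactly 1.
labels = {"a": "0*", "b": "10", "c": "11"}
print(squashed_hamming(labels["a"], labels["b"]))  # 1
```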

Dominating Set Technique [G., Peleg, Pérennès, Raz ’01]
Definition. A k-dominating set for G is a set S such that, for every node x ∈ G, dist(x,S) ≤ k.
Fact: Every n-node connected graph has a k-dominating set of size at most n/(k+1).
Target:
- Label size: O(n) bits
- Decoding time: O(log log n)

Part 1
dom_S(x) = dominator of x, i.e., its closest node in S
Observation 1: dist(x,y) = dist(dom_S(x), dom_S(y)) + e(x,y), where e(x,y) ∈ [−2k, +2k]
Observation 2: One can encode e(x,y) with log₂(4k+1) bits
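Observation 1 can be checked mechanically. The sketch below (a path graph and a hand-picked 2-dominating set, both chosen purely for illustration) verifies that the error term e(x,y) indeed stays in [−2k, +2k], as the triangle inequality guarantees.

```python
# Path graph on nodes 0..9, where dist(a,b) = |a - b|.
def pdist(a, b):
    return abs(a - b)

k = 2
S = [2, 7]  # a 2-dominating set: every node is within distance 2 of S

# dom_S(x) = closest node of S to x
dom = {x: min(S, key=lambda s: pdist(x, s)) for x in range(10)}
assert all(pdist(x, dom[x]) <= k for x in range(10))

# Observation 1: e(x,y) = dist(x,y) - dist(dom_S(x), dom_S(y)) lies in [-2k, +2k]
errors = [pdist(x, y) - pdist(dom[x], dom[y]) for x in range(10) for y in range(10)]
print(min(errors), max(errors))  # -4 4
```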

Part 2
Idea: take a collection of k_i-dominating sets S_i, i = 0…t (they may intersect), where k_0 = 0, |S_i| ≤ n/(k_i+1), and |S_t| = o(n/log n). Denote x_0 = x and x_i = dom_{S_i}(x_{i−1}).
Decoder: dist(x,y) = ∑_{1≤i≤t} e_i(x_{i−1}, y_{i−1}) + dist(x_t, y_t)
Time: O(t)

Part 3
Decoder: dist(x,y) = ∑_{1≤i≤t} e_i(x_{i−1}, y_{i−1}) + dist(x_t, y_t)
Label size: ∑_{1≤i≤t} |S_{i−1}|·log₂(4k_i+1) + |S_t|·O(log n) bits
size ≤ n·∑_{1≤i≤t} log₂(4k_i+1)/(k_{i−1}+1) + o(n)
Choose: k_i = 4^i − 1 and t = log log n
➟ k_0 = 0, log₂(4k_i+1) = 2i+2, and |S_t| = o(n/log n)
➟ size ≤ n·∑_{i≥1} (2i+2)/4^{i−1} = (56/9)n ≈ 6.22n bits
➟ time O(t) = O(log log n)
[ k_{0…4} = (0, 2, 15, 145, 940), k_i = exp(k_{i−1}), t = log* n, size ≈ 5.90n ]
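The constant 56/9 can be sanity-checked numerically (a quick illustrative computation, not part of the talk):

```python
# Per-node label-size constant for the choice k_i = 4^i - 1:
# sum_{i>=1} (2i+2) / 4^(i-1), truncated far enough for the tail to vanish.
total = sum((2 * i + 2) / 4 ** (i - 1) for i in range(1, 60))
print(total)  # ~6.2222 = 56/9
```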

Another Solution
- Label size: n·(log₂3)/2 ≈ 0.793n bits
- Decoding time: O(1)

Warm-up (ignoring decoding time)
Assume G has a Hamiltonian cycle x_0, x_1, …, x_{n−1}.
➟ dist(x_0, x_i) = dist(x_0, x_{i−1}) + e_0(x_i, x_{i−1}), where e_0(x_i, x_{i−1}) ∈ {−1, 0, +1}
➟ dist(x_0, x_i) = ∑_{1≤j≤i} e_0(x_j, x_{j−1})
Label: L_G(x_i) = (i, [e_i(x_{i+1}, x_i), e_i(x_{i+2}, x_{i+1}), …])
Size: n·(log₂3)/2 bits, by storing only half of the e_i()’s
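A sketch of this warm-up on a cycle graph (where the Hamiltonian order is trivial). All names are illustrative; for simplicity it stores the full difference sequence at every node rather than only half, so it realizes the n·log₂3 bound before the halving trick is applied.

```python
def cycle_dist(n, a, b):
    """Graph distance on the n-cycle."""
    d = abs(a - b)
    return min(d, n - d)

def make_label(n, i):
    """Ternary difference sequence of distances from x_i along the Hamiltonian order."""
    diffs = []
    prev = 0  # dist(x_i, x_i)
    for step in range(1, n):
        cur = cycle_dist(n, i, (i + step) % n)
        diffs.append(cur - prev)  # always in {-1, 0, +1}
        prev = cur
    return diffs

def decode(n, i, label_i, j):
    """Telescoping sum: dist(x_i, x_j) = sum of the first (j - i mod n) differences."""
    steps = (j - i) % n
    return sum(label_i[:steps])

n = 7
labels = {i: make_label(n, i) for i in range(n)}
print(decode(n, 0, labels[0], 3))  # 3
```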

Applications
Assume G is 2-connected. By Fleischner’s Theorem [1974], G² has a Hamiltonian cycle x_0, x_1, …, x_{n−1}. So e_0(x_i, x_{i−1}) = dist(x_0, x_i) − dist(x_0, x_{i−1}) ∈ {−2, −1, 0, 1, 2}.
Label size: n·(log₂5)/2 ≈ 1.16n bits
For every tree T, T³ is Hamiltonian. So, if G is connected, then G³ is Hamiltonian, and G has a labeling scheme with n·(log₂7)/2 ≈ 1.41n bits.

Constant Time Solution
T = any rooted spanning tree of G
e_x(u) = dist(x,u) − dist(x, parent(u)) ∈ {−1, 0, +1}
Telescopic sum: if z is an ancestor of y, then dist(x,y) = dist(x,z) + ∑_{u ∈ T[y,z)} e_x(u)
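The telescopic sum is easy to verify on a small example (the tree and names below are illustrative; here G is itself a tree, whereas the talk applies this with T a spanning tree of G and dist the graph distance):

```python
from collections import deque

def bfs(adj, src):
    """Distances from src by breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

# Tree rooted at 0: children of 0 are 1,2; of 1 are 3,4; of 2 is 5.
adj = {0: [1, 2], 1: [0, 3, 4], 2: [0, 5], 3: [1], 4: [1], 5: [2]}
parent = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2}
dists = {x: bfs(adj, x) for x in adj}

def e(x, u):
    """e_x(u) = dist(x,u) - dist(x,parent(u)), always in {-1, 0, +1}."""
    return dists[x][u] - dists[x][parent[u]]

def telescoped(x, y, z):
    """dist(x,y) via dist(x,z) plus the sum of e_x(u) over u in T[y,z)."""
    total = dists[x][z]
    u = y
    while u != z:
        total += e(x, u)
        u = parent[u]
    return total

print(telescoped(3, 5, 0), dists[3][5])  # 4 4
```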

Dominating Set & Edge-Partition
Select an α-dominating set S = {s_i} with root(T) ∈ S and |S| ≤ (n/α) + 1.
Edge-partition: T has an edge-partition into at most n/β subtrees {T_i} of ≤ 2β edges each.
Choose:
- α = log n · log log n
- β = log α · log log n ≈ (log log n)²

Distance decoder: dist(x,y) = dist(x,z) + ∑_{u ∈ T[y,z)} e_x(u) = A + B
Storage for computing A:
#z × log n = (n/β)·log n = ω(n) ✖
But A = dist(x,s) + A′, where A′ = dist(x,z) − dist(x,s) ∈ [−2α, +2α]
➟ only log₂(2α+1) bits to store A′.
#s × log n = (n/α)·log n = o(n) ✔
#z × log(2α+1) = (n/β)·log α = o(n) ✔

Distance decoder: dist(x,y) = dist(x,z) + ∑_{u ∈ T[y,z)} e_x(u) = A + B
Storage for computing B: assume y ∈ T_i
B = f(T_i, e_x(T_i), rk(y)) for some function f(…), where e_x(T_i) = [−1, 0, +1, +1, 0, −1, …]
But the input (T_i, e_x(T_i), rk(y)) writes on O(β) = O(log² log n) bits!
➟ one can tabulate f(…) once for all its possible inputs with 2^{O(β)} × log₂B = o(n) bits, to get constant time.

Storage for u (in its label): storage for all A, A′, B ➟ o(n) bits
Storage also for:
1. i s.t. u ∈ T_i, rk(u) in T_i, and z = root(T_i)
2. the closest s ∈ S ancestor of z
3. a coding of T_i with O(β) bits
4. e_u(T_i), e_u(T_{i+1}), … for half the total information (this latter costs n·(log₂3)/2 + o(n) bits)
Decoder: [ x “knows” y; otherwise swap x and y ]
1. y tells x: i, T_i, rk(y), z, s
2. x computes and returns A + A′ + B

Conclusion
Main question: design a labeling scheme with 0.5n + o(n)-bit labels and a constant-time decoder?
Bonus:
- The technique extends to weighted graphs: we show n·log₂(2w+1)/2 bits for edge-weights in [1,w], and a lower bound of n·log₂(w/2+1)/2.
- We also show a 1-additive (one-sided error) labeling scheme of n/2 bits, and a lower bound of n/4.