Randomized Algorithms

Randomized Algorithms Morteza ZadiMoghaddam Amin Sayedi

Types of Randomized algorithms Las Vegas Monte Carlo

Las Vegas Always gives the correct answer. The running time is a random variable, but its expectation is bounded. Quicksort with random pivots is a Las Vegas algorithm.

Monte Carlo It may produce an incorrect answer! However, we can bound the probability of error. By running it many times with independent random choices, we can make the failure probability arbitrarily small, at the expense of running time.

Monte Carlo Example Suppose we want to find a number among n given numbers which is larger than or equal to the median.

Monte Carlo Example Suppose A1 < … < An. We want some Ai with i ≥ n/2. Any deterministic algorithm needs Ω(n) time to produce the answer, and n may be very large, say n = 100,000,000,000!

Monte Carlo Example Choose 100 of the numbers uniformly at random. Find the maximum among these samples and return it.
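
A minimal sketch of this sampling algorithm in Python; the sample size of 100 matches the slide, and the example input is an assumption made for illustration.

```python
import random

def probably_above_median(numbers, sample_size=100):
    """Monte Carlo: return an element that is >= the median with
    failure probability roughly 1/2**sample_size."""
    # Each sampled element falls in the lower half with probability about 1/2,
    # so the maximum of the sample is below the median with prob ~1/2**k.
    sample = random.choices(numbers, k=sample_size)  # sampling with replacement
    return max(sample)

# Example: on 1..10**6 the answer should almost always be >= 500000.
print(probably_above_median(range(1, 10**6 + 1)))
```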

Monte Carlo Example The running time of this algorithm is O(1). The probability of failure is 1/2^100. The algorithm may return a wrong answer, but the probability of that is far smaller than the probability of a hardware failure or even an earthquake!

Monte Carlo Suppose the output is Yes or No. The error can be one-sided or two-sided.

RP Class ( randomized polynomial ) Polynomially bounded running time in the worst case. If the answer is Yes: Pr[ return Yes ] ≥ ½. If the answer is No: Pr[ return Yes ] = 0. The constant ½ is not essential; it can be amplified by repetition.

PP Class ( probabilistic polynomial ) Polynomially bounded running time in the worst case. If the answer is Yes: Pr[ return Yes ] > ½. If the answer is No: Pr[ return Yes ] < ½. Unfortunately this definition is weak, because the gap around ½ matters but is not bounded away from zero.

Routing Problem There are n computers, each holding one packet. Packet i has a destination D(i). Two packets cannot traverse the same edge simultaneously. An oblivious algorithm is required: the route of each packet may depend only on its source and destination.

Routing Problem For any deterministic oblivious algorithm on a network of N nodes, each of outdegree d, there is an instance of permutation routing requiring Ω(√(N/d)) steps.

Routing Problem Pick a random intermediate destination for each packet. Packet i first travels to its intermediate destination and then to its final destination. With probability at least 1 − 1/N, every packet reaches its destination in 14n or fewer steps on the hypercube Qn (so N = 2^n). The expected number of steps is at most 15n.
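
A sketch of the route computation behind this scheme on the hypercube Qn, using the standard bit-fixing path. The queueing discipline and the delay analysis are omitted, so this only illustrates how the random intermediate destination is used.

```python
import random

def bit_fixing_path(src, dst, n):
    """Route from src to dst in Q_n by fixing differing bits left to right."""
    path, cur = [src], src
    for b in range(n):
        if (cur ^ dst) >> b & 1:
            cur ^= 1 << b            # flip bit b, i.e. cross that hypercube edge
            path.append(cur)
    return path

def two_phase_route(src, dst, n):
    """Valiant's scheme: go to a random intermediate node, then to dst."""
    mid = random.randrange(1 << n)
    return bit_fixing_path(src, mid, n) + bit_fixing_path(mid, dst, n)[1:]

# Example on the 10-cube (1024 nodes).
print(two_phase_route(0, 0b1111111111, 10))
```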

Maximum Satisfiability You have m clauses over n boolean variables. Each clause contains some of the variables or their complements (literals). A clause is satisfied if at least one of its literals is true. We want to set the variables so that the number of satisfied clauses is maximized.

Example for Maximum Sat There are 3 variables A, B and C. M1 = (A) or (B) M2 = (A) or (not B) or (not C) M3 = (C) M4 = (B) or (not C) M5 = (not C)

Example of Maximum Sat Set A = True Set B = True Set C = False Four of the clauses are satisfied.

Maximum Sat This is a famous problem with no known polynomial-time algorithm; it is NP-hard.

Maximum Sat For any set of m clauses, there is a truth assignment for the variables that satisfies at least m/2 clauses.

Maximum Sat Let Zi = 1 if the i-th clause is satisfied and 0 otherwise. Set each variable uniformly at random. The probability that a clause with k literals is satisfied is 1 − 1/2^k ≥ ½. So E[Z1] + … + E[Zm] ≥ m/2. Thus there exists at least one assignment with Z1 + … + Zm ≥ m/2.
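
A minimal sketch of the random-assignment experiment; the signed-literal clause encoding (positive i for xi, negative i for ¬xi) is an assumption made for illustration.

```python
import random

def random_assignment_maxsat(clauses, n):
    """Set every variable to True/False with probability 1/2 each and
    count satisfied clauses; in expectation this is >= m/2."""
    assignment = [random.random() < 0.5 for _ in range(n + 1)]  # index 0 unused
    satisfied = 0
    for clause in clauses:
        if any(assignment[lit] if lit > 0 else not assignment[-lit]
               for lit in clause):
            satisfied += 1
    return assignment, satisfied

# The example from the earlier slide: variables A=1, B=2, C=3.
clauses = [[1, 2], [1, -2, -3], [3], [2, -3], [-3]]
print(random_assignment_maxsat(clauses, 3))
```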

Maximum Sat algorithm The problem is NP-hard, so we look for approximation algorithms. We have an algorithm that produces an answer that is at least ½ of the best answer. If every clause has at least 2 literals, we have an algorithm that produces an answer that is at least ¾ of the best one.

Maximum Sat algorithm Introduce a variable yi for each Xi and a variable Zj for each clause. We want to maximize Z1 + … + Zm subject to one constraint per clause: ∑ yi (over Xi appearing uncomplemented in clause j) + ∑ (1 − yi) (over Xi appearing complemented in clause j) ≥ Zj. With yi, Zj ∈ {0,1} this is an integer linear program, which is NP-hard to solve, so we relax it to a linear program with 0 ≤ yi, Zj ≤ 1.

Maximum Sat Solve the relaxed problem using linear programming. You get a real number in [0,1] for each yi and Zj. Assign Xi = true with probability yi (randomized rounding). The expected number of satisfied clauses is at least (1 − 1/e) times the best answer.
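
A hedged sketch of the relaxation-and-rounding step using scipy.optimize.linprog; SciPy, and the same signed-literal encoding as above, are assumptions of the sketch, and any LP solver would do.

```python
import random
import numpy as np
from scipy.optimize import linprog

def lp_rounding_maxsat(clauses, n):
    """Solve the LP relaxation of MAX-SAT, then set x_i = True with
    probability y_i (randomized rounding)."""
    m = len(clauses)
    # Variable vector: [y_1..y_n, z_1..z_m]; maximize sum z_j = minimize -sum z_j.
    c = np.concatenate([np.zeros(n), -np.ones(m)])
    A = np.zeros((m, n + m))
    b = np.zeros(m)
    for j, clause in enumerate(clauses):
        # Constraint: z_j - sum_{pos} y_i + sum_{neg} y_i <= (#negated literals),
        # i.e. z_j <= sum_{pos} y_i + sum_{neg} (1 - y_i).
        A[j, n + j] = 1.0
        for lit in clause:
            if lit > 0:
                A[j, lit - 1] -= 1.0
            else:
                A[j, -lit - 1] += 1.0
                b[j] += 1.0
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, 1)] * (n + m))
    y = res.x[:n]
    return [random.random() < yi for yi in y]   # rounded truth assignment

clauses = [[1, 2], [1, -2, -3], [3], [2, -3], [-3]]
print(lp_rounding_maxsat(clauses, 3))
```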

Maximum Sat algorithm Running both algorithms and choosing the better answer gives an answer that is at least ¾ of the best answer, which is better than both ½ and 1 − 1/e.

2-SAT Every clause has at most 2 literals. We want to check whether all clauses can be satisfied; this has a polynomial-time algorithm. Assign random values to the variables. If all clauses are satisfied, we are finished. If there is an unsatisfied clause, the value of at least one of its literals differs from a satisfying assignment. Change the value of one of the variables in this clause, chosen at random. You may make a good change or a bad one.
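
A sketch of this random-walk procedure (Papadimitriou's 2-SAT algorithm), using the same signed-literal clause encoding as before; the 2n² step budget is chosen to match the analysis on the next slide.

```python
import random

def two_sat_random_walk(clauses, n, max_steps=None):
    """Repeatedly pick an unsatisfied clause and flip one of its variables
    at random; succeeds within O(n^2) flips in expectation if satisfiable."""
    if max_steps is None:
        max_steps = 2 * n * n
    assignment = [random.random() < 0.5 for _ in range(n + 1)]  # index 0 unused
    for _ in range(max_steps):
        unsatisfied = [cl for cl in clauses
                       if not any(assignment[l] if l > 0 else not assignment[-l]
                                  for l in cl)]
        if not unsatisfied:
            return assignment          # all clauses satisfied
        clause = random.choice(unsatisfied)
        var = abs(random.choice(clause))
        assignment[var] = not assignment[var]
    return None                        # probably unsatisfiable

# (A v B), (~A v C), (~B v ~C)
print(two_sat_random_walk([[1, 2], [-1, 3], [-2, -3]], 3))
```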

2-SAT The analysis is a random walk on a path 0, 1, …, n, where the position tracks how many variables agree with some satisfying assignment. If you are at 0 you go to 1. If you are at i you go to i+1 or i−1 with equal probability. The expected number of steps to reach the end of the path is O(n²), so the given algorithm runs in O(n³) time.

Graph Connectivity You want to check whether two vertices u and v are in the same connected component. Start a random walk from v of length 2n³. If you have not visited u, the probability that u is in the same component is less than ½. By repeating this algorithm, you can make the failure probability arbitrarily small.
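
A sketch of this test. The walk length 2n³ is taken from the slide; the dict-of-adjacency-lists representation is an assumption, and every vertex is assumed to have at least one neighbor.

```python
import random

def probably_connected(adj, u, v):
    """Monte Carlo s-t connectivity: random-walk from v for 2*n^3 steps.
    If u is never visited, report False (one-sided error probability < 1/2)."""
    n = len(adj)
    cur = v
    for _ in range(2 * n ** 3):
        if cur == u:
            return True
        cur = random.choice(adj[cur])   # move to a uniformly random neighbor
    return False

# A 4-cycle 0-1-2-3-0: vertices 0 and 2 are connected.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(probably_connected(adj, 2, 0))
```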

Graph Connectivity The running time of the algorithm is O(n³). The required space is O(log n).

Diameter of a Point Set You want to find the diameter of a set S of n points in space. Let I(x) be the convex body formed by the intersection of the n balls of radius x centered at the n points. Let F(p) be the distance between p and the point of S farthest from p.

Diameter of a Point Set Consider I(x) with x = F(p). For any q in S: if q is in I(x) then F(q) ≤ F(p), and if q is not in I(x) then F(q) > F(p).

Diameter of a Point Set Pick a point p in S at random. Compute F(p). [O(n)] Set x = F(p). Compute I(x). [O(n log n)] Find the points outside I(x); call this set T. [O(n log n)] If T is empty, return x as the answer; otherwise continue recursively on T.

Diameter of a Point Set The expected running time of the algorithm above is O(n log n): in each step, all points q with F(q) ≤ F(p) for the chosen point p are removed.

All-Pairs Shortest Paths Let G(V,E) be an undirected, connected graph with V = {1,…,n} and |E| = m. The adjacency matrix A is an n × n 0-1 matrix with Aij = Aji = 1 if and only if the edge (i,j) is present in E. We want to compute the matrix D in which Dij equals the length of a shortest path from vertex i to vertex j.

All-Pairs Distances (APD)
1. Z ← A·A.
2. Compute the matrix A' such that A'ij = 1 if and only if i ≠ j and (Aij = 1 or Zij > 0).
3. If A'ij = 1 for all i ≠ j then return D = 2A' − A.
4. Recursively compute the APD matrix D' for the graph G' with adjacency matrix A'.
5. S ← A·D'.
6. Return the matrix D with Dij = 2D'ij if Sij ≥ D'ij·Zii, otherwise Dij = 2D'ij − 1.
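
A sketch of this recursion (Seidel's APD algorithm) in NumPy; it assumes a connected, unweighted graph given as a 0-1 numpy adjacency matrix.

```python
import numpy as np

def apd(A):
    """All-pairs distances of a connected unweighted graph from its
    0-1 adjacency matrix A, via Seidel's recursion."""
    n = len(A)
    Z = A @ A
    # A'[i][j] = 1 iff i != j and (A[i][j] = 1 or there is a 2-step path)
    Ap = ((A + Z > 0) & ~np.eye(n, dtype=bool)).astype(A.dtype)
    if Ap.sum() == n * n - n:          # A' is complete: all distances are 1 or 2
        return 2 * Ap - A
    Dp = apd(Ap)                       # distances in the "squared" graph G'
    S = A @ Dp
    deg = Z.diagonal()                 # deg[i] = Z_ii
    # D_ij = 2*D'_ij if S_ij >= D'_ij * deg(i), else 2*D'_ij - 1
    return 2 * Dp - (S < Dp * deg[:, None]).astype(A.dtype)

# Path graph 0-1-2-3: distances range from 0 to 3.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
print(apd(A))
```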

APSP The APD algorithm computes the distance matrix of an n-vertex graph in time O(MM(n) log n), where MM(n) is the cost of integer matrix multiplication; the fastest matrix multiplication algorithm quoted here runs in O(n^2.376) time.

Boolean Product Witness Matrix Suppose A and B are n × n boolean matrices and P = A·B is their product under Boolean matrix multiplication. A witness for Pij is an index k ∈ {1,…,n} such that Aik = Bkj = 1. Observe that Pij = 1 if and only if it has some witness k.

BPWM
1. W ← −A·B
2. for t = 0, …, log n do
2.1. r ← 2^t
2.2. repeat 3.77 log n times
2.2.1. choose a random R ⊆ {1,…,n} with |R| = r
2.2.2. compute A_R and B_R
2.2.3. Z ← A_R·B_R
2.2.4. for all (i,j) do: if Wij < 0 and Zij is a witness then Wij ← Zij
3. for all (i,j) do: if Wij < 0 then find a witness Wij by brute force.
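
A sketch of this procedure with NumPy. It follows the pseudocode above, including the 3.77 log n repetition count; the encoding of witnesses as 1-based indices and the helper names are assumptions of the sketch.

```python
import math
import random
import numpy as np

def is_witness(A, B, i, j, k):
    """Check whether the 1-based index k is a witness for entry (i, j)."""
    return 1 <= k <= len(A) and A[i, k - 1] == 1 and B[k - 1, j] == 1

def bpwm(A, B):
    """Witness matrix for the Boolean product P = A·B: whenever P_ij = 1,
    W_ij becomes some k (1-based) with A_ik = B_kj = 1."""
    n = len(A)
    W = -(A @ B)                       # W_ij < 0 marks "witness still missing"
    K = np.arange(1, n + 1)            # 1-based indices used to encode witnesses
    for t in range(int(math.log2(n)) + 1):
        r = 2 ** t
        for _ in range(int(3.77 * math.log2(n)) + 1):
            R = set(random.sample(range(n), r))
            mask = np.array([k in R for k in range(n)], dtype=A.dtype)
            AR = A * (mask * K)        # column k of A multiplied by k, 0 outside R
            BR = B * mask[:, None]     # rows of B zeroed outside R
            Z = AR @ BR                # Z_ij = sum of witness indices inside R
            for i, j in zip(*np.where(W < 0)):
                if is_witness(A, B, i, j, Z[i, j]):   # exactly one witness in R
                    W[i, j] = Z[i, j]
    for i, j in zip(*np.where(W < 0)): # brute-force the rare leftovers
        for k in range(n):
            if A[i, k] and B[k, j]:
                W[i, j] = k + 1
                break
    return W

A = np.array([[1, 0], [0, 1]])
B = np.array([[0, 1], [1, 0]])
print(bpwm(A, B))   # witnesses for the 1-entries of the Boolean product
```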

BPWM The BPWM algorithm is a Las Vegas algorithm for the BPWM problem with expected running time O(MM(n) log² n). The probability that no witness is found for Pij before the end of Step 2 is at most (1 − 1/(2e))^(3.77 log n) ≤ 1/n.

Determining Shortest Paths A successor matrix S for an n-vertex graph G is an n × n matrix such that Sij is the index of a neighbor of vertex i that lies on a shortest path from i to j.

APSP
1. Compute the distance matrix D = APD(A).
2. For s ∈ {0, 1, 2} do:
2.1. Compute the 0-1 matrix D^(s) with D^(s)kj = 1 if and only if Dkj + 1 ≡ s (mod 3).
2.2. Compute the witness matrix W^(s) = BPWM(A, D^(s)).
3. Compute the successor matrix S for G from these witness matrices.

APSP Algorithm APSP computes the successor matrix of an n-vertex graph G in expected time O(MM(n) log² n).

Algorithm Contract
1. H ← G, F ← ∅.
2. While H has more than 2 vertices do:
2.1. Choose an edge (x, y) uniformly at random from the edges in H.
2.2. F ← F ∪ {(x, y)}.
2.3. H ← H / (x, y).
3. Return (C, V \ C), the sets of vertices corresponding to the two meta-vertices in H = G/F.
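
A sketch of Algorithm Contract together with the usual repetition that turns it into a min-cut algorithm; the edge-list representation and the number of repetitions are assumptions of the sketch.

```python
import random

def contract_to(edges, n, t):
    """Contract a connected multigraph on vertices 0..n-1 (given as an edge
    list) by merging random edges until t meta-vertices remain."""
    parent = list(range(n))
    def find(v):                       # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    live, remaining = list(edges), n
    while remaining > t:
        u, v = random.choice(live)     # uniform over remaining edges
        ru, rv = find(u), find(v)
        if ru == rv:
            live.remove((u, v))        # self-loop inside a meta-vertex: discard
            continue
        parent[rv] = ru                # contract the edge (merge meta-vertices)
        remaining -= 1
    return [(find(u), find(v)) for u, v in live if find(u) != find(v)]

def karger_min_cut(edges, n, repetitions=None):
    """Repeat contraction to 2 meta-vertices; the crossing edges form a cut.
    About n^2 independent runs find a min cut with good probability."""
    if repetitions is None:
        repetitions = n * n
    return min(len(contract_to(edges, n, 2)) for _ in range(repetitions))

# Two triangles joined by a single edge: the min cut has size 1.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(karger_min_cut(edges, 6))
```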

FastCut
1. n ← |V|.
2. If n ≤ 6 then compute a min-cut of G by brute-force enumeration, else:
2.1. t ← ⌈1 + n/√2⌉.
2.2. Using Algorithm Contract, perform two independent contraction sequences to obtain graphs H1 and H2, each with t vertices.
2.3. Recursively compute min-cuts in each of H1 and H2.
2.4. Return the smaller of the two min-cuts.
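
A sketch of FastCut. The contract_to helper from the previous sketch is repeated so this block runs on its own, and the n ≤ 6 base case uses a few plain contractions instead of the slide's brute-force enumeration.

```python
import math
import random
from itertools import combinations

def contract_to(edges, n, t):
    """As in the previous sketch: contract down to t meta-vertices."""
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    live, remaining = list(edges), n
    while remaining > t:
        u, v = random.choice(live)
        ru, rv = find(u), find(v)
        if ru == rv:
            live.remove((u, v))
            continue
        parent[rv] = ru
        remaining -= 1
    return [(find(u), find(v)) for u, v in live if find(u) != find(v)]

def fastcut(edges, n):
    """Recursive contraction; finds a min cut with probability Omega(1/log n)."""
    if n <= 6:
        return min(len(contract_to(edges, n, 2)) for _ in range(36))
    t = math.ceil(1 + n / math.sqrt(2))
    cuts = []
    for _ in range(2):                 # two independent contractions down to t
        h = contract_to(edges, n, t)
        relabel = {v: i for i, v in enumerate({x for e in h for x in e})}
        cuts.append(fastcut([(relabel[u], relabel[v]) for u, v in h], t))
    return min(cuts)

# Two 5-cliques joined by one edge: the min cut has size 1 (found w.h.p.).
edges = (list(combinations(range(5), 2))
         + list(combinations(range(5, 10), 2)) + [(0, 5)])
print(fastcut(edges, 10))
```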

FastCut Algorithm FastCut succeeds in finding a min-cut with probability Ω(1/log n). It runs in O(n² log n) time and uses O(n²) space.

MST Finding a minimum spanning tree in a graph with n vertices and m edges has a Las Vegas algorithm with expected running time O(n + m), but we don't have enough time to explain it!

Research problems Devise an algorithm for the all-pairs shortest paths problem that does not use matrix multiplication and runs in time O(n^(3−ε)) for a positive constant ε. Devise an algorithm for computing the diameter of an unweighted graph that does not use matrix multiplication and runs in time O(n^(3−ε)) for a positive constant ε.

Research problems Devise a Las Vegas or a deterministic algorithm for min-cuts with running time close to O(n²). Is there a randomized algorithm for min-cuts with expected running time close to O(m)?

Have a nice randomized life