2 Parallel computation Section 10.5 Giorgi Japaridze Theory of Computability

3 Introduction 10.5.a A parallel computer is one that can perform multiple operations simultaneously. Such computers may solve certain problems much faster than sequential computers, which can do only a single operation at a time. In practice the distinction between the two is slightly blurred, because most real computers (including "sequential" ones) are designed to use some parallelism as they execute individual instructions (recall pipelining, for example). Here we focus on massive parallelism, where a huge number (think millions or more) of processing elements actively participate in a single computation. One of the most popular models in theoretical work on parallel algorithms is the Parallel Random Access Machine, or PRAM. In the PRAM model, idealized processors with a simple instruction set patterned on actual computers interact via a shared memory. Our textbook, however, uses an alternative, simpler model of parallel computers: Boolean circuits, already seen in Section 9.3.

4 Uniform Boolean circuits as parallel computers 10.5.b In the Boolean circuit model of a parallel computer, we take each gate to be an individual processor, so we define the processor complexity of a Boolean circuit to be its size. We consider each processor to compute its function in a single time step, so we define the parallel time complexity of a Boolean circuit to be its depth. Any particular circuit has a fixed input size (= number of input variables), so we use circuit families, as defined in Definition 9.27, for recognizing languages. We do, however, need to impose a technical requirement on circuit families so that they correspond to parallel computation models such as PRAMs, where a single machine is capable of handling all input lengths. That requirement states that we can easily obtain all members of a circuit family. This uniformity requirement is reasonable because knowing that a small circuit exists for recognizing certain elements of a language isn't very useful if the circuit itself is hard to find. That leads us to the following definition. Definition 10.34. A family of circuits (C_1, C_2, …) is uniform if some log space transducer T outputs ⟨C_n⟩ when T's input is 1^n. We say that a language has simultaneous size-depth circuit complexity at most (f(n), g(n)) if a uniform circuit family exists for that language with size complexity f(n) and depth complexity g(n).
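
A minimal Python sketch (ours, not from the textbook) of how size and depth fall out of the wiring of a circuit: each node is an input variable or an AND/OR/NOT gate, size counts the gates, and depth is the length of the longest path from an input variable to the output. The Gate/Circuit names and the list-of-nodes representation are assumptions made only for this illustration.

from dataclasses import dataclass

# One node per input variable ("x") or per AND/OR/NOT gate; args lists the
# positions of the node's inputs, so nodes appear in topological order.
@dataclass
class Gate:
    op: str              # "x", "and", "or", or "not"
    args: tuple = ()     # indices of earlier nodes (empty for "x")

@dataclass
class Circuit:
    nodes: list          # nodes[-1] is the output gate

    def size(self):
        # processor complexity: the number of AND/OR/NOT gates
        return sum(1 for g in self.nodes if g.op != "x")

    def depth(self):
        # parallel time complexity: longest input-to-output path
        d = []
        for g in self.nodes:
            d.append(0 if g.op == "x" else 1 + max(d[i] for i in g.args))
        return d[-1]

# (x1 AND x2) OR (NOT x3): 3 gates, depth 2
c = Circuit([Gate("x"), Gate("x"), Gate("x"),
             Gate("and", (0, 1)), Gate("not", (2,)),
             Gate("or", (3, 4))])
print(c.size(), c.depth())   # prints: 3 2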

5 The class NC 10.5.c Many interesting problems have size-depth complexity (O(n^k), O(log^k n)) for some constant k. Such problems may be considered highly parallelizable with a moderate number of processors. That prompts the following definition. Definition 10.38. For i ≥ 1, NC^i is the class of languages that can be decided by a uniform family of circuits with polynomial size and O(log^i n) depth. NC is the class of languages that are in NC^i for some i. Functions that are computed by such circuit families are called NC^i computable or NC computable.
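
To see why "polynomial size, polylogarithmic depth" is the right shape, consider the parity of n bits, a natural example of an NC^1-style computation: a balanced tree of XOR nodes uses about n gates and ⌈log2 n⌉ levels, and each XOR node expands into a constant-size AND/OR/NOT subcircuit, which changes size and depth only by constant factors. The short sketch below is ours, included only to show how the size and depth of that tree grow with n.

import math

def parity_tree_stats(n):
    """Gate count and depth of a balanced parity (XOR) tree on n inputs."""
    size, depth, width = 0, 0, n
    while width > 1:
        size += width // 2            # pair up the values on the current level
        width = (width + 1) // 2      # an odd value passes through unchanged
        depth += 1
    return size, depth

for n in (8, 1024, 10**6):
    s, d = parity_tree_stats(n)
    print(n, s, d, math.ceil(math.log2(n)))   # depth equals ceil(log2 n)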

6 Main theorems 10.5.d Theorem 10.39. NC^1 ⊆ L. Proof idea. We sketch a log space algorithm that decides a language A in NC^1. On input w of length n, the algorithm constructs the description of the n-th circuit in the uniform circuit family for A as needed, gate by gate, rather than storing it all at once. Then the algorithm evaluates the circuit using a depth-first search from the output gate. Theorem 10.40. NL ⊆ NC^2. Proof idea. Omitted. Theorem 10.41. NC ⊆ P. Proof idea. A polynomial time algorithm can run the log space transducer to generate circuit C_n and then simulate that circuit on an input of length n. Open problem: NC = P? Equality here would be surprising, because it would imply that all polynomial time solvable problems are highly parallelizable.
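
The evaluation order in the proof idea of Theorem 10.39 can be pictured with the following sketch (ours, using an ad hoc list-of-gates representation): the circuit is evaluated by a depth-first recursion that starts at the output gate, so at any moment only the gates on the current path need to be remembered. For an NC^1 circuit that path has length O(log n), which is what makes a log space implementation plausible; the Python code only illustrates the traversal order, not the log space bookkeeping over the transducer's output.

def eval_from_output(gates, x):
    """Evaluate a circuit by depth-first recursion from the output gate.

    gates: list of (op, args) pairs, op in {"x", "and", "or", "not"};
           for an "x" node, args = (variable index,); the last entry is the output.
    x:     the input, a sequence of 0/1 values.
    """
    def value(i):
        op, args = gates[i]
        if op == "x":
            return x[args[0]]
        if op == "not":
            return 1 - value(args[0])
        vals = [value(j) for j in args]        # recurse into each input wire
        return max(vals) if op == "or" else min(vals)

    return value(len(gates) - 1)               # start at the output gate

# (x1 AND x2) OR (NOT x3) on input 011:  (0 AND 1) OR (NOT 1) = 0
gates = [("x", (0,)), ("x", (1,)), ("x", (2,)),
         ("and", (0, 1)), ("not", (2,)), ("or", (3, 4))]
print(eval_from_output(gates, [0, 1, 1]))      # prints: 0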

7 P-completeness 10.5.e Definition 10.42. A language B is P-complete if 1. B ∈ P, and 2. every A in P is log space reducible to B. For a circuit C and input string x, we write C(x) for the value of C on x. The following language can be called the circuit evaluation problem: CIRCUIT-VALUE = {⟨C, x⟩ | C is a Boolean circuit and C(x) = 1}. Theorem 10.44. CIRCUIT-VALUE is P-complete.
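
Condition 1, CIRCUIT-VALUE ∈ P, follows from the obvious evaluation algorithm: given ⟨C, x⟩, compute the value of every gate exactly once, in an order where each gate comes after its inputs. The sketch below (ours, using the same ad hoc gate representation as above) does this in a single linear pass; the substantial half of Theorem 10.44, the log space reduction from an arbitrary A in P, is the part left to the textbook.

def circuit_value(gates, x):
    """Decide CIRCUIT-VALUE for <C, x>: return 1 iff C(x) = 1.

    gates is assumed to be listed so that every gate appears after its
    inputs; the last entry is the output gate. Each gate is evaluated
    exactly once, so the running time is polynomial in the input length.
    """
    val = []
    for op, args in gates:
        if op == "x":
            val.append(x[args[0]])
        elif op == "not":
            val.append(1 - val[args[0]])
        elif op == "and":
            val.append(min(val[j] for j in args))
        else:                                   # "or"
            val.append(max(val[j] for j in args))
    return val[-1]

# (x1 AND x2) OR (NOT x3) on input 110:  (1 AND 1) OR (NOT 0) = 1
gates = [("x", (0,)), ("x", (1,)), ("x", (2,)),
         ("and", (0, 1)), ("not", (2,)), ("or", (3, 4))]
print(circuit_value(gates, [1, 1, 0]))          # prints: 1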

