
Presentation on theme: "1Computer Sciences Department. Book: INTRODUCTION TO THE THEORY OF COMPUTATION, SECOND EDITION, by: MICHAEL SIPSER Reference 3Computer Sciences Department."— Presentation transcript:

1 Computer Sciences Department

2

3 Book: INTRODUCTION TO THE THEORY OF COMPUTATION, SECOND EDITION, by MICHAEL SIPSER (reference)

4 ADVANCED TOPICS IN COMPUTABILITY THEORY: The Recursion Theorem (pages 217–226)

5 Objectives  Explanation: the possibility of making machines that can construct replicas of themselves; SELF-REFERENCE (algorithm).

6 Recursion  It concerns the possibility of making machines that can construct replicas of themselves. Consider three plausible-sounding statements: 1. Living things are machines (they operate in a mechanistic way). 2. Living things can self-reproduce (an essential characteristic). 3. Machines cannot self-reproduce. Together these statements are inconsistent.

7 If a machine A constructs a machine B, it seems that A must be more complex than B. But a machine cannot be more complex than itself, so no machine could construct itself. How can we resolve this paradox? In fact, making machines that reproduce themselves is possible; the recursion theorem demonstrates how.

8 SELF-REFERENCE  Let's begin by making a Turing machine that ignores its input and prints out a copy of its own description. We call this machine SELF. To help describe SELF, we need the following lemma: LEMMA 6.1.

9 SELF-REFERENCE (algorithm)

10 Machines A and B  The job of A is to print out a description of B, and conversely the job of B is to print out a description of A. The result is the desired description of SELF. The jobs are similar, but they are carried out differently. Our description of A depends on having a description of B, so we can't complete the description of A until we construct B.

11 Machines A and B (cont.)  For A we use the machine P_⟨B⟩, described by q(⟨B⟩); here q(w) denotes the description of a machine P_w that prints out the string w.  If B can obtain ⟨B⟩, it can apply q to that and obtain ⟨A⟩.  B only needs to look at the tape to obtain ⟨B⟩, since A leaves it there.  Then, after B computes q(⟨B⟩) = ⟨A⟩, it combines A and B into a single machine and writes its description ⟨AB⟩ = ⟨SELF⟩ on the tape.

12 Machines A and B (algorithm)

13 TM that prints its own description  Suppose that we want to give an English sentence that commands the reader to print a copy of the same sentence. One way to do so is to say: "Print out this sentence."
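The programming analogue of "Print out this sentence" is a quine: a program that prints its own source code. A minimal Python sketch (not from the book; the string s plays the role of machine B's description, and the final print plays the role of A):

```python
# A quine: running this program prints exactly its own source code.
# The string s is a template containing a placeholder for itself;
# s % s fills that placeholder with s's own repr, reconstructing
# the complete program text.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Executing the printed text reproduces the same output, just as running SELF prints ⟨SELF⟩.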

14 Example 2

15 TERMINOLOGY FOR THE RECURSION THEOREM  The recursion theorem for TMs: if you are designing a machine M, you can include the phrase "obtain own description ⟨M⟩" in the informal description of M's algorithm. M can then use ⟨M⟩ in two ways: 1. as any other computed value; 2. to simulate itself.

16 Algorithms  1. use any other computed value; 2. simulate. APPLICATIONS: computer viruses.

17 THEOREM 6.5, THEOREM 6.6, THEOREM 6.8

18 Decidability of Logical Theories  What is a theorem? What is a proof? What is truth? Can an algorithm decide which statements are true? Are all true statements provable?

19 Decidability of Logical Theories (cont.)  We focus on the problem of determining whether mathematical statements are true or false, and investigate the decidability of this problem. We give one theory for which an algorithm can decide truth, and another for which this problem is undecidable.

20 Decidability of Logical Theories (cont.)  Statement 1 - infinitely many prime numbers exist - solved. Statement 2 - Fermat's last theorem - solved. Statement 3 - infinitely many prime pairs (primes that differ by 2) exist - unsolved.

21 Decidability of Logical Theories (cont.)  Let's describe the form of the alphabet of this language: - A formula is a well-formed string over this alphabet. - All quantifiers appear at the front of the formula. - A variable that isn't bound within the scope of a quantifier is called a free variable.

22 COMPLEXITY THEORY

23 TIME COMPLEXITY  Pages 247–256

24 Objectives  Investigate the time, memory, or other resources required for solving computational problems, and present the basics of time complexity theory.

25 Objectives (cont.)  First - introduce a way of measuring the time used to solve a problem. Then - show how to classify problems according to the amount of time required. After - discuss the possibility that certain decidable problems require enormous amounts of time, and how to determine when you are faced with such a problem.

26 Introduction  Even when a problem is decidable and computationally solvable in principle, it may not be solvable in practice if the solution requires an inordinate amount of time or memory.

27 MEASURING COMPLEXITY  The language A

28 MEASURING COMPLEXITY (cont.)  How much time does a single-tape Turing machine need to decide A?

29 MEASURING COMPLEXITY (cont.)  The number of steps that an algorithm uses on a particular input may depend on several parameters. If the input is a graph, the number of steps may depend on: the number of nodes, the number of edges, the maximum degree of the graph, or some combination of these and/or other factors.

30 Analysis  Worst-case analysis - consider the longest running time over all inputs of a particular length. Average-case analysis - consider the average of the running times over all inputs of a particular length.

31

32 BIG-O AND SMALL-O NOTATION  The exact running time of an algorithm is often a complex expression, so we usually just estimate it. In asymptotic analysis we seek to understand the running time of the algorithm when it is run on large inputs. The asymptotic notation, or big-O notation, describes this relationship.

33 BIG-O AND SMALL-O NOTATION (cont.)  In stage 1: we typically use n to represent the length of the input. Performing this scan uses n steps, and repositioning the head at the left-hand end of the tape uses another n steps, so the total used in this stage is 2n steps.

34 BIG-O AND SMALL-O NOTATION (cont.)  In stage 4 the machine makes a single scan to decide whether to accept or reject. The time taken in this stage is at most O(n).

35 BIG-O AND SMALL-O NOTATION (cont.)  Thus the total time of M1 on an input of length n is O(n) + O(n^2) + O(n), or O(n^2). In other words, its running time is O(n^2), which completes the time analysis of this machine.
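The machine M1 can be sketched in Python, assuming A is the book's running example A = {0^k 1^k | k ≥ 0} (the function name and string representation are mine, not the book's):

```python
def decide_A_quadratic(w: str) -> bool:
    """Simulates Sipser's M1: each loop iteration stands for one full
    scan of the tape (O(n) steps), and there are O(n) iterations,
    giving O(n^2) time overall."""
    # Stage 1: reject if a 0 appears to the right of a 1.
    if "10" in w:
        return False
    tape = list(w)
    # Stages 2-3: repeatedly cross off one 0 and one 1 per scan.
    while "0" in tape and "1" in tape:
        tape[tape.index("0")] = "x"
        tape[tape.index("1")] = "x"
    # Stage 4: accept iff nothing uncrossed remains.
    return "0" not in tape and "1" not in tape
```

Each while-loop pass models one O(n) sweep of the head, and there are at most n/2 passes.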

36 Is there a machine that decides A asymptotically more quickly?

37 Executed Time  Stages 1 and 5 are executed once, taking a total of O(n) time. Stage 4 crosses off at least half the 0s and 1s each time it is executed, so stages 2-4 are repeated at most 1 + log2 n times. The total time of stages 2, 3, and 4 is (1 + log2 n) O(n), or O(n log n). The running time of M2 is O(n) + O(n log n) = O(n log n).
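M2's halving trick can be sketched the same way (again assuming A = {0^k 1^k | k ≥ 0}); each pass halves the surviving symbols, so there are at most 1 + log2 n passes:

```python
def decide_A_nlogn(w: str) -> bool:
    """Simulates Sipser's M2 in O(n log n): the parity check on each
    pass in effect compares the counts of 0s and 1s bit by bit in
    binary."""
    if "10" in w:                        # stage 1: form must be 0*1*
        return False
    tape = list(w)
    while "0" in tape and "1" in tape:   # stage 2
        # stage 3: reject if the total number of survivors is odd
        if (tape.count("0") + tape.count("1")) % 2 == 1:
            return False
        # stage 4: cross off every other 0 and every other 1,
        # starting with the first of each
        for sym in "01":
            hits = [i for i, c in enumerate(tape) if c == sym]
            for i in hits[::2]:
                tape[i] = "x"
    # stage 5: accept iff both symbols were exhausted together
    return "0" not in tape and "1" not in tape
```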

38 COMPLEXITY RELATIONSHIPS AMONG MODELS  We consider three models: the single-tape Turing machine; the multi-tape Turing machine; and the nondeterministic Turing machine.

39 COMPLEXITY RELATIONSHIPS AMONG MODELS (cont.)  Convert any multi-tape TM into a single-tape TM that simulates it, and analyze that simulation to determine how much additional time it requires. Simulating each step of the multi-tape machine uses at most O(t(n)) steps on the single-tape machine, so the total time used is O(n) + O(t^2(n)), giving running time O(t^2(n)).

40 SPACE COMPLEXITY / INTRACTABILITY  Pages 303–308

41 Objective  Consider the complexity of computational problems in terms of the amount of space, or memory, that they require. Time and space are two of the most important considerations when we seek practical solutions to many computational problems. Space complexity shares many of the features of time complexity and serves as a further way of classifying problems according to their computational difficulty.

42 Introduction  We select a model for measuring the space used by an algorithm. Turing machines are mathematically simple and close enough to real computers to give meaningful results.

43

44 Estimating the space complexity  We typically estimate the space complexity of Turing machines by using asymptotic notation.

45 EXAMPLE 8.4

46 SAVITCH'S THEOREM (read only)

47 INTRACTABILITY  Pages 335–338

48  Certain computational problems are solvable in principle, but the solutions require so much time or space that they can't be used in practice. Such problems are called intractable. Turing machines should be able to decide more languages in time n^3 than they can in time n^2; the hierarchy theorems prove that.

49 ADVANCED TOPICS IN COMPLEXITY THEORY: Approximation Algorithms (pages 365–367)

50 Optimization problems  Optimization problems seek the best solution among a collection of possible solutions. Example: finding a shortest path connecting two nodes. An approximation algorithm is designed to find such approximately optimal solutions: a solution that is nearly optimal may be good enough and may be much easier to find.

51 Polynomial & Exponential  Decision problem - one that has a yes/no answer. POLYNOMIAL TIME - polynomial differences in running time are considered to be small and fast, whereas exponential differences are considered to be large. Polynomial time algorithm - e.g. n^3. Exponential time algorithm - e.g. 3^n. MIN-VERTEX-COVER is an example of a minimization problem because we aim to find the smallest among the collection of possible solutions.
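For MIN-VERTEX-COVER, a nearly optimal solution really is much easier to find: the classic greedy idea gives a polynomial-time 2-approximation. A hedged sketch (the function name and edge-list format are mine, not the book's):

```python
def vertex_cover_2approx(edges):
    """Pick any uncovered edge and add BOTH endpoints to the cover.
    Each picked edge must contribute at least one endpoint to any
    optimal cover, and no two picked edges share an endpoint, so the
    result is at most twice the optimal size."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover |= {u, v}
    return cover
```

A single pass over the edges, versus exponential time for an exact brute-force search.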

52 Exponential time algorithm  Exponential time algorithms typically arise when we solve problems by exhaustively searching through a space of solutions, called brute-force search.

53 Exponential time algorithm  One way to factor a number into its constituent primes is to search through all potential divisors.
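The brute-force factoring search can be sketched directly; note that the loop may take about n iterations when n is prime, which is exponential in the number of bits of n (the function name is mine):

```python
def factorize(n: int) -> list[int]:
    """Brute-force search through all potential divisors.
    In the worst case (n prime) the loop runs roughly n times,
    i.e. about 2^(bits of n) steps: exponential in the input length."""
    factors, d = [], 2
    while n > 1:
        if n % d == 0:
            factors.append(d)   # d divides n: record it and continue
            n //= d
        else:
            d += 1              # try the next potential divisor
    return factors
```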

54  Minimization problem - find the smallest among the collection of possible solutions. Maximization problem - seek the largest solution. Decision problem and NP-decision. Optimization problem and NP-optimization. Approximation problem and approximation. Note: NP stands for nondeterministic polynomial.

55 Explanation

56 Explanation

57 PROBABILISTIC ALGORITHMS  Pages 368–375

58 PROBABILISTIC ALGORITHMS  A probabilistic algorithm is an algorithm designed to use the outcome of a random process. Example: flip a coin. How can making a decision by flipping a coin ever be better than actually calculating, or even estimating, the best choice in a particular situation?

59 THE CLASS BPP  We begin our formal discussion of probabilistic computation by defining a model of a probabilistic Turing machine.

60

61 Definition 10.3 (cont.)  When a probabilistic Turing machine recognizes a language, it must accept all strings in the language and reject all strings out of the language as usual, except that now we allow the machine a small probability of error. We say that M recognizes language A with error probability ε if the probability that M gives the wrong answer on an input is at most ε.

62 Definition 10.3 (cont.)  We also consider error probability bounds that depend on the input length n. For example, error probability 2^(-n) indicates an exponentially small probability of error on the worst-case computation branch for each input.

63 Amplification lemma  The amplification lemma gives a simple way of making the error probability exponentially small. LEMMA 10.5, PROOF IDEA, and PROOF (self study).
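The numerical content of the amplification lemma is easy to check: run the machine an odd number of times and take the majority vote. Assuming independent runs, each with error probability p < 1/2, the majority's error is a binomial tail (this function is my illustration, not Sipser's proof):

```python
from math import comb

def majority_error(p: float, runs: int) -> float:
    """Probability that a strict majority of `runs` (odd) independent
    trials err, when each trial errs with probability p. This is the
    error probability of the amplified machine."""
    assert runs % 2 == 1 and 0 <= p < 0.5
    return sum(comb(runs, k) * p**k * (1 - p) ** (runs - k)
               for k in range(runs // 2 + 1, runs + 1))
```

For p = 0.3, one run errs with probability 0.3, three runs with 0.216, and 101 runs with well under 0.001: the error falls exponentially in the number of runs.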

64 PRIMALITY  A prime number is an integer greater than 1 that is not divisible by positive integers other than 1 and itself. A nonprime number greater than 1 is called composite. One way to determine whether a number is prime is to try all possible integers less than that number and see whether any are divisors, also called factors; this has exponential time complexity.

65 Fermat's little theorem  For example, if p = 7 and a = 2, the theorem says that 2^(7-1) mod 7 should be 1 because 7 is prime. The simple calculation 2^(7-1) = 2^6 = 64 and 64 mod 7 = 1 confirms this result. Suppose that we try p = 6 instead. Then 2^(6-1) = 2^5 = 32 and 32 mod 6 = 2 ≠ 1, so 6 cannot be prime.

66 Algorithm: Fermat test
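The Fermat test can be sketched as follows. If p is prime, Fermat's little theorem guarantees a^(p-1) mod p = 1 for every a in 1..p-1, so any a violating this witnesses that p is composite. This sketch omits the extra check Sipser's full algorithm uses to handle Carmichael numbers, and the trial count is my choice:

```python
import random

def fermat_test(p: int, trials: int = 10) -> bool:
    """One-sided probabilistic primality test: a False answer is
    always correct (a Fermat witness was found); a True answer means
    'probably prime'. Carmichael numbers can fool this basic version."""
    if p < 2:
        return False
    if p in (2, 3):
        return True
    for _ in range(trials):
        a = random.randrange(2, p - 1)
        if pow(a, p - 1, p) != 1:   # fast modular exponentiation
            return False            # a witnesses that p is composite
    return True
```

Note that pow(2, 6, 7) == 1 reproduces the worked example 2^6 mod 7 = 1 from the previous slide.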

67

68 Note  The probabilistic primality algorithm has one-sided error. When the algorithm outputs reject, we know that the input must be composite. When the output is accept, we know only that the input could be prime or composite. Thus an incorrect answer can only occur when the input is a composite number. The one-sided error feature is common to many probabilistic algorithms, so the special complexity class RP is designated for it.

