1 Chapter 17: Limitations of Computing

2 Chapter 17: Limitations of Computing (Page 196)
What problems cannot be solved on a computer? What problems cannot be solved on a computer in a “reasonable” amount of time? These aren’t just philosophical questions; their answers will determine how practical it is to pursue computer-based solutions to real-world problems like hurricane prediction, disease control, and economic forecasting.

3 Chapter 17: Limitations of Computing (Page 197)
Computability
To examine the limits of what it is possible to do with a computer, Alan Turing (1912-1954) developed a simplified mathematical model, called a Turing machine. A Turing machine consists of three parts:
1) A tape of cells from which symbols can be read and into which symbols can be written,
2) A read/write head that moves back and forth across the tape, reading the symbol inside the current cell and/or writing a new symbol into the current cell, and
3) A control unit that keeps track of what “state” the machine is in, and uses that state and the current symbol under the read/write head to:
   a) Determine which symbol to place in the current cell,
   b) Determine whether to move the read/write head one cell to the left or right, and
   c) Determine the next state of the machine.
[Diagram: tape, read/write head, control unit]
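As a rough illustration of these three parts, they might be represented in C++ as follows (a minimal sketch; the type and member names are assumptions, not taken from the chapter):

#include <map>
#include <string>
#include <utility>
#include <vector>

// Illustrative only: one way to model the three parts of a Turing machine.
enum class Move { Left, Right, Stay };

struct Transition            // one arc in a state transition diagram
{
  char write;                // symbol to place in the current cell
  Move move;                 // direction to move the read/write head
  std::string nextState;     // next state of the control unit
};

struct TuringMachine
{
  std::vector<char> tape;              // 1) the tape of cells
  std::size_t head = 0;                // 2) position of the read/write head
  std::string state = "START";         // 3) current state of the control unit
  // (current state, symbol read) -> what to write, where to move, next state
  std::map<std::pair<std::string, char>, Transition> rules;
};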

4 Chapter 17: Limitations of Computing (Page 198)
State Transition Diagram
A state transition diagram may be used to define a Turing machine. Each x/y/d transition label signifies reading symbol x on the tape, replacing it with symbol y, and then moving the read/write head in direction d (L for left, R for right, - for no move).
The diagram on this slide defines a Turing machine that increments a binary number on the tape by one, using the states START, ADD, CARRY, NO CARRY, OVERFLOW, RETURN, and HALT.
[Diagram and trace: starting from the tape *101* in state START, the machine passes through the states ADD, CARRY, NO CARRY, and RETURN, and halts with *110* on the tape.]
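A small simulation of this increment machine is sketched below. The rule table is reconstructed from the transition labels on the slide; the starting head position (the rightmost '*') and the table-driven style are assumptions, and the OVERFLOW case is not fully handled:

#include <iostream>
#include <map>
#include <string>
#include <tuple>
#include <utility>

int main()
{
  // (current state, symbol read) -> (symbol to write, head movement, next state)
  std::map<std::pair<std::string, char>,
           std::tuple<char, int, std::string>> rules = {
    {{"START",    '*'}, {'*', -1, "ADD"}},
    {{"ADD",      '1'}, {'0', -1, "CARRY"}},
    {{"ADD",      '0'}, {'1', -1, "NO CARRY"}},
    {{"CARRY",    '1'}, {'0', -1, "CARRY"}},
    {{"CARRY",    '0'}, {'1', -1, "NO CARRY"}},
    {{"CARRY",    '*'}, {'1', -1, "OVERFLOW"}},   // overflow case (not handled further here)
    {{"NO CARRY", '0'}, {'0', -1, "NO CARRY"}},
    {{"NO CARRY", '1'}, {'1', -1, "NO CARRY"}},
    {{"NO CARRY", '*'}, {'*', +1, "RETURN"}},
    {{"RETURN",   '0'}, {'0', +1, "RETURN"}},
    {{"RETURN",   '1'}, {'1', +1, "RETURN"}},
    {{"RETURN",   '*'}, {'*',  0, "HALT"}},
  };

  std::string tape  = "*101*";   // binary 5, delimited by '*' cells
  int         head  = 4;         // start on the rightmost '*'
  std::string state = "START";

  while (state != "HALT")
  {
    auto [write, move, next] = rules.at({state, tape[head]});
    tape[head] = write;          // write into the current cell
    head += move;                // move the read/write head
    state = next;                // change the control unit's state
    std::cout << tape << "   State: " << state << std::endl;
  }                              // final tape: *110* (binary 6)
  return 0;
}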

5 Chapter 17: Limitations of Computing (Page 199)
The Church-Turing Thesis
Computer scientists commonly accept the Church-Turing Thesis, which states that the set of functions that can be calculated on a computer is exactly the same as the set of functions for which a Turing machine can be devised.
There are problems that have been proven to be non-computable (i.e., no Turing machine can be devised to calculate their solutions).
The Halting Problem: Given a program with a set of input values, does the program halt on that input, or does it get stuck in an infinite loop? One classical example is sketched below.
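The classical example is a self-referential program that defeats any proposed halting checker. A minimal sketch in C++ (the function names are hypothetical, and no real halts() can ever be written; that is the point of the argument):

#include <string>

// Hypothetical oracle: true if program p halts when run on input i.
// The Halting Problem result says this function cannot actually be implemented.
bool halts(const std::string& p, const std::string& i);

// The classical contradiction: do the opposite of whatever the oracle predicts.
void paradox(const std::string& p)
{
  if (halts(p, p))       // oracle claims p halts when given itself as input...
    while (true) { }     // ...so loop forever;
  // otherwise, halt immediately.
}

// Feeding paradox its own source code leads to a contradiction either way:
// if halts(paradox, paradox) is true, paradox loops forever;
// if it is false, paradox halts.  So no such halts() function can exist.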

6 Chapter 17: Limitations of Computing (Page 200)
Complexity
The time complexity of an algorithm is a measure of how many steps are executed when the associated program is run.

void printA()
{
  cout << 0 << endl;
  cout << 0 << endl;
  cout << 0 << endl;
  cout << 0 << endl;
}
Number of Output Statements Executed: 4    Time Complexity: O(1)

void printB()
{
  int i;
  for (i = 1; i <= 100; i++)
    cout << 0 << endl;
}
Number of Output Statements Executed: 100    Time Complexity: O(1)

void printC(int n)
{
  int i;
  for (i = 1; i <= n; i++)
    cout << 0 << endl;
}
Number of Output Statements Executed: n    Time Complexity: O(n)

void printD(int n)
{
  int i, j;
  for (i = 1; i <= n; i++)
    for (j = 1; j <= n; j++)
      cout << 0 << endl;
}
Number of Output Statements Executed: n^2    Time Complexity: O(n^2)

The “big-O” notation provides information regarding the program’s “order of complexity”. O(1) indicates that the execution time doesn’t depend on the size of the number n. O(n) indicates that the execution time increases linearly as n increases. O(n^2) indicates that the execution time increases quadratically as n increases.

7 Chapter 17: Limitations of Computing (Page 201)
Logarithmic/Polynomial/Exponential
An algorithm is said to have logarithmic time complexity if the number of steps in its execution is bounded by some logarithmic function: k·log2(n). Essentially, this means that doubling the size of the problem (n) only increases the execution time by a constant amount (k).
An algorithm is said to have polynomial time complexity if the number of steps in its execution is bounded by some polynomial function: a_k·n^k + a_(k-1)·n^(k-1) + … + a_2·n^2 + a_1·n + a_0.
An algorithm is said to have exponential time complexity if the number of steps in its execution is bounded by some exponential function: k·2^n. Essentially, this means that increasing the size of the problem (n) by one doubles the execution time.

log2(n)     n     n^2     n^3        2^n
   2        5      25     125         32
   3       10     100    1000       1024
   4       20     400    8000    1048576
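The table's values can be reproduced with a short C++ snippet (illustrative, not from the chapter; log2(n) is rounded to the nearest integer):

#include <cmath>
#include <iostream>

int main()
{
  // Print the growth-rate table above for a few problem sizes.
  for (long long n : {5, 10, 20})
  {
    std::cout << "n = "        << n
              << "  log2(n) ~ " << std::llround(std::log2(n))
              << "  n^2 = "     << n * n
              << "  n^3 = "     << n * n * n
              << "  2^n = "     << (1LL << n) << std::endl;
  }
  return 0;
}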

8 Chapter 17: Limitations of Computing (Page 202)
Big-O
An algorithm’s time complexity is dominated by its most significant term. For example, an algorithm that executes in time n^2 + 10n is considered to be O(n^2) because, as n increases, the n^2 term ultimately dominates the n term.
Additional examples:
5n + n^2 + 0.125n^3 is O(n^3)
log2(n) + n^2 + 2^n is O(2^n)
100n + max(n^2, 1000 - n^3) is O(n^2)
n^3 - (7/8)(n^3 - 80n^2 - 800n), which simplifies to (1/8)n^3 + 70n^2 + 700n, is O(n^3)
1000000 + 0.000001·log2(n) is O(log2(n))

9 Chapter 17: Limitations of Computing (Page 203)
P and NP Problems
A problem is said to be a P problem if it can be solved with a deterministic, polynomial-time algorithm. (Deterministic algorithms have each step clearly specified.)
A problem is said to be an NP problem if it can be solved with a nondeterministic, polynomial-time algorithm. In essence, at a critical point in the NP problem’s algorithm, a decision must be made, and it is assumed that some magical “choice” function (also called an oracle) always chooses correctly.
For example, take the Satisfiability Problem: Given a set of n boolean variables b_1, b_2, …, b_n, and a boolean function f(b_1, b_2, …, b_n), are there any values that can be assigned to the variables so that the function evaluates to TRUE?
To try every combination of boolean values would take exponential time, but the nondeterministic solution below has polynomial time complexity.

for (i = 1; i <= n; i++)
  b_i = choice(true, false);
if (f(b_1, b_2, …, b_n) == true)
  satisfiable = true;
else
  satisfiable = false;
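For contrast, a deterministic brute-force version must try all 2^n assignments, giving exponential time complexity. A minimal sketch (the formula f here is just a placeholder example, not from the chapter):

#include <functional>
#include <iostream>
#include <vector>

// Deterministic brute force: test every one of the 2^n assignments.
bool satisfiable(int n, const std::function<bool(const std::vector<bool>&)>& f)
{
  std::vector<bool> b(n);
  for (long long mask = 0; mask < (1LL << n); mask++)   // every combination
  {
    for (int i = 0; i < n; i++)
      b[i] = (mask >> i) & 1;                           // assign b_i
    if (f(b))
      return true;                                      // satisfying assignment found
  }
  return false;
}

int main()
{
  // Example formula: f(b1, b2, b3) = b1 AND (NOT b2) AND b3
  auto f = [](const std::vector<bool>& b) { return b[0] && !b[1] && b[2]; };
  std::cout << (satisfiable(3, f) ? "satisfiable" : "unsatisfiable") << std::endl;
  return 0;
}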

10 Chapter 17: Limitations of Computing (Page 204)
The Knapsack Problem
The Knapsack Problem involves taking n valuable jewels J_1, J_2, …, J_n, with respective weights w_1, w_2, …, w_n, and prices p_1, p_2, …, p_n, and placing some of them in a knapsack that is capable of supporting a combined weight of M. The problem is to pack the maximum worth of gems without exceeding the capacity of the knapsack. (It’s not as easy as it sounds; three lightweight $1000 gems might be preferable to one heavy $2500 gem, and one 20-pound gem worth a lot of money might be preferable to twelve 1-pound gems that are practically worthless.)

A Nondeterministic Polynomial Solution:
TotalWorth = 0;
TotalWeight = 0;
for (i = 1; i <= n; i++)
{
  b_i = choice(true, false);
  if (b_i == true)
  {
    TotalWorth += p_i;
    TotalWeight += w_i;
  }
}
if (TotalWeight <= M)
  cout << "Woo-hoo!" << endl;
else
  cout << "Doh!" << endl;
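Again for contrast, a deterministic brute-force solution examines all 2^n subsets of jewels and keeps the most valuable one that fits, taking exponential time. A minimal sketch with made-up example weights, prices, and capacity:

#include <iostream>
#include <vector>

int main()
{
  std::vector<int> w = {5, 4, 6, 3};       // weights w_1..w_n (example data)
  std::vector<int> p = {10, 40, 30, 50};   // prices  p_1..p_n (example data)
  const int n = static_cast<int>(w.size());
  const int M = 10;                        // knapsack capacity
  int bestWorth = 0;

  for (int mask = 0; mask < (1 << n); mask++)      // every subset of jewels
  {
    int totalWorth = 0, totalWeight = 0;
    for (int i = 0; i < n; i++)
      if ((mask >> i) & 1)                         // jewel i is packed
      {
        totalWorth  += p[i];
        totalWeight += w[i];
      }
    if (totalWeight <= M && totalWorth > bestWorth)
      bestWorth = totalWorth;                      // best feasible load so far
  }
  std::cout << "Best worth without exceeding capacity: " << bestWorth << std::endl;
  return 0;
}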

