# CS4018 Formal Models of Computation weeks 20-23 Computability and Complexity Kees van Deemter (partly based on lecture notes by Dirk Nikodem)



Third set of slides: Introduction to complexity. Topics: making algorithms more efficient; analysing algorithms using big-Oh notation; function domination; the complexity of a problem; the classes P, NP, NPC.

Complexity A famous book on this topic starts with three cartoons (an employee in the boss's office): – "I can't find an efficient solution. I guess I'm too dumb." – "I can't find an efficient solution. No efficient solution exists." – "I can't find an efficient solution, but neither can all these famous people." (Easily adapted to illustrate computability too.) M.R. Garey & D.S. Johnson (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman & Company, New York.

Complexity is a large and growing area of work. Our treatment will be even more sketchy than that of computability. First: analysing some algorithms, and the difference between polynomial and non-polynomial algorithms. Then an extended example: Natural Language Generation algorithms (the Dale & Reiter paper, on the web).

Linking computability and complexity Computability asks: can it be computed? Complexity asks: what is the cost of computing it (time, memory)? Cost can be measured in exact terms, e.g., processing time on a given computer, or the number of statements executed; but measurements that are independent of platform are often preferable.

First: Analysing algorithms A key distinction in complexity is between

1. the complexity of an algorithm
2. the complexity of a problem (i.e., how complex is the most efficient algorithm that solves this problem?)

As for (1), let us look at the earlier algorithm for determining whether n is prime.

Is n prime? (Old program; the `<` signs of the loop condition were lost in transcription and are restored here)

```pascal
program prime (input, output);
var n, i : integer;
begin
  read(n);
  i := 2;
  while (i < n) and (n mod i <> 0) do  {n is not divisible by i}
    i := i + 1;
  if i = n then writeln('n is prime')
  else writeln('n is not prime')
end.
```

How long does this take? It depends on the input: 1. If 2 is a divisor of n, then the loop is executed only once (best case). 2. If n is prime, then the loop is executed n-1 times (worst case). 3. Generally, it takes longer for larger n. As for 1 and 2: we're usually interested in worst-case performance. How might the program be sped up?

Possibilities: If even numbers are treated separately, then we only have to try odd divisors. So: start with 3 and always add 2. Also, there is no need to try divisors higher than √n. So: the while loop tests whether (i <= n div i).

The faster version (again with the stripped comparison operators restored):

```pascal
program prime (input, output);
var n, i : integer;
begin
  read(n);  {assume n > 3}
  if not odd(n) then writeln(false)
  else begin
    i := 3;
    while (i <= n div i) and (n mod i <> 0) do
      i := i + 2;
    if i > n div i then writeln('n is prime')
    else writeln('n is not prime')
  end
end.
```
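For readers who want to experiment, the same trial-division idea can be sketched in Python (this translation is ours, not part of the slides):

```python
def is_prime(n):
    """Trial division, mirroring the faster Pascal program:
    treat even numbers separately, then try only odd divisors i
    while i <= n div i (i.e. while i*i <= n)."""
    if n < 2:
        return False
    if n == 2:
        return True
    if n % 2 == 0:
        return False
    i = 3
    while i <= n // i:
        if n % i == 0:
            return False   # found a divisor: not prime
        i += 2             # odd divisors only
    return True
```

The loop runs roughly √n / 2 times in the worst case, matching the analysis on the next slide.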

Effects of changes Effect of these two changes: the loop has to be executed only about ½√n times. Obviously, that's better than n-1 times, since ½√n is always smaller than n-1 … for n>1. More generally, we don't worry about low n. If there are problems, then they arise with high n. (We shall see later that the difference between ½√n and n-1 is only minor.)

Comparing functions Typically, cost is different for different inputs, which makes measuring more difficult. Suppose we know what n is, and that we measure worst-case costs. Four algorithms execute a given loop 100n, 6n^2, n^3, or 2^n times respectively. Which of these is cheapest? (E.g., check n=5, n=10, n=20.)

Comparing functions

|       | n=5 | n=10 | n=20   |
|-------|-----|------|--------|
| 100n  | 500 | 1000 | 2000   |
| n^3   | 125 | 1000 | 8000   |
| 6n^2  | 150 | 600  | 2400   |
| 2^n   | 32  | 1024 | ≈10^6  |

Cheapest: 2^n (n=5), 6n^2 (n=10), 100n (n=20). Costliest: 100n (n=5), 2^n (n=10), 2^n (n=20).
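The table can be reproduced in a few lines of Python (our own illustration; the dictionary keys are just labels):

```python
# Worst-case loop counts of the four hypothetical algorithms.
costs = {
    "100n": lambda n: 100 * n,
    "n^3":  lambda n: n ** 3,
    "6n^2": lambda n: 6 * n * n,
    "2^n":  lambda n: 2 ** n,
}

def cheapest(n):
    """Label of the cost function with the fewest loop executions at n."""
    return min(costs, key=lambda name: costs[name](n))

for n in (5, 10, 20):
    print(n, [f(n) for f in costs.values()], "cheapest:", cheapest(n))
```

Note that which algorithm wins depends on n: the exponential one is cheapest for n=5 but is by far the costliest at n=20.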

Wanted: A yardstick for measuring the complexity of a function

Main complexity types: 1. Polynomial: 100n, n^3, 2n^2 (this includes linear: 100n) 2. Exponential: 2^n. Polynomials in more detail: p(n) = a_r n^r + a_{r-1} n^{r-1} + … + a_0 n^0 (a term becomes empty if a_i = 0)

p(n) = a_r n^r + a_{r-1} n^{r-1} + … + a_0 n^0. The constants a_i are unimportant. E.g., for some m, for all n > m: n^3 > 2n^2. A useful abstract notion: one function dominating another. "f is dominated by g" means: for some c and m, for all n > m: c·g(n) ≥ f(n). (There exists a constant c such that, for all n that are large enough, g(n) may be smaller than f(n), but only by that constant factor c.)

Some examples 1. g = n^3 dominates f = 2n^2, since for all n > 2: g(n) ≥ f(n). 2. g = 2n^3 + 5n^2 dominates f = 2n^3, since for all n > 0: g(n) ≥ f(n). But the converse holds too! : 3. g = 2n^3 dominates f = 2n^3 + 5n^2, since for some c, for all n > 0: c·g(n) ≥ f(n). To see this, consider that f = 2n^3 + 5n^2 ≤ 2n^3 + 5n^3 = 7n^3.
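Example 3 can be sanity-checked numerically. The sketch below (ours) verifies the constant c = 3.5 over a finite range; that is evidence, not a proof:

```python
def f(n):
    return 2 * n**3 + 5 * n**2   # f = 2n^3 + 5n^2

def g(n):
    return 2 * n**3              # g = 2n^3

c = 3.5                          # since f(n) <= 7n^3 = 3.5 * g(n)

# c * g(n) >= f(n) for every n > 0; checked here up to n = 10000
dominated = all(c * g(n) >= f(n) for n in range(1, 10001))
```

Algebraically: 3.5 · 2n^3 = 7n^3 ≥ 2n^3 + 5n^2 exactly when 5n^3 ≥ 5n^2, i.e. for all n ≥ 1.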

Further terminology When g dominates f, we also write f ∈ O(g) ("f is big-Oh of g", "f is in the order of g"). In the cases of interest, f ∈ O(g) and g ∈ O(f). We try to measure the complexity of functions, using simple functions as a yardstick. Hence, the interesting case is f ∈ O(g) where g is as simple as can be. (In particular, simpler than f.)

For polynomial functions p(n) = a_r n^r + a_{r-1} n^{r-1} + … + a_0 n^0. Theorem: p(n) is always dominated by its largest non-empty term: p(n) ∈ O(n^r). So, we measure p(n) using the largest relevant n^i. For example,

Some polynomial functions p(n) = 2n^3 + 5n^2 is characterised by n^3 (also: cubic). p(n) = n^2 + 500n - 1 is characterised by n^2 (also: quadratic). Formal notation using big-Oh: p ∈ O(n^3); p ∈ O(n^2).

Big-Oh An interestingly imprecise way of comparing functions, focussing on how they behave for large inputs. N.B. one can calculate with big-Oh expressions in a precise way, e.g., O(n^3) + O(n^2) = O(n^3). This lets us calculate the complexity of an algorithm from the complexity of its parts.

Before analysing algorithms … let's look at the discussion in the lecture notes (under "So What?"): 1. The speed of computers doubles every year. Should we just let some problems wait? 2. Should we hire cleverer programmers, to find more efficient algorithms? 3. Does this stuff have other applications? 4. Doesn't dependence on computer and language make these calculations futile?

1. The speed of computers doubles every year. Should we just let some problems wait? If running time grows very quickly (as the input size grows), then computer speed would have to grow very quickly as well to make a difference. Suppose computers double in speed every year, growing at a rate of 2^n. If running time grows at 2^n too, then next year you can handle an input that is only 1 bigger (in the same time).

2. Hire cleverer programmers, to find more efficient algorithms? If your problem does not allow a faster solution, then clever programming won't help. In other words: one would like to know about the complexity of a problem, rather than the complexity of a program. That's what real complexity theory is all about. It resembles computability theory, e.g., – think about problems rather than programs – reduce one problem to another

3. Does this stuff have other applications? Encryption is an example: modern methods rely on the fact that each non-prime integer x is the product of two integers y and z: – given x and y, z is easy to calculate; – given x alone, y and z take extremely long to calculate (when x is large). Complexity theory has much to say about the safety of this encryption method.
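To make the asymmetry concrete, here is a toy Python sketch (ours; the primes are tiny compared to realistic key sizes): multiplying y and z is one step, while recovering a factor of x by trial division takes on the order of √x steps.

```python
def smallest_factor(x):
    """Return the smallest factor > 1 of x, by trial division.
    For composite x this takes up to about sqrt(x) divisions."""
    i = 2
    while i * i <= x:
        if x % i == 0:
            return i
        i += 1
    return x        # no divisor found: x itself is prime

y, z = 101, 103     # two (small) primes
x = y * z           # the easy direction: one multiplication
# The hard direction: smallest_factor(x) must loop up to sqrt(x).
```

With 100-digit factors, the multiplication is still instant, while the trial-division loop becomes astronomically long; that gap is what the encryption method relies on.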

4. Doesn't dependence on computer and language make these calculations futile? No: the speeds of different computational models differ only by a polynomial factor. If Model 1 takes time f(n) and Model 2 takes time g(n), then f(n) ≤ p(g(n)) for some polynomial function p. (Please take our word for it.)

Analysing more complex algorithms Let's look at some sorting algorithms.

Example 1

```pascal
{Simple Sort; also Bubblesort}
for i := 1 to n do
  for j := i to n do
    if A[i] > A[j] then
      Exchange(A[i], A[j]);
```

Inner loop: the comparison is done n-i+1 times; therefore the total is ((n-1)+1) plus ((n-2)+1) plus … plus ((n-n)+1). The average value is ½(n+1). There are n values, so their sum is n·½(n+1) = ½(n^2+n). That's O(n^2). This is a quadratic-time algorithm.
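Counting the comparisons explicitly confirms the ½(n^2+n) figure. A Python sketch (ours; in 0-based Python the inner loop runs n-i times, but the total is the same sum):

```python
def simple_sort(A):
    """The slide's Simple Sort, instrumented to count comparisons."""
    A = list(A)
    n = len(A)
    comparisons = 0
    for i in range(n):
        for j in range(i, n):       # inner loop: n - i iterations
            comparisons += 1
            if A[i] > A[j]:
                A[i], A[j] = A[j], A[i]
    return A, comparisons

result, count = simple_sort([5, 3, 1, 4, 2])
# n = 5, so count = 5 + 4 + 3 + 2 + 1 = (n^2 + n) / 2 = 15
```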

Exponential algorithms For large n, 2^n > n^2. Exponential algorithms are eventually slower than polynomial ones. Exponential algorithms are called intractable (sometimes: `unreasonable' – a bad term). An example: calculating the n-th Fibonacci number. (Origins: calculating the speed of growth of a population of rabbits, around 1200 CE.)

Fibonacci sequence defined: fib(1) = fib(2) = 1; fib(n) = fib(n-1) + fib(n-2), if n > 2. A direct implementation:

```pascal
function fib(n : integer) : integer;
begin
  if n <= 2 then fib := 1
  else fib := fib(n-1) + fib(n-2)
end;
```
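To see how wasteful the direct implementation is, here is a Python transcription (ours) instrumented with a global call counter:

```python
calls = 0   # counts every invocation of fib

def fib(n):
    """Direct transcription of the recursive definition."""
    global calls
    calls += 1
    if n <= 2:
        return 1
    return fib(n - 1) + fib(n - 2)

value = fib(10)
# value = 55, and the counter shows 109 calls: the number of calls
# satisfies T(n) = 1 + T(n-1) + T(n-2), which grows exponentially.
```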

This algorithm is exponential: calculating fib(4) involves calculating fib(3) and fib(2), which involves calculating fib(2) and fib(1) and fib(1), etc. The call tree from the slide (writing fk for fib(k); this tree uses the convention fib(0)=0, fib(1)=1):

```
         f4
        /  \
      f3    f2
     /  \   / \
   f2   f1 f1  f0
   / \
  f1  f0
```

This algorithm is exponential Let fib(n) take T(n) time. Then, for some a and b: T(1) = T(2) = a; T(n) = b + T(n-1) + T(n-2). Hence T(n) > 2·T(n-2), so T(n) grows at least as fast as 2^(n/2).

An iterative algorithm (y = the n-th Fibonacci number):

```pascal
if n = 0 then y := 0
else begin
  x := 0;
  y := 1;
  for i := 1 to n-1 do begin
    z := x + y;
    x := y;
    y := z
  end
end
```

The same algorithm, annotated:

```pascal
if n = 0 then y := 0
else begin
  x := 0;               {x = fib(0)}
  y := 1;               {y = fib(1)}
  for i := 1 to n-1 do begin
    z := x + y;         {z is a buffer}
    x := y;             {x = fib(i)}
    y := z              {y = fib(i+1)}
  end
end
```

If n > 0 then fib(n) is calculated in n-1 loops. After one loop: z = fib(0)+fib(1) = 1, x = fib(1) = 1, y = z = fib(2) = 1. After n-1 loops: x = fib(n-1), y = fib(n).
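The same iterative idea in Python (our transcription of the slide's pseudocode):

```python
def fib_iter(n):
    """Iterative Fibonacci: n - 1 additions, so linear time."""
    if n == 0:
        return 0
    x, y = 0, 1               # x = fib(0), y = fib(1)
    for i in range(1, n):     # n - 1 iterations
        x, y = y, x + y       # x = fib(i), y = fib(i+1)
    return y
```

Unlike the recursive version, fib_iter(50) finishes instantly, since each value is computed exactly once.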

If n > 0 then fib(n) is calculated in n-1 loops. This means calculating n-1 additions: a polynomial algorithm, in fact a linear one! That's what we're looking for: polynomial algorithms. "Polynomial" has become synonymous with "tractable".

Question Can one assess the complexity of a problem (rather than the complexity of an algorithm)? In particular, are there problems that cannot be solved in polynomial time? The answer turns out to be subtle!

Example of a probably-not-polynomial problem Travelling salesman = TRAV. Given n towns, and a number of roads between them, each with a length. Problem: find the shortest path from start back to start (if one exists), visiting every other town exactly once. – This can be made precise using labelled directed graphs. – Variants of the problem exist. (See lecture notes.)
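A brute-force solver makes the cost visible: it inspects all (n-1)! orderings of the towns other than the start. The Python sketch below is our own illustration; the distance matrix is made up:

```python
from itertools import permutations

def shortest_tour(dist):
    """Length of the shortest round trip from town 0 through all
    other towns, trying every ordering: (n-1)! candidate tours."""
    n = len(dist)
    best = None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if best is None or length < best:
            best = length
    return best

# 4 towns with symmetric, made-up road lengths
D = [[0, 1, 4, 3],
     [1, 0, 2, 5],
     [4, 2, 0, 1],
     [3, 5, 1, 0]]
```

At n = 4 there are only 3! = 6 tours to try; at n = 20 there are 19! ≈ 10^17, which is why the brute-force approach is hopeless in practice.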

Are we certain that TRAV has no polynomial algorithm? No. – No proof has been found! There do exist problems that are quite transparent, and yet no polynomial algorithm has been found for them. Everyone assumes that problems like TRAV dont have a polynomial-time solution – But everyone could be wrong!

A different example of a probably-not-polynomial problem: a bounded version of Post's Correspondence Problem (PCP). (PS: different `bounds' are possible.)

Viewed as a puzzle: can the pieces be joined (possibly reusing pieces) in such a way that the top-row symbols equal the bottom-row symbols?

| piece  | 1   | 2   |
|--------|-----|-----|
| top    | XX  | OXX |
| bottom | XXX | O   |

Reminder of PCP: Given n pairs of finite strings { p_1 = (x_1, y_1), …, p_n = (x_n, y_n) } (the pair contents were stripped in transcription; First(p_i) = x_i and Second(p_i) = y_i), does there exist a sequence of pairs p_i1, …, p_ik (possibly with iterations) such that First(p_i1) + … + First(p_ik) = Second(p_i1) + … + Second(p_ik)?

Bounded PCP Given n pairs of finite strings { p_1 = (x_1, y_1), …, p_n = (x_n, y_n) }. Let K ≤ n. Does there exist a sequence of at most K pairs (possibly with iterations) such that First(p_i1) + … + First(p_iK) = Second(p_i1) + … + Second(p_iK)?
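A brute-force decision procedure for Bounded PCP simply enumerates every index sequence of length up to K, of which there are exponentially many. A Python sketch (ours; the example pieces are our own choice):

```python
from itertools import product

def bounded_pcp(pairs, K):
    """Search all index sequences of length 1..K (with repetition);
    return the first sequence whose concatenated First components
    equal the concatenated Second components, or None."""
    for k in range(1, K + 1):
        for seq in product(range(len(pairs)), repeat=k):
            top = "".join(pairs[i][0] for i in seq)
            bottom = "".join(pairs[i][1] for i in seq)
            if top == bottom:
                return list(seq)
    return None

# two example pieces: p0 = (XX, XXX), p1 = (OXX, O)
pieces = [("XX", "XXX"), ("OXX", "O")]
```

For n pieces the search visits up to n + n^2 + … + n^K sequences, which is why nobody knows how to avoid exponential work here.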

Going through all n·(n-1)·(n-2)·…·(n-K) possible orders does seem the only option. But we have to be cautious: someone might have thought the same about fib … Complexity theory: reduce from the most transparent problems. Compare computability: reduction again, but complexity has no equivalent of the Halting Problem (yet?).

A key distinction: The difficulty of finding a solution (i.e., a sequence, a path, etc.) versus the difficulty of checking that a proposed solution is correct. – Hard: given a large c, find a and b (both greater than 1) such that a·b = c. – Easy: given a, b and c, check whether a·b = c.

Example: Bounded PCP Checking a proposed solution to Bounded PCP is easy: if someone proposes a sequence, then it's easy to check whether First(p_i1) + … + First(p_iK) = Second(p_i1) + … + Second(p_iK).
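The checking side really is cheap: concatenate and compare, linear in the total string length. A minimal Python sketch (ours; the pieces and the proposed sequence are hypothetical examples):

```python
def check_pcp(pairs, seq):
    """Verify a proposed Bounded PCP solution: concatenate the
    First and Second components along seq and compare."""
    top = "".join(pairs[i][0] for i in seq)
    bottom = "".join(pairs[i][1] for i in seq)
    return top == bottom

# hypothetical pieces and a proposed solution sequence
pieces = [("XX", "XXX"), ("OXX", "O")]
ok = check_pcp(pieces, [1, 0, 0])   # OXX+XX+XX vs O+XXX+XXX
```

This find-versus-check asymmetry is exactly the distinction between the classes P and NP introduced on the next slide.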

Some terminology 1. P = the class of problems for which there exists a polynomial algorithm. 2. NP = the class of problems for which there exists a polynomial-time checking algorithm. The big open question: is P = NP? Some problems Q are so central that all other NP problems can be reduced to them. This is the class of NP-complete problems (NPC). Solve any one of these many known problems in polynomial time, and we know that P = NP.


We've seen two open questions: Infinity: is 2^ℵ0 = ℵ1? Complexity: is P = NP? Solutions welcome … in which case don't worry about your exams ;)

Finally: an extended discussion of a real problem. The problem `lives' in Artificial Intelligence, more specifically in Natural Language Generation (NLG).

This view is definitely correct: [diagram: the class NP containing both P and NPC]

This view might be correct: [diagram: NP = P = NPC, all three classes coinciding]

