1 PRIMALITY TESTING – its importance for cryptography Lynn Margaret Batten Deakin University Talk at RMIT May 2003

2 Prime numbers have attracted much attention from mathematicians for many centuries. Questions such as: How many are there? Is there a formula for generating them? How do you tell if a given number is a prime? have fascinated people for years.

3 However, the first actual use of prime numbers in an important area outside of the theory of numbers was discovered only in the mid to late 1900s. This was in the establishment of a technical system to be used in maintaining the secrecy of electronic communications.

4 [Diagram: a conventional cryptosystem. A message M from the message source is encrypted by the transmitter as C = E_K1(M) using the random key K1 from key source #1, and sent over the communication channel, where a cryptanalyst may intercept C. The receiver decrypts it as M = D_K2(C) using the decryption key K2, determined from K1 by key source #2 and delivered over a key channel.] Conventional cryptosystem. The key channel must be secure.

5 The Diffie-Hellman scheme, proposed in 1976, was a radical departure from what had, up to then, been essentially 'private key' schemes. The idea was that everyone would own both a 'private key' and a 'public key'. The public key would be published in a directory, like a telephone book. If A wanted to send B an encrypted message, A simply looked up B's public key, applied it and sent the message. Only B knew B's private key and could use it to decrypt the message. PROBLEM? Diffie and Hellman had no concrete example of an encryption/decryption pair which could pull this off!

6 Then along came the Rivest, Shamir, Adleman (RSA) solution in 1977: Public information: n, an integer which is a product of two large primes (p and q, kept secret), and e, a positive integer less than (p-1)(q-1) with gcd(e, (p-1)(q-1)) = 1. Secret information: the two primes p and q such that n = pq, and d such that ed ≡ 1 (mod (p-1)(q-1)).

7 To encrypt the message/number m: c ≡ m^e (mod n). To decrypt c: c^d ≡ m^(ed) ≡ m (mod n).

8 Example. Let n = 101 x 107 = 10807 and e = 7. Note 7d ≡ 1 (mod 100 x 106), or 7d ≡ 1 (mod 10600), so d = 4543. To encrypt the message m = 109 we find c = 109^7 (mod 10807) = 4836. To decrypt, find c^d = 4836^4543 ≡ 109 (mod 10807).
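The slide's toy example can be checked directly; this sketch uses Python's built-in three-argument pow for modular exponentiation and (in Python 3.8+) pow with exponent -1 for the modular inverse. The tiny primes are for illustration only.

```python
# Toy RSA with the slide's parameters (p = 101, q = 107, e = 7); insecure sizes.
p, q = 101, 107
n = p * q                      # 10807
phi = (p - 1) * (q - 1)        # 10600
e = 7
d = pow(e, -1, phi)            # modular inverse of e mod phi (Python 3.8+): 4543

m = 109
c = pow(m, e, n)               # encrypt: c = m^e mod n = 4836
assert pow(c, d, n) == m       # decrypt: c^d mod n recovers m
print(n, d, c)                 # 10807 4543 4836
```

In a real implementation p and q would be hundreds of digits long and m would be padded before encryption.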

9 The security of this scheme depends on the difficulty of factoring n. In fact, it is easy to show that knowing d is equivalent to factoring n. No way of breaking RSA is known other than finding the secret information. Thus the RSA scheme leads to the following two problems: 1. Find a large pool of big (>100 digit) primes. (If very few of these are available, Oscar will easily be able to get his hands on the list and simply try them all in order to break the scheme.) 2. Find a quick (polynomial-time) algorithm to factor integers. (There is no known deterministic, polynomial-time algorithm for factoring integers.) We take a look at problem 1.

10 The primes p and q must be of sufficient size that factorization of their product is beyond computational reach. Moreover, they should be random primes in the sense that they be chosen as a function of a random input which defines a pool of candidates of sufficient cardinality that an exhaustive attack is infeasible. In practice, the resulting primes must also be of a pre-determined bitlength, to meet system specifications.

11 Since finding large primes is very difficult, and since the known primes are usually available in some library or on some website, one of the 'solutions' to problem 1 has been to investigate numbers that are not primes, but simply act like primes.

12 Generally speaking, we say that a composite integer N is a pseudoprime if it satisfies some condition that a prime must always satisfy. One result for primes is the well-known: FERMAT'S LITTLE THEOREM. Let p be a prime, and gcd(a, p) = 1. Then a^(p-1) ≡ 1 (mod p). [Try a = 2 and p = 7.] The converse of Fermat's theorem is false, as we see by the following example: Let N = 2701 = 37 x 73. Then 2^2700 ≡ 1 (mod 2701).
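Both the theorem and the counterexample are quick to verify with modular exponentiation:

```python
# Fermat's Little Theorem for a = 2, p = 7, and the failure of its converse
# for the composite N = 2701 = 37 * 73.
assert pow(2, 7 - 1, 7) == 1     # 2^6 = 64 ≡ 1 (mod 7), as the theorem predicts
N = 37 * 73                      # 2701, composite
assert pow(2, N - 1, N) == 1     # yet 2^2700 ≡ 1 (mod 2701): the converse fails
print("2701 satisfies the Fermat condition to base 2 despite being composite")
```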

13 Now consider the following: Definition. We say that the composite integer N is a base b pseudoprime (written b-psp) if b^(N-1) ≡ 1 (mod N). (*) Thus a b-psp acts like a prime with respect to Fermat's theorem, but it is not a prime. If there were only a few such numbers, this would not improve our situation, but as early as 1903 Malo showed that there exist infinitely many composite N satisfying (*).

14 There exist infinitely many base b pseudoprimes because: Theorem. If p is an odd prime, p does not divide b(b^2 - 1), and N = (b^(2p) - 1) / (b^2 - 1), then N is a b-psp.
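The theorem's construction can be sketched for base b = 2; since b(b^2 - 1) = 6, any odd prime other than 3 qualifies. The first case, p = 5, yields the classic pseudoprime 341 = 11 x 31.

```python
# Instances of the theorem for b = 2: N = (b^(2p) - 1) / (b^2 - 1) is a
# composite base-b pseudoprime whenever the odd prime p does not divide 6.
def is_composite(n):
    return any(n % d == 0 for d in range(2, int(n**0.5) + 1))

b = 2
for p in (5, 7, 11):                       # odd primes not dividing b * (b^2 - 1) = 6
    N = (b**(2 * p) - 1) // (b**2 - 1)     # p = 5 gives N = 341 = 11 * 31
    assert pow(b, N - 1, N) == 1           # the Fermat condition (*) holds...
    assert is_composite(N)                 # ...even though N is composite
    print(p, N)
```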

15 The existence of so many pseudoprimes indicates that the question of deciding whether a given number is prime or composite is a difficult one. This leads us back to RSA and its second problem (factoring), which we now approach from a different angle: that of primality testing.

16 It was simply very difficult (if not impossible) to prove that a randomly selected 100-digit number was a prime back in 1977. Furthermore, the primality proving methods that were available did not lend themselves to easy implementation in hardware, a necessary condition for RSA to become widely useable. A result of this situation was the refinement and further development of what are called probabilistic primality tests.

17 Probabilistic methods. Let S be any set. A Monte Carlo algorithm for S is an algorithm which, given an input x and a source of random numbers, returns "yes" or "no" with the properties that: if x is not in S then the answer is always "no"; if x is in S then the answer is "yes" with probability at least ½.

18 Solovay-Strassen test. The Solovay-Strassen probabilistic primality test (1977) was the first such test popularized by the advent of public-key cryptography. There is no longer any reason to use this test, because an alternative is available, the Miller-Rabin test, which is both more efficient and always at least as correct.

19 Miller-Rabin Test. The probabilistic primality test used most in practice today is the Miller-Rabin test (1980), also known as the strong pseudoprime test. The test is based on a sharper version of Fermat's Little Theorem: a^(p-1) ≡ 1 (mod p) for p prime and gcd(a, p) = 1 (equivalently, a^p ≡ a (mod p) for all a).

20 For p odd, of course p - 1 = 2r is even. Then a^(p-1) - 1 = a^(2r) - 1 = (a^r - 1)(a^r + 1). So a^(p-1) - 1 ≡ 0 (mod p) implies that the prime p divides a^r - 1 or a^r + 1, and consequently a^r ≡ 1 (mod p) or a^r ≡ -1 (mod p).

21 This can be taken even further, by taking all powers of 2 out of p - 1, to obtain the following fact. Fact 1. Let n be an odd prime, and let n - 1 = 2^s * r where r is odd. Let a be any integer such that gcd(a, n) = 1. Then either a^r ≡ 1 (mod n) or a^(2^j * r) ≡ -1 (mod n) for some j, 0 ≤ j ≤ s - 1.

22 Definitions. Let n be an odd composite integer and let n - 1 = 2^s * r where r is odd. Let a be an integer in the interval [1, n - 1] relatively prime to n. (i) If a^r ≢ 1 (mod n) and a^(2^j * r) ≢ -1 (mod n) for all j, 0 ≤ j ≤ s - 1, then a is called a strong witness (to compositeness) for n. (ii) Otherwise, n is said to be a strong pseudoprime to the base a. The integer a is called a strong liar (to primality) for n.

23 Example (strong pseudoprime). Consider the composite integer n = 91 = 7 x 13. Try a = 9. Since 91 - 1 = 90 = 2 x 45, s = 1 and r = 45. Since 9^r = 9^45 ≡ 1 (mod 91), 91 is a strong pseudoprime to the base 9. The set of all strong liars for 91 is {1, 9, 10, 12, 16, 17, 22, 29, 38, 53, 62, 69, 74, 75, 79, 81, 82, 90}. Notice that the number of strong liars for 91 is less than 90/4.
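The liar set on the slide is small enough to enumerate by brute force. Since s = 1 here, the strong-liar condition collapses to a^45 ≡ ±1 (mod 91):

```python
# Enumerate the strong liars for n = 91, where n - 1 = 90 = 2 * 45 (s = 1, r = 45).
from math import gcd

n, r = 91, 45
liars = [a for a in range(1, n)
         if gcd(a, n) == 1 and pow(a, r, n) in (1, n - 1)]
print(liars)                 # the 18-element set listed on the slide
print(len(liars) < 90 / 4)   # True: 18 < 22.5
```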

24 Fact 1 can be used as a basis for a probabilistic primality test due to the following result. Fact 2. If n is an odd composite integer, then at most ¼ of all the numbers a, 1 ≤ a ≤ n - 1, are strong liars for n.

25 Algorithm: Miller-Rabin probabilistic primality test.
MILLER-RABIN(n, t)
INPUT: an odd integer n ≥ 3 and security parameter t ≥ 1.
OUTPUT: an answer "prime" or "composite".
1. Write n - 1 = 2^s * r such that r is odd.
2. For i from 1 to t do the following:
2.1 Choose a random integer a, 2 ≤ a ≤ n - 2.
2.2 Compute y = a^r mod n.
2.3 If y ≠ 1 and y ≠ n - 1 then do the following:
    j ← 1.
    While j ≤ s - 1 and y ≠ n - 1 do the following:
        Compute y ← y^2 mod n.
        If y = 1 then return ("composite").
        j ← j + 1.
    If y ≠ n - 1 then return ("composite").
3. Return ("prime").
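A direct transcription of the algorithm as a sketch (the function name and the guard for n = 3, where no base in [2, n - 2] exists, are mine):

```python
import random

def miller_rabin(n: int, t: int = 20) -> str:
    """MILLER-RABIN(n, t) for odd n >= 3: 'composite' is always correct;
    'prime' is wrong with probability less than (1/4)**t."""
    if n == 3:
        return "prime"
    # 1. Write n - 1 = 2^s * r with r odd.
    s, r = 0, n - 1
    while r % 2 == 0:
        s, r = s + 1, r // 2
    # 2. Run t independent rounds with random bases.
    for _ in range(t):
        a = random.randint(2, n - 2)
        y = pow(a, r, n)
        if y != 1 and y != n - 1:
            j = 1
            while j <= s - 1 and y != n - 1:
                y = pow(y, 2, n)
                if y == 1:                # nontrivial square root of 1: composite
                    return "composite"
                j += 1
            if y != n - 1:
                return "composite"
    return "prime"

print(miller_rabin(10807))   # composite (10807 = 101 * 107)
print(miller_rabin(101))     # prime
```

Each round costs one modular exponentiation plus at most s - 1 modular squarings, so the test is fast even for very large n.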

26 If n is actually prime, this algorithm will always declare ‘prime’. However, if n is composite, Fact 2 can be used to deduce the following probability of the algorithm erroneously declaring ‘prime’.

27 FACT 3 (Miller-Rabin error-probability bound). For any odd composite integer n, the probability that MILLER-RABIN(n, t) incorrectly declares n to be "prime" is less than (1/4)^t.

28 To perform the Miller-Rabin test on n to a single base a, we need no more than log2(n) (the number of bits in the binary representation of n) modular multiplications, each using O((log n)^2) bit operations. Hence, the Miller-Rabin test to one base takes O((log n)^3) bit operations. We can run this for up to n - 3 bases, but the more values of a we run, the slower the algorithm.

29 In 1983, Adleman, Pomerance and Rumely gave the first deterministic algorithm for primality testing that runs in less than exponential time. For n the number being tested, the time needed is (log n)^O(log log log n).

30 In 1986, two independent algorithms were developed by Goldwasser and Kilian and by Atkin which, under certain assumptions, would guarantee primality (but not necessarily compositeness) in polynomial time.

31 Then in August 2002, Agrawal, Kayal and Saxena made public their unconditional, deterministic, polynomial-time algorithm for primality testing. For n the number being tested, this algorithm runs in time O~((log n)^12). The proof that the algorithm works uses relatively basic mathematics, and we shall outline it here.

32 The AKS algorithm is based on the following identity for prime numbers: n is prime if and only if (x - a)^n ≡ x^n - a (mod n) for any a such that gcd(a, n) = 1. We expand the difference between the polynomials.

33 Thus, for 0 < i < n, the coefficient of x^i in (x - a)^n - (x^n - a) is C(n, i)(-a)^(n-i). If n is prime, C(n, i) is divisible by n for all 0 < i < n. If n is not prime, let q be a prime divisor of n and let q^k be the largest power of q dividing n. Then q^k does not divide C(n, q) and is relatively prime to (-a)^(n-q). In this case the coefficient of x^q is not zero modulo n.
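The coefficient argument can be tested directly for small n, checking every middle binomial coefficient (the helper name is mine; math.comb requires Python 3.8+):

```python
# Check the AKS identity coefficient-wise: for 0 < i < n, the coefficient of x^i
# in (x - a)^n - (x^n - a) is comb(n, i) * (-a)^(n - i), and all of these vanish
# mod n exactly when n is prime (taking a with gcd(a, n) = 1).
from math import comb

def aks_identity_holds(n: int, a: int = 1) -> bool:
    return all(comb(n, i) * pow(-a, n - i) % n == 0 for i in range(1, n))

print([n for n in range(2, 20) if aks_identity_holds(n)])
# [2, 3, 5, 7, 11, 13, 17, 19] -- exactly the primes below 20
```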

34 So, given n to test, one could choose a value for a and test as above. We would need to evaluate about n coefficients, however, in the worst case, which is too slow. The trick used to reduce the run time is a standard one in algebra: we 'mod out' by a polynomial x^r - 1 to obtain (x - a)^n ≡ x^n - a (mod x^r - 1, n), (*) still working modulo n. How is r chosen? Will this work?

35 In fact, all primes n satisfy (*) for any choice of a and of r. Unfortunately, some composites n may also satisfy (*) for some choices of the pair (a, r). Congruence (*) takes time polynomial in r and log n to check if Fast Fourier Multiplication (Knuth, 1998) is used. The authors show that a suitable choice of r is: a prime of order O((log n)^6), where r - 1 contains a prime factor of a certain size. They then verify (*) for a small number of a's.
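Checking (*) for one pair (a, r) can be sketched with dense coefficient arrays; reducing modulo x^r - 1 just wraps exponents around mod r. The function names are illustrative, schoolbook multiplication is used instead of FFT, and this is only the congruence check, not the full AKS test.

```python
def polymulmod(f, g, r, n):
    # Multiply two polynomials of degree < r modulo (x^r - 1, n).
    h = [0] * r
    for i, fi in enumerate(f):
        if fi:
            for j, gj in enumerate(g):
                h[(i + j) % r] = (h[(i + j) % r] + fi * gj) % n
    return h

def congruence_holds(n, a, r):
    # Left side: (x - a)^n mod (x^r - 1, n) by square-and-multiply.
    result = [1] + [0] * (r - 1)
    base = [(-a) % n, 1] + [0] * (r - 2)   # the polynomial x - a
    e = n
    while e:
        if e & 1:
            result = polymulmod(result, base, r, n)
        base = polymulmod(base, base, r, n)
        e >>= 1
    # Right side: x^n - a mod (x^r - 1, n).
    rhs = [0] * r
    rhs[n % r] = (rhs[n % r] + 1) % n
    rhs[0] = (rhs[0] - a) % n
    return result == rhs

print(congruence_holds(13, 1, 5))   # True: primes satisfy (*) for every (a, r)
print(congruence_holds(4, 1, 3))    # False: this composite is caught
```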

36 The algorithm
_______________________________________________
Input: integer n > 1.
1. if (n is of the form a^b, b > 1) output COMPOSITE;
2. r ← 2;
3. while (r < n) {
4.   if (gcd(n, r) ≠ 1) output COMPOSITE;
5.   if (r is prime)
6.     let q be the largest prime factor of r - 1;
7.     if (q ≥ 4·sqrt(r)·log n) and (n^((r-1)/q) ≢ 1 (mod r))
8.       break;
9.   r ← r + 1;
10. }
11. for a = 1 to 2·sqrt(r)·log n
12.   if ((x - a)^n ≢ x^n - a (mod x^r - 1, n)) output COMPOSITE;
13. output PRIME;
_______________________________________________

37 The first loop in the algorithm tries to find a prime r such that r - 1 has a large prime factor q. The authors show that an r as described in line 7 of the algorithm must exist, and they are even able to establish bounds on it. They then use these bounds to establish that if n is prime, the algorithm returns PRIME.

38 In order to show that if n is composite the algorithm returns COMPOSITE, the following set is constructed: the set of products of powers of polynomials (x - a) of the type on line 12 of the algorithm, taken modulo (x^r - 1, p) for a prime divisor p of n. There are a great many such polynomials. Thus, if the algorithm falsely declares PRIME, every one of the incongruences in line 12 must be false. It follows that this set is very large, and the authors show that this leads to a contradiction.

39 Time Complexity
_______________________________________________
Input: integer n > 1.
1. If (n is of the form a^b, b > 1) output COMPOSITE;
Total: O~((log n)^3) for the perfect-power check.

40
3. while (r < n) {
4.   if (gcd(n, r) ≠ 1) output COMPOSITE;
5.   if (r is prime)
6.     let q be the largest prime factor of r - 1;
7.     if (q ≥ 4·sqrt(r)·log n) and (n^((r-1)/q) ≢ 1 (mod r))
8.       break;
9.   r ← r + 1;
10. }
Total: O((log n)^6) iterations.

41
11. for a = 1 to 2·sqrt(r)·log n
12.   if ((x - a)^n ≢ x^n - a (mod x^r - 1, n)) output COMPOSITE;
13. output PRIME;
Total: O~((log n)^12) overall.

42 Implications for future work: there is a good chance that people are already looking at whether the new idea of reducing modulo a polynomial can yield a polynomial-time algorithm for factoring.

43 REFERENCES
M. Agrawal, N. Kayal, N. Saxena, 'PRIMES is in P'. Preprint, 2002.
R. Crandall and C. Pomerance, 'Prime Numbers: A Computational Perspective'. Springer, 2001.
D. Knuth, 'The Art of Computer Programming', Vol. II. Addison-Wesley, 1998.
H. Williams, 'Édouard Lucas and Primality Testing', CMS Monographs, Wiley, 1998.

