Algebra Jeff Edmonds York University COSC 6111 Fields GCD Powers mod p Fermat, Roots of Unity, & Generators Z mod p vs Complex Numbers Cryptography Other.


Algebra Jeff Edmonds York University COSC 6111. Fields. GCD. Powers mod p. Fermat, Roots of Unity, & Generators. Z mod p vs Complex Numbers. Cryptography. Other Finite Fields. Vector Spaces. Colour. Error Correcting Codes. Linear Transformations. Integrating. Changing Basis. Fourier Transformation (sine). Fourier Transformation (JPEG). Fourier Transformation (Polynomials). Other Algebra: Taylor Expansions, Generating Functions, Prime Numbers.

Fields. A Field has: A universe U of values. Two operations: + and ×. + Identity: ∃0 such that a+0 = a. × Identity: ∃1 such that a×1 = a. Associative: a+(b+c) = (a+b)+c & a×(b×c) = (a×b)×c. Commutative: a+b = b+a & a×b = b×a. Distributive: a×(b+c) = a×b + a×c (this differentiates between + and ×, & gives a×0 = 0). + Inverse: ∀a ∃b a+b=0, i.e. b=-a. (These give you a group.)

Fields. A Field has: A universe U of values. Two operations: + and ×. + Identity: ∃0 such that a+0 = a. × Identity: ∃1 such that a×1 = a. Associative: a+(b+c) = (a+b)+c & a×(b×c) = (a×b)×c. Commutative: a+b = b+a & a×b = b×a. Distributive: a×(b+c) = a×b + a×c ( & a×0 = 0). + Inverse: ∀a ∃b a+b=0, i.e. b=-a. × Inverse: ∀a≠0 ∃b a×b=1, i.e. b=a^-1. Examples: Reals & Rationals, Complex Numbers, Integers, Invertible Matrices.

Fields. Problems for computers: Reals: too much space, lack of precision. Integers: lack of inverses, grow too big. A better field? A finite field, e.g. the integers mod a prime.

Finite Fields. Integers mod 5 (Z/5). Universe U = {0,1,2,3,4}. Two operations + and ×: 3+4 = 7 ≡ 2 (mod 5); 3×4 = 12 ≡ 2 (mod 5). Don't think of mod 5 as a function, mod 5(7) = 2. Think of it as equivalence classes: … ≡ -8 ≡ -3 ≡ 2 ≡ 7 ≡ … (mod 5). Must prove + & × are well defined: [a]mod p × [b]mod p = [a+ip]×[b+jp] = [a×b + (aj+bi+ijp)p] = [a×b]mod p.
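These equivalence-class calculations can be sanity-checked directly; a minimal Python sketch (the function names are illustrative, not from the slides):

```python
# Arithmetic in Z/5: every integer collapses to its class in {0,1,2,3,4}.
p = 5

def add_mod(a, b):
    return (a + b) % p

def mul_mod(a, b):
    return (a * b) % p

# 3+4 = 7 and 3*4 = 12 both land in the class of 2 (mod 5).
sums = (add_mod(3, 4), mul_mod(3, 4))
# ..., -8, -3, 2, 7, ... all name the same equivalence class mod 5.
same_class = {x % p for x in (-8, -3, 2, 7)}
```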

Finite Fields. Special value 0: a+0 = a and a×0 = 0. Special value 1: a×1 = a.

Finite Fields. Associative: a+(b+c) = (a+b)+c and a×(b×c) = (a×b)×c. Commutative: a+b = b+a and a×b = b×a. Distributive: a×(b+c) = a×b + a×c.

Finite Fields. Inverses: ∀a ∃b a+b=0, i.e. b=-a: 0 = 2+(-2) ≡ 2+3 (mod 5). ∀a≠0 ∃b a×b=1, i.e. b=a^-1: 1 = 2×(½); mod 5, what is 2×(?) ≡ 1? 2×3 = 6 ≡ 1 (mod 5).

Finite Fields. Integers mod 7. Multiplicative Inverse: ∀a≠0 ∃b a×b=1, i.e. b=a^-1. Given a, find a^-1. If b = a^-1, then a = b^-1. 1 = 1^-1. It is possible that a = a^-1.

Finite Fields. Integers mod 6. Multiplicative Inverse: ∀a≠0 ∃b a×b=1, i.e. b=a^-1. Given a, find a^-1. No inverse for 2. Zero Divisors: 2×3 = 6 ≡ 0 (mod 6). No inverses for ints mod n if n is not prime.

Finite Fields. Integers mod 6. Multiplicative Inverse: ∀a≠0 ∃b a×b=1, i.e. b=a^-1. Given a, find a^-1. No inverse for 2. Zero Divisors: 2×3 = 6 ≡ 0 (mod 6). No inverses for ints mod n if n is not prime. Inverses exist for ints mod p if p is prime. Prove by construction using the GCD algorithm.

GCD(a,b). Input: ⟨a,b⟩. Output: GCD(a,b). Maintain values ⟨x,y⟩ such that GCD(a,b) = GCD(x,y). GCD(a,b) = GCD(x,y) = GCD(y, x mod y) = GCD(x',y'). Replace ⟨x,y⟩ with ⟨x',y'⟩ = ⟨y, x mod y⟩.

GCD(a,b)
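The loop invariant on the previous slide, GCD(a,b) = GCD(x,y) with ⟨x,y⟩ replaced by ⟨y, x mod y⟩, translates directly into code; a sketch:

```python
def gcd(a, b):
    # Invariant: GCD(a,b) = GCD(x,y); each step replaces <x,y> with <y, x mod y>.
    x, y = a, b
    while y != 0:
        x, y = y, x % y
    return x
```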

Extended GCD(a,b). Input: ⟨25,15⟩. Output: 5 = GCD(25,15), with the certificate (2)×25 + (-3)×15 = 5. In general: g = GCD(a,b) and u×a + v×b = g.

Extended GCD(a,b). My instance: ⟨a,b⟩. My friend's instance: ⟨a',b'⟩ = ⟨b, a mod b⟩, where a mod b = a - r×b. My friend's solution: g' = GCD(a',b') with u'×a' + v'×b' = g'. Substituting: u'×b + v'×(a-r×b) = g, i.e. v'×a + (u'-v'×r)×b = g. My solution: g = g', u = v', v = u'-v'×r, giving u×a + v×b = g.

Extended GCD(a,b)
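The friend-recursion on the previous slide, u'×b + v'×(a-r×b) = g rearranged to v'×a + (u'-v'×r)×b = g, becomes:

```python
def ext_gcd(a, b):
    """Return (g, u, v) with g = GCD(a,b) and u*a + v*b = g."""
    if b == 0:
        return a, 1, 0                # GCD(a,0) = a = 1*a + 0*0
    r = a // b                        # so a mod b = a - r*b
    g, u2, v2 = ext_gcd(b, a % b)     # friend solves <a',b'> = <b, a mod b>
    # u2*b + v2*(a - r*b) = g rearranges to v2*a + (u2 - v2*r)*b = g
    return g, v2, u2 - v2 * r
```

The ⟨u,v⟩ returned may differ from the slide's (2,-3) for ⟨25,15⟩; any pair with u×25 + v×15 = 5 is a valid certificate.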

Finding Inverses. Integers mod p (Z/p). Multiplicative Inverse: Given a≠0 and prime p, find b such that a×b ≡ 1 (mod p).

Finding Inverses. Integers mod p (Z/p). Multiplicative Inverse: Given a≠0 and prime p, find b such that a×b ≡ 1 (mod p). Use Extended GCD(a,p). Output: g = GCD(a,p) with a×u + p×v = g = 1. Set b = u. Then a×b = 1 - p×v ≡ 1 (mod p). Hence ∀a≠0 ∃b a×b=1, i.e. b=a^-1.
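The inverse-finding recipe above, sketched in Python (self-contained, with the extended GCD repeated):

```python
def ext_gcd(a, b):
    # Returns (g, u, v) with u*a + v*b = g = GCD(a,b).
    if b == 0:
        return a, 1, 0
    g, u, v = ext_gcd(b, a % b)
    return g, v, u - v * (a // b)

def inverse_mod(a, p):
    # For prime p and a not a multiple of p: u*a + v*p = 1, so b = u works.
    g, u, v = ext_gcd(a % p, p)
    assert g == 1, "no inverse: a and p share a factor"
    return u % p
```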

Chinese Remainder Theorem. Suppose you want to compute some integer x. But instead of doing the long computation over the integers, you compute it over the integers mod p1, then over the integers mod p2, and so on. Input: ⟨a1,…,ar⟩ and primes ⟨p1,…,pr⟩. Output: x such that ∀i x ≡ ai (mod pi). The answer is unique below p1p2…pr. Sorry. We don't cover the algorithm.
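The slides skip the CRT algorithm, but for two primes the standard construction is a short corollary of the extended GCD; a hedged sketch (not from the slides):

```python
def ext_gcd(a, b):
    # (g, u, v) with u*a + v*b = g = GCD(a,b).
    if b == 0:
        return a, 1, 0
    g, u, v = ext_gcd(b, a % b)
    return g, v, u - v * (a // b)

def crt(a1, p1, a2, p2):
    """The unique 0 <= x < p1*p2 with x = a1 (mod p1) and x = a2 (mod p2)."""
    g, u, v = ext_gcd(p1, p2)         # u*p1 + v*p2 = 1
    # v*p2 = 1 (mod p1) and u*p1 = 1 (mod p2), so mix the two residues:
    return (a1 * v * p2 + a2 * u * p1) % (p1 * p2)
```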

Powers mod p. Start with 1 and continually multiply by b mod p. What do you get? 1, b, b^2, b^3, b^4, b^5, b^6, …, b^N. Input: b and N. Output: y = b^N mod p. Time(N) = Θ(N), but the input size is n = log(b) + log(N), so this is 2^Θ(n).

Powers mod p. For N=7: b^7 = b^3 × b^4, recursing on N=3 and N=4, which recurse on N=1 and N=2. T(N) = 2T(N/2) + 1 = Θ(N). Size = log(b) + log(N), so this is still 2^Θ(n).

Powers mod p. For N=7: b^7 = (b^3)^2 × b, recursing only on N=3, then N=1. T(N) = 1T(N/2) + 1 = Θ(log(N)). Size = log(b) + log(N) = n, so this is Θ(n).
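The Θ(log N) recurrence b^N = (b^(N/2))^2 × b^(N mod 2) is repeated squaring; a sketch:

```python
def power_mod(b, N, p):
    # b^N mod p with one recursive call per halving: Theta(log N) multiplications.
    if N == 0:
        return 1 % p
    half = power_mod(b, N // 2, p)
    y = (half * half) % p
    if N % 2 == 1:                    # e.g. b^7 = (b^3)^2 * b
        y = (y * b) % p
    return y
```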

Discrete Log. Input: b and N. Output: y = b^N mod p. Time(N) = Θ(log N) = Θ(n). The reverse problem: Input: b and y. Output: N such that y = b^N mod p, i.e. N = log_b(y) mod p. Time(N) = Θ(N) = 2^Θ(n), where n = Size = log(b) + log(N). A one-way hard function. Useful in cryptography. Similarly: Multiplying p×q = N takes time Θ(n); Factoring N = p×q takes time 2^Θ(n).

Fermat, Roots of Unity, & Generators. Can the sequence 1, b, b^2, b^3, … go on for ever? No. There are only p elements. Is it possible that b×x = b×y for x ≠ y? No: ∃b^-1 with b×b^-1 = 1, so b×x = b×y gives b^-1×b×x = b^-1×b×y, i.e. x = y. Each node has in-degree one and out-degree one.

Fermat, Roots of Unity, & Generators. What does a graph with in- and out-degree one look like? A cycle: 1, b, b^2, b^3, b^4, b^5, … Let's first focus on only these elements.

Fermat, Roots of Unity, & Generators. ∃r with b^r = 1: the cycle 1, b, b^2, …, b^(r-1) closes back at 1. There might be another element a; it traces its own cycle a, ab, ab^2, ab^3, …, with ab^r = a. There might be another element c, with cb^r = c. Do this some q number of times. Are there more elements? Only 0. The total # of elements = rq+1 = p.

Fermat, Roots of Unity, & Generators. ∃r with b^r = 1; the total # of elements = rq+1 = p. Eg. p=11, n=p-1=rq=10: the nonzero elements b^0, b^1, b^2, …, b^10 can split as r=5,q=2; r=2,q=5; r=1,q=10; or r=10,q=1.

Fermat, Roots of Unity, & Generators. ∃r with b^r = 1; the total # of elements = rq+1 = p. Eg. p=11, n=p-1=rq=10 with r=10,q=1: the powers b^0, b^1, …, b^9 cover every nonzero element. Values of b like 2, 6, 7, & 8 are said to be a generator of the field.

Fermat, Roots of Unity, & Generators. Fermat's Little Theorem: ∀b≠0, b^(p-1) ≡ 1 (mod p). Proof: ∃r with b^r = 1, and the total # of elements = rq+1 = p, so p-1 = rq. Hence b^(p-1) = b^(rq) = (b^r)^q ≡ (1)^q = 1 (mod p).
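Fermat's theorem, the cycle lengths r, and the generators claimed for p=11 can all be checked by brute force; a sketch:

```python
p = 11

# Fermat's Little Theorem: b^(p-1) = 1 mod p for every nonzero b.
for b in range(1, p):
    assert pow(b, p - 1, p) == 1

# The order r of b (smallest r with b^r = 1) always divides p-1 = r*q.
order = {b: next(r for r in range(1, p) if pow(b, r, p) == 1) for b in range(1, p)}
assert all((p - 1) % r == 0 for r in order.values())

# Generators have r = p-1: for p = 11 these are the slides' 2, 6, 7, 8.
generators = sorted(b for b, r in order.items() if r == p - 1)
```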

Fermat, Roots of Unity, & Generators. Fermat's Little Theorem: ∀b≠0, b^(p-1) ≡ 1 (mod p). Euler's Version: ∀b≠0, b^φ ≡ 1 (mod n), where n = pq with p and q prime, φ = (p-1)(q-1), and b is co-prime with n. Example: b=2, p=3, q=5, n=15, φ=(p-1)(q-1)=8: the powers go 1, 2, 4, 8, 16 ≡ 1 (mod 15).

Fermat's Little Theorem: ∀b≠0, b^(p-1) ≡ 1 (mod p). Euler's Theorem: ∀b≠0, b^φ ≡ 1 (mod n), where n = pq with p and q prime, φ = (p-1)(q-1), and b is co-prime with n. Example: b=2, p=3, q=5, n=15, φ=(p-1)(q-1)=8: b^4 = 16 ≡ 1 and b^8 = (b^4)^2 ≡ 1 (mod 15).

 16 1  16 0  16 2  16 3  16 4  16 5  16 6  16 7  16 8  16 9       =  = 1 Z mod 17 vs Complex Numbers 16 th roots of unity -1 = i -i (  n/2 ) 2 = 1 (  n/4 ) 2 =  n/2 = -1 (  3n/4 ) 2 =  n/2 = -1 These could be Z mod 17 or complex numbers ××r r θ re θi = rcosθ + irsinθ

Z mod 17 vs Complex Numbers. f(θ) = re^(θi), g(θ) = r·cosθ + i·r·sinθ. Goal: prove f(θ) = g(θ). f(0) = re^(0i) = r and g(0) = r·cos0 + i·r·sin0 = r, so f(0) = g(0). f'(θ) = i·re^(θi) and g'(θ) = -r·sinθ + i·r·cosθ; f'(0) = ir and g'(0) = ir, so f'(0) = g'(0). f''(θ) = -re^(θi) = -f(θ), and g''(θ) = -r·cosθ - i·r·sinθ = -g(θ).

Z mod 17 vs Complex Numbers. Goal: prove f(θ) = g(θ), given f(0) = g(0), f'(0) = g'(0), f''(θ) = -f(θ), and g''(θ) = -g(θ). Proof by induction (over the reals): suppose for this θ, f(θ) = g(θ) and f'(θ) = g'(θ). Then for the next θ+δ, f(θ+δ) = g(θ+δ); and since f''(θ) = -f(θ) = -g(θ) = g''(θ), also f'(θ+δ) = g'(θ+δ).

 16 1  16 0  16 2  16 3  16 4  16 5  16 6  16 7  16 8  16 9       =  = 1 Z mod 17 vs Complex Numbers 16 th roots of unity -1 = i -i These could be Z mod 17 or complex numbers r r θ re θi = rcosθ + irsinθ re θi × se αi = (rs)e (θ+α)i

Cryptography I publish a public key E and hide a private key D. I have a message m to send to him. I use E to encode it. code = Encode(m,E) Knowing E but not D, I cannot decode the message. Knowing D, I decode the message. m = Decode(code,D)

Identifying Oneself I am the guy who knows D Prove it. I will encode a message for you. code = Encode(m,E) Knowing D, I can decode the message. m = Decode(code,D) Knowing E but not D, I cannot pretend to be him.

Cryptography. I choose two primes p and q; n = pq; φ = (p-1)(q-1). Euler's Theorem: ∀b≠0, b^φ = 1 mod n. Let e be some value co-prime with φ. Let d = e^-1 mod φ. Note φ is not prime, but gcd(φ,e) = 1 is good enough. Note ed = 1 + φr for some integer r. I publish E = ⟨e,n⟩ to the world. I keep D = ⟨d,n⟩ private.

Cryptography. In summary: ∀b≠0, b^φ = 1 mod n, and ed = 1 + φr. c = Encode(m,E) = m^e mod n. Time? Θ(# bits in e, m, & n) using repeated squaring.

Cryptography. In summary: ∀b≠0, b^φ = 1 mod n, and ed = 1 + φr. c = Encode(m,E) = m^e mod n. m' = Decode(c,D) = c^d mod n = (m^e)^d = m^(ed) = m^(1+φr) = m × (m^φ)^r = m × (1)^r mod n = m.
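The whole pipeline fits the slides' tiny numbers p=3, q=5; a toy sketch (real keys use enormous primes, and e=3 here is just one choice co-prime with φ):

```python
p, q = 3, 5
n, phi = p * q, (p - 1) * (q - 1)   # n = 15, phi = 8
e = 3                               # co-prime with phi
d = pow(e, -1, phi)                 # d = e^-1 mod phi (Python 3.8+)
assert (e * d) % phi == 1           # so e*d = 1 + phi*r

m = 7                               # a message co-prime with n
c = pow(m, e, n)                    # c = Encode(m,E) = m^e mod n
m2 = pow(c, d, n)                   # Decode(c,D) = c^d = m^(1+phi*r) = m mod n
assert m2 == m
```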

Finite Fields mod Polynomial. We have seen the finite field Z/p, with elements the integers {0,1,2,…,p-1} and with normal + and × mod a prime integer p. Similarly, we consider the field (Z/p)[x]/P, with elements the polynomials a_(d-1)x^(d-1) + … + a3x^3 + a2x^2 + a1x + a0 with coefficients ai in Z/p and degree at most d-1, with normal + and × mod an unfactorable polynomial P, e.g. x^d - 2x^(d-1) - … - 3x^2 - x - 4 = 0. Note this field has p^d elements.

Finite Fields mod Polynomial. For example (Z/2)[x]/(x^3+x+1): binary coefficients; polynomials over x; mod x^3+x+1, all x^3 terms are removed, so elements have degree at most 2. There are 2^3 = 8 elements.

Finite Fields mod Polynomial. For example, in (Z/2)[x]/(x^3+x+1), x^2+1 and x^2+x+1 are elements. (x^2+1)×(x^2+x+1) = x^4+x^3+x^2 + x^2+x+1 = x^4+x^3+x+1 (coefficients mod 2). Dividing: x^4+x^3+x+1 = (x^3+x+1)(x+1) + (x^2+x), so the remainder, and hence the product in the field, is x^2+x.
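The multiplication worked on this slide can be automated with polynomials stored as bit masks (bit i = coefficient of x^i); a sketch for (Z/2)[x]/(x^3+x+1):

```python
P = 0b1011                              # x^3 + x + 1

def gf8_mul(a, b):
    # Schoolbook multiply over Z/2: adding coefficients is XOR, no carries.
    prod = 0
    for i in range(3):
        if (b >> i) & 1:
            prod ^= a << i
    # Reduce mod x^3+x+1: cancel each high term with a shifted copy of P.
    for i in range(4, 2, -1):
        if (prod >> i) & 1:
            prod ^= P << (i - 3)
    return prod

# (x^2+1) * (x^2+x+1): 0b101 * 0b111 = 0b110 = x^2+x, matching the slide.
```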

Types of Finite Fields. Lemma: Every finite field has p^d elements for some prime p and integer d. Any two finite fields with the same number of elements are isomorphic, i.e. the same under some renaming of the elements. Eg: There is not a field with 6 elements! Is there a field with 81 elements? Yes, because 81 = 3^4. Is there a field with 82 elements? No, because 82 = 2∙41.

Types of Finite Fields. Every finite field has p^d elements, and effectively is determined. Partial Proof: Consider some finite field. Every field has a zero 0 and a 1; keep adding +1. Can this go on for ever? No. There are a finite # of elements. Is it possible that x+1 = y+1 for x ≠ y? No: ∃(-1) with 1+(-1)=0, so x+1+(-1) = y+1+(-1), i.e. x = y. Each node has in-degree one and out-degree one. (Skip.)

Types of Finite Fields. Every finite field has p^d elements, and effectively is determined. What does a graph with in- and out-degree one look like? A cycle through 0, 1, … Let's first focus on only these elements.

Types of Finite Fields. Every finite field has p^d elements, and effectively is determined. Give these elements names: 0, 1, 2', 3', 4', …, (n-1)', where 1+1+…+1 (n times) = 0. We don't yet know how × works. Proof that for these n elements × acts like Z/n: 1×1 = 1 by definition, and a Field is distributive, a×(b+c) = a×b + a×c, so r'×s' = (1+1+…+1)×(1+1+…+1) [r and s copies] = (1×1 + 1×1 + … + 1×1) [r×s copies] = (1+1+…+1) = (r×s)'. This proves × works correctly.

Types of Finite Fields. Every finite field has p^d elements, and effectively is determined. Similarly, for these n elements + acts like Z/n. Hence, we can rename the elements 0, 1, 2', 3', 4', …, (n-1)'.

Types of Finite Fields. Every finite field has p^d elements, and effectively is determined. Similarly, for these n elements + acts like Z/n. Hence, we can rename the elements 0, 1, 2, 3, …, (n-1).

Types of Finite Fields. Every finite field has p^d elements, and effectively is determined. Proof that n is prime: Suppose n = r×s with r,s > 1. Then r×s = n ≡ 0, giving zero divisors, which are not allowed in a field. Hence, n is prime.

Types of Finite Fields. Every finite field has p^d elements, and effectively is determined. But there may be other elements in the Field. Consider one, u. What about a×u for a in Z/p? We get 0, u, 2u, 3u, …, (p-1)u.

Types of Finite Fields. Every finite field has p^d elements, and effectively is determined. What about u×u? Call it v. What about b×v for b in Z/p? We get 0, v, 2v, 3v, …, (p-1)v.

Types of Finite Fields. Every finite field has p^d elements, and effectively is determined. What about au + bv for a and b in Z/p, where v = u×u? We get all p^2 linear combinations au+bv.

Types of Finite Fields. Every finite field has p^d elements, and effectively is determined. What about au + bv for a and b in Z/p, where v = u×u? Or think of u and v as vectors in a vector space with underlying finite field Z/p, e.g. 3u + 2v = [3,2] in the basis ⟨u,v⟩.

Types of Finite Fields. Every finite field has p^d elements, and effectively is determined. We now have considered Z/p = 0,1,2,…,p-1; u = x; v = u×u = x^2; and linear combinations au+bv. Now consider x^3, x^4, x^5, …, x^(d-1), and linear combinations a0 + a1x + a2x^2 + a3x^3 + … + a_(d-1)x^(d-1), until d is such that x^d has been seen before. Perhaps x^d = 2x^(d-1) + … + 3x^2 + x + 4.

Types of Finite Fields. The elements of our field then consist of the set of polynomials a0 + a1x + a2x^2 + a3x^3 + … + a_(d-1)x^(d-1), with coefficients ai in Z/p and degree at most d-1, mod x^d - 2x^(d-1) - … - 3x^2 - x - 4 = 0. (This is a polynomial that is like a prime in that it has no factors.) Note this field has p^d elements.

Vector Spaces. A vector space has: A universe V of objects. Eg: an arrow with a direction and a length (1 inch North East); a knapsack of toys; a function (2x^2 e^x sin x).

Vector Spaces. A vector space has: A universe V of objects. An underlying field F. Closed under linear combinations: if u,v ∈ V, then au + bv ∈ V, eg 3u + 2v for two arrows u and v.

Vector Spaces. A vector space has: A universe V of objects. An underlying finite field F. Closed under linear combinations: if u,v ∈ V, then au + bv ∈ V, eg 3u + 2v.

Vector Spaces. A vector space has: A universe V of objects. An underlying field F. Closed under linear combinations: if u,v ∈ V, then au + bv ∈ V. Eg u = 2x^2 e^x sinx + x e^x cosx and v = 2x e^x cosx + 3e^x sinx give 3u + 2v = 6x^2 e^x sinx + 7x e^x cosx + 6e^x sinx.

Vector Spaces. A vector space has: A universe V of objects. An underlying field F. Closed under linear combinations: if u,v ∈ V, then au + bv ∈ V. You cannot multiply two objects producing an object. Zero Object: 0v = 0.

Vector Spaces. A vector space has: A universe V of objects. An underlying field F. Closed under linear combinations: if u,v ∈ V, then au + bv ∈ V. You cannot multiply two objects producing an object. Zero Object 0. Usual Field rules: Associative: u+(v+w) = (u+v)+w. Commutative: u+v = v+u. Distributive: a×(u+v) = a×u + a×v. + Inverse: ∀v ∃u u+v=0, i.e. u=-v.

Vector Spaces. A basis of a vector space: A tuple of objects ⟨w1,w2,…,wd⟩. Linearly independent: wd ≠ a1w1 + a2w2 + … + a_(d-1)w_(d-1); equivalently, 0 ≠ a1w1 + a2w2 + … + adwd for any nontrivial coefficients.

Vector Spaces. A basis of a vector space: A tuple of objects. Linearly independent. Spans the space uniquely: ∀v ∃[a1,a2,…,ad] with v = a1w1 + a2w2 + … + adwd. Eg Basis = ⟨w1,w2⟩ and v = [a1,a2,…,ad] = [3,-1].

Vector Spaces. A basis of a vector space: A tuple of objects. Linearly independent. Spans the space uniquely: ∀v ∃[a1,a2,…,ad] with v = a1w1 + a2w2 + … + adwd. Eg the Standard basis ⟨w1,w2⟩ and v = [a1,a2,…,ad] = [3,4].

Vector Spaces. A basis of a vector space: A tuple of objects. Linearly independent. Spans the space uniquely: ∀v ∃[a1,a2,…,ad] with v = a1w1 + a2w2 + … + adwd. Eg a basis ⟨w1,w2,w3⟩ and v = [a1,a2,…,ad] = [3,2,-4].

Vector Spaces. A basis of a vector space: A tuple of objects. Linearly independent. Spans the space uniquely: ∀v ∃[a1,a2,…,ad] with v = a1w1 + a2w2 + … + adwd. Eg the Standard basis ⟨w1,w2,w3⟩ and v = [a1,a2,…,ad] = [2,3,6].

Vector Spaces. A basis of a vector space: A tuple of objects. Linearly independent. Spans the space uniquely: ∀v ∃[a1,a2,…,ad] with v = a1w1 + a2w2 + … + adwd. Eg the Standard basis ⟨x^2 e^x sinx, x e^x cosx, e^x sinx⟩ and [a1,a2,…,ad] = [3,7,6]: v = 3x^2 e^x sinx + 7x e^x cosx + 6e^x sinx.

Vector Spaces. A basis of a vector space: A tuple of objects. Linearly independent. Spans the space uniquely: ∀v ∃[a1,a2,…,ad] with v = a1w1 + a2w2 + … + adwd. The object v is represented by the vector [a1,a2,…,ad]. The dimension of the vector space V is d.

Vector Spaces. FindBasis(V):
Let w1 be any nonzero object in V.
Let B = {w1} and d = 1.
loop ⟨invariant: B linearly independent, i.e. 0 ≠ a1w1 + a2w2 + … + adwd for nontrivial coefficients⟩
  Exit if B spans V.
  Let w_(d+1) be any object in V not spanned by B.
  Let B = B ∪ {w_(d+1)} and d = d+1.
end loop
return(B).
Note the dimension d could be infinite.

Colour. Each frequency f of light is a "primary colour". Each colour contains a mix of these, i.e. a linear combination a1f1 + a2f2 + … + adfd. What is the dimension d of this vector space? Infinite, because there are an infinite number of frequencies. Do we see all of these colours?

Colour. No, we have three sensors that detect frequency, so our brain only returns three different real values. What is the dimension d of the vector space of colours that humans see? d = 3. Each colour is specified by a vector, e.g. [255,153,0].

Colour: The basis colours? Bases = Or = Colour

Error Correcting Codes. We have an [n,k,r]-linear code: [|code|, |message|, hamming dist]. I have a k-digit message. I encode it into an n-digit code and send it to you. I corrupt some of the digits. I can detect up to r-1 errors. I can correct up to (r-1)/2 errors and recover the message.

Error Correcting Codes. We have an [n,k,r]-linear code: [|code|, |message|, hamming dist]. I have a k-digit message. I encode it into an n-digit code and send it to you. Think of the code words as vectors in a sub-vector space. These code words are spread evenly through the set of all n-digit tuples so that the minimum hamming distance between any two is r.

Error Correcting Codes These code words are spread evenly through the set of all n digit tuples so that the minimum hamming distance between any two is r

Error Correcting Codes. A vector space has: A universe V of objects, eg code words. An underlying field F: the digits of the message and of the code are from your favorite finite field F. Eg bits, or bytes: v = 13 A2 7C F3 A2.

Error Correcting Codes. A vector space has: A universe V of objects, eg code words. An underlying field F. Closed under linear combinations: if u,v ∈ V, then au + bv ∈ V. Each digit is a separate binary sum. No carries.

Error Correcting Codes. A basis of a vector space: A tuple of objects ⟨w1,w2,w3,w4⟩. Linearly independent. Spans the space uniquely: ∀v ∃[a1,a2,…,ad] with v = a1w1 + a2w2 + … + adwd. Eg v = w3+w4, i.e. [a1,a2,…,ad] = [0,0,1,1]. What k-digit message is associated with this code word?

Given my message is “Yes, I will marry you”, I send this code Had my message been “No, way bozo”, I would have sent this other code. My goal is to corrupt the message to confuse the receiver. I do hope I get a yes. Error Correcting Codes

Error Correcting Codes. I must change r=3 digits to completely corrupt the code in an undetectable way. We say that the hamming distance between these codes is r=3 because they differ in this many digits. Oh. This is the code for no.

Error Correcting Codes. Consider the n-dimensional cube of possible codes, with an edge between those that differ in one digit. When I corrupt a code, one digit at a time, I travel along these edges.

We say that the hamming distance between these codes is r=3 because this is the shortest path between them in this cube. Error Correcting Codes

This is an [n,k,r]-linear code because the hamming distance between any two codes is at least r. Error Correcting Codes.

Error Correcting Codes. This is my code. I can detect up to r-1 errors: I must change r=3 digits to completely corrupt the code in an undetectable way.

Error Correcting Codes. If I receive a code that is not legal, I decode to the closest legal code. I can correct up to (r-1)/2 errors and recover the message.
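The detect/correct bounds can be seen on the smallest [n,k,r]-linear code, the [3,1,3] repetition code (a hypothetical stand-in for the code pictured on the slides); a sketch:

```python
def hamming(u, v):
    # Hamming distance: the number of digits in which two codes differ.
    return sum(a != b for a, b in zip(u, v))

codewords = ["000", "111"]              # the [3,1,3] repetition code
r = min(hamming(u, v) for u in codewords for v in codewords if u != v)

def decode(received):
    # An illegal code is decoded to the closest legal code word.
    return min(codewords, key=lambda c: hamming(c, received))
```

With r = 3 this detects up to r-1 = 2 errors and corrects (r-1)/2 = 1: decode("010") recovers "000".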

Linear Transformations. A linear transformation T(v) = u is useful for: transforming objects wrt the same basis; changing the basis used to describe an object. Linear means: T(au+bv) = aT(u) + bT(v). Recall ∀v ∃[a1,a2,…,ad] with v = a1w1 + a2w2 + … + adwd, so T(v) = T(a1w1 + a2w2 + … + adwd) = a1T(w1) + a2T(w2) + … + adT(wd). Hence we only need to know where the basis vectors get mapped.

Linear Transformations. We only need to know where the basis vectors get mapped: T(v) = a1T(w1) + a2T(w2) + … + adT(wd). Eg for Basis = ⟨w1,w2⟩, specify T(w1) and T(w2).

Linear Transformations. We only need to know where the basis vectors get mapped: T(v) = a1T(w1) + a2T(w2) + … + adT(wd). The columns of the matrix record T(w1) and T(w2) in the basis ⟨w1,w2⟩.

Linear Transformations. We only need to know where the basis vectors get mapped: T(v) = a1T(w1) + a2T(w2) + … + adT(wd). Eg [2 0; 0 1][a; b] = [2a; b]: the first coordinate is doubled.

Linear Transformations. We only need to know where the basis vectors get mapped: T(v) = a1T(w1) + a2T(w2) + … + adT(wd). Rotation by angle β: T(⟨1,0⟩) = cosβ·⟨1,0⟩ + sinβ·⟨0,1⟩, i.e. [cosβ -sinβ; sinβ cosβ][1; 0] = [cosβ; sinβ], and T(⟨0,1⟩) = -sinβ·⟨1,0⟩ + cosβ·⟨0,1⟩, i.e. [cosβ -sinβ; sinβ cosβ][0; 1] = [-sinβ; cosβ].
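The rotation matrix above, applied to coordinates; a sketch:

```python
import math

def rotate(beta, v):
    # Matrix columns: T(<1,0>) = <cos b, sin b> and T(<0,1>) = <-sin b, cos b>.
    a1, a2 = v
    return (a1 * math.cos(beta) - a2 * math.sin(beta),
            a1 * math.sin(beta) + a2 * math.cos(beta))

x, y = rotate(math.pi / 2, (1.0, 0.0))  # a quarter turn sends <1,0> to <0,1>
```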

Integrating. f(x) = x^2 e^x sin x. Can you differentiate it? Sure! f'(x) = 2x e^x sinx + x^2 e^x sinx + x^2 e^x cosx. Can you integrate it? Ahh? No. I can! Think of differentiation as a Linear Transformation and then invert it.

Integrating. Basis = ⟨x^2 e^x sinx, x^2 e^x cosx, x e^x sinx, x e^x cosx, e^x sinx, e^x cosx⟩. What is ∂/∂x(x^2 e^x sinx)? We will explain where this basis comes from later.

Integrating. Basis = ⟨x^2 e^x sinx, x^2 e^x cosx, x e^x sinx, x e^x cosx, e^x sinx, e^x cosx⟩. ∂/∂x(x^2 e^x sinx) = 2x e^x sinx + x^2 e^x sinx + x^2 e^x cosx.

Integrating. Basis = ⟨x^2 e^x sinx, x^2 e^x cosx, x e^x sinx, x e^x cosx, e^x sinx, e^x cosx⟩. ∂/∂x(x^2 e^x cosx) = 2x e^x cosx + x^2 e^x cosx - x^2 e^x sinx.

Integrating. Basis = ⟨x^2 e^x sinx, x^2 e^x cosx, x e^x sinx, x e^x cosx, e^x sinx, e^x cosx⟩. ∂/∂x(x e^x sinx) = e^x sinx + x e^x sinx + x e^x cosx.

Integrating. Basis = ⟨x^2 e^x sinx, x^2 e^x cosx, x e^x sinx, x e^x cosx, e^x sinx, e^x cosx⟩. ∂/∂x(x e^x cosx) = e^x cosx + x e^x cosx - x e^x sinx.

Integrating. Basis = ⟨x^2 e^x sinx, x^2 e^x cosx, x e^x sinx, x e^x cosx, e^x sinx, e^x cosx⟩. ∂/∂x(e^x sinx) = e^x sinx + e^x cosx.

Integrating. Basis = ⟨x^2 e^x sinx, x^2 e^x cosx, x e^x sinx, x e^x cosx, e^x sinx, e^x cosx⟩. ∂/∂x(e^x cosx) = e^x cosx - e^x sinx.

Integrating. Basis = ⟨x^2 e^x sinx, x^2 e^x cosx, x e^x sinx, x e^x cosx, e^x sinx, e^x cosx⟩. ∂/∂x(x^2 e^x sinx + x^2 e^x cosx + x e^x sinx + x e^x cosx + e^x sinx + e^x cosx) = 2x^2 e^x cosx + 2x e^x sinx + 4x e^x cosx + 1e^x sinx + 3e^x cosx.

Integrating. f(x) = x^2 e^x sin x. Can you differentiate it? Sure! f'(x) = 2x e^x sinx + x^2 e^x sinx + x^2 e^x cosx. Can you integrate it? Ahh? No. I can! Think of differentiation as a Linear Transformation and then invert it.

Integrating. On this basis, differentiation is the matrix D (column j holds the coefficients of the derivative of the j-th basis function) and integration is its inverse D^-1:
D =
[ 1 -1  0  0  0  0
  1  1  0  0  0  0
  2  0  1 -1  0  0
  0  2  1  1  0  0
  0  0  1  0  1 -1
  0  0  0  1  1  1 ]
D^-1 =
[  ½   ½   0   0   0   0
  -½   ½   0   0   0   0
   0  -1   ½   ½   0   0
   1   0  -½   ½   0   0
  -½   ½   0  -½   ½   ½
  -½  -½   ½   0  -½   ½ ]

Integrating. Basis = ⟨x^2 e^x sinx, x^2 e^x cosx, x e^x sinx, x e^x cosx, e^x sinx, e^x cosx⟩. ∫x^2 e^x sinx dx = D^-1 [1,0,0,0,0,0] = [½, -½, 0, 1, -½, -½] = ½ x^2 e^x sinx - ½ x^2 e^x cosx + x e^x cosx - ½ e^x sinx - ½ e^x cosx.
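This answer can be verified without inverting anything: differentiating the claimed integral (multiplying its coefficient vector by D) must give back x^2 e^x sinx. A sketch using exact fractions, with D's column j holding the coefficients of the derivative of basis function j:

```python
from fractions import Fraction as F

# Basis order: [x^2 e^x sinx, x^2 e^x cosx, x e^x sinx, x e^x cosx, e^x sinx, e^x cosx]
D = [[1, -1, 0,  0, 0,  0],
     [1,  1, 0,  0, 0,  0],
     [2,  0, 1, -1, 0,  0],
     [0,  2, 1,  1, 0,  0],
     [0,  0, 1,  0, 1, -1],
     [0,  0, 0,  1, 1,  1]]

# The claimed integral of x^2 e^x sinx, as coefficients in the same basis:
integral = [F(1, 2), F(-1, 2), F(0), F(1), F(-1, 2), F(-1, 2)]

# Multiplying by D differentiates; the result should be x^2 e^x sinx itself.
derivative = [sum(D[i][j] * integral[j] for j in range(6)) for i in range(6)]
```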

Integrating. Basis = ⟨x^2 e^x sinx, x^2 e^x cosx, x e^x sinx, x e^x cosx, e^x sinx, e^x cosx⟩. We will now explain where this basis comes from. The Basis must be "Closed under Differentiation". Wedding Party: you must invite the bride and groom, and any friend of anyone invited. Start with ∂/∂x(x^2 e^x sinx).

Integrating. ∂/∂x(x^2 e^x sinx) = 2x e^x sinx + x^2 e^x sinx + x^2 e^x cosx, so the basis so far is ⟨x^2 e^x sinx, x^2 e^x cosx, x e^x sinx⟩. The Basis must be "Closed under Differentiation". Wedding Party: you must invite the bride and groom, and any friend of anyone invited.

Integrating. ∂/∂x(x^2 e^x cosx) = 2x e^x cosx + x^2 e^x cosx - x^2 e^x sinx adds x e^x cosx, giving ⟨x^2 e^x sinx, x^2 e^x cosx, x e^x sinx, x e^x cosx⟩. The Basis must be "Closed under Differentiation". Wedding Party: you must invite the bride and groom, and any friend of anyone invited.

Integrating. ∂/∂x(x e^x sinx) = e^x sinx + x e^x sinx + x e^x cosx adds e^x sinx, giving ⟨x^2 e^x sinx, x^2 e^x cosx, x e^x sinx, x e^x cosx, e^x sinx⟩. The Basis must be "Closed under Differentiation". Wedding Party: you must invite the bride and groom, and any friend of anyone invited.

Integrating. ∂/∂x(x e^x cosx) = e^x cosx + x e^x cosx - x e^x sinx adds e^x cosx, giving the full basis ⟨x^2 e^x sinx, x^2 e^x cosx, x e^x sinx, x e^x cosx, e^x sinx, e^x cosx⟩. And so on. The Basis must be "Closed under Differentiation". Wedding Party: you must invite the bride and groom, and any friend of anyone invited.

Integrating. f(x) = x^-1 e^x sin x. Can you differentiate it? Sure! f'(x) = -x^-2 e^x sinx + x^-1 e^x sinx + x^-1 e^x cosx. Can you integrate it? Ahh? No. Can I?

Integrating. Basis = ⟨x^-1 e^x sinx, x^-2 e^x sinx, x^-3 e^x sinx, x^-4 e^x sinx, …⟩: an infinite basis! ∂/∂x(x^-1 e^x sinx) = -x^-2 e^x sinx + x^-1 e^x sinx + x^-1 e^x cosx; ∂/∂x(x^-2 e^x sinx) = -2x^-3 e^x sinx + x^-2 e^x sinx + x^-2 e^x cosx; ∂/∂x(x^-3 e^x sinx) = -3x^-4 e^x sinx + x^-3 e^x sinx + x^-3 e^x cosx.

Integrating. f(x) = x^-1 e^x sin x. Can you differentiate it? Sure! f'(x) = -x^-2 e^x sinx + x^-1 e^x sinx + x^-1 e^x cosx. Can you integrate it? Ahh? No. Oops, this method does not work.

Integrating

Changing Basis. Change of Basis: T([a1,a2,…,ad]) = [A1,A2,…,Ad] changes the basis used to describe an object. The standard basis of a vector space: a tuple of basis objects, linearly independent, spanning the space uniquely: ∀v ∃[a1,a2,…,ad], v = a1w1 + a2w2 + … + adwd. The new basis of a vector space: a tuple of basis objects, linearly independent, spanning the space uniquely: ∀v ∃[A1,A2,…,Ad], v = A1W1 + A2W2 + … + AdWd. Use small letters aj for the coefficients in the standard basis and capital letters Ak for the coefficients in the new basis.

Changing Basis. Change of Basis: T([a1,a2,…,ad]) = [A1,A2,…,Ad], with T^-1([A1,A2,…,Ad]) = [a1,a2,…,ad]. Standard Basis = ⟨w1,w2⟩; New Basis = ⟨W1,W2⟩. Eg v = [a1,a2] = [3,2] in the standard basis and v = [A1,A2] = [1 1/5, 3 2/5] in the new basis. The change is a matrix product: [? ?; ? ?][a1; a2] = [A1; A2].

Changing Basis
Plugging in the coefficients [1,0] picks out the first basis vector: the matrix times [1; 0] is its first column, W_1 = [4/5, -3/5].

Changing Basis
Likewise, plugging in [0,1] picks out the second basis vector: the matrix times [0; 1] is its second column, W_2 = [3/5, 4/5].

Changing Basis
Worked example: v = [a_1,a_2] = [3,2] and [A_1,A_2] = [1 1/5, 3 2/5]:
[ 4/5 3/5 ; -3/5 4/5 ] [ 1 1/5 ; 3 2/5 ] = [ 3 ; 2 ], i.e. [ W_1[1] W_2[1] ; W_1[2] W_2[2] ] [ A_1 ; A_2 ] = [ a_1 ; a_2 ].

Changing Basis
Going from new coefficients to standard ones: [ W_1[1] W_2[1] ; W_1[2] W_2[2] ] [ A_1 ; A_2 ] = [ a_1 ; a_2 ].
Going back requires the inverse of this matrix: [ A_1 ; A_2 ] = [ W_1[1] W_2[1] ; W_1[2] W_2[2] ]^-1 [ a_1 ; a_2 ].

Changing Basis
If the new basis vectors are orthogonal and of uniform length |W_k|^2 = n:
W_1·W_1 = Σ_j W_1[j]·W_1[j] = n, and since W_1 ⊥ W_2, W_1·W_2 = Σ_j W_1[j]·W_2[j] = 0.
Then [ W_1[1] W_1[2] ; W_2[1] W_2[2] ] · [ W_1[1] W_2[1] ; W_1[2] W_2[2] ] = [ n 0 ; 0 n ],
so the inverse is just the transpose scaled by 1/n:
[ W_1[1] W_2[1] ; W_1[2] W_2[2] ]^-1 = (1/n) [ W_1[1] W_1[2] ; W_2[1] W_2[2] ].

Changing Basis
Hence, with an orthogonal basis, no matrix inversion is needed:
[ A_1 ; A_2 ] = (1/n) [ W_1[1] W_1[2] ; W_2[1] W_2[2] ] [ a_1 ; a_2 ].

Changing Basis
Viewed a different way: A_1·|W_1| = |v|·cos(θ) is the length of the projection of v onto W_1, where cos(θ) = v·W_1 / (|v|·|W_1|).
So A_1 is (up to scaling) v·W_1 = Σ_j a_j·W_1[j]: the correlation between v and W_1.
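The orthogonal-basis shortcut can be sketched in a few lines (a minimal sketch; the helper names are mine, and W1, W2 are the 4/5, -3/5 rotated basis from these slides):

```python
# Change of basis by correlation: when the new basis vectors are
# orthogonal, each new coefficient A_k is just the dot product of v
# with W_k, divided by |W_k|^2 (no matrix inversion needed).

def dot(u, w):
    return sum(ui * wi for ui, wi in zip(u, w))

def to_new_basis(v, basis):
    """Coefficients of v in an orthogonal basis: A_k = v.W_k / |W_k|^2."""
    return [dot(v, w) / dot(w, w) for w in basis]

def from_new_basis(A, basis):
    """Reconstruct v = sum_k A_k * W_k."""
    return [sum(a * w[j] for a, w in zip(A, basis)) for j in range(len(basis[0]))]

W1, W2 = [4/5, -3/5], [3/5, 4/5]
v = [3, 2]
A = to_new_basis(v, [W1, W2])   # [1.2, 3.4], i.e. [1 1/5, 3 2/5]
```

Running `from_new_basis(A, [W1, W2])` recovers [3, 2], matching the worked example.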

Fourier Transformation
A Fourier Transform is a change of basis from the time basis to the sine/cosine basis (JPG) or to the polynomial basis.
Applications: signal processing; compressing data (e.g. images with .jpg); multiplying integers in O(n log n loglog n) time; ...
Purposes: some operations on the data are cheaper in the new format; some concepts are easier to read from the data in the new format; some of the bits of the data in the new format are less significant and hence can be dropped.
See The Scientist and Engineer's Guide to Digital Signal Processing by Steven W. Smith, Ph.D.
Amazingly, once you include complex numbers, the FFT code for sines/cosines and for polynomials is the SAME.

Fourier Transformation
Sine & Cosine Basis: a continuous periodic function y(t) over time t. Find the contribution of each frequency.
Swings, capacitors, and inductors all resonate at a given frequency, which is how a circuit picks out the contribution of that frequency.
If ω = 2π/T is the dominant musical note, then the other basis functions are its harmonics, at frequencies ω, 2ω, 3ω, 4ω, 5ω, 6ω, ... (notes on the piano: C, C, G, C, E, G).

Fourier Transformation
Sine & Cosine Basis. Surely y(x) = x can't be expressed as a sum of sines and cosines?
It can: y(x) ≈ 2 sin(x) - sin(2x) + 2/3 sin(3x) - ...

Fourier Transformation
Sine & Cosine Basis. Likewise y(x) = x^2:
y(x) ≈ π^2/3 - 4 cos(x) + cos(2x) - 4/9 cos(3x) + ...

Fourier Transformation: Change of Basis
Change of basis: T([a_1,a_2,...,a_d]) = [A_1,A_2,...,A_d] changes the basis used to describe an object.
The time basis of the vector space: a tuple of basis objects, linearly independent, spanning the space, so that uniquely ∀v ∃[a_1,...,a_d], v = a_1 w_1 + ... + a_d w_d.
The Fourier basis of the vector space: likewise, uniquely ∀v ∃[A_1,...,A_d], v = A_1 W_1 + ... + A_d W_d.

Time domain y: the value y[j] of the signal at each point in time j.
Frequency domain Y: the amount Y[f] of frequency f in the signal, for each frequency f.

Fourier Transformation: Change of Basis
T(y[0],y[1],...,y[n-1]) = [Y_Re[0],...,Y_Im[n/2]] changes the basis used to describe an object.
A discrete periodic function y (e.g. y[0]=3, y[1]=2, ...): the value y[j] of the signal at each point in time j.
The time basis = [I_1, I_2, ...], where I_j[j'] is one at j' = j and zero elsewhere.
∀y ∃[y[0],...,y[n-1]], y = y[0] I_1 + y[1] I_2 + ... + y[n-1] I_n.

Fourier Transformation: Change of Basis
The Fourier basis = [c_1, s_1, ..., c_{n/2}, s_{n/2}] of cosines and sines.
∀y there exist coefficients so that y = Y_Re[0]·c_1 + Y_Im[0]·s_1 + ... + Y_Re[n/2]·c_{n/2} + Y_Im[n/2]·s_{n/2}:
the amount Y[f] of frequency f in the signal, for each frequency f.
(In the running example, Y_Re[0] = 1 1/5 and Y_Im[0] = 3 2/5.)

Fourier Transformation: Change of Basis
In matrix form, exactly as before, with the Fourier basis vectors as the columns:
[ s_1[1] s_2[1] ; s_1[2] s_2[2] ] [ Y[1] ; Y[2] ] = [ y[1] ; y[2] ].

Correlation (DFT)
Ex. 1: Signal 1. Ex. 2: Signal 2. Searching for the s_3 sine basis vector.
Correlate (point-wise multiply, then sum) the basis vector with each signal:
Σ ≠ 0 → the frequency is present in the signal; Σ = 0 → it is not.
Why does this work? Because the basis is orthogonal.
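The correlation test above can be sketched directly (a minimal sketch; the function name and the test signals are mine):

```python
import math

# Point-wise multiply the signal with a sine basis vector and sum.
# Orthogonality makes the sum (essentially) zero unless the signal
# actually contains that frequency.

def correlate(signal, freq, n):
    basis = [math.sin(2 * math.pi * freq * j / n) for j in range(n)]
    return sum(s * b for s, b in zip(signal, basis))

n = 64
signal1 = [math.sin(2 * math.pi * 3 * j / n) for j in range(n)]  # contains s_3
signal2 = [math.sin(2 * math.pi * 5 * j / n) for j in range(n)]  # does not
```

Correlating signal1 against frequency 3 gives n/2 = 32 (signal present); correlating signal2 gives 0 (signal not present).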

Fourier Transformation: Orthogonal Basis
Sines and cosines of different frequencies are orthogonal and of (almost) uniform length:
[ s_1[1] s_1[2] ; s_2[1] s_2[2] ] · [ s_1[1] s_2[1] ; s_1[2] s_2[2] ] = [ n/2 0 ; 0 n/2 ],
so the inverse of the basis matrix is just its transpose scaled by 2/n.

Fourier Transformation: Orthogonal Basis
[ s_1[1] s_2[1] ; s_1[2] s_2[2] ] [ Y[1] ; Y[2] ] = [ y[1] ; y[2] ], and conversely
[ Y[1] ; Y[2] ] = (2/n) [ s_1[1] s_1[2] ; s_2[1] s_2[2] ] [ y[1] ; y[2] ].
Duality of the FT: if Y = FT(y), then y = FT(Y).

Fourier Transformation: Duality of FT
Time domain y ↔ frequency domain Y. A delta function (impulse at y[4]) transforms to a cosine wave with f = 4, and a cosine wave with f = 4 transforms to a delta function (impulse at Y_re[4]).
Duality of FT: if Y = FT(y), then y = FT(Y).

Fourier Transformation: Duality of FT
A square wave transforms to a sinc function, and a sinc function transforms to a square wave. How do you get those sharp corners?
Duality of FT: if Y = FT(y), then y = FT(Y).

Fourier Transformation: Duality of FT
A Gaussian transforms to a Gaussian. Duality of FT: if Y = FT(y), then y = FT(Y).

Fourier Transformation Continuous Functions

Fourier Transformation: FFT Butterfly
The Fast Fourier Transform takes O(n log n) time: O(log n) levels of butterflies, with O(n) work per level. (See the Recursive slides.)
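The butterfly structure can be sketched as a short recursion (a minimal radix-2 Cooley-Tukey sketch; the slides' own FFT may be organized differently):

```python
import cmath

# Radix-2 FFT: n must be a power of 2. Splitting into even- and
# odd-indexed halves halves the problem at each of the O(log n)
# levels; the loop below is one level's butterflies, O(n) work.

def fft(a):
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2])
    odd = fft(a[1::2])
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n)   # twiddle factor
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out
```

For example, `fft([1, 2, 3, 4])` gives [10, -2+2j, -2, -2-2j], matching the direct O(n^2) DFT.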

Fourier Transformation: Radio Signals
Time domain y / frequency domain Y. A sound signal, i.e. how far out the speaker drum is at each point in time. Sound is low frequency; high frequencies are filtered out.

Fourier Transformation: Radio Signals
A radio carrier signal, i.e. a wave of magnetic field that can travel far: one high-frequency signal.

Fourier Transformation: Radio Signals
Modulation: the product y(i) = y_1(i) × y_2(i) of the carrier signal and the audio signal. In the frequency domain, the audio signal appears shifted (and shifted & flipped) around the carrier frequency.

Fourier Transformation: Linear Filter
This system takes in a signal x[] and outputs a transformed signal y[].

Fourier Transformation: Linear Filter
In order to understand this transformation, we put in a single pulse δ[]; the output is h[]. This impulse response h[] identifies the system.

Fourier Transformation: Linear Filter
Feed in any signal x[]: the output is the sum of the contributions from each separate pulse, each a shifted and scaled copy of h[].
Computationally, figuring out what this electronic system does to a signal this way takes O(nm) time. How can we do it faster?

Fourier Transformation: Convolution
Time domain: y = x*h, the convolution of the input x[] with the impulse response h[], giving the output x[]*h[].
Frequency domain: Y = X × H, the point-wise product.
Multiplication takes O(n) time. Oops: the Fourier Transform takes O(n^2) time. But the Fast Fourier Transform takes O(n log n) time!
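The convolution theorem can be checked numerically (a sketch using a plain O(n^2) DFT for clarity; an FFT computes the same values faster, and the function names here are mine):

```python
import cmath

# Convolution in the time domain equals point-wise multiplication
# in the frequency domain.

def dft(a, sign=-1):
    n = len(a)
    return [sum(a[j] * cmath.exp(sign * 2j * cmath.pi * k * j / n)
                for j in range(n)) for k in range(n)]

def idft(A):
    n = len(A)
    return [v / n for v in dft(A, sign=+1)]

def circular_convolution(x, h):
    n = len(x)
    return [sum(x[j] * h[(k - j) % n] for j in range(n)) for k in range(n)]

x = [1, 2, 3, 0]
h = [4, 5, 0, 0]
X, H = dft(x), dft(h)
via_freq = [v.real for v in idft([Xk * Hk for Xk, Hk in zip(X, H)])]
```

Both routes give [4, 13, 22, 15]: the direct convolution and the transform-multiply-transform-back pipeline agree.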

Fourier Transformation: Convolution
In the time domain it is not clear what the system does to the input. In the frequency domain, the output is X[] × H[]: wherever H is zero, those frequencies of the input are multiplied away. Here the filter filters out the low and high frequencies of the input.

Fourier Transformation: JPG Image Compression
JPEG is a two-dimensional Fourier Transform, exactly as done before.

Fourier Transformation: JPG Image Compression
Each 8×8 block of values from the image is encoded separately.

Fourier Transformation: JPG Image Compression
Each 8×8 block is decomposed as a linear combination of basis functions. Each basis function has a coefficient, giving the contribution of that basis function to the image.

Fourier Transformation: JPG Image Compression
Each of the 64 basis functions is a two-dimensional cosine.

Fourier Transformation: JPG Image Compression
The first basis function is constant. Its coefficient gives the average value within the block. Because many images have large blocks of the same colour, this one coefficient gives much of the key information!

Fourier Transformation: JPG Image Compression
The second basis function "slopes" left to right. Its (positive or negative) coefficient gives whether, left to right, the value tends to increase or decrease.

Fourier Transformation: JPG Image Compression
Because many images have a gradual change in colour, this one coefficient gives more key information!

Fourier Transformation: JPG Image Compression
A similar basis function slopes from top to bottom.

Fourier Transformation: JPG Image Compression
The next basis function captures whether the value tends to be smaller in the middle of the block. Its coefficient helps display the horizontal lines in images.

Fourier Transformation: JPG Image Compression
As seen, the low-frequency components of a signal are more important. Removing 90% of the bits from the high-frequency components might remove only 5% of the encoded information.
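The 8×8 decomposition can be sketched with the 2-D DCT-II (a sketch of the JPEG-style transform; the helper names are mine, and real JPEG adds level-shifting and quantization steps not shown here):

```python
import math

# Each 8x8 block is rewritten as a linear combination of 64
# two-dimensional cosine basis functions. The [0][0] coefficient is
# the block average (up to scaling): the "constant basis" above.

N = 8

def alpha(u):
    return math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)

def dct2(block):
    return [[alpha(u) * alpha(v) * sum(
                block[x][y]
                * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                for x in range(N) for y in range(N))
             for v in range(N)] for u in range(N)]

flat = [[100] * N for _ in range(N)]   # a flat gray block
coeffs = dct2(flat)
```

For the flat block, only the constant coefficient is nonzero (coeffs[0][0] = 800, everything else ~0): one number captures the whole block, which is why large same-colour regions compress so well.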


Fourier Transformation: Polynomial Basis
Instead of using sines and cosines as the basis, we will now use polynomials.

Fourier Transformation: Polynomial Basis
Change of basis: T([y[0],y[1],...,y[n-1]]) = [a_0,a_1,...,a_{n-1}] changes the basis used to describe an object.
A discrete function f: the value y[j] = f(x_j) of the function at fixed values x_0, x_1, x_2, ..., x_{n-1}. For the FFT, we set x_j = e^{2πi j/n}.
The time basis = [I_0, I_1, ...], where I_j[x] is one at x_j and zero elsewhere:
∀f ∃[y[0],...,y[n-1]], f = y_0 I_0 + y_1 I_1 + ... + y_{n-1} I_{n-1}.

Fourier Transformation: Polynomial Basis
The Fourier basis = [1, x, x^2, x^3, ...]:
∀f ∃[a_0,a_1,...,a_{n-1}], f = a_0 + a_1 x + a_2 x^2 + ... + a_{n-1} x^{n-1}.
The a_j are the coefficients of the polynomial.

Fourier Transformation: Evaluating & Interpolating
A Fourier Transform is a change in basis. It changes the representation of a function from the coefficients of the polynomial f(x) = a_0 + a_1 x + a_2 x^2 + ... + a_{n-1} x^{n-1} to the values y_i = f(x_i) at key values x_0, x_1, ..., x_{n-1}. This amounts to evaluating f at these points.

Fourier Transformation: Evaluating & Interpolating
A Fourier Transform is a change in basis: from the coefficients of the polynomial f(x) = a_0 + a_1 x + ... + a_{n-1} x^{n-1} to the values y_i = f(x_i). In matrix form:

[ (x_0)^0      (x_0)^1      ...  (x_0)^{n-1}     ]   [ a_0     ]   [ y_0     ]
[ (x_1)^0      (x_1)^1      ...  (x_1)^{n-1}     ]   [ a_1     ]   [ y_1     ]
[ (x_2)^0      (x_2)^1      ...  (x_2)^{n-1}     ] · [ a_2     ] = [ y_2     ]
[ ...                                            ]   [ ...     ]   [ ...     ]
[ (x_{n-1})^0  (x_{n-1})^1  ...  (x_{n-1})^{n-1} ]   [ a_{n-1} ]   [ y_{n-1} ]

This is the Vandermonde matrix. It is invertible if the x_i are distinct.

Fourier Transformation: Evaluating & Interpolating
An Inverse Fourier Transform is the reverse. It changes the representation of a function from the values y_i = f(x_i) at key values x_0, x_1, ..., x_{n-1} back to the coefficients of the polynomial f(x) = a_0 + a_1 x + a_2 x^2 + ... + a_{n-1} x^{n-1}. This amounts to interpolating these points.

Fourier Transformation: Evaluating & Interpolating
Given a set of n points in the plane with distinct x-coordinates, there is exactly one polynomial of degree at most n-1 going through all these points. Hence interpolation recovers the coefficients uniquely.

Convolution: Polynomial Multiplication
f(x) = a_0 + a_1 x + a_2 x^2 + ... + a_{n-1} x^{n-1}
g(x) = b_0 + b_1 x + b_2 x^2 + ... + b_{n-1} x^{n-1}
[f×g](x) = c_0 + c_1 x + c_2 x^2 + ... + c_{2n-2} x^{2n-2}
E.g. the x^5 coefficient: c_5 = a_0 b_5 + a_1 b_4 + a_2 b_3 + ... + a_5 b_0.
Computing all the c_k directly takes O(n^2) time. Too much!

Polynomial Multiplication
Coefficient domain a_j → (Fast Fourier Transform, O(n log n)) → evaluation domain y_i.
y_i = f(x_i), z_i = g(x_i), and y_i × z_i = [f×g](x_i): multiplying the values point-wise takes O(n) time!
Transform [a_0,a_1,...,a_{n-1}] and [b_0,b_1,...,b_{n-1}] (padded to at least 2n-1 evaluation points), multiply point-wise, and transform back to get [c_0,c_1,...,c_{2n-2}].
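The evaluate-multiply-interpolate pipeline can be sketched end to end (a sketch; a plain O(n^2) DFT stands in for the FFT, which computes exactly the same evaluations at the roots of unity in O(n log n), and the function names are mine):

```python
import cmath

def evaluate_at_roots(coeffs, n):
    """Evaluate the polynomial at the n-th roots of unity x_i = e^{2 pi i j/n}."""
    return [sum(c * cmath.exp(2j * cmath.pi * i * k / n)
                for k, c in enumerate(coeffs)) for i in range(n)]

def interpolate_from_roots(values):
    """Inverse transform: recover coefficients from the evaluations."""
    n = len(values)
    return [sum(v * cmath.exp(-2j * cmath.pi * i * k / n)
                for i, v in enumerate(values)) / n for k in range(n)]

def poly_multiply(f, g):
    n = len(f) + len(g) - 1                    # degree of product + 1
    fs = evaluate_at_roots(f + [0] * (n - len(f)), n)
    gs = evaluate_at_roots(g + [0] * (n - len(g)), n)
    prod = interpolate_from_roots([a * b for a, b in zip(fs, gs)])
    return [round(c.real) for c in prod]       # integer coefficients
```

For example, (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2, and `poly_multiply([1, 2], [3, 4])` returns [3, 10, 8].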

Multiplying Big Integers
X = 11... (N bits), Y = 10..., X×Y = 10...
The high-school algorithm takes O(N^2) bit operations. Can we do it faster? With the FFT we can do it in O(N log(N) loglog(N)) time. See the Recursive slides.
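The idea behind the fast algorithm can be sketched on decimal digits (a sketch; the schoolbook convolution below is O(N^2), and the fast bound comes from replacing it with the FFT-based polynomial product above, which this sketch does not do):

```python
# Treat the digit strings as polynomial coefficients, convolve,
# then propagate carries.

def multiply(x, y):
    a = [int(d) for d in reversed(str(x))]   # least-significant digit first
    b = [int(d) for d in reversed(str(y))]
    c = [0] * (len(a) + len(b))
    for i, ai in enumerate(a):               # convolution of digit vectors
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    carry = 0                                # propagate carries base 10
    for k in range(len(c)):
        carry, c[k] = divmod(c[k] + carry, 10)
    return int(''.join(map(str, reversed(c))).lstrip('0') or '0')
```

For example, `multiply(999, 999)` returns 998001; the separation of "convolve" from "carry" is exactly what lets the FFT take over the expensive step.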

Taylor Expansions
In many problems we face functions which are far more complicated than the standard functions from classical analysis. If we can represent them as series of polynomials, then some properties of the functions become easier to study.

Taylor Expansions of a Function
Functions f(x) can be expressed by F(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ...
This converges only where the terms a_i x^i shrink to zero, but within that range of x it gives the perfect answer.
E.g. f(x) = 1/(1-x): F(x) = 1 + x + x^2 + x^3 + ...
Proof: xF(x) = x + x^2 + x^3 + x^4 + ..., so F(x) - xF(x) = 1 and F(x) = 1/(1-x).

Taylor Expansions of a Function
Functions f(x) can be approximated by the truncated series F(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ... + a_{n-1} x^{n-1}.
E.g. f(x) = 1/(1-x): F(x) = 1 + x + x^2 + x^3 + x^4 + Θ(x^5).

Taylor Expansions of a Function
E.g. f(x) = 1/(1-x): F(x) = 1 + x + x^2 + x^3 + ...
E.g. f(x) = 1/(1-βx): F(x) = 1 + βx + β^2 x^2 + β^3 x^3 + ..., i.e. a_i = β^i.
Converges if |βx| < 1, i.e. if |x| < 1/β.

Taylor Expansions of a Function (some functions? analytic ones)
E.g. f(x) = 1/x: F(x) = ?? The problem is a_0 = f(0) = ∞, so no Taylor expansion around 0 exists.

Taylor Expansions of a Function
Functions f(x) can be expressed by F(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ...
Proof:
F(x)    = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4 + ...    F(0)    = a_0         a_0 = f(0)
F'(x)   = a_1 + 2 a_2 x + 3 a_3 x^2 + 4 a_4 x^3 + ...        F'(0)   = a_1         a_1 = f'(0)
F''(x)  = 2 a_2 + 2·3 a_3 x + 3·4 a_4 x^2 + ...              F''(0)  = 2 a_2       a_2 = 1/2 f''(0)
F'''(x) = 2·3 a_3 + 2·3·4 a_4 x + ...                        F'''(0) = 2·3 a_3     a_3 = 1/(1·2·3) f'''(0)
In general F^(i)(0) = i! a_i, so a_i = 1/i! f^(i)(0).

Taylor Expansions of a Function
a_i = 1/i! f^(i)(0). Example: f(x) = e^x.
f(0) = e^0 = 1, f'(0) = e^0 = 1, f''(0) = e^0 = 1, f'''(0) = e^0 = 1, ...
So e^x = 1 + x + x^2/2! + x^3/3! + ... Converges for all x.

Taylor Expansions of a Function
a_i = 1/i! f^(i)(0). Example: f(x) = sin(x).
f(0) = sin(0) = 0, f'(0) = cos(0) = 1, f''(0) = -sin(0) = 0, f'''(0) = -cos(0) = -1, ...
So sin(x) = x - x^3/3! + x^5/5! - ... Converges for all x.
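Both expansions can be checked numerically (a sketch; the function names are mine, and 20 terms is an arbitrary truncation point that is plenty for small x):

```python
import math

# Partial sums of the Taylor series a_i = f^(i)(0)/i! for e^x and
# sin(x), compared against the library functions.

def exp_taylor(x, terms=20):
    return sum(x ** i / math.factorial(i) for i in range(terms))

def sin_taylor(x, terms=20):
    # f^(i)(0) cycles 0, 1, 0, -1, so only odd powers survive.
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))
```

For x around 1 the 20-term partial sums already agree with math.exp and math.sin to well past 9 decimal places, illustrating "converges for all x" (though larger x needs more terms).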

Taylor Expansions
Taylor series for f(x) centered at a: F(x) = Σ_{n≥0} f^(n)(a)/n! · (x-a)^n.
Clearly this requires f(x) to be infinitely differentiable at x = a.

Taylor Expansions
The nth-order approximation: T_n(x) = Σ_{k=0..n} f^(k)(a)/k! · (x-a)^k.
The Lagrange remainder: R_n(x) = f(x) - T_n(x) = f^(n+1)(ξ)/(n+1)! · (x-a)^{n+1} for some ξ between a and x.

Application:

Taylor Expansions

Application: Taylor Expansions

Application: Taylor Expansions
Find solutions to differential equations. Newton's method to find the root of a function. Can be extended to functions of several variables.

Generating Functions
Generating functions: hiding interesting values within the coefficients of a Taylor expansion of a function. The technique is so powerful that it can solve most recurrences, most sums, and lots of neat math facts.
generatingfunctionology by Herbert S. Wilf (Academic Press) is HIGHLY recommended!

Generating Functions
Which function has the Taylor expansion with the Fibonacci sequence for coefficients?
G = Σ_{i=0..∞} F_i x^i, where F_0 = 0, F_1 = 1, F_n = F_{n-1} + F_{n-2}.
    G     =  F_0 + F_1 x + F_2 x^2 + F_3 x^3 + F_4 x^4 + F_5 x^5 + ...
   -xG    =      - F_0 x - F_1 x^2 - F_2 x^3 - F_3 x^4 - F_4 x^5 - ...
   -x^2 G =              - F_0 x^2 - F_1 x^3 - F_2 x^4 - F_3 x^5 - ...
Adding these, every coefficient beyond x^1 cancels by the recurrence:
(1 - x - x^2) G = x, so G = x / (1 - x - x^2).
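The claim can be checked by actually dividing the power series (a sketch; `series_div` is my name for ordinary long division of power series):

```python
def series_div(num, den, n):
    """First n Taylor coefficients of num(x)/den(x), with den[0] != 0."""
    q = []
    num = num + [0] * n
    for k in range(n):
        c = num[k] / den[0]
        q.append(c)
        for j in range(1, min(len(den), n - k)):
            num[k + j] -= c * den[j]   # subtract c * x^k * den(x)
    return q

coeffs = series_div([0, 1], [1, -1, -1], 10)   # x / (1 - x - x^2)
```

The result is [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]: the Fibonacci numbers fall out of the coefficients, as advertised.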

Generating Functions
The fact that the manipulation of polynomial equations can encode the same theorems that are proved by combinatorial reasoning is very significant! Never underestimate the insights encoded in the coefficients of a polynomial!

Generating Functions
Let's count things. E.g. let p(n) denote the number of binary trees with n nodes.
p(0) = 1, p(1) = 1, p(2) = 2, p(3) = 5, p(4) = 14.
For p(4): the root's left and right subtrees hold (#L,#R) = (3,0), (2,1), (1,2), or (0,3) nodes.

Generating Functions
Let's count things. E.g. let p(n) denote the number of binary trees with n nodes.
p(0) = 1, p(1) = 1, p(2) = 2, p(3) = 5, p(4) = 14.
In general the root's subtrees hold (#L,#R) = (n-1,0), (n-2,1), (n-3,2), ..., (0,n-1) nodes, giving
p(n) = Σ_{i=0..n-1} p(n-i-1) · p(i).
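The recurrence can be run directly (a minimal sketch; the memoization is just to keep the recursion polynomial-time):

```python
from functools import lru_cache
from math import comb

# A nonempty tree is a root plus a left subtree on n-i-1 nodes and
# a right subtree on i nodes, for each i from 0 to n-1.

@lru_cache(maxsize=None)
def p(n):
    if n == 0:
        return 1
    return sum(p(n - i - 1) * p(i) for i in range(n))
```

Running it reproduces 1, 1, 2, 5, 14, ...; these are the Catalan numbers comb(2n, n)/(n+1), which the generating-function slides below derive.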

Generating Functions (in a real cool way)
Let T denote the (infinite) set of all binary trees. For each tree t, let n(t) denote the number of nodes in t.
The generating function for the set T with powers n(t) is G(x) = Σ_{t∈T} x^{n(t)}; the values of p(n) can be read off its coefficients.

Generating Functions
Let T denote the (infinite) set of all binary trees. Note that a tree t is either the empty tree t = Λ, or consists of a root node, a left tree t_1, and a right tree t_2: t = ⟨•, t_1, t_2⟩.

Generating Functions
Let T denote the (infinite) set of all binary trees.
T × T = { ⟨t_1, t_2⟩ | t_1, t_2 ∈ T }: all ordered pairs of trees, just as (a+b+c)(u+v+x) = au+av+ax+bu+bv+bx+cu+cv+cx pairs every term with every term.

Generating Functions
Theorem: the generating function of the cross product of two sets is the product of the generating functions of the two sets, provided the powers n(t) are additive.

Generating Functions
Theorem: the generating function of the disjoint union of two sets is the sum of the generating functions of the two sets.

Generating Functions
• × T × T = { ⟨•, t_1, t_2⟩ | t_1, t_2 ∈ T }: a root node together with every ordered pair of trees.

Generating Functions
Note that a tree t is either the empty tree t = Λ or t = ⟨•, t_1, t_2⟩. Hence the set • × T × T can be thought of as a set of binary trees. But does it contain all of T? No. It is missing the empty tree Λ.

Generating Functions
T = {Λ} + • × T × T. Taking generating functions (union → sum, cross product → product, one node → a factor of x):
G(x) = 1 + x·G(x)^2. Taylor expansion?

Generating Functions
(Recall) Taylor expansions of a function: F(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ..., with a_0 = f(0), a_1 = f'(0), a_2 = 1/2 f''(0), and in general a_i = 1/i! f^(i)(0). (L'Hôpital's rule helps evaluate these limits at x = 0.)

Generating Functions
Solving G = 1 + xG^2 gives G(x) = (1 - √(1-4x)) / (2x).
The values of p(n) can be read off the coefficients of this function: they are the Catalan numbers. Proof sketch on the next slides.

Generating Functions: Binomial Coefficients
To expand √(1-4x) we need binomial coefficients: but what if n is not an integer? The definition (n choose k) = n(n-1)(n-2)...(n-k+1)/k! still makes sense.

Generating Functions: Binomial Coefficients
(1/2 choose n) = (1/2)(1/2 - 1)(1/2 - 2)(1/2 - 3)...(1/2 - n+1) / n!   (n factors, n-1 of them negative)
= (-1)^{n-1}/2^n · (1)(1)(3)(5)...(2n-3) / n!
Multiplying and dividing by the even numbers (2)(4)(6)...(2n-2) = 2^{n-1}(n-1)! turns the odd product into a factorial:
(1)(1)(3)(5)...(2n-3) = (2n-2)! / (2^{n-1}(n-1)!)
Hence -1/2 · (-4)^n · (1/2 choose n) = (2n-2)! / (n!(n-1)!).

Generating Functions
Set y = -4x in the binomial series for √(1+y): the x^n coefficient of √(1-4x) is (-4)^n (1/2 choose n) = -2(2n-2)!/(n!(n-1)!).
In G(x) = (1 - √(1-4x))/(2x), remove the constant coefficient, negate, and divide by 2x (shifting n-1 → n):
p(n) = (2n)! / ((n+1)! n!) = (2n choose n)/(n+1).
The values of p(n) can be read off the coefficients of this function.

Generating Functions
Let's count things (in a real cool way). E.g. p(n), the number of binary trees with n nodes:
p(n) = (2n choose n)/(n+1) ≈ 4^n / (n^{3/2} √π).
We will explain this approximation when doing prime numbers.

Primes
p is prime if it is a positive integer greater than 1 and nothing but 1 and p divide into it. E.g. 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, ...
Fundamental Theorem of Arithmetic: every integer greater than 1 can be decomposed into a unique product of primes.
Why is 1 not considered a prime? Because then this factorization would not be unique.

Primes: Greatest Common Divisor (GCD)
Each integer can be thought of as the multiset of its prime factors. Subscripts are used to differentiate between copies.
The intersection of two such multisets, here 2·3 = 6, is the Greatest Common Divisor.
The union, here 2^3·3^2·5·7 = 2520, is the Least Common Multiple.
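The multiset view can be sketched directly (a minimal sketch; the helper names are mine, and `Counter`'s `&`/`|` operators give exactly multiset intersection and union):

```python
from collections import Counter
import math

# An integer as the multiset of its prime factors:
# gcd = intersection of the multisets, lcm = union.

def factor(n):
    f, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            f[d] += 1
            n //= d
        d += 1
    if n > 1:
        f[n] += 1
    return f

def unfactor(f):
    out = 1
    for p, e in f.items():
        out *= p ** e
    return out

a, b = 2**3 * 3 * 5 * 7, 2 * 3**2      # 840 and 18
gcd = unfactor(factor(a) & factor(b))  # multiset intersection: 2*3 = 6
lcm = unfactor(factor(a) | factor(b))  # multiset union: 2^3*3^2*5*7 = 2520
```

This reproduces the slide's numbers: gcd = 6 and lcm = 2520 (and agrees with `math.gcd`). Note that factoring is the slow way to compute a GCD; Euclid's algorithm avoids it.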

Number of Primes
Theorem: there are an infinite number of primes.
Proof: by contradiction, suppose there are only a finite number. Hence there is a maximum prime; let it be p. Let n = p!+1. Every prime p' ≤ p fails to divide n, because the remainder is 1. Consider the decomposition of n into prime factors: it contains no primes ≤ p. Hence there is a prime bigger than p that divides n, contradicting the fact that p is the biggest prime.

Number of Primes
Proof idea: count the number of prime factors of the central binomial coefficient (2n choose n).

Number of Primes
(a choose b) = a! / (b!(a-b)!) = "a choose b": given a set of a objects, the number of ways of choosing a subset consisting of b of them.

Number of Primes
(a choose b) = "a choose b": given a set of a objects, the number of ways of choosing a subset consisting of b of them.
Lm 1:

Number of Primes
(a choose b) = "a choose b" = an integer: given a set of a objects, the number of ways of choosing a subset consisting of b of them.
Fundamental Theorem of Arithmetic: every integer can be decomposed into a unique product of primes.

Number of Primes
Lm 2: all of these primes appear in the binomial coefficient at least once each.
Proof: such a prime p appears in the numerator, but can't be cancelled by the denominator, because it is a prime.

Number of Primes
Lm 2: all of these primes appear here at least once each. The remaining factors only make the product bigger, and each of these primes is at least n.

Number of Primes
Lm 3: none of these primes appear in the binomial coefficient.
Proof: such a prime p does not appear in the numerator, and can't be put together from smaller parts, because it is a prime.

Number of Primes
Lm 4: each prime contributes at most 2n to the product.
Proof: later.

Number of Primes Lm 1

Number of Primes Lm 1 Back to the proof of lemma 4.

Number of Primes
Lm 5 proof: n! = 1·2·3·...·n. There is one place where p divides n!, and another, and another: p, 2p, 3p, 4p, ... This gives ⌊n/p⌋ of them.

Number of Primes
Lm 5 proof: n! = 1·2·3·...·n. There is one place where p divides n! two times, and another: p^2, 2p^2, 3p^2, 4p^2, ... But one factor of p in each of these we already counted on the last slide, so each adds only one more to our count: ⌊n/p^2⌋ more of them.

Number of Primes
Lm 5 proof: there is one place where p divides n! i times, and another: p^i, 2p^i, 3p^i, 4p^i, ... All but one factor of p in each we counted already, so each adds only one more to our count: ⌊n/p^i⌋ more of them.
Total: the exponent of p in n! is Σ_i ⌊n/p^i⌋.
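The total from Lm 5 (Legendre's formula) can be sketched in a few lines (the function name is mine):

```python
# The exponent of prime p in n! is
# floor(n/p) + floor(n/p^2) + floor(n/p^3) + ...,
# exactly the slide's count of multiples of p, p^2, p^3, ...

def exponent_in_factorial(n, p):
    total, q = 0, p
    while q <= n:
        total += n // q
        q *= p
    return total
```

For example, the exponent of 2 in 10! is 10//2 + 10//4 + 10//8 = 5 + 2 + 1 = 8, i.e. 2^8 divides 3628800 but 2^9 does not.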

Number of Primes Proof: Lm 6:

Number of Primes
Combining Lm 4 and Lm 5 proves Lm 6.

Number of Primes
Primes are more or less randomly distributed. If you want an n-bit prime: generate a random n-bit number p; Pr[p is prime] ~ 1/n. Repeat 10n times and you have likely found a prime.
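The generate-and-test loop can be sketched concretely (a sketch; the slides do not name a primality test, so Miller-Rabin, the standard probabilistic test, is swapped in here, and the function names are mine):

```python
import random

# Random n-bit candidates are prime with probability about 1/n,
# so the loop below terminates after O(n) tries in expectation.

def is_probable_prime(p, rounds=20):
    """Miller-Rabin probabilistic primality test."""
    if p < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13):
        if p % small == 0:
            return p == small
    d, s = p - 1, 0
    while d % 2 == 0:          # write p - 1 = d * 2^s with d odd
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = random.randrange(2, p - 1)
        x = pow(a, d, p)
        if x in (1, p - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, p)
            if x == p - 1:
                break
        else:
            return False       # a witnesses that p is composite
    return True

def random_nbit_prime(n):
    while True:
        p = random.getrandbits(n) | (1 << (n - 1)) | 1   # force n bits, odd
        if is_probable_prime(p):
            return p
```

Each failed round of Miller-Rabin is a false positive with probability at most 1/4, so 20 rounds make an error vanishingly unlikely.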

Number of Primes
Primes are more or less randomly distributed. If you want p and p+2 to both be prime: generate a random n-bit number p; Pr[both p and p+2 are prime] ~ 1/n^2. Repeat 10n^2 times and you have likely found twin primes. (That this always succeeds has been conjectured for 100 years but not proven.)

Number of Primes
Primes are more or less randomly distributed. If you want p and p+1 to both be prime: Pr[both p and p+1 are prime] ~ 0, because one of them must be even. Oops.

Number of Primes
Primes are more or less randomly distributed. If you want p to be prime and p-1 to be a power of 2: generate a random n-bit number p; Pr[p-1 is a power of 2] ~ 1/2^n, since 2^{n-1} is the only n-bit power of 2. Oops.

Number of Primes
Instead, generate a random ~n-bit number N that is a power of 2; Pr[N+1 is prime] ~ 1/n. Repeat 10n times and you have likely found such a p = N+1 with p-1 a power of 2. But you will try N = 2^n, 2^{n+1}, 2^{n+2}, ..., 2^{11n}, ending up with an 11n-bit number.

Number of Primes
If you want p to be prime and p-1 divisible by 2^n: generate a random small r and let p = r·2^n + 1; Pr[p is prime] ~ 1/n. Repeat 10n times (try r = 1,...,10n) and you have likely found a good r. Then p needs log p = n + log r = n + log 10n bits. Note this is much better than 11n.

Number of Primes
Primes are more or less randomly distributed. Homework questions: the number of N of the form N = p^q; the number of N of the form N = pq for primes p & q; the number of prime factors of N.

End