B504/I538: Introduction to Cryptography


B504/I538: Introduction to Cryptography Spring 2017 • Lecture 3 (2017-01-17)

Assignment 0 is due today; Assignment 1 is out and is due in 2 weeks! (2017-01-31)

Review: Asymptotic notation

Ohs and Omegas, Oh my!
Defⁿ: Let f:ℕ→ℝ and g:ℕ→ℝ be two functions. We say f∈Ο(g) iff ∃c>0 and nc∈ℕ such that ∀n>nc, |f(n)|≤c·|g(n)|.
English: "f is [in] big-O of g"
Intuitively: f(n) grows "no faster" than g(n) as n→∞

Ohs and Omegas, Oh my!
Defⁿ: Let f:ℕ→ℝ and g:ℕ→ℝ be two functions. We say f∈Ω(g) iff ∃c>0 and nc∈ℕ such that ∀n>nc, |g(n)|≤c·|f(n)|.
English: "f is [in] big-Omega of g"
Intuitively: f(n) grows "no slower" than g(n) as n→∞
Equivalently: f∈Ω(g) iff g∈Ο(f)

Ohs and Omegas, Oh my!
Defⁿ: Let f:ℕ→ℝ and g:ℕ→ℝ be two functions. We say f∈ο(g) iff ∀c>0, ∃nc∈ℕ such that ∀n>nc, |f(n)|<c·|g(n)|.
English: "f is [in] little-O of g"
Intuitively: f(n) grows "much slower" than g(n) as n→∞

Ohs and Omegas, Oh my!
Defⁿ: Let f:ℕ→ℝ and g:ℕ→ℝ be two functions. We say f∈ω(g) iff ∀c>0, ∃nc∈ℕ such that ∀n>nc, |g(n)|<c·|f(n)|.
English: "f is [in] little-Omega of g"
Intuitively: f(n) grows "much faster" than g(n) as n→∞
Equivalently: f∈ω(g) iff g∈ο(f)

Ohs and Omegas, Oh my!
Defⁿ: Let f:ℕ→ℝ and g:ℕ→ℝ be two functions. We say f∈Θ(g) iff ∃c₁,c₂>0 and n₀∈ℕ such that ∀n>n₀, c₁·|g(n)|≤|f(n)|≤c₂·|g(n)|.
English: "f is [in] big-Theta of g"
Intuitively: f(n) grows "about as fast" as g(n) as n→∞
Equivalently: f∈Θ(g) iff f∈Ο(g) and g∈Ο(f)

Ohs and Omegas, Oh my!
Each of Ο(g), ο(g), Θ(g), Ω(g), and ω(g) is an infinite set of real-valued functions
Intuitively:
f∈ο(g) ≈ f<g
f∈Ο(g) ≈ f≤g
f∈Θ(g) ≈ f=g
f∈Ω(g) ≈ f≥g
f∈ω(g) ≈ f>g
Ο & Θ impose a total order on real-valued functions:
Reflexive: f∈Ο(f)
Transitive: f∈Ο(g)∧g∈Ο(h)⇒f∈Ο(h)
Antisymmetric: f∈Ο(g)∧g∈Ο(f)⇒f∈Θ(g)
Total: f∈Ο(g)∨g∈Ο(f)
Common abuse of notation: f=Ο(g) or f=Ω(g) or f=Θ(g), etc. Just keep in mind that "=" means "∈", not "equals"!
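The quantifier structure of these definitions can be made concrete with a short script. This is a sketch of our own (the helper `holds_big_o` and the sample functions are not from the lecture): it checks a candidate witness (c, nc) on a finite range, which can refute a claimed witness but can never prove an asymptotic statement, since the definitions quantify over all n beyond the threshold.

```python
# Illustration only: a finite check can never *prove* f ∈ O(g); it can only
# confirm that a candidate witness (c, n0) holds on the tested range.
def holds_big_o(f, g, c, n0, n_max=10_000):
    """Check |f(n)| <= c*|g(n)| for all n0 < n <= n_max."""
    return all(abs(f(n)) <= c * abs(g(n)) for n in range(n0 + 1, n_max + 1))

f = lambda n: 3 * n * n + 10 * n   # f(n) = 3n^2 + 10n
g = lambda n: n * n                # g(n) = n^2

print(holds_big_o(f, g, c=4, n0=10))   # the witness c=4, n0=10 works
print(holds_big_o(f, g, c=1, n0=10))   # c=1 is too small for f ∈ O(g)
```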

Polynomial functions
Defⁿ: A function p:ℕ→ℝ⁺ is polynomial in n if ∃c∈ℕ such that p∈Ο(nᶜ).
Defⁿ: p is superpolynomial in n if it is not polynomial in n
Defⁿ: The set of all functions polynomial in n is poly(n)
Common abuse of notation: p=poly(n) means p∈poly(n)
Note: p can be polynomial in n and not be "a polynomial"! E.g., if f(n) is a polynomial, then p(n)≔f(n)·㏒n is polynomial in n

Inverse polynomial functions
Defⁿ: A function p:ℕ→ℝ⁺ is inverse polynomial in n if p⁻¹∈poly(n).
Alt. defⁿ: p:ℕ→ℝ⁺ is inverse polynomial if its reciprocal is polynomial
E.g.: p(n)≔1/(n¹⁰⁰·㏒n) is inverse polynomial since p⁻¹(n)∈Ο(n¹⁰¹)
E.g.: q(n)≔1/n! is not inverse polynomial since n!∉poly(n)

Probabilistic polynomial time (PPT) algorithms

Turing Machines (TMs)
A simple, well-defined model of computation (the precise definition is out of scope)
Measure running time by the number of steps before the TM halts
This measure is robust: all other "reasonable" models of computation require a "polynomially related" number of steps (try Wikipedia)
Church-Turing Thesis: TMs are universal: anything you can compute in theory, you can compute on a TM! Including your smartphone and laptop

Turing Machines (TMs)
All inputs and outputs to a TM are assumed to be finite strings from {0,1}*
If x∈{0,1}*, then |x| denotes the length of (number of bits in) x
Defⁿ: A sequence of TMs {TMn}n∈ℕ implements an algorithm A if, ∀n∈ℕ and ∀x∈{0,1}ⁿ, on input x, TMn executes each step of A with input x.

Turing Machines (TMs) We use the terms algorithm and (sequence of) TM(s) interchangeably An algorithm is equivalent to the “most efficient sequence of TMs” that implements it Defⁿ: If A is an algorithm (sequence of TMs), then A(x) denotes the random variable describing the output of A on input x∈{0,1}*

Turing Machines (TMs)
Defⁿ: Let t:ℕ→ℕ. An algorithm A runs in time t if ∀x∈{0,1}*, A halts after at most t(|x|) steps on input x.
Note: running time is measured as a function of the length of the input
If the input is an integer, say n∈ℕ (expressed in binary), then running time depends on ⎾㏒n⏋ rather than on n itself!
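The point about measuring in bits is worth seeing concretely. The sketch below (our own `trial_division_is_prime`, not from the lecture) does about √n iterations of work, which is roughly 2^(len/2) where len is the bit length of the input: exponential in the input length, even though it feels "fast" for small n.

```python
# Running time is measured in the *bit length* of the input, not its
# magnitude: trial division does ~sqrt(n) iterations, i.e. ~2^(len/2)
# where len = n.bit_length(), so it is NOT a polynomial-time algorithm.
def trial_division_is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:        # up to ~sqrt(n) iterations
        if n % d == 0:
            return False
        d += 1
    return True

n = 1_000_003
print(n.bit_length())               # the input length |x| is 20 bits
print(trial_division_is_prime(n))
```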

Turing Machines (TMs) Defⁿ: An algorithm A computes f:{0,1}*→{0,1}* if there exists a function t:ℕ→ℕ such that A runs in time t and ∀x∈{0,1}*,A(x)=f(x). Requirement that A runs in time t is just to ensure that A eventually halts; otherwise, an algorithm that never halts would compute every function!

Polynomial time algorithms
Defⁿ: An algorithm A runs in polynomial time if ∃p∈poly(n) such that A runs in time p.
We refer to such an algorithm as a polynomial-time algorithm

Probabilistic algorithms
Defⁿ: An algorithm A is probabilistic if ∃x∈{0,1}* such that A(x) induces a distribution that is not a point distribution.
Intuitively, a probabilistic algorithm makes random choices during its execution
Probabilistic algorithms can do everything non-probabilistic algorithms can do, and possibly more

Probabilistic polynomial time Defⁿ: An algorithm A is probabilistic polynomial-time (PPT) if it is probabilistic and it runs in polynomial time. An algorithm is said to be efficient if it is PPT If the sequence of TMs {TMn}n∈ℕ implementing A satisfies TM1=TM2=⋯ then A is uniform PPT; otherwise, it is non-uniform PPT (nuPPT)
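As a concrete example of a PPT algorithm (an illustrative sketch of ours, not from the slides), here is the classic Fermat primality test: it runs in time polynomial in the bit length of n, and its fresh random choices on each run make the output on a fixed input a genuine random variable.

```python
import random

# Sketch of a textbook PPT algorithm: the Fermat primality test.
def fermat_test(n: int, trials: int = 20) -> bool:
    """True = "probably prime", False = "certainly composite"."""
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = random.randrange(2, n - 1)    # the random choice
        if pow(a, n - 1, n) != 1:         # Fermat: a^(n-1) ≡ 1 (mod n) when n is prime
            return False                  # a witnesses that n is composite
    return True

print(fermat_test(1_000_003))    # a prime
print(fermat_test(1_000_001))    # composite: 101 * 9901
```

(Carmichael numbers can fool this particular test, which is why Miller-Rabin is preferred in practice; the point here is only the shape of a PPT algorithm.)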

The negligible, the noticeable, and the overwhelming

Negligible functions
Defⁿ: A function ε:ℕ→ℝ⁺ is negligible if ∀c>0, ε(n)∈Ο(n⁻ᶜ).
Alt. defⁿ: A function ε:ℕ→ℝ⁺ is negligible if it "vanishes (to zero)" faster than any inverse polynomial
Alt. defⁿ: A function ε:ℕ→ℝ⁺ is negligible if ∀c>0, ∃nc∈ℕ such that ε(n)<n⁻ᶜ for all n>nc

Negligible functions
Defⁿ: A function ε:ℕ→ℝ⁺ is negligible if ∀c>0, ε(n)∈Ο(n⁻ᶜ).
Defⁿ: The set of all functions negligible in n is negl(n)
Common abuse of notation: ε=negl(n)
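A numeric sanity check (our own helper, and of course not a proof, since negligibility is an asymptotic statement): 2⁻ⁿ eventually drops below n⁻ᶜ for every fixed c, which is exactly what the definition captures.

```python
# Brute-force the smallest n at which 2^-n < n^-c. (Beyond the single peak
# of n^c / 2^n the gap only widens, so the first crossing is the last.)
def crossover(c: int) -> int:
    n = 2
    while 2.0 ** (-n) >= n ** (-c):
        n += 1
    return n

for c in (1, 5, 10):
    print(c, crossover(c))    # larger c just pushes the crossover out
```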

Negligible functions
Thm (negl+negl=negl): If ε₁:ℕ→ℝ⁺ and ε₂:ℕ→ℝ⁺ are both negligible, then ε₁(n)+ε₂(n) is negligible.
Proof: Left as an exercise (see Assignment 1). ☐

Negligible functions
Corollary: If ε:ℕ→ℝ⁺ is negligible and λ:ℕ→ℝ⁺ is not negligible, then λ(n)−ε(n) is not negligible.
Proof: Define μ(n)≔λ(n)−ε(n) and suppose (for a contradiction) that μ(n) is negligible. Then, by the preceding theorem, μ(n)+ε(n)=λ(n) would also be negligible, contradicting the assumption that λ is not negligible. ☐

Negligible functions
Thm (poly×negl=negl): If ε:ℕ→ℝ⁺ is negligible and p:ℕ→ℝ⁺ is polynomial, then p(n)·ε(n) is negligible.
Proof: Left as an exercise (see Assignment 1). ☐
Note: This proof is not especially tricky; however, it does NOT follow immediately from the fact that ε₁(n)+ε₂(n) is negligible: any argument that does not use the fact that p(n) is polynomial is incorrect! Indeed, if p(n)=⎾1/ε(n)⏋ (which is superpolynomial), then p(n)·ε(n) = ε(n)+⋯+ε(n) (p(n) terms) ≥ 1, which is clearly not negligible!
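The cautionary note can be checked numerically (a sketch with our own names; ε and p are illustrative choices): a genuinely polynomial p cannot keep p(n)·ε(n) from vanishing, while the superpolynomial p(n)=⎾1/ε(n)⏋ pins the product at 1.

```python
from math import ceil

eps = lambda n: 2.0 ** (-n)          # a negligible function
p_poly = lambda n: n ** 3            # genuinely polynomial
p_bad = lambda n: ceil(1 / eps(n))   # = 2^n: superpolynomial, NOT allowed by the theorem

for n in (10, 20, 30):
    # p_poly * eps shrinks toward 0; p_bad * eps is stuck at 1
    print(n, p_poly(n) * eps(n), p_bad(n) * eps(n))
```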

Negligible functions
Thm (neglᶜ=negl): If ε:ℕ→ℝ⁺ is negligible and c>0 is a constant, then (ε(n))ᶜ is negligible.
Proof: Left as an exercise (see Assignment 1). ☐
Note: This proof is not especially tricky; however, it does NOT follow from the lemma on the next slide together with the self-evident-yet-dubious "fact" that (ε(n))ᶜ<ε(n) whenever ε(n)<1. Indeed, if ε(n)<1 and c<1, then (ε(n))ᶜ>ε(n).

Proving a function is negligible
Step 1: Assume c>0 is given; do not fix a specific c, but treat c as an abstract quantity
Step 2: Set nc=f(c) for some f (this step requires all the ingenuity)
Step 3: Prove that ε(n)<n⁻ᶜ for all n>nc*
(* It is okay to assume that c is not "too small")

Proving a function is negligible
Lemma: Let f:ℕ→ℝ⁺ and fix c>0 and nc∈ℕ such that f(n)<n⁻ᶜ for all n>nc. Then for any d∈(0,c), we have f(n)<n⁻ᵈ for all n>nc.
Q: Why do we care?
A: To prove ε is negligible, it suffices to consider only "large" values of the constant c; sometimes the argument for large c fails for small c

Proving a function is negligible
Lemma: Let f:ℕ→ℝ⁺ and fix c>0 and nc∈ℕ such that f(n)<n⁻ᶜ for all n>nc. Then for any d∈(0,c), we have f(n)<n⁻ᵈ for all n>nc.
Proof: n⁻ᶜ≤n⁻ᵈ iff n⁻ᶜ/n⁻ᵈ≤1 (noting n≥1).
Rewrite n⁻ᶜ/n⁻ᵈ as n⁻ᵃ, where a=c−d.
Now a>0 since c>d; thus 1≤nᵃ (again noting n≥1).
It follows that n⁻ᶜ/n⁻ᵈ=n⁻ᵃ≤1. ☐

Example proof: 2⁻√n is negligible
Let c≥16 be given. (Assuming c is not "too small" is justified by the preceding lemma.) We must produce an nc∈ℕ such that 2⁻√n<n⁻ᶜ for all n>nc.
Taking logs, this is −√n<−c·㏒n ⇔ √n/㏒n>c. We'll argue √n/㏒n>c for all n>nc.
Set nc=⎾c⁴⏋. Since c≥16, we have ㏒n≤n¼ for all n≥nc; hence, for all n>nc,
√n/㏒n ≥ √n/n¼ = n¼ > nc¼ ≥ (c⁴)¼ = c,
as desired. ☐
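A quick numeric spot-check of the inequality (not a proof; the helper and the sampled points are our own): with nc=c⁴ and c reasonably large, 2⁻√n indeed falls below n⁻ᶜ past the threshold, and for small c the inequality kicks in well before c⁴.

```python
import math

# Sample a few n above n_c = c^4 and confirm 2^(-sqrt(n)) < n^(-c).
def below_threshold(c: float, samples) -> bool:
    n_c = c ** 4
    return all(2.0 ** (-math.sqrt(n)) < n ** (-c) for n in samples if n > n_c)

print(below_threshold(16.0, [70_000, 100_000, 1_000_000]))
print(2.0 ** (-math.sqrt(300)) < 300.0 ** (-2.0))   # c=2 already works at n=300
```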

Example proof: 10¹⁰⁰·ε(n) is negligible
Let c>0 be given. By defⁿ, ∃mc∈ℕ such that ε(n)<n⁻⁽ᶜ⁺¹⁾ for all n>mc. (Note the use of c+1 instead of c.)
Set nc=max{10¹⁰⁰, mc}. Then for all n>nc,
10¹⁰⁰·ε(n) < 10¹⁰⁰·n⁻⁽ᶜ⁺¹⁾ = (10¹⁰⁰/n)·n⁻ᶜ ≤ n⁻ᶜ, since n>nc≥10¹⁰⁰ implies 10¹⁰⁰/n≤1;
hence, we have shown that 10¹⁰⁰·ε(n)<n⁻ᶜ for all n>nc. ☐

Overwhelming functions
Defⁿ: A function κ:ℕ→ℝ⁺ is overwhelming if ε(n)≔1−κ(n) is negligible
Alt. defⁿ: A function κ:ℕ→ℝ⁺ is overwhelming if there exists a negligible function ε:ℕ→ℝ⁺ such that κ(n)=1−ε(n)
Like negligible functions, overwhelming functions typically describe probabilities

Noticeable functions
Defⁿ: A function μ:ℕ→ℝ⁺ is noticeable if ∃c>0 such that μ(n)∈Ω(n⁻ᶜ).
Alt. defⁿ: A function μ:ℕ→ℝ⁺ is noticeable if some inverse polynomial "vanishes to zero" faster than it
Noticeable ↔ inverse polynomial; however, we typically reserve the label "noticeable" for inverse polynomials that (quickly) tend to 0 as n→∞

Q: If f is not negligible, does that mean it is noticeable?
A: NO! But why? (See Assignment 1)

Distribution ensembles
Defⁿ: An ensemble of probability distributions is an infinite sequence {Xn}n∈ℕ of random variables indexed by the natural numbers.
Any PPT algorithm A induces an ensemble of distributions: ∀n∈ℕ, set Xn≔A(x) for a fixed input x∈{0,1}ⁿ

Indistinguishable ensembles
Defⁿ: Two ensembles of distributions {Xi}i∈ℕ and {Yi}i∈ℕ are statistically indistinguishable if there exists a negligible function ε:ℕ→ℝ⁺ such that ∀i∈ℕ,
∑x∈Vi | Pr[Xi=x]−Pr[Yi=x] | ≤ ε(i),
where Vi is the (combined) range of Xi and Yi.
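The sum in this definition can be computed directly for finite distributions. A minimal sketch (the name `stat_dist` and the dict encoding are our own conventions, not the lecture's):

```python
# Compute sum over the combined range of |Pr[X=x] - Pr[Y=x]|, with finite
# distributions encoded as {outcome: probability} dicts.
def stat_dist(P: dict, Q: dict) -> float:
    support = set(P) | set(Q)   # the combined range V_i
    return sum(abs(P.get(x, 0.0) - Q.get(x, 0.0)) for x in support)

fair = {0: 0.5, 1: 0.5}
biased = {0: 0.49, 1: 0.51}     # a 1% bias
print(stat_dist(fair, biased))  # ~0.02: a fixed bias stays noticeable, not negligible
```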

That’s all for today, folks!