When are Fuzzy Extractors Possible?

Presentation transcript:

When are Fuzzy Extractors Possible? Benjamin Fuller (U. Connecticut), Leonid Reyzin (Boston U.), Adam Smith (Pennsylvania State U.). http://eprint.iacr.org/2014/961

Key Derivation from Noisy Sources Examples: biometric data, physically unclonable functions (PUFs). High-entropy sources are often noisy: an initial reading w0 ≠ a later reading w1, but we assume a bound on distance, d(w0, w1) ≤ t. Goal: derive a stable, cryptographically strong output, i.e., w0 and w1 should map to the same output, and the output should look uniform to the adversary. Why? Use the output to bootstrap authentication, e.g., self-enforcing rather than server-enforced authorization.

Key Derivation from Noisy Sources Setting: Alice holds w0, Bob holds a nearby w1, and Eve is a passive (non-tampering) eavesdropper; the parties want to agree on a cryptographic key. [Wyner 75]… [Bennett-Brassard-Robert 86]… [Maurer 93]…

Notation Enrollment algorithm Gen (run by Alice): take a measurement w0 from the source W; produce a random key r and a nonsecret helper value p. Subsequent algorithm Rep (run by Bob): given a reading w1 and p, output the same r whenever d(w0, w1) ≤ t (correctness). Security: r looks uniform given p, if the source is good enough.
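To make the syntax concrete, here is a minimal Python sketch of the Gen/Rep interface only (no security is claimed; the class name, the bit-tuple encoding of readings, and the hamming helper are illustrative assumptions, not from the talk):

```python
from typing import Tuple

BitString = Tuple[int, ...]  # a reading, modeled as a tuple of bits


def hamming(a: BitString, b: BitString) -> int:
    """Hamming distance between two equal-length readings."""
    return sum(x != y for x, y in zip(a, b))


class FuzzyExtractorInterface:
    """Syntax of a fuzzy extractor (Gen, Rep); implementations must supply the actual logic."""

    def gen(self, w0: BitString) -> Tuple[bytes, bytes]:
        """Enrollment: from reading w0, output (r, p): a key r and a nonsecret helper p."""
        raise NotImplementedError

    def rep(self, w1: BitString, p: bytes) -> bytes:
        """Reproduction: from a nearby reading w1 and helper p, recover r.
        Correctness goal: rep(w1, p) == r whenever hamming(w0, w1) <= t."""
        raise NotImplementedError
```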

When are fuzzy extractors possible? (Recall the requirement: r looks uniform given p, if the source is good enough.)

When are fuzzy extractors possible? Start with the errorless case, where p is just an extractor seed. An adversary can always try a guess w0 from the source W and run Rep on it. Minimum requirement: every w0 has low probability.

When are fuzzy extractors possible? Define min-entropy: H(W) = −log max_w Pr[W = w]. Necessary: H(W) > security parameter. And, in the errorless case, sufficient by the Leftover Hash Lemma: universal hash functions are extractors, with a long seed and entropy loss proportional to the security parameter.
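As a small illustration of the errorless case, here is a hedged Python sketch of min-entropy and of extraction with a universal hash (the random-binary-matrix hash and the parameter names n, m, eps are assumptions made for this example, not the construction from the talk):

```python
import math
import secrets
from collections import Counter


def min_entropy(samples):
    """Empirical min-entropy H(W) = -log2 max_w Pr[W = w] from a list of observed samples."""
    counts = Counter(samples)
    return -math.log2(max(counts.values()) / len(samples))


def random_matrix(m, n):
    """Random m-by-n binary matrix A: the (long) seed of the universal hash h_A(w) = A w mod 2."""
    return [[secrets.randbelow(2) for _ in range(n)] for _ in range(m)]


def universal_hash(A, w):
    """Extract m bits from the n-bit reading w via a matrix-vector product over GF(2)."""
    return tuple(sum(a * b for a, b in zip(row, w)) % 2 for row in A)

# Leftover Hash Lemma (informal): if H(W) is at least m + 2*log2(1/eps),
# then (A, universal_hash(A, w)) is eps-close to (A, m uniform bits).
```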

When are fuzzy extractors possible? With errors, an adversary can always try a guess w1 that is near many possible w0 and run Rep(w1, p). Minimum requirement: every ball Bt of radius t has low probability.

When are fuzzy extractors possible? Define fuzzy min-entropy: Hfuzz(W) = −log max_{Bt} Pr[W ∈ Bt], where the maximum is over all balls Bt of radius t. Minimum requirement: every Bt has low probability.

When are fuzzy extractors possible? Define fuzzy min-entropy: Hfuzz(W) = −log max_{Bt} Pr[W ∈ Bt]. Why bother with this new notion? Can't we use the old one? We always have Hfuzz(W) ≥ H(W) − log |Bt|, and this coding-theory-style bound is what existing fuzzy extractors rely on. But there are distributions "with more errors than entropy" (log |Bt| > H(W)), which occur in practice and for which such fuzzy extractors provide no security. (Would adding Gaussian noise fix this?) Yet even for these distributions, perhaps Hfuzz(W) > 0.
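A toy Python calculation showing why the new notion matters: a distribution concentrated in a single ball has noticeable ordinary min-entropy but zero fuzzy min-entropy (the parameters n, t and the brute-force enumeration over ball centers are illustrative assumptions chosen so it runs instantly):

```python
import math
from itertools import product


def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))


def min_entropy(dist):
    """H(W) = -log2 max_w Pr[W = w], for a dict mapping outcomes to probabilities."""
    return -math.log2(max(dist.values()))


def fuzzy_min_entropy(dist, n, t):
    """Hfuzz(W) = -log2 max over centers c of Pr[W lands in the radius-t ball around c]."""
    best = max(
        sum(p for w, p in dist.items() if hamming(w, center) <= t)
        for center in product((0, 1), repeat=n)
    )
    return -math.log2(best)


n, t = 6, 2
ball = [w for w in product((0, 1), repeat=n) if hamming(w, (0,) * n) <= t]
W = {w: 1 / len(ball) for w in ball}  # uniform over a single radius-t ball

print(min_entropy(W))              # about 4.46 bits of ordinary min-entropy
print(fuzzy_min_entropy(W, n, t))  # 0 bits: one ball captures all the probability mass
```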

Why is Hfuzz the right notion? An adversary can always guess a w1 near many possible w0, so Hfuzz(W) is necessary. Q: can we make sure that's all the adversary can do? A: yes, using obfuscation! [Bitansky Canetti Kalai Paneth 14] Gen: pick a random r and let p = obfuscation of the program { on input w: if dis(w0, w) ≤ t, output r; else output ⊥ }. Rep: send the input w1 to the obfuscated program p.
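The functionality of that program, stripped of any actual obfuscation, can be sketched in Python as follows (a real construction would obfuscate p so that it hides w0 and r; this plain closure hides nothing, and the function names and 16-byte key length are assumptions for the example):

```python
import secrets


def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))


def gen_via_program(w0, t, key_bytes=16):
    """Gen: pick a random key r and return the program p that releases r only near w0."""
    r = secrets.token_bytes(key_bytes)

    def p(w):
        # "if dis(w0, w) <= t output r, else output reject"; None stands in for the reject symbol
        return r if hamming(w, w0) <= t else None

    return r, p


def rep_via_program(w1, p):
    """Rep: simply run the (ideally obfuscated) program on the fresh reading w1."""
    return p(w1)
```

All security would have to come from the obfuscator; without it, p leaks w0 and r outright.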

Why is Hfuzz the right notion? Hfuzz is necessary, and it is sufficient against computational adversaries. What about information-theoretic adversaries? Our contribution: Hfuzz is sufficient if the distribution is completely known, but insufficient in the case of distributional uncertainty (several variants of this theorem; we consider only Hamming distance). That insufficiency is the focus of this talk.

Common design goal: one construction for a family of sources (e.g., all sources of a given Hfuzz). Why? We may not know our source precisely; something about our source may have leaked to Eve; or we may want a computationally efficient construction.

How are fuzzy extractors built? An extractor Ext converts entropy into an almost-uniform output (privacy amplification). If you haven't heard the terms privacy amplification and information reconciliation, don't worry about it; they are just the two steps listed here. As a warm-up, we look at the way the vast majority of fuzzy extractors are built; we prove impossibility for this approach and then use similar techniques for our other results.

How are fuzzy extractors built? Ext converts entropy (Hfuzz(w0) = k) to almost uniform (privacy amplification). A pair (Sketch, Rec), known as a secure sketch, corrects errors (information reconciliation): Gen computes p = Sketch(w0) and r = Ext(w0); Rep, given w1 with d(w0, w1) ≤ t, computes w0 = Rec(w1, p) and then r = Ext(w0).
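For concreteness, here is a hedged Python sketch of one standard secure sketch, the code-offset construction, instantiated with a simple repetition code (the repetition parameter and the bit-list encoding are illustrative choices; practical constructions use stronger codes, and the Ext step is omitted here):

```python
import secrets

REP = 5  # each code bit is repeated 5 times; majority decoding corrects up to 2 flips per block


def encode(msg_bits):
    """Repetition-code encoder."""
    return [b for b in msg_bits for _ in range(REP)]


def decode(noisy_bits):
    """Majority-vote decoder for the repetition code."""
    blocks = [noisy_bits[i:i + REP] for i in range(0, len(noisy_bits), REP)]
    return [1 if sum(block) > REP // 2 else 0 for block in blocks]


def sketch(w0):
    """Code-offset sketch: p = w0 XOR (random codeword). len(w0) must be a multiple of REP."""
    c = encode([secrets.randbelow(2) for _ in range(len(w0) // REP)])
    return [a ^ b for a, b in zip(w0, c)]


def rec(w1, p):
    """Recover w0 from a nearby w1: decode w1 XOR p back to the codeword, then XOR with p."""
    c = encode(decode([a ^ b for a, b in zip(w1, p)]))
    return [a ^ b for a, b in zip(c, p)]
```

Ext (e.g., a universal hash as sketched earlier) would then be applied to the recovered w0 to obtain r.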

How can a sketch work for a family? (Correctness: from a nearby reading w1 and the sketch p, Rec must output w0.)

How can a sketch work for a family? Rec needs to recover w0 from w1 regardless of which distribution W in the family the original w0 came from (correctness).

How can a sketch work for a family? Rec needs to recover w0 from w1 regardless of which W the original w0 came from; in effect, p chooses points such that Rec returns the same value for all nearby points (security).

How can a sketch work for a family? Rec needs to recover w0 from w1 regardless of which W it came from, and p chooses points such that Rec returns the same value for all nearby points. As a result, few points from each distribution will be at the center of such a ball.

How can a sketch work for a family? To turn this into an impossibility result, we need a family where each distribution has high Hfuzz and the distributions share few points. Theorem: ∃ a family {W} with linear Hfuzz(W) such that any (Sketch, Rec) handling a linear error rate with good probability has H(W | p) < 2.

But we don’t need a sketch! We don’t have to recover w0!

But we don’t need a sketch! We don’t have to recover w0! Gen r Ext w0 Rep r p Ext Sketch w0 < t Rec w1

But we don’t need a sketch! We don’t have to recover w0! Gen r w0 something else entirely Rep r p something else entirely < t w1

Impossibility of fuzzy extractors for a family Suppose Rep outputs even just one bit. Given p, the partition of the space into { w1 : Rep(w1, p) = 0 } and { w1 : Rep(w1, p) = 1 } is known. Nothing near the boundary between the two parts can have been w0 (else Rep wouldn't be 100% correct).

Impossibility of fuzzy extractors for this family Given p, this partition is known, and no w0 can lie near the boundary (else Rep wouldn't be 100% correct). That leaves little uncertainty about w0: in high dimensions, essentially everything is near the boundary. Combined with Eve's knowledge of the distribution, this reveals everything about w0.

Impossibility of fuzzy extractors for this family Theorem: ∃ a family {W} with linear Hfuzz(W) such that any (Gen, Rep) handling a linear error rate can't extract even 3 bits (if Rep is 100% correct).

Alternative family Use the [Holenstein-Renner 05] result on the impossibility of key agreement from bit-fixing sources. We obtain a result similar to the theorem above, except that (+) 100% correctness is not required, but (−) the parameters hidden by "linear" are worse.

Summary Natural and necessary notion: Hfuzz(W) = −log max_{Bt} Pr[W ∈ Bt]. It is sufficient with computational security, and sufficient (information-theoretically) if the distribution is known, but insufficient in the case of distributional uncertainty for information-theoretic constructions (several variants of the "insufficiency theorem"). Q: can we provide computational security with weaker assumptions?