Simple Lattice Trapdoor Sampling from a Broad Class of Distributions
Vadim Lyubashevsky and Daniel Wichs


Trapdoor Sampling

Given: a random matrix A and a target vector t.
Find: a vector s with small coefficients such that As = t (mod p).
Without a “trapdoor” for A, this is a very hard problem.
When sampling in a protocol, we want to make sure s is independent of the trapdoor.

Trapdoor Sampling

First algorithm: Gentry, Peikert, Vaikuntanathan (2008)
- Very “geometric”
- The distribution of s is a discrete Gaussian

Agrawal, Boneh, Boyen (2010) + Micciancio, Peikert (2012)
- More “algebraic” (you don’t even see the lattices)
- Still, s needs to be a discrete Gaussian

Are Gaussians “fundamental” to trapdoor sampling?

Constructing a Trapdoor

Goal: construct A together with a trapdoor that lets us solve As = t (mod p).

Constructing a Trapdoor

Split A = [A1 | A2] and s = (s1, s2), so that A1 s1 + A2 s2 = t (mod p).

Constructing a Trapdoor

A = [A1 | A1 R + G], where:
- A1 is a random matrix
- R is a random matrix with small coefficients
- G is a special matrix that is easy to invert

Constructing a Trapdoor

More generally, A = [A1 | A1 R + H G], where H is an invertible matrix that is used as a “tag” in many advanced constructions.
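The construction above can be sketched numerically. This is a toy illustration, not a secure instantiation: the parameters p, k, n, m and the use of NumPy are assumptions, and the tag H is taken to be the identity.

```python
import numpy as np

rng = np.random.default_rng(0)
p, k, n, m = 16, 4, 3, 8                    # toy parameters (real schemes are far larger)

A1 = rng.integers(0, p, size=(n, m))        # uniformly random matrix A1
R = rng.integers(-1, 2, size=(m, n * k))    # trapdoor R: small {-1, 0, 1} coefficients
g = 2 ** np.arange(k)                       # gadget row (1, 2, 4, ..., p/2)
G = np.kron(np.eye(n, dtype=int), g)        # easy-to-invert gadget matrix

# A = [A1 | A1 R + G] mod p  (tag H = identity)
A = np.concatenate([A1, (A1 @ R + G) % p], axis=1)
print(A.shape)                              # (3, 20): n rows, m + n*k columns
```

To anyone without R, the second block A1 R + G is designed to look like just another random matrix.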

Easily-Invertible Matrix

The matrix G has the property that for any t, you can find a 0/1 vector s2 such that G s2 = t (a bijection between integer vectors mod q and bit strings):

G = I ⊗ (1, 2, 4, ..., q/2)

i.e., each block row of G contains the powers of two 1, 2, 4, ..., q/2, so inverting G is just binary decomposition.

Example: inverting G by binary-decomposing the entries of t (worked example on slide not reproduced).
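The binary-decomposition inverse of G can be sketched as follows; the toy modulus q = 2^k and the helper name g_inverse are assumptions for illustration.

```python
import numpy as np

q, k, n = 16, 4, 3                       # toy modulus q = 2**k (an assumption)
g = 2 ** np.arange(k)                    # (1, 2, 4, 8): one block row of G
G = np.kron(np.eye(n, dtype=int), g)     # G = I_n tensor (1, 2, ..., q/2)

def g_inverse(t):
    """Binary-decompose each entry of t mod q into k bits, so that G @ s2 = t (mod q)."""
    return np.array([(int(ti) >> j) & 1 for ti in t % q for j in range(k)])

t = np.array([5, 11, 14])
s2 = g_inverse(t)
print(s2)               # 0/1 vector of length n*k = 12
print(G @ s2 % q)       # recovers t: [ 5 11 14]
```

Since every t has exactly one binary decomposition, G gives the bijection between vectors mod q and bit strings mentioned above.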

Inverting with a Trapdoor

A = [A1 | A2] = [A1 | A1 R + G]. We want to find a small s = (s1, s2) such that As = t.

t = As = A1 s1 + (A1 R + G) s2 = A1 (s1 + R s2) + G s2

First attempt: set s1 + R s2 to 0, i.e. solve t = G s2 and output s1 = -R s2.
Bad: s1 = -R s2 reveals R.

Inverting with a Trapdoor

t = As = A1 s1 + (A1 R + G) s2 = A1 (s1 + R s2) + G s2

Instead, pick a small vector y, solve t - A1 y = G s2, and output s1 = y - R s2.
Intuition: y helps to hide R.
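Putting the pieces together, the inversion step can be sketched as below; the toy parameters and the simple uniform distribution used for y are assumptions (the choice of distribution for y is exactly what the rest of the talk is about).

```python
import numpy as np

rng = np.random.default_rng(1)
p, k, n, m = 16, 4, 3, 8                              # toy parameters (assumptions)

A1 = rng.integers(0, p, size=(n, m))
R = rng.integers(-1, 2, size=(m, n * k))              # small trapdoor matrix
G = np.kron(np.eye(n, dtype=int), 2 ** np.arange(k))  # gadget matrix
A = np.concatenate([A1, (A1 @ R + G) % p], axis=1)    # A = [A1 | A1 R + G]

def g_inverse(t):
    # 0/1 vector s2 with G @ s2 = t (mod p), by binary decomposition
    return np.array([(int(ti) >> j) & 1 for ti in t % p for j in range(k)])

def invert(t):
    y = rng.integers(-2, 3, size=m)                   # small masking vector y (toy distribution)
    s2 = g_inverse((t - A1 @ y) % p)                  # solve t - A1 y = G s2
    s1 = y - R @ s2                                   # s1 = y - R s2 stays small
    return np.concatenate([s1, s2])

t = rng.integers(0, p, size=n)
s = invert(t)
print(np.array_equal(A @ s % p, t))                   # True: As = t (mod p)
```

The check works out because A s = A1 (y - R s2) + (A1 R + G) s2 = A1 y + G s2 = A1 y + (t - A1 y) = t (mod p).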

The Distribution we Hope to Get

t = A1 (s1 + R s2) + G s2, so we solve t - A1 y = G s2 and set s1 = y - R s2, where:
- y is small (but has enough entropy)
- t - A1 y is uniformly random (leftover hash lemma)
- s2 is a random bit string (because of the shape of G)
- but s1 = y - R s2 depends on R, s2, and y

Target distribution:
  s2 ← D2
  s1 ← D1 | A1 s1 + (A1 R + G) s2 = t
  Output s = (s1, s2)

Rejection Sampling

Output s1 only with probability D1(s1) / (c · Dy(s1 + R s2)); the constant c is chosen to make sure this acceptance probability is at most 1.

Removing the Dependence on R

Assume R and s2 are fixed, and s1 = y - R s2.
If y ← Dy, then Pr[s1] = Pr[y = s1 + R s2].
We want Pr[s1] to be exactly D1(s1) (conditioned on As = t).
So: sample y and output s1 = y - R s2 with probability D1(s1) / (c · Dy(s1 + R s2)).

Target distribution:
  s2 ← D2
  s1 ← D1 | A1 s1 + (A1 R + G) s2 = t
  Output s = (s1, s2)
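The rejection step can be illustrated on a toy 1-D example. The particular distributions, the shift value, and the helper names are assumptions chosen only to show the mechanism: the accepted samples follow D1 regardless of the secret-dependent shift R s2.

```python
import numpy as np

rng = np.random.default_rng(2)
xs = np.arange(-40, 41)

def normed(sigma):
    """Discrete Gaussian-like weights on xs, normalized into a probability table."""
    w = np.exp(-(xs / sigma) ** 2)
    return dict(zip(xs.tolist(), w / w.sum()))

D_y = normed(12.0)        # wide source distribution for the mask y
D_1 = normed(6.0)         # narrower target distribution for s1
shift = 3                 # stands in for the secret-dependent shift R s2

# c must satisfy D_1(y - shift) <= c * D_y(y) for all y, so the acceptance prob is <= 1.
c = max(D_1.get(y - shift, 0.0) / D_y[y] for y in D_y)

def sample_s1():
    while True:
        y = int(rng.choice(xs, p=list(D_y.values())))
        s1 = y - shift
        # Accept with probability D_1(s1) / (c * D_y(s1 + shift)) = D_1(s1) / (c * D_y(y))
        if rng.random() < D_1.get(s1, 0.0) / (c * D_y[y]):
            return s1

samples = [sample_s1() for _ in range(2000)]
print(abs(np.mean(samples)) < 1.0)   # True: output is centered like D_1, not shifted
```

Even though every candidate s1 starts out shifted by 3, the acceptance test cancels the bias: the empirical mean of the accepted samples is near 0, the mean of D_1.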

The Real Distribution

Real distribution:
  y ← Dy
  s2 ← G^{-1}(t - A1 y)
  s1 ← y - R s2
  Output s = (s1, s2) with probability D1(s1) / (c · Dy(s1 + R s2))

Target distribution:
  s2 ← D2
  s1 ← D1 | A1 s1 + (A1 R + G) s2 = t
  Output s = (s1, s2)

Problem: the shift R s2 depends on y, so what is the distribution of s1?

Equivalence of Distributions

The real and target distributions are statistically close if:

1. For (almost) all s = (s1, s2) in the support of the target distribution,
   D1(s1) / (c · Dy(s1 + R s2)) ≤ 1.
2. D2 is uniformly random, and G is a bijection between the support of D2 and Z_p^n.
3. For x ← D1 and for x ← Dy, Δ(A1 x, U(Z_p^n)) < 2^{-(n log p + λ)}, making the overall statistical distance roughly c · 2^{-λ}.

(2) and (3) break the dependency between s2 and y, and (1) allows rejection sampling.

Our “Unbalanced” Result

[A1 | A2] (s1, s2) = t (mod p), where t has dimension n,
- s1 has entropy greater than n log p, and
- s2 is a binary vector.

Is the Gaussian Distribution “Fundamental” to Lattices?

My opinion:
- To lattices: YES. A Gaussian distribution centered at any point in space is uniform over R^n / L for any “small-enough” lattice L.
- To lattice cryptography: NO. We usually work with random lattices of a special form and can use the leftover hash lemma to argue uniformity.
- But... Gaussians are often an optimization.

What Distribution to Use in Practice?

Gaussians are often (always?) the “optimal” distribution for minimizing the parameters. But sampling Gaussians requires high(er) precision, so it may be too costly on low-power devices. Advice: use the distribution that minimizes the parameters, and try to improve the efficiency later.