
Information Theory and Security

Lecture Motivation
Up to this point we have seen:
–Classical Crypto
–Symmetric Crypto
–Asymmetric Crypto
These systems have focused on issues of confidentiality: ensuring that an adversary cannot infer the original plaintext message, or cannot learn any information about the original plaintext from the ciphertext. In today's lecture we will put a more formal framework around the notion of what information is, and use this to provide a definition of security from an information-theoretic point of view.

Lecture Outline
Probability Review: conditional probability and Bayes
Entropy:
–Desired properties and definition
–Chain rule and conditioning
Coding and Information Theory:
–Huffman codes
–General source coding results
Secrecy and Information Theory:
–Probabilistic definitions of a cryptosystem
–Perfect secrecy

The Basic Idea
Suppose we roll a 6-sided die.
–Let A be the event that the number of dots is odd.
–Let B be the event that the number of dots is at least 3.
A = {1, 3, 5}, B = {3, 4, 5, 6}
If I tell you the roll belongs to both A and B, then you know there are only two possibilities: {3, 5}. In this sense, A∩B tells you more than just A or just B; that is, there is less uncertainty in A∩B than in A or B alone. Information is closely linked with this idea of uncertainty: information increases when uncertainty decreases.

Probability Review, pg. 1
A random variable (event) is an experiment whose outcomes are mapped to real numbers. For our discussion we will deal with discrete-valued random variables.
Probability: We denote pX(x) = Pr(X = x). For a subset A of the sample space, Pr(X ∈ A) = Σ_{x∈A} pX(x).
Joint Probability: Sometimes we want to consider more than one event at a time, in which case we lump them together into a joint random variable, e.g. Z = (X,Y), with pX,Y(x,y) = Pr(X = x, Y = y).
Independence: We say that two events are independent if pX,Y(x,y) = pX(x)pY(y) for all x and y.

Probability Review, pg. 2
Conditional Probability: We will often ask questions about the probability of events Y given that we have observed X = x. In particular, we define the conditional probability of Y = y given X = x by pY|X(y|x) = pX,Y(x,y) / pX(x).
Independence: For independent X and Y we immediately get pY|X(y|x) = pY(y).
Bayes' Theorem: If pX(x) > 0 and pY(y) > 0, then pX|Y(x|y) = pX(x) pY|X(y|x) / pY(y).
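As a quick illustration (a minimal Python sketch; the joint distribution below is made up, not from the lecture), we can check the conditional-probability and Bayes formulas numerically:

joint = {('sun', 'walk'): 0.4, ('sun', 'bus'): 0.2,    # made-up p_{X,Y}(x, y)
         ('rain', 'walk'): 0.1, ('rain', 'bus'): 0.3}

def p_x(x):   # marginal p_X(x)
    return sum(p for (xx, _), p in joint.items() if xx == x)

def p_y(y):   # marginal p_Y(y)
    return sum(p for (_, yy), p in joint.items() if yy == y)

def p_y_given_x(y, x):                 # p(y|x) = p_{X,Y}(x, y) / p_X(x)
    return joint[(x, y)] / p_x(x)

# Bayes: p(x|y) = p_X(x) * p(y|x) / p_Y(y); compare with the direct computation.
x, y = 'rain', 'bus'
print(p_x(x) * p_y_given_x(y, x) / p_y(y))   # 0.6
print(joint[(x, y)] / p_y(y))                # 0.6 as well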

Entropy and Uncertainty
We are concerned with how much uncertainty a random event has, but how do we define or measure uncertainty? We want our measure H to have the following properties:
1. To each set of nonnegative numbers p1, …, pn with p1 + … + pn = 1, we assign an uncertainty H(p1, …, pn).
2. H should be a continuous function of the probabilities: a slight change in p should not drastically change H.
3. H(1/n, …, 1/n) ≤ H(1/(n+1), …, 1/(n+1)) for all n > 0. Uncertainty increases when there are more equally likely outcomes.
4. If 0 < q < 1, then H(p1, …, p(n-1), q·pn, (1−q)·pn) = H(p1, …, pn) + pn·H(q, 1−q): splitting one outcome in two adds the appropriately weighted uncertainty of the split.

Entropy, pg. 2
We define the entropy of a random variable X by H(X) = −Σx pX(x) log2 pX(x).
Example: Consider a fair coin toss. There are two outcomes, with probability ½ each. The entropy is −½ log2 ½ − ½ log2 ½ = 1 bit.
Example: Consider a non-fair coin toss X with probability p of getting heads and 1−p of getting tails. The entropy is H(X) = −p log2 p − (1−p) log2 (1−p). The entropy is maximum when p = ½.
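A minimal Python sketch (ours, not from the slides) of the entropy formula; it reproduces the coin-toss values and shows the maximum at p = ½:

import math

def entropy(probs):
    # Shannon entropy in bits; terms with p = 0 contribute nothing.
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))        # fair coin: 1.0 bit
print(entropy([0.9, 0.1]))        # biased coin: about 0.47 bits
# Scanning p over a grid confirms the binary entropy peaks at p = 0.5.
print(max(range(1, 100), key=lambda k: entropy([k/100, 1 - k/100])))   # 50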

Entropy, pg. 3
Entropy may be thought of as the number of yes-no questions needed to accurately determine the outcome of a random event.
Example: Flip two coins, and let X be the number of heads. The possibilities are {0, 1, 2} and the probabilities are {1/4, 1/2, 1/4}. The entropy is H(X) = −¼ log2 ¼ − ½ log2 ½ − ¼ log2 ¼ = 1.5 bits.
So how can we relate this to questions? First, ask "Is there exactly one head?" Half the time the answer is yes and you are done. Otherwise, ask "Are there two heads?" Half the time you needed one question, half the time you needed two, for an average of 1.5 questions.

Entropy, pg. 4
Suppose we have two random variables X and Y; the joint entropy H(X,Y) is given by H(X,Y) = −Σx,y pX,Y(x,y) log2 pX,Y(x,y).
Conditional Entropy: In security, we ask whether an observation reduces the uncertainty in something else. In particular, we want a notion of conditional entropy: given that we observe X, how much uncertainty is left in Y? H(Y|X) = Σx pX(x) H(Y | X = x) = −Σx,y pX,Y(x,y) log2 pY|X(y|x).

Entropy, pg. 5 Chain Rule: The Chain Rule allows us to relate joint entropy to conditional entropy via H(X,Y) = H(Y|X)+H(X). (Remaining details will be provided on the white board) Meaning: Uncertainty in (X,Y) is the uncertainty of X plus whatever uncertainty remains in Y given we observe X.
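A small numeric check of the chain rule (and of the "conditioning reduces entropy" fact on the next slide), using a made-up joint distribution of our own:

import math
from collections import defaultdict

def H(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

joint = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.4, (1, 1): 0.1}   # made-up p_{X,Y}

p_x, p_y = defaultdict(float), defaultdict(float)
for (x, y), p in joint.items():
    p_x[x] += p
    p_y[y] += p

# H(Y|X) = sum over x of p(x) * H(Y | X = x)
h_y_given_x = sum(p_x[x] * H({y: joint[(x, y)] / p_x[x]
                              for (xx, y) in joint if xx == x})
                  for x in p_x)

print(abs(H(joint) - (h_y_given_x + H(p_x))) < 1e-9)   # chain rule holds: True
print(h_y_given_x <= H(p_y))                           # conditioning reduces entropy: True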

Entropy, pg. 6
Main Theorem:
1. Entropy is non-negative: H(X) ≥ 0.
2. H(X) ≤ log2 |X|, where |X| denotes the number of elements in the sample space of X.
3. (Conditioning reduces entropy) H(Y|X) ≤ H(Y), with equality if and only if X and Y are independent.

Entropy and Source Coding Theory
There is a close relationship between entropy and representing information. Entropy captures the notion of how many "yes-no" questions are needed to accurately identify a piece of information… that is, how many bits are needed!
One of the main focus areas in the field of information theory is source coding:
–How to efficiently represent ("compress") information in as few bits as possible.
We will talk about one such technique, Huffman coding. Huffman coding is for a simple scenario, where the source is a stationary stochastic process with independence between successive source symbols.

Huffman Coding, pg. 1
Suppose we have an alphabet with four letters A, B, C, D with frequencies 0.5, 0.3, 0.1, 0.1. We could represent this with A=00, B=01, C=10, D=11. This would mean we use an average of 2 bits per letter.
On the other hand, we could use the following representation: A=1, B=01, C=001, D=000. Then the average number of bits per letter becomes (0.5)·1 + (0.3)·2 + (0.1)·3 + (0.1)·3 = 1.7. Hence, this representation, on average, is more efficient.

Huffman Coding, pg. 2
Huffman coding is an algorithm that produces such a representation for a source. The algorithm:
–List all outputs and their probabilities.
–Assign a 0 and a 1 to the two smallest probabilities, and combine them into a single output with probability equal to their sum.
–Re-sort the list according to probabilities and repeat the process.
–The binary strings are then obtained by reading backwards through the procedure.
For the example above this yields the symbol representations A: 1, B: 01, C: 001, D: 000. A small Python sketch of the procedure is given below.
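The sketch (our own code, not the lecture's) merges the two least likely entries at each step; it reproduces the codeword lengths 1, 2, 3, 3 above, though the 0/1 branch labels may come out mirrored relative to A=1, B=01, C=001, D=000.

import heapq
import itertools

def huffman_code(freqs):
    # freqs maps symbol -> probability (or raw count).
    tie = itertools.count()                     # tie-breaker so the heap never compares dicts
    heap = [(p, next(tie), {sym: ''}) for sym, p in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, code1 = heapq.heappop(heap)      # smallest probability
        p2, _, code2 = heapq.heappop(heap)      # second smallest
        merged = {s: '0' + c for s, c in code1.items()}
        merged.update({s: '1' + c for s, c in code2.items()})
        heapq.heappush(heap, (p1 + p2, next(tie), merged))
    return heap[0][2]

probs = {'A': 0.5, 'B': 0.3, 'C': 0.1, 'D': 0.1}
code = huffman_code(probs)
print(code)                                                   # codeword lengths 1, 2, 3, 3
print(sum(probs[s] * len(c) for s, c in code.items()))        # about 1.7 bits per symbol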

Huffman Coding, pg. 3
In the previous example we used probabilities; we may directly use event counts instead.
Example: Consider 8 symbols S1, …, S8, and suppose we have counted how many times each has occurred in an output sample. We may derive the Huffman tree (exercise will be done on the whiteboard). The corresponding length vector is (2, 2, 3, 3, 3, 4, 5, 5), and the average codelength is the count-weighted average of these lengths. If we had used a full balanced-tree representation (i.e. the straightforward fixed-length representation) we would have had an average codelength of 3.

Huffman Coding, pg. 4
We would like to quantify the average number of bits needed in terms of entropy.
Theorem: Let L be the average number of bits per output for Huffman encoding of a random variable X. Then H(X) ≤ L = Σx pX(x)·lx < H(X) + 1. Here, lx is the length of the codeword assigned to symbol x.
Example: Let's look back at the 4-symbol example. H(X) = −0.5 log2 0.5 − 0.3 log2 0.3 − 0.1 log2 0.1 − 0.1 log2 0.1 ≈ 1.685 bits, and our average codelength was 1.7 bits.
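A quick check of the bound for the 4-symbol example (Python; it assumes the codeword lengths 1, 2, 3, 3 from the earlier slide):

import math

p = {'A': 0.5, 'B': 0.3, 'C': 0.1, 'D': 0.1}
lengths = {'A': 1, 'B': 2, 'C': 3, 'D': 3}

H = -sum(px * math.log2(px) for px in p.values())   # about 1.685 bits
L = sum(p[s] * lengths[s] for s in p)               # 1.7 bits
print(H, L, H <= L < H + 1)                         # the bound H <= L < H + 1 holds: True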

Huffman Coding, pg. 5
An interesting and useful question is: what if I use the wrong distribution when calculating the code? How badly will my code perform?
Suppose the true distribution is px, but you used another distribution to find the lengths lx. Define the auxiliary distribution qx = 2^(−lx) / Σy 2^(−ly).
Theorem: If we code the source X with the lengths lx instead of the correct Huffman code, then the resulting average codelength satisfies H(X) + D(p||q) ≤ L < H(X) + D(p||q) + 1, where the Kullback-Leibler divergence D(p||q) is D(p||q) = Σx px log2 (px / qx).
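As a sketch of what the mismatch costs in practice (the wrong distribution q below is our own choice, not from the slides): designing for a uniform q over the four symbols gives every symbol a 2-bit codeword, and the excess over the entropy is exactly D(p||q).

import math

def D(p, q):
    # Kullback-Leibler divergence D(p||q) in bits.
    return sum(p[s] * math.log2(p[s] / q[s]) for s in p if p[s] > 0)

p = {'A': 0.5, 'B': 0.3, 'C': 0.1, 'D': 0.1}        # true distribution (from the slides)
q = {s: 0.25 for s in p}                            # assumed (wrong) uniform distribution

H = -sum(px * math.log2(px) for px in p.values())   # about 1.685 bits
L_mismatched = 2.0                                  # 2-bit codewords designed for q
print(H + D(p, q), L_mismatched)                    # about 2.0 each: lower bound met here (up to rounding)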

Another way to look at cryptography, pg. 1
So far in class we have looked at the security problem from an algorithm point of view (DES, RC4, RSA, …). But why build these algorithms? How can we say we are doing a good job? Enter information theory and its relationship to ciphers…
Suppose we have a cipher with possible plaintexts P, ciphertexts C, and keys K.
–Suppose that a plaintext P is chosen according to some probability law.
–Suppose the key K is chosen independently of P.
–The resulting ciphertexts then have various probabilities depending on the probabilities for P and K.

Another way to look at cryptography, pg. 2
Now, enter Eve… She sees the ciphertext C, and several security questions arise:
–Does she learn anything about P from seeing C?
–Does she learn anything about the key K from seeing C?
Thus, our questions are associated with H(P|C) and H(K|C). Ideally, we would like the uncertainty not to decrease, i.e. H(P|C) = H(P) and H(K|C) = H(K).

Another way to look at cryptography, pg. 3
Example: Suppose we have three plaintexts {a, b, c} with probabilities {0.5, 0.3, 0.2}. Suppose we have two keys k1 and k2, each with probability 0.5, and three ciphertexts U, V, W, with Ek1(a)=U, Ek1(b)=V, Ek1(c)=W and Ek2(a)=U, Ek2(b)=W, Ek2(c)=V.
We may calculate the probabilities of the ciphertexts: pC(U) = pK(k1)pP(a) + pK(k2)pP(a) = 0.25 + 0.25 = 0.5. Similarly we get pC(V) = 0.15 + 0.1 = 0.25 and pC(W) = 0.1 + 0.15 = 0.25.
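The same computation in a few lines of Python (the cipher table is the one on the slide):

pP = {'a': 0.5, 'b': 0.3, 'c': 0.2}
pK = {'k1': 0.5, 'k2': 0.5}
E = {('k1', 'a'): 'U', ('k1', 'b'): 'V', ('k1', 'c'): 'W',
     ('k2', 'a'): 'U', ('k2', 'b'): 'W', ('k2', 'c'): 'V'}

pC = {}
for (k, x), c in E.items():
    pC[c] = pC.get(c, 0.0) + pK[k] * pP[x]     # sum over (key, plaintext) pairs mapping to c
print(pC)                                      # U: 0.5, V: 0.25, W: 0.25 (up to float rounding)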

Another way to look at cryptography, pg. 4
Suppose Eve observes the ciphertext U; then she knows the plaintext was "a". We may calculate the conditional probabilities: pP(a|U) = 1, and pP(b|V) = pK(k1)pP(b) / pC(V) = 0.15/0.25 = 0.6. Similarly we get pP(c|V) = 0.4 and pP(a|V) = 0. Also pP(a|W) = 0, pP(b|W) = 0.6, pP(c|W) = 0.4.
What does this tell us? Remember, the original plaintext probabilities were 0.5, 0.3, and 0.2. So if we see a ciphertext, we may revise the probabilities… something is "learned".
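For instance, the value pP(b|V) = 0.6 quoted above is just Bayes-style revision applied to the numbers from the previous slide:

pK_k1, pP_b, pC_V = 0.5, 0.3, 0.25       # key prob., prior plaintext prob., ciphertext prob.
print(pK_k1 * pP_b / pC_V)               # 0.6 = pP(b | V): Eve's revised belief that b was sent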

Another way to look at cryptography, pg. 5
We use entropy to quantify the amount of information that is learned about the plaintext when the ciphertext is observed. The conditional entropy of P given C is
H(P|C) = Σc pC(c) H(P|C=c) = 0.5·0 + 0.25·(−0.6 log2 0.6 − 0.4 log2 0.4) + 0.25·(−0.6 log2 0.6 − 0.4 log2 0.4) ≈ 0.485 bits,
whereas H(P) = −0.5 log2 0.5 − 0.3 log2 0.3 − 0.2 log2 0.2 ≈ 1.485 bits. Thus an entire bit of information is revealed just by observing the ciphertext!
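A short Python check of this computation (the posteriors 0.6/0.4 and the ciphertext probabilities are the ones derived above):

import math

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

pC = {'U': 0.5, 'V': 0.25, 'W': 0.25}
posteriors = {'U': [1.0], 'V': [0.6, 0.4], 'W': [0.6, 0.4]}   # p_P(. | c) for each ciphertext

H_P = H([0.5, 0.3, 0.2])                                  # about 1.485 bits
H_P_given_C = sum(pC[c] * H(posteriors[c]) for c in pC)   # about 0.485 bits
print(H_P, H_P_given_C, H_P - H_P_given_C)                # the difference is about 1.0 bit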

Perfect Secrecy and Entropy
The previous example gives us the motivation for the information-theoretic definition of security (or "secrecy").
Definition: A cryptosystem has perfect secrecy if H(P|C) = H(P).
Theorem: The one-time pad has perfect secrecy.
Proof: See the book for the details. The basic idea is to show that each ciphertext results with equal likelihood. We then use manipulations like H(P,C) = H(P,K,C) = H(P,K) = H(P) + H(K) and H(P,C) = H(P|C) + H(C). Equating these two and using H(K) = H(C) gives the result. Why?
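A tiny numerical illustration of the theorem (ours, not the book's proof): a one-bit one-time pad with an arbitrary plaintext distribution. Observing the ciphertext leaves the plaintext uncertainty unchanged.

import math
from itertools import product

def H(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

pP = {0: 0.7, 1: 0.3}      # any plaintext distribution; these numbers are made up
pK = {0: 0.5, 1: 0.5}      # one-time pad key: uniform and independent of the plaintext

pPC, pC = {}, {}           # joint distribution of (plaintext, ciphertext) and its C-marginal
for p, k in product(pP, pK):
    c = p ^ k              # one-time pad encryption: C = P XOR K
    pPC[(p, c)] = pPC.get((p, c), 0.0) + pP[p] * pK[k]
    pC[c] = pC.get(c, 0.0) + pP[p] * pK[k]

H_P_given_C = H(pPC) - H(pC)      # chain rule: H(P|C) = H(P,C) - H(C)
print(H_P_given_C, H(pP))         # equal (about 0.881 bits): perfect secrecy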