1 16.548 Coding, Information Theory (and Advanced Modulation) Prof. Jay Weitzen Ball 411

2 Class Coverage
–Fundamentals of Information Theory (4 weeks)
–Block Coding (3 weeks)
–Advanced coding and modulation as a way of achieving the Shannon capacity bound: convolutional coding, trellis-coded modulation, turbo coding, and space-time coding (7 weeks)

3 Course Web Site
–Class notes, assignments, and other materials on the web site
–Please check at least twice per week
–Lectures will be streamed; see the course web site

4 Prerequisites (What you need to know to thrive in this class)
–A probability class
–Some programming (C, VB, Matlab)
–Digital communication theory

5 Grading Policy: 4 Mini-Projects (25% each)
–Lempel-Ziv compressor
–Cyclic Redundancy Check
–Convolutional coder/decoder with soft decision
–Trellis modulator/demodulator

6 Course Information and Textbooks
–Coding and Information Theory by Wells, plus his notes from the University of Idaho
–Digital Communications by Sklar, or by Proakis
–Shannon's original paper (1948)
–Other material on the web site

7 Claude Shannon Founds the Science of Information Theory in 1948. In his 1948 paper, "A Mathematical Theory of Communication," Claude E. Shannon formulated the theory of data compression. Shannon established that there is a fundamental limit to lossless data compression. This limit, called the entropy rate, is denoted by H. The exact value of H depends on the information source, more specifically, on the statistical nature of the source. It is possible to compress the source, in a lossless manner, at a compression rate arbitrarily close to H; it is mathematically impossible to do better than H.
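The slide does not reproduce the formula; for a memoryless (i.i.d.) source with symbol probabilities p_i, the entropy rate reduces to the ordinary Shannon entropy. A standard statement, written here for reference, is:

```latex
H \;=\; -\sum_{i} p_i \log_2 p_i \quad \text{bits per symbol},
\qquad
H_{\text{rate}} \;=\; \lim_{n \to \infty} \frac{1}{n}\, H(X_1, X_2, \ldots, X_n)
\quad \text{for a general stationary source.}
```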

8

9 This is Important

10 Source Modeling

11 Zero Order Models. It has been said that if you get enough monkeys and sit them down at enough typewriters, eventually they will reproduce the complete works of Shakespeare.
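As a rough illustration of the zero-order model, here is a minimal sketch (the 27-symbol alphabet of lowercase letters plus space is an assumption, not from the slides): every symbol is drawn independently and uniformly, which is why "monkey text" looks nothing like English.

```python
import random
import string

# Zero-order source: symbols are i.i.d. and uniform over the alphabet.
ALPHABET = string.ascii_lowercase + " "  # 27 symbols (assumed for illustration)

def zero_order_text(n_symbols: int) -> str:
    """Generate n_symbols of 'monkey typing': each symbol chosen uniformly at random."""
    return "".join(random.choice(ALPHABET) for _ in range(n_symbols))

print(zero_order_text(60))  # gibberish -- essentially never English-like
```

A first-order model would instead draw each letter independently according to its frequency in English text; higher-order models condition each letter on the one(s) before it, which is what makes the output start to look like English.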

12 First Order Model

13 Higher Order Models

14

15

16

17

18

19

20 Zeroth Order Model

21

22 Definition of Entropy. Shannon used the ideas of randomness and entropy from the study of thermodynamics to estimate the randomness (i.e., the information content, or entropy) of a process.
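A minimal sketch of the defining formula, H(X) = −Σ p_i log₂ p_i, in code; the example distributions are illustrative, not from the slides.

```python
from math import log2

def entropy(pmf):
    """Shannon entropy H(X) = -sum_i p_i * log2(p_i), in bits.
    Zero-probability terms contribute 0 (the limit of p*log p as p -> 0)."""
    return -sum(p * log2(p) for p in pmf if p > 0)

print(entropy([0.5, 0.5]))   # 1.0 bit: a fair coin is maximally unpredictable
print(entropy([0.9, 0.1]))   # ~0.47 bits: a biased coin carries less information per flip
print(entropy([0.25] * 4))   # 2.0 bits: uniform over 4 symbols
```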

23 Quick Review: Working with Logarithms
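The body of this slide is an image; the identities most relevant to entropy calculations (and presumably what the review covers, since most calculators and languages provide only natural or base-10 logarithms) are:

```latex
\log_2(xy) = \log_2 x + \log_2 y, \qquad
\log_2\frac{x}{y} = \log_2 x - \log_2 y, \qquad
\log_2 x^{n} = n\log_2 x,
\qquad
\log_2 x = \frac{\ln x}{\ln 2} = \frac{\log_{10} x}{\log_{10} 2}.
```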

24

25

26

27

28 Entropy of English Alphabet
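A hedged sketch of the kind of calculation this slide presumably walks through: compare the zero-order value log₂ 26 with the entropy of empirical single-letter frequencies. The file name is hypothetical; any long English sample will do.

```python
from collections import Counter
from math import log2

def letter_entropy(text: str) -> float:
    """Entropy (bits per letter) of the empirical single-letter frequencies of text."""
    letters = [c for c in text.lower() if c.isalpha()]
    counts = Counter(letters)
    n = len(letters)
    return -sum((k / n) * log2(k / n) for k in counts.values())

sample = open("english_sample.txt").read()   # hypothetical file containing long English text

print(log2(26))                # ~4.70 bits/letter if all 26 letters were equally likely
print(letter_entropy(sample))  # typically ~4.1-4.2 bits/letter from single-letter frequencies
# Once longer-range structure (digrams, words, grammar) is taken into account,
# Shannon's experiments put the entropy of English closer to about 1 bit per letter.
```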

29

30

31

32

33

34 Kind of Intuitive, but hard to prove

35

36

37 Bounds on Entropy
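The slide content is an image; the standard bounds for a source with M possible symbols are:

```latex
0 \;\le\; H(X) \;\le\; \log_2 M,
```

with H(X) = 0 exactly when one symbol has probability 1 (no uncertainty), and H(X) = log₂ M exactly when all M symbols are equally likely (maximum uncertainty).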

38 Quick Review: Joint Density of Random Variables (Math 495 Micro-Teaching by David Sherman, Bedrock, USA)

39 In this presentation, we’ll discuss the joint density of two random variables. This is a mathematical tool for representing the interdependence of two events. First, we need some random variables. Lots of those in Bedrock.

40 Let X be the number of days Fred Flintstone is late to work in a given week. Then X is a random variable; here is its density function: Amazingly, another resident of Bedrock is late with exactly the same distribution. It’s... Fred’s boss, Mr. Slate!

41 Remember this means that P(X=3) = 0.2. Let Y be the number of days when Slate is late. Suppose we want to record BOTH X and Y for a given week. How likely are different pairs? We’re talking about the joint density of X and Y, and we record this information as a function of two variables, like this: This means that P(X=3 and Y=2) = 0.05. We label it f(3,2).

42 The first observation to make is that this joint probability function contains all the information from the density functions for X and Y (which are the same here). For example, to recover P(X=3), we can add f(3,1)+f(3,2)+f(3,3) = 0.2. The individual probability functions recovered in this way are called marginal. Another observation here is that Slate is never late three days in a week when Fred is only late once.
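A minimal sketch of recovering marginals by summing a joint pmf. The table below is not the slides' actual Fred/Slate table (that is an image); it is a made-up table chosen to be consistent with the values the slides do state (f(3,2) = 0.05, P(X=3) = 0.2, P(X=2) = 0.3, and f(1,3) = 0).

```python
# Hypothetical joint pmf f(x, y) = P(X = x and Y = y); entries are assumed.
joint = {
    (1, 1): 0.40, (1, 2): 0.10, (1, 3): 0.00,
    (2, 1): 0.05, (2, 2): 0.15, (2, 3): 0.10,
    (3, 1): 0.05, (3, 2): 0.05, (3, 3): 0.10,
}

def marginal_x(joint, x):
    """Marginal P(X = x): sum f(x, y) over all y."""
    return sum(p for (xi, _), p in joint.items() if xi == x)

def marginal_y(joint, y):
    """Marginal P(Y = y): sum f(x, y) over all x."""
    return sum(p for (_, yi), p in joint.items() if yi == y)

print(marginal_x(joint, 3))   # f(3,1) + f(3,2) + f(3,3) = 0.05 + 0.05 + 0.10 = 0.2
print(marginal_y(joint, 3))   # 0.00 + 0.10 + 0.10 = 0.2 (Y has the same marginal as X)
```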

43 Since he rides to work with Fred (at least until the directing career works out), Barney Rubble is late to work with the same probability function too. What do you think the joint probability function for Fred and Barney looks like? It’s diagonal! This should make sense, since in any week Fred and Barney are late the same number of days. This is, in some sense, a maximum amount of interaction: if you know one, you know the other.

44 A little-known fact: there is actually another famous person who is late to work like this. SPOCK! Before you try to guess what the joint density function for Fred and Spock is, remember that Spock lives millions of miles (and years) from Fred, so we wouldn’t expect these variables to influence each other at all. (Pretty embarrassing for a Vulcan.) In fact, they’re independent….

45 Since we know the variables X and Z (for Spock) are independent, we can calculate each of the joint probabilities by multiplying. For example, f(2,3) = P(X=2 and Z=3) = P(X=2)P(Z=3) = (0.3)(0.2) = 0.06. This represents a minimal amount of interaction.
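Under independence the joint pmf is just the product of the marginals. A minimal sketch, reusing P(X=2) = 0.3 and P(Z=3) = 0.2 from the slide; the remaining marginal values are assumed for illustration.

```python
# Marginals for Fred (X) and Spock (Z). Only P(X=2)=0.3 and P(Z=3)=0.2 come from
# the slide; the other entries are illustrative values that sum to 1.
p_x = {1: 0.5, 2: 0.3, 3: 0.2}
p_z = {1: 0.5, 2: 0.3, 3: 0.2}

# Independence: f(x, z) = P(X = x) * P(Z = z) for every pair.
joint = {(x, z): p_x[x] * p_z[z] for x in p_x for z in p_z}

print(joint[(2, 3)])   # 0.3 * 0.2 = 0.06, matching the slide's f(2,3)
```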

46 Dependence of two events means that knowledge of one gives information about the other. Now we’ve seen that the joint density of two variables is able to reveal that two variables are independent (Fred and Spock), completely dependent (Fred and Barney), or somewhere in the middle (Fred and Slate). Later in the course we will learn ways to quantify dependence. Stay tuned…. YABBA DABBA DOO!

47

48

49

50

51 Marginal Density Functions

52 Conditional Probability: the probability that one event occurs, given that another event has occurred.

53 Conditional Probability (cont’d): P(B|A) = P(A ∩ B) / P(A), provided P(A) > 0.

54 Definition of conditional probability: If P(B) is not equal to zero, then the conditional probability of A relative to B, namely the probability of A given B, is P(A|B) = P(A ∩ B) / P(B).

55 Conditional Probability example (events A and B): P(A) = 0.50, P(B) = 0.70, P(A') = 0.50, P(B') = 0.30.

56 Law of Total Probability; special case of the rule of total probability.
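The slide's formulas are images; the standard statement, for events B₁, …, Bₙ that partition the sample space, is:

```latex
P(A) \;=\; \sum_{i=1}^{n} P(A \mid B_i)\,P(B_i),
\qquad \text{special case (two events): } P(A) = P(A \mid B)P(B) + P(A \mid B')P(B').
```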

57 Bayes Theorem

58

59 Generalized Bayes’ theorem
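Again the formula itself is an image; the standard generalized form, for a partition B₁, …, Bₙ of the sample space, combines Bayes' rule with the law of total probability:

```latex
P(B_j \mid A) \;=\; \frac{P(A \mid B_j)\,P(B_j)}{\sum_{i=1}^{n} P(A \mid B_i)\,P(B_i)},
```

which for n = 2 reduces to the ordinary Bayes' theorem of the previous slide.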

60 Urn Problems: Applications of Bayes' Theorem. Begin to think about the concepts of maximum likelihood (ML) and maximum a posteriori (MAP) detection, which we will use throughout coding theory.
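A minimal worked example in the spirit of the slide, with made-up urn contents (the slides' actual numbers are in images): an urn is chosen at random, a ball is drawn, and Bayes' rule gives the posterior probability of each urn; picking the most probable urn is exactly the MAP decision.

```python
# Hypothetical setup (not the slides' numbers): urn 1 holds 3 red / 7 black balls,
# urn 2 holds 6 red / 4 black. An urn is chosen at random, then one ball is drawn.
prior = {"urn1": 0.5, "urn2": 0.5}
p_red_given = {"urn1": 3 / 10, "urn2": 6 / 10}

# Observation: the drawn ball is red. Bayes' rule:
#   P(urn | red) = P(red | urn) * P(urn) / P(red)
p_red = sum(p_red_given[u] * prior[u] for u in prior)            # law of total probability
posterior = {u: p_red_given[u] * prior[u] / p_red for u in prior}

print(posterior)                                       # {'urn1': 0.333..., 'urn2': 0.666...}
map_decision = max(posterior, key=posterior.get)       # MAP: most probable urn given the data
ml_decision = max(p_red_given, key=p_red_given.get)    # ML: maximize the likelihood, ignore the prior
print(map_decision, ml_decision)                       # both 'urn2' here, since the prior is uniform
```

With a uniform prior, ML and MAP coincide; they differ once one urn is a priori more likely, which is the distinction the coding-theory detectors later in the course will exploit.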

61

62

63

64 End of Notes 1