DCSP-5: Noise Jianfeng Feng Department of Computer Science Warwick Univ., UK


Assignment 2015: Q1: you should be able to do it after last week's seminar. Q2: needs a bit of reading (my lecture notes). Q3: standard. Q4: standard.

Assignment 2015: Q5: standard. Q6: standard. Q7: after today's lecture. Q8: load jazz, plot, soundsc; load Tunejazz, plot; load NoiseJazz, plot.

Recap (this is all you have to remember): the Fourier transform for a periodic signal uses the basis {sin(nΩt), cos(nΩt)}; for a general function, X(F) = ∫ x(t) exp(-j 2π F t) dt.

Can you do the FT of cos(2πt)? This is where the Dirac delta function comes in.

For example, taking F = 0 in the equation above, the integral does not converge to an ordinary number. It makes no sense!

Dirac delta function: a photo with the highest IQ (15 Nobel Laureates). Dirac, Einstein, Schrödinger, Pauli, Heisenberg, Langevin, De Boer, Born, Lorentz, M. Curie, Planck, Compton, Ehrenfest, Bragg, Debye.

Dirac delta function. The (digital) delta function δ[n - n0], for a given n0, is 1 at n = n0 and 0 elsewhere; its continuous counterpart is the Dirac delta function δ(x) (you can find a nice movie on Wikipedia). Here n0 = 0, giving δ(t).

Dirac delta function δ(x): the FT of cos(2πt) is a pair of delta functions on the frequency axis, (1/2)[δ(F - 1) + δ(F + 1)].
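A small numerical sketch (not from the slides; sampling rate and duration are arbitrary choices): sampling cos(2πt) and taking its discrete Fourier transform shows the energy concentrated in two sharp spikes, the finite-length approximation of the pair of delta functions.
fs = 100;                            % sampling frequency (Hz), chosen arbitrarily
t  = 0:1/fs:10-1/fs;                 % 10 seconds of samples
x  = cos(2*pi*t);                    % 1 Hz cosine
X  = fft(x)/length(x);               % normalised discrete Fourier transform
F  = (0:length(x)-1)*fs/length(x);   % frequency axis
plot(F, abs(X));                     % two sharp peaks: near 1 Hz and fs-1 Hz (the alias of -1 Hz)
xlabel('Frequency (Hz)'); ylabel('|X(F)|');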

A final note (for the exam or the future): the Fourier transform for a periodic signal uses the basis {sin(nΩt), cos(nΩt)}; for the general function case it is still true, but needs a bit of further work.

Summary: we will come back to it soon (numerically). This trick (the FT) has changed our lives and will continue to do so.

This Week’s Summary Noise Information Theory

Noise in communication systems: probability and random signals
I = imread('peppers.png');
imshow(I);
noise = 1*randn(size(I));            % Gaussian noise array, the same size as the image
Noisy = imadd(I, im2uint8(noise));   % add the noise to the image
imshow(Noisy);

Noise is a random signal (in general). By this we mean that we cannot predict its value; we can only make statements about the probability of it taking a particular value.

The probability density function (pdf) p(x) of a random variable x is defined so that p(x0) δx is the probability that x takes a value between x0 and x0 + δx. We write this as p(x0) δx = P(x0 < x < x0 + δx).

The probability that x takes a value lying between x1 and x2 is P(x1 < x < x2) = ∫ p(x) dx over [x1, x2]. The total probability is unity, thus ∫ p(x) dx over (-∞, ∞) = 1.

IQ distribution (bell-curve figure): Mentally Inadequate 2.3%, Low Intelligence 13.6%, Average 34.1%, Above Average 34.1%, High Intelligence 13.6%, Superior Intelligence 2.1%, Exceptionally Gifted 0.13%.

A density satisfying this equation is termed normalised. The cumulative distribution function (CDF) F(x0) is the probability that x is less than x0: F(x0) = P(x < x0) = ∫ p(x) dx over (-∞, x0]. For example, "my IQ is above 85% of the population" means F(my IQ) = 85%.
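A minimal sketch of the F(my IQ) = 85% idea, assuming IQ is Gaussian with mean 100 and standard deviation 15 (the conventional scaling, not stated on the slide); the score 115.5 is a hypothetical value.
N  = 1e6;
iq = 100 + 15*randn(1, N);   % simulated IQ scores under the assumed distribution
x0 = 115.5;                  % hypothetical score
F_x0 = mean(iq < x0)         % empirical CDF at x0, roughly 0.85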

From the rules of integration, P(x1 < x < x2) = F(x2) - F(x1). pdfs come in two classes: continuous and discrete.

Continuous distribution. An example of a continuous distribution is the Normal, or Gaussian, distribution: p(x) = (1/(σ√(2π))) exp(-(x - μ)²/(2σ²)), where μ and σ are the mean and standard deviation of p(x). The constant term ensures that the distribution is normalised.
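A quick MATLAB sketch (my own example, with the assumed values mu = 0 and sigma = 2): evaluate the Gaussian pdf on a grid and check numerically that it integrates to 1.
mu = 0; sigma = 2;
x = -10:0.01:10;
p = exp(-(x - mu).^2 / (2*sigma^2)) / (sigma*sqrt(2*pi));   % Gaussian pdf
plot(x, p);
trapz(x, p)   % numerical integral over [-10, 10], approximately 1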

Continuous distribution. This expression is important, as many naturally occurring noise sources can be described by it, e.g. white noise or coloured noise.

Generating samples from p(x) in MATLAB:
x = randn(1,1000);
plot(x);
Each of the samples x(1), x(2), ..., x(1000) is independent; a histogram of them approximates the Gaussian pdf.

Discrete distribution. If a random variable can only take discrete values, its pdf takes the form of a set of lines. An example of a discrete distribution is the Poisson distribution: P(X = k) = λ^k exp(-λ) / k!, for k = 0, 1, 2, ...
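A small sketch (lambda = 3 is an arbitrary choice; the probabilities are computed directly rather than with any toolbox function): the Poisson probabilities plotted as lines.
lambda = 3;
k = 0:15;
P = lambda.^k .* exp(-lambda) ./ factorial(k);   % P(X = k)
stem(k, P);      % a discrete pdf is a set of lines
xlabel('k'); ylabel('P(X = k)');
sum(P)           % close to 1; the tail beyond k = 15 is negligible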

Mean and variance. We cannot predict the value of a random variable, but we can introduce measures that summarise what we expect to happen on average. The two most important measures are the mean (or expectation) and the standard deviation. The mean of a random variable x is defined to be EX = ∫ x p(x) dx (or Σ xi P(xi) in the discrete case).

In the examples above we have assumed the mean of the Gaussian distribution to be 0; the mean of the Poisson distribution is found to be λ.

Mean and variance. The mean of a distribution is, in the common-sense meaning, the average value. It can be estimated from data: assume that {x1, x2, x3, ..., xN} are sampled from a distribution. Law of Large Numbers: EX ≈ (x1 + x2 + ... + xN)/N.

The more data we have, the more accurately we can estimate the mean. [Figure: the running average (x1 + x2 + ... + xN)/N plotted against N for randn(1,N).]
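A sketch reproducing the described plot (N = 10000 is an arbitrary choice): the running average of randn samples approaches the true mean 0 as N grows.
N = 10000;
x = randn(1, N);
runningMean = cumsum(x) ./ (1:N);   % (x1 + ... + xN)/N for every N
plot(1:N, runningMean);
xlabel('N'); ylabel('(x_1 + ... + x_N)/N');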

Mean and variance. The variance σ² is defined to be σ² = E(X - EX)² = ∫ (x - EX)² p(x) dx. The square root of the variance is called the standard deviation. Again, it can be estimated from data.
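A minimal sketch: estimating the mean, variance and standard deviation from samples of randn, whose true mean is 0 and true variance is 1.
x = randn(1, 100000);
m = mean(x)   % sample mean, close to 0
v = var(x)    % sample variance, close to 1
s = std(x)    % sample standard deviation, sqrt(v), close to 1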

The standard deviation is a measure of the spread of the probability distribution around the mean. A small standard deviation means the distribution is concentrated close to the mean; a large value indicates a wide range of possible outcomes. The Gaussian distribution contains the standard deviation within its definition (μ, σ).

Mean and variance. Noise in communication systems can be modelled as a zero-mean Gaussian random variable. This means that its amplitude at a particular time has a pdf given by the equation above. The statement that the noise is zero-mean says that, on average, the noise signal takes the value zero.

Mean and variance

Einstein's IQ [figure: the IQ distribution again]. Einstein's IQ = 160+. What about yours?

SNR. The signal-to-noise ratio is an important quantity in determining the performance of a communication channel. The noise power referred to in the definition is the mean noise power; it can therefore be rewritten as SNR = 10 log10(S / σ²) dB.
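A sketch of the formula (the 5 Hz sine and the noise level sigma = 0.1 are my own illustrative choices, not from the slides).
t = 0:0.001:1;
signal = sin(2*pi*5*t);        % example signal
sigma  = 0.1;                  % assumed noise standard deviation
S = mean(signal.^2);           % mean signal power
SNRdB = 10*log10(S / sigma^2)  % about 17 dB for these choices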

Correlation or covariance: Cov(X,Y) = E[(X - EX)(Y - EY)]. The correlation coefficient is the normalised covariance: Coef(X,Y) = E[(X - EX)(Y - EY)] / [σ(X) σ(Y)]. It distinguishes positive correlation, negative correlation, and no correlation (as for independent variables).
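A small sketch: estimating the covariance and correlation coefficient from samples. Here Y is built from X plus independent noise, so the two are positively correlated; the mixing weights 0.8 and 0.6 are assumptions for illustration.
N = 10000;
X = randn(1, N);
Y = 0.8*X + 0.6*randn(1, N);   % positively correlated with X by construction
C = cov(X, Y);                 % 2x2 covariance matrix; C(1,2) estimates Cov(X,Y)
R = corrcoef(X, Y);            % R(1,2) estimates Coef(X,Y), close to 0.8 here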

Stochastic process = signal. A stochastic process is a collection of random variables x[n]; for each fixed n, x[n] is a random variable. A signal is a typical stochastic process. To understand how x[n] evolves with n, we look at the auto-correlation function (ACF): the ACF at lag k is the correlation between samples k steps apart.

Stochastic process
clear all; close all;
n = 200;
for i = 1:10
    x(i) = randn(1,1);
    y(i) = x(i);
end
for i = 1:n-10
    y(i+10) = randn(1,1);
    x(i+10) = .8*x(i) + y(i+10);   % x depends on its own past: it has memory
end
plot(xcorr(x)/max(xcorr(x))); hold on
plot(xcorr(y)/max(xcorr(y)), 'r')
Two signals are generated: y (red) is simply randn(1,200); x (blue) is generated by x[i+10] = .8*x[i] + y[i+10]. For y, we have ρ(0) = 1 and ρ(n) = 0 for n ≠ 0: it has no memory. For x, we have ρ(0) = 1 and ρ(n) ≠ 0 for some n ≠ 0: it has memory.

White noise w[n]. White noise is a random process we cannot predict at all (it is independent of its history); in other words, it is the most 'violent' noise. White noise draws its name from white light, which will become clear in the next few lectures.

White noise w[n]. The most 'noisy' noise is white noise, since its autocorrelation is zero, i.e. corr(w[n], w[m]) = 0 when n ≠ m. Otherwise, we call it coloured noise, since we can predict something about w[n] given w[m], m < n.

Why do we love Gaussian? Sweety Gaussian

Sweety Gaussian: a linear combination of two Gaussian random variables is Gaussian again. For example, given two independent Gaussian variables X and Y with mean zero, aX + bY is a Gaussian variable with mean zero and variance a²σ²(X) + b²σ²(Y). This is very rare (essentially unique among continuous distributions) but extremely useful: the panda in the family of all distributions. Yes, I am junior Gaussian. Herr Gauss + Frau Gauss = junger Gauss (Mr Gauss + Mrs Gauss = young Gauss).
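A quick check of the stated property (a = 2 and b = 3 are arbitrary choices): the variance of aX + bY matches a²σ²(X) + b²σ²(Y), and the histogram stays bell-shaped.
N = 100000;
a = 2; b = 3;
X = randn(1, N);    % variance 1
Y = randn(1, N);    % variance 1, independent of X
Z = a*X + b*Y;
var(Z)              % close to a^2 + b^2 = 13
histogram(Z)        % still bell-shaped: Z is Gaussian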

DCSP-6: Information Theory Jianfeng Feng Department of Computer Science Warwick Univ., UK

Data Transmission

How to deal with noise? How to transmit signals? Data Transmission

Data Transmission so far: Fourier Transform; ASK (AM), FSK (FM), and PSK (skipped, but common knowledge); Noise; Signal Transmission.

Today. Data transmission: Shannon information and coding: information theory, and coding of information for efficiency and error protection.

Information and coding theory. Information theory is concerned with: the description of information sources; the representation of the information from a source (coding); and the transmission of this information over a channel.

Information and coding theory is the best example of how a deep mathematical theory can be successfully applied to solving engineering problems.

Information theory is a discipline in applied mathematics involving the quantification of data with the goal of enabling as much data as possible to be reliably stored on a medium and/or communicated over a channel. Information and coding theory

The measure of data, known as information entropy, is usually expressed by the average number of bits needed for storage or communication. Information and coding theory

Information and coding theory. The field is at the crossroads of mathematics, statistics, computer science, physics, neurobiology, and electrical engineering.

Its impact has been crucial to the success of the Voyager missions to deep space, the invention of the CD, the feasibility of mobile phones, the development of the Internet, the study of linguistics and of human perception, the understanding of black holes, and numerous other fields.

The field was founded in 1948 by Claude Shannon in his seminal work, A Mathematical Theory of Communication.

The 'bible' paper: cited more than 60,000 times.

The most fundamental results of this theory are:
1. Shannon's source coding theorem: the number of bits needed to represent the result of an uncertain event is given by its entropy.
2. Shannon's noisy-channel coding theorem: reliable communication is possible over noisy channels if the rate of communication is below a certain threshold called the channel capacity. The channel capacity can be approached by using appropriate encoding and decoding systems.

Consider predicting the activity of the Prime Minister tomorrow. This prediction is an information source X. The information source X = {O, R} has two outcomes: he will be in his office (O), or he will run 10 miles naked in London (R).

Clearly, the outcome 'in office' contains little information; it is a highly probable outcome. The outcome 'naked run', however, contains considerable information; it is a highly improbable event.

An information source is a probability distribution, i.e. a set of probabilities assigned to a set of outcomes (events). This reflects the fact that the information contained in an outcome is determined not only by the outcome, but by how uncertain it is. An almost certain outcome contains little information. A measure of the information contained in an outcome was introduced by Hartley in 1928.

Information. Hartley defined the information contained in an outcome xi of X = {x1, x2, ..., xn} as I(xi) = -log2 p(xi).
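A tiny sketch using the Prime Minister example (the probabilities 0.999 and 0.001 are assumed for illustration): the improbable outcome carries far more information.
p_office = 0.999;            % assumed probability of 'in office'
p_run    = 0.001;            % assumed probability of the 'naked run'
I_office = -log2(p_office)   % about 0.0014 bits: almost no information
I_run    = -log2(p_run)      % about 9.97 bits: considerable information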

Information. The definition above also satisfies the requirement that the total information in independent events should add. Clearly, our Prime Minister prediction for two days contains twice as much information as for one day: X = {OO, OR, RO, RR}. For two independent outcomes xi and xj, I(xi and xj) = -log2 P(xi and xj) = -log2 [P(xi) P(xj)] = -log2 P(xi) - log2 P(xj).

Entropy. The measure entropy H(X) defines the information content of the source X as a whole; it is the mean information provided by the source: H(X) = Σi P(xi) I(xi) = -Σi P(xi) log2 P(xi). A binary symmetric source (BSS) is a source with two outputs whose probabilities are p and 1-p respectively.

Entropy. The Prime Minister source discussed above is a BSS. The entropy of the BSS source is H(X) = -p log2 p - (1-p) log2(1-p).
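A sketch plotting the BSS entropy over p; the endpoints are excluded because log2(0) diverges (the limiting entropy there is 0).
p = 0.001:0.001:0.999;
H = -p.*log2(p) - (1-p).*log2(1-p);   % entropy of the BSS in bits
plot(p, H);
xlabel('p'); ylabel('H(X) (bits)');
max(H)   % approximately 1 bit, attained at p = 0.5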

When one outcome is certain, so is the other, and the entropy is zero. As p increases, so too does the entropy, until it reaches a maximum when p = 1 - p = 0.5. When p is greater than 0.5, the curve declines symmetrically back to zero, which is reached when p = 1.

Next Week Application of Entropy in coding Minimal length coding

Entropy. We conclude that the average information in a BSS is maximised when both outcomes are equally likely. Entropy measures the average uncertainty of the source. (The term entropy is borrowed from thermodynamics; there too it is a measure of the uncertainty, or disorder, of a system.) Shannon: "My greatest concern was what to call it. I thought of calling it 'information', but the word was overly used, so I decided to call it 'uncertainty'. When I discussed it with John von Neumann, he had a better idea. Von Neumann told me, 'You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, nobody knows what entropy really is, so in a debate you will always have the advantage.'"

In physics: thermodynamics. The arrow of time (Wikipedia): entropy is the only quantity in the physical sciences that seems to imply a particular direction of progress, sometimes called an arrow of time. As time progresses, the second law of thermodynamics states that the entropy of an isolated system never decreases. Hence, from this perspective, entropy measurement can be thought of as a kind of clock.

Entropy