Introduction to Digital Signals


LECTURE 04: BASIC STATISTICAL MODELING OF DIGITAL SIGNALS

Objectives: Definition of a Digital Signal, Random Variables, Probability Density Functions, Means and Moments, White Noise.

Resources: Wiki: Probability; Wiki: Random Variable; S.G.: Random Processes Tutorial; Blogspot: Probability Resources.

(MS Equation 3.0 was used with settings of: 18, 12, 8, 18, 12.)

Introduction to Digital Signals

A discrete-time signal, x[n], is discrete in time but continuous in amplitude (e.g., the amplitude is represented as a real number). We often approximate real numbers in digital electronics using a floating-point numeric representation (e.g., 4-byte IEEE floating-point numbers). For example, a 4-byte floating-point number uses 32 bits in its representation, which allows it to represent up to 2^32 distinct values.

The simplest way to assign these values would be a linear representation: divide the range [Rmin, Rmax] into 2^32 equal steps. This is often referred to as linear quantization. Does this make sense? Floating-point numbers actually use a logarithmic representation consisting of an exponent and a mantissa (e.g., the IEEE floating-point format).

For many applications, such as the encoding of audio and images, 32 bits per sample is overkill because humans cannot distinguish that many unique levels. Lossless compression techniques (e.g., GIF images) reduce the number of bits without introducing any distortion. Lossy compression techniques (e.g., JPEG or MP3) reduce the number of bits significantly but introduce some amount of distortion.

Signals whose amplitude and time scale are both discrete are referred to as digital signals, and they form the basis for the field of digital signal processing.
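The linear quantization idea above can be sketched in a few lines. This is a minimal Python illustration (the course's own examples are in MATLAB); the function name, the range [Rmin, Rmax] = [-1, 1], and the 3-bit width are illustrative assumptions, not part of the lecture:

```python
def quantize_uniform(x, rmin=-1.0, rmax=1.0, bits=8):
    """Map x in [rmin, rmax] to one of 2**bits equally spaced levels
    (linear quantization), reconstructing at the midpoint of the step."""
    levels = 2 ** bits
    step = (rmax - rmin) / levels          # width of one quantization step
    # clamp so that x == rmax still falls in the top step
    x = min(max(x, rmin), rmax - step / 2)
    index = int((x - rmin) / step)         # which step x falls into
    return rmin + (index + 0.5) * step     # midpoint of that step

# With 3 bits there are 8 levels over [-1, 1], so the step is 0.25:
y = quantize_uniform(0.3, bits=3)
```

The reconstructed value is always within half a step of the input, which is the usual bound on uniform quantization error.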

Signal Amplitude As A Random Variable

The amplitude of x[n] can be modeled as a random variable. Consider an audio signal (mp3-encoded music): how would we compute a histogram of the amplitude of this signal?

[Figure: a sketch of an amplitude histogram, with Amplitude on the horizontal axis spanning -1.0 to 1.0 and Frequency (count) on the vertical axis. Note that this example is not an actual histogram of the audio data above.]

What would be a reasonable approximation for the amplitude histogram of this signal?
- Gaussian distribution
- Triangular distribution
- Uniform distribution

Examples of Random Variables

An important part of the specification of a random variable is its probability density function, p(x). Common continuous-valued examples:
- Uniform on [a, b]: p(x) = 1/(b - a) for a <= x <= b, and 0 otherwise.
- Triangular on [a, b] (symmetric, with its peak at the midpoint c = (a + b)/2): p(x) increases linearly from a to c and decreases linearly from c to b.
- Gaussian: p(x) = (1 / sqrt(2πσ²)) exp(-(x - μ)² / (2σ²)).

A discrete-valued random variable is instead described by a probability mass function, P(x_i). Note that these densities integrate (or the mass functions sum) to 1.

A Gaussian distribution is completely specified by two values, its mean and variance. This is one reason it is a popular and useful model. Also note that quantization systems take into account the distribution of the data to optimize bit assignments. This is studied in disciplines such as communications and information theory.

Mean Value Of A Random Signal (DC Value)

The mean value of a continuous random variable can be computed by integrating its probability density function:

  μ = E[x] = ∫ x p(x) dx

Example: for a uniformly distributed continuous random variable on [a, b],

  μ = ∫ x / (b - a) dx over [a, b] = (a + b) / 2

The mean value of a discrete random variable can be computed in an analogous manner:

  μ = Σ x_i P(x_i)

But this can also be estimated using the sample mean (N = number of samples):

  x̄ = (1/N) Σ x[n], n = 0, 1, ..., N-1

In fact, in more advanced coursework, you will learn that this is one of many ways to estimate the mean of a random variable (e.g., maximum likelihood). For a discrete uniformly distributed random variable taking M equally likely values, μ = (1/M) Σ x_i.

Soon we will learn that the average value of a signal is its DC value, or its frequency response at 0 Hz.
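The sample mean is easy to verify numerically. A short Python sketch (the choice of [a, b] = [0, 2] and the sample count are illustrative assumptions):

```python
import random

def sample_mean(x):
    """(1/N) times the sum of the samples."""
    return sum(x) / len(x)

random.seed(1)
# Uniformly distributed on [a, b] = [0, 2]; the true mean is (a + b)/2 = 1.
x = [random.uniform(0.0, 2.0) for _ in range(100000)]
m = sample_mean(x)   # should land close to 1.0
```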

Variance and Standard Deviation

The variance of a continuous random variable can be computed as:

  σ² = E[(x - μ)²] = ∫ (x - μ)² p(x) dx

For a uniformly distributed random variable on [a, b]:

  σ² = (b - a)² / 12

For a discrete random variable:

  σ² = Σ (x_i - μ)² P(x_i)

Following the analogy of the sample mean, this can be estimated from the data:

  σ̂² = (1/N) Σ (x[n] - x̄)², n = 0, 1, ..., N-1

The standard deviation is just the square root of the variance. Soon we will see that the variance is related to the correlation structure of the signal, the power of the signal, and the power spectral density.
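The variance estimate can be checked the same way. A Python sketch, again on a uniform [0, 2] signal whose true variance is (b - a)²/12 = 1/3 (the N-1 divisor is the common unbiased variant, an implementation choice):

```python
import math
import random

def sample_variance(x):
    """Unbiased estimate: (1/(N-1)) * sum of squared deviations
    from the sample mean."""
    m = sum(x) / len(x)
    return sum((xi - m) ** 2 for xi in x) / (len(x) - 1)

random.seed(2)
# Uniform on [a, b] = [0, 2]; the true variance is (b - a)**2 / 12 = 1/3.
x = [random.uniform(0.0, 2.0) for _ in range(100000)]
var = sample_variance(x)
std = math.sqrt(var)   # the standard deviation
```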

Covariance and Correlation

We can define the covariance between two random variables as:

  Cov(x, y) = E[(x - μ_x)(y - μ_y)]

For time series, when y is simply a delayed version of x, this can be reduced to a simpler form known as the correlation of a signal:

  r[k] = E[x[n] x[n + k]]

For a discrete random variable representing the samples of a time series, we can estimate this directly from the signal as:

  r̂[k] = (1/N) Σ x[n] x[n + k], summed over the available samples

The correlation is often referred to as a second-order moment, with the expectation (the mean) being the first-order moment. It is possible to define higher-order moments as well, e.g., E[x^k]. Two random variables are said to be uncorrelated if Cov(x, y) = 0 (equivalently, E[xy] = E[x] E[y]). A Gaussian random variable is completely determined by its mean and variance; all of its cumulants above second order are zero.
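The lag-k correlation estimate above can be sketched directly. A minimal Python illustration on zero-mean white noise, where r[0] should approximate the variance and r[k] should be near zero for k != 0 (the signal length and the tested lag are illustrative assumptions):

```python
import random

def autocorrelation(x, k):
    """Biased lag-k estimate: (1/N) * sum over n of x[n] * x[n + k]."""
    n = len(x)
    return sum(x[i] * x[i + k] for i in range(n - k)) / n

random.seed(3)
# Zero-mean unit-variance Gaussian white noise: samples at different
# lags are uncorrelated.
x = [random.gauss(0.0, 1.0) for _ in range(50000)]
r0 = autocorrelation(x, 0)   # close to the variance, 1.0
r5 = autocorrelation(x, 5)   # close to 0
```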

White Gaussian Noise

We can combine these concepts to develop a model for noise. Consider a signal that has energy at all frequencies. We can describe a signal like this as having a mean value of zero and a variance, or power, of σ². It is common to model the amplitude distribution of the signal as a Gaussian distribution:

  p(x) = (1 / sqrt(2πσ²)) exp(-x² / (2σ²))

We refer to this as zero-mean white Gaussian noise. In many engineering systems corrupted by noise, we model the measured signal as:

  y[n] = s[n] + w[n]

where w[n] is zero-mean white Gaussian noise. Why is this a useful model? Think about it: in Signals and Systems, continuous time, discrete time, and statistics converge to provide a powerful way to manipulate digital signals on a computer. This is one reason much instrumentation has gone digital.
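The additive model y[n] = s[n] + w[n] can be simulated directly. A Python sketch that corrupts a clean sinusoid with zero-mean Gaussian noise (the sinusoid, its length, and σ = 0.1 are illustrative assumptions):

```python
import math
import random

def add_white_gaussian_noise(s, sigma, seed=None):
    """Return y[n] = s[n] + w[n], with w[n] drawn i.i.d. from N(0, sigma**2)."""
    rng = random.Random(seed)
    return [si + rng.gauss(0.0, sigma) for si in s]

# A clean 5-cycle sinusoid corrupted by noise:
n_samples = 1000
s = [math.sin(2 * math.pi * 5 * n / n_samples) for n in range(n_samples)]
y = add_white_gaussian_noise(s, sigma=0.1, seed=4)

# The noise is zero-mean, so the average of the residual y[n] - s[n]
# should be close to zero.
residual_mean = sum(yi - si for yi, si in zip(y, s)) / n_samples
```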

Summary

- Introduced the concept of a digital signal: a signal that is discrete in both time and amplitude.
- Introduced the concept of a random variable, and showed how it can be modeled through a probability distribution of its values. We used this to model signals.
- Introduced the concepts of the mean and variance of a random variable, and demonstrated simple calculations of these for a uniformly distributed random variable.
- Discussed the application of these concepts to modeling noise.

MATLAB code for the examples discussed in this lecture can be found at:
http://www.isip.piconepress.com/publications/courses/ece_3163/matlab/2009_spring/statistics
Other similar signal processing examples are located at:
http://www.isip.piconepress.com/publications/courses/ece_3163/matlab