
1 Background: Information theory, Probability theory, Algorithms
©2003/04 Alessandro Bogliolo

2 Outline
1. Information theory
   1. Definitions
   2. Encodings
   3. Digitization
2. Probability theory
   1. Random processes
   2. Probability (marginal, joint, conditional), Bayes theorem
   3. Probability distributions
   4. Sampling theory and estimation
3. Algorithms
   1. Definition
   2. Equivalence
   3. Complexity

3 Information
Information: reduction of uncertainty.
The minimum uncertainty is given by a choice between two alternatives, so the elementary choice between 2 alternatives contains the minimum amount of information.
Bit (binary digit): the unit of information, encoding the elementary choice between 2 alternatives.

4 Strings
String: a sequence of N characters (or digits) taken from a given finite alphabet of S symbols.
There are S^N different strings of N characters over an alphabet of S symbols.
A binary string is composed of bits, defined over the binary alphabet {0, 1}.
Byte: a binary string of 8 bits.
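To make the count concrete, here is a minimal Python sketch (illustrative, not part of the original slides) that enumerates all strings of length N over an alphabet of S symbols:

```python
from itertools import product

S, N = 2, 3  # binary alphabet, strings of length 3
alphabet = "01"

# Enumerate all strings of length N over the alphabet.
strings = ["".join(p) for p in product(alphabet, repeat=N)]
print(strings)               # ['000', '001', ..., '111']
print(len(strings) == S**N)  # True: there are S^N such strings
```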

5 Encodings
Encoding: assignment of strings to the elements of a set according to a given rule.
Properties:
- Irredundant: each element is assigned a unique string
- Constant length: all code words have the same length
- Exact: all elements are encoded and no two elements are associated with the same string
Examples: the variable-length code 0, 10, 110, 1110; the constant-length codes 00, 01, 10, 11 and 000, 001, 010, 011, 100, 101, 110.

6 Encoding finite sets
The minimum number n of digits of a constant-length exact encoding of a set of M elements over an alphabet of S symbols is the smallest integer such that S^n ≥ M, i.e., n = ⌈log_S M⌉.
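A small Python sketch of this rule (the function name min_digits is ours, chosen for illustration):

```python
import math

def min_digits(M: int, S: int) -> int:
    """Smallest n such that S**n >= M: the number of digits needed for
    an exact constant-length encoding of M elements over S symbols."""
    n = 0
    while S**n < M:  # integer arithmetic avoids floating-point pitfalls
        n += 1
    return n

print(min_digits(1000, 2))            # 10, since 2**10 = 1024 >= 1000
print(math.ceil(math.log(1000, 2)))   # same value via ceil(log_S M)
```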

7 Encoding unlimited sets (limitation)
An unlimited set contains infinitely many elements. Example: the integer numbers.
Infinite sets cannot be exactly encoded.
To be digitally encoded, the set must be restricted to a limited, finite subset. In most cases this is done by encoding only the elements within given lower and upper bounds. Example: the integers between 0 and 999.
The limited subset can then be exactly encoded.

8 Example: positional notation
The base-b positional representation of an integer number of n digits has the form d_{n-1} d_{n-2} … d_1 d_0, with 0 ≤ d_i < b.
The value of the number is Σ_{i=0}^{n-1} d_i · b^i.
n digits encode all integer numbers from 0 to b^n − 1.
Example: b=2, n=5: 10011 = 1·16 + 1·2 + 1·1 = 19; 11111 = 31 = 2^5 − 1.
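A minimal Python sketch of base-b conversion in both directions (the helper names to_base and from_base are illustrative):

```python
def to_base(value: int, b: int, n: int) -> list[int]:
    """Digits d_{n-1} ... d_0 of `value` in base b, most significant first."""
    digits = []
    for _ in range(n):
        digits.append(value % b)   # extract the least significant digit
        value //= b
    return digits[::-1]

def from_base(digits: list[int], b: int) -> int:
    """Value of a positional representation: the sum of d_i * b**i."""
    value = 0
    for d in digits:
        value = value * b + d
    return value

print(to_base(19, 2, 5))               # [1, 0, 0, 1, 1]
print(from_base([1, 0, 0, 1, 1], 2))   # 19
print(from_base([1, 1, 1, 1, 1], 2))   # 31 == 2**5 - 1
```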

9 Encoding continuous sets (discretization)
A continuous set contains infinitely many elements. Examples: the real numbers in a given interval, the points on a plane.
To be digitally encoded, the set needs to be discretized, i.e., partitioned into a discrete number of subsets.
Codewords are then associated with the subsets.
The resulting encoding is approximate.

10 Example: gray levels
With 4 bits, the 16 codes 0000 through 1111 can encode gray levels.
The encoding associates a unique code with an interval of gray levels.
All gray levels within the interval are associated with the same code, thus losing information: the original gray level cannot be exactly reconstructed from the code.
Decoding associates each code with a unique gray level (a representative of its class).
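A possible Python sketch of this scheme, assuming gray levels normalized to [0, 1] and midpoint reconstruction (both assumptions ours, not from the slides):

```python
def quantize(level: float, n_bits: int) -> int:
    """Map a gray level in [0.0, 1.0] to one of 2**n_bits codes."""
    n_codes = 2 ** n_bits
    code = int(level * n_codes)
    return min(code, n_codes - 1)  # clamp level == 1.0 into the top class

def reconstruct(code: int, n_bits: int) -> float:
    """Representative gray level of the class: the interval midpoint."""
    n_codes = 2 ** n_bits
    return (code + 0.5) / n_codes

code = quantize(0.40, 4)
print(code)                  # 6
print(reconstruct(code, 4))  # 0.40625, not 0.40: information was lost
```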

11 Example: 2D images
[Figure: an image discretized into n_x × n_y pixels, each pixel encoded with one of n_lev gray levels.]

12 Analog and digital signals
Signal: a time-varying physical quantity.
- Analog: continuous-time, continuous-value
- Digital: discrete-time, discrete-value
The digital encoding of a continuous signal entails:
- Sampling (i.e., time discretization), characterized by the sampling rate and the duration
- Quantization (i.e., value discretization), characterized by the sample size
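As an illustration, a short Python sketch that samples and quantizes a continuous signal; the function digitize and the [-1, 1] value range are assumptions made for the example:

```python
import math

def digitize(signal, duration, s_rate, n_bits):
    """Sample `signal` (a function of time, with values in [-1, 1]) at
    `s_rate` samples per second and quantize each sample to n_bits."""
    n_codes = 2 ** n_bits
    samples = []
    for k in range(int(duration * s_rate)):
        t = k / s_rate                              # sampling: time discretization
        v = (signal(t) + 1.0) / 2.0                 # map [-1, 1] to [0, 1]
        code = min(int(v * n_codes), n_codes - 1)   # quantization: value discretization
        samples.append(code)
    return samples

# 1 second of a 3 Hz sine wave, 16 samples/s, 4-bit samples
print(digitize(lambda t: math.sin(2 * math.pi * 3 * t), 1.0, 16, 4))
```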

13 Example: time series
[Figure: a sampled time series, with time on the horizontal axis and the sampled value on the vertical axis.]

14 Example: video
s_rate = frame rate
n_col = number of colors
n_x × n_y = frame size (in pixels)
[Figure: a video as a time sequence of n_x × n_y frames, each pixel taking one of n_col colors.]

15 Redundancy
Redundant encoding: an encoding that uses more than the minimum number of digits required by an exact encoding.
Motivations for redundancy:
- More expressive/natural encoding and decoding rules
- Reliability (error detection), e.g., parity encoding
- Noise immunity / fault tolerance (error correction), e.g., triplication

16 Redundancy: examples
Parity encoding:
- A parity bit is appended to guarantee that all codewords have an even number of 1's: the irredundant codeword 0010 becomes 00101
- Single errors are detected by means of a parity check: the received word 01101 has an odd number of 1's, so an error is detected
Triple redundancy:
- Each character is repeated 3 times: 0010 becomes 000000111000
- Single errors are corrected by means of majority voting: the corrupted word 000000111010 still decodes to 0010
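Both schemes are easy to express in code. A minimal Python sketch (the helper names are ours):

```python
def add_parity(word: str) -> str:
    """Append a parity bit so the codeword has an even number of 1's."""
    return word + str(word.count("1") % 2)

def parity_ok(codeword: str) -> bool:
    """Parity check: detects any single-bit error."""
    return codeword.count("1") % 2 == 0

def triplicate(word: str) -> str:
    """Repeat each character 3 times."""
    return "".join(c * 3 for c in word)

def vote(codeword: str) -> str:
    """Correct single errors per group of 3 by majority voting."""
    groups = [codeword[i:i + 3] for i in range(0, len(codeword), 3)]
    return "".join("1" if g.count("1") >= 2 else "0" for g in groups)

print(add_parity("0010"))     # 00101
print(parity_ok("01101"))     # False: error detected
print(triplicate("0010"))     # 000000111000
print(vote("000000111010"))   # 0010: single error corrected
```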

17 Compression
Lossy compression:
- Compression is achieved at the cost of reducing the accuracy of the representation
- The original representation cannot be restored
Lossless compression:
- Compression is achieved by removing redundancy or by leveraging content-specific opportunities
- The original representation can be restored

18 Outline
1. Information theory
   1. Definitions
   2. Encodings
   3. Digitization
2. Probability theory
   1. Random processes
   2. Probability (marginal, joint, conditional), Bayes theorem
   3. Probability distributions
   4. Sampling theory and estimation
3. Algorithms
   1. Definition
   2. Equivalence
   3. Complexity

19 Random process
A random process:
- Can be repeated an unlimited number of times
- May provide mutually-exclusive results
- Provides an unpredictable result at each trial
Elementary event (e): each possible result of a single trial.
Event space (X): the set of all elementary events.
Event (E): any set of elementary events, i.e., any subset E ⊆ X of the event space.

20 Probability
Relative frequency of E over N trials: the ratio of the number of occurrences of E (n_E) to the number of trials N:
f_N(E) = n_E / N
Probability of E (empirical definition): the limit of the relative frequency as the number of trials grows,
p(E) = lim_{N→∞} f_N(E)
Probability of E (axiomatic definition):
p(E) = 0 if E is empty
p(E) = 1 if E = X
p(E_1 ∪ E_2) = p(E_1) + p(E_2) if E_1 ∩ E_2 = ∅
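The empirical definition can be illustrated by simulation; a Python sketch (the die example is ours, not from the slides):

```python
import random

def relative_frequency(event, trial, N: int) -> float:
    """f_N(E) = n_E / N over N independent trials."""
    n_E = sum(event(trial()) for _ in range(N))
    return n_E / N

# Event E: a fair die shows an even number (true probability 1/2)
die = lambda: random.randint(1, 6)
even = lambda outcome: outcome % 2 == 0

for N in (100, 10_000, 1_000_000):
    print(N, relative_frequency(even, die, N))  # approaches p(E) = 0.5
```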

21 Probability (properties)
1. If all elementary events have the same probability, the probability of an event is given by its "relative size": p(E) = card(E) / card(X)
2. 0 ≤ p(E) ≤ 1
3. If E_1 is a subset of E_2, then p(E_1) ≤ p(E_2)
4. p(Ē) = 1 − p(E)
5. p(E_1 ∪ E_2) = p(E_1) + p(E_2) − p(E_1 ∩ E_2)

22 Conditional probability
Joint probability of two events E_1 and E_2, outcomes of two random experiments: p(E_1, E_2)
Marginal probability: p(E_1) = Σ_{E_2} p(E_1, E_2)
Conditional probability: p(E_1 | E_2) = p(E_1, E_2) / p(E_2)
Decomposition: p(E_1, E_2) = p(E_1 | E_2) · p(E_2)

23 Independent events
The joint probability of two independent events is equal to the product of their marginal probabilities: E_1 and E_2 are independent events if and only if
p(E_1, E_2) = p(E_1) · p(E_2)

24 Bayes theorem
Bayes theorem: given two events E_1 and E_2, the conditional probability of E_1 given E_2 can be expressed as
p(E_1 | E_2) = p(E_2 | E_1) · p(E_1) / p(E_2)
The theorem provides the basis for statistical diagnosis: evaluating the probability of a possible cause (E_1) of an observed effect (E_2).
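A small numeric sketch of Bayes-based diagnosis in Python; all probabilities below are hypothetical values chosen for illustration:

```python
def bayes(p_e2_given_e1: float, p_e1: float, p_e2: float) -> float:
    """p(E1 | E2) = p(E2 | E1) * p(E1) / p(E2)."""
    return p_e2_given_e1 * p_e1 / p_e2

# Hypothetical diagnosis: cause E1 = "disease", effect E2 = "positive test"
p_disease = 0.01               # prior p(E1)
p_pos_given_disease = 0.99     # sensitivity p(E2 | E1)
p_pos_given_healthy = 0.05     # false-positive rate

# Marginal p(E2) by decomposition over the two possible causes
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

print(bayes(p_pos_given_disease, p_disease, p_pos))  # ~0.167
```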

25 Random variable
Random variable x: a variable representing the outcome of a random experiment.
Probability distribution function of x: F_x(t) = p(x ≤ t)
Probability density function of x: f_x(t) = dF_x(t)/dt

26 Sampling and estimation
Parent population of a random variable x: the ideal set of the outcomes of all possible trials of a random process.
Sample of x: the set of the outcomes of N trials of the random process.
Sample parameters can be used as estimators of the parameters of the parent population.
Example: the expected value of x, E[x], can be estimated by the sample average x̄ = (1/N) Σ_{i=1}^{N} x_i.
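A Python sketch of this estimation, using a fair die as a stand-in parent population (an example of ours):

```python
import random

# Parent population: a fair die. True expected value E[x] = 3.5.
def trial():
    return random.randint(1, 6)

def sample_average(N: int) -> float:
    """Estimator of E[x]: the average of N sampled outcomes."""
    return sum(trial() for _ in range(N)) / N

for N in (10, 1_000, 100_000):
    print(N, sample_average(N))  # converges towards 3.5 as N grows
```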

27 Confidence of an estimator
The quality of an estimator P' of a given parameter P can be expressed in terms of:
- Confidence interval: the limiting distance d between the estimator and the actual parameter
- Confidence level: the probability c of finding the actual parameter within the confidence interval
The smaller the confidence interval d and the higher the confidence level c, the better.
The quality of an estimator grows with the number of samples: for a fixed confidence level c, the size of the confidence interval d decreases as 1/√N.

28 Variance, covariance, correlation
Variance: Var(x) = E[(x − E[x])²]
Standard deviation: σ_x = √Var(x)
Covariance: Cov(x, y) = E[(x − E[x]) · (y − E[y])]
Correlation: ρ_{xy} = Cov(x, y) / (σ_x · σ_y)
The confidence interval of an estimator is proportional to σ_x / √N.
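These definitions translate directly into code; a minimal Python sketch using population (1/N) normalization, an assumption of ours:

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    """Var(x) = E[(x - E[x])^2] (population variance)."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def covariance(xs, ys):
    """Cov(x, y) = E[(x - E[x]) * (y - E[y])]."""
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def correlation(xs, ys):
    """rho = Cov(x, y) / (sigma_x * sigma_y)."""
    return covariance(xs, ys) / math.sqrt(variance(xs) * variance(ys))

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]   # roughly 2 * xs: strong positive correlation
print(variance(xs), covariance(xs, ys), correlation(xs, ys))  # rho ~ 0.998
```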

29 Outline
1. Information theory
   1. Definitions
   2. Encodings
   3. Digitization
2. Probability theory
   1. Random processes
   2. Probability (marginal, joint, conditional), Bayes theorem
   3. Probability distributions
   4. Sampling theory and estimation
3. Algorithms
   1. Definition
   2. Equivalence
   3. Complexity

30 Algorithm
Definition: a finite description of a finite sequence of unambiguous instructions that can be executed in finite time to solve a problem or provide a result.
Key properties:
- Finite description
- No ambiguity
- Finite execution
Algorithms take input data and provide output data.
Domain: the set of all allowed configurations of the input data.

31 Complexity
Complexity: a measure of the number of elementary steps required by the algorithm to solve a problem.
The number of execution steps usually depends on the configuration of the input data (i.e., on the instance of the problem).
The complexity of an algorithm is usually expressed as a function of the size of its input data, retaining the type of growth while neglecting additive and multiplicative constants. Examples: O(n), O(n²), O(2^n).

32 Equivalence
Two algorithms are said to be equivalent if:
- they are defined on the same domain
- they provide the same result in all domain points
In general, there are many equivalent algorithms that solve the same problem, possibly with different complexity (see the sketch below).
Complexity is a property of an algorithm; it is not an inherent property of the problem. The complexity of the most efficient known algorithm that solves a given problem is commonly taken as the complexity of the problem.
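For instance, here is a Python sketch (example ours) of two equivalent algorithms for summing the first n integers, one O(n) and one O(1):

```python
def sum_linear(n: int) -> int:
    """O(n): add the integers 1..n one by one."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_constant(n: int) -> int:
    """O(1): Gauss's closed formula n*(n+1)/2."""
    return n * (n + 1) // 2

# Equivalent: same domain (non-negative integers), same result everywhere
assert all(sum_linear(n) == sum_constant(n) for n in range(1000))
print(sum_constant(1_000_000))  # 500000500000
```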

