Title Reprint/Preprint Download at:

Intro: the Golden Ratio. Define f by f + f² = 1; its reciprocal Φ = 1/f is the Golden Ratio. [Figure: a golden rectangle with sides in the ratio Φ : 1.]
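For completeness, a short worked solution of the defining relation (the positive root of the quadratic; the value matches the probabilities p_1 = f, p_2 = f² used later in the talk):

\[
f^2 + f - 1 = 0 \;\Longrightarrow\; f = \frac{\sqrt{5} - 1}{2} \approx 0.618,
\qquad
\Phi = \frac{1}{f} = \frac{\sqrt{5} + 1}{2} \approx 1.618 .
\]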

Intro: a brief history. The Pythagoreans (570 – 500 B.C.) were the first to know that the Golden Ratio is an irrational number. Euclid (300 B.C.) gave it its first clear definition, as "the extreme and mean ratio". Pacioli (1445 – 1517) popularized the Golden Ratio outside the mathematical community with his book "The Divine Proportion". Kepler (1571 – 1630) discovered that the ratios of consecutive Fibonacci numbers converge to the Golden Ratio. Jacques Bernoulli (1654 – 1705) made the connection between the logarithmic spiral and the golden rectangle. Binet (1786 – 1856) is credited with the closed-form formula for the Fibonacci numbers. Ohm (1835) was the first to use the term "Golden Section". [Figure: a golden rectangle subdivided into pieces with sides 1, f, f², f³.]

Nature

Neuron Models. References: Rinzel & Wang (1997); Bechtereva & Abdullaev (2000). [Figure: bursting spike trains on a time axis, with burst durations 1T and 3T and ratio labels 1 and f.]

SEED Implementation: Spike Excitation Encoding & Decoding (SEED). [Block diagram: Signal → Encode → Channel → Decode, with a mistuned channel.] …

Information System and Entropy

Alphabet: A = {0, 1}. Message: s = …
Information system: an ensemble of messages, characterized by the symbol probabilities P({0}) = p_0, P({1}) = p_1.
The probability of a particular message s_0 … s_{n−1} is p_{s_0} ⋯ p_{s_{n−1}} = p_0^{# of 0s} p_1^{# of 1s}, where (# of 0s) + (# of 1s) = n.
The average symbol probability for a typical message is
(p_{s_0} ⋯ p_{s_{n−1}})^{1/n} = p_0^{(# of 0s)/n} p_1^{(# of 1s)/n} ≈ p_0^{p_0} p_1^{p_1}.

Entropy. Write p_0 = (1/2)^{log_{1/2} p_0} = (1/2)^{−ln p_0 / ln 2} and p_1 = (1/2)^{−ln p_1 / ln 2}. Then the average symbol probability for a typical message is
(p_{s_0} ⋯ p_{s_{n−1}})^{1/n} ≈ p_0^{p_0} p_1^{p_1} = (1/2)^{(−p_0 ln p_0 − p_1 ln p_1) / ln 2} := (1/2)^{E(p)}.
By definition, the entropy of the system is E(p) = (−p_0 ln p_0 − p_1 ln p_1) / ln 2, in bits per symbol.

In general, if A = {0, …, n−1} with P({0}) = p_0, …, P({n−1}) = p_{n−1}, then each symbol carries on average
E(p) = (−p_0 ln p_0 − … − p_{n−1} ln p_{n−1}) / ln 2
bits of information; this quantity is the entropy.

Example: Alphabet A = {0, 1} with equal probabilities P({0}) = P({1}) = 0.5. Then each symbol carries E = ln 2 / ln 2 = 1 bit of information.
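As a quick illustration of the entropy formula above, here is a minimal Python sketch (added to this transcript, not taken from the slides) that computes E(p) for an arbitrary probability vector and reproduces the 1 bit per symbol of the fair binary alphabet:

import math

def entropy_bits(p):
    # E(p) = -sum_k p_k ln p_k / ln 2, in bits per symbol.
    # Zero-probability symbols contribute nothing (0 * ln 0 := 0).
    assert abs(sum(p) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(-pk * math.log(pk) for pk in p if pk > 0) / math.log(2)

# Fair binary alphabet: exactly 1 bit per symbol.
print(entropy_bits([0.5, 0.5]))      # 1.0

# Golden-ratio distribution for n = 2 (used later in the talk).
f = (math.sqrt(5) - 1) / 2           # f ~ 0.618, so f + f**2 = 1
print(entropy_bits([f, f * f]))      # ~ 0.959 bits per symbol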

Bit Rate and the Golden Ratio Distribution

SEED encoding: sensory input alphabet S_n = {A_1, A_2, …, A_n} with probabilities {p_1, …, p_n}.
Isospike encoding: E_n = {burst of 1 isospike, …, burst of n isospikes}.
Message: SEED isospike trains …
Idealized situation: (1) each spike takes up the same amount of time, T; (2) zero inter-spike transition time.
Then the average time per symbol is T_ave(p) = T p_1 + 2T p_2 + … + nT p_n, and the bit rate per unit time is r_n(p) = E(p) / T_ave(p).

Theorem (Golden Ratio Distribution). For each n ≥ 2,
r_n* = max{ r_n(p) : p_1 + p_2 + … + p_n = 1, p_k ≥ 0 } = −ln p_1 / (T ln 2),
attained at p_k = p_1^k, where p_1 solves p_1 + p_1^2 + … + p_1^n = 1. In particular, for n = 2, p_1 = f and p_2 = f². In addition, p_1(n) → 1/2 as n → ∞.

[Figure: isospike bursts of durations 1T and 3T on a time axis.]
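A minimal numerical check of the theorem (a sketch with illustrative helper names, not code from the talk): solve p_1 + p_1^2 + … + p_1^n = 1 by bisection, compare the resulting rate with −ln p_1 / (T ln 2), and confirm by brute force for n = 2 that the rate peaks at p_1 ≈ f:

import math

def entropy_bits(p):
    return sum(-pk * math.log(pk) for pk in p if pk > 0) / math.log(2)

def p1_for(n, tol=1e-12):
    # Solve p1 + p1**2 + ... + p1**n = 1 for p1 in (0, 1) by bisection.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if sum(mid**k for k in range(1, n + 1)) < 1 else (lo, mid)
    return (lo + hi) / 2

def rate(p, T=1.0):
    # Bits per unit time, r_n(p) = E(p) / T_ave(p), with symbol k lasting k*T.
    t_ave = sum((k + 1) * T * pk for k, pk in enumerate(p))
    return entropy_bits(p) / t_ave

# n = 2: the maximizer is the golden-ratio distribution (f, f**2).
f = p1_for(2)
print(f, (math.sqrt(5) - 1) / 2)                     # both ~ 0.6180
print(rate([f, f * f]), -math.log(f) / math.log(2))  # both ~ 0.6942 bits per T

# Brute-force check for n = 2: r_2 is largest near p_1 = f.
print(max((rate([q / 1e4, 1 - q / 1e4]), q / 1e4) for q in range(1, 10000)))

# p_1(n) -> 1/2 as n grows.
print([round(p1_for(n), 4) for n in (2, 3, 5, 10, 20)])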

Generalized Golden Ratio Distribution: the same bit-rate maximization when symbol k occupies an arbitrary time T_k. Special case: T_k / T_1 = k (i.e. T_k = kT), which recovers the golden-ratio distribution of the previous slide.
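The transcript does not preserve the slide's formula for the generalized distribution. The sketch below assumes the natural generalization in which the rate E(p) / (p_1 T_1 + … + p_n T_n) is maximized by p_k = x^{T_k} with x chosen so that the p_k sum to 1, which reduces to p_k = p_1^k when T_k = kT; the duration vector used here is purely illustrative.

import math

def entropy_bits(p):
    return sum(-pk * math.log(pk) for pk in p if pk > 0) / math.log(2)

def rate(p, T):
    # Bits per unit time when symbol k has probability p[k] and duration T[k].
    return entropy_bits(p) / sum(pk * tk for pk, tk in zip(p, T))

def generalized_distribution(T, tol=1e-12):
    # Assumed form: p_k = x**T_k, with x chosen so the probabilities sum to 1.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if sum(mid**t for t in T) < 1 else (lo, mid)
    x = (lo + hi) / 2
    return [x**t for t in T], -math.log(x) / math.log(2)   # distribution, rate

T = [1.0, 2.0, 4.0]                    # illustrative durations, not from the slides
p_star, r_star = generalized_distribution(T)
print(p_star, r_star)

# A coarse grid search over the probability simplex agrees with r_star.
best, steps = 0.0, 400
for i in range(1, steps):
    for j in range(1, steps - i):
        best = max(best, rate([i / steps, j / steps, (steps - i - j) / steps], T))
print(best)                            # ~ r_star, up to grid resolution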

Golden Sequence (Rule: 1 → 10, 0 → 1)

Sequence     # of 1s   # of 0s   Total
1            1         0         1
10           1         1         2
101          2         1         3
10110        3         2         5
10110101     5         3         8
…            F_n       F_{n−1}   F_n + F_{n−1} = F_{n+1}

(# of 1s) / (# of 0s) = F_n / F_{n−1} → 1/f, since F_{n+1} = F_n + F_{n−1}.
Distribution: 1 = F_n / F_{n+1} + F_{n−1} / F_{n+1}, so p_1 → f and p_0 → f².
For the tiling: P{fat tile} → f, P{thin tile} → f².
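A short sketch (added here, not from the slides) that generates the golden sequence by the rule 1 → 10, 0 → 1 and checks that the symbol counts are consecutive Fibonacci numbers with limiting frequencies p_1 → f and p_0 → f²:

import math

def golden_sequence(generations):
    # Apply the substitution rule 1 -> 10, 0 -> 1, starting from "1".
    s = "1"
    for _ in range(generations):
        s = "".join("10" if c == "1" else "1" for c in s)
    return s

f = (math.sqrt(5) - 1) / 2            # ~ 0.618

s = golden_sequence(20)
ones, zeros = s.count("1"), s.count("0")
print(ones, zeros, len(s))            # consecutive Fibonacci numbers F_n, F_{n-1}, F_{n+1}
print(ones / zeros, 1 / f)            # (# of 1s)/(# of 0s) -> 1/f ~ 1.618
print(ones / len(s), f)               # p_1 -> f   ~ 0.618
print(zeros / len(s), f * f)          # p_0 -> f^2 ~ 0.382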

Title