
1 Title Reprint/Preprint Download at: http://www.math.unl.edu/~bdeng

2 Intro: The Golden Ratio f divides a unit segment in the extreme and mean ratio, so that f + f^2 = 1, giving f = (\sqrt{5} - 1)/2 ≈ 0.618.
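A worked step, added here for completeness (it is not spelled out on the slide): solving the defining quadratic gives the numerical value of f and of its reciprocal, the number usually written φ.

\[
f + f^2 = 1 \;\Longrightarrow\; f^2 + f - 1 = 0 \;\Longrightarrow\; f = \frac{\sqrt{5}-1}{2} \approx 0.6180,
\qquad \frac{1}{f} = 1 + f = \frac{\sqrt{5}+1}{2} \approx 1.6180.
\]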

3 Intro: A brief history of the Golden Ratio. [Figure: golden rectangle with sides labeled 1, f, f^2, f^3.]
- The Pythagoreans (570 – 500 B.C.) were the first to know that the Golden Ratio is an irrational number.
- Euclid (300 B.C.) gave it its first clear definition, as 'the extreme and mean ratio'.
- Pacioli (1445 – 1517) popularized the Golden Ratio outside the math community with his book 'The Divine Proportion'.
- Kepler (1571 – 1630) discovered that the ratios of consecutive Fibonacci numbers converge to the Golden Ratio.
- Jacques Bernoulli (1654 – 1705) made the connection between the logarithmic spiral and the golden rectangle.
- Binet (1786 – 1856) gave the closed-form formula for the Fibonacci numbers that bears his name.
- Ohm (1835) was the first to use the term 'Golden Section'.

4 Nature

5 Neuron Models. [Figure: bursting-neuron models and recordings, after Rinzel & Wang (1997) and Bechtereva & Abdullaev (1994, 2000); spike-train schematic with burst durations 1T and 3T and ratio 1 : f.]

6 SEED Implementation: Spike Excitation Encoding & Decoding (SEED). [Diagram: Signal → Encode → Channel → Decode, with a mistuned channel; example isospike-burst train 3 2 4 3 3 2 2 1 1 3 …]

7 Bit Rate: Entropy.

Information system. Alphabet: A = {0, 1}. Message: s = 11100101…  An information system is an ensemble of messages, characterized by the symbol probabilities P({0}) = p_0, P({1}) = p_1.

The probability of a particular message s_0 … s_{n-1} is

    p_{s_0} … p_{s_{n-1}} = p_0^{#0s} p_1^{#1s},   where #0s + #1s = n.

The average symbol probability of a typical message is

    (p_{s_0} … p_{s_{n-1}})^{1/n} = p_0^{#0s/n} p_1^{#1s/n} ≈ p_0^{p_0} p_1^{p_1}.

Entropy. Write p_0 = (1/2)^{log_{1/2} p_0} = (1/2)^{-ln p_0 / ln 2} and p_1 = (1/2)^{-ln p_1 / ln 2}. Then the average symbol probability of a typical message is

    (p_{s_0} … p_{s_{n-1}})^{1/n} ≈ p_0^{p_0} p_1^{p_1} = (1/2)^{(-p_0 ln p_0 - p_1 ln p_1) / ln 2} =: (1/2)^{E(p)}.

By definition, the entropy of the system is E(p) = (-p_0 ln p_0 - p_1 ln p_1) / ln 2, in bits per symbol.

In general, if A = {0, …, n-1} with P({0}) = p_0, …, P({n-1}) = p_{n-1}, then each symbol carries on average E(p) = (-p_0 ln p_0 - … - p_{n-1} ln p_{n-1}) / ln 2 bits of information; this is called the entropy.

Example: Alphabet A = {0, 1} with equal probabilities P({0}) = P({1}) = 0.5; message …011100101…  Then each symbol carries E = ln 2 / ln 2 = 1 bit of information.
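A minimal sketch of the entropy computation above, in Python (not from the slides; the function name entropy_bits is my own):

    import math

    def entropy_bits(probs):
        """Entropy E(p) = -sum_k p_k log2(p_k), in bits per symbol."""
        return -sum(p * math.log(p, 2) for p in probs if p > 0)

    # The fair-coin example from slide 7: exactly 1 bit per symbol.
    print(entropy_bits([0.5, 0.5]))      # 1.0

    # The Golden Ratio Distribution for n = 2 (slide 8): p1 = f, p2 = f^2.
    f = (math.sqrt(5) - 1) / 2
    print(entropy_bits([f, f**2]))       # ≈ 0.9596 bits per symbol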

8 Bit Rate: Golden Ratio Distribution.

SEED encoding: sensory input alphabet S_n = {A_1, A_2, …, A_n} with probabilities {p_1, …, p_n}. Isospike encoding: E_n = {burst of 1 isospike, …, burst of n isospikes}. Message: SEED isospike trains.

Ideal situation: 1) each spike takes up the same amount of time T, and 2) the inter-spike transition time is zero. Then the average time per symbol is

    T_ave(p) = T p_1 + 2T p_2 + … + nT p_n,

and the bit rate per unit time is

    r_n(p) = E(p) / T_ave(p).

Theorem (Golden Ratio Distribution): For each n ≥ 2,

    r_n* = max{ r_n(p) : p_1 + p_2 + … + p_n = 1, p_k ≥ 0 } = -ln p_1 / (T ln 2),

attained at p_k = p_1^k with p_1 + p_1^2 + … + p_1^n = 1. In particular, for n = 2, p_1 = f and p_2 = f^2. In addition, p_1(n) → 1/2 as n → ∞.

[Figure: optimal isospike train over time, with burst durations 1T, 3T, ….]
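A quick numerical check of the theorem (my own sketch, not from the slides): solve p_1 + p_1^2 + … + p_1^n = 1 by bisection and watch p_1 equal f at n = 2, then decrease toward 1/2 as n grows.

    import math

    def optimal_p1(n, tol=1e-12):
        """Solve p1 + p1^2 + ... + p1^n = 1 for p1 in (0, 1) by bisection."""
        lo, hi = 0.0, 1.0
        while hi - lo > tol:
            mid = (lo + hi) / 2
            s = sum(mid**k for k in range(1, n + 1))
            if s < 1:
                lo = mid    # partial sum too small: p1 must be larger
            else:
                hi = mid
        return (lo + hi) / 2

    f = (math.sqrt(5) - 1) / 2
    print(optimal_p1(2), f)          # both ≈ 0.6180339887 (the Golden Ratio)
    for n in (3, 5, 10, 30):
        print(n, optimal_p1(n))      # decreases toward 1/2 as n -> infinity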

9 Bit Rate: Generalized Golden Ratio Distribution. With unequal burst durations T_1 < T_2 < … < T_n, the maximal-bit-rate distribution takes the form p_k = p_1^{T_k / T_1}, with p_1^{T_1/T_1} + p_1^{T_2/T_1} + … + p_1^{T_n/T_1} = 1. Special case: T_k = mk, so that T_k / T_1 = k, recovers the Golden Ratio Distribution p_k = p_1^k of slide 8.
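A sketch (my own, under the reconstruction of the generalized form above) that solves the condition sum_k p_1^(T_k/T_1) = 1 for arbitrary durations, again by bisection:

    def optimal_p1_general(durations, tol=1e-12):
        """Solve sum_k p1^(T_k/T_1) = 1 for p1 in (0, 1) by bisection."""
        T1 = durations[0]
        lo, hi = 0.0, 1.0
        while hi - lo > tol:
            mid = (lo + hi) / 2
            s = sum(mid**(Tk / T1) for Tk in durations)
            lo, hi = (mid, hi) if s < 1 else (lo, mid)
        return (lo + hi) / 2

    # Equal spacing T_k = k recovers the n = 2 Golden Ratio case:
    print(optimal_p1_general([1, 2]))      # ≈ 0.6180 = f
    # Unequal durations shift the optimum:
    print(optimal_p1_general([1, 3, 4]))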

10 Golden Sequence (rule: 1 → 10, 0 → 1)

    Sequence                   #1s (F_n)   #0s (F_{n-1})   Total (F_{n+1} = F_n + F_{n-1})
    1                          1           0               1
    10                         1           1               2
    101                        2           1               3
    10110                      3           2               5
    10110101                   5           3               8
    1011010110110              8           5               13
    101101011011010110101      13          8               21

(#1s)/(#0s) = F_n / F_{n-1} → 1/f, and F_{n+1} = F_n + F_{n-1}
⇒ distribution: 1 = F_n / F_{n+1} + F_{n-1} / F_{n+1}
⇒ p_1 → f, p_0 → f^2.

Likewise for the Penrose tiling: P{fat tile} → f, P{thin tile} → f^2.
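A small generator (my own sketch) for the substitution rule 1 → 10, 0 → 1, verifying the Fibonacci counts in the table above:

    def golden_sequence(generations):
        """Iterate the substitution rule 1 -> 10, 0 -> 1, starting from '1'."""
        s = "1"
        for _ in range(generations):
            s = "".join("10" if c == "1" else "1" for c in s)
        return s

    for g in range(7):
        s = golden_sequence(g)
        ones, zeros = s.count("1"), s.count("0")
        print(g, ones, zeros, len(s))   # columns follow F_n, F_{n-1}, F_{n+1}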

11 Title

