
1 Entropy

2 Optimal Value

Problem: Assume you are designing a digital display.
• The base can be varied.
• The number of display digits can be varied.
The display cost is the product of the base and the number of digits. What is the optimal base to use for the display?

Example: The decimal number 563 costs 10 × 3 = 30 units. The binary number 1000110011 costs 2 × 10 = 20 units.
• Same value as decimal 563
• Lower cost
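
A minimal Python sketch (not part of the original slides; the helper name display_cost is made up for illustration) that reproduces the cost comparison for several bases:

```python
def display_cost(value, base):
    """Return base * (number of digits needed to show `value` in that base)."""
    digits = 1
    while value >= base:   # count digits by repeated integer division
        value //= base
        digits += 1
    return base * digits

# Cost of displaying 563 in a few bases (base 10: 30 units, base 2: 20 units, base 3: 18 units)
for base in (2, 3, 5, 10):
    print(f"base {base:2d}: cost {display_cost(563, base)} units")
```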

3 Natural Base

The number of digits d is the logarithm of the number i in the chosen base n. The cost C(i) is the product of n and d. Differentiating with respect to n locates the optimal base.
• The natural logarithm's base e gives the minimum cost.
This is an example of information theory.
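
The differentiation the slide refers to can be reconstructed as follows (standard argument; the notation C(i, n) is assumed here):

```latex
C(i,n) = n\,d = n\log_n i = n\,\frac{\ln i}{\ln n},
\qquad
\frac{\partial C}{\partial n} = \ln i\;\frac{\ln n - 1}{(\ln n)^2} = 0
\;\Longrightarrow\; n = e \approx 2.718 .
```

So the cost is minimized at the natural base e; among integer bases, base 3 is cheapest, as the sketch above also shows.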

4 Better Information

Problem: Suppose the display only shows the values 0-4. How does the cost of a pental (base 5) display compare to a binary display?

The pental display needs only a single digit.
• Cost 5 × 1 = 5 units
The binary display needs three digits, since 100₂ = 4₁₀.
• Cost 2 × 3 = 6 units
There is better information in this problem than in the previous problem.
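
Using the hypothetical display_cost helper from the sketch after slide 2, the same comparison for a display whose largest value is 4:

```python
# The digit count is fixed by the largest value shown, which is 4 here.
print(display_cost(4, 5))   # 5 * 1 = 5 units
print(display_cost(4, 2))   # 2 * 3 = 6 units, since 100 in base 2 is 4 in base 10
```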

5 Shannon’s Information

In 1948 Claude Shannon identified a way to measure the amount of uncertainty in a set of probabilities.
• Set of probabilities {p₁, p₂, …, pₙ}
• Measure of uncertainty H(pᵢ)
There are three conditions:
1. H is a continuous function of the pᵢ.
2. If all the pᵢ are equal, then H is a monotonically increasing function of n.
3. If the outcomes split into independent subsets with uncertainties Hⱼ, then H is the sum of the Hⱼ.
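
Stated symbolically (a paraphrase of the slide's three conditions, using the notation of the later slides):

```latex
\text{1. } H(p_1,\dots,p_n)\ \text{is continuous in each } p_i .\\
\text{2. } H\!\left(\tfrac{1}{n},\dots,\tfrac{1}{n}\right)\ \text{increases monotonically with } n .\\
\text{3. For independent subsets } j \text{ with uncertainties } H_j:\quad H = \sum\nolimits_j H_j .
```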

6 Increasing Uncertainty

The simplest choice is a sum of an unknown continuous function f(pᵢ).
• Solve for equal probabilities
• No loss of generality
Differentiation sets up condition 2.
• Monotonically increasing
(Carter, chap. 20)
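
The equations summarized on the slide can be reconstructed as (a sketch following the slide's verbal description):

```latex
H(p_1,\dots,p_n) = \sum_{i=1}^{n} f(p_i)
\;\xrightarrow{\;p_i = 1/n\;}\;
H = n\,f\!\left(\tfrac{1}{n}\right),
\qquad
\frac{dH}{dn} = f\!\left(\tfrac{1}{n}\right) - \tfrac{1}{n}\,f'\!\left(\tfrac{1}{n}\right) > 0 .
```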

7 Subset Composition

Let H be composed of two independent subsets.
• r outcomes with H(1/r)
• s outcomes with H(1/s)
• Independent, so n = rs
• Redefine the variables
This can be differentiated with respect to both R and S.
• Leads to constants A, C
• Integrate and substitute
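
A reconstruction of the composition argument (the intermediate change of variables may differ from Carter's, but the constants A and C match the slide):

```latex
g(n) \equiv n\,f\!\left(\tfrac{1}{n}\right),\qquad
g(rs) = g(r) + g(s)
\;\Longrightarrow\; r\,g'(r) = s\,g'(s) = \text{const.}
\;\Longrightarrow\; g(n) = a\ln n + c .
```

Substituting p = 1/n and relabeling the integration constants gives f(p) = A p ln p + C p.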

8 Logarithmic Information

The boundary condition gives f(1) = 0.
• No uncertainty when p = 1
• C = 0
Condition 2 requires A < 0.
• Redefine it as a positive constant
The uncertainty can now be expressed in terms of the probabilities.
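
Applying the boundary condition and condition 2, and writing k for the redefined positive constant:

```latex
f(1) = 0 \;\Rightarrow\; C = 0, \qquad f(p) = A\,p\ln p \;\text{ with } A < 0 .\\
k \equiv -A > 0: \qquad H = -k\sum_{i} p_i \ln p_i .
```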

9 Maximum Uncertainty

The uncertainty is constrained by the total probability and the mean. Maximize H with Lagrange multipliers.
• Assume the variations δpᵢ are independent
• Each term then vanishes separately
Apply the probability constraint to find the multiplier λ.
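
A sketch of the constrained maximization, using λ for the normalization multiplier and β for the mean constraint (the slide's own symbols did not survive extraction):

```latex
\frac{\partial}{\partial p_i}\!\left[-\sum_j p_j\ln p_j - \lambda\sum_j p_j - \beta\sum_j p_j x_j\right] = 0
\;\Rightarrow\; -\ln p_i - 1 - \lambda - \beta x_i = 0
\;\Rightarrow\; p_i = e^{-(1+\lambda)}\,e^{-\beta x_i} .
```

The normalization Σᵢ pᵢ = 1 then fixes λ, leaving pᵢ = e^(−βxᵢ) / Σⱼ e^(−βxⱼ).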

10 Partition Function

The sum over states with the multiplier is the canonical partition function. The mean constraint completes the solution for β.
• Usually not solvable analytically
The second derivative of ln Z gives the variance.
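
Written out, the standard relations consistent with the slide's description are:

```latex
Z(\beta) \equiv \sum_i e^{-\beta x_i},\qquad
p_i = \frac{e^{-\beta x_i}}{Z},\qquad
\langle x\rangle = -\frac{\partial \ln Z}{\partial \beta},\qquad
\langle x^2\rangle - \langle x\rangle^2 = \frac{\partial^2 \ln Z}{\partial \beta^2} .
```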

11 Information Entropy

The partition function includes information about the microscopic behavior of the system. The number of choices available at the microscopic level gives rise to the macroscopic behavior. The quantity H is the information entropy.
• Equivalent to the thermodynamic entropy S
• The constant k is Boltzmann's constant
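
With k identified as Boltzmann's constant, the equivalence can be written as follows; the equal-probability special case recovers Boltzmann's formula:

```latex
S = H = -k\sum_i p_i\ln p_i
\;\xrightarrow{\;p_i = 1/\Omega\;}\;
S = k\ln\Omega .
```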

12 Extra Constraints

Some systems have additional information in the form of functions fₖ(xᵢ).
• The means of these functions are known
• Let f₁(xᵢ) = xᵢ
• The uncertainty must be reduced or unchanged
Each constraint mean is related to the partition function.
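
A sketch of the generalized form, with one multiplier λₖ per constraint function (notation assumed, not taken from the slide):

```latex
p_i = \frac{1}{Z}\exp\!\Bigl[-\sum_k \lambda_k f_k(x_i)\Bigr],\qquad
Z = \sum_i \exp\!\Bigl[-\sum_k \lambda_k f_k(x_i)\Bigr],\qquad
\langle f_k\rangle = -\frac{\partial \ln Z}{\partial \lambda_k} .
```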

13 Entropy and Partition

The entropy is based on the probabilities of the microstates. The probabilities relate to the partition function. The entropy is a Legendre transformation of the logarithm of the partition function.
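
The transformation can be reconstructed from pᵢ = e^(−βxᵢ)/Z in the single-constraint case:

```latex
\frac{S}{k} = -\sum_i p_i\ln p_i
= \sum_i p_i\bigl(\beta x_i + \ln Z\bigr)
= \ln Z + \beta\,\langle x\rangle ,
```

so S/k and ln Z are related by a Legendre transformation that exchanges β and ⟨x⟩.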

