
1
**Applications of non-equilibrium statistical mechanics**

Differences and similarities with equilibrium theories, and applications to problems in several areas. Birger Bergersen, Dept. of Physics and Astronomy, University of British Columbia. BNU Office 208, Yingdong bldg. Link to slides at

2
Overview

Overview of equilibrium systems: ensembles, variable types, variational principles, fluctuations, detailed balance, equations of state, phase transitions, information theory.

Steady states vs. equilibrium: linear response, Onsager coefficients, equilibrium “economics”, approach to equilibrium, Le Chatelier’s principle.

Oscillatory reactions: Belousov-Zhabotinsky reaction, Brusselator. SIR and Lotka-Volterra models.

Effects of delay: logistic equation with delayed carrying capacity. Inertia: transition to turbulence. Dimensional analysis.

Stochastic processes: master equation, birth and death, Fokker-Planck equation, Brownian motion, escape probability, Parrondo games.

Other processes: material failure, multiplicative noise, lognormal distribution, black swans. Cauchy and Lévy distributions, first-passage times, fat tails, Zipf plots.

Correlated random walks: self-avoiding walks, rescaled range analysis, fractal Brownian motion, spectral analysis, fractal dimension. Continuous time walks. Sub- and super-diffusion. Smart moulds.

3
**Starting point for equilibrium statistical physics: **

Macro-state: average over allowed micro-states.

Micro-canonical ensemble: insulating walls; N, E, V specified. Temperature T, pressure P, chemical potential μ fluctuating. Entropy S maximum subject to constraints.

Canonical ensemble: heat bath at temperature T; T, V, N specified. Entropy S, energy E, pressure P, chemical potential μ fluctuating. Helmholtz free energy A = E − TS minimum subject to constraints: competition between energy and entropy.

4
**Thermodynamic derivatives of one component system:**

In the micro-canonical ensemble the energy of a state is a function of S, V, N with an exact differential

dE = T dS − P dV + μ dN

The identification T = ∂E/∂S, P = −∂E/∂V, μ = ∂E/∂N can be thought of as definitions of T, P, μ. The exact differential implies Maxwell identities such as ∂T/∂V|ₛ = −∂P/∂S|ᵥ. The first law of thermodynamics states that dE = δQ − δW, where Q stands for heat added to the system and W for work done by the system. Q and W are process variables, not state variables: knowledge of the initial and final states of a process does not allow a determination of the heat and work of the process. The distinction is important e.g. in utility theory of economics and game theory [3],[4],[5].

5
Gibbs-Duhem relation

Equilibrium thermodynamic variables such as S, V, N, E are extensive, i.e. proportional to system size. Other variables such as T, P, μ are intensive, independent of size if the system is large. Let us rescale a one component system by a factor λ: E(λS, λV, λN) = λE(S,V,N). Differentiating both sides with respect to λ gives TS − PV + μN = E, which is the Gibbs-Duhem relation. The environment surrounding the atoms and molecules in ordinary matter is local and does not depend on the size of the object. This is sometimes referred to as the law of existence of matter [6]. Evolving systems such as ecologies, societies, organisms, the internet, and earthquake fault systems do not follow this law, and they are not equilibrium systems; nor are astronomical objects held together by gravity.

6
**Grand canonical ensemble**

Gibbs ensemble: heat bath; P, T, N specified; V, μ, S fluctuating; G = E − TS + PV = μN minimum.

Grand canonical ensemble: heat bath, porous walls; μ, V, T specified; N, P, S fluctuating; grand potential Ω = −PV minimum.

Mathematically the different free energies are related by Legendre transforms.

7
**Canonical ensemble: dA = −SdT − PdV + μdN. Example: isothermal atmosphere**

If the atmospheric pressure at height h = 0 is p(0) and m is the mass of a gas molecule, then p(h) = p(0) exp(−mgh/kT). But real atmospheres are not isothermal and not in equilibrium.
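As a quick numerical illustration of the barometric formula p(h) = p(0) exp(−mgh/kT), the following sketch computes the scale height kT/mg; the N₂ molecular mass and the 288 K temperature are illustrative assumptions, not values from the slides:

```python
import math

# Barometric formula p(h) = p(0) exp(-m g h / kT) for an isothermal atmosphere.
k = 1.380649e-23      # Boltzmann constant, J/K
m = 4.65e-26          # mass of an N2 molecule, kg (illustrative)
g = 9.81              # gravitational acceleration, m/s^2
T = 288.0             # temperature, K (illustrative)

scale_height = k * T / (m * g)            # height over which p drops by a factor e
ratio_5km = math.exp(-5000.0 / scale_height)
print(scale_height, ratio_5km)            # scale height ~ 8.7 km
```

The scale height of order 8-9 km is why cabin pressurization matters on commercial flights; real atmospheres deviate because T itself varies with height.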

8
**Canonical fluctuations:**

...

10
I assume that what we have done so far are things you have seen before, but to refresh your memory let us do some simple problems.

Problem 1: In the micro-canonical ensemble the change in energy E, when there is an infinitesimal change in the control parameters S, V, N, is dE = TdS − PdV + μdN, so that T = ∂E/∂S, P = −∂E/∂V, μ = ∂E/∂N.
a. Show that if two systems are put in contact by a conducting wall their temperatures at equilibrium will be equal.
b. Show that, if the two systems are separated by a frictionless piston, their pressures will be equal at equilibrium.
c. Show that if they are separated by a porous wall the chemical potential will be equal on the two sides at equilibrium.

11
Solution: Instead of considering the state variable E as E(S,V,N) we can treat the entropy as S(E,V,N):

dS = (1/T) dE + (P/T) dV − (μ/T) dN, or 1/T = ∂S/∂E

Let E = E₁ + E₂ be the energy of the compound system; the entropy is S = S₁ + S₂. At equilibrium the entropy is maximum, so 0 = ∂S/∂E₁ = ∂S₁/∂E₁ + (∂S₂/∂E₂)(∂E₂/∂E₁). Since ∂E₂/∂E₁ = −1 we find 1/T₁ = 1/T₂, or T₁ = T₂. A similar argument for the case of the piston tells us that at equilibrium P₁/T₁ = P₂/T₂; but since T₁ = T₂, P₁ = P₂. The same argument tells us that μ₁ = μ₂. Note that if the two systems are kept at different temperatures, the entropy is not maximum when the pressure is equal, as required by mechanical equilibrium!

12
Problem 2: If q and p are canonical coordinates and momenta, the number of quantum states between q and q+dq, p and p+dp is dp dq/h, where h is Planck’s constant. If a particle of mass m is in a 3D box of volume V its canonical partition function is

Z₁ = V (2πmkT/h²)^(3/2)

The partition function for an ideal gas of N identical particles is Z_N = Z₁ᴺ/N!. The factor N! comes about because we cannot tell which particle is in which state.
a. Find expressions for the Helmholtz energy A, the mean entropy S, the mean energy E and the pressure P.
b. Two ideal gases with N₁ = N₂ = N molecules occupy separate volumes V₁ = V₂ = V at the same temperature. They are then mixed in a volume 2V. Find the entropy of mixing. Show that there is no entropy change if the molecules are the same!

13
Solution: We have A = −kT ln Z_N = −kT(N ln Z₁ − ln N!). For large N we can use Stirling’s formula ln N! = N ln N − N and find

A = −NkT [ln( (V/N)(2πmkT/h²)^(3/2) ) + 1]

The entropy can be obtained from S = −∂A/∂T = Nk[ln( (V/N)(2πmkT/h²)^(3/2) ) + 5/2]. Similarly E = A + TS = 3NkT/2 and P = −∂A/∂V = NkT/V. If the two gases are different the change in entropy comes about because V → 2V for both gases. Hence ΔS = 2Nk ln 2. If the two gases are the same there is no change in the total entropy.
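The derivatives in this solution can be checked symbolically. A minimal sketch with sympy, starting from the Stirling-approximated free energy:

```python
import sympy as sp

# Symbolic check of Problem 2: derive S, E, P from the ideal gas Helmholtz
# free energy and verify the entropy of mixing.
N, V, T, k, m, h = sp.symbols('N V T k m h', positive=True)
Z1 = V * (2*sp.pi*m*k*T/h**2)**sp.Rational(3, 2)   # one-particle partition function
A = -k*T*(N*sp.log(Z1) - (N*sp.log(N) - N))        # A = -kT ln(Z1^N / N!), Stirling

S = -sp.diff(A, T)          # entropy
E = sp.simplify(A + T*S)    # internal energy -> 3NkT/2
P = -sp.diff(A, V)          # pressure -> NkT/V

# Mixing two *different* gases, each N molecules in V, into 2V:
# only the V -> 2V change in each gas matters.
dS_mix = sp.expand_log(2*(S.subs(V, 2*V) - S), force=True)
print(sp.simplify(E), sp.simplify(P), sp.simplify(dS_mix))
```

The log terms cancel in A + TS, leaving the purely thermal E = 3NkT/2, and the mixing entropy reduces to 2Nk ln 2 as stated.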

14
**Problem 3. Maxwell speed distribution**

Assuming that the number of states with velocity components between (vx, vy, vz) and (vx+dvx, vy+dvy, vz+dvz) is proportional to dvx dvy dvz, and that the energy of a molecule with speed v is mv²/2, find the probability distribution F(v) for the speed. Find the average speed ⟨v⟩, the mean kinetic energy ⟨mv²/2⟩, and the most probable speed, i.e. the speed for which dF/dv = 0.

15
Solution: Going to polar coordinates we find that F(v) ∝ v² exp(−mv²/2kT). To find the proportionality constant we use the normalization ∫₀^∞ F(v) dv = 1; after some algebra:

F(v) = 4π v² (m/2πkT)^(3/2) exp(−mv²/2kT)

Integrating we find ⟨v⟩ = √(8kT/πm) and ⟨mv²/2⟩ = 3kT/2, and from dF/dv = 0 the most probable speed is √(2kT/m).
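A Monte Carlo sketch of this result: sampling Gaussian velocity components and checking the mean speed and mean kinetic energy, in units where kT/m = 1 (so the predicted mean speed is √(8/π) ≈ 1.596):

```python
import numpy as np

# Sample Maxwell speeds: each velocity component is Gaussian with variance kT/m = 1.
rng = np.random.default_rng(0)
v = rng.normal(size=(1_000_000, 3))
speed = np.linalg.norm(v, axis=1)

mean_speed = speed.mean()                  # -> sqrt(8 kT / (pi m)) ~ 1.596
mean_kinetic = 0.5 * (speed**2).mean()     # -> (3/2) kT/m
print(mean_speed, mean_kinetic)
```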

16
**Principle of detailed balance:**

The canonical ensemble is based on the assumption that the accessible states j have a stationary probability distribution π(j) = exp(−βE(j))/Z(β). This distribution is maintained in a fluctuating environment in which the system undergoes a series of small changes with probabilities P(m←j). For the distribution to be stationary it must not change by transitions in and out of a state:

π(m) = ∑ⱼ P(j←m) π(m) = ∑ⱼ P(m←j) π(j)

This is satisfied by requiring detailed balance:

P(j←m)/P(m←j) = π(j)/π(m) = exp(−β(E(j)−E(m)))

In an equilibrium simulation we get the correct equilibrium properties by satisfying detailed balance and making sure there is a path between all allowed states (Monte Carlo method). We can ignore the actual dynamics. Very convenient!
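A minimal Monte Carlo sketch of this idea for a two-state system (energies 0 and 1 in units of kT are an illustrative choice): the Metropolis acceptance rule min(1, exp(−βΔE)) satisfies detailed balance, so the chain samples the Boltzmann distribution regardless of the real dynamics.

```python
import numpy as np

# Metropolis sampling of a two-state system with E0 = 0, E1 = 1 (units of kT).
rng = np.random.default_rng(1)
beta, E = 1.0, np.array([0.0, 1.0])
state, counts = 0, np.zeros(2)
for _ in range(200_000):
    trial = 1 - state                     # propose the other state
    # accept with probability min(1, exp(-beta dE)): detailed balance
    if rng.random() < np.exp(-beta * (E[trial] - E[state])):
        state = trial
    counts[state] += 1
p = counts / counts.sum()
print(p[1] / p[0])     # -> exp(-beta) = pi(1)/pi(0)
```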

17
**P(1←2) = P(1←3) = P(2←1) = P(2←3) = P(3←1) = P(3←2) = 1/2**

Example: The urn to the left contains 2 balls, the one to the right 1. At fixed time intervals one ball is picked from each urn and they are exchanged. The two red balls are in principle distinguishable, so there are 3 microstates in the system, and they are equivalent. If we do not want to distinguish between the 2 red balls, there are 2 “macrostates”, corresponding to whether there is a red or a blue ball in the right urn. The transition probabilities are P(1←1) = P(2←2) = P(3←3) = 0 and P(1←2) = P(1←3) = P(2←1) = P(2←3) = P(3←1) = P(3←2) = 1/2. Detailed balance is satisfied, and the “equilibrium” probabilities are 2/3 and 1/3 for the red and blue macrostates, respectively.

18
Equations of state: The equilibrium state of ordinary matter can be specified by a small number of control variables. For a one component system the Helmholtz free energy is A = A(N,V,T). Other variables such as the pressure P, the chemical potential μ and the entropy S are given by equations of state: μ = ∂A/∂N, P = −∂A/∂V, S = −∂A/∂T. An approximate equation of state describing non-ideal gases is the van der Waals equation

(P + aN²/V²)(V − Nb) = NkT

which follows from the free energy A = −NkT[ln( ((V−Nb)/N)(2πmkT/h²)^(3/2) ) + 1] − aN²/V. Comparing with the monatomic ideal gas expression we interpret the constant a as a measure of the attraction between molecules, outside a hard core volume b.

19
**Equilibrium phase transitions**

For special values of the control parameters (P, N, T in the Gibbs ensemble) the order parameters (V, μ, S) change abruptly. P vs. T for a single component system, where the solid/liquid and gas/liquid phases co-exist at equilibrium, is shown at the top, while the specific volume v = V/N for a given P is below. The gas/liquid and solid/liquid transitions are 1st order (or discontinuous). The critical endpoint of the gas/liquid line is a 2nd order (or continuous) transition. Specific heat and fluctuations diverge at the critical point.

20
**Combining the P-V and P-T plots into a single 3D plot.**

21
**Problem 4: Van der Waals equation.**

a. Express the equation as a cubic equation in v = V/N.
b. Find the values vₚ, Tₚ, Pₚ of v, T, P for which the roots coincide.
c. Rewrite the equation in terms of the reduced variables p = P/Pₚ, t = T/Tₚ, v̄ = v/vₚ. The resulting equation is called the law of corresponding states.
d. Discuss the behaviour of the isotherms of the law of corresponding states.

22
**Solution: Multiplying out and dividing by a common factor we find**

v³ − (b + kT/P) v² + (a/P) v − ab/P = 0

When the roots are equal this expression must be of the form (v − vₚ)³ = 0. Comparing term by term we find, after some algebra,

vₚ = 3b, kTₚ = 8a/27b, Pₚ = a/27b²

Again, after some algebra, in terms of the new variables p = P/Pₚ, t = T/Tₚ, v̄ = v/vₚ we find the law of corresponding states

(p + 3/v̄²)(3v̄ − 1) = 8t

There are no free parameters: all van der Waals fluids are the same! We next plot p(v,t) for different values of t.
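The critical values can be verified symbolically by demanding that the critical isotherm has an inflection point, dP/dv = d²P/dv² = 0:

```python
import sympy as sp

# Verify the van der Waals critical point: P = kT/(v-b) - a/v^2 in terms of v = V/N.
T, a, b, k, v = sp.symbols('T a b k v', positive=True)
p_vdw = k*T/(v - b) - a/v**2

T_of_v = sp.solve(sp.diff(p_vdw, v), T)[0]              # T making dP/dv = 0 at given v
vc = sp.solve(sp.diff(p_vdw, v, 2).subs(T, T_of_v), v)[0]  # then d2P/dv2 = 0 fixes v
Tc = sp.simplify(T_of_v.subs(v, vc))
Pc = sp.simplify(p_vdw.subs({v: vc, T: Tc}))
print(vc, Tc, Pc)    # 3b, 8a/(27 b k), a/(27 b^2)
```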

23
**We have μ = (A+PV)/N, so that dμ = (−SdT + VdP)/N and μ₂ − μ₁ = ∫₁² V dP/N**

For t < 1 there is a range of pressures p₁ < p < p₂ with 3 solutions. The points p₁ and p₂ are spinodals. In the region between them the compressibility −∂V/∂P is negative and the system is unstable. The equilibrium state has the lowest chemical potential μ. We have μ = (A+PV)/N, so that dμ = (−SdT + VdP)/N and, along an isotherm, μ₂ − μ₁ = ∫₁² V dP/N. The equilibrium vapor pressure is where the areas between the horizontal line and the isotherm are equal. Between the equilibrium pressure and the spinodal there is a metastable region. Dust can nucleate rain drops; if the particles are too small, surface tension prevents nucleation. The van der Waals theory is qualitatively but not quantitatively ok. [The figure shows isotherms for t = 0.9, 1 and 1.1 with the spinodal pressures p₁ and p₂ marked.]

24
**Symmetry breaking: Alben model.**

A large class of equilibrium systems is characterized by a high temperature symmetric phase and a low temperature broken-symmetry phase. Some examples:
Magnetic models: selection of the orientation of “spins”; Ising model, Heisenberg model, Potts model...
Miscibility models: a high temperature solution phase separates at lower temperatures.
Many other models in cosmology, particle and condensed matter physics.
The Alben model [7] is a simple mechanical example. An airtight piston of mass M separates two regions, each with N ideal gas molecules at temperature T. The piston is in a semicircular tube of cross section a. The order parameter of the model is the angle φ of the piston.

25
**The free energy of an ideal gas is**

A = −NkT ln(V/N) + terms independent of V, N, yielding the equation of state PV = NkT with P = −∂A/∂V. We add the gravitational potential energy to obtain

A = MgR cos φ − NkT(ln[aR(π/2 + φ)/N] + ln[aR(π/2 − φ)/N])

Minimizing with respect to φ gives

0 = ∂A/∂φ = −MgR sin φ + 8NkT φ/(π² − 4φ²)

This equation has a nonzero solution, corresponding to a minimum, for T < T₀ = MgRπ²/(8Nk). The resulting temperature dependence of the order parameter is shown below to the left. To the right we show the behaviour if the amounts of gas on the two sides are N(1 ± 0.05).

26
At equilibrium:
Second law: variational principles apply: entropy maximum subject to constraints → free energy minimum. System stable, with reversible, Gaussian fluctuations (except at phase transitions, continuous or discontinuous) → no arrow of time; detailed balance.
Zeroth law: pressure, temperature and chemical potential constant throughout the system. No currents (of heat, charge, particles).
Thermodynamic variables: extensive (proportional to system size: N, V, E, S) or intensive (independent of size: μ, T, P). Law of existence of matter. Typical fluctuations proportional to the square root of size.

27
**Information theory approach**

30
**Steady state vs. equilibrium:**

Steady states are time independent, but not in equilibrium. Examples: a heat current through a sample held between temperatures T and T+ΔT; an electric current J driven through a conductor; osmosis between salt water and fresh water. There is no zeroth law: gradients of temperature, chemical potential and pressure are allowed.

31
**Economy: energy, raw materials, labour and capital flow in; products, waste, wages and profits flow out**

If there is no throughput there is no economy! Economic “equilibrium” is an oxymoron. Not everyone agrees [9]!

32
**Equilibrium wealth distribution: N = # of citizens >> 1**

w = W/N = average wealth, x = wealth of an individual. The most probable wealth distribution [9] is exponential: P(x) = (1/w) exp(−x/w). Equilibrium is approached if funds are spent and earned in many small random amounts. We can imagine this taking place if wealth changes by buying and selling goods and services. The average wealth is analogous to the thermodynamic temperature. Problem: wealth inequality varies from society to society, and over time! The wealth of the very rich follows a power law [10]. Most people’s earnings come from salaries, which tend to change by a multiplicative process. The income distribution is similar, but the inequality tends to be less pronounced.
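A toy simulation of this mechanism: agents repeatedly trade, with each trading pair's combined funds randomly repartitioned, so wealth changes in many random amounts while the total is conserved. The model details (agent count, sweep count, uniform repartition) are illustrative assumptions, not taken from reference [9]. The stationary distribution is approximately exponential, for which the fraction of agents below the mean is 1 − e⁻¹ ≈ 0.63.

```python
import numpy as np

# Random-exchange "wealth" model: pairwise trades conserve the pair's total.
rng = np.random.default_rng(2)
n_agents = 10_000
wealth = np.ones(n_agents)                   # everyone starts with w = 1
for _ in range(200):                         # 200 sweeps of random pair trades
    idx = rng.permutation(n_agents)
    a, b = idx[:n_agents//2], idx[n_agents//2:]
    pot = wealth[a] + wealth[b]
    split = rng.random(n_agents//2)          # random repartition of each pair's total
    wealth[a], wealth[b] = split*pot, (1 - split)*pot

frac_below_mean = (wealth < wealth.mean()).mean()
print(wealth.mean(), frac_below_mean)        # mean conserved; fraction ~ 1 - 1/e
```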

33
The power law part of the income or wealth distribution is commonly referred to as the Pareto tail after Vilfredo Pareto, who wrote in 1887: In all places and at all times the distribution of income in a stable economy, when the origin of measurement is at a sufficiently high income level, will be given approximately by the empirical formula y = A x^(−v), where y is the number of people having an income x or greater and v is approximately 1.5. To my knowledge this is the first claim in the literature of universality of critical exponents! Empirically the exponent is probably not universal. There is also some indication that the exponent is different for the very and the extremely rich. Note that the number of people in the tail will not be universal, so the impact on overall inequality will vary from society to society.

34
**Steady state entropy is not maximized (subject to constraint)**

In a non-equilibrium steady state the entropy is not maximized (subject to constraint). Example: Two equal gas containers are connected by a capillary; one is kept at a higher temperature Th than the other, Tc. If the capillary is not too thin the system will reach mechanical equilibrium (constant pressure). The cold container will then have more molecules than the hot one. The entropy per molecule is higher in the hot gas. If the entropy were maximum subject to the temperature constraint, molecules would move from cold to hot. This does not happen! The chemical potential will be different on the two sides in the steady state. Steady states are not governed by a variational principle! The zeroth law does not apply!

35
**Systems near equilibrium**

Ohm’s law for the electrical current: V = RI, with V = voltage, R = resistance, I = current.
Fick’s law of diffusion: J = −D dc/dx.
Stokes’ law for the settling velocity of a sphere: u = 2r²g(ρₛ − ρ_f)/(9η).
Many similar laws!

36
**More subtle effects [11]:**

A temperature gradient in a bimetallic strip produces an electromotive force → Seebeck effect.
An electric current through a bimetallic strip leads to heating at one end and cooling at the other → Peltier effect.
A dust particle moves away from a hot surface towards cooler regions → thermophoresis.
Particles in a mixture may selectively move towards hotter or cooler regions → Ludwig-Soret effect.
In rarefied gases a temperature gradient may produce a pressure gradient → thermomolecular effect.

37
**Le Chatelier’s principle**

Any change in concentrations, volume or pressure is counteracted by a shift in equilibrium that opposes the change. If a system is disturbed from equilibrium, it will return monotonically towards equilibrium as the disturbance is removed (no overshoot). This is also suggested by the fact that the Onsager coefficients have real eigenvalues. In neoclassical economics it has been suggested that if supply exceeds demand, prices will fall, production will be cut, and equilibrium restored. A similar process should occur if demand exceeds supply. But is this true?

38
**Belousov-Zhabotinsky reaction**

In the 1950s Boris Pavlovich Belousov discovered an oscillatory chemical reaction in which Ce(IV) ions in solution are reduced by one chemical to Ce(III) and subsequently oxidized back to Ce(IV), causing a periodic color change. Belousov was unable to publish his results, but years later Anatol Zhabotinsky repeated the experiments and produced a theory. By now many other chemical oscillators have been found.

39
**Rate equation approach:**

Chemistry’s law of mass action allows one to find steady states (equivalent to equations of state) and the approach to equilibrium. Consider the reaction X + Y ↔ Z, in which 2 atoms combine to form a molecule. Concentrations: x, y, z. Rate constants: k₁ (→), k₂ (←). Rate equations:

dx/dt = dy/dt = −k₁xy + k₂z; dz/dt = k₁xy − k₂z

The steady state is obtained by putting the time derivatives to zero: z/(xy) = k₁/k₂. The approach to the steady state can then be obtained by solving the differential equations.
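A numerical sketch of the approach to the steady state (the rate constants and initial concentrations are illustrative choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

# X + Y <-> Z with forward rate k1 and backward rate k2:
# dx/dt = dy/dt = -k1 x y + k2 z,  dz/dt = k1 x y - k2 z.
k1, k2 = 2.0, 1.0

def rates(t, u):
    x, y, z = u
    flux = k1*x*y - k2*z
    return [-flux, -flux, flux]

sol = solve_ivp(rates, (0.0, 50.0), [1.0, 0.8, 0.0], rtol=1e-10, atol=1e-12)
x, y, z = sol.y[:, -1]
print(z / (x*y), k1/k2)   # the mass-action ratio relaxes to k1/k2
```

Note that x + z is conserved (every Z locks up one X), which the integration preserves.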

40
The Brusselator: Ilya Prigogine and co-workers developed a simple model which shows similar behavior to the BZ system. Usually it is studied as a spatial model in which the reactants diffuse in addition to reacting. We will simplify to the well stirred case in which concentrations are uniform. The model consists of the fictional set of reactions

A → X; 2X + Y → 3X; B + X → Y + D; X → E

The concentrations a and b are kept constant, and D and E are eliminated waste products. The reactions are irreversible and for simplicity the rate constants are set to 1. The corresponding rate equations are

dx/dt = a + x²y − bx − x; dy/dt = bx − x²y

41
**The model has a fixed point (steady state) obtained by setting reaction rates to zero**

0 = a + x²y − bx − x; 0 = bx − x²y

We find x₀ = a, y₀ = b/a. In order to study the stability properties we linearize about the fixed point: x = a + ξ, y = b/a + η. We find

dξ/dt = (b−1)ξ + a²η + ...; dη/dt = −bξ − a²η + ...

The solution is a linear combination of exponentials, with exponents the eigenvalues of the matrix

( b−1   a² )
( −b   −a² )

We find that the eigenvalues are complex (oscillatory solution) if 1 + a² + 2a > b > 1 + a² − 2a. They have a negative real part (stable fixed point) if b < 1 + a².
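A numerical sketch of what happens beyond the instability: for a = 1, b = 3 we have b > 1 + a², so the fixed point (a, b/a) is unstable and the concentrations settle onto a limit cycle.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Well-stirred Brusselator: dx/dt = a + x^2 y - (b+1) x, dy/dt = b x - x^2 y.
def brusselator(t, u, a, b):
    x, y = u
    return [a + x*x*y - (b + 1)*x, b*x - x*x*y]

a, b = 1.0, 3.0                         # b > 1 + a^2: fixed point unstable
sol = solve_ivp(brusselator, (0.0, 100.0), [1.1, b/1.1], args=(a, b),
                max_step=0.05)
x_late = sol.y[0][sol.t > 50.0]         # discard the transient
print(x_late.min(), x_late.max())       # sustained oscillation around x0 = a
```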

42
**Linear stability analysis of the fixed point**

A linear stability analysis of the fixed point in parameter space is shown in the picture to the right. Trajectories of the concentrations x and y are shown below. Le Chatelier’s principle does not apply because the system is driven: the concentrations of A and B must be kept constant, and the reactions are irreversible.

43
**SIR-model of epidemiology:**

Divide the population into susceptible, S, infected, I, and removed (immune), R. Put Ω = S + I + R. The following processes take place:
Susceptibles are born or introduced at rate Ωγ(1−ρ).
Infecteds are introduced at rate Ωγρ.
Susceptibles become infected at rate βSI/Ω.
Susceptibles die at rate γS.
Infecteds are removed at rate λI, by death or immunity.
Assume that the total population Ω is kept constant. Introduce the concentrations φ = S/Ω, ψ = I/Ω and obtain the rate equations

dφ/dt = γ(1−ρ) − βφψ − γφ; dψ/dt = γρ + βφψ − λψ

Conditions for a steady state are obtained by putting the time derivatives to zero. Make one more simplification and put ρ = 0.

44
**Condition for steady state:**

0 = γ − βφψ − γφ; 0 = βφψ − λψ

There are 2 solutions: (i) ψ = 0, φ = 1, and (ii) φ = λ/β, ψ = (γ/λ)(1 − λ/β). I leave it as an exercise to show that (i) is stable when c = β/λ < 1, while (ii) is stable when c > 1. In that case the system will approach the steady state through damped oscillations. It is easy to extend the model to several sub-populations, or several diseases. Fluctuations can also be incorporated.
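Integrating the ρ = 0 rate equations numerically shows the damped-oscillation approach to the endemic steady state; the parameter values below are illustrative choices with β/λ > 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

# SIR with births/deaths (rho = 0): phi = S/Omega, psi = I/Omega.
def sir(t, u, gamma, beta, lam):
    phi, psi = u
    return [gamma*(1 - phi) - beta*phi*psi, beta*phi*psi - lam*psi]

gamma, beta, lam = 0.1, 2.0, 1.0          # beta/lam = 2 > 1: endemic state stable
sol = solve_ivp(sir, (0.0, 2000.0), [0.99, 0.01], args=(gamma, beta, lam),
                rtol=1e-9, atol=1e-12)
phi_inf, psi_inf = sol.y[:, -1]
# converges to phi = lam/beta = 0.5 and psi = (gamma/lam)(1 - lam/beta) = 0.05
print(phi_inf, psi_inf)
```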

45
**Problem 5: Lotka-Volterra model**

Big fish eat little fish to survive. The little fish reproduce at a certain net rate and die by predation. With N₁ little fish and N₂ big fish, the model rate equations are

dN₁/dt = aN₁ − bN₁N₂; dN₂/dt = −cN₂ + dN₁N₂

a. Simplify the model by introducing the new variables x = dN₁/c, y = bN₂/a, τ = at, r = c/a.
b. Find the steady states of the populations and their stability. Discuss the solutions to the rate equations.
c. In the model the population of little fish grows without limit if the big fish disappear. Modify the little fish equation to add a logistic term, with K the carrying capacity. How does this change things?
d. If the population of big fish becomes too small it may become extinct. Modify the model to include a small immigration rate. Neglect the effect of the finite carrying capacity.

46
**Solution: a. With the substitutions the equations become**

dy/dτ = −ry(1−x); dx/dτ = x(1−y)

b. There are two steady states, {x=0, y=0} and {x=1, y=1}. The former is a saddle point (exponential growth in the x-direction, and exponential decay of the big fish). Linearizing around the second point, x = 1+ξ, y = 1+η, yields dη/dτ = rξ, dξ/dτ = −η, which are the equations for harmonic motion around a center. The full differential equations can be solved exactly. Dividing the two equations and cross multiplying gives

(1−y) dy/y = −r (1−x) dx/x

This equation can easily be integrated.
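Integrating the separated equation gives the first integral H = ln y − y + r(ln x − x), constant along each orbit. A numerical sketch confirming that H is conserved (so the orbits are closed curves):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reduced Lotka-Volterra: dx/dt = x(1 - y), dy/dt = -r y (1 - x).
r = 1.0
sol = solve_ivp(lambda t, u: [u[0]*(1 - u[1]), -r*u[1]*(1 - u[0])],
                (0.0, 20.0), [1.5, 0.5], rtol=1e-10, atol=1e-12)
x, y = sol.y
H = np.log(y) - y + r*(np.log(x) - x)    # first integral
drift = H.max() - H.min()
print(drift)                              # ~ 0: H conserved, orbits closed
```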

47
**c. In reduced units we write for the rate equations**

dy/dτ = −ry(1−x); dx/dτ = x(1 − y − x/K)

There is a new fixed point {y=0, x=K}. Linearizing about this point, y = η, x = K+ξ:

dη/dτ = r(K−1)η; dξ/dτ = −Kη − ξ

The eigenvalues r(K−1) and −1 are real, so this fixed point is a saddle for K > 1 and a stable node for K < 1. The old fixed point at {1,1} is now shifted to {x=1, y=1−1/K}; it is unphysical for K < 1. Linearizing for K > 1 with y = 1−1/K+η, x = 1+ξ:

dη/dτ = r(1−1/K)ξ; dξ/dτ = −η − ξ/K

The eigenvalues are λ± = [−1/K ± √(1/K² − 4r(1−1/K))]/2, i.e. if 4rK(K−1) > 1 it is a stable focus, otherwise a stable node.

48
**Here are some typical situations:**

[Plots for r=1, K=2; r=0.1, K=1.6; r=1, K=0.5.] In the left plot there is not enough little fish habitat to support a big fish population. In the middle plot the big fish can survive a long time without food. In the plot to the right the big fish are more dependent on a steady supply of little fish.

49
**d. With λ the big fish immigration rate in reduced units, we have**

dy/dτ = −ry(1−x) + λ; dx/dτ = x(1−y)

The {x=0, y=0} fixed point shifts to {x=0, y=λ/r}. Linearizing around this point with x = ξ, y = λ/r + η gives

dξ/dτ = ξ(1−λ/r); dη/dτ = λξ − rη

Assuming r > λ, we see that the point remains unstable. There is a second fixed point at {x = 1−λ/r, y = 1}. With x = 1−λ/r+ξ, y = 1+η:

dη/dτ = rξ − λη; dξ/dτ = −(1−λ/r)η

The eigenvalues are λ± = [−λ ± √(λ² − 4(r−λ))]/2. The system will thus undergo damped oscillations towards a stable fixed point. If λ is small the damping is weak. If the carrying capacity is finite, immigration will not change things much, except that when K < 1 there will be a small number of starving big fish. If they have any sense they will emigrate again!

50
Concluding remarks: The SIR model is quite robust; it can easily be modified without changing the overall qualitative properties. The Lotka-Volterra model is fragile: small modifications lead to drastic changes. Recall that the Lotka-Volterra equation was separable. Integrating, we found the first integral ln y − y + r(ln x − x) = const. Different values of the constant give rise to the different closed curves in the x-y plane. A similar situation arises with coupled harmonic oscillators. Any kind of dissipation destroys the separability. Both the L-V and the SIR models exhibit an oscillatory approach to the steady state, violating Le Chatelier’s principle. This happens because the interaction terms have opposite signs. If the rate equations come from the approach to a free energy minimum, with rates proportional to the force (minus the derivative of the potential), the two terms must have the same sign.

51
**The logistic equation dN/dτ=λN(1-N/K)**

is important in ecology. It gives an idealized description of limits to growth; K = carrying capacity, λ = growth rate. Put t = λτ, y = N/K:

dy/dt = y(1−y)

The equation is easy to solve. The solution assumes an instantaneous response to changes in population size. In practice there is often a time delay between a change in population and its effect on the growth rate. In economics there is a time delay between investment and results.
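Separating variables gives the closed-form solution y(t) = 1/(1 + (1/y₀ − 1)e⁻ᵗ). A quick sketch comparing it with a direct numerical integration:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reduced logistic equation dy/dt = y (1 - y).
y0 = 0.1
sol = solve_ivp(lambda t, y: y*(1 - y), (0.0, 10.0), [y0],
                rtol=1e-10, atol=1e-12, dense_output=True)
t = np.linspace(0.0, 10.0, 40)
exact = 1.0 / (1.0 + (1.0/y0 - 1.0)*np.exp(-t))   # closed-form solution
max_err = np.max(np.abs(sol.sol(t)[0] - exact))
print(max_err)    # ~ 0: the sigmoid rises monotonically to y = 1, no overshoot
```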

52
Modify logistic equation to account for delays between changes in the environment and their effect. A celebrated example is the Nicholson [13] equation which describes population fluctuations in sheep blowflies, a known pest on Australian sheep farms: dN/dt=r N(t-τ) exp(-N(t-τ)/K)-mN Here τ is a time delay due to hatching and maturing, r is the birth rate when the population is small compared to the carrying capacity K and m is the death rate.

53
**Hutchinson’s equation: dN/dt′ = r N {1 − N(t′−τ)/K}**

The Nicholson equation is said to describe the data quite well. We will instead consider the simpler Hutchinson equation [14],[15]:

dN/dt′ = r N {1 − N(t′−τ)/K}

With y = N/K and t = t′/τ we get (absorbing a factor τ into r)

dy/dt = r y(t){1 − y(t−1)}

The solution y = 1 is a fixed point; we need to study its stability. Assume that r is positive and y = 1 + δ exp(λt). We obtain the eigenvalue equation λ = −r exp(−λ), or r = −λ exp(λ).

(i) λ real: we find 2 negative roots for r < 1/e, and no real roots for r > 1/e. Negative real roots → stability.

54
(ii) λ = λ₁ + iλ₂ complex: we have λ₁ = −r e^(−λ₁) cos λ₂ and λ₂ = r e^(−λ₁) sin λ₂. We see that λ₁ changes sign when r = (2n+1)π/2, n = 0, 1, 2,..., so the fixed point first becomes unstable at r = π/2. For large values of r the Hutchinson equation is probably not very realistic. To solve dy/dt = r y(t){1 − y(t−1)} numerically for t > 0 we need to specify initial conditions in the range −1 ≤ t ≤ 0. Below we plot numerical solutions for r = 1.4 and r = 1.7.
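A minimal sketch of such a numerical solution, using fixed-step Euler integration with a one-unit history buffer (the constant history y = 0.5 for t ≤ 0 is an illustrative choice). With r = 1.4 < π/2 the oscillations damp out; with r = 1.7 > π/2 they are sustained.

```python
import numpy as np

# Euler integration of Hutchinson's delayed logistic dy/dt = r y(t) (1 - y(t-1)),
# with constant history y = y0 for t <= 0.
def hutchinson(r, t_end=200.0, dt=0.001, y0=0.5):
    lag = int(round(1.0 / dt))          # the delay is one reduced time unit
    n = int(round(t_end / dt))
    y = np.empty(n + 1)
    y[0] = y0
    for i in range(n):
        y_delayed = y0 if i < lag else y[i - lag]
        y[i + 1] = y[i] + dt * r * y[i] * (1.0 - y_delayed)
    return y

def late_amplitude(y, keep=50_000):     # oscillation size over the last 50 time units
    tail = y[-keep:]
    return tail.max() - tail.min()

amp_stable = late_amplitude(hutchinson(1.4))    # r < pi/2: decays to y = 1
amp_unstable = late_amplitude(hutchinson(1.7))  # r > pi/2: limit cycle
print(amp_stable, amp_unstable)
```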

55
**Another delay example is the pig farmers dilemma**

Another delay example is the pig farmer’s dilemma. A pig farmer must decide how many animals to raise for next season. To do this he/she needs to guess the price they will sell at. If too few pigs are raised the price will be high, if too many it will be low. In the first case the farmer will do well by raising many pigs; in the second case he/she should raise fewer. But if all the pig farmers think the same way (herding effect), the guess will be wrong in all cases. This type of situation is commonly referred to in the physics literature as a minority game (see e.g. [16]). This game represents a nightmare situation for forecasters. Not only do expert forecasts do poorly (see e.g. [17]), but their opinions tend to cluster around a prediction which is far from what actually happens. Satinover and Sornette [18] discuss this situation in the context of the minority game (and the Parrondo games to be discussed later) and call it the illusion of control, a phrase coined by the psychologist Ellen Langer [19].

56
In the previous example the stability of the system depends on the ratio between the timescale for growth and the time delay. If the delay is too large the system becomes unstable. There are numerous applications in:
Ecology (delay due to maturation, re-growth)
Medicine (regulatory mechanisms require production of hormones etc.)
Economics (investments take time to produce results)
Politics (decisions take time)
Mechanics (regulatory feedback involves sensory input that must be processed)
There is evidence that our brains automatically try to calculate a trajectory to compensate for the delay when catching or avoiding a flying object.

The effect of inertia: Physically the effect of inertia is similar to that of delay. It takes time for a force to take effect!

57
Flow in a pipe: At low velocity the flow is laminar and constant; inertia plays no role. At higher velocity, irregularities in the pipe wall and external noise cause disturbances. Competition between inertia and viscosity determines whether laminar flow is re-established.

58
Writing Re = uˣ dʸ νᶻ and requiring the combination to be dimensionless, we find x = 1, y = 1, z = −1, giving the dimensionless Reynolds number

Re = ud/ν

The flow changes character at a critical Reynolds number (commonly quoted as Re ≈ 2300 for smooth pipes). The critical number depends on the noise level and roughness. Just like the gas/liquid or melting/freezing transitions, the transition is discontinuous and requires a trigger. In the absence of viscous friction the flow velocity u would be related to the pressure head h by hg = u²/2. We then associate a frictional head h_f with frictional losses. It is proportional to the length l of the pipe; the only other length in the problem is the diameter d of the pipe. We define the friction factor f by the Darcy-Weisbach law:

h_f = f (l/d) u²/(2g)

A log-log plot of f vs. Re is called a Moody plot and is sketched on the next page.

59
**Moody plot when pipe roughness is turbulence trigger**

r=pipe radius, k=roughness parameter (k≈1 mm for concrete, k≈0.05 mm for steel pipe)

60
**Discrete Markov processes:**

There are a number of situations in which the system evolution can be described by a series of stochastic transitions from one state to another, but detailed balance cannot be assumed. Suppose the system is in state i and let P(j←i) be the probability that the next state is j. In a Markov process these probabilities depend only on the states, not on the history. P(j←i) can be represented by a matrix in which each column sums to one, ∑ⱼ P(j←i) = 1. This matrix is regular if all elements of (P(j←i))ⁿ are positive and nonzero for some n (it must be possible to go from any state to any other state in a finite number of steps). There will then be a steady state probability distribution π(i), given by the eigenvector of P(j←i) with eigenvalue 1, normalized so that ∑ π(i) = 1, in analogy with ensembles in equilibrium theory.

61
Problem 6: A random walker moves back and forth between three different rooms as shown. The walker exits through any available door with equal probability, but on average stays twice as long in room 1 as in any other room.
a. Set up the transition matrix P(j←i).
b. Is the matrix regular?
c. Find the steady state probability distribution π(i).

62
Solution: a. With the rooms in a row (doors 1↔2 and 2↔3, and the walker staying put in room 1 with probability 1/2, consistent with the stated answer), the transition matrix is

P = ( 1/2  1/2  0 )
    ( 1/2  0    1 )
    ( 0    1/2  0 )

Note that the column entries have to add up to one, but there is no such restriction on the rows.
b. The matrix is regular: it is possible to get from any room to any other room in two steps.
c. To find the steady state probabilities we must solve the eigenvalue problem π(j) = ∑ᵢ P(j←i) π(i). We find that the normalized eigenvector with eigenvalue 1 is π(1) = π(2) = 2/5, π(3) = 1/5.
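The eigenvector can be found numerically. A sketch (the room layout encoded in P is the assumption described above):

```python
import numpy as np

# Transition matrix, columns = "from", rows = "to": rooms 1-2-3 in a row,
# and the walker stays in room 1 half the time.
P = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 1.0],
              [0.0, 0.5, 0.0]])
assert np.allclose(P.sum(axis=0), 1.0)     # columns sum to one

w, v = np.linalg.eig(P)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])   # eigenvector for eigenvalue 1
pi = pi / pi.sum()                               # normalize probabilities
print(pi)    # [0.4, 0.4, 0.2] = (2/5, 2/5, 1/5)
```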

63
Master equation: Transition events need not be evenly spaced in time. To obtain the time evolution of the probability we need to consider the transition rate W(q←q’) which for a continuous process is governed by the master equation (or Kolmogorov’s forward equation) ∂P(q,t)/ ∂t=∫ dq’[W(q←q’)P(q’,t)-W(q’←q)P(q,t)]

64
**One step birth and death**

The extension to discrete processes is straightforward. Consider the case in which a population changes because individuals are born or die. We put
W(n+1←n) = b(n)  (birth rate)
W(n−1←n) = d(n)  (death rate)
W(n′←n) = 0  otherwise
The master equation then becomes

∂P(n,t)/∂t = d(n+1)P(n+1,t) + b(n−1)P(n−1,t) − [d(n)+b(n)]P(n,t)

To obtain the steady state distribution π(n), put the r.h.s. to zero. For nontrivial steady states to exist we need d(0) = π(−1) = 0, allowing us to solve the problem by recursion:

d(1)π(1) = b(0)π(0), ..., d(n)π(n) = b(n−1)π(n−1)

We can determine π(0) by normalizing the probabilities, and thus have an exact solution!

65
Branching processes: These are birth and death processes in which the processes A → 2A and A → 0 happen at rates β and γ respectively. Unlike the previous case, the zero population state is an absorbing state: if the system reaches this state it stays there. The master equation is

∂P(n,t)/∂t = β(n−1)P(n−1,t) + γ(n+1)P(n+1,t) − (β+γ)nP(n,t)

We start with a single individual at t = 0 and want to find the survival probability and its asymptotic value at large times, which might represent the long time survival probability of a new genetic mutation.

66
**We define the conditional probability**

P(n, t₁ | m, t₂), the probability that there are n individuals at time t₁ given that there were m at t₂, satisfies the Chapman-Kolmogorov equation

P(n, t₁ | m, t₂) = ∑ₖ P(n, t₁ | k, t) P(k, t | m, t₂),  t₂ < t < t₁

(at intermediate times the system must be in some state!). Differentiating with respect to time, inserting the transition rates W(k←m), and subtracting the rate ∑ₖ W(k←m) at which “something” happens, we get, collecting terms, the Kolmogorov backwards equation:

∂P(n, t | m, 0)/∂t = ∑ₖ W(k←m)[P(n, t | k, 0) − P(n, t | m, 0)]

67
**In our branching model the backward Kolmogorov equation becomes**

The n=0 state is absorbing so We define The backwards equation can be rewritten The two branches in G₂ evolve independently so We get with the boundary condition G₁(z,0)=z

68
**The differential equation can be solved by elementary (if tedious) methods. We find**

If the death rate is greater than the birth rate (γ>β) the population will die out exponentially. If γ<β the long-time survival probability is 1 − γ/β. If γ=β the expression for the survival probability is indeterminate, but we can work out the limit: at the critical point β=γ the population decays as a power law with exponent −1.

69
**Example: Spruce budworm model**

These insects are a serious pest, killing and damaging coniferous trees in many parts of Canada and the USA. They have a 12-month life cycle: eggs → larvae, which hibernate in the winter → wake up in spring → feed on (preferably fresh) needles → pupate to become moths → mate → produce eggs. They damage trees by eating the needles and are in turn preyed on by birds.

70
Ludwig and coworkers [21] presented a very simplified model in which the number of insects is represented by a single number N. The birth rate is b(N) = βN + ∆, where the immigration rate ∆ is assumed to be very small. The death rate is d(N) = (β−r₀)N + r₀N²/K + BN²/(A²+N²). Here r₀ is an effective growth rate; K is a logistic carrying capacity, proportional to the number of trees and assumed constant; and B is proportional to the number of birds (also assumed constant). If the number of insects is small the birds will feed on something else, while A represents a saturation effect. We first consider a rate-equation approach similar to what we did for the Brusselerator: dN/dt = r₀N(1−N/K) − BN²/(A²+N²) + ∆
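The bistable structure of this rate equation can be illustrated numerically. The sketch below uses the dimensionless form dn/dt = r n(1−n/q) − n²/(1+n²), with ∆ = 0 and hypothetical parameters r and q chosen inside the bistable region:

```python
import numpy as np

# Dimensionless budworm equation with hypothetical parameters r = 0.5, q = 10,
# chosen so that a low "refuge" state and a high "outbreak" state coexist.
r, q = 0.5, 10.0

def dndt(n):
    return r * n * (1.0 - n / q) - n**2 / (1.0 + n**2)

n = np.linspace(0.01, 12.0, 200001)
f = dndt(n)
roots = n[:-1][f[:-1] * f[1:] < 0.0]   # sign changes bracket the fixed points
print(roots)  # three positive fixed points: low, middle (unstable), high
```

Scanning r at fixed q shows the two saddle-node bifurcations at which the outbreak state appears and the refuge state disappears.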

71
A very similar situation arises in the case of the gas-liquid transition as described by the van der Waals equation. When there are two stable solutions, one will have a higher probability than the other, which is then said to be metastable.

72
Returning to the master equation approach: when the system-size parameter A is very large we can replace a sum over N by an integral and use some reference number other than N=0 to start the recursion. We find that near the maxima the distribution will be Gaussian, except at the critical point, and typical fluctuations are proportional to sqrt(A). Intermediate states have low probability for large A, and metastable states theoretically last a long time. The state with the highest probability is the analogue of the equilibrium state!

73
**The Fokker-Planck equation**

We had for the master equation of a continuous system ∂P(q,t)/∂t = ∫dq′ [W(q←q′)P(q′,t) − W(q′←q)P(q,t)]. The equation can be rewritten in terms of the "jump" r = q−q′ and the jump rate w(q′,r) = W(q←q′). We find ∂P(q,t)/∂t = ∫dr w(q−r,r)P(q−r,t) − P(q,t)∫dr w(q,−r). We expand the integrand in the first integral in powers of r. The leading term cancels with the second integral and we are left with a series in the derivatives of the jump moments. The expansion is called the Kramers-Moyal expansion.

74
**If we truncate the Kramers-Moyal expansion after the second term and use the notation**

A(q) = a₁(q), D(q) = ½a₂(q), we obtain the Fokker-Planck equation. Pawula's theorem [23] states that truncating the expansion at any finite order beyond the second leads to probability distributions that are not positive definite. The full expansion is equivalent to the master equation, so with it we are no further ahead. The Fokker-Planck equation can also be derived from a system-size expansion [1][20]. The basic assumption of the Fokker-Planck equation is that the jumps are short-ranged. If this is not true, serious errors may result!

75
**The structure of the Fokker-Planck equation becomes more transparent if we rewrite it as**

∂P(q,t)/∂t = −∂J/∂q, where J = A(q)P(q,t) − ∂/∂q [D(q)P(q,t)] is the probability current. The situation is analogous to the customary treatment of macroscopic currents of particles or charges. Here μ is the mobility, f a force, and D the diffusion constant. The Fokker-Planck equation is thus just a statement of the conservation of probability. The equation also goes under the name of the Smoluchowski equation.

76
**Mesoscopic particle in a stationary fluid**

Orders of magnitude: particle size r = nm to micron; time between collisions with molecules in the fluid τ < ns; times of interest t = s to days; "infinitesimal" time δt ≫ τ. On these time scales the mean velocity is <v> = μf, with force f and mobility μ. With the corresponding jump moments, the Fokker-Planck equation describes the motion of a Brownian particle. If μf = A₀ + A₁x is linear and D constant, the Fokker-Planck equation is said to be linear and the process is an Ornstein-Uhlenbeck process. If D is not constant the diffusion is heterogeneous.

77
**Example: A settling dust particle:**

Force due to gravity: Mg; assume D constant. The Fokker-Planck equation is now solved by going to the moving frame y = x + Mgμt, with P(x,t) = Π(y,t). If at t=0 the particle was located at y=0, the solution for t>0 is a Gaussian Π with variance σ² = 2Dt. A random walk of this type is called a Bachelier-Wiener process. The motion is continuous, but the instantaneous velocity is nowhere defined.
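A simulation sketch of the settling particle, with hypothetical parameter values: each increment consists of a deterministic drift −Mgμ dt plus Gaussian noise of variance 2D dt.

```python
import math, random

# Settling Brownian particle; all parameter values below are hypothetical.
random.seed(0)
M, g, mu, D = 1.0, 9.8, 0.1, 0.5
dt, steps, walkers = 1.0e-3, 1000, 2000
t = dt * steps

xs = []
for _ in range(walkers):
    x = 0.0
    for _ in range(steps):
        # drift -M*g*mu*dt plus a Wiener increment of variance 2*D*dt
        x += -M * g * mu * dt + random.gauss(0.0, math.sqrt(2.0 * D * dt))
    xs.append(x)

mean = sum(xs) / walkers
var = sum((x - mean) ** 2 for x in xs) / walkers
print(mean, var)  # mean ~ -M*g*mu*t = -0.98, var ~ 2*D*t = 1.0
```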

78
**The Gaussian distribution**

is stable, meaning that the sum of two Gaussian random variables is another Gaussian with σ² = σ₁² + σ₂². This implies that we can generate an instance of a Brownian random walk by considering the motion as made up of a series of small incremental steps of length δy and duration δt, generated from a Gaussian random distribution of variance σ² = 2Dδt. Gaussian random number generators are easily available; for an example see [24] (more about simulations later). The probability distribution is self-affine: it is invariant under a simultaneous rescaling of space and time, x → λx, t → λ²t.

80
**Reflection at a boundary:**

The falling dust particle solution does not admit a steady state: eventually the particle will reach the ground. If it then diffuses back into the atmosphere when hitting the ground at x=0, we require that the probability current vanish at x=0. In the steady state J=0 everywhere. This gives us an ordinary differential equation with a normalized, exponentially decaying solution: the equilibrium (barometric) probability distribution. In the case of thermal Brownian motion the mobility and diffusion constant must thus satisfy the Einstein relation D = μkT.

81
**Absorbing boundaries:**

We wish to find the rate at which a diffusing particle leaves a region. We handle this by putting the particle in a "limbo state *" at the boundary, and requiring the probability to be zero at the actual boundary. Consider diffusion on a square with corners at (0,0), (0,L), (L,0), and (L,L). Solving the Fokker-Planck equation with the condition that P(x,y,t) vanishes at the boundary, we find that in the long-time limit the smallest eigenvalue dominates and the survival probability decays exponentially.

82
Velocity relaxation: If the velocity of a particle is disturbed from its stationary-state value, there will be a time scale τv for relaxation towards the stationary state. So far τv has been assumed short compared to the times of interest; we will now study the velocity relaxation process in more detail. We still consider times δt large compared to the time between collisions. Our macroscopic equation and the first jump moment (if there is no force f) lead to the Fokker-Planck equation. In the stationary state the probability current is zero.

83
In equilibrium this distribution should equal the one-dimensional Maxwell-Boltzmann distribution (β=1/kT). Substituting this into the Fokker-Planck equation we obtain the Rayleigh equation. According to our previous definition this equation is linear and describes an Ornstein-Uhlenbeck process. We can solve it explicitly with the condition that the velocity is initially v₀. We define τv = Mμ and find
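The relaxation of the mean velocity can be checked with a discretized Ornstein-Uhlenbeck sketch; τv and the noise strength below are hypothetical, chosen so that the stationary variance is 1.

```python
import math, random

# Euler-discretized Ornstein-Uhlenbeck process: <v(t)> = v0 exp(-t/tau).
random.seed(1)
tau, v0 = 1.0, 5.0
dt, steps, walkers = 1.0e-3, 1000, 2000    # integrate to t = 1
noise = math.sqrt(2.0 * dt / tau)          # gives stationary variance 1

total = 0.0
for _ in range(walkers):
    v = v0
    for _ in range(steps):
        v += -v * dt / tau + random.gauss(0.0, noise)
    total += v
mean_v = total / walkers
print(mean_v)  # ~ v0 * exp(-1) = 1.84
```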

84
**We see that τv has the physical interpretation of a velocity relaxation time. The mean velocity is**

If the force does not change significantly over the path traversed during the relaxation time, we are allowed to neglect velocity correlations and can treat the particle as following Aristotelian mechanics, in which the velocity, rather than the acceleration, is proportional to the force.

85
Example: Consider an overdamped particle moving in a harmonic potential, subject to a force −γx and with diffusion constant D. It is initially at x₀. With suitable replacements the Fokker-Planck equation is of the same form as the Rayleigh equation, and we could write down the solution directly. We can also find the mean and variance without solving the equation, by integrating by parts.

86
Integrating by parts, and integrating the second term by parts once more, with the initial condition <x²>=0 at t=0, gives the solution, in agreement with the previous result. It is relatively straightforward to generalize the present formalism to the case where both v(t) and x(t) are dynamical variables.

87
Kramers' equation: If both v and x are dynamical variables, we have jump moments for both. We have assumed the equilibrium value for the second jump moment of the velocity. The Fokker-Planck equation now goes under the name of Kramers' equation. Many examples of solutions to this equation are given in the book by Risken [23]. The same method of integration by parts as before can be used to find <x(t)²>, <v(t)²>, and <x(t)v(t)>.

88
Let us work out a couple of those averages, so that we see how it works. Consider a harmonic oscillator with force constant γ starting at x₀, v₀. We have The terms involving partial derivatives with respect to v all vanish because of the boundary condition at infinity. We find Now only the second term is nonzero and, not surprisingly The variances can be found in a similar manner.

89
Kramers' escape rate: Consider a Brownian particle moving in an energy well as in the figure. Initially the particle is located near b. We wish to calculate the rate at which it reaches an absorbing boundary at e. The force on the particle is f(x) = −dU/dx and we assume the Einstein relation D = μ/β. We also assume that near b there is approximate thermal equilibrium, so that the probability that the particle is near b follows the Boltzmann factor. The probability current is given by

90
**We now assume that near b**

While near d the potential is approximated by an inverted parabola. Assume that the probability current J(x,t) is independent of x. Multiplying the bottom expression on the previous slide by exp(β[U(x)−U(b)]), integrating from b to e, and noting that since e is absorbing P(e,t)=0, we get the current. Most of the contribution to this integral comes from the region near d, while the formula for the probability p that the particle is in the well is dominated by the region near b. We find for the escape rate r = J/p. Our next approximation is to extend the integrations to ±∞; performing the Gaussian integrations we get the Kramers escape rate

91
The Kramers escape-rate formula only depends on the behavior near d and b; what happens at c is of no consequence! Landauer's blow-torch paradox [27]: if the path is heated near c, the diffusion is sped up there, but this is not reflected in the formula! Probability current with heterogeneous diffusion: J = μf(x)P(x,t) − ∂/∂x [D(x)P(x,t)] = [μf(x) − dD(x)/dx]P(x,t) − D(x) ∂P(x,t)/∂x. The x-dependence of D leads to an effective force, causing the effective barrier height to change. Heterogeneous diffusion is responsible for thermophoresis and the Ludwig-Soret effect, and for the difference between the electromotive and electric fields in thermoelectric effects. We have also assumed Gaussian noise, which may not apply in non-thermal cases.
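The escape-rate formula can be tested against the exact overdamped mean first-passage time, T = (1/D)∫ from b to e dy e^{βU(y)} ∫ from −∞ to y dz e^{−βU(z)}, a standard result for the same Fokker-Planck setup (it is not derived in the text above). The double-well potential and parameter values below are hypothetical.

```python
import math

# Hypothetical double well U(x) = x^4/4 - x^2/2: well b = -1, barrier d = 0,
# absorbing boundary e = 1, barrier height dU = 1/4, U''(-1) = 2, U''(0) = -1.
beta, mu = 20.0, 1.0
D = mu / beta                        # Einstein relation D = mu kT
U = lambda x: x**4 / 4 - x**2 / 2

lo, hi, N = -3.0, 1.0, 40000         # exp(-beta U) is negligible below -3
h = (hi - lo) / N
xs = [lo + i * h for i in range(N + 1)]
w = [math.exp(-beta * U(x)) for x in xs]
inner = [0.0]                        # cumulative inner integral (trapezoid)
for i in range(N):
    inner.append(inner[-1] + 0.5 * (w[i] + w[i + 1]) * h)

T = 0.0
for i in range(N):
    if xs[i] >= -1.0:                # outer integral runs from the well at b = -1
        T += 0.5 * (math.exp(beta * U(xs[i])) * inner[i]
                    + math.exp(beta * U(xs[i + 1])) * inner[i + 1]) * h
T /= D

# Kramers: r = (mu/2pi) sqrt(U''(b)|U''(d)|) exp(-beta dU)
r_kramers = mu / (2 * math.pi) * math.sqrt(2.0 * 1.0) * math.exp(-beta * 0.25)
print(T * r_kramers)  # close to 1 when beta*dU is large
```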

92
**Over the barrier hopping:**

A particle is moving in a periodic potential with period a and is also subject to an external force F. Most of the time it is trapped by barriers: the tilt changes the barrier height from V to V ∓ Fa/2 in the two directions, with maxima near x = d and minima near b = d + a/2. From the net jump rates to the left and right one obtains the mean drift velocity.

93
In the figure we plot the temperature dependence of the hopping rate in reduced units. The hopping rate is maximal at a temperature where kT is close to the barrier height V. Stochastic resonance [25] is a related phenomenon in which thermal noise enhances a signal with a frequency of the order of an inverse barrier escape time. Stochastic resonance is important for sensory information processing in the brain [26].

94
Molecular motors: Living organisms depend for survival on directed non-equilibrium processes. On the cellular level, processes are kept out of equilibrium by metabolism and by directed transport assisted by thermal noise. A very detailed review of molecular motors can be found in Reimann [28]. Cytoskeletal motors are responsible for muscle contraction and cargo transport to and from the cell nucleus. Rotary motors drive the flagella responsible for bacterial swimming. Nucleic-acid motors separate DNA strands prior to cell division and govern the transcription of RNA from DNA. A simplified mechanism for these processes is the flashing ratchet.

95
**Diffusion on a ratchet:**

A particle diffusing in a ratchet potential as shown is equally likely to diffuse in either direction: left-right asymmetry alone cannot lead to directed motion. If the potential is tilted by an external force, thermal noise produces a net average motion in the direction of the tilt, just as for a Brownian particle diffusing in a tilted flat potential.

96
In the flashing ratchet, the saw-tooth potential is switched on and off with a period ≈ the time to freely diffuse one cell length. The Gaussians represent the probability distribution after one period. When the potential is on, the particle is mostly stuck in one cell; when it is off, the particle is more likely to diffuse into the cell to the right than into the cell to the left. The motion persists even if the potential is slightly tilted to the left: the motor can run uphill!

97
**Parrondo game [29] discretizes flashing ratchet as a game.**

Game A: the probability of losing 1 € is ½ + ε (of winning, ½ − ε). Game B: if the capital is a multiple of 3, the loss probability is 9/10 + ε; if the capital is not a multiple of 3, the win probability is ¾ − ε. Each is a losing game if ε > 0. To analyze B we need the steady-state probabilities π(i), i = € (mod 3).

98
**To find the steady state probabilities need to diagonalize**

Naively one would expect π(0) to be ⅓, but it turns out that playing the game reorganizes the probabilities. We use this result to work out the probability of winning. The result satisfies the ratchet requirement that at equilibrium (ε=0) the game should be neutral! We conclude that games B and A are both losing games. If instead we randomly switch between A and B, the probability π(0) will be reduced towards ⅓, making the combination a winning game!

99
**If we switch randomly between games the transition matrix is**

The steady-state probability for the state with capital a multiple of 3 is now slightly less than what we had for game B alone, and the game is winning. If instead we change the frequency ratio of games A and B to 3:2 we get slightly better results. Periodic sequences such as AABBAABB... also work.
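The games are easy to simulate. The probabilities below are the standard Harmer-Abbott parametrization of Parrondo's games (an assumption, since the slides' own numbers are not reproduced in the text): game A wins with probability ½ − ε; game B wins with probability 1/10 − ε when the capital is a multiple of 3 and ¾ − ε otherwise.

```python
import random

# Parrondo's paradox with the assumed standard parameter values.
random.seed(42)
eps, steps = 0.005, 200000

def play(strategy):
    capital = 0
    for _ in range(steps):
        if strategy(capital) == 'A':
            p = 0.5 - eps
        else:                          # game B depends on capital mod 3
            p = (0.1 - eps) if capital % 3 == 0 else (0.75 - eps)
        capital += 1 if random.random() < p else -1
    return capital

a_only = play(lambda c: 'A')
b_only = play(lambda c: 'B')
mixed = play(lambda c: random.choice('AB'))   # random switching
print(a_only, b_only, mixed)  # A and B drift down, the mixture drifts up
```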

100
There is a trivial best winning strategy: play game A if the capital is 3n, n = ..., −2, −1, 0, 1, 2, ...; play game B otherwise. But even if the player is somehow prevented from knowing the precise capital, capital-dependent rules are unsuitable for some applications. History-dependent game B′: if the two previous games were loss, loss → win probability p₁; loss, win → p₂; win, loss → p₃; win, win → p₄. Game A is unchanged. One can construct the transition probability matrix between the states corresponding to the 4 histories. With p₁ = 9/10−ε, p₂ = p₃ = 1/4−ε, p₄ = 7/10−ε, both A and B′ are neutral if ε=0, losing if ε>0, and winning if the player randomly switches between the two games! For an application to financial markets see Stutzer [31].

101
Collective games: Returning to the original game, consider an ensemble of players choosing to play A or B by majority vote. A player whose capital € is divisible by 3 would prefer A; if € is not divisible by 3, B is preferred. The fractions of players in the two camps are c(0) and c(1,2) = 1 − c(0), respectively. We expect A to be played if c(0) > ½, B otherwise. Suppose initially c(0) > ½ and A is chosen. The winners will then switch sides, reducing c(0), and eventually the vote will favor B. With repeated B plays c(0) drifts until B becomes the permanent choice and everybody loses! If on the other hand the voters are "irrational" and vote randomly, in the long run everybody wins! Finally, Toral [30] studied the case where in game A′ two players i, j are picked at random and 1 € is taken from one and given to the other. Again, occasional redistribution of wealth changes B into a winning game!

102
**Weakest link problems:**

In fractures and other forms of system failure it is not the average property but the property of the weakest link that matters. Assume that the probability that a link will survive a stress σ is p(σ); the probability that a chain with n links fails is then 1 − p(σ)ⁿ. The failure stress is not an intrinsic property such as pressure or temperature. A commonly used form is the Weibull distribution, where a is proportional to the chain length and ρ is expected to be a material property. Weibull [32] suggested ρ=3 for Swedish steel and ρ=1.46 for Indian cotton.

103
**Weakest link problems occur in a large number of different situations:**

Ecosystems with high biodiversity tend to be more resilient. Investors diversify portfolios to reduce risk. "Honor systems" fail to prevent cheating if the group is too large. The earthquake size distribution is different for a fault system and for its individual sub-faults. Ecosystems may contain "keystone" species: if they disappear the whole structure changes drastically. In an economic crisis some agents are "too big to fail". Evolving networks organize into hierarchical structures, e.g. the Internet, social networks, the "tree of life", large organizations. These systems behave differently from equilibrium systems consisting of ordinary matter: in general they are not extensive, and they are not in equilibrium.

104
**Log-normal distribution: **

Suppose the logarithm of a random variable x, not the variable itself, has a normal (Gaussian) distribution with mean μ and variance b. The probability distribution for x is then log-normal. Mean value: <x> = exp(μ + b/2). Most probable value of x: exp(μ − b). The distribution is often used to describe the distribution of e.g. height, weight, or blood pressure in men or women, or the concentration of pollutants in the environment. (The plots show the cases a=1, b=0.5 and a=1, b=2.)
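The moment formulas can be checked by sampling (the parameter values below are arbitrary):

```python
import math, random

# If ln x is Gaussian with mean mu and variance b, then <x> = exp(mu + b/2).
random.seed(2)
mu, b, n = 0.0, 0.25, 200000
xs = [math.exp(random.gauss(mu, math.sqrt(b))) for _ in range(n)]
sample_mean = sum(xs) / n
print(sample_mean)  # close to exp(mu + b/2) = exp(0.125)
```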

105
**Multiplicative processes:**

A.N. Kolmogorov constructed a simple model for the size distribution of crushed rock:

106
The central limit theorem of statistics states that, if X is the sum of n independent random variables x(i) with mean <x> and variance b = <(x−<x>)²>, then X will be Gaussian with mean n<x> and variance nb. This suggests that the n'th-generation rock-size distribution should be lognormal. This approximation works well when the distribution of ln x is narrow, but can be treacherous if occasional rare events have a big impact. Nassim N. Taleb [33] calls such events Black Swans:

107
Binomial processes: Consider an n-step multiplicative process. At each step the outcome is either x or y, with probability p and 1−p respectively. If x occurs i times the outcome is z = xⁱ yⁿ⁻ⁱ, which occurs with the binomial probability. The probability distribution for z follows, and the mean outcome is [px + (1−p)y]ⁿ. In the log-normal approximation we expect a log-normal distribution with μ = p ln x + (1−p) ln y and b = p(ln x)² + (1−p)(ln y)² − μ².
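These formulas can be evaluated exactly for a concrete case (n=100, p=0.5, x=1.1, y=0.9, the benign case discussed below):

```python
import math

n, p, x, y = 100, 0.5, 1.1, 0.9

# exact mean of the product over the binomial distribution;
# by the binomial theorem it equals (p*x + (1-p)*y)**n = 1
exact_mean = sum(math.comb(n, i) * p**i * (1 - p)**(n - i) * x**i * y**(n - i)
                 for i in range(n + 1))

mu = p * math.log(x) + (1 - p) * math.log(y)
b = p * math.log(x)**2 + (1 - p) * math.log(y)**2 - mu**2
lognormal_mean = math.exp(n * mu + n * b / 2)

print(exact_mean, lognormal_mean)  # both close to 1 for this benign case
```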

108
The lognormal distribution compares well with the exact result for the case n=100, p=0.5, x=1.1, y=0.9: the exact mean is 1, and the log-normal approximation also gives a value very close to 1. The exact mean with p=0.001, x=10, y=110/111 is still 1, but now the outcome is almost always a product of small factors: with probability 0.999¹⁰⁰ ≈ 0.90 no factor of 10 occurs at all and z = (110/111)¹⁰⁰ ≈ 0.40. The log-normal mean falls well below the exact mean of 1. Not so good!

109
**Louis Bachelier's 1900 theory of Brownian motion was based on the return**

of the price of a financial stock. Assuming a Gaussian distribution for the return, he found a lognormal distribution for the future price. The Black-Scholes-Merton model of option pricing extends the Bachelier theory. Scholes and Merton won the 1997 economics Nobel prize, but their work was severely criticized by Taleb [33], Bouchaud [34], Mandelbrot [35], Sornette [36] and others, basically for ignoring the black swans. Scholes and Merton were directors of the hedge fund Long-Term Capital Management, which made use of their theory. Bear Stearns handled their trading, Merrill Lynch their customers. LTCM collapsed in 1998, Bear Stearns in 2008, and Merrill Lynch almost collapsed the same year. All were bailed out.

110
Cauchy distribution: When deriving the Fokker-Planck equation we assumed the jump moments were finite and not too large, and Gaussian distributions played a prominent part. A big advantage of Gaussians is that they are stable: the convolution of two Gaussians is another Gaussian, so we can change the value of the time step by rescaling the variables. We must now consider the case where large deviations matter. The Cauchy distribution is another stable distribution, as can be seen by doing the contour integral: the parameters a and b simply add under convolution. This makes a Cauchy walk self-similar.
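Stability can also be checked by sampling. The sum of two standard Cauchy variables should be Cauchy with b doubled; since the Cauchy distribution has no variance, the check below uses quartiles, which for Cauchy(0,b) sit at ±b.

```python
import math, random

# Sum of two standard Cauchy samples: should be Cauchy(0, 2),
# so the interquartile range is close to 4.
random.seed(3)
n = 200000

def cauchy():
    # inverse-CDF sampling of a standard Cauchy variable
    return math.tan(math.pi * (random.random() - 0.5))

z = sorted(cauchy() + cauchy() for _ in range(n))
iqr = z[3 * n // 4] - z[n // 4]
print(iqr)  # close to 4
```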

112
In probability theory the Fourier transform p(k) of the probability distribution P(x) is called the characteristic function. Not all functions p(k) can serve: the normalization condition imposes the constraint p(0)=1, and p(k) must satisfy the inequality |p(k)| ≤ 1. The condition that P(x) be positive semi-definite imposes additional constraints. For a Gaussian p(k) = exp(−bk²), while for the Cauchy distribution p(k) = exp(−b|k|).

113
A very useful property of the characteristic function is that the Fourier transform of the convolution integral is a product, as follows from the properties of the Dirac δ-function. Note that if p(k) = exp(−b′|k|^α), the convolution will be of the same form but with b = 2b′. b may be complex as long as p*(k) = p(−k), but we must have Re b > 0. How does the inverse Fourier transform of p(k) look?

114
**Only if α≤2 will f(k) generate a distribution which is nonnegative everywhere!**

115
**Can show that the inverse Fourier transform of**

approaches a power law ~|x|^(−1−α) for large |x| if 0<α<2, while the cumulative distribution (the probability that x ≥ z) approaches ~z^(−α). The variance of a power-law distribution is finite if α>2; the sum of n such random variables then approaches a Gaussian for large n, and the distribution is not stable. For α<2 the power-law distribution is Lévy-stable. The mean exists for α>1 but not for α≤1; the distribution can still be normalized, however, and there is a most probable outcome.

116
Lévy walks: Consider next a random walk where P(x,t) is the probability that a walk starting at the origin at t=0 will reach x at time t. It was shown by Khintchine and Lévy that a necessary and sufficient condition for P(x,t) to be Lévy-stable is that its generating function can be written in the form given in [37]. For such distributions the characteristic width of the walk grows as t^(1/α).

117
**First-passage time distribution:**

Consider the probability P(x,t) that a Gaussian random walker, starting at the origin at t=0, reaches x for the first time at time t. Using the Laplace transforms (with 0<y<x) we find the distribution. It can be shown to be Lévy-stable with α=1/2. It is normalized, but has neither a mean nor a variance. It is called the Lévy-Smirnov distribution.
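A simulation sketch of the first-passage problem, with hypothetical values x=1, D=0.5: because the distribution has no mean, the check below uses the median, which for the Lévy-Smirnov density P(t) = x exp(−x²/4Dt)/√(4πDt³) is x²/(4Du²) with erfc(u)=½ (u ≈ 0.477), about t ≈ 2.2 here.

```python
import numpy as np

# First passage of a Gaussian walker (sigma^2 = 2 D t) to the level x = 1.
rng = np.random.default_rng(0)
D, x_target, dt = 0.5, 1.0, 0.01
walkers, max_steps = 1000, 5000

steps = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=(walkers, max_steps))
paths = np.cumsum(steps, axis=1)
hit = paths >= x_target
crossed = hit.any(axis=1)
first = np.argmax(hit, axis=1)                        # index of first crossing
t_pass = np.where(crossed, (first + 1) * dt, np.inf)  # not yet crossed: +inf
median = np.median(t_pass)                            # fat tail: use the median
print(median)  # roughly 2.2
```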

118
**Some words on simulations:**

Up to now we have mainly been concerned with obtaining probability distributions, but sometimes one is more interested in generating representative realizations through simulations. Consider a stochastic differential equation of the form dx/dt = F(x) + L(t), where F is a deterministic term and L a noise term; to distinguish the two we require <L(t)> = 0. In a simulation we discretize time. Consider first the case where, at the n'th time step, the noise increment is taken from a Gaussian distribution, i.e. is proportional to a variable η drawn from a Gaussian of mean 0 and variance 1.

119
**In the case of uncorrelated noise**

we can formally take the limit ∆→0 to obtain the Langevin equation. We say that η is uncorrelated Gaussian noise, but it is always understood that the equation comes from a limiting procedure as outlined above. From the jump moments it follows that the Langevin equation with Gaussian uncorrelated noise is equivalent to the Fokker-Planck equation. When solving the stochastic equation with discretized time we need a Gaussian random number generator.

120
**The Box-Muller method is an efficient way of producing**

Gaussian-distributed random numbers. Let x₁ and x₂ be independent stochastic variables, and let y₁ and y₂ be related to x₁ and x₂ by a coordinate transformation. If P(x₁,x₂) and Π(y₁,y₂) are the probability distributions for the x and y variables respectively, then Π(y₁,y₂)dy₁dy₂ = P(x₁,x₂)dx₁dx₂ = J P(x₁,x₂)dy₁dy₂, where J is the Jacobian determinant. Choose y₁ = √(−2 ln x₁) cos(2πx₂), y₂ = √(−2 ln x₁) sin(2πx₂).

121
We find for the Jacobian J = (1/2π) exp(−(y₁²+y₂²)/2). If x₁ and x₂ are uniformly distributed between 0 and 1, y₁ and y₂ will have a Gaussian distribution with 0 mean and variance 1. A C-program based on this algorithm can be found in [24]. What about non-linear noise? The trouble with such an equation is that it is ambiguous; van Kampen [20] calls it a "meaningless string of symbols". The root of the problem is that even though <η(t)>=0, <c(x)η(t)> in general is not. This means that if we want to use the Langevin equation we must accompany it with an interpretation.
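The transformation above translates directly into code; this is a minimal sketch rather than the optimized C-program of [24]:

```python
import math, random

random.seed(4)

def box_muller():
    # two independent uniforms; shift x1 onto (0, 1] to avoid log(0)
    x1 = 1.0 - random.random()
    x2 = random.random()
    r = math.sqrt(-2.0 * math.log(x1))
    return r * math.cos(2.0 * math.pi * x2), r * math.sin(2.0 * math.pi * x2)

n = 100000
samples = []
for _ in range(n // 2):
    y1, y2 = box_muller()
    samples.extend((y1, y2))

mean = sum(samples) / n
var = sum(s * s for s in samples) / n - mean**2
print(mean, var)  # ~0 and ~1
```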

122
**If the noise is intrinsic, as in e.g.**

If the noise is intrinsic, as e.g. in birth and death processes, it is "natural" to apply the rate just before the start of the jump. This is known as the Itô interpretation; it can be shown [20] that it leads to a Fokker-Planck equation of one form. If the noise is external, due to the effects of the environment on an open system, it is "natural" to use the Stratonovich interpretation, which leads to a Fokker-Planck equation of a different form [20]. Clearly the results of the two interpretations will differ!

123
**There is no need to limit oneself to Gaussian processes**

There is no need to limit oneself to Gaussian processes. The only assumption made when discretizing time was that the probability distribution is self-affine. This is the case for Lévy distributions with generating functions of the form p(k) = exp(−b|k|^α). Suppose Lα is taken from such a distribution when ∆=1; then the increment for a general time step scales as ∆^(1/α)Lα. An algorithm for generating random numbers from such distributions is described in [38], while a C-program to implement it can be downloaded from [39]. The difference is that now the second jump moment does not exist and there is no corresponding Fokker-Planck equation (although a similar-looking equation involving fractional derivatives can be constructed [37]). Non-linear Lévy noise has been discussed by Srokowski [40] in the present context.

124
Fat tails: Probability distributions that fall off as a power law for large x, with α not too large, are often said to have fat tails. They are very common and often arise from non-equilibrium evolutionary processes. In continuous-time processes P(x,t) often has short-time fat tails; if α>2 the distribution has a finite variance and approaches a Gaussian at large times. The behaviour thus depends on the time interval chosen to describe the process, but there is no ambiguity for discrete events. Often large observational data sets are available, and we want to use them to estimate the frequency distribution of large events.

125
**As an example consider the probability distribution**

As an example consider a power-law probability distribution for x ≥ 1, with cumulative distribution C(x) (the probability of an outcome ≥ x). The top plot shows a log-log plot of C(x) vs. x; the blue points represent 1000 randomly generated outcomes. In N repeated experiments one expects an outcome ≥ x to occur roughly N·C(x) times. We order the events so that the largest event is given rank 1 and the r'th largest event rank r. A log-log plot of event size against rank is called a Zipf plot; the red points come from an exponential distribution. If C(x) ∼ x^(−α) the Zipf plot will be a straight line with slope 1/α.
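A sketch of the whole procedure: draw power-law outcomes with C(x) = x^(−α) for x ≥ 1 (here α=2), rank them, and fit the Zipf slope, which should come out near 1/α = 0.5.

```python
import math, random

random.seed(5)
alpha, n = 2.0, 10000
# inverse-CDF sampling: x = u^(-1/alpha) with u uniform on (0, 1]
xs = sorted(((1.0 - random.random()) ** (-1.0 / alpha) for _ in range(n)),
            reverse=True)

ranks = range(1, 1001)                   # use the 1000 largest events
lx = [math.log(xs[r - 1]) for r in ranks]
lr = [math.log(r) for r in ranks]
mr, mx = sum(lr) / len(lr), sum(lx) / len(lx)
slope = (sum((a - mr) * (b - mx) for a, b in zip(lr, lx))
         / sum((a - mr) ** 2 for a in lr))
print(-slope)  # close to 1/alpha = 0.5
```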

126
The blue points in the figure are a Zipf plot of the data on the previous slide, while the red points are obtained from an exponential decay process. The information in a Zipf plot is the same as in a log-log plot of the cumulative distribution, but visually the Zipf plot emphasizes large events. Word-frequency distributions of texts in many languages exhibit a straight-line Zipf plot. Recently it was noted that non-coding DNA sequences also do this, though it has been objected that one should not attach too much importance to this result. It is interesting to note that the frequency distribution of Chinese characters does not have a straight-line Zipf plot [41], nor does the distribution of English characters, but the two distributions are not the same.

127
Typing monkeys: Trained monkeys type text written in an alphabet containing the improper letter L(0) and M regular letters L(i), i=1..M [42]. They type L(0) with probability p₀ and any of the other letters with probability (1−p₀)/M. Any given k-letter word then comes up with probability p = p₀[(1−p₀)/M]^k. The most common word has k=0, and shorter words are more common than longer words. The rank of a typical k-letter word is of order M^k. Substituting k = ln r/ln M into the expression for p gives

128
**which is on the Zipf form**

which is on the Zipf form! There are many other examples of systems that exhibit straight-line Zipf plots [42]: amplitude or energy release in earthquakes (the Gutenberg-Richter law); death tolls in earthquakes and other natural disasters; sizes of mass extinctions; size distributions of cities; the income distribution of the very rich (the Pareto law); the number of "hits" at a web site; the number of citations of scientific papers; ...etc.

129
**Correlated random walks:**

Consider a polymer chain of N links, each of length a, extending in space. We denote the position of each link by a vector and write the end-to-end vector as their sum. If the chain is freely jointed, the orientations of successive link vectors are uncorrelated (δ here is the Kronecker delta). We can characterize the chain configuration by its root-mean-square end-to-end distance S(N). It is relatively straightforward to compute other quantities such as the mean radius of gyration and the probability distribution of the end-to-end distance [1].
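The scaling S(N) = a√N of the freely jointed chain is easy to verify by direct sampling (a = 1; the chain and sample sizes below are arbitrary):

```python
import math, random

# Freely jointed chain: N unit links with independent random orientations.
random.seed(6)
N, chains = 100, 2000

total_r2 = 0.0
for _ in range(chains):
    x = y = z = 0.0
    for _ in range(N):
        u = 2.0 * random.random() - 1.0      # uniform direction on the sphere
        phi = 2.0 * math.pi * random.random()
        s = math.sqrt(1.0 - u * u)
        x += s * math.cos(phi)
        y += s * math.sin(phi)
        z += u
    total_r2 += x * x + y * y + z * z

S = math.sqrt(total_r2 / chains)             # RMS end-to-end distance
print(S)  # close to sqrt(N) = 10
```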

130
**In many cases the successive steps in the chain are not completely uncorrelated:**

In the polymer polyethylene there are 3 preferred azimuthal angles for the relative orientation of successive links [1]. But after a few steps along the chain the memory of the original orientation is lost. We can thus describe the chain as having N′ = Na/l freely jointed links of effective length a′, and the scaling law does not change. The length l is called the persistence length.

131
**The self-avoiding walk:**

In dilute polymer solutions each monomer occupies a finite volume, which must be excluded from the other links. This is important in low dimensions. In 1-D the self-avoiding walk is just a straight line and ν=1. If all allowed N-link walks have the same energy, they will be equally likely at equilibrium. The figure shows the 1-, 2-, and 3-step walks on a 2-D square lattice; the numbers indicate multiplicity. The 4- and 5-step walks are on the next slide.

133
It is now straightforward to find the average value of the squared end-to-end distance <S(N)²> by summing over all N-step walks with the appropriate weight factor. A log-log plot of the results indicates that the RMS end-to-end distance scales as N^ν. This is not a bad result considering the shortness of the paths used. The exact result is believed to be ν=3/4, independent of the 2D lattice type; it is close to 3/5 for 3D lattices. For D=4 or higher self-avoidance is irrelevant and ν=1/2, just as for the freely jointed chain. For more detail and references see [1].

134
**Rescaled range analysis:**

The method was developed by the British hydrologist Harold Edwin Hurst, who spent a lifetime studying the flow of the river Nile. Our treatment follows that of Feder [43]. Consider discharges into a reservoir with data collected at regular intervals: N = number of discharges; ξ(n) = n'th influx; <ξ> = mean influx. From these one forms the de-trended cumulated influx, its range R, the standard deviation S of the influxes, and the rescaled range R/S.

135
**Hurst found that many empirical distributions were compatible with**

where H is called the Hurst (or Hölder) exponent (and is essentially equivalent to the exponent ν of self-avoiding walks). These records include: river discharges; lake varves (sediment layers); tree rings; temperatures and rainfall; sunspot numbers; porosity of drilled rock cores. Gaussian random walks have R ∝ √N and S independent of N, giving H=1/2. This is true of all uncorrelated walks, including Cauchy and Lévy walks, although the walks look quite different! This can be verified by simulation. R/S can often be quite well behaved even if R and S separately are not!
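The claim can be verified by simulation. The sketch below computes R/S for segments of a white-noise record and fits H; note that the R/S estimator is known to be biased upward for short segments (Anis-Lloyd), so only a rough H ≈ 0.5 is expected here.

```python
import math, random

random.seed(7)
data = [random.gauss(0.0, 1.0) for _ in range(8192)]   # uncorrelated influxes

def rescaled_range(seg):
    n = len(seg)
    mean = sum(seg) / n
    run, lo, hi = 0.0, 0.0, 0.0
    for v in seg:
        run += v - mean                  # de-trended cumulative sum
        lo, hi = min(lo, run), max(hi, run)
    R = hi - lo                          # range of the cumulated record
    S = math.sqrt(sum((v - mean) ** 2 for v in seg) / n)
    return R / S

pts = []
for size in (64, 128, 256, 512, 1024):
    rs = [rescaled_range(data[i:i + size]) for i in range(0, 8192, size)]
    pts.append((math.log(size), math.log(sum(rs) / len(rs))))

# least-squares slope of log(R/S) vs log(N) gives the Hurst exponent
mr = sum(p[0] for p in pts) / len(pts)
ms = sum(p[1] for p in pts) / len(pts)
H = (sum((a - mr) * (b - ms) for a, b in pts)
     / sum((a - mr) ** 2 for a, _ in pts))
print(H)  # near 0.5 for uncorrelated increments
```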

136
**As an example consider a Gaussian random walk with 2⁹ = 512**

steps, where the increments have 0 mean and variance 1, as shown in the figure to the left. The red curve is a walk with increments obtained from a Gaussian random number generator. The blue curve has the same increments, but randomly reshuffled. To the right the walks are de-trended.

137
We have used the data from the previous slide to perform an R/S analysis. The sequence contains 1 walk of length 512, 2 independent ones of length 256, 4 of length 128, etc. Each of these needs to be de-trended for the analysis. We then calculate the range and standard deviation for each walk and average R/S over walks of the same length. The two sets of points correspond to the original and shuffled data in a log-log plot of R/S vs. N. The data is compatible with H=1/2, and the two data sets are roughly compatible. If you want to convince people that your data is correlated, you must be able to show that the shuffled data is not!
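A minimal version of this procedure might look as follows; the function names (`rescaled_range`, `hurst`) and the least-squares fit over subseries of length 2^k are illustrative choices, not a prescribed implementation:

```python
import math
import random

random.seed(1)

def rescaled_range(xi):
    """R/S for one record of increments xi: range of the de-trended
    cumulated sum divided by the standard deviation of the increments."""
    n = len(xi)
    mean = sum(xi) / n
    x, cum = 0.0, []
    for v in xi:
        x += v - mean          # cumulated departure from the mean
        cum.append(x)
    r = max(cum) - min(cum)
    s = math.sqrt(sum((v - mean) ** 2 for v in xi) / n)
    return r / s

def hurst(xi):
    """Average R/S over sub-records of length 2^k and fit log R/S vs log N."""
    pts = []
    size = len(xi)
    length = size
    while length >= 16:
        rs = [rescaled_range(xi[i:i + length]) for i in range(0, size, length)]
        pts.append((math.log(length), math.log(sum(rs) / len(rs))))
        length //= 2
    # least-squares slope of the log-log points = Hurst exponent estimate
    m = len(pts)
    sx = sum(p[0] for p in pts); sy = sum(p[1] for p in pts)
    sxx = sum(p[0] ** 2 for p in pts); sxy = sum(p[0] * p[1] for p in pts)
    return (m * sxy - sx * sy) / (m * sxx - sx * sx)

increments = [random.gauss(0, 1) for _ in range(512)]
shuffled = increments[:]
random.shuffle(shuffled)

print("H (original):", round(hurst(increments), 2))
print("H (shuffled):", round(hurst(shuffled), 2))
```

Both estimates come out near 1/2, as they must for uncorrelated increments; for genuinely correlated data the two would differ.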

138
**Hurst’s analysis predated the computer**

Hurst’s analysis predated the computer. As described in Feder’s book [43], he used instead a relabeled deck of 52 cards + a joker. There were
13 cards each worth 1 or -1
8 cards each worth 3 or -3
4 cards each worth 5 or -5
1 card each worth 7 or -7
This relabeling fits the Gaussian distribution remarkably well! Hurst then used the following procedure:

139
**Shuffle the deck without the joker and cut it. Note the card cut.**

Deal two hands with 26 cards each. If a card valued |i| was cut, transfer the |i| highest cards from one hand and the |i| lowest cards from the other hand to the opposite hands. Pick the positively (negatively) biased hand if i is positive (negative), and put the joker in the chosen hand. Select increments by shuffling and cutting the chosen hand until the joker comes up. When the joker comes up, shuffle the whole deck minus the joker and repeat the procedure. Hurst made 6 such experiments, each with 1000 cuts, and determined H≈0.71, which is close to the river Nile value. Feder repeated the experiment on a computer and found that with substantially more cuts H reverts to H≈0.5.

140
**Fractional Brownian motion (fBm):**

For convenience let us consider the case where X(0)=0. Mandelbrot and co-workers call X(t) a fBm if the correlation function between past and future increments satisfies ⟨−X(−t)X(t)⟩/⟨X²(t)⟩ = 2^(2H−1) − 1, where 0<H<1. It turns out that H is the same Hurst exponent as we had before [43]. If H=1/2 the motion is uncorrelated, as in a Gaussian random walk or a Cauchy or Lévy walk. If 1>H>1/2 the fBm is persistent. If 1/2>H>0 the fBm is anti-persistent: a step in one direction tends to be followed by a step in the opposite direction. The remarkable property of this correlation function is that it is time independent; normally correlation functions decay in time. One often finds a cross-over effect, where the past memory is lost after a finite time, e.g. because of finite system size.

141
**Successive random addition:**

R.F. Voss [44] introduced methods for generating fractional Brownian walks of various characteristics, including fractal surfaces describing clouds and landscapes. A number of them can be found in Mandelbrot’s book [45]. We will here describe the simplest case, following [43]: Start with 3 positions X(t), for t=0, 1/2, 1, and give them normally distributed values with 0 mean and variance σ₁²=1. Next set the values of X(t) for t=1/4, t=3/4 by interpolation, and add normally distributed numbers with the variance reduced by the factor (1/2)^2H. Keep on interpolating and adding, reducing the variance by the same factor at each stage, until enough points are generated. In the limit of a large number of repeated steps we obtain a fBm with Hurst exponent H.
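Assuming the per-stage variance reduction factor (1/2)^2H described above, the scheme can be sketched as follows (an illustrative sketch of the simplest variant, not Voss's own code):

```python
import random

random.seed(2)

def fbm_random_addition(h, levels):
    """Generate a fractional-Brownian sample path by successive random
    addition: interpolate midpoints, then add Gaussian noise whose
    variance shrinks by the factor (1/2)**(2*h) at each stage."""
    x = [random.gauss(0, 1) for _ in range(3)]   # t = 0, 1/2, 1
    sigma = 1.0
    for _ in range(levels):
        # midpoint interpolation doubles the resolution
        y = []
        for i in range(len(x) - 1):
            y.append(x[i])
            y.append(0.5 * (x[i] + x[i + 1]))
        y.append(x[-1])
        sigma *= 0.5 ** h                # reduce the added noise each stage
        x = [v + random.gauss(0, sigma) for v in y]
    return x

path = fbm_random_addition(h=0.8, levels=8)
print(len(path), "points")   # 3 points doubled 8 times -> 513 points
```

Small h gives a rough, noisy trace; h near 1 gives a smooth, persistent one, as in the figures on the next slide.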

142
The top figure shows a de-trended walk with H close to 1, while the bottom figure has H close to zero and is much noisier. (These curves were actually constructed from synthetic power spectra, not from successive random addition.) Sometimes there is confusion between “fat tail” distributions and temporal correlations. E.g. Ref. [46] found a striking similarity between currency and turbulent fluctuations. However, no temporal correlations are found in the former case, but there are strong ones in the latter [47]. The underlying mechanisms may be very different!

143
**Relationship to the fractal (box-counting) dimension:**

You have probably seen fractal objects such as the Koch snowflake to the right. If the side of the original triangle is 1, the length of the n’th generation segment is δ = 3⁻ⁿ. The number of segments is N = 4ⁿ = δ⁻ᴰ, where D is the box-counting dimension. We find D = ln 4/ln 3 ≈ 1.262. The situation is ambiguous for the fBm. If we choose boxes of length small compared to the range in the given time interval one finds [43] D ≈ 2−H, while if this is not the case D ≈ 1.
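The box-counting definition can be checked directly on the Koch construction; at every generation the covering count gives the same dimension:

```python
import math

# Box-counting for the Koch construction: at generation n the curve is
# covered by N = 4^n boxes of side delta = 3^(-n); D = ln N / ln(1/delta).
for n in range(1, 6):
    N = 4 ** n
    delta = 3.0 ** (-n)
    D = math.log(N) / math.log(1.0 / delta)
    print(n, round(D, 4))   # the same value, ln 4 / ln 3, at every generation
```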

144
**Discrete Fourier series:**

Environmental data such as temperatures, rainfall, pollution levels, and water levels in rivers and lakes are often sampled at regular time intervals. The same is true for financial data such as currency exchange rates, stock market prices etc. The fast Fourier transform (FFT) algorithm provides an efficient method to analyze time series data. The discrete Fourier transform of the N data points f(k), k=0,1,2,...,N−1 is
F(n) = Σₖ₌₀^(N−1) f(k) exp(−2πink/N)
Be warned that this is not the only convention! If we allow n and k to be outside the range 0,1,2,...,N−1, the periodic extension is understood. If f(k) is a time series with time step Δ we associate n with the frequencies νₙ = n/(NΔ).

145
**From the orthogonality relationship**

Σₖ₌₀^(N−1) exp(2πi(n−m)k/N) = N δₙₘ
we find for the inverse transform
f(k) = (1/N) Σₙ₌₀^(N−1) F(n) exp(2πink/N)
Most observed signals are real. We must then have F(n)=F(−n)*, and there will only be N/2 distinct frequencies. The frequency ν_c = 1/(2Δ) is called the Nyquist frequency. A function f(t) is bandwidth limited to the frequency ν if its Fourier series does not have any nonzero F(n) with |νₙ| > ν. The Nyquist sampling theorem states that a function f(t) can be completely reconstructed from discrete sampling if it is bandwidth limited to the Nyquist frequency.
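With this convention the transform, its inverse, and the reality symmetry F(−n)=F(n)* can be verified directly (a brute-force O(N²) sketch for illustration, not the FFT):

```python
import cmath

def dft(f):
    """F(n) = sum_k f(k) exp(-2*pi*i*n*k/N), the convention used above."""
    N = len(f)
    return [sum(f[k] * cmath.exp(-2j * cmath.pi * n * k / N) for k in range(N))
            for n in range(N)]

def idft(F):
    """Inverse transform f(k) = (1/N) sum_n F(n) exp(+2*pi*i*n*k/N)."""
    N = len(F)
    return [sum(F[n] * cmath.exp(2j * cmath.pi * n * k / N) for n in range(N)) / N
            for k in range(N)]

f = [1.0, 2.0, 0.5, -1.0, 0.0, 3.0, -2.0, 1.5]   # a real time series
F = dft(f)
N = len(f)

# For real f(k): F(-n) = F(n)*; with the periodic extension F(-n) = F(N-n)
for n in range(1, N):
    assert abs(F[N - n] - F[n].conjugate()) < 1e-9

# The inverse transform recovers the original series
g = idft(F)
assert all(abs(g[k] - f[k]) < 1e-9 for k in range(N))
print("symmetry and inversion verified")
```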

146
Power spectrum: The magnitude squared |f(k)|² is referred to as the intensity and its sum over k represents the total power. Parseval’s theorem states that
Σₖ |f(k)|² = (1/N) Σₙ |F(n)|²
|F(n)|² is thus a measure of the fraction of the power that is stored in a given frequency. It is conventional not to distinguish between positive and negative frequencies, and the power spectral density is defined as
P(νₙ) = |F(n)|² + |F(−n)|²
For a real time series |F(n)| = |F(−n)|, and sometimes the power spectral intensity is given as 1/2 of the above expression.
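Parseval's theorem is easy to check numerically with the same transform convention:

```python
import cmath

f = [0.3, -1.2, 2.0, 0.7, -0.5, 1.1, 0.0, -2.2]   # arbitrary real data
N = len(f)
F = [sum(f[k] * cmath.exp(-2j * cmath.pi * n * k / N) for k in range(N))
     for n in range(N)]

# Parseval: sum_k |f(k)|^2 = (1/N) sum_n |F(n)|^2
lhs = sum(abs(v) ** 2 for v in f)
rhs = sum(abs(v) ** 2 for v in F) / N
assert abs(lhs - rhs) < 1e-9
print("total power:", round(lhs, 6))
```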

147
**Next we compute the power **

spectrum for the Gaussian random walk shown in the top figure. The power spectrum of this walk is shown in the bottom figure. A random walk will not normally end where it began. Since the FFT expects a periodic function, the discontinuity at the end introduces spurious high frequency components that can be avoided by de-trending. The points on a log-log plot of the power spectrum lie approximately on a straight line!
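A sketch of this computation (de-trend, transform, fit the log-log slope; for an uncorrelated walk we expect a slope near −2, i.e. 1/f² noise):

```python
import cmath
import math
import random

random.seed(3)
N = 512
walk = [0.0]
for _ in range(N - 1):
    walk.append(walk[-1] + random.gauss(0, 1))

# De-trend so the periodic extension has no end-point discontinuity
a = (walk[-1] - walk[0]) / (N - 1)
f = [walk[k] - a * k for k in range(N)]

# Brute-force DFT for the positive frequencies n = 1 .. N/2 - 1
F = [sum(f[k] * cmath.exp(-2j * cmath.pi * n * k / N) for k in range(N))
     for n in range(1, N // 2)]
power = [abs(v) ** 2 for v in F]

# Least-squares slope of log(power) vs log(frequency index)
xs = [math.log(n) for n in range(1, N // 2)]
ys = [math.log(p) for p in power]
m = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
slope = (m * sxy - sx * sy) / (m * sxx - sx * sx)
print("spectral slope:", round(slope, 2))
```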

148
**Concluding remarks on correlated random walks:**

If the slope of the log-log plot is −k, then P(f) is proportional to f⁻ᵏ and the noise producing the spectrum is called 1/fᵏ noise. A walk with uncorrelated increments produces 1/f² noise. The Hurst exponent H is related to the fractal box-counting dimension D by D=2−H if we cover the curve with boxes that are narrow compared to the range in the interval. The 1/fᵏ noise parameter k is related to the Hurst exponent by k=2H+1. There are many different looking random walks with the same H. A method to generate random walks with arbitrary correlation functions is described in an appendix of the book by Barabási and H.E. Stanley [48]. Be warned that there are pitfalls associated with applying the methods described: systematic errors are associated with small data sets.

149
**Continuous time random walk (CTRW):**

Consider a process for which the probability that a “particle” starting at the origin will end up at position x after time t is
P(x,t) = Σₙ Pₙ(x) χₙ(t)
Here Pₙ(x) is the probability that it reaches x after n steps and χₙ(t) is the probability that it has taken n steps in time t. If the probability ψ(τ) that the particle executes a step in time τ has a mean τ₀ and a finite variance, the particle will on average execute n = t/τ₀ steps, with fluctuations of order √n. The irregular intervals between steps are then irrelevant and we can without much error assume that we have an n-step walk.

150
Sub-diffusion: Consider next the case where ψ(τ) does not have a mean, and the time for n jumps typically is ≈ τ₀ n^(1/μ) for large n, where 0<μ<1. The typical distance travelled will then be x ≈ √n ∝ (t/τ₀)^(μ/2), which grows more slowly than the diffusive √t. This behaviour is called sub-diffusion [37]. Examples of situations where this occurs are: Photo-conductivity in amorphous semiconductors (important for solar power generation). Diffusion in porous materials (important in hydrology). In the latter case the problem is often modelled in terms of percolation theory. A material consists of a certain fraction p of N sites that allow transport, while the rest do not. In the limit that N becomes very large there is a critical value p_c above which a spanning cluster appears that allows transport from one end of the material to the other.
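One way to see sub-diffusion in simulation is to give the waiting times a Pareto tail, ψ(τ) ∝ τ^−(1+μ) with 0<μ<1, so the mean waiting time diverges. This particular choice of ψ is mine, for illustration; with μ=1/2 the mean square displacement should grow roughly as t^μ rather than t:

```python
import random

random.seed(4)

def ctrw_msd(mu, t_max, walkers=300):
    """Mean square displacement at time t_max of a continuous time random
    walk with Pareto waiting times, psi(tau) ~ tau^-(1+mu); for 0 < mu < 1
    the mean waiting time diverges and <x^2> grows like t^mu."""
    msd = 0.0
    for _ in range(walkers):
        t, x = 0.0, 0
        while True:
            # Pareto(mu) waiting time >= 1 via inverse transform sampling
            t += (1.0 - random.random()) ** (-1.0 / mu)
            if t > t_max:
                break
            x += random.choice((-1, 1))   # unbiased unit step
        msd += x * x
    return msd / walkers

mu = 0.5
m1 = ctrw_msd(mu, 1000.0)
m2 = ctrw_msd(mu, 10000.0)
print(m1, m2, m2 / m1)   # ratio near 10**mu, well below the diffusive 10
```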

151
The top figure shows the spanning cluster for a square lattice just above the threshold. This cluster consists of a backbone, and a number of dead end side branches. The backbone with the side branches removed (assuming periodic boundary conditions in the x-direction) is shown in the bottom figure. (The figures were taken from [1]).

152
The percolating cluster is sometimes modelled by a comb-like structure as in the figure, with a number of side branches of varying length. There may also be a bias field in the side-branches (see Pottier [49] and references therein). A particle diffusing on the backbone will at times be diverted into a side-branch and, as we have seen, the return time will then not have a mean if the side-branch length is infinite. A bias field may induce trapping which also may cause sub-diffusive behaviour. For an extensive review of the CTRW approach to diffusion in porous media see [50]. If efficient transport is the goal, sub-diffusion is not desirable. We will next show how a “smart” slime mould solves this problem.

153
**Physarum polycephalum**

The picture shows this common slime mould [51] in its plasmodium phase. It then consists of a single cell with many nuclei, forming a network of veins. It feeds on microbes, and likes oatmeal flakes in the lab. Flow of material inside is generated by contraction and relaxation of the membrane walls. It shows adaptive behavior [52] even though it has no brain. With more than two food sources, the amoeba produces efficient networks. E.g. when placed over a map of the United Kingdom, with oatmeal flakes on the locations of major cities, it produced a network of veins that resembled the actual motorway network of the UK!

154
The paper which probably did most to stimulate interest in “smart slime moulds” was a brief communication in Nature by Nakagaki et al. [54], which demonstrated that P. polycephalum could solve a maze. A maze can be seen as a spanning percolating cluster. The mould allows unproductive side-branches to atrophy, making nutrient transport more efficient!

155
Superdiffusion: Next consider the case where the number of steps per unit time has a fat-tailed distribution. The associated phenomenon is intermittency. Some examples: For Reynolds numbers close to the critical value for the laminar to turbulent transition there are often bursts of violent activity followed by quiet periods. Similar behavior can be seen near discontinuous phase transitions. It is not uncommon that certain internet sites suddenly go “viral” after being dormant for some time. In financial markets one observes periods of high volatility followed by quiet periods.

156
**Toy model for market fluctuations:**

Suppose the price ξ of a certain commodity changes by a small amount δ after each transaction, and that the log-return r(1) = ln((ξ+δ)/ξ) has a Gaussian distribution with mean 0 and standard deviation σ. The probability distribution for r after n transactions will then be
pₙ(r) = (2πnσ²)^(−1/2) exp(−r²/(2nσ²))
as in the Bachelier theory. In the CTRW model
p(r,t) = Σₙ pₙ(r) χₙ(t)
We next assume that there are occasional periods of high volatility, and to be specific we choose the Lévy-Smirnov distribution with scale parameter a for the number of transactions n:
ψ(n) = (a/(2π))^(1/2) n^(−3/2) exp(−a/(2n))

157
We now replace the sum over n by an integral and introduce the new variable y=1/n, dy = −dn/n². We obtain
p(r) = (a^(1/2)/(2πσ)) ∫₀^∞ dy exp[−y(a + r²/σ²)/2]
The integration is easily performed and we find
p(r) = (1/π) γ/(γ² + r²),  γ = σ√a
The bursts of high volatility cause the log-return to change from a Gaussian to a Cauchy distribution! Our choice of the Lévy-Smirnov distribution may, however, be a bit extreme.
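This result is easy to check by sampling. The sketch below draws n from the Lévy-Smirnov distribution (using the fact that a/Z², with Z standard normal, is Lévy-Smirnov distributed with scale a) and compares the resulting log-returns with the Cauchy cumulative distribution; the parameter values are arbitrary illustrations:

```python
import math
import random

random.seed(5)

sigma, a = 0.01, 1.0
samples = []
for _ in range(20000):
    z = random.gauss(0, 1)
    n = a / (z * z)                              # Levy-Smirnov "burst length"
    samples.append(random.gauss(0, sigma * math.sqrt(n)))   # Gaussian given n

# Compare the empirical CDF at a few points with the Cauchy CDF,
# F(r) = 1/2 + arctan(r/gamma)/pi, gamma = sigma * sqrt(a)
gamma = sigma * math.sqrt(a)
for r0 in (-3 * gamma, 0.0, 3 * gamma):
    emp = sum(1 for s in samples if s <= r0) / len(samples)
    cauchy = 0.5 + math.atan(r0 / gamma) / math.pi
    print(round(r0, 4), round(emp, 3), round(cauchy, 3))
```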

158
**References and further reading**

The material on equilibrium statistical mechanics is standard and can be found in many textbooks. A possible choice for a graduate level text is: [1] M. Plischke and B. Bergersen: Equilibrium Statistical Physics, 3rd edition, World Scientific (2005). A more elementary text is [2] D.V. Schroeder: An Introduction to Thermal Physics, Addison Wesley (2000). In neoclassical economics and conventional game theory the preferences are fixed and utility can be treated as a state variable, but this has always been controversial. For a historical perspective see [3] P. Mirowski, More Heat than Light: Economics as Social Physics, Physics as Nature's Economics, Cambridge University Press (1989). On the other hand, in the prospect theory of Tversky and Kahneman [4] A. Tversky and D. Kahneman, Science 185 (1974), utility is highly dependent on framing, which again is highly dependent on context and history. Utility must then be treated as a process variable. For my personal view on the subject see [5] B. Bergersen, Physics in Canada 65, 209 (2009). Available from

159
**Both [1] and [2] cover the information theoretic approach to entropy.**

[6] J.L. Lebowitz and E.H. Lieb, Phys. Rev. Lett. 22, 631 (1969). The Monte Carlo method is described in some detail in [1]. Alben's original paper is [7] R. Alben, Amer. J. Phys. 40, 3 (1972). [8] J.D. Farmer and J. Geanakoplos, Complexity 14 (3), 11-38 (2009). [9] A. Dragulescu and V.M. Yakovenko, European Physical J. B 17, 723 (2000). [10] N. Ding and Y-G. Wang, Chin. Phys. Lett. An excellent reference for condensed matter steady states is [11] N.W. Ashcroft and N.D. Mermin: Solid State Physics, Holt Rinehart and Winston (1976). There is a chapter in [1] which discusses linear response theory in considerable detail; in particular there is a section on Onsager relations. The pictures of the Belousov-Zhabotinsky reaction were taken from the Wikipedia article on the subject. [12] R.H. Enns and G.C. McGuire: Nonlinear Physics with Maple for Scientists and Engineers, 2nd edition, Birkhäuser (2000). The picture of the Australian sheep blowfly Lucilia cuprina was downloaded from Wikipedia.

160
**The standard reference to the Nicholson equation is**

[13] W.S.C. Gurney, S.P. Blythe and R.M. Nisbet, Nicholson's blowflies revisited, Nature 287 (1980). A readable introduction to delay differential equations is [14] Thomas Erneux, Applied Delay Differential Equations, Springer (2009). Hutchinson's original paper is [15] G.E. Hutchinson, Circular causal systems in ecology, Ann. N.Y. Acad. Sci. (1948). [16] D. Challet and Y-C. Zhang, Physica A 256, 514 (1998). [17] W.F.M. De Bondt and R.H. Thaler, in Heuristics and Biases, eds. T. Gilovich, D. Griffin, D. Kahneman. [18] B. Satinover and D. Sornette, Eur. Phys. J. B 60, 369 (2007). [19] E.J. Langer, Journal of Personality and Social Psychology 32, 311 (1975). The dimensional analysis approach to the physics of fluids is discussed in my downloadable lecture notes Fluids, where you can also find further references. The discussion of steady state probability distributions in the discussion

161
**of Markov processes is taken from Section 9. 2. 1 in ref**

of Markov processes is taken from Section 9.2.1 in ref. [1], while the material which follows on stochastic processes borrows heavily from Chapter 8 of the same book. A classic reference on stochastic processes is [20] N.G. Van Kampen, Stochastic Processes in Physics and Chemistry, North Holland (1981). The section on branching processes is taken from [1]. The spruce budworm model originated with [21] D. Ludwig, D.D. Jones and C.S. Holling, J. Animal Ecology 47, 315 (1978). The rate equation version of the model is described in Section 4.5 of [1], while the stochastic version is in Section 8.2. The pictures of the insect and the description of the life cycle come from the Wikipedia article on the subject. Chapter 10, called The parable of the worm, in [22] A. Nikiforuk, Empire of the Beetle, Greystone Books (2011), contains a colorful description of past budworm infestations. The main content of the book is a description of the bark beetle epidemics which recently have devastated large tracts of forest in western North America and which have a similar history. The classic text on the Fokker-Planck equation is [23] H. Risken, The Fokker-Planck Equation, 2nd ed., Springer (1989). The book contains a proof of Pawula's theorem. In [20] and [1] the Fokker-Planck equation is derived from a system size expansion, with the moment expansion

162
**presented as an after-thought**

presented as an after-thought. The discussion of Brownian motion, the Rayleigh and Kramers equations, and the Kramers escape rate is mostly taken from [1]. [24] W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes in C, 2nd edition, Cambridge University Press (1992). [25] L. Gammaitoni, P. Hänggi, P. Jung and F. Marchesoni, Rev. Mod. Phys. 70, 223 (1998); P. Hänggi, P. Jung, and F. Marchesoni, Eur. Phys. J. B 69, 1-3 (2009). [26] F. Moss, L.M. Ward, W.G. Sannita, Clinical Neurophysiology (2004). [27] R. Landauer, J. of Stat. Phys. 53, 233 (1988). [28] P. Reimann, Physics Reports (2002). My treatment omits biological detail and relies heavily on [29] J.M.R. Parrondo and L. Dinis, Contemporary Physics 45, 147 (2004), and a modification of the Parrondo game due to [30] R. Toral, Fluctuation and Noise Letters 2, L305 (2002). [31] M. Stutzer, The paradox of diversification. Working paper available from Prof. Stutzer's website at the University of Colorado. [32] W. Weibull, J. Applied Mech., p. 293, Sept. (1951). [33] N.N. Taleb, The Black Swan, 2nd ed., Random House (2010).

163
**[34] J.P. Bouchaud and M. Potters, Theory of Financial Risk, Cambridge University**

Press (2000). [35] B.B. Mandelbrot, Fractals and Scaling in Finance: Discontinuity, Concentration, Risk, Springer (1997). [36] D. Sornette, Why Stock Markets Crash: Critical Events in Complex Financial Systems, Princeton U. Press (2003). A good general reference on Lévy processes is [37] E.W. Montroll and B.J. West, On an enriched collection of stochastic processes, Chapter 2 in E.W. Montroll and J.L. Lebowitz, eds., Fluctuation Phenomena, North Holland (1976). [38] R. Weron, Statist. Probab. Lett. 28, 165 (1996). See also: R. Weron, "Correction to: On the Chambers-Mallows-Stuck method for simulating skewed stable random variables", Research Report, Wroclaw University of Technology (1996). An implementation of the Weron algorithm can be found at [39]. It makes use of ran1 from [24], but you may wish to replace it with another random number generator. [40] Tomasz Srokowski, Phys. Rev. E (2009). [41] D. Wang, M. Li and Z. Di, Physica A (2005).

164
**[42] M. E. J. Newman, Contemporary Physics 46, 323 (2005).**

[43] J. Feder, Fractals, Plenum (1988). [44] R.F. Voss, Random fractal forgeries, in Fundamental Algorithms in Computer Graphics, ed. R.A. Earnshaw, Springer (1985). [45] B.B. Mandelbrot, The Fractal Geometry of Nature, Freeman (1982). The animation of the Koch curve was taken from the Wikipedia article on the subject. [46] S. Ghashghaie, W. Breymann, J. Peinke, P. Talkner and Y. Dodge, Nature 381, 767 (1996). [47] A. Arnéodo, J.-P. Bouchaud, R. Cont, J.-F. Muzy, M. Potters and D. Sornette, arXiv:cond-mat/ [48] A.-L. Barabási and H.E. Stanley, Fractal Concepts in Surface Growth, Cambridge University Press (1995). [49] N. Pottier, Physica A 216, 1 (1995). [50] B. Berkowitz, A. Cortis, M. Dentz, and H. Scher, Reviews of Geophysics 44, RG2003 (2006). [51] The picture of Physarum polycephalum is taken from the Wikipedia article on the subject, which also contains many references. [52] A. Dussutour, T. Latty, M. Beekman, S.J. Simpson, PNAS 107 (10): 4607 (2010).

165
**[53] A. Adamatzky and J. Jones, arXiv:0912.3967,**

International Journal of Bifurcation and Chaos 20 (10): 3065 (2010). [54] T. Nakagaki, H. Yamada and Á. Tóth, Nature 407 (6803): 470 (2000).
