Presentation on theme: "Markov Nets are systems in which: given an interaction initiated by an agent towards an agent, one can decide in advance the time at which the second agent."— Presentation transcript:

1 Markov Nets are systems in which, given an interaction initiated by one agent towards another agent, one can decide in advance the time at which the second agent will be affected, assuming it does not undergo another event in the meantime. This implicitly allows for billiard-ball-like collisions (or in fact any short-range interaction, or even interceptable rockets being used, etc.).

2 Appendix The main focus of the present paper was the practical implementation of the Markov Nets / Webs on the NatLab platform. Thus we kept the main text free of formal definitions. Still, the concept of a Markov Net / Web needs to be defined and, in the future, studied. In particular, this will allow the formal study of a host of open problems: the existence and construction of the process, its relation to Markov chains and stochastic differential processes, its limitations, its dynamical stability (multiple equilibria, feedback or evolutionary strategies for stabilizing / regulating it), its applicability to other real-life situations, etc.

3 Definition of a Markov Net Given a set of agents indexed by an integer i = 1, ..., N, at any time t each of them can be in a state S(i,t). The state space can be parametrized by discrete or continuous parameters. An agent i can undergo, at any time T (under certain conditions described below), an event E(i,T). The set of events that can occur to a given agent i again belongs to a space that can be parametrized by discrete and/or continuous parameters. A state S(i,t) may have a probability rate M(E(i,t) | i,t,S(i,t)) to generate / undergo an event E(i,t). In turn, an event E(i,T) may have a probability C(S'(i,T) | i,T,E(i,T),S(i,T)) to change the current state S(i,T) of the agent i into a new state S'(i,T). An event E(i,T) can also initiate an interaction I(i,j,T,E(i,T)) with another agent j, with probability D(I(i,j,T,E(i,T)) | i,j,T,E(i,T),S(i,T)). The interaction I(i,j,T,E(i,T)) may cause j to undergo, at a time T' > T, an event F(j,T') with probability G(T',F(j,T') | i,j,T,I(i,j,T,E(i,T)),S(j,T')).
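The objects in this definition can be sketched in code. The following is a minimal, illustrative sketch only: the two-state example, the class names, and the constant rates are assumptions not present in the original text; M is the event rate and an exponential clock per agent stands in for the general rate process.

```python
import random

class Agent:
    """An agent i carrying its current state S(i, t)."""
    def __init__(self, i, state):
        self.i = i
        self.state = state

def event_rate(agent):
    """M(E(i,t) | i,t,S(i,t)): the rate at which an agent in its
    current state generates an event.  Assumption: an 'active'
    agent fires at rate 1.0, an 'idle' agent never fires."""
    return 1.0 if agent.state == "active" else 0.0

def apply_event(agent, rng):
    """C(S'(i,T) | i,T,E(i,T),S(i,T)): an event flips the agent's
    state with probability 0.5, otherwise leaves it unchanged."""
    if rng.random() < 0.5:
        agent.state = "idle" if agent.state == "active" else "active"

rng = random.Random(42)
agents = [Agent(i, "active") for i in range(3)]

# Each agent draws the waiting time to its next event from an
# exponential clock with its current rate (agents with rate 0 never fire).
times = {a.i: rng.expovariate(event_rate(a)) for a in agents
         if event_rate(a) > 0.0}
first = min(times, key=times.get)   # the agent whose event fires first
apply_event(agents[first], rng)
```

The interaction kernels D and G would be added on top of this skeleton: when an event fires, it may additionally schedule a future event on another agent, which is exactly the mechanism treated on the next slide.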

4 Note the crucial point that the probability distribution G that decides T' depends on the state of j at time T'. This may look problematic insofar as the definition of T' is self-referential: in particular, events undergone by the agent j at times T^ < T' would affect T'. However, this definition can be implemented without the need to solve transcendental equations. All one has to do is the following: - At the time T, after the random interaction I(i,j,T,E(i,T)) has been decided, one estimates (extracts the random value of) T' according to the probability distribution G(T',F(j,T') | i,j,T,I(i,j,T,E(i,T)),S(j,T)), i.e. under the assumption that the state at time T' will still be S(j,T') = S(j,T). - However, each time T^ < T' that the state S(j,T^) of j changes, T' is extracted again from the new probability distribution G(T',F(j,T') | i,j,T,I(i,j,T,E(i,T)),S(j,T^)), i.e. with S(j,T') = S(j,T^). Of course, the newly allowed values of T' will then be only T' > T^.
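The rescheduling rule above can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: the exponential form of G, its state-dependent rates, and the "free"/"busy" states are all assumptions chosen only to make the resampling step concrete.

```python
import random

def draw_delivery_time(now, state_j, rng):
    """Sample T' > now from G(T' | ..., S(j)), assuming j's state
    stays fixed.  Assumption: G is exponential with a rate that
    depends on j's current state."""
    rate = 2.0 if state_j == "busy" else 0.5
    return now + rng.expovariate(rate)

rng = random.Random(7)
T = 0.0
state_j = "free"

# Step 1: at time T, once the interaction I(i,j,T,E(i,T)) is decided,
# extract T' under the assumption S(j,T') = S(j,T).
T_prime = draw_delivery_time(T, state_j, rng)

# Step 2: j's state changes at some T^ < T' (here placed midway,
# purely for illustration), so T' must be extracted again from the
# new distribution, restricted to values T' > T^.
T_hat = T_prime / 2.0
state_j = "busy"
T_prime = draw_delivery_time(T_hat, state_j, rng)
assert T_prime > T_hat   # only T' > T^ is allowed after rescheduling
```

In a full simulator this resampling would be triggered by every state change of j before the pending delivery time, with the pending event re-inserted into the event queue at its new T'.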

