1 WP7: Empirical Studies Presenters: Paolo Besana, Nardine Osman, Dave Robertson

2 Outline of This Talk
– Introduce the overall framework
– Identify four key areas:
  – interaction availability
  – consistency between interaction model and peer
  – consistency between peer and peer
  – consistency with the environment
In each of these areas it is impossible to guarantee the general property we would ideally require, so the goal of the analysis is to identify viable engineering compromises and explore how they scale.

3 Basic Conceptual Framework

[Diagram: a peer P runs interaction model M(P,R) and is connected to peers P1 … Pn; each peer has its own environment E_P, E_P1, …, E_Pn]

where:
P = process name
R = role of P
M(P,R) = interaction model for P in role R
E_P = environment of P

4 Simulation as Clause Rewriting

5 Ensuring Interactions are Available

[Diagram: peer P with interaction model M(P,R), its library of known interactions M_P, and peers P1 … Pn with their environments]

∀R. R ∈ R(P) → ◊(M(P,R) ∈ M_P ∧ (i(M(P,R)) → ◊ a(M(P,R))))

where:
R(P) = roles P wants to undertake
M_P = interactions known to P, {M(P,R), …}
i(M(P,R)) = M(P,R) is initiated
a(M(P,R)) = M(P,R) is completed successfully

6 Specific Question
Suppose the same interaction patterns are used repeatedly in overlapping peer groups. To what extent can basic statistical information about the success or failure of interaction models solve matchmaking problems? (See Deliverable 7.1 for a discussion of this.)
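As a concrete illustration (not taken from the deliverable), the statistics in question can be as simple as per-(model, role) success counts. A minimal Python sketch, with all names (InteractionStats, record_outcome, best_model) purely illustrative:

    # Sketch: rank interaction models by observed success rate for a role.
    from collections import defaultdict

    class InteractionStats:
        def __init__(self):
            # (model_id, role) -> [successes, attempts]
            self.counts = defaultdict(lambda: [0, 0])

        def record_outcome(self, model_id, role, succeeded):
            stats = self.counts[(model_id, role)]
            stats[1] += 1
            if succeeded:
                stats[0] += 1

        def best_model(self, role, candidates):
            # Laplace smoothing so unseen models are not ruled out outright
            def score(m):
                s, n = self.counts[(m, role)]
                return (s + 1) / (n + 2)
            return max(candidates, key=score)

    stats = InteractionStats()
    stats.record_outcome("auction_v1", "bidder", succeeded=True)
    stats.record_outcome("auction_v2", "bidder", succeeded=False)
    print(stats.best_model("bidder", ["auction_v1", "auction_v2"]))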

7 Consistency Peer - Interaction Model

[Diagram: peer P with interaction model M(P,R) and peers P1 … Pn with their environments; knowledge sources K(P) and K(M(P,R))]

∀A. A ∈ K(P) ∧ (B ∈ K(M(P,R)) ∨ ◊B ∈ K(M(P,R))) → σ(A ∧ B)

where:
K(X) = knowledge derivable from X
σ(F) = F is consistent

8 Specific Question
– Each interaction model imposes temporal constraints
– Peers have deontic constraints
What sorts of properties required by peers (e.g. trust properties) or by interaction modellers (e.g. fairness properties) can we test using this information alone?

9 Example
In an auction, the auctioneer agent wants an interaction protocol that enforces truth telling on the bidders’ side:

A = [bid(bidder,V) ⇒ win(bidder,P_V)] ⋀ [bid(bidder,B) ⇒ win(bidder,P_B) ⋀ B ≠ V] ⋀ P_B ≮ P_V,  where A ∈ K(P)

That is: if bidding the true valuation V wins at price P_V, and bidding any other amount B wins at price P_B, then P_B is never below P_V, so deviating from the truth cannot reduce the price paid.

We would like to verify: A ∈ K(P) ∧ (B ∈ K(M(P,R)) ∨ ◊B ∈ K(M(P,R))) → σ(A ∧ B)

[Diagram: the state space of M(P,R), states 1–4]
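Property A is exactly what a sealed-bid second-price (Vickrey) rule provides. A minimal sketch, under my assumption of that mechanism (the slides do not fix one), checking small bid profiles exhaustively:

    # Sketch: check that, under a second-price rule, a winning deviation B
    # never pays strictly less than the truthful winning bid V pays (P_B ≮ P_V).
    import itertools

    def second_price(bids):
        """Return (winner_index, price) under the second-price rule."""
        order = sorted(range(len(bids)), key=lambda i: -bids[i])
        return order[0], bids[order[1]]

    def truth_telling_holds(valuation, other_bids, deviations):
        winner, p_v = second_price([valuation] + other_bids)
        if winner != 0:
            return True                    # truthful bid did not win: vacuous
        for b in deviations:
            winner, p_b = second_price([b] + other_bids)
            if winner == 0 and p_b < p_v:  # deviation wins strictly cheaper
                return False
        return True

    # Exhaustive check over small integer bid profiles
    ok = all(truth_telling_holds(v, list(others), range(1, 10))
             for v in range(1, 10)
             for others in itertools.product(range(1, 10), repeat=2))
    print(ok)  # True: P_B is never below P_V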

10 Verifying σ(A ∧ B)
Verify M(P,R) satisfies A:
– Is A satisfied at state 1?
– If the result is achieved, then terminate
– else, go to the next state(s) and repeat

[Diagram: the state space of M(P,R) (states 1–4) unfolded into a search tree as the verification proceeds]
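The procedure above is a plain reachability search. A minimal sketch (an assumed recoding, not the project's implementation) of breadth-first exploration that terminates as soon as some reachable state satisfies the property:

    # Sketch: BFS over an interaction state space; stop when 'holds' is true.
    from collections import deque

    def eventually_satisfies(initial_state, successors, holds):
        """successors: state -> iterable of next states; holds: state -> bool."""
        seen = {initial_state}
        queue = deque([initial_state])
        while queue:
            state = queue.popleft()
            if holds(state):                  # property satisfied: terminate
                return True
            for nxt in successors(state):     # else go to the next state(s)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return False                          # state space exhausted

    # Toy state space with states 1–4, mirroring the slide's diagram
    graph = {1: [2, 3], 2: [4], 3: [4], 4: []}
    print(eventually_satisfies(1, lambda s: graph[s], lambda s: s == 4))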

11 Property Checking Framework

[Diagram: the interaction state-space, the temporal properties and the deontic constraints feed into a model checker built on the XSB system, a tabled Prolog engine, which combines the temporal proof rules with the LCC transition rules]

12 Temporal Proof Rules

satisfies(E, tt) ← true
satisfies(E, Φ1 ⋀ Φ2) ← satisfies(E, Φ1) ⋀ satisfies(E, Φ2)
satisfies(E, Φ1 ⋁ Φ2) ← satisfies(E, Φ1) ⋁ satisfies(E, Φ2)
satisfies(E, ⟨A⟩Φ) ← ∃F. trans(E, A, F) ⋀ satisfies(F, Φ)
satisfies(E, [A]Φ) ← ∀F. trans(E, A, F) → satisfies(F, Φ)
satisfies(E, μZ.Φ) ← satisfies(E, Φ)
satisfies(E, νZ.Φ) ← dual(Φ, Φ′) ⋀ ¬satisfies(E, Φ′)

13 LCC Transition Rules

trans(E :: D, A, F) ← trans(D, A, F)
trans(E1 or E2, A, F) ← trans(E1, A, F) ⋁ trans(E2, A, F)
trans(E1 then E2, A, E2) ← trans(E1, A, nil)
trans(E1 then E2, A, F then E2) ← trans(E1, A, F) ⋀ F ≠ nil
trans(E1 par E2, A, F par E2) ← trans(E1, A, F)
trans(E1 par E2, A, E1 par F) ← trans(E2, A, F)
trans(M ⇐ P, in(M), null) ← true
trans(M ⇒ P, out(M), null) ← true
trans(E ← C, #(X), E) ← X in C ⋀ sat(X) ⋀ sat(C)
trans(E ← C, A, F) ← (A ≠ #) ⋀ sat(C) ⋀ trans(E, A, F)
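Slides 12 and 13 together describe the checker: trans enumerates the single-step rewrites of an LCC protocol term and satisfies recurses over the property. A minimal Python sketch of a tiny fragment (my recoding, not the project's XSB implementation; the tuple encoding and all helper names are illustrative):

    # Sketch: trans() yields one-step rewrites of an LCC term; a diamond-style
    # check searches for a run that reaches a goal term.

    def trans(term):
        """Yield (action, residual_term) pairs, mirroring trans/3."""
        kind = term[0]
        if kind == "out":                      # M => P  --out(M)--> nil
            yield ("out", term[1]), ("nil",)
        elif kind == "in":                     # M <= P  --in(M)-->  nil
            yield ("in", term[1]), ("nil",)
        elif kind == "or":                     # E1 or E2 steps if either does
            yield from trans(term[1])
            yield from trans(term[2])
        elif kind == "then":                   # E1 then E2: step inside E1
            for action, residual in trans(term[1]):
                if residual == ("nil",):
                    yield action, term[2]
                else:
                    yield action, ("then", residual, term[2])

    def satisfies_eventually(term, goal, depth=10):
        """Can term be rewritten (within depth steps) to one where goal holds?"""
        if goal(term):
            return True
        if depth == 0:
            return False
        return any(satisfies_eventually(f, goal, depth - 1)
                   for _, f in trans(term))

    # ask(X) => P then (answer(X) <= P or sorry <= P)
    proto = ("then", ("out", "ask(X)"),
             ("or", ("in", "answer(X)"), ("in", "sorry")))
    # Property: the protocol can terminate (reach nil)
    print(satisfies_eventually(proto, lambda t: t == ("nil",)))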

14 Consistency Peer - Peer

[Diagram: peer P with interaction model M(P,R) and peers P1 … Pn with their environments; knowledge sources K(P) and K(P1)]

∀A. A ∈ K(P) ∧ Pi ∈ P(M(P,R)) ∧ B ∈ K(Pi) → σ(A ∧ B)

where:
P(M(P,R)) = peers involved in M(P,R)

15 Specific Question
– Agents in open environments may have different ontologies
– Guaranteeing complete mappings between them is infeasible (ontologies can be inconsistent, can cover different domains, etc.)
– Agents are interested in performing tasks: mapping is required only for the terms contextual to the interactions
– Repetition of tasks provides the basis for modelling statistically the contexts of the interactions
To what extent can interaction models be used to focus ontology mapping on the relevant sections of the ontology?

16 Approach
Predicting the possible content of a message before processing it can help to focus the mapping:
– with no knowledge of the context and of the state of an interaction, a received message can be anything
– the context can be used to guess the possible content of messages, filtering out unrelated elements
– the guessed content is suggested to the ontology mapping engine
The entities in a received message m_i(e1, …, en) are bound by the context of the interaction:
– some entities are specific to the interaction type (purchase, request of information, …)
– the set of possible entities is bounded by concepts previously introduced in the interaction
– different entities may appear in a specific message with different frequencies

17 Implementation
Two phases:
Creating the model:
– entities appearing in messages are counted, obtaining their prior and conditional frequencies
– ontological relations between entities in different messages are checked, and the verified relations are counted
Predicting the content of a message:
– when a message is received, the probability distribution for all the terms is computed using the collected information and the current state of the interaction
– the most probable terms form the set of suggestions for the ontology mapping engine
The aim is to obtain the smallest possible set that is most likely to contain the entities actually used in the message.
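A minimal sketch of the two phases, with hypothetical names throughout (ContentPredictor, observe, suggest; the slides do not name the implementation) and a deliberately naive way of combining prior and conditional counts:

    # Sketch: phase 1 counts entities per message slot, optionally conditioned
    # on entities seen earlier in the interaction; phase 2 turns the counts
    # into a distribution and suggests the k most probable terms.
    from collections import Counter, defaultdict

    class ContentPredictor:
        def __init__(self):
            self.prior = defaultdict(Counter)        # slot -> entity counts
            self.conditional = defaultdict(Counter)  # (slot, earlier) -> counts

        def observe(self, slot, entity, context=()):
            """Phase 1: update frequencies after each completed interaction."""
            self.prior[slot][entity] += 1
            for earlier in context:
                self.conditional[(slot, earlier)][entity] += 1

        def suggest(self, slot, context=(), k=5):
            """Phase 2: rank entities for a slot given the interaction state."""
            scores = Counter(self.prior[slot])
            for earlier in context:
                for entity, n in self.conditional[(slot, earlier)].items():
                    scores[entity] += n      # naive evidence combination
            total = sum(scores.values()) or 1
            return [(e, n / total) for e, n in scores.most_common(k)]

    p = ContentPredictor()
    p.observe("item", "book", context=("purchase",))
    p.observe("item", "cd", context=("purchase",))
    p.observe("item", "book", context=("purchase",))
    print(p.suggest("item", context=("purchase",), k=2))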

18 Mapping Evaluation Framework

19 Testing
– Interactions are abstract protocols, and agents have generated ontologies; this allows us to simulate different types of relations between the messages
– Community preferences over elements (best sellers, etc.) are simulated by probability distributions
– Interactions are run automatically hundreds of times
– Results are compared with a uniform distribution over the entities (simulating no knowledge about context):
  – equivalent size for the same success rate
  – equivalent success rate for the same size of suggestion set
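A minimal sketch of such a comparison harness (assumed, not the project's framework): the predictor is anything with a suggest(slot, context, k) method, e.g. the ContentPredictor sketched under slide 17, and the baseline picks a uniformly random suggestion set of the same size.

    # Sketch: measure predictor hit rate vs. a uniform baseline of equal size.
    import random

    def evaluate(predictor, interactions, ontology, k=20):
        """interactions: list of (slot, context, actual_entity) triples."""
        hits_pred = hits_unif = 0
        for slot, context, actual in interactions:
            suggested = [e for e, _ in predictor.suggest(slot, context, k=k)]
            hits_pred += actual in suggested
            # uniform baseline: random set of the same size, context unused
            hits_unif += actual in random.sample(list(ontology),
                                                 min(k, len(ontology)))
        n = len(interactions) or 1
        return hits_pred / n, hits_unif / n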

20 Provisional Results
After 100 interactions, the predictor is able to provide a set smaller than 7% of the ontology size that contains, 70% of the time, the term actually used in message m_2. If all terms were equiprobable, the probability of a hit would be directly proportional to the size of the (randomly picked) set, so reaching a 70% success rate would require suggesting about 70% of the ontology.

21 Consistency Peer - Environment

[Diagram: peer P with interaction model M(P,R) and peers P1 … Pn with their environments; knowledge sources K(E_P) and K(P)]

∀A. A ∈ K(P) ∧ B ∈ K(E_P) → σ(A ∧ B)

22 Specific Question
Suppose we have a complex environment with adversarial agents. For specific goals, how complex do interaction models need to be in order to raise group performance significantly?

23 Environment Simulation Framework

[Diagram: an environment simulator drives simulated agents; a coordinating peer runs the interaction model; a plot of group convergence compares random against coordinated play]

a(hunter, Id) ::
    sawHimAt(Location) => a(hunter, RID)
        ← visiblePlayer(Location) and strafeAttempt(Location, Location)
    or
    strafeAttempt(Location, Location)
        ← sawHimAt(Location) <= a(hunter, RID)
    or
    movementAttempt(random_play)

In other words: you can be a hunter if you send a message revealing the location of a visible opponent player upon whom you are making a strafing attack; or make a strafing attack on a location if you have been told a player is there; or otherwise just do what seems right.

