
1 MODAL LOGIC AND SENSOR INFORMATION FUSION RESEARCH OVERVIEW STEVE HENDERSON 6 SEPTEMBER, 2002

2 AGENDA
Problem Background and Sensor Information Fusion Framework
Review of Modal Logics
Using Modal Logics for Sensor Information Fusion
Modal Logic vs. Bayesian Belief Networks
Research Issues and Future Direction
Summary
Questions
Conclusion

3 PROBLEM BACKGROUND
The U.S. Army wants to improve the traditional target acquisition and engagement life cycle.
(Diagram: sensor, command & control, and shooter nodes with numbered links.)
Traditional target acquisition sequence:
1. Sensors detect targets.
2. Command gets the information and determines what is really happening.
3. Command tells shooters to move and kill.
Problems:
- Command and control processing delay
- Command and control information overload
- Fusion of information becomes difficult with more information

4 PROBLEM BACKGROUND
The Army hopes to move command and control into a supervisory role and let sensors communicate directly with shooters. This new system is part of a larger system known as the Future Combat System (FCS).
(Diagram: sensor, shooter, and command & control nodes with numbered links.)
Revised sequence:
1. Sensors detect targets.
2. Sensors request optimal shooter(s).
3. The command element monitors the exchange.

5 FCS INFORMATION FUSION FRAMEWORK
(Diagram: enemy actions 1-5 feed into fusion nodes.)
Fusion goal: What is the enemy doing? (Problem statement)

6 PROPOSED INFORMATION FUSION FRAMEWORK
Three-tiered model:
Enemy Operational State (EOS). Describes overall enemy intent. Examples:
EOS 1: Defensive Operations
EOS 2: Offensive Operations
EOS 3: Terrorist Operations
EOS 4: Retrograde Operations
Key Descriptor (KD). A combination of various sensor data and intermediate information products that partially characterize one or more EOSs. Example:
KD1: { acoustic(tank), image(tank) }
Sensor Data. Raw data gathered from battlefield sensors. Examples:
acoustic(x): x is acoustic data collected
image(x): x is image data detected
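As an illustration only, the three tiers could be encoded as simple data structures. A minimal Python sketch; all type and field names are hypothetical, not from the presentation:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative encoding of the three-tier framework; names are hypothetical.
@dataclass
class SensorDatum:
    kind: str       # e.g. "acoustic" or "image"
    subject: str    # e.g. "tank"

@dataclass
class KeyDescriptor:
    name: str
    evidence: List[SensorDatum]  # sensor data that partially characterize EOSs

@dataclass
class EOS:
    name: str
    descriptors: List[KeyDescriptor] = field(default_factory=list)

# KD1 = { acoustic(tank), image(tank) } from the slide:
kd1 = KeyDescriptor("KD1", [SensorDatum("acoustic", "tank"),
                            SensorDatum("image", "tank")])
offense = EOS("EOS 2: Offensive Operations", [kd1])
```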

7 EXAMPLE INFORMATION FUSION FRAMEWORK
EOS Level: Retreat, Offense, Defense
KD Level: Resupply Activity, Logistics Moving Back, Armor Marshalling, Artillery Pushing Forward
Sensor Level:
Sensor_1 - Type: Signal Intercept; Data: Heavy logistics radio network traffic
Sensor_2 - Type: Image; Data: Image of moving trucks near logistics supply points
Sensor_3 - Type: Acoustic; Data: Large vehicles on avenues of retreat
Sensor_4 - Type: Image; Data: Image of tanks forming in assembly area
Sensor_5 - Type: Human; Data: Artillery moving toward front
Sensor_6 - Type: Signal Intercept; Data: Artillery targeting radar signatures near front

8 ESTIMATING KEY DESCRIPTORS AND EOS
Friendly forces can observe, via sensors, enemy actions on the battlefield.
Intelligent agents can then map this low-level sensor data to key descriptors.
Key descriptors can individually provide information about enemy activity.
Key descriptors can also be combined to estimate the EOS.
If the EOS is known or closely estimated, we can infer the indirect EOS characteristics that are most likely to accompany the observed key descriptors.
Once the key descriptors and the EOS are closely estimated, friendly forces can use this information to direct operations against EOS characteristics to disrupt and defeat the enemy.

9 ESTIMATION PROCESS
(Diagram: inputs - enemy intent, objectives, plans, friendly action, environment, other - combine to produce the EOS.)
The enemy operational state is determined by a combination of inputs.

10 ESTIMATION PROCESS
(Diagram: inputs produce the actual EOS, which drives large-grain enemy activity.)
The actual EOS results in key descriptors.

11 ESTIMATION PROCESS
(Diagram: the chain extends from the actual EOS through key descriptors to enemy actions.)
EOS key descriptors result in measurable enemy actions.

12 ESTIMATION PROCESS
(Diagram: enemy actions are picked up as sensed data.)
Friendly sensors detect enemy action.

13 ESTIMATION PROCESS
(Diagram: sensed data feeds estimated key descriptors.)
Intelligent agents estimate key descriptors from sensed data.

14 ESTIMATION PROCESS
(Diagram: estimated key descriptors feed the estimated enemy EOS.)
Key descriptors are combined to estimate the EOS.

15 ESTIMATION PROCESS
(Diagram: the estimated enemy EOS - enemy intent, objectives, current activities, future activities - feeds back into friendly action.)
The estimated EOS is used to direct friendly forces against the enemy EOS in order to disrupt/defeat.

16 INFORMATION FUSION DIFFICULTIES
(Diagram: EOS level - Retreat, Offense, Defense; KD level - Heavy Logistics Activity, Logistics Moving Back, Armor Marshalling, Artillery Pushed Forward; sensor level below, with each child linked to a single parent.)
Perfect world:
- The minimum set of sensors that defines each key descriptor is mutually exclusive.
- The minimum set of key descriptors that defines each EOS is mutually exclusive.
- Detecting enemy intent (EOS) is an exercise in mapping sensor data to KDs and then KDs to EOSs.

17 INFORMATION FUSION DIFFICULTIES
(Diagram: the same levels, but with cross-links, shown in red, from children to multiple parents.)
However, many types of sensor data and KDs might support more than one parent:
- Sensors are matched with multiple KDs, and multiple KDs are matched with multiple EOSs.
- This creates a level of propagating uncertainty that clouds interpretation of actual enemy intent.

18 INFORMATION FUSION DIFFICULTIES
Potential methods to mitigate uncertainty:
- Design the framework to be mutually exclusive:
  - might not be possible
  - eliminates the potential benefits of overlapping information (information reuse, collection/processing time, search pruning, etc.)
  - could require more information to reach EOS clarity
- Design the framework to minimize overlap and reason about uncertainties:
  - allows for a more complete representation of similar EOSs
  - reasoning about uncertainty provides EOS clarity while still allowing similar KDs and EOSs to share some common information.

19 REASONING ABOUT UNCERTAINTY IN A FCS INFORMATION FUSION FRAMEWORK
Potential approaches to address uncertainty in the framework:
- Modal logic (probability/possibility variants)
- Fuzzy set theory
- Decision support / expert system techniques
- Bayesian belief networks
- Standard probability distributions

20 MODAL LOGIC OVERVIEW
Modal logic is the "logic of necessity and possibility." Truth is variable across different MODES of reality.
Examples:
Given proposition P = "Fighting in sports is bad": P is not always true, but possible.
Worlds in which P is TRUE: Figure Skating, Soccer
Worlds in which P is FALSE: Boxing, Hockey
Given proposition Q = 3 > 2: Q is always (necessarily) true.

21 MODAL LOGIC OVERVIEW
Symbols are added to standard predicate logic to indicate modes:
□ = necessary truth (always the case)
◇ = possible truth (sometimes true)
These symbols can be combined with predicate logic to make inferences.

22 MODAL LOGIC OVERVIEW
Modal logic is most powerful when combined with Kripke models. A Kripke model is made up of (W, R, V):
W: A set of "worlds." A world is anything that might have a different version of truth (the sporting events in the earlier example).
R: An accessibility relationship among worlds. Defines which worlds are accessible, or known, by other worlds. The relationship can model any type of real-world relationship (e.g., temporal or spatial relationships).
V: A truth mapping that assigns a value (T/F) to every possible concept in a world. Example: the concept of fighting is assigned a true value in Boxing, a false value in Soccer, etc.
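As an illustration of the (W, R, V) triple, here is a minimal Python sketch. The class and method names are invented for this sketch, and the valuation is a simple propositional table rather than full predicate logic:

```python
# A minimal Kripke model sketch: W (worlds), R (accessibility), V (truth map).
class KripkeModel:
    def __init__(self, worlds, access, valuation):
        self.W = worlds     # e.g. {"Boxing", "Soccer"}
        self.R = access     # dict: world -> set of accessible worlds
        self.V = valuation  # dict: (world, proposition) -> True/False

    def possible(self, world, prop):   # ◇prop: true in some accessible world
        return any(self.V[(w, prop)] for w in self.R[world])

    def necessary(self, world, prop):  # □prop: true in all accessible worlds
        return all(self.V[(w, prop)] for w in self.R[world])

# The sports example from the slides, reduced to two worlds:
sports = KripkeModel(
    worlds={"Boxing", "Soccer"},
    access={"Boxing": {"Boxing", "Soccer"}, "Soccer": {"Boxing", "Soccer"}},
    valuation={("Boxing", "fighting_is_bad"): False,
               ("Soccer", "fighting_is_bad"): True})
print(sports.possible("Boxing", "fighting_is_bad"))   # True: ◇P holds
print(sports.necessary("Boxing", "fighting_is_bad"))  # False: □P fails
```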

23 MODAL LOGIC OVERVIEW
Example: Two players flip coins. Before they see each other's coin, they must elect to bet that they have the winning hand or fold. Players who fold forfeit a small ante fee. The winner is then decided according to the following table:
P1 coin  P2 coin  Winner
H        H        P2
H        T        P1
T        T        P1
T        H        P2

24 MODAL LOGIC OVERVIEW
Example: Model M = { W, R, V }:
W = { (H/H), (H/T), (T/T), (T/H) }. States represent possible outcomes of the game.
R1 = { [(H/H), (H/T)], [(T/T), (T/H)] }. Note: [(H/H), (H/T)] indicates that if player 1 is in state (H/H), he cannot distinguish it from (H/T). This is obvious, as he only knows his own coin. The relationship defines outcomes that are indistinguishable from P1's perspective.
V = ( (H,H) ╞ winner(P2), (H,T) ╞ winner(P1), (T,T) ╞ winner(P1), (T,H) ╞ winner(P2) ). Truth values in all possible outcomes.

25 MODAL LOGIC OVERVIEW
Example Kripke model:
(Diagram: worlds H/H, H/T, T/T, T/H with player 1's accessibility links.)
V = (H,H) ╞ winner(P2), (H,T) ╞ winner(P1), (T,T) ╞ winner(P1), (T,H) ╞ winner(P2)

26 MODAL LOGIC OVERVIEW
Kripke model:
(Diagram: worlds H/H, H/T, T/T, T/H with player 1's accessibility links.)
V = (H,H) ╞ winner(P2), (H,T) ╞ winner(P1), (T,T) ╞ winner(P1), (T,H) ╞ winner(P2)
Let's say player 1 flips heads and player 2 flips tails. Possible logical inferences that might be drawn:
(H,T) ╞ ◇winner(P1) = T (it is possible that player 1 is the winner)
(H,T) ╞ ◇winner(P2) = T (it is possible that player 2 is the winner)
∃x | (H,T) ╞ □winner(x) = F (there does not exist a player who is the winner in all possible worlds)
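These inferences can be checked mechanically. A small self-contained sketch; the data structures are illustrative, not from the presentation:

```python
# Checking the slide's inferences for the coin game at state (H,T).
winner = {("H", "H"): "P2", ("H", "T"): "P1",
          ("T", "T"): "P1", ("T", "H"): "P2"}

# From (H,T), player 1 only knows his own coin (H), so the worlds
# accessible under R1 are exactly those where his coin is H.
accessible = [w for w in winner if w[0] == "H"]  # [("H","H"), ("H","T")]

print(any(winner[w] == "P1" for w in accessible))  # ◇winner(P1): True
print(any(winner[w] == "P2" for w in accessible))  # ◇winner(P2): True
print(any(all(winner[w] == p for w in accessible)
          for p in ("P1", "P2")))                  # ∃x □winner(x): False
```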

27 MODAL LOGIC OVERVIEW
Revised example: Expanded Model M = { W, R, V }:
W = { (H/H), (H/T), (T/T), (T/H) }. States represent possible outcomes of the game.
R1 = { [(H/H), (H/T)], [(T/T), (T/H)] }
R2 = { [(H/H), (T/H)], [(T/T), (H/T)] }
We now have two relationship mappings. One defines outcomes that are indistinguishable from P1's perspective; the other defines outcomes indistinguishable from P2's perspective.
V = ( (H,H) ╞ winner(P2), (H,T) ╞ winner(P1), (T,T) ╞ winner(P1), (T,H) ╞ winner(P2) )

28 MODAL LOGIC OVERVIEW
Kripke model:
(Diagram: worlds H/H, H/T, T/T, T/H with accessibility links labeled 1 and 2 for each player.)
V = (H,H) ╞ winner(P2), (H,T) ╞ winner(P1), (T,T) ╞ winner(P1), (T,H) ╞ winner(P2)

29 MODAL LOGIC OVERVIEW
Kripke model:
(Diagram: worlds H/H, H/T, T/T, T/H with accessibility links labeled by player.)
V = (H,H) ╞ winner(P2), (H,T) ╞ winner(P1), (T,T) ╞ winner(P1), (T,H) ╞ winner(P2)
Let's say player 1 and player 2 both flip heads. Possible logical inferences that might be drawn:
∃x | (H,H) ╞ □1 winner(x) = F (in player one's view, there does not exist a player who is the winner in all possible worlds)
∃x | (H,H) ╞ □2 winner(x) = T (in player two's view, there exists a player (e.g., P2) who is the winner in all possible worlds)
Player 2 is able to infer the necessary result without knowing the actual state!

30 PROBABILISTIC MODAL LOGIC (PML) REVIEW
Adds a degree of certainty to modal logic (Halpern, 1987). Modal operators include probability assignments.
New operator: pr_i(x): the total probability of x across all accessible worlds from agent i's perspective.
Probability Kripke structure (W, PR, V):
W = set of worlds or states
PR = probability-based relationship between states
V = truth values assigned to predicates inside each state
From each state, the sum of probabilities across all accessible worlds is 1. The probability operator can be used to make inferences about predicates.
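A minimal sketch of the pr_i operator under this structure, assuming PR is stored as per-state probability tables; the function and argument names are illustrative:

```python
# Sketch of the pr_i operator: total probability, over the worlds
# accessible from `state` under agent i's relation PR_i, of the worlds
# where the predicate holds.
def pr(PR_i, state, holds):
    """PR_i: {state: {world: probability}}, probabilities summing to 1.
    holds: predicate over worlds. Returns the total probability."""
    return sum(p for world, p in PR_i[state].items() if holds(world))
```

A usage example follows the rigged-coin model below.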

31 MODAL LOGIC OVERVIEW
Revised example: Let's say player 1 rigs player 2's coin so it lands tails side up 70% of the time. Model M = { W, PR, V }:
W = { (H/H), (H/T), (T/T), (T/H) }
PR(1) = { PR[(H/H), (H/T)] = 0.7, PR[(H/H), (H/H)] = 0.3, PR[(T/T), (T/H)] = 0.3, PR[(T/T), (T/T)] = 0.7 }
Note: PR[(H/H), (H/T)] = 0.7 indicates that if player 1 is in state (H/H), he will tend to think with probability 0.7 that he is in state (H/T). He knows his own coin is H, and thinks the other player's coin is likely tails, because he rigged it.
PR(2) = { PR[(H/H), (T/H)] = 0.5, PR[(H/H), (H/H)] = 0.5, PR[(T/T), (H/T)] = 0.5, PR[(T/T), (T/T)] = 0.5 } (player 2 is unaware of the rigging, so his indistinguishable worlds remain equally likely)
V remains unchanged.

32 MODAL LOGIC OVERVIEW
Kripke model:
(Diagram: worlds H/H, H/T, T/T, T/H with probability-labeled accessibility edges, e.g. 1/0.7, 2/0.5, 1/0.3.)

33 MODAL LOGIC OVERVIEW
Kripke model:
(Diagram: the same worlds with probability-labeled edges, e.g. 1/0.7, 1/0.3, 2/0.5.)
Assume each player flips heads. Possible logical inferences:
(H,H) ╞ pr_1[winner(P2)] = 0.3 (in P1's view, the probability that P2 is the winner is 0.3)
∃x | (H,H) ╞ pr_1[winner(x)] > 0.6 (in P1's view there is a player, namely P1, who is more likely to be the winner)
(H,H) ╞ pr_2[winner(P2)] = 1.0 (in P2's view, the probability that P2 is the winner is 1)
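These numbers can be reproduced with the pr sketch above. A self-contained example; the world labels, written as (P1 coin, P2 coin), and the probability tables are illustrative encodings of PR:

```python
# Reproducing the rigged-coin inferences at state (H,H).
winner = {("H", "H"): "P2", ("H", "T"): "P1",
          ("T", "T"): "P1", ("T", "H"): "P2"}
PR1 = {("H", "H"): {("H", "T"): 0.7, ("H", "H"): 0.3}}  # P1 expects tails 70%
PR2 = {("H", "H"): {("T", "H"): 0.5, ("H", "H"): 0.5}}  # P2 unaware of rigging

def pr(PR_i, state, holds):
    return sum(p for w, p in PR_i[state].items() if holds(w))

s = ("H", "H")
print(pr(PR1, s, lambda w: winner[w] == "P2"))  # 0.3
print(pr(PR1, s, lambda w: winner[w] == "P1"))  # 0.7 (> 0.6)
print(pr(PR2, s, lambda w: winner[w] == "P2"))  # 1.0
```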

34 SENSOR FUSION WITH PROBABILISTIC MODAL LOGIC

35 EXAMPLE MODEL
Model M:
W = { S1 = Pre-War Operations, S2 = Withdraw, S3 = Advance, S4 = Dual Attack, S5 = West Attack, S6 = East Attack }
PR = { probability that, if you think you are in one state, you might instead be in another; i.e., indistinguishable states }
V = { truth values, in this example in terms of the predicate move(x, y), analogous to a key descriptor saying that x is moving near y. Formulas take the form move(x, y) → NS(z), where NS(z) indicates that the next state is state z. This is the information we are trying to fuse or derive. }
(Diagram: state-transition graph over S1 Prewar, S2 Withdraw, S3 Advance, S4 Split ATK, S5 West ATK, S6 East ATK, with the per-state probabilities detailed on the following slides, e.g. S1 0.4, S1→S2 0.1, S1→S3 0.5.)

36 MODEL SHOWN USING FCS FRAMEWORK
EOS Level: Prewar, Withdraw, Advance, Dual ATK, West ATK, East ATK
KD Level: Move(Tank, NAI1), ...
Sensor Level: ...

37 EOS S1 (Prewar Operations) EXPLANATION
(Map: NAI 1-6, OBJ DOG, OBJ CAT.)
Overview: Enemy inside assembly area.
R (similar states) & U (probabilities):
- S1 (prewar ops) with U = 0.4
- S2 (move out of area) with U = 0.1
- S3 (move into sector) with U = 0.5
V(S1):
Move(tanks, NAI 3) → NS(1)
Move(fuel, NAI 3) ∧ Move(tanks, NAI 3) → NS(2)
Move(tanks, NAI 1) ∨ Move(tanks, NAI 2) → NS(3)
Model/fusion notes: Each unique move(x, y) on the left side of an implication is taken from a key descriptor (KD). These three formulas demonstrate reasoning about the next EOS (note the right side of each implication).

38 EOS S2 (Leave Area) EXPLANATION
(Map: NAI 1-6, OBJ DOG, OBJ CAT.)
Overview: Enemy leaves area to fight elsewhere.
R (similar states) & U (probabilities):
- S2 (move out of area) with U = 0.6
- S3 (move into sector) with U = 0.4
V(S2):
Move(fuel, NAI 3) ∧ Move(tanks, NAI 3) → NS(2)
Move(tanks, NAI 1) ∨ Move(tanks, NAI 2) → NS(3)

39 EOS S3 (Move into Sector) EXPLANATION
(Map: NAI 1-6, OBJ DOG, OBJ CAT.)
Overview: Enemy begins moving forces into the battle area via routes across NAI 1 and NAI 2.
R (similar states) & U (probabilities):
- S3 (move into sector) with U = 0.2
- S4 (two-axis attack) with U = 0.2
- S5 (attack west) with U = 0.3
- S6 (attack east) with U = 0.3
V(S3):
Move(tanks, NAI 1) ∨ Move(tanks, NAI 2) → NS(3)
Move(tanks, NAI 1) → NS(4) ∨ NS(5)
Move(tanks, NAI 2) → NS(4) ∨ NS(6)

40 EOS S4 (Two-Axis Attack) EXPLANATION
(Map: NAI 1-6, OBJ DOG, OBJ CAT.)
Overview: Enemy attacks with two equal forces across NAI 5 and NAI 6 toward DOG and CAT.
R (similar states) & U (probabilities):
- S4 (two-axis attack) with U = 0.33
- S5 (attack west) with U = 0.33
- S6 (attack east) with U = 0.33
V(S4):
Move(tanks, NAI 5) ∧ Move(tanks, NAI 6) ∧ ¬Move(tanks, NAI 4) → NS(4)
Move(tanks, NAI 5) ∧ Move_West(tanks, NAI 4) → NS(5)
Move(tanks, NAI 5) ∧ Move_East(tanks, NAI 4) → NS(6)

41 EOS S5 (Attack West)
(Map: NAI 1-6, OBJ DOG, OBJ CAT; north arrow.)
Overview: Enemy attacks with one force south across NAI 1 and then NAI 5 to DOG. Another force crosses NAI 2, feints at NAI 6, and moves west across NAI 4 to mass forces in the west.
R (similar states) & U (probabilities):
- S4 (two-axis attack) with U = 0.4
- S5 (attack west) with U = 0.5
- S6 (attack east) with U = 0.1
V(S5):
Move(tanks, NAI 5) ∧ Move(tanks, NAI 6) ∧ ¬Move(tanks, NAI 4) → NS(4)
Move(tanks, NAI 5) ∧ Move_West(tanks, NAI 4) → NS(5)
Move(tanks, NAI 5) ∧ Move_East(tanks, NAI 4) → NS(6)

42 EOS S6 (Attack East)
(Map: NAI 1-6, OBJ DOG, OBJ CAT; north arrow.)
Overview: Enemy attacks with one force south across NAI 2 and then NAI 6 to CAT. Another force crosses NAI 1, feints at NAI 5, and moves east across NAI 4 to mass forces in the east.
R (similar states) & U (probabilities):
- S4 (two-axis attack) with U = 0.4
- S5 (attack west) with U = 0.1
- S6 (attack east) with U = 0.5
V(S6):
Move(tanks, NAI 5) ∧ Move(tanks, NAI 6) ∧ ¬Move(tanks, NAI 4) → NS(4)
Move(tanks, NAI 5) ∧ Move_West(tanks, NAI 4) → NS(5)
Move(tanks, NAI 5) ∧ Move_East(tanks, NAI 4) → NS(6)

43 INFORMATION FUSION USING PROBABILISTIC MODAL LOGIC
APPLICATION OF EXAMPLE MODEL

44 CURRENT STATE = S1 (Prewar Operations)
(Map: NAI 1-6, OBJ DOG, OBJ CAT.)
Overview: Enemy inside assembly area.
R (similar states) & U (probabilities): S1 (prewar ops) with U = 0.4; S2 (move out of area) with U = 0.1; S3 (move into sector) with U = 0.5.
V(S1):
Move(tanks, NAI 3) → NS(1)
Move(fuel, NAI 3) ∧ Move(tanks, NAI 3) → NS(2)
Move(tanks, NAI 1) ∨ Move(tanks, NAI 2) → NS(3)
V(S2):
Move(fuel, NAI 3) ∧ Move(tanks, NAI 3) → NS(2)
Move(tanks, NAI 1) ∨ Move(tanks, NAI 2) → NS(3)
V(S3):
Move(tanks, NAI 1) ∨ Move(tanks, NAI 2) → NS(3)
Move(tanks, NAI 1) → NS(4) ∨ NS(5)
Move(tanks, NAI 2) → NS(4) ∨ NS(6)
Receive data of tanks moving in NAI 3:
- Pr(NS(1)) = 0.4. NS(1) is true only in S1 (through the implication triggered by the new data of tanks at NAI 3), and S1 has probability 0.4.
- For all x ≠ 1, Pr(NS(x)) = 0.
- No state change; we decide to wait.
The fusion step lies in deriving/reasoning about the probabilities associated with the next state.

45 CURRENT STATE = S1 (Prewar Operations)
(Map: NAI 1-6, OBJ DOG, OBJ CAT.)
Overview: Enemy inside assembly area.
R (similar states) & U (probabilities): S1 (prewar ops) with U = 0.4; S2 (move out of area) with U = 0.1; S3 (move into sector) with U = 0.5.
V(S1), V(S2), V(S3): as on the previous slide.
Receive data of tanks still moving in NAI 3; receive new data of more tanks at NAI 1:
- Pr(NS(3)) = 0.9. This time NS(3) is implied in S1 and S3, which have world probabilities of 0.4 and 0.5 respectively; these sum to 0.9.
- We decide to update the state to S3.

46 CURRENT STATE = S3 (Move into Sector)
(Map: NAI 1-6, OBJ DOG, OBJ CAT.)
Overview: Enemy begins moving forces into the battle area via routes across NAI 1 and NAI 2.
R (similar states) & U (probabilities): S3 (move into sector) with U = 0.2; S4 (two-axis attack) with U = 0.2; S5 (attack west) with U = 0.3; S6 (attack east) with U = 0.3.
V(S3):
Move(tanks, NAI 1) ∨ Move(tanks, NAI 2) → NS(3)
Move(tanks, NAI 1) → NS(4) ∨ NS(5)
Move(tanks, NAI 2) → NS(4) ∨ NS(6)
V(S4), V(S5), V(S6):
Move(tanks, NAI 5) ∧ Move(tanks, NAI 6) ∧ ¬Move(tanks, NAI 4) → NS(4)
Move(tanks, NAI 5) ∧ Move_West(tanks, NAI 4) → NS(5)
Move(tanks, NAI 5) ∧ Move_East(tanks, NAI 4) → NS(6)
Receive data of tanks still moving in NAI 1 and NAI 2:
- Pr(NS(3)) = 0.2, Pr(NS(4)) = 0.2, Pr(NS(5)) = 0.3, Pr(NS(6)) = 0.3.
- No conclusive information; maintain equal defenses in east and west.

47 CURRENT STATE = S3 (Move into Sector)
(Map: NAI 1-6, OBJ DOG, OBJ CAT.)
Overview: Enemy begins moving forces into the battle area via routes across NAI 1 and NAI 2.
R (similar states) & U (probabilities): S3 (move into sector) with U = 0.2; S4 (two-axis attack) with U = 0.2; S5 (attack west) with U = 0.3; S6 (attack east) with U = 0.3.
V(S3):
Move(tanks, NAI 1) ∨ Move(tanks, NAI 2) → NS(3)
Move(tanks, NAI 1) → NS(4) ∨ NS(5)
Move(tanks, NAI 2) → NS(4) ∨ NS(6)
V(S4), V(S5), V(S6):
Move(tanks, NAI 5) ∧ Move(tanks, NAI 6) ∧ ¬Move(tanks, NAI 4) → NS(4)
Move(tanks, NAI 5) ∧ Move_West(tanks, NAI 4) → NS(5)
Move(tanks, NAI 5) ∧ Move_East(tanks, NAI 4) → NS(6)
Receive data of tanks moving in NAI 5 and NAI 6; no movement in NAI 4:
- Pr(NS(3)) = 0.2.
- Pr(NS(4)) = 0.8: 0.2 (via S4) + 0.3 (via S5) + 0.3 (via S6) = 0.8.
- Update the current state to S4.

48 CURRENT STATE = S4 (Two-Axis Attack)
(Map: NAI 1-6, OBJ DOG, OBJ CAT.)
Overview: Enemy attacks with two equal forces across NAI 5 and NAI 6 toward DOG and CAT.
R (similar states) & U (probabilities): S4 (two-axis attack) with U = 0.4; S5 (attack west) with U = 0.3; S6 (attack east) with U = 0.3.
V(S4), V(S5), V(S6):
Move(tanks, NAI 5) ∧ Move(tanks, NAI 6) ∧ ¬Move(tanks, NAI 4) → NS(4)
Move(tanks, NAI 5) ∧ Move_West(tanks, NAI 4) → NS(5)
Move(tanks, NAI 5) ∧ Move_East(tanks, NAI 4) → NS(6)
Receive data of tanks moving in NAI 5 and NAI 6; receive data of tanks moving west at NAI 4:
- Pr(NS(4)) = 0.0, Pr(NS(6)) = 0.0.
- Pr(NS(5)) = 1.0: true in each world, 0.4 + 0.3 + 0.3 = 1.0.
- Update the current state to S5 (West Attack).
- Reallocate defenses to protect DOG.

49 MODAL LOGIC FCS INFORMATION FUSION ALGORITHM
1. Assign probabilities to possible current states, representing the degree of certainty that we are in fact in that state.
2. Update key descriptors.
3. Evaluate key descriptors in each possible current-state world and generate estimates about the next state (from each world's perspective).
4. Combine or "fuse" these estimates, using the probabilities associated with each possible current state/world.
5. REPEAT.
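A hedged sketch of one pass of this loop, using the slide-47 numbers. The rule encoding is illustrative and covers only the formulas needed for that step; it also assumes tanks are still observed at NAI 1 and 2, consistent with the slide's Pr(NS(3)) = 0.2:

```python
# One fusion pass over the current-state beliefs U from slide 47.
U = {"S3": 0.2, "S4": 0.2, "S5": 0.3, "S6": 0.3}

def implied_next_states(world, kds):
    """Next states implied by the observed KDs, from this world's rules."""
    out = set()
    if world == "S3" and ({"Move(tanks,NAI1)", "Move(tanks,NAI2)"} & kds):
        out.add("NS(3)")
    if world in ("S4", "S5", "S6"):
        if ({"Move(tanks,NAI5)", "Move(tanks,NAI6)"} <= kds
                and "Move(tanks,NAI4)" not in kds):
            out.add("NS(4)")
    return out

def fuse(U, kds):
    """Pr(NS(k)) = sum of world probabilities whose rules imply NS(k)."""
    pr = {}
    for world, u in U.items():
        for ns in implied_next_states(world, kds):
            pr[ns] = pr.get(ns, 0.0) + u
    return pr

observed = {"Move(tanks,NAI1)", "Move(tanks,NAI2)",
            "Move(tanks,NAI5)", "Move(tanks,NAI6)"}
print(fuse(U, observed))  # {'NS(3)': 0.2, 'NS(4)': 0.8}, as on slide 47
```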

50 BAYESIAN BELIEF NETWORKS VS. MODAL LOGIC

51 EOS
H1 = Next_State(EOS1), H2 = Next_State(EOS2), P(Hi) = { 0.5, 0.5 }
(Network: the EOS node is the parent of KD1-KD4.)
P(KDi | Hi), listed as (value given H1, value given H2):
KD1: OFF (0.1, 0.5), ON (0.9, 0.5)
KD2: OFF (0.1, 0.5), ON (0.9, 0.5)
KD3: OFF (0.1, 0.1), ON (0.9, 0.9)
KD4: OFF (0.5, 0.1), ON (0.5, 0.9)

52 EOS
Assume: We call for an update. KDs 1-3 are ON (KD4 is OFF). P(Hi) = { 0.5, 0.5 }
P(KD | Hi) = (0.9, 0.5) (0.9, 0.5) (0.9, 0.9) (0.5, 0.1) = (0.3645, 0.0225)
P(Hi | KD) = α P(Hi) P(KD | Hi) = α (0.5, 0.5) (0.3645, 0.0225) = α (0.18225, 0.01125) = (0.9418, 0.0582)

53 EOS
Assume: We call for an update. All KDs are OFF. P(Hi) = { 0.5, 0.5 }
P(KD | Hi) = (0.1, 0.5) (0.1, 0.5) (0.1, 0.1) (0.5, 0.1) = (0.0005, 0.0025)
P(Hi | KD) = α (0.5, 0.5) (0.0005, 0.0025) = α (0.00025, 0.00125) = (0.1667, 0.8333)
* EOS 2 is now suddenly more likely with no "new" information!
* The relative weighting of negative evidence drives EOS 1 down.

54 EOS
What if we change our prior? P(Hi) = { 0.8, 0.3 }
Assume: We call for an update. All KDs are OFF.
P(KD | Hi) = (0.1, 0.5) (0.1, 0.5) (0.1, 0.1) (0.5, 0.1) = (0.0005, 0.0025)
P(Hi | KD) = α (0.8, 0.3) (0.0005, 0.0025) = α (0.0004, 0.00075) = (0.3478, 0.6522)
The relative weighting of negative evidence still drives EOS 1 down.
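All three updates on slides 52-54 can be checked with a short naive-Bayes sketch, assuming the conditional probability table as reconstructed on slide 51; the code names are illustrative:

```python
# Naive-Bayes update over hypotheses (H1, H2).
# p_on holds P(KD=ON | H1), P(KD=ON | H2) for each key descriptor.
p_on = {"KD1": (0.9, 0.5), "KD2": (0.9, 0.5),
        "KD3": (0.9, 0.9), "KD4": (0.5, 0.9)}

def posterior(prior, observed_on):
    """Posterior over (H1, H2) given the set of KDs observed ON."""
    like = [1.0, 1.0]
    for kd, probs in p_on.items():
        for i, p in enumerate(probs):
            like[i] *= p if kd in observed_on else 1.0 - p
    joint = [pr * l for pr, l in zip(prior, like)]
    z = sum(joint)                    # normalization constant (alpha)
    return [j / z for j in joint]

print(posterior([0.5, 0.5], {"KD1", "KD2", "KD3"}))  # slide 52: ~(0.9418, 0.0582)
print(posterior([0.5, 0.5], set()))                  # slide 53: ~(0.1667, 0.8333)
print(posterior([0.8, 0.3], set()))                  # slide 54: ~(0.3478, 0.6522)
```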

55 ISSUES WITH BBNs
We are forced to define the probability of each KD given each hypothesis:
- What if a KD provides no supporting information (positive or negative) about a hypothesis? What if the relationship is unknown? E.g., we know that moving tanks always indicates an attack, but we have no data on how this KD relates to a retreat. This forces an assumption. Does 0.5 for both ON and OFF represent indifference in the model?
- What about state-based reasoning? In the current model, the conditional probability for a KD given a possible next state is the same regardless of the current state. What if a KD can mean different things depending on our current state?
Prob(fever | patient_near_death, flu) = 0.3
Prob(fever | patient_near_death, heat_stroke) = 0.8

56 FUTURE RESEARCH DIRECTION AND ISSUES

57 ISSUES
How do we assign probabilities to like worlds?
How do we handle time? How do we combine KDs from different periods of time?
What if probabilities are so diluted that they don't provide enough information to make a decision?

58 FUTURE DIRECTION
Compare modal logic vs. other methods to reason about uncertainty in the FCS Information Fusion Framework:
- Bayesian belief networks
- Fuzzy logic
- Expert systems
Add modality to Bayesian belief networks.
Multi-tiered modal logics: use modal logic to reason about truth values at various levels of the information framework.
Use modal logic to illuminate those KDs that support EOSs other than the "winning" EOS. This might help identify deception and aid selection of efficient KD sets.
Combine graph theory and modal logic to help identify the optimal set of key descriptors and which key descriptors support which EOSs.

59 SUMMARY
FCS Information Fusion Framework: three-tiered model; abstract framework for information fusion
Review of modal logics: standard modal logic; probabilistic modal logic
Using modal logics for sensor information fusion
Future direction and research issues

60 QUESTIONS ?

61 CONCLUSION
"If a machine, a Terminator, can learn the value of human life, maybe we can too." - Sarah Connor, Terminator 2

