Terrorist Targeting, Information, and Coalition Behavior Maurice Koster Ines Lindner Gordon McCormick Guillermo Owen

Abstract

We consider a three-person game, played by a terrorist organization, its victim, and its host (a regime which may or may not be giving aid to the terrorists). Typically, the host would like to see the victim hurt, but wishes to avoid retaliation by the victim. Thus, any support given to the terrorists must be kept secret. The terrorists must decide how frequently to attack the victim: they can be bolder when receiving support from the host than they would be otherwise, because a supportive host will allow them to replenish their supplies. There will be substantial political costs if the victim retaliates against an innocent host, and therefore the victim has to decide whether the terrorists' boldness can be taken as evidence that the host is in fact supporting them. The host in turn must decide whether to support the terrorists.

Description of Problem

One of the problems facing the target (the Victim, V) of a terrorist attack is the difficulty of launching a counter-attack, even when the identity of those responsible for the initial attack (the Terrorists, T) is known. Generally, the Terrorists are very difficult to find, and can frequently hide among the population, so that any counter-attack is more likely to hurt bystanders. On the other hand, the Terrorists would find it hard to operate without the protection, and sometimes outright support, of some friendly regime (the Host, H). Thus it is tempting for the Victim to retaliate, not against the Terrorists, but against the regime which is thought to be helping them.

Unfortunately, it is not always clear that the Host is in fact collaborating with the Terrorists, and an attack against an innocent host can have serious political consequences. On the other hand, failure to reply can cause the Victim to lose face, and will allow the terrorists to build up their assets. Thus the Victim is faced with the problem of determining, from the terrorists’ actions, whether in fact such collaboration exists – or, more exactly, whether sufficient evidence of such collaboration will be found to avoid political costs.

Since the evidence can never be perfect, the Victim, in our analysis, must determine the type of evidence he will accept. At what point should he be willing to retaliate against the Host? In a similar way, the Terrorist is faced with the problem of deciding how frequently to attack the Victim. Should the Terrorist attack more frequently with or without the Host's support? At what level of resources should he attack? Finally, the Host has to decide whether to support the Terrorist. Are there situations in which he should stop supporting the Terrorist, so as to avoid a retaliatory attack from the Victim?

Description of Model

We assume that, if there is no coalition, a weak T can attack at a level q. He will in fact do this, but cannot increase the average value of these attacks above q. A strong T, with H's approval, can raise the level to p. We assume q < p ≤ 1. We assume a payoff to H of A(p-q) per time period, where A is the payoff obtained from a unit-level attack. There is a discount factor δ: payoffs or benefits of size z, obtained t time periods in the future, have a discounted present value of z exp{-δt}.

There is a suspicion factor, μ, which corresponds to V's beliefs about the existence of valid evidence of the H-T coalition. In fact, V assigns subjective probability μ/(1+μ) to this. At the beginning of the process, this has a relatively low value μ(0) = c. So long as the coalition exists, then as time progresses, μ increases exponentially, so that

μ(t) = c exp{λt}, where λ = p log(p/q) + (1-p) log((1-p)/(1-q)).
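The evidence dynamics can be sketched directly in Python (the function names below are ours, not the paper's): `growth_rate` is the exponent λ, and `suspicion` is μ(t).

```python
import math

def growth_rate(p, q):
    """Exponential growth rate of suspicion:
    lambda = p*log(p/q) + (1-p)*log((1-p)/(1-q)),
    the KL divergence between Bernoulli(p) and Bernoulli(q) attack processes."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def suspicion(t, p, q, c):
    """Suspicion factor mu(t) = c * exp(lambda * t), starting from mu(0) = c."""
    return c * math.exp(growth_rate(p, q) * t)

def evidence_probability(t, p, q, c):
    """V's subjective probability mu/(1+mu) that valid evidence of the coalition exists."""
    mu = suspicion(t, p, q, c)
    return mu / (1 + mu)
```

Note that growth_rate(q, q) = 0: if T attacks at the no-coalition level, suspicion never grows, which is why a bolder attack rate p > q is what generates evidence against H.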

Figure 1 shows λ as a function of p, using q = 0.3.

Let us assume that these attacks continue until the level μ = e·μ(0) is reached. This will happen at time τ = 1/λ. In that case, the total (undiscounted) payoff from these attacks will be A(p-q)/λ, as shown in Figure 2.

Figure 2: Total undiscounted damage

Note that, by choosing p slightly larger than 0.3 (the assumed value of q), H and T will do a great deal of total damage. However, this damage is spread out over a long period of time. (Think of 15 persons dead per year over the next 10,000 years. H and T would both prefer to kill 1000 persons per year over the next 30 years.) The point is that we have not discounted future payoffs. If we do so, we find that the discounted payoff is

A(p-q)(1-exp{-δt})/δ.
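The discounted total is easy to compute as a sketch (the function name is ours):

```python
import math

def discounted_damage(t, p, q, A, delta):
    """Discounted damage from attacks at excess level p - q over [0, t]:
    the integral of A*(p-q)*exp(-delta*s) ds = A*(p-q)*(1-exp(-delta*t))/delta."""
    return A * (p - q) * (1 - math.exp(-delta * t)) / delta
```

As t grows, this approaches the ceiling A(p-q)/δ, so damage pushed far into the future contributes almost nothing once discounted; this is why choosing p barely above q no longer pays for H and T.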

Total discounted damage as a function of p

At time t (or s), the process stops, either because V retaliates, or because H decides to break off his relationship with T. Assume V plans to retaliate at time t, whereas H plans to break off the relationship at time s. Suppose t ≤ s. Then V retaliates at time t, when μ has reached the value c exp{λt}. In that case there is probability μ/(1+μ) of a positive (to V) benefit, B, and probability 1/(1+μ) of a political cost, K. The expected, discounted value of this is

exp{-δt} (μB - K)/(1+μ).

Apart from this, V has been suffering from T's attacks, which, as seen before, have a total discounted value

-A(p-q)(1-exp{-δt})/δ.

Note that we treat this as negative because we are analyzing the process from V's standpoint. Suppose on the other hand s < t. Then the process ends because H breaks off at time s. There is then only the discounted payoff of the attacks.

Thus we have the payoff (to V)

Π(t, s) = M(t) if t ≤ s
Π(t, s) = N(s) if s < t

where

M(t) = -A(p-q)(1-exp{-δt})/δ + exp{-δt} (c exp{λt} B - K)/(1 + c exp{λt})
N(s) = -A(p-q)(1-exp{-δs})/δ.
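These payoff functions transcribe directly into code (a sketch; all names are ours):

```python
import math

def growth_rate(p, q):
    # lambda = p*log(p/q) + (1-p)*log((1-p)/(1-q))
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def M(t, p, q, A, B, K, delta, c):
    """V's payoff when V retaliates at t and H has not yet disengaged (t <= s)."""
    mu = c * math.exp(growth_rate(p, q) * t)
    damage = -A * (p - q) * (1 - math.exp(-delta * t)) / delta
    retaliation = math.exp(-delta * t) * (mu * B - K) / (1 + mu)
    return damage + retaliation

def N(s, p, q, A, delta):
    """V's payoff when H disengages at s before V retaliates (s < t)."""
    return -A * (p - q) * (1 - math.exp(-delta * s)) / delta

def Pi(t, s, p, q, A, B, K, delta, c):
    """Payoff (to V) of the strategy pair (t, s)."""
    if t <= s:
        return M(t, p, q, A, B, K, delta, c)
    return N(s, p, q, A, delta)
```

At the time when μ reaches K/B, the retaliation term of M vanishes and M equals N, which is the break-even property used below.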

The function M(t)

The function N(s)

Choice of Time

Now there are two critical values of the time variable. One of these is t*, the value which maximizes the function M. The other is t#, which corresponds to the value μ# = K/B; thus

t# = log(K/(Bc))/λ.

It is easily seen that, for t = t#, the second term in the definition of M vanishes, i.e., t# is the break-even point for retaliation. Thus, M(t#) = N(t#). Moreover, for t > t#, M(t) > N(t).
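The break-even time comes from solving c exp{λt} = K/B for t (a sketch; names ours):

```python
import math

def growth_rate(p, q):
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def t_sharp(p, q, B, K, c):
    """Break-even time t#: the time at which suspicion mu(t) = c*exp(lambda*t)
    reaches the critical level mu# = K/B."""
    return math.log(K / (B * c)) / growth_rate(p, q)
```

With the parameter values used in the later examples (B = 50, K = 100, c = 0.4), μ# = 2 and t# = log(5)/λ.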

Which is smaller? Assume first that t# ≤ t*. Suppose V chooses to retaliate at time t#. Then, if H chooses s > t#, we will have Π(t#, s) = M(t#). If, on the other hand, H disengages at s ≤ t#, then Π(t#, s) = N(s). Since N is monotone decreasing, and s ≤ t#, it follows that N(s) ≥ N(t#) = M(t#). Thus, for every value of s, we have Π(t#, s) ≥ M(t#).

Thus both V and H can guarantee an expected payoff of M(t#). Suppose, next, that H decides to disengage at time t# while V plans to retaliate at some time t. Then, if t < t#, we have Π(t, t#) = M(t). Since M increases monotonically for t < t*, and in this case t < t# ≤ t*, it follows that M(t) < M(t#). If, on the other hand, t ≥ t#, then Π(t, t#) = N(t#) = M(t#). Thus, for all t, we have Π(t, t#) ≤ M(t#).

Case II. Assume, on the other hand, that t* < t#. Suppose that V chooses to retaliate at t = t*. If H chooses s > t*, then Π(t*, s) = M(t*). If, on the other hand, H chooses s ≤ t*, then Π(t*, s) = N(s). Since N is monotone decreasing, we have N(s) ≥ N(t*), and, since t* ≤ t#, N(t*) ≥ M(t*). Thus, for any choice of s, Π(t*, s) ≥ M(t*).

In Case II, both V and H can guarantee the value M(t*). Suppose, next, that H chooses to disengage at s = t#. Then, if V chooses t < t#, Π(t, t#) = M(t) ≤ M(t*). If, on the other hand, V chooses t ≥ t#, Π(t, t#) = N(t#) = M(t#) ≤ M(t*). And so, for all t, Π(t, t#) ≤ M(t*).

Of course there remains the possibility that either, or both, of t* and t# may be negative. In this case V will retaliate at time 0 (i.e. immediately at the beginning of the process). The value will then be either 0 or M(0). We omit the details, but give the results in the table below.

| Order        | V's strategy | H's strategy | Payoff |
|--------------|--------------|--------------|--------|
| 0 ≤ t# ≤ t* | t#           | t#           | M(t#)  |
| 0 ≤ t* ≤ t# | t*           | t#           | M(t*)  |
| t# ≤ 0 ≤ t* | 0            | 0            | 0      |
| t* ≤ 0 ≤ t# | 0            | t#           | M(0)   |
| t# ≤ t* ≤ 0 | 0            | 0            | 0      |
| t* ≤ t# ≤ 0 | 0            | 0            | 0      |

Interpretation

The critical time t# is the moment at which the evidence of H's collusion with T is strong enough that the expected payoff from retaliation, (μB - K)/(μ + 1), first becomes positive. H has no interest in disengaging prior to this time. At t#, however, it becomes logical both for V to retaliate and for H to disengage. If, however, the cost of continuing attacks is sufficiently large, V might retaliate earlier, even at the risk of serious political costs, rather than put up with T's and H's attacks on a continuing basis.

Choice of p

Of course, the above calculations are based on a fixed value of p. In fact, H has wide latitude in choosing p, subject to the natural constraint p ≤ 1. We can expect that H will choose the value of p that minimizes the eventual payoff Π(t, s).

Example 1. Let A = 10, q = 0.3, B = 50, K = 100, δ = 0.1, and c = 0.4. We note first that, for these values of c, B, and K, μ# = 2, and t# = log(5)/λ. Thus the expected payoff is M(t*) whenever μ* < 2, and M(t#) otherwise, where μ* = μ(t*). Also, t* = log(2.5 μ*)/λ. We proceed to evaluate for values of p between 0.35 and 0.80.

Example 1

| p    | λ     | μ*    | t*     | μ# | t#     | M      |
|------|-------|-------|--------|----|--------|--------|
| 0.35 | .0058 | 1.829 | 262.89 | 2  | 278.33 | -5.000 |
| 0.40 | .0226 | 1.868 | 68.24  | 2  | 71.27  | -9.992 |
| 0.45 | .0498 | 2.085 | 33.14  | 2  | 32.30  | -14.41 |
| 0.50 | .0872 | 2.473 | 20.90  | 2  | 18.46  | -16.84 |
| 0.55 | .1345 | 3.022 | 15.03  | 2  | 11.96  | -17.44 |
| 0.60 | .1920 | 3.712 | 11.60  | 2  | 8.38   | -17.02 |
| 0.65 | .2600 | 4.522 | 9.33   | 2  | 6.19   | -16.15 |
| 0.70 | .3389 | 5.438 | 7.70   | 2  | 4.75   | -15.12 |
| 0.75 | .4298 | 6.455 | 6.47   | 2  | 3.74   | -14.05 |
| 0.80 | .5341 | 7.578 | 5.51   | 2  | 3.01   | -13.01 |

As may be seen, the minimum value, - 17.44, was obtained with p = 0.55. Thus optimal behavior calls for H to choose p = 0.55; the process will then last 11.96 time periods, at which time the break-even point is reached. H will then disengage, and V will retaliate.
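Example 1 can be checked numerically. This sketch (our own code; t* is found by a coarse grid search rather than the paper's formula) recomputes the equilibrium payoff M(min(t*, t#)) for each p and confirms that H's best choice is p = 0.55:

```python
import math

Q, A, B, K, DELTA, C = 0.3, 10.0, 50.0, 100.0, 0.1, 0.4

def growth_rate(p):
    return p * math.log(p / Q) + (1 - p) * math.log((1 - p) / (1 - Q))

def M(t, p):
    mu = C * math.exp(growth_rate(p) * t)
    damage = -A * (p - Q) * (1 - math.exp(-DELTA * t)) / DELTA
    return damage + math.exp(-DELTA * t) * (mu * B - K) / (1 + mu)

def game_value(p):
    # Break-even time t# and (grid-searched) maximizer t* of M.
    t_sh = math.log(K / (B * C)) / growth_rate(p)
    grid = [i * 0.01 for i in range(1, 30001)]   # t in (0, 300]
    t_star = max(grid, key=lambda t: M(t, p))
    # Equilibrium payoff to V is M at the earlier of the two critical times.
    return M(min(t_star, t_sh), p)

ps = [round(0.35 + 0.05 * i, 2) for i in range(10)]
best_p = min(ps, key=game_value)
```

H minimizes V's payoff, so `best_p` picks the p with the most negative game value.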

Example 2. Consider now the values A = 20, q = 0.3, B = 50, K = 100, δ = 0.1, and c = 0.4. Once again, μ# = 2 and t# = log(5)/λ, and the expected payoff is M(t*) whenever μ* < 2, and M(t#) otherwise. Also, t* = log(2.5 μ*)/λ. We once again consider p between 0.35 and 0.80:

Example 2

| p    | λ     | μ*    | t*     | μ# | t#     | M      |
|------|-------|-------|--------|----|--------|--------|
| 0.35 | .0058 | 1.589 | 238.51 | 2  | 278.33 | -10.00 |
| 0.40 | .0226 | 1.427 | 56.33  | 2  | 71.27  | -19.97 |
| 0.45 | .0498 | 1.424 | 25.48  | 2  | 32.30  | -28.58 |
| 0.50 | .0872 | 1.550 | 15.54  | 2  | 18.46  | -33.41 |
| 0.55 | .1345 | 1.797 | 11.16  | 2  | 11.96  | -34.82 |
| 0.60 | .1920 | 2.151 | 8.76   | 2  | 8.38   | -34.04 |
| 0.65 | .2600 | 2.596 | 7.19   | 2  | 6.19   | -32.31 |
| 0.70 | .3389 | 3.114 | 6.05   | 2  | 4.75   | -30.24 |
| 0.75 | .4298 | 3.696 | 5.17   | 2  | 3.74   | -28.11 |
| 0.80 | .5341 | 4.341 | 4.46   | 2  | 3.01   | -26.02 |

As may be seen, p = 0.55 is once again best for H. In this case, V will retaliate at time t* = 11.16, even though retaliation is still costly for him (μB < K), because he is suffering too much from T's attacks. But note that, if H were to choose p = 0.6, V would wait until retaliation was costless (μB = K). The reason for this is that evidence of H's collusion with T increases rapidly enough that this wait would not be too long: t# = 8.38.
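The switch described above can be verified with the same grid-search sketch, rerun with A = 20 (names and the search grid are ours): at p = 0.55 the maximizer t* of M comes before the break-even time t#, so V retaliates early, while at p = 0.60 the order reverses.

```python
import math

Q, A, B, K, DELTA, C = 0.3, 20.0, 50.0, 100.0, 0.1, 0.4

def growth_rate(p):
    return p * math.log(p / Q) + (1 - p) * math.log((1 - p) / (1 - Q))

def M(t, p):
    mu = C * math.exp(growth_rate(p) * t)
    damage = -A * (p - Q) * (1 - math.exp(-DELTA * t)) / DELTA
    return damage + math.exp(-DELTA * t) * (mu * B - K) / (1 + mu)

def critical_times(p):
    """Return (t*, t#): the grid-searched maximizer of M and the break-even time."""
    t_sh = math.log(K / (B * C)) / growth_rate(p)
    grid = [i * 0.01 for i in range(1, 30001)]
    t_star = max(grid, key=lambda t: M(t, p))
    return t_star, t_sh
```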

Variation in p

An obvious question is whether it might not be more profitable for the H-T coalition to vary p, perhaps decreasing it as μ approaches its critical value. While such a procedure seems reasonable, a continuously varying p seems rather difficult to implement, especially as communications between the two partners must of necessity be secret. It may be possible, of course, to vary p a few times, at discrete intervals.

Example 3. Assume the several parameters are as given in Example 2, above. In that example, we saw that the critical value μ# = 2 is 5 times the original value, c = 0.4; i.e., the process will end when μ has increased by a factor of 5. Rather than a fixed p, we will assume that H and T divide the period of attacks into 5 shorter periods, in each of which μ increases by a factor of 5^0.2 ≈ 1.38. In other words, there will be changes in p at μ = 0.55, 0.76, 1.05, and 1.45.
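The four switch points follow from equal multiplicative steps of μ (a short check; variable names are ours):

```python
# mu grows from c = 0.4 to mu# = 2 in five equal multiplicative steps of
# 5**(1/5) ~ 1.38; p changes each time mu crosses an intermediate level.
c, mu_sharp, steps = 0.4, 2.0, 5
ratio = (mu_sharp / c) ** (1 / steps)                       # 5**0.2
thresholds = [round(c * ratio ** k, 2) for k in range(1, steps)]
```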

Effects of varying p

| Starting μ | Optimal p | Time to next level | Damage this interval | Total |
|------------|-----------|--------------------|----------------------|-------|
| 0.40       | .66       | 1.17               | 4.25                 | 37.80 |
| 0.55       | .62       | 1.48               | 4.86                 | 33.55 |
| 0.76       | .57       | 2.06               | 5.79                 | 28.69 |
| 1.05       | .50       | 3.69               | 7.64                 | 22.90 |
| 1.45       | .41       | 11.84              | 15.26                | 15.26 |

Thus, the changes in p come rather quickly at the beginning of the process (when λ is large), and more slowly thereafter as λ decreases. It may be seen that the total payoff, which is here -37.80, is substantially better (for T) than the -34.82 which could be obtained if no changes in p were allowed.
