1 Network Economics -- Lecture 3: Incentives and games in security. Patrick Loiseau, EURECOM, Fall 2012

2 References J. Walrand, “Economic Models of Communication Networks”, in Performance Modeling and Engineering, Zhen Liu and Cathy H. Xia (Eds), Springer 2008 (tutorial given at SIGMETRICS 2008). – Available online: http://robotics.eecs.berkeley.edu/~wlr/Papers/EconomicModels_Sigmetrics.pdf N. Nisan, T. Roughgarden, E. Tardos and V. Vazirani (Eds), “Algorithmic Game Theory”, CUP 2007, Chapters 17, 18, 19, etc. – Available online: http://www.cambridge.org/journals/nisan/downloads/Nisan_Non-printable.pdf

3 Outline 1. Interdependence: investment and free riding 2. Information asymmetry 3. Attacker versus defender games

4 Outline 1. Interdependence: investment and free riding 2. Information asymmetry 3. Attacker versus defender games

5 Incentive issues in security Plenty of security solutions… – Cryptographic tools – Key distribution mechanisms – etc. …useless if users do not install them. Examples: – Software not patched – Private data not encrypted. The actions of one user affect the others → a game

6 A model of investment Jiang, Anantharam and Walrand, “How bad are selfish investments in network security”, IEEE/ACM ToN 2011. Set of users N = {1, …, n}; user i invests x_i ≥ 0 in security. Utility and assumptions: as defined in the paper.

7 Free-riding Positive externality → we expect free-riding. Nash equilibrium x^NE, social optimum x^SO. We look at the ratio of the social costs at x^NE and x^SO: it characterizes the ‘price of anarchy’
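
The utility function and assumptions on slide 6 appear only as figures in the deck, so the sketch below is an illustrative stand-in rather than the exact model of Jiang, Anantharam and Walrand: it assumes each player pays its own investment and suffers an expected loss that decreases with a weighted sum of everyone's investments (a positive externality), then computes the Nash equilibrium by best-response iteration, the social optimum by grid search, and the ratio of social costs.

```python
import numpy as np

# Illustrative two-player interdependent-security game. The utility form and the
# numbers below are assumptions for illustration only, NOT the exact model of
# Jiang, Anantharam and Walrand (2011).
W = np.array([[1.0, 0.6],
              [0.6, 1.0]])      # W[i, j]: how much j's investment protects i
L = np.array([5.0, 5.0])        # potential loss of each player
grid = np.linspace(0.0, 5.0, 201)

def cost(i, x):
    """Player i's cost: own investment plus expected residual loss."""
    return x[i] + L[i] * np.exp(-(W[i] @ x))

def social_cost(x):
    return sum(cost(i, x) for i in (0, 1))

# Nash equilibrium by best-response iteration on the grid.
x_ne = np.zeros(2)
for _ in range(50):
    for i in (0, 1):
        trials = np.tile(x_ne, (len(grid), 1))
        trials[:, i] = grid
        x_ne[i] = grid[int(np.argmin([cost(i, t) for t in trials]))]

# Social optimum by exhaustive grid search over both investments.
so_cost, x_so = min((social_cost(np.array([a, b])), (a, b))
                    for a in grid for b in grid)

print("NE investments:", x_ne, "-> social cost", round(social_cost(x_ne), 3))
print("SO investments:", x_so, "-> social cost", round(so_cost, 3))
print("Ratio NE/SO:", round(social_cost(x_ne) / so_cost, 3))
# The equilibrium under-invests relative to the social optimum: free-riding.
```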

8 Remarks Interdependence of security investments Examples: – DoS attacks – Virus infection Asymmetry of investment importance – Simpler model in Varian, “System reliability and free riding”, in Economics of Information Security, 2004

9 Price of anarchy Theorem: the PoA is bounded by the largest player importance (next slide), and the bound is tight

10 Comments The quantity appearing in the bound is player j’s importance to the society. The PoA is bounded by the importance of the player that matters most to society, regardless of g_i(·)

11 Examples

12 Bound tightness

13 Investment costs Modify the utility to include an investment cost; the PoA result is modified accordingly

14 Outline 1. Interdependence: investment and free riding 2. Information asymmetry 3. Attacker versus defender games

15 Information asymmetry Hidden actions – See previous lecture Hidden information – Market for lemons – Example: software security

16 Market for lemons Akerlof, 1970 – Nobel prize in 2001 100 car sellers – 50 have bad cars (lemons), willing to sell at $1k – 50 have good cars, willing to sell at $2k – Each knows the quality of their own car 100 car buyers – Willing to buy bad cars for $1.2k – Willing to buy good cars for $2.4k – Cannot observe the car quality

17 Market for lemons (2) What happens? What is the clearing price? Buyers only know the average quality – Willing to pay $1.8k But at that price, no good-car seller sells Therefore, buyers know they will buy a lemon – Pay at most $1.2k No good car is sold
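
A minimal numeric version of the unraveling argument above, using the prices from slide 16; the iteration (buyers offer the expected value of the cars still on the market, sellers below their reservation price drop out) is just an illustration of the reasoning, not part of the original slides.

```python
# Market for lemons: numeric version of the unraveling argument.
seller_value = {"lemon": 1000, "good": 2000}   # sellers' reservation prices (slide 16)
buyer_value  = {"lemon": 1200, "good": 2400}   # buyers' willingness to pay (slide 16)
counts       = {"lemon": 50,   "good": 50}

# Buyers cannot observe quality, so they offer the expected value of the cars
# currently on the market; sellers whose reservation price exceeds the offer
# withdraw, and the offer is recomputed. With these numbers the loop stops
# after one round of unraveling.
participating = {"lemon", "good"}
while True:
    total = sum(counts[q] for q in participating)
    offer = sum(counts[q] * buyer_value[q] for q in participating) / total
    staying = {q for q in participating if seller_value[q] <= offer}
    if staying == participating:
        break
    participating = staying

print(f"Buyers' offer settles at ${offer:.0f}; cars still traded: {participating}")
# First round: offer = $1800 and the good cars leave; second round: offer = $1200
# and only lemons remain -- no good car is sold, exactly as argued above.
```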

18 Market for lemons (3) This is a market failure – Created by externalities: bad-car sellers impose an externality on good-car sellers by decreasing the average quality of cars on the market Software security: – The vendor can know the security of its product – Buyers have no reason to trust them So they won’t pay a premium Another example: insurance for older people

19 Outline 1. Interdependence: investment and free riding 2. Information asymmetry 3. Attacker versus defender games

20 Network security [Symantec 2011] Security threats increase due to technology evolution – Mobile devices, social networks, virtualization Cyberattacks are the top risk faced by businesses – 71% had at least one in the last year Top 3 losses due to cyberattacks – Downtime, employee identity theft, theft of intellectual property Losses are substantial – 20% of businesses lost > $195k → Tendency to start using analytical models to optimize the response to security threats

21 Attacker-defender games Attackers learn the defense strategies and adapt Toy example – You observe that every time a thief breaks into your house, it is on a weekday – You pay two guards to stay on weekdays – Next time, the thief will break into your house at the weekend! Same situation in many applications – Spam detection, intrusion detection → Strategic players

22 Intrusion detection systems (IDS) Detect unauthorized use of a network by monitoring traffic – Signature-based (store signatures of known attacks): Snort, Bro – Anomaly-based (compare to “normal” behavior) Monitoring has a cost – CPU (e.g., for real-time analysis) [Alpcan, Basar 2011]

23 The simplest game Attacker: {attack, no attack} ({a, na}) Defender: {monitor, no monitor} ({m, nm}) Payoffs: a 2x2 matrix with rows a, na and columns m, nm “Safe strategy” (or min-max) – Attacker: na – Defender: m if α_s > α_f, nm if α_s < α_f

24 The simplest game: Nash equilibrium Payoffs: as on the previous slide (rows a, na; columns m, nm) Non-zero-sum game There is no pure-strategy NE Mixed-strategy NE: – Neutralize the opponent (make him indifferent) – The opposite of optimizing one’s own payoff (each player’s mix is independent of his own payoff)
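
The payoff matrices of this 2x2 game are shown only as figures in the deck, so the sketch below uses placeholder numbers (an assumption, not the lecture's values) to illustrate how the mixed equilibrium is obtained from the indifference conditions described above: each player randomizes so as to make the opponent indifferent between his two actions.

```python
import numpy as np

# Placeholder payoffs (assumed for illustration; not the lecture's numbers).
# Rows: attacker plays a (attack) or na; columns: defender plays m (monitor) or nm.
A = np.array([[-1.0, 2.0],    # attacker's payoffs
              [ 0.0, 0.0]])
D = np.array([[ 1.0, -3.0],   # defender's payoffs
              [-0.5,  0.0]])

# With these numbers no pure-strategy NE exists (every cell has a profitable
# deviation). Mixed NE from the indifference conditions:
#  - the attack probability p makes the defender indifferent between m and nm,
#  - the monitoring probability q makes the attacker indifferent between a and na.
p = (D[1, 1] - D[1, 0]) / ((D[0, 0] - D[0, 1]) + (D[1, 1] - D[1, 0]))
q = (A[1, 1] - A[0, 1]) / ((A[0, 0] - A[1, 0]) + (A[1, 1] - A[0, 1]))

print(f"Attacker attacks with probability  p = {p:.3f}")
print(f"Defender monitors with probability q = {q:.3f}")
# Note that p depends only on the DEFENDER's payoffs and q only on the
# ATTACKER's payoffs: each player neutralizes the opponent instead of
# optimizing its own payoff, as stated on this slide.
```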

25 A Bayesian game formulation [Liu et al 2006]: the player facing the defender is either malicious or regular, and the defender does not know which

26 A Bayesian game formulation (cont’d) [Liu et al 2006] – w: value of the protected asset – α: detection rate – β: false alarm rate – c_a: cost of attack – c_m: cost of monitoring Attacker’s expected payoff from attacking a monitoring defender: -αw + (1-α)w - c_a

27 Bayesian Nash equilibrium If the prior probability that the player is malicious is small enough, there is a pure-strategy equilibrium: – Attack if malicious – Do not monitor Otherwise, there is no pure-strategy equilibrium: – The attacker attacks with a probability that makes the defender indifferent – The defender monitors with a probability that makes the attacker indifferent
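
The thresholds and equilibrium probabilities on this slide appear as formulas in the original figures. The sketch below reproduces the same structure numerically: only the attacker's attack-versus-monitoring payoff, (1-2α)w - c_a, comes from slide 26; the defender's payoff entries and all parameter values are illustrative assumptions. For a small prior of facing a malicious player the pure equilibrium (attack, no monitor) obtains; otherwise both sides mix according to indifference conditions.

```python
# Bayesian IDS game: numeric sketch of the equilibrium structure on slide 27.
# Defender payoffs and parameter values below are assumptions, not Liu et al.'s.
w, alpha, beta = 10.0, 0.7, 0.1    # asset value, detection rate, false-alarm rate
c_a, c_m = 2.0, 1.0                # attack cost, monitoring cost
p0 = 0.4                           # prior probability that the player is malicious

# Attacker (malicious type): rows (attack, no attack), columns (monitor, no monitor).
# The attack-vs-monitor entry is the payoff -alpha*w + (1-alpha)*w - c_a from slide 26.
A = [[(1 - 2 * alpha) * w - c_a, w - c_a],
     [0.0, 0.0]]
# Assumed defender payoffs against a malicious player and against a regular user.
D_mal = [[alpha * w - (1 - alpha) * w - c_m, -w],   # if the malicious type attacks
         [-beta * w - c_m, 0.0]]                    # if the malicious type does not attack
D_reg = [-beta * w - c_m, 0.0]                      # facing a regular (never-attacking) user

def monitoring_gain(theta):
    """E[U_D(monitor)] - E[U_D(no monitor)] if the malicious type attacks w.p. theta."""
    mal = theta * (D_mal[0][0] - D_mal[0][1]) + (1 - theta) * (D_mal[1][0] - D_mal[1][1])
    return p0 * mal + (1 - p0) * (D_reg[0] - D_reg[1])

if monitoring_gain(1.0) <= 0:
    # Even against an always-attacking malicious type, monitoring does not pay
    # (this happens when p0 is small): pure BNE, attack if malicious, never monitor.
    # (Attacking is worthwhile here because w - c_a > 0.)
    print("Pure BNE: malicious type attacks, defender does not monitor")
else:
    # Mixed BNE: each side mixes so that the other is indifferent.
    q = (A[0][1] - A[1][1]) / ((A[1][0] - A[0][0]) + (A[0][1] - A[1][1]))
    g0, g1 = monitoring_gain(0.0), monitoring_gain(1.0)
    theta = -g0 / (g1 - g0)        # monitoring_gain is linear in theta
    print(f"Mixed BNE: attack probability = {theta:.3f}, monitoring probability = {q:.3f}")
```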

28 Classification games [Dritsoula, L., Musacchio 2012] Defender: observes the number of FS (file server) and MS (mail server) attacks over N periods Spammer: non-strategic, hits FS and MS at random Spy: strategically selects its number of FS hits H over the N periods (Game tree: Nature picks a spy with probability p and a spammer with probability 1-p; the attacker then hits the File Server or the Mail Server)

29 Thief or fox? (Illustration: a “poor” shepherd with cheaper chickens and precious goats must decide whether he faces an animal thief or a fox)

30 Formulation Spy picks the number of times H ∈ {0, …, N} to hit FS Defender picks a threshold T ∈ {0, …, N+1} – Classifies spy if H ≥ T – Classifies spammer if H < T Random variable S: number of times a spammer hits FS Spy cost: J_S = c_d·1{T≤H} - c_a·H Defender reward: U_D = p(c_d·1{T≤H} - c_a·H) - (1-p)·c_fa·P(S ≥ T) Rescaled: Ũ_D = c_d·1{T≤H} - c_a·H - (1/p - 1)·c_fa·P(S ≥ T)
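
A direct transcription of these payoffs into code; a minimal sketch that additionally assumes the spammer's number of FS hits S is Binomial(N, θ_0) (the parameter θ_0 appears on slide 37, but the distribution itself is an assumption here).

```python
from scipy.stats import binom

def spy_cost(H, T, c_d, c_a):
    """J_S = c_d * 1{T <= H} - c_a * H."""
    return c_d * (T <= H) - c_a * H

def defender_payoff_rescaled(H, T, c_d, c_a, c_fa, p, N, theta0):
    """Rescaled payoff c_d * 1{T <= H} - c_a * H - (1/p - 1) * c_fa * P(S >= T),
    with the assumption S ~ Binomial(N, theta0) for the spammer's FS hits."""
    false_alarm = (1 / p - 1) * c_fa * binom.sf(T - 1, N, theta0)   # P(S >= T)
    return spy_cost(H, T, c_d, c_a) - false_alarm

# Example with the first parameter set of slide 37:
print(spy_cost(H=5, T=3, c_d=15, c_a=23))
print(defender_payoff_rescaled(H=5, T=3, c_d=15, c_a=23, c_fa=10, p=0.2, N=7, theta0=0.1))
```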

31 Nash equilibrium is in mixed strategies The spy seeks to attack just below T; the defender seeks to set T just equal to H → no pure-strategy NE, both players mix α: the spy’s distribution over the number of FS attacks (probability of hitting FS 0, 1, …, N times) β: the defender’s distribution over thresholds (probability of setting the threshold to 0, 1, …, N+1)

32 Payoff formulation in matrix form Spy’s cost: J_S = c_d·1{T≤H} - c_a·H = α'Λβ Λ: (N+1)x(N+2) matrix with entries Λ(H, T) = c_d·1{T≤H} - c_a·H Technicality: add a constant so that Λ > 0 – Simple shift – Equilibrium unchanged

33 Payoff formulation in matrix form (cont’d) Defender’s payoff: U_D = c_d·1{T≤H} - c_a·H - (1/p - 1)·c_fa·P(S ≥ T) = α'Λβ - μ'β, where μ'β is the false-alarm penalty Remark: in a general bimatrix game – J_S = α'Λβ, U_D = α'Πβ – computing the NE is computationally hard Here: an almost-zero-sum game [Gueye et al. 2011] – J_S = α'Λβ, U_D = α'Λβ - μ'β
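
The matrix construction of slides 32 and 33 can be written out directly; the only extra assumption in this sketch is again that the spammer's FS hits S follow a Binomial(N, θ_0) distribution, so that P(S ≥ T) has a closed form.

```python
import numpy as np
from scipy.stats import binom

def build_game(N, theta0, c_d, c_a, c_fa, p):
    """Lambda: (N+1) x (N+2) matrix with Lambda[H, T] = c_d * 1{T <= H} - c_a * H;
    mu[T] = (1/p - 1) * c_fa * P(S >= T), with S ~ Binomial(N, theta0) assumed."""
    H = np.arange(N + 1)[:, None]      # spy's number of FS hits, 0..N (rows)
    T = np.arange(N + 2)[None, :]      # defender's threshold, 0..N+1 (columns)
    Lam = (c_d * (T <= H) - c_a * H).astype(float)
    mu = (1 / p - 1) * c_fa * binom.sf(np.arange(N + 2) - 1, N, theta0)
    return Lam, mu

# First parameter set of slide 37:
Lam, mu = build_game(N=7, theta0=0.1, c_d=15, c_a=23, c_fa=10, p=0.2)
print(Lam.shape, mu.shape)     # (8, 9) (9,)
# Expected spy cost is alpha' Lam beta and expected defender payoff is
# alpha' Lam beta - mu' beta, for mixed strategies alpha (spy) and beta (defender).
```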

34 Main theorem [Dritsoula, L., Musacchio 2012] In any NE, the defender’s strategy β maximizes the defendability θ(β) = min[Λβ] - μ'β A maximizing β exists among strategies of one of two particular forms, parametrized by some s If there is a unique maximizing β → unique NE
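
Since the theorem says the defender's equilibrium strategy maximizes the defendability, one simple way to compute it (not the paper's own closed-form search over the two candidate forms) is a small linear program: maximize z subject to (Λβ)_H - μ'β ≥ z for every H, with β a probability vector. The sketch below does exactly that, again under the Binomial spammer assumption used above.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.stats import binom

N, theta0, c_d, c_a, c_fa, p = 7, 0.1, 15.0, 23.0, 10.0, 0.2   # slide 37, first set

H = np.arange(N + 1)[:, None]
T = np.arange(N + 2)[None, :]
Lam = (c_d * (T <= H) - c_a * H).astype(float)                       # slide 32
mu = (1 / p - 1) * c_fa * binom.sf(np.arange(N + 2) - 1, N, theta0)  # slide 33 (Binomial assumed)

# Variables: beta (N+2 threshold probabilities) and z = defendability.
# Maximize z subject to (Lam @ beta)_H - mu @ beta >= z for all H, sum(beta) = 1, beta >= 0.
n_beta = N + 2
c = np.concatenate([np.zeros(n_beta), [-1.0]])                 # linprog minimizes, so minimize -z
A_ub = np.hstack([-(Lam - mu[None, :]), np.ones((N + 1, 1))])  # encodes z - (Lam beta)_H + mu'beta <= 0
b_ub = np.zeros(N + 1)
A_eq = np.concatenate([np.ones(n_beta), [0.0]])[None, :]
b_eq = [1.0]
bounds = [(0, None)] * n_beta + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
beta, z = res.x[:n_beta], res.x[-1]
print("Maximum defendability:", round(z, 3))
print("Defender's threshold distribution beta:", np.round(beta, 3))
# By the theorem, when the maximizer is unique this beta is the defender's NE strategy.
```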

35 Main theorem (consequences) Computation of NE in polynomial time

36 Conclusion: NE The defender’s strategy is one of the two candidate forms (mixes of defender threshold strategies, illustrated on the slide with a weight involving c_a/c_d), or possibly a mix of these Search over all of these to find the best defendability Invert a submatrix of Λ to find the attacker’s mix

37 Simulation results coincide with theory Players’ NE strategies for N = 7, shown for two parameter sets: θ_0 = 0.1, c_d = 15, c_a = 23, c_fa = 10, p = 0.2 and θ_0 = 0.1, c_d = 10, c_a = 1, c_fa = 10, p = 0.8

38 Spy’s distribution The attacker must keep the defender “indifferent” across the thresholds in the defender’s strategy Increasing the threshold from T to T+1 – decreases the false-alarm penalty by an amount ∝ P(S = T) – thus P(H = T) ∝ P(S = T), so that the corresponding increase in the missed-detection penalty balances it
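
Written out from the rescaled defender payoff on slide 30, the balance condition above is: raising the threshold from T to T+1 changes the defender's expected payoff by

$$
\tilde U_D(T+1) - \tilde U_D(T) \;=\; -\,c_d \,\Pr(H = T) \;+\; \Big(\tfrac{1}{p} - 1\Big)\, c_{fa} \,\Pr(S = T),
$$

and indifference across the thresholds in the defender's support requires this to vanish, i.e. c_d·P(H = T) = (1/p - 1)·c_fa·P(S = T), which gives P(H = T) ∝ P(S = T).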

39 Spy’s strategy The spy’s NE strategy is a truncated version of the spammer’s distribution plus a “max” attack (figure parameters: N = 100, θ_0 = 0.1, c_d = c_fa = 142, c_a = 1, p = 0.1)

40 Concluding remarks Attackers are not dumb when there is big money at stake – The defender must take this into account The interaction between statistical learning and game theory is still largely under-explored – Application to spam filtering: [Nelson et al. 2009] showed that a spammer who knows your spam filter’s learning rules and controls only 1% of your training set can shape spam to pass through the filter – Thousands of other applications – An exciting research problem

41 References [Symantec 2011] “2011 State of Security Survey”, Symantec 2011 [Alpcan, Basar 2011] “Network Security: A Decision and Game Theoretic Approach”, Alpcan, Basar, CUP 2011 [Liu et al 2006] “A Bayesian Game Approach for Intrusion Detection in Wireless Ad Hoc Networks”, Liu, Comaniciu, Man, Valuetools 2006 [Gueye, Walrand, Anantharam 2011], “A Network Topology Design Game: How to Choose Communication Links in an Adversarial Environment?”, Gueye, Walrand, Anantharam, GameNets 2011 [Nelson et al. 2009] “Misleading Learners: Co-opting Your Spam Filter”, Nelson, Barreno, Chi, Joseph, Rubinstein, Saini, Sutton, Tygar, Xia, In Machine Learning in Cyber Trust: Security, Privacy, Reliability, Springer 2009 [Dritsoula, L., Musacchio 2012] “Computing the Nash Equilibria of Intruder Classification Games”, Lemonia Dritsoula, Patrick Loiseau, John Musacchio, in Proceedings of GameSec 2012 [Dritsoula, L., Musacchio 2012] “A Game-Theoretical Approach for Finding Optimal Strategies in an Intruder Classification Game”, Lemonia Dritsoula, Patrick Loiseau, John Musacchio, in Proceedings of IEEE CDC 2012

