
DARWIN: Distributed and Adaptive Reputation Mechanism for Wireless Ad-hoc Networks. CHEN Xiao Wei, Cheung Siu Ming. CSE, CUHK. May 15, 2008. This talk is based on the paper: Juan José Jaramillo and R. Srikant, "DARWIN: Distributed and Adaptive Reputation Mechanism for Wireless Ad-hoc Networks," in Proc. of the ACM 13th Annual International Conference on Mobile Computing and Networking (MobiCom'07), Montreal, Canada, Sept. 2007.

Outline: Introduction · Basic Game Theory Concepts · Network Model · Analysis of Prior Proposals (Trigger Strategies, Tit For Tat, Generous Tit For Tat) · DARWIN (Contrite Tit For Tat, Definition, Performance Guarantees, Collusion Resistance, Algorithm Implementation) · Simulations (Settings, Results) · Conclusion & Comments

Introduction. A source communicates with distant destinations using intermediate nodes as relays. Cooperation: nodes help relay packets for each other. In wireless networks, nodes can be selfish users that want to maximize their own welfare, so incentive mechanisms are needed to enforce cooperation.

Introduction (Cont.). Two types of incentive mechanisms: credit exchange systems (cooperation enforced by payment) and reputation-based systems (cooperation enforced by neighbors' observations).

Introduction (Cont.). Main issue: due to packet collisions and interference, cooperative nodes are sometimes perceived as selfish, which can trigger retaliation. Contributions: analyze the robustness of prior reputation strategies; propose a new reputation strategy (DARWIN) and prove its robustness, collusion resistance, and full cooperation.

The Prisoners' Dilemma Game. A Nash equilibrium is a strategy profile with the property that no player can benefit by unilaterally deviating from its strategy. Repeated Prisoner's Dilemma game: the total payoff function is the discounted sum of the stage payoffs:
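In the standard repeated-game formulation (a sketch; the discount factor δ ∈ (0,1) and the stage-payoff notation u_i(k) are assumed here, not taken from the slide), the discounted sum for player i is:

```latex
U_i = \sum_{k=0}^{\infty} \delta^{k}\, u_i(k)
```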

Network Model. Assumptions: nodes are selfish and rational, not malicious; nodes operate in promiscuous mode; the value of a packet is at least the cost of the resources used to send it (α ≥ 1); any two neighbors have uniform traffic demands toward each other, so the interaction reduces to a two-player game. Other assumptions: the two nodes simultaneously decide whether to drop or forward their respective packets, and the game is repeated iteratively; game time is divided into slots.

Payoff Matrix Affine Transformation

Payoff Matrix (Cont.). Define p_e ∈ (0,1) to be the probability that a packet that has been forwarded is not overheard by the originating node. Also define the perceived dropping probability of node i's neighbor at time slot k ≥ 0, as estimated by node i.

Payoff Function. Each time slot k yields an average stage payoff; the discounted average payoff of player i starting from time slot n is then given by:
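In standard normalized form (a sketch; δ ∈ (0,1) is an assumed discount factor and u_i(k) denotes the average payoff of player i at slot k), the discounted average payoff from slot n is:

```latex
U_i(n) = (1-\delta) \sum_{k=n}^{\infty} \delta^{\,k-n}\, u_i(k)
```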

N-step Trigger Strategy. If node i's neighbor cooperates, its perceived dropping probability settles at p_e, so the optimal threshold is T = p_e. In practice p_e is hard to estimate perfectly, so there are two cases: if T < p_e, even a fully cooperative neighbor will eventually be perceived as selfish and punished; if T > p_e, player −i will be perceived as cooperative even while dropping packets with some positive probability. Hence full cooperation is never the Nash equilibrium point with trigger strategies.

Trigger Strategies. Define the dropping probability node i should use at time slot k according to strategy S.
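The core problem with trigger strategies can be illustrated with a small simulation (a sketch under assumed parameters, not the paper's model): a neighbor forwards every packet, but each forwarded packet is missed by the promiscuous monitor with probability p_e, so the perceived dropping probability hovers around p_e rather than the true value 0.

```python
import random

def perceived_dropping(p_e: float, packets: int, seed: int = 1) -> float:
    """Fraction of forwarded packets the monitor fails to overhear.

    The neighbor is fully cooperative; every 'drop' observed here is
    purely a measurement error occurring with probability p_e.
    """
    rng = random.Random(seed)
    missed = sum(1 for _ in range(packets) if rng.random() < p_e)
    return missed / packets

p_e = 0.1
q = perceived_dropping(p_e, 100_000)
# q converges to p_e, not to the true dropping probability 0: any
# trigger threshold T < p_e would punish this cooperative node.
```

This is why the optimal threshold equals p_e exactly, and why imperfect estimates of p_e break trigger strategies.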

Tit For Tat. TFT strategy: Milan et al. proved that TFT also fails to provide the right incentives for cooperation in wireless networks.

Generous Tit For Tat. GTFT uses a generosity factor g that allows cooperation to be restored. GTFT is a robust strategy: no node can gain by deviating from the expected behavior, even though robustness alone does not yield full cooperation. According to the corollary, if both nodes use GTFT then cooperation is achieved on the equilibrium path if and only if g = p_e. So GTFT also needs a perfect estimate of p_e.

DARWIN. GOAL: propose a reputation strategy that achieves full cooperation without depending on a perfect estimate of p_e. FOUNDATION: the "Contrite Tit For Tat" strategy from the iterated Prisoners' Dilemma.

Contrite Tit For Tat. Based on the idea of contriteness: every player is in good standing at the first stage; a player should cooperate if it is in bad standing, or if its opponent is in good standing; otherwise, the player should defect.
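The three rules above can be sketched as a per-stage decision function (a minimal sketch; the standing-update convention below is an assumption, since the slide only states the action rule):

```python
GOOD, BAD = "good", "bad"

def ctft_action(my_standing: str, opp_standing: str) -> str:
    # Cooperate out of contrition (I am in bad standing) or out of
    # respect (my opponent is in good standing); otherwise defect,
    # which is a justified retaliation under CTFT.
    if my_standing == BAD or opp_standing == GOOD:
        return "C"
    return "D"

def update_standing(prescribed: str, played: str) -> str:
    # Assumed convention: deviating from the prescribed action (e.g. an
    # accidental defection caused by a collision) puts a player in bad
    # standing; playing as prescribed keeps or restores good standing.
    return GOOD if played == prescribed else BAD
```

Contrition is what breaks retaliation spirals: after an accidental defection the offender cooperates while its opponent may justifiably defect once, and both return to good standing.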

DARWIN. Note: the strategy uses historic information, e.g. q_i(k−1); q_i(k) acts as a measurement of bad standing. Can you find the "Contrite Tit For Tat" idea?
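The qualitative shape of such a rule can be sketched as follows (an illustrative sketch only, not the paper's exact equation: it assumes the dropping probability reacts to how much the previous slot's measurement q_prev exceeds the error level p_e, scaled by the parameter γ):

```python
def darwin_drop_prob(q_prev: float, p_e: float, gamma: float = 2.0) -> float:
    """Illustrative responsive-forwarding rule.

    q_prev: neighbor's perceived dropping probability from the
            previous time slot (the historic information).
    p_e:    estimated probability of a missed overhearing.
    gamma:  punishment gain (gamma = 2 matches the simulations).
    """
    excess = max(q_prev - p_e, 0.0)   # drops at the error level are forgiven
    return min(1.0, gamma * excess)   # clip to a valid probability
```

The max(·, 0) term is where the contrite idea shows up: perceived drops up to the measurement-error level p_e are treated as noise rather than bad standing, so accidental "defections" do not trigger retaliation.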

Performance Guarantees. Theorem: Assume 1 < γ …


Performance Guarantees. Estimated error probability: p_e^(e) = p_e + Δ, where −p_e < Δ < 1 − p_e. Substitute into the previous equation; for the assumption (1 < γ …) to be true, …


Performance Guarantees. LEMMA: If both nodes use DARWIN, then cooperation is achieved on the equilibrium path; that is, p_i(k) = p_−i(k) = 0 for all k ≥ 0.

Collusion Resistance. Define the discounted average payoff of player i using strategy S_i when it plays against player −i using strategy S_−i. Define p_s ∈ (0,1) to be the probability that a node implementing DARWIN interacts with a colluding node.

Collusion Resistance (Cont.). From this we obtain the average payoff to a cooperative node, and similarly the average payoff to a colluding node that interacts with a node implementing DARWIN.
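Conditioning on the type of the opponent gives the standard mixture form (a sketch under the slide's definitions; U(X, Y) here denotes the payoff to a node playing strategy X against strategy Y, and using the same p_s split for the colluding node is an assumed symmetry):

```latex
U(D) = (1-p_s)\,U(D, D) + p_s\,U(D, S), \qquad
U(S) = (1-p_s)\,U(S, D) + p_s\,U(S, S)
```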

Collusion Resistance (Cont.). The average payoff is bounded. A group of colluding nodes cannot gain from unilaterally deviating if and only if U(S) < U(D), that is:

Collusion Resistance (Cont.). Define strategy S to be a sucker strategy if …

Algorithm Implementation. Connectivity denotes the forwarding ratio. Track the number of messages sent to j for forwarding and the number of messages j actually forwarded; j's average connectivity ratio is then computed from these counts.

Algorithm Implementation (Cont.). Use equations (6) and (7) to find the dropping probability to meet the target. We estimate p_e as the fraction of time at least one node different from j transmits.
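The bookkeeping described above can be sketched as two helpers (names and edge-case conventions are assumptions, not from the paper):

```python
def connectivity_ratio(sent_to_j: int, forwarded_by_j: int) -> float:
    """j's connectivity (forwarding) ratio over one measurement window.

    sent_to_j:      packets node i handed to neighbor j for forwarding.
    forwarded_by_j: packets i overheard j forwarding (promiscuous mode).
    """
    if sent_to_j == 0:
        return 1.0  # assumed convention: no evidence yet, presume cooperative
    return forwarded_by_j / sent_to_j

def estimate_p_e(busy_slots: int, total_slots: int) -> float:
    """Estimate p_e as the fraction of time at least one node other
    than j transmits, i.e. when j's forwarding could go unheard."""
    return busy_slots / total_slots if total_slots else 0.0
```

Because the p_e estimate is measured rather than assumed, DARWIN's guarantees only need it to be approximately right, unlike the trigger and GTFT strategies.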

Simulation Settings. ns-2 with the Dynamic Source Routing (DSR) protocol; 670 m × 670 m area; 50 randomly placed nodes, some of them selfish; 14 source-destination pairs; packet size 512 bytes; simulation time 800 s with 60 s time slots; γ = 2.

Simulations. Normalized forwarding ratio: the fraction of forwarded packets in the network under consideration divided by the fraction of forwarded packets in a network with no selfish nodes. Objective: find how the normalized forwarding ratios of cooperative and selfish nodes vary with the dropping probability of selfish nodes, the source rate, and the percentage of selfish nodes.

Simulations. Normalized forwarding ratio for different dropping ratios of selfish nodes (5 selfish nodes, 2 packets/s).

Simulations. Normalized forwarding ratio for different source rates (5 selfish nodes, 100% dropping ratio for selfish nodes).

Simulations. Normalized forwarding ratio for different numbers of selfish nodes (2 packets/s, 100% dropping ratio for selfish nodes). Key point: selfishness does not improve performance, since nodes are rational.

Conclusion. Studied how reputation-based mechanisms help cooperation emerge among selfish users; showed properties of previously proposed schemes; proposed a new mechanism called DARWIN. DARWIN is robust to imperfect measurements of p_e, collusion-resistant, able to achieve full cooperation (by the lemma), and insensitive to parameter choices.

Comments. Contribution: applies CTFT to wireless ad-hoc networks; reliable as long as the assumptions hold. Caveats: nodes are assumed not to lie about perceived dropping probabilities, yet liars can get better payoffs; nodes are assumed rational; only the previous stage is considered; and the simulation results show normalized forwarding ratios, but not payoffs.

Thanks
