Design and Implementation of a Reliable Reputation System for File Sharing in P2P Networks (2006/7/6, 黃盈傑)


1 Design and Implementation of a Reliable Reputation System for File Sharing in P2P Networks (2006/7/6, 黃盈傑)

2 Outline: Introduction, Related Work, System Overview, Experimental Results, Demo, Conclusions & Future Work

3 Problem: Because of the anonymity of P2P networks, some file providers may abuse the system by providing tampered files. (Introduction)

4 Reputation System: It is hard for a user to gather enough information by himself to directly establish a trust value for other users; a reputation system aggregates this information. (Introduction)

5 Attacks on the P2P Network: Distribution of tampered information; man-in-the-middle attack (M intercepts and modifies messages between U and V). (Introduction)

6 Attacks on the Reputation System (1/2): Re-entry to get rid of a bad history; self-replication. (Introduction)

7 Attacks on the Reputation System (2/2): Pseudospoofing; shilling attacks. (Introduction)

8 Motivation: Defend against these attacks; design a mechanism to determine whether judgments are real or not. (Introduction)

9 Outline: Introduction, Related Work, System Overview, Experimental Results, Demo, Conclusions & Future Work

10 Recommendation-based P2P Trust Model: "A recommendation-based peer-to-peer trust model" [DWJZ 2004]. Pure P2P. Every node x has a corresponding file node D_x that stores all information for it. (Related Work)

11 Calculation Formula: R_ij is node i's recommendation degree for node j; S_ij is the number of successful transactions from node j to node i; F_ij is the number of unsuccessful transactions from node j to node i. Trust calculation formula. (Related Work)
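The trust formula itself is not reproduced in the transcript (it was an image on the original slide). As a placeholder, the sketch below computes a recommendation degree from the success/failure counts the slide defines; the signed-ratio form and the function name are assumptions, not the [DWJZ 2004] formula.

```python
def recommend_degree(s_ij: int, f_ij: int) -> float:
    """Recommendation degree of node i for node j from transaction history.

    s_ij: successful transactions from node j to node i.
    f_ij: unsuccessful transactions from node j to node i.
    The signed ratio (s - f) / (s + f) is an assumed reconstruction;
    it rewards success, penalizes failure, and stays in [-1, 1].
    """
    total = s_ij + f_ij
    if total == 0:
        return 0.0  # no transaction history yet
    return (s_ij - f_ij) / total
```

With 8 successes and 2 failures this gives 0.6; with no history it defaults to a neutral 0.0.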

12 Restraining Slander & Magnification: For every transaction, when node u submits an evaluation, node v must echo it within a period of time. If node u submits evaluations too frequently, they may not be accepted. (Diagram: u downloads from v and reports S_uv or F_uv to the file node D_v; v echoes.) (Related Work)

13 Restraining Slander: When node u submits F_uv (a negative evaluation): if node v echoes in time, F_uv is accepted; if node v does not echo in time, F_uv is accepted with probability 1 - T_v. A node v with a higher trust value is harder to slander. (Related Work)

14 Restraining Magnification: When node u submits S_uv (a positive evaluation): if node v echoes in time, S_uv is accepted with probability T_v; if node v does not echo in time, S_uv is not accepted. A node v with a lower trust value is harder to magnify. (Related Work)
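The acceptance rules on the last two slides can be sketched directly; the function name and the injectable random source are mine, the rules themselves follow the slides.

```python
import random

def accept_evaluation(kind, echoed, trust_v, rng=random.random):
    """Decide whether an evaluation from node u about node v is accepted.

    kind: 'F' (negative) or 'S' (positive); echoed: whether v echoed the
    evaluation in time; trust_v: v's trust value T_v in [0, 1].
    """
    if kind == 'F':
        # Negative: always accepted if echoed; otherwise accepted with
        # probability 1 - T_v (high-trust nodes are harder to slander).
        return True if echoed else rng() < 1 - trust_v
    # Positive: accepted with probability T_v if echoed; never accepted
    # without an echo (low-trust nodes are harder to magnify).
    return rng() < trust_v if echoed else False
```

Passing a deterministic `rng` makes the probabilistic branches testable.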

15 Outline: Introduction, Related Work, System Overview, Experimental Results, Demo, Conclusions & Future Work

16 Formula Design (1/4): Global trust formula. Cx: client x. GT_x: Cx's global trust, -1 ≤ GT_x ≤ 1, w = 2. g_x: count of "Good" judgments of Cx; b_x: count of "Bad" judgments of Cx; vb_x: count of "Very bad" judgments of Cx; J_x = g_x + b_x + w*vb_x. (System Overview)

17 Formula Design (2/4): Self-trust formula. ST_xy: self-trust of Cx toward Cy, -1 ≤ ST_xy ≤ 1, w = 2. g_xy: count of "Good" judgments that Cx reports on Cy; b_xy: count of "Bad" judgments that Cx reports on Cy; vb_xy: count of "Very bad" judgments that Cx reports on Cy; J_xy = g_xy + b_xy + w*vb_xy. (System Overview)
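The formula images for these two slides are not in the transcript; only the symbols and the range -1 ≤ GT_x ≤ 1 survive. The sketch below is an assumed reconstruction that is consistent with those definitions: a signed ratio of weighted judgment counts over J_x.

```python
W = 2  # weight of "Very bad" judgments, from the slide

def global_trust(good: int, bad: int, very_bad: int) -> float:
    """Assumed reconstruction of GT_x.

    good/bad/very_bad are g_x, b_x, vb_x; J_x = g_x + b_x + W*vb_x.
    (good - bad - W*very_bad) / J_x yields +1 for all-good histories,
    -1 for all-(very-)bad histories, matching the stated range.
    """
    j = good + bad + W * very_bad
    if j == 0:
        return 0.0  # no judgments yet: neutral (an assumption)
    return (good - bad - W * very_bad) / j
```

Self-trust ST_xy would use the same form with the pairwise counts g_xy, b_xy, vb_xy.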

18 Formula Design (3/4): Reputation formula. REP_xy represents Cy's reputation from Cx's point of view. -1 ≤ REP_xy ≤ 1, 0 ≤ α ≤ 1. (System Overview)

19 Formula Design (4/4): Modified reputation formula. -1 ≤ REP_xy ≤ 1, 0 ≤ α ≤ 1, w = 2. z ranges over all the other clients in the P2P network. (System Overview)
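The reputation formula images are likewise missing from the transcript. As a placeholder only, the sketch below blends Cx's own experience (ST_xy) with Cy's global trust using the weight α the slide mentions; the convex-combination form is purely an assumption, not the thesis's formula.

```python
def reputation(self_trust_xy: float, global_trust_y: float,
               alpha: float = 0.5) -> float:
    """Placeholder REP_xy: weight Cx's direct experience against the
    network-wide view of Cy. Both inputs lie in [-1, 1], so the result
    does too, matching the stated range -1 <= REP_xy <= 1.
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must be in [0, 1]")
    return alpha * self_trust_xy + (1 - alpha) * global_trust_y
```

With α = 1 the client trusts only its own history; with α = 0 it relies entirely on the global view.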

20 System Framework of the On-line Server. (System Overview)

21 System Framework of the Off-line Server. (System Overview)

22 Database: reputation_data table. (System Overview)
Id: a unique ID created by JXTA.
Name: a name the client can choose when running the JXTA application for the first time.
Alert: alert state.
Monitor: indicates whether to monitor this client (1: yes, 0: no).
Reputation: reputation value.
rep_rank: reputation rank.
Judge
transaction_count: total transaction count (upload).
transaction_size: total transaction size (upload).
Report: count of reports (report).
bad_report: count of bad reports (report).
good_judgment: count of good judgments (get report).
verybad_judgment: count of very bad judgments (get report).
bad_judgment: count of bad judgments (get report).
reg_time: registration time.

23 Database: client_registration table. (System Overview)
id: a unique ID created by JXTA.
host_ip: the IP of the server where this client registers.
mac_addr: MAC address.

24 Report Record File (1/2): fields of each entry. (System Overview)
type: action type of this entry; four types: DL (download), UP (upload), RP (report), GEREP (get report).
monitor: whether to monitor this entry (true: yes, false: no).
time: action time.
id: the id of the interacting client.
cid: the content id of the file.
name: the name of the file.
size/report: for a download or upload entry, the size of the file; otherwise, the report type.

25 Report Record File (2/2): example entry. (System Overview)
RP#true#2006/07/02 14:23:37 – 1151821417218#uuid-59616261646162614A7874615032503386C56C2FCE0C42CA9031D5989E73FD4F03#md5:6604305f08ca5b498b5596cbaf901acb#72_cs 3-3.txt#A:Good file
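Since each entry is '#'-delimited with the seven fields listed on the previous slide, parsing it is a single split; the function name and dictionary keys below are illustrative, not part of the thesis code.

```python
def parse_report_record(line: str) -> dict:
    """Split one '#'-delimited entry of the report record file into its
    seven fields (type, monitor, time, id, cid, name, size/report).
    Key names here are illustrative."""
    parts = line.split('#')
    if len(parts) != 7:
        raise ValueError(f"expected 7 fields, got {len(parts)}")
    rec_type, monitor, time, client_id, content_id, name, size_or_report = parts
    return {
        "type": rec_type,              # DL, UP, RP, or GEREP
        "monitor": monitor == "true",  # monitored entry?
        "time": time.strip(),          # action time (plus timestamp)
        "id": client_id,               # id of the interacting client
        "cid": content_id,             # content id of the file
        "name": name,                  # file name
        "size_or_report": size_or_report,
    }
```

Applied to the example entry above, this yields type "RP", monitor True, and the report text "A:Good file".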

26 Defending Against the Attacks (1/2): Distribution of tampered information: use reputation as the guide for selecting the file provider. Re-entry to get rid of a bad history: ID design: mac_address + JXTA-ID. (System Overview)

27 Defending Against the Attacks (2/2): Self-replication: our reputation system is not a voting mechanism, and clients cannot download files offered by themselves. Pseudospoofing: ID design: mac_address + JXTA-ID. Shilling attacks: use the Monitor to detect probable malicious attacks. (System Overview)

28 Concept of the Monitor: The system accepts all reports from the clients. An alert threshold (parameter) is set for each action, and the Monitor acts according to those parameters. (System Overview)

29 Monitor Flowchart: The Monitor starts with a Check Phase: it checks all clients' report record files and sets their alert-states according to the corresponding parameters (a client is restrained from reporting while any of F-8~F-12 is set). If any client's alert-state is set, it enters the Determine Phase (determining F-9 and F-12~F-16); otherwise the Monitor is done. (System Overview)

30 Determine Phase: Cx: the client whose alert-state is set. Cg: a client that gives Cx a good judgment. Cb: a client that gives Cx a bad judgment. GTx: global trust of Cx, -1 ≤ GTx ≤ 1. NGTx: normalized global trust of Cx, 0 ≤ NGTx ≤ 1. (System Overview)

31 Determine Phase: F-9. Cg gives too many good judgments to a specific file of Cx. Determination: Cg magnifies Cx. Action: punish Cg and remove those good judgments from Cx. (System Overview)

32 Determine Phase: F-12. Cb gives too many bad judgments to a specific file of Cx. Determination: Cb slanders Cx. Action: punish Cb and remove those bad judgments from Cx. (System Overview)

33 Determine Phase: F-14. Cx gets too many good judgments from Cg. P(NGTx) accept? Yes: do nothing. No: are there other judgments? Yes: do nothing; No: remove those good judgments from Cx and reset F-14 of Cx. (System Overview)

34 Determine Phase: F-16. Cx gets too many bad judgments from Cb. P(NGTb) accept? Yes: do nothing. No: P(1-NGTx) accept? Yes: do nothing. No: are there other judgments? Yes: do nothing; No: remove those bad judgments from Cx and reset F-16 of Cx. (System Overview)

35 Determine Phase: F-13. Cx gets too many good judgments. Calculate avg_good_judgment = mean(Σ NGTg), avg_bad_judgment = mean(Σ NGTb), and final_judge = avg_good_judgment - avg_bad_judgment. If final_judge > 0: do nothing. Otherwise: remove those good judgments from Cx and punish Cg. Reset F-13 of Cx. (System Overview)

36 Determine Phase: F-15. Cx gets too many bad judgments. Calculate avg_good_judgment, avg_bad_judgment, and final_judge in the same way. If final_judge > 0: remove those bad judgments from Cx and punish Cb. Otherwise: punish Cx. Reset F-15 of Cx. (System Overview)
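The final_judge quantity used by F-13 and F-15 can be sketched as follows; treating an empty reporter list as average 0.0 is my assumption, the rest follows the slides' definitions.

```python
from statistics import mean

def final_judge(ngt_good_reporters, ngt_bad_reporters):
    """final_judge = avg_good_judgment - avg_bad_judgment, where each
    average is the mean of the reporters' normalized global trusts
    (NGT values in [0, 1]), as on the F-13/F-15 slides.

    An empty reporter list contributes 0.0 (an assumption).
    """
    avg_good = mean(ngt_good_reporters) if ngt_good_reporters else 0.0
    avg_bad = mean(ngt_bad_reporters) if ngt_bad_reporters else 0.0
    return avg_good - avg_bad
```

A positive value means the good judgments come from more-trusted clients than the bad ones, so F-13 leaves them in place while F-15 discards the bad judgments and punishes Cb.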

37 Outline: Introduction, Related Work, System Overview, Experimental Results, Demo, Conclusions & Future Work

38 Experiment 1 (1/3): 1,000 clients, 10,000 files; each client downloads 100 files. This is an ideal network: any client can find all files of all clients, and each client selects the file owner with the highest reputation as the file provider. A successful download means the client gets the right file. (Experimental Results)

39 Experiment 1 (2/3): Types of bad clients. Type-1, bad provider: provides wrong files only. Type-2, slanderer: gives a "bad" judgment to a client who provides him the right file. Type-3, magnifier: gives a "good" judgment to a client even when that client provides him the wrong file. (Experimental Results)

40 Experiment 1 (3/3): Client setting: α = 1, β = 1. Server Monitor setting: acts after every 5,000 downloads; determines F-13 (Cx gets too many good judgments) and F-15 (Cx gets too many bad judgments); both thresholds are set to 5. (Experimental Results)

41 Exp1, Type-1: Bad provider. (Experimental Results)

42 Exp1, Type-2: Slanderer. (Experimental Results)

43 Exp1, Type-3: Magnifier. (Experimental Results)

44 Experiment 2: Compare different variations of on-line clients. (Experimental Results)

45 Exp2, Type-1: Bad provider. (Experimental Results)

46 Exp2, Type-2: Slanderer. (Experimental Results)

47 Exp2, Type-3: Magnifier. (Experimental Results)

48 Experiment 3: Compare "clients select the file provider with the highest reputation" with "clients select a file owner whose reputation is higher than a threshold". (Experimental Results)

49 Exp3, Type-1: Bad provider. (Experimental Results)

50 Exp3, Type-2: Slanderer. (Experimental Results)

51 Exp3, Type-3: Magnifier. (Experimental Results)

52 System Evaluation (1/2): n: number of clients, m: number of report records. The Monitor's procedure runs in O(n·m²) time; storing the report records takes O(n·m) space. (Experimental Results)

53 System Evaluation (2/2): The Monitor checks one client's record with 2,000 entries on an AMD 1.53 GHz machine with 512 MB RAM: average 559 ms (15: 406~859). Each entry needs about 163 + |filename| bytes, so 2,000 entries take about 344 KB. (Experimental Results)

54 Outline: Introduction, Related Work, System Overview, Experimental Results, Demo, Conclusions & Future Work

55 Outline: Introduction, Related Work, System Overview, Experimental Results, Demo, Conclusions & Future Work

56 Conclusions: We designed the Monitor to detect probable malicious behavior, find the malicious clients, and punish them. To improve scalability, the server can be extended to multiple servers. (Conclusions & Future Work)

57 Future Work: Solve the key-application problem for encryption and decryption across multiple servers. Improve the Monitor's automatic determination for the other alert-states. Improve the reputation system to give clients an incentive to share their files. (Conclusions & Future Work)

58 THE END

