
1 Overlay Neighborhoods for Distributed Publish/Subscribe Systems. Reza Sherafat Kazemzadeh. Supervisor: Dr. Hans-Arno Jacobsen. SGS PhD Thesis Defense, University of Toronto, September 5, 2012.

2 Content-Based Pub/Sub [Diagram: publishers (P) publish into the pub/sub overlay, which delivers matching publications to subscribers (S).]

3 Thesis Contributions Overlay neighborhoods underpin three areas of contributions: Dependability (reliability, ordered delivery, fault tolerance); Multipath forwarding (adaptive overlay mesh, dynamic forwarding strategies, efficient data structures); Content dissemination (bulk content dissemination, peer-assisted hybrid architecture, dissemination strategies). List of publications: [ACM Surveys] Dependable Publish/Subscribe Systems (being submitted); [Middleware’12] Opportunistic Multi-Path Publication Forwarding in Pub/Sub Overlays; [ICDCS’12] Publiy+: A Peer-Assisted Pub/Sub Service for Timely Dissemination of Bulk Content; [SRDS’11] Partition-Tolerant Distributed Publish/Subscribe Systems; [SRDS’09] Reliable and Highly Available Distributed Publish/Subscribe Service; [ACM Transactions on Parallel and Distributed Systems] Reliable Message Delivery in Distributed Publish/Subscribe Systems Using Overlay Neighborhoods (being submitted); [Middleware Demos/Posters’12] Introducing Publiy (being submitted).

7 Part I: Dependability in Pub/Sub Systems. Publications: [SRDS’11] Partition-Tolerant Distributed Publish/Subscribe Systems; [SRDS’09] Reliable and Highly Available Distributed Publish/Subscribe Service; [ACM Transactions on Parallel and Distributed Systems] Reliable Message Delivery in Distributed Publish/Subscribe Systems Using Overlay Neighborhoods (being submitted); [ACM Surveys] Dependable Publish/Subscribe Systems (being submitted); [Middleware Demos/Posters’12] Introducing Publiy (being submitted).

8 Dependable pub/sub systems Challenges of Dependability in Content-Based Pub/Sub Systems. The end-to-end principle is not applicable in a pub/sub system: publishers and subscribers (the endpoints) are loosely coupled, so an endpoint cannot distinguish message loss from messages filtered out as non-matching. This is especially true in content-based systems that support flexible publication filtering. [Diagram: at the subscriber, a publication lost inside the middleware is indistinguishable from one filtered out for not matching its subscription.]

9 Dependable pub/sub systems Overlay Neighborhoods. Primary network: an initial spanning tree. Brokers maintain neighborhood knowledge, which allows them to transform the overlay in a controlled manner. d-Neighborhood knowledge (d is a configuration parameter): knowledge of other brokers within distance d, and of the forwarding paths within the neighborhood. [Diagram: nested 1-, 2-, and 3-neighborhoods around a broker.]
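
To make the neighborhood structure concrete, here is a minimal sketch of how a broker could derive its d-neighborhood from the primary spanning tree by breadth-first search; the adjacency map and all identifiers are illustrative assumptions, not the thesis implementation.

```java
import java.util.*;

// Minimal sketch: derive the d-neighborhood of broker `self` from the primary
// spanning tree via BFS, grouping brokers by their hop distance (1..d).
class DNeighborhood {
    static Map<Integer, Set<String>> compute(Map<String, List<String>> tree,
                                             String self, int d) {
        Map<Integer, Set<String>> rings = new TreeMap<>(); // distance -> brokers
        Map<String, Integer> dist = new HashMap<>();
        Deque<String> queue = new ArrayDeque<>();
        dist.put(self, 0);
        queue.add(self);
        while (!queue.isEmpty()) {
            String b = queue.poll();
            int db = dist.get(b);
            if (db == d) continue;                 // stop expanding at distance d
            for (String n : tree.getOrDefault(b, List.of())) {
                if (dist.putIfAbsent(n, db + 1) == null) {   // first time seen
                    rings.computeIfAbsent(db + 1, k -> new LinkedHashSet<>()).add(n);
                    queue.add(n);
                }
            }
        }
        return rings;
    }
}
```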

10 Dependable pub/sub systems Publication Forwarding Algorithm. 1. Received publications are placed on a FIFO message queue and kept until processing is complete. 2. All known subscriptions with interest in p are identified by matching. 3. The forwarding paths of the publication within the downstream neighborhoods are identified. 4. The publication is sent to the closest available brokers towards the matching subscribers. [Diagram: p travels through a d-neighborhood towards downstream subscribers.]
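
A sketch of the four steps as a broker-side loop follows; Publication, Subscription, Router, and Broker are placeholder types introduced here for illustration, and error handling, ordering, and retention for retransmission are elided.

```java
import java.util.*;

// Placeholder types standing in for the broker's real abstractions.
interface Publication {}
interface Subscription {}
interface Broker { void send(Publication p); }
interface Router {
    Set<Subscription> match(Publication p);            // step 2: matching
    List<Broker> downstreamPath(Subscription s);       // step 3: path in neighborhood
    Broker closestAvailable(List<Broker> path);        // step 4: skip failed hops
}

class ForwardingLoop {
    private final Deque<Publication> fifo = new ArrayDeque<>(); // step 1: FIFO queue

    void onReceive(Publication p, Router router) {
        fifo.addLast(p);                 // p stays queued until fully processed
        Publication head = fifo.peekFirst();
        for (Subscription s : router.match(head)) {
            List<Broker> path = router.downstreamPath(s);
            Broker next = router.closestAvailable(path);
            if (next != null) next.send(head);
        }
        fifo.pollFirst();                // processing complete; release p
    }
}
```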

11 Dependable pub/sub systems When There Are Failures. The broker reconnects the overlay by creating new links to the neighbors of the failed brokers; publications in its message queue are retransmitted, bypassing the failed neighbors. Multiple concurrent failed neighbors (up to d-1) are bypassed in the same way. [Diagram: publications detour around a failed broker.]
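
The bypass step might look roughly like the following, reusing the placeholder types from the previous sketch; the d-neighborhood map lets the broker link directly to the failed neighbor's own neighbors, and queued publications are then replayed. All field and method names are assumptions.

```java
import java.util.*;

// Sketch of failure bypass: reconnect around a failed neighbor using the
// d-neighborhood map, then retransmit pending publications.
class FailureBypass {
    final String selfId;
    final Map<String, List<String>> neighborhood;          // d-neighborhood adjacency
    final Map<String, Broker> links = new HashMap<>();     // live connections
    final Deque<Publication> pending = new ArrayDeque<>(); // unconfirmed publications

    FailureBypass(String selfId, Map<String, List<String>> neighborhood) {
        this.selfId = selfId;
        this.neighborhood = neighborhood;
    }

    void onNeighborFailure(String failed, Router router) {
        links.remove(failed);
        // The failed broker's neighbors are known from the d-neighborhood,
        // so the tree can be rejoined in one step.
        for (String n : neighborhood.getOrDefault(failed, List.of())) {
            if (!n.equals(selfId)) links.computeIfAbsent(n, this::connect);
        }
        // Retransmit queued publications along the repaired paths.
        for (Publication p : pending) {
            Broker next = router.closestAvailable(new ArrayList<>(links.values()));
            if (next != null) next.send(p);
        }
    }

    private Broker connect(String brokerId) {
        return p -> { /* open a link to brokerId and send p; stubbed out */ };
    }
}
```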

12 Dependable pub/sub systems Impact of Mass Failures on Throughput. Experiment setup: 500 brokers (failures injected at random brokers); measurement interval of 2 minutes (the aggregate publish rate changes depending on the number of failures). [Chart: expected number of deliveries without failures vs. actual deliveries with failures; deliveries are low with d=1.]

19 Part II: Opportunistic Multi-Path Publication Forwarding. Publications: [Middleware’12] Opportunistic Multi-Path Publication Forwarding in Pub/Sub Overlays.

20 Multi-path publication forwarding Problems in Existing Pub/Sub Systems. Forwarding paths in the overlay are constructed in a fixed, end-to-end manner, with little or no path diversity. This results in a high number of "pure forwarding" brokers and a low yield (the ratio of messages delivered to messages sent), i.e., low efficiency. [Diagram: a publication traverses brokers A-E from publisher to subscriber, with most hops forwarding without delivering.]

21 Multi-path publication forwarding Multi-Path Forwarding in a Nutshell: monitor neighborhood traffic; selectively create additional soft links; apply different publication forwarding strategies; actively utilize neighborhoods. [Diagram: broker A augmented with soft links.]

22 Different Forwarding Strategies. Conventional systems (strategy 0): 6 messages in total. Forwarding strategy 1: 5 messages in total. Forwarding strategy 2: 3 messages in total. [Diagram: publication p forwarded across brokers A, B, C under each strategy.]

23 Multi-path publication forwarding Maximum System Throughput. Experiment setup: 250 brokers; publish rate of 72,000 msgs/min. [Chart: S2 outperforms S0 by 90%; S1 outperforms S0 by 60%.]

24 Part III: Bulk Content Dissemination in Pub/Sub Systems. Publications: [ICDCS’12] Publiy+: A Peer-Assisted Publish/Subscribe Service for Timely Dissemination of Bulk Content.

25 Bulk content dissemination Application Scenarios Involving Bulk Content Dissemination: fast replication of content (video clips, pictures); replication within CDNs; distribution of software updates; P2P file sharing; file synchronization; social networks. Requirements: scalability, reactive delivery, selective delivery.

26 Bulk content dissemination A Case for a Peer-Assisted Design: Hybrid Architecture. Control layer (for metadata): a pub/sub broker overlay acting as a distributed repository that maintains users' subscriptions. Data layer (for actual data): subscribers form peer swarms and exchange blocks of data.

27 Bulk content dissemination Scalability w.r.t. Number of Subscribers. Network setup: 300 and 1000 clients; 1 source publishing 100 MB of content.

28 Conclusion We introduced the notion of overlay neighborhoods in distributed pub/sub systems. Neighborhoods expose brokers' knowledge of nearby neighbors and of the publication forwarding paths that cross these neighborhoods. We used neighborhoods in different ways: passive use for ensuring reliable and ordered delivery; active use for multipath publication forwarding; and bulk content dissemination.

29 Thanks for your attention!

30 Extras: Bonus Slides (if needed)

31 Overlay Neighborhoods

32 Content-Based Publish/Subscribe: stock quote dissemination application. [Diagram: publishers in New York, Toronto, and London publish stock quotes into the pub/sub overlay; Trader 1 subscribes with sub = [STOCK=IBM], Trader 2 with sub = [CHANGE>-8%].]

33 Overlay neighborhoods System Architecture. Tree dissemination networks provide one path from source to destination. Pros: simple and loop-free; preserves publication order (difficult to achieve in non-tree content-based pub/sub). Cons: trees are highly susceptible to failures. Primary tree: the initial spanning tree formed as brokers join the system; brokers maintain neighborhood knowledge, which allows them to reconfigure the overlay on the fly after failures. ∆-Neighborhood knowledge (∆ is a configuration parameter) ensures that up to ∆-1 concurrent failures can be handled in the worst case: knowledge of other brokers within distance ∆ (maintained by the join algorithm), and knowledge of routing paths within the neighborhood (maintained by the subscription propagation algorithm). [Diagram: nested 1-, 2-, and 3-neighborhoods.]

34 Dependable pub/sub systems Overlay Disconnections. When there are d or more concurrent failures, publication delivery may be interrupted, but no publications are lost. [Diagram: a failed chain of d brokers disconnects two subtrees; brokers within each subtree remain connected.]

35 Dependable pub/sub systems Experimental Evaluation. Studied various aspects of the system's operation: impact of failures/recoveries on delivery delay; impact of failures on other brokers; size of d-neighborhoods; likelihood of disconnections; and impact of disconnections on system throughput (discussed next).

36 Dependable pub/sub systems Publication Forwarding in Absence of Overlay Fragments. Forwarding only uses subscriptions that brokers have accepted. Steps in forwarding a publication p: identify the anchors of accepted subscriptions that match p; determine the active connections towards the matching subscriptions' anchors; send p on those active connections and wait for confirmations; if there are local matching subscribers, deliver p to them; if no downstream matching subscriber exists, issue a confirmation towards P; once all confirmations arrive, discard p and send a confirmation towards P. [Diagram: p flows from publisher P through brokers A-E to subscriber S; confirmations flow back.]
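
A compact sketch of the confirmation bookkeeping implied by these steps: the broker counts outstanding downstream confirmations per publication and confirms upstream once the count reaches zero. The names and the callback are illustrative assumptions.

```java
import java.util.*;
import java.util.function.LongConsumer;

// Sketch: retain p until every downstream link it was forwarded on has
// confirmed, then discard the local copy and send one confirmation upstream.
class ConfirmationTracker {
    private final Map<Long, Integer> outstanding = new HashMap<>();
    private final LongConsumer confirmUpstream; // fires when p is fully confirmed

    ConfirmationTracker(LongConsumer confirmUpstream) {
        this.confirmUpstream = confirmUpstream;
    }

    void onForward(long pubId, int downstreamLinks) {
        if (downstreamLinks == 0) {
            confirmUpstream.accept(pubId);   // no downstream matchers: confirm now
        } else {
            outstanding.put(pubId, downstreamLinks);
        }
    }

    void onConfirmation(long pubId) {
        Integer left = outstanding.get(pubId);
        if (left == null) return;
        if (left == 1) {
            outstanding.remove(pubId);       // last confirmation: safe to discard p
            confirmUpstream.accept(pubId);
        } else {
            outstanding.put(pubId, left - 1);
        }
    }
}
```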

37 Dependable pub/sub systems Publication Forwarding in Presence of Overlay Partitions. Key forwarding invariant to ensure reliability: no publication is delivered to a subscriber after being forwarded by brokers that have not accepted its subscription. Case 1: subscription s has been accepted with no pid. It is safe to bypass intermediate brokers. [Diagram: p bypasses intermediate brokers on its way from P to S; confirmations flow back.]

38 Dependable pub/sub systems Publication Forwarding (cont'd). Case 2: subscription s has been accepted with some pid. Case 2a: the publisher's local broker has accepted s, and we ensure all intermediate forwarding brokers have also done so. It is then safe to deliver publications from sources beyond the partition: depending on when the bypass link was established, either recovery or subscription propagation ensures that C accepts s prior to receiving p. [Diagram: p and confirmations flow across the bypass link from P to S.]

40 Dependable pub/sub systems Publication Forwarding (cont'd). Case 2b: subscription s is accepted with some pid tags, but the publisher's broker has not accepted s. It is unsafe to deliver publications from this publisher (per the invariant), so they are tagged with the pid; s was accepted at S with the same pid tag. [Diagram: tagged publications p* flow towards S.]

41 Dependable pub/sub systems Overlay Fragments. When the primary tree is set up, brokers communicate with their immediate tree neighbors through FIFO links. Overlay fragments: broker crashes or link failures create "fragments", and some brokers "on the fragment" become unreachable from neighboring brokers. Active connections: each broker tries to maintain a connection to its closest reachable neighbor in the primary tree, and only active connections are used for forwarding. [Diagram: broker D fails; a fragment detector assigns fragment identifier pid1, and an active connection is established to E.]
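
Active-connection selection can be sketched as a scan outwards along the primary-tree path, picking the closest broker that is still reachable; the reachability set and all names are assumptions.

```java
import java.util.*;

// Sketch: among brokers on the far side of a fragment, connect to the closest
// one still reachable, scanning the primary-tree path nearest-first.
class ActiveConnections {
    static Optional<String> selectActive(List<String> pathOutward,
                                         Set<String> reachable) {
        for (String broker : pathOutward) {
            if (reachable.contains(broker)) {
                return Optional.of(broker);   // closest live neighbor wins
            }
        }
        return Optional.empty(); // fragment is a barrier: no active connection
    }
}
```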

42 Dependable pub/sub systems Overlay Fragments: 2 Adjacent Failures. What if there are more failures, particularly adjacent ones? If ∆ is large enough, the same process reconnects around larger fragments. [Diagram: D and E fail, producing fragment identifiers pid1 and pid2; an active connection is established to F.]

43 Dependable pub/sub systems Overlay Fragments: ∆ Adjacent Failures. Worst-case scenario: with ∆ adjacent failures, ∆-neighborhood knowledge is not sufficient to reconnect the overlay; brokers "on" and "beyond" the fragment are all unreachable, and no new active connection can be established. [Diagram: D, E, and F fail with fragment identifiers pid1, pid2, and pid3.]

44 Dependable pub/sub systems Fragments. Brokers are connected to their closest reachable neighbors and are aware of nearby fragment identifiers. How does this affect end-to-end connectivity? For any pair of brokers, a fragment on the primary path between them is an "island" if the two endpoints remain reachable from each other through a sequence of active connections, and a "barrier" if no such sequence exists. [Diagram: source and destination brokers separated by an island fragment vs. a barrier fragment.]

45 Dependable pub/sub systems Store-and-Forward. A copy is first preserved on disk; intermediate hops send an ACK to the previous hop after persisting; ACKed copies can be discarded from disk. Upon failures, unacknowledged copies survive and are retransmitted after recovery. This ensures reliable delivery but may introduce delays while the failed machine is down. [Diagram: p hops from source to destination with per-hop ACKs.]
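
A minimal sketch of this scheme, with an in-memory map standing in for the on-disk log (a real broker would persist to stable storage before sending); all names are illustrative.

```java
import java.util.*;
import java.util.function.BiConsumer;

// Sketch of store-and-forward: persist a copy before sending, discard it when
// the next hop ACKs, and replay anything unacknowledged after a restart.
class StoreAndForward {
    private final Map<Long, byte[]> log = new LinkedHashMap<>(); // stand-in for disk
    private final BiConsumer<Long, byte[]> sendToNextHop;

    StoreAndForward(BiConsumer<Long, byte[]> sendToNextHop) {
        this.sendToNextHop = sendToNextHop;
    }

    void forward(long pubId, byte[] payload) {
        log.put(pubId, payload);              // 1. preserve a copy first
        sendToNextHop.accept(pubId, payload); // 2. then forward
    }

    void onAck(long pubId) {
        log.remove(pubId);                    // ACKed copies can be discarded
    }

    void onRecovery() {
        log.forEach(sendToNextHop);           // retransmit surviving copies in order
    }
}
```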

46 Dependable pub/sub systems Mesh-Based Overlay Networks [Snoeren et al., SOSP 2001]. Use a mesh network to concurrently forward messages on disjoint paths; upon failures, the message is delivered via alternative routes. Pros: minimal impact on delivery delay. Cons: imposes additional traffic and the possibility of duplicate delivery. [Diagram: p travels from source to destination along disjoint paths.]

47 Dependable pub/sub systems Replica-Based Approach [Bhola et al., DSN 2002]. Replicas running on separate physical machines are grouped into virtual nodes and have identical routing information. We compare against this approach. [Diagram: publications flow through virtual nodes of replicated brokers.]

49 Multi-path publication forwarding Problems with a Single Overlay Tree. A tree provides no routing diversity; the root is overloaded (all traffic goes through a single broker); capacity is under-utilized (not all available bandwidth is effectively used); and single-path connectivity is not suited to diverse forwarding patterns.

50 Multi-path publication forwarding Related Work: Structured Topologies. A topology is an interconnection between brokers; it is relatively stable (long-term connections), most commonly a global or per-publisher spanning tree. Topology adaptation changes the topology based on traffic patterns [1,2] to optimize a cost function, maintaining the acyclic property by adding and removing links. Advantages: a fixed topology enables high-throughput connections, and routes may be improved from a "coarse-grained", system-wide perspective. Disadvantages: routes may never be optimal for individual broker pairs; pure forwarding brokers are introduced; routing diversity is not accounted for. [Diagram: tree A reconfigured into tree A'.] [1] Virgillito, A., Beraldi, R., Baldoni, R.: On event routing in content-based publish/subscribe through dynamic networks. In: FTDCS (2003). [2] Virgillito, A., Beraldi, R., Baldoni, R.: On event routing in content-based publish/subscribe through dynamic networks. In: FTDCS (2003).

51 Multi-path publication forwarding Related Work: Unstructured Topologies. No fixed topology exists; short-term links are created based on message destinations, and [3] uses dissemination trees computed at the publishers' brokers. Advantages: routes may be optimal, and there are no pure forwarding brokers. Disadvantages: link maintenance is difficult and on-demand; global knowledge is required; there is no support for subscription covering/merging; and scalability suffers. [3] Cao, F., Singh, J.: MEDYM: Match-early and dynamic multicast for content-based publish-subscribe service networks. ICDCSW (2005).

52 Multi-path publication forwarding Publication Forwarding Strategies. Strategy S1: the publication is sent along the intersection of the primary paths towards matching subscribers; some pure forwarding brokers are bypassed; the broker incurs no extra outgoing load. Strategy S2: the publication is sent as far as possible directly towards matching subscribers; as many pure forwarding brokers as possible are bypassed; the broker incurs a high outgoing load. [Diagram: publication p routed past bypassed pure forwarders towards local matching subscribers.]
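
As one possible reading of S1, the broker could send p directly to the deepest broker shared by the primary paths of all matching subscribers, bypassing the pure forwarders before it; this is an interpretation offered for illustration, not the thesis algorithm.

```java
import java.util.List;

// Sketch: pick the deepest broker common to all matching subscribers' primary
// paths. Each path lists brokers from this broker outwards, nearest first.
class StrategyS1 {
    static String deepestCommonBroker(List<List<String>> primaryPaths) {
        List<String> first = primaryPaths.get(0);
        int depth = 0;
        outer:
        for (; depth < first.size(); depth++) {
            for (List<String> path : primaryPaths) {
                if (depth >= path.size()
                        || !path.get(depth).equals(first.get(depth))) {
                    break outer;            // paths diverge at this depth
                }
            }
        }
        return depth > 0 ? first.get(depth - 1) : null; // null: immediate divergence
    }
}
```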

53 Master vs. Working Routing Data Structures. The overlay views captured by brokers' d-neighborhoods are relatively static, giving the Master Overlay Map (MOM). Brokers' link connectivity changes dynamically, so brokers need an efficient way to compute forwarding paths over the changing set of links: the Working Overlay Map (WOM), constructed from the MOM via edge retraction. The WOM contains only brokers with a direct link, acting as a quick cache.
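
Edge retraction can be sketched as splicing out every broker to which no live direct link exists and reconnecting its neighbors pairwise, so the resulting WOM spans only directly linked brokers; the map representation and all names are assumptions.

```java
import java.util.*;

// Sketch: build the Working Overlay Map from the Master Overlay Map by
// retracting brokers without a live direct link.
class OverlayMaps {
    static Map<String, Set<String>> retract(Map<String, Set<String>> mom,
                                            Set<String> directlyLinked,
                                            String self) {
        Map<String, Set<String>> wom = new HashMap<>();
        mom.forEach((b, ns) -> wom.put(b, new HashSet<>(ns))); // deep copy of MOM
        for (String b : mom.keySet()) {
            if (b.equals(self) || directlyLinked.contains(b)) continue;
            Set<String> nbrs = wom.remove(b);   // retract b: splice its neighbors
            for (String u : nbrs) {
                Set<String> uSet = wom.get(u);
                if (uSet == null) continue;     // u itself already retracted
                uSet.remove(b);
                for (String v : nbrs) {
                    if (!u.equals(v) && wom.containsKey(v)) uSet.add(v);
                }
            }
        }
        return wom;
    }
}
```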

54 Multi-path publication forwarding Experimental Evaluation. Experimental setup: various overlays (primary network sizes of 120, 250, and 500 brokers; fanout parameters of 3 and 10); datasets with sparse or dense matching distributions (synthetic datasets based on a Zipf distribution, real-world datasets constructed from social-networking user traces, and synthetic datasets with covering).

55 Multi-path publication forwarding Overlay Reconfiguration.

56 Multi-path publication forwarding Connectivity in the Overlay Mesh. Experiment setup: 120 and 250 brokers; fanout of 10. [Chart: number of pair-wise forwarding paths, log scale from 1 to 1000.]

57 Multi-path publication forwarding Impact of Broker Fanout on Subscription Covering. Experiment setup: 500 brokers; fanout of 5-25.

62 Multi-path publication forwarding Publication Hop Count. Experiment setup: 120 brokers; sparse publication/subscription workload; publish rate of 1,800 msgs/sec, yielding 73,000 deliveries in 5 minutes.

67 Multi-path publication forwarding Publication Hop Count: Sparse vs. Dense Matching Workloads. Multi-path forwarding is more effective in sparse workloads.

68 Multi-path publication forwarding System Yield (a measure of efficiency). Sparse workload, 73,000 delivered publications: Strategy 0: 91,000 pure-forwarded messages, 44% yield; Strategy 1: 42,000 pure-forwarded messages, 63% yield; Strategy 2: 29,000 pure-forwarded messages, 71% yield. Dense workload, 284,000 delivered publications: Strategy 0: 195,000 pure-forwarded messages, 59% yield; Strategy 1: 104,000 pure-forwarded messages, 73% yield; Strategy 2: 69,000 pure-forwarded messages, 80% yield.
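
Read as a formula, the yield column is consistent with yield = delivered / (delivered + pure-forwarded): in the sparse workload, 73,000 / (73,000 + 91,000) ≈ 44% for Strategy 0 and 73,000 / (73,000 + 42,000) ≈ 63% for Strategy 1, matching the table; the dense-workload rows check out the same way.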

69 Multi-path publication forwarding Maximum System Throughput. Experiment setup: 250 brokers; publish rate of 72,000 msgs/min.

70 Bulk content dissemination Data Exchange Using Network Coding. The file is split into segments, and each segment's blocks form a k x n matrix. Random coefficients C are used to encode blocks into data packets (Y_i = C_i X), and decoding uses k linearly independent coded blocks. [Diagram: segmentation, encoding, transfer, and decoding of the file's segments.]
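
For illustration, a small random linear coding sketch over doubles follows; practical systems typically code over a finite field such as GF(2^8), but the linear algebra (encode as Y = C X, decode by Gaussian elimination on k independent packets) is the same. All names are illustrative.

```java
// Illustrative random linear network coding: encode blocks as random linear
// combinations; decode by solving C X = Y with Gauss-Jordan elimination.
class NetworkCodingSketch {
    // coeffs.length must equal blocks.length (the k coefficients of one packet)
    static double[] encode(double[][] blocks, double[] coeffs) {
        int n = blocks[0].length;
        double[] y = new double[n];
        for (int j = 0; j < blocks.length; j++)
            for (int t = 0; t < n; t++)
                y[t] += coeffs[j] * blocks[j][t];
        return y;
    }

    // C is k x k (coefficients of k independent packets), Y is k x n (payloads)
    static double[][] decode(double[][] C, double[][] Y) {
        int k = C.length, n = Y[0].length;
        for (int col = 0; col < k; col++) {
            int pivot = col;                          // partial pivoting
            for (int r = col + 1; r < k; r++)
                if (Math.abs(C[r][col]) > Math.abs(C[pivot][col])) pivot = r;
            double[] tc = C[col]; C[col] = C[pivot]; C[pivot] = tc;
            double[] ty = Y[col]; Y[col] = Y[pivot]; Y[pivot] = ty;
            for (int r = 0; r < k; r++) {             // eliminate column `col`
                if (r == col) continue;
                double f = C[r][col] / C[col][col];
                for (int c = col; c < k; c++) C[r][c] -= f * C[col][c];
                for (int t = 0; t < n; t++) Y[r][t] -= f * Y[col][t];
            }
        }
        double[][] X = new double[k][n];              // C is now diagonal
        for (int r = 0; r < k; r++)
            for (int t = 0; t < n; t++) X[r][t] = Y[r][t] / C[r][r];
        return X;
    }
}
```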

71 Bulk content dissemination Hybrid Architecture: Regional vs. Cross-Regional Dissemination. Regional dissemination: the information is immediately available at the broker. Cross-regional dissemination: involves routing of notifications in the control layer. [Diagram: the publisher's coded packets and the PList flow through the control and data layers to matching subscribers in each case.]

72 Bulk content dissemination Evaluation Results. Experimental setup: UofT's SciNet cluster with up to 1000 nodes; peers have a capped uplink bandwidth (100-200 KB/s). Network setup: 5 regions; 120, 300, or 1000 subscribers uniformly distributed among regions.

73 Bulk content dissemination Content Serving Policy. Network setup: 300 clients; 1 source publishes 100 MB of content.

77 Bulk content dissemination Impact of Packet Loss. Network setup: 300 clients; 1 source publishes 100 MB of content.

80 Bulk content dissemination Impact of Source Fanout on Dissemination Time. Network setup: 300 clients; 1 source publishes 100 MB of content.

84 Bulk content dissemination Effectiveness of Traffic Shaping. Experiment setup: 5 regions and 1000 clients (uplink bandwidth capped at 200 KB/s); 1 source publishes 100 MB. [Chart: regional vs. cross-regional traffic.]

88 Bulk content dissemination Traffic Sharing Among Competing Content with Uniform Popularity. Experiment setup: 5 regions and 1000 clients (uplink bandwidth capped at 200 KB/s); 15 sources (3 in each region) publish 100 MB each with uniform popularity; 1 TB of data in total.

89 Bulk content dissemination Traffic Sharing Among Competing Content with Different Popularity. Experiment setup: 5 regions and 1000 clients (uplink bandwidth capped at 200 KB/s); 15 sources (3 in each region) publish 100 MB; content has 1x, 2x, and 3x popularity. [Chart annotations: most popular, medium popularity, least popular.]

90 Bulk content dissemination Contribution of Peers. Contribution of the source: on average 136% of the content size per segment. Contribution of subscribers: on average 102% of the download size overall. Network setup: 300 clients; 1 source publishes 100 MB of content.

91 Bulk content dissemination Comparison With BitTorrent. Experiment setup: 120 clients (uplink bandwidth capped at 200 KB/s); 1 source publishes 100 MB of content; upon release, all clients start downloading. Publiy: downloads end within 1,300 s. BitTorrent with a 10-minute polling interval: downloads end within 1,700 s. BitTorrent with a 2-second polling interval: downloads end within 1,600 s.

94 The Need for Pub/Sub Systems Many of today's applications and services have a distributed design: social networking apps, cloud applications and services, and tracking and monitoring services. Complex distributed systems have communication needs that go beyond the provisioning of low-level point-to-point network protocols. Middleware platforms bridge this gap by providing high-level, ready-to-use messaging solutions and by supporting scalable, flexible, many-to-many communication.

