

1 Network Layer 4-1 Chapter 5: Multicast and P2P
A note on the use of these ppt slides: All material copyright 1996-2007 J.F. Kurose and K.W. Ross, All Rights Reserved.
Computer Networking: A Top Down Approach, 4th edition. Jim Kurose, Keith Ross. Addison-Wesley, July 2007.

2 Broadcast Routing
[Figure: source duplication vs. in-network duplication over routers R1-R4; in source duplication the source creates and transmits every duplicate, while in-network duplication creates copies inside the network]
- Deliver packets from the source to all other nodes
- Source duplication is inefficient
- Source duplication: how does the source determine the recipient addresses?

3 In-network Duplication
- Flooding: when a node receives a broadcast packet, it sends a copy to all neighbors
  - Problems: cycles & broadcast storms
- Controlled flooding: a node broadcasts a packet only if it hasn't broadcast the same packet before
  - The node keeps track of packet ids it has already broadcast
  - Or reverse path forwarding (RPF): only forward a packet if it arrived on the shortest path between the node and the source
- Spanning tree
  - No redundant packets received by any node
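The duplicate-suppression idea behind controlled flooding can be sketched in a few lines of Python (a minimal model over an adjacency map; all names are illustrative, not from any real router implementation):

```python
# Minimal sketch of controlled flooding: each node remembers the packet
# ids it has already broadcast, so the packet crosses every link at most
# once in each direction and flooding terminates even on cyclic graphs.

def controlled_flood(graph, source, pkt_id):
    """Flood pkt_id from source over `graph` (node -> list of neighbors).
    Returns (nodes reached, copies transmitted)."""
    seen = {source}            # nodes that have already broadcast pkt_id
    frontier = [source]
    transmissions = 0
    while frontier:
        node = frontier.pop()
        for nbr in graph[node]:
            transmissions += 1          # one copy sent on this link
            if nbr not in seen:         # nbr broadcasts pkt_id only once
                seen.add(nbr)
                frontier.append(nbr)
    return len(seen), transmissions

triangle = {'A': ['B', 'C'], 'B': ['A', 'C'], 'C': ['A', 'B']}
reached, copies = controlled_flood(triangle, 'A', 'p1')
```

On this three-node cycle every node is reached with six transmissions (each link carries the packet once per direction), whereas naive flooding would circulate copies forever.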

4 Spanning Tree
[Figure: (a) broadcast initiated at A; (b) broadcast initiated at D, over nodes A-G]
- First construct a spanning tree
- Nodes forward copies only along the spanning tree

5 Spanning Tree: Creation
[Figure: (a) stepwise construction of the spanning tree; (b) the constructed spanning tree]
- Pick a center node
- Each node sends a unicast join message to the center node
  - The message is forwarded until it arrives at a node already belonging to the spanning tree
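The center-based construction above can be sketched as follows, using BFS hop count as a stand-in for unicast shortest-path routing (a minimal model; all names are illustrative):

```python
# Sketch of center-based spanning-tree creation: each joining node sends
# a join toward the center along its unicast path; forwarding stops at
# the first node already on the tree, grafting the path onto it.
from collections import deque

def shortest_path(graph, src, dst):
    """BFS shortest path in an unweighted graph (stand-in for unicast routing)."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            break
        for v in graph[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    path, u = [], dst
    while u is not None:
        path.append(u)
        u = prev[u]
    return path[::-1]

def build_center_tree(graph, center, joining_nodes):
    tree_edges, on_tree = set(), {center}
    for n in joining_nodes:
        path = shortest_path(graph, n, center)
        for a, b in zip(path, path[1:]):
            tree_edges.add(frozenset((a, b)))   # graft this hop onto the tree
            on_tree.add(a)
            if b in on_tree:                    # reached the tree: stop forwarding
                break
            on_tree.add(b)
    return tree_edges
```

On a line topology A-B-C-D with center C and joiners A and D, this yields the three edges of the spanning tree.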

6 Multicast Routing: Problem Statement
- Goal: find a tree (or trees) connecting routers that have local mcast group members
  - tree: not all paths between routers are used
  - source-based: a different tree from each sender to the receivers
  - shared-tree: the same tree used by all group members
[Figure: a shared tree vs. source-based trees]

7 Approaches for Building Mcast Trees
- Source-based tree: one tree per source
  - Shortest-path trees
  - Reverse path forwarding
- Group-shared tree: the group uses one tree
  - Minimal spanning (Steiner) trees
  - Center-based trees
You can read the details of the above approaches in the textbook.

8 IP Multicast – Related Work
- Seminal work by S. Deering in 1989
- Huge amount of follow-on work
  - Research: 1000s of papers on multicast routing, reliable multicast, multicast congestion control, layered multicast
  - Standards: IPv4 and IPv6, DVMRP/CBT/PIM
  - Development: in both routers (Cisco etc.) and end systems (Microsoft, all versions of Unix)
  - Deployment: Mbone, major ISPs

9 IP Multicast – Problems
- Scalability
  - Large number of multicast groups
- Requirement of a dynamic spanning tree
  - A practical problem in dynamic environments
- System complexity
  - Routers maintain state information for multicast groups, deviating from the stateless router design
  - Brings higher-level features, e.g. error and congestion control, into the network
- Autonomy
  - Difficult to maintain consistent policies across different domains

10 Content Distribution Networks (CDN)
- Push content to servers at the network edge, close to users
- Support on-demand traffic, but also support broadcast
- Reduce backbone traffic
- CDNs like Akamai place tens of thousands of servers
[Figure: Akamai edge servers]

11 CDN – Stream Distribution
[Figure: a content delivery network with a media server feeding splitter servers]
- Example: AOL webcast of the Live 8 concert (July 2, 2005)
  - 1500 servers in 90 locations
  - 50 Gbps
  - 175,000 simultaneous viewers
  - 8M unique viewers
(Slide by Bernd Girod)

12 The Scale Problem
- The aggregate capacity
  - Reaching 1M viewers with MPEG-4 (1.5 Mbps) TV-quality video requires 1.5 Tbps of aggregate capacity
  - CBS NCAA tournament (March 2006): video at 400 Kbps with 268,000 users; the aggregate capacity is about 100 Gbps
  - Akamai, the largest CDN service provider, reports 200 Gbps aggregate capacity at the peak
- Implication
  - Self-scaling property
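The aggregate-capacity figures above follow from simple arithmetic (aggregate capacity = viewers × per-stream rate):

```python
# Back-of-the-envelope check of the figures on this slide.
viewers = 1_000_000
rate_mbps = 1.5                                  # TV-quality stream
aggregate_tbps = viewers * rate_mbps / 1_000_000 # 1.5 Tbps

ncaa_users = 268_000
ncaa_rate_kbps = 400
ncaa_gbps = ncaa_users * ncaa_rate_kbps / 1_000_000  # 107.2, i.e. ~100 Gbps
```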

13 Overlay Multicast – Basics
- Application-layer multicast, or overlay multicast
- Build multicast trees at the application end
  - A virtual topology over the unicast Internet
  - End systems communicate through an overlay structure
- Existing multicast approaches
  - Swarming-based (tree-less, or data-driven)
  - Tree-based (hierarchical)
- Examples:
  - End System Multicast (ESM) – Hui Zhang et al.
  - Yoid – Paul Francis et al.
  - ...

14 Overlay Multicast

15 Overlay Multicast – Discussion
- Major advantages
  - Efficient multicast service deployment without the need for infrastructure support
  - Feasibility of implementing the multicast function at end systems
  - Easy to apply additional features (metrics)
- Issues
  - Limited topological information at the end-user side
  - How to find/determine an ideal topology?
  - Lack of practical systems and experiments

16 Ideal Overlay
- Efficiency:
  - Routing delay in the constructed overlay network is close to that in the underlying network
  - Efficient use of bandwidth
    - Fewer duplicated packets on the same link
    - A proper number of connections at each node
  - Support node locality in overlay construction
- Scalability:
  - The overlay remains tractable as the number of hosts and the data traffic increase
  - Small overlay maintenance cost
  - The overlay is constructed in a distributed way and supports node locality

17 Locality-aware and Randomly-connected Overlays
[Figure: a randomly-connected overlay vs. a locality-aware overlay spanning AS-1 and AS-2]

18 Locality-aware Unstructured Overlay
- Objective of mOverlay [1]
  - The ability to exploit local resources over remote ones when possible
    - Locate nearby objects without global communication
    - Permit rapid object delivery
  - Eliminate unnecessary wide-area hops for inter-domain messages
    - Eliminate traffic going through high-latency, congested stub links
    - Reduce wide-area bandwidth utilization

[1] X. Zhang, Q. Zhang, Z. Zhang, G. Song and W. Zhu, "A Construction of Locality-Aware Overlay Network: mOverlay and Its Performance", IEEE JSAC Special Issue on Recent Advances in Service Overlay Networks, Jan. 2004.

19 Key Concepts of mOverlay
- Two-level hierarchical network
  - A group consists of a set of hosts close to each other
    - For ANY position P in the underlying network, the distances between P and the hosts within a group can be considered equal
  - Neighbor groups in the overlay are groups that are nearby in the underlying network
  - A desirable overlay structure has most links between hosts within a group and only a few links between groups
- Approximation
  - Use the neighbors of a group as dynamic landmarks

20 Locating Process
[Figure: steps (1)-(7) of the locating process, starting with (1) return boot host B from Group 1 and (2) measurement and information exchange]
- 4-phase locating
  - Contact the rendezvous point (RP) to fetch boot hosts
  - Measure the distance to a boot host and its neighbor groups
  - Determine the closest group, checking the group criterion
  - Terminate when the group criterion or the stop criterion is met
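The locating loop can be sketched as follows (a hedged model; function and parameter names are illustrative, not taken from the paper's pseudocode): starting from a boot group, the joining host repeatedly measures its distance to the current group and that group's neighbors, moving to the closest neighbor until the current group is at least as close as all of its neighbors (the group criterion) or an iteration cap is hit (the stop criterion).

```python
# Illustrative sketch of mOverlay's locating process. `distance(g)` is
# the joining host's measured distance to group g; `neighbors[g]` are
# g's neighbor groups, used as dynamic landmarks.

def locate_group(boot_group, neighbors, distance, max_iters=32):
    current = boot_group
    for _ in range(max_iters):                      # stop criterion
        closest = min(neighbors[current], key=distance, default=None)
        if closest is None or distance(current) <= distance(closest):
            return current                          # group criterion met
        current = closest                           # move toward the host
    return current

# Toy topology: G3 is nearest to the joining host, G1 is the boot group.
dist = {'G1': 30, 'G2': 10, 'G3': 5}
neighbors = {'G1': ['G2'], 'G2': ['G1', 'G3'], 'G3': ['G2']}
home = locate_group('G1', neighbors, dist.get)
```

Starting from G1, the search walks G1 → G2 → G3 and stops, since no neighbor of G3 is closer.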

21 Popular Deployed Systems
- Live P2P streaming has become an increasingly popular approach
- Many real deployed systems; to name a few...
- CoolStreaming: Cooperative Overlay Streaming
  - First release: May 2004. As of Oct 2006:
    - Downloads: > 1,000,000
    - Average online users: 20,000
    - Peak-time online users: 80,000
    - Google entries ("CoolStreaming"): 370,000
  - CoolStreaming is the base technology for Roxbeam Corp., which launched live IPTV programs jointly with Yahoo Japan in October 2006

22 Popular Deployed Systems (Cont.)
- PPLive: well-known IPTV system
  - 3.5M subscribers in 2005
  - 36.9M subscribers predicted for 2009
  - May 2006: over 200 distinct online channels
  - Revenues could reach $10B
  - Need to understand current systems to design better future systems
- More to come...

23 Pull-based Streaming
- Almost all deployed P2P streaming systems are based on a pull protocol
  - Also called a "data-driven" or "swarming" protocol
- Basic idea
  - Live media content is divided into segments, and every node periodically notifies its neighbors of which packets it has
  - Each node explicitly requests the segments of interest from its neighbors according to their notifications
  - Very similar to BitTorrent
- Well-acknowledged advantages
  - Robustness and simplicity
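The notify/request exchange above reduces to a set difference over advertised buffer maps (an illustrative sketch; the names are not from any particular system):

```python
# Pull-based ("data-driven") exchange in miniature: a neighbor
# advertises a buffer map of the segment ids it holds, and we request
# exactly the advertised segments we are still missing.

def missing_segments(my_map, neighbor_map):
    """Segment ids the neighbor advertises that we do not yet hold."""
    return sorted(neighbor_map - my_map)

mine = {1, 2, 3}              # segments already in our buffer
theirs = {2, 3, 4, 5}         # neighbor's periodic buffer-map notification
requests = missing_segments(mine, theirs)   # explicitly pull 4 and 5
```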

24 Hybrid Pull-Push Protocol
- The pull-based protocol has a tradeoff between control overhead and delay
  - To minimize delay:
    - A node notifies its neighbors of each packet arrival immediately
    - Neighbors should also request the packet immediately
    - Results in remarkable control overhead
  - To diminish the overhead:
    - A node can wait until dozens of packets have arrived before informing its neighbors
    - Neighbors can also request a bunch of packets each time
    - Leads to considerable delay

25 Push-Pull Streaming Mechanism
- How to reduce the delay of the pull mechanism while keeping its advantages?
  - Use the pull mechanism at startup to measure the partners' ability to provide video packets
  - Use the push mechanism to reduce the delay
  - Partition the video stream according to the video packets received from each partner in the last interval
  - Packets lost during a push interval are recovered by the pull mechanism
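The partitioning step might look like the following sketch, assuming the stream is split into numbered substreams and each partner pushes the substreams assigned to it. This proportional scheme is an illustrative assumption, not GridMedia's exact algorithm:

```python
# Assign substreams to partners in proportion to how many packets each
# partner actually delivered during the pull-based measurement interval;
# those partners then push their assigned substreams without requests.

def partition_stream(delivered, substream_count):
    """delivered: partner -> packets received from it last interval.
    Returns partner -> list of substream ids it should push."""
    total = sum(delivered.values())
    partners = sorted(delivered, key=delivered.get, reverse=True)
    assignment, next_id = {}, 0
    for p in partners:
        share = round(substream_count * delivered[p] / total)
        ids = list(range(next_id, min(next_id + share, substream_count)))
        assignment[p] = ids
        next_id += len(ids)
    while next_id < substream_count:        # rounding leftovers go to the
        assignment[partners[0]].append(next_id)  # best-performing partner
        next_id += 1
    return assignment

plan = partition_stream({'A': 60, 'B': 30, 'C': 10}, 10)
```

Here partner A, which delivered 60% of the packets, is asked to push 6 of the 10 substreams.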

26 GridMedia
- GridMedia is designed to support large-scale live video streaming over the worldwide Internet
- The first generation: GridMedia I
  - Mesh-based multi-sender structure
  - Combined with IP multicast
  - First release: May 2004
- The second generation: GridMedia II
  - Unstructured overlay
  - Push-pull streaming mechanism
  - First release: Jan. 2005

27 Real Deployment
- Gala Evening for Spring Festival 2005 and 2006, for the largest TV station in China (CCTV)
  - Streaming server: dual-core Xeon server
  - Video encoding rate: 300 kbps
  - Maximum connections from the server: 200 in 2005, 800 in 2006
  - Number of partners: about 10
  - Buffer deadline: 20 s

28 Performance Analysis
- Gala Evening for Spring Festival 2005
  - More than 500,000 person-times in total; maximum concurrent users: 15,239
  - Users from 66 countries, 78.0% from China
  - Achieved 76x capacity amplification (15,239/200 ≈ 76) relative to the bounded server outgoing bandwidth
[Figure: pie charts of user distribution – China 78%, others 22%; within others: Canada 20%, USA 18%, UK 15%, Japan 13%, GM 6%, others 28%]

29 Performance Analysis (Cont.)
- Gala Evening for Spring Festival 2006
  - More than 1,800,000 person-times in total; maximum concurrent users: 224,453
  - Users from 69 countries, 79.2% from China
  - Achieved 280x capacity amplification (224,453/800 ≈ 280) relative to the bounded server outgoing bandwidth

30 Deployment Experience: Connection Heterogeneity
- In 2005, about 60.8% of users were behind different types of NATs, while at least 16.0% of users (in China) accessed the Internet via DSL connections
- In 2006, about 59.2% of users were behind different types of NATs, while at least 14.2% of users (in China) accessed the Internet via DSL connections
- An effective NAT traversal scheme should be carefully considered in the system design of P2P-based live streaming applications

31 Deployment Experience: Online Duration
- In 2005, nearly 50% of users spent less than 3 minutes, and about 18% stayed active for more than 30 minutes
- In 2006, roughly 30% of users left the system within 3 minutes, and more than 35% enjoyed the show for more than 30 minutes
- Peers with longer online duration are expected to have larger average remaining online time
- Taking online-duration information into consideration when designing the overlay structure or selecting upstream peers can improve system performance

32 Deployment Experience: Request Characteristics
[Figure: request rate per 30 seconds from 23:00 to 0:00 in 2005 and 2006]
- The average request rate stayed in the hundreds in 2005 and in the thousands in 2006
- Occasionally the request rate rushed to a peak beyond 3,700 in 2005 and 32,000 in 2006
- The high request rate and sporadic flash crowds pose a great challenge to the reliability and stability of the RP server and the system

33 Future Directions
- Throughput improvement should not be the only key focus
- Interesting future directions
  - Minimize ISP core-network and cross-ISP traffic
    - Use proxy caches and locality-aware techniques to relieve link stress
  - Server bandwidth reduction
    - How to let home users broadcast video with high quality?
  - Real Internet environments
    - Connections across the peering links between ISPs have low rates
    - NATs/firewalls prevent end hosts from connecting with each other

