
1. Multicast Redux: A First Look at Enterprise Multicast Traffic
Elliott Karpilovsky (Princeton University), Lee Breslau, Alexandre Gerber, Subhabrata Sen (AT&T Labs - Research)
© 2009 AT&T Intellectual Property. All rights reserved.

2. A brief history of multicast
- Efficient one-to-many distribution at the network layer
- Significant interest in the 1990s:
  - Experimentally deployed on the MBone
  - Extensive protocol research (DVMRP, PIM, CBT, MOSPF, etc.)
- Failed to achieve significant deployment on the Internet
  - Problems with economic incentives, inter-provider dependencies, security, and management

3. Resurgence of multicast
Multicast is seeing deployment in two contexts:
- IPTV: video distribution in a walled-garden environment managed by Internet service providers
- Enterprise networks:
  - Billing, dependency, security, and management problems are mitigated
  - Supports a multitude of applications (file distribution, conferencing, financial trading, etc.)

4. Our focus: multicast in the VPN environment (MVPN)
- Enterprises are typically supported by MPLS-based VPNs
- There is clear demand for service providers to support multicast within a VPN
- But little is currently known about multicast behavior and characteristics in the VPN setting: number of receivers? bitrates? durations?
- Let's study VPN multicast from the vantage point of a large ISP, now that it has been deployed:
  - Pros: scale, diversity
  - Cons: incomplete visibility

5. MVPN introduction. First challenge: MPLS multicast doesn't exist
- Customer unicast between sites: a packet is delivered from one PE to another PE, with MPLS used as a point-to-point encapsulation mechanism
- Customer multicast between sites (receivers at multiple sites): packets must be delivered from one PE to one or more other PEs, but MPLS does not currently support one-to-many distribution
- Needed: an encapsulation mechanism across the provider backbone
[Figure: unicast VPN (Src behind CE1/PE1 delivering to Rcv behind PE2/CE2 over MPLS) contrasted with multicast VPN (Src behind CE2/PE2 delivering to Rcv1, Rcv2, Rcv3 behind PE1, PE3, PE4). What kind of encapsulation for backbone transport?]

6. Basic solution: GRE encapsulation at the PE + IP multicast in the core
- Src sends a multicast packet to customer group 224.5.6.7
- CE2 forwards the customer packet to PE2
- PE2 encapsulates the customer packet in provider group 239.1.1.1
- PE2 transmits the provider packet across the backbone multicast tree
- PE1, PE3, and PE4 decapsulate the customer packet and forward it to CE1, CE3, and CE4, which then forward it to Rcv1, Rcv2, and Rcv3
[Figure: customer packet (dst 224.5.6.7 + payload) wrapped into a provider packet (dst 239.1.1.1) at PE2; egress PEs decapsulate to recover the customer packet]
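The encapsulation step above can be sketched at the byte level. This is a simplified illustration, not router code: checksums are left at zero, the GRE header is the minimal 4-byte form, and all addresses (the PE loopback 192.0.2.2, the source 10.0.0.1) are hypothetical.

```python
import struct
import socket

GRE_PROTO_IPV4 = 0x0800  # EtherType for an IPv4 packet carried inside GRE

def ipv4_header(src, dst, proto, payload_len):
    """Minimal 20-byte IPv4 header (no options; checksum zeroed for brevity)."""
    ver_ihl = (4 << 4) | 5
    return struct.pack("!BBHHHBBH4s4s",
                       ver_ihl, 0, 20 + payload_len, 0, 0, 64, proto, 0,
                       socket.inet_aton(src), socket.inet_aton(dst))

def gre_encapsulate(customer_pkt, pe_src, provider_group):
    """Ingress PE wraps the customer multicast packet for backbone transport:
    outer IPv4 header (protocol 47 = GRE) + basic GRE header + inner packet."""
    gre = struct.pack("!HH", 0, GRE_PROTO_IPV4)  # no checksum/key/sequence flags
    return ipv4_header(pe_src, provider_group, 47,
                       len(gre) + len(customer_pkt)) + gre + customer_pkt

def gre_decapsulate(provider_pkt):
    """Egress PE strips the outer IPv4 + GRE headers (20 + 4 bytes),
    recovering the original customer packet for forwarding toward the CE."""
    return provider_pkt[24:]

# Customer packet: Src -> customer group 224.5.6.7 (UDP payload elided)
inner = ipv4_header("10.0.0.1", "224.5.6.7", 17, 8) + b"payload!"
outer = gre_encapsulate(inner, "192.0.2.2", "239.1.1.1")
assert gre_decapsulate(outer) == inner  # egress PE recovers the customer packet
```

The key property the slide relies on is visible here: the backbone routers only ever see the outer header addressed to the provider group 239.1.1.1, so per-customer-group addressing stays invisible to the core.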

7. But it gets more complicated: an MVPN design issue
- Customer group CG1 has receivers behind PE1 and PE3
- Customer group CG2 has receivers behind PE3 and PE4
- How many multicast groups should the provider use to encapsulate traffic from CE2 to CG1 and CG2?
  - One provider group per customer group: not scalable
  - One provider group for all customer groups: bandwidth wasted (e.g., CG1 traffic delivered to CE4)
[Figure: Src1 and Src2 behind CE2/PE2; CG1 and CG2 receivers distributed across PE1, PE3, and PE4]

8. MVPN solution: Default MDT and Data MDTs
- Industry practice, initially defined by Cisco
- A single default provider group per VPN carries low-bandwidth customer traffic (and inter-PE control traffic):
  - Referred to as the Default MDT
  - All PEs in a VPN join this group, so traffic is broadcast to all of the VPN's PEs
- High-bandwidth customer traffic is carried on special data groups in the provider backbone:
  - Referred to as Data MDTs
  - Only the relevant PEs join these groups
  - A customer group is moved to a Data MDT if it exceeds a throughput/duration threshold
  - N data groups per VPN (configured)
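The placement policy above can be modeled as a small state machine on the ingress PE. This is a toy sketch of the described behavior, not actual router logic: the class, the threshold value, and the group addresses are all illustrative assumptions.

```python
class MvpnEncapsulator:
    """Toy model of per-VPN MDT selection: low-rate customer groups ride the
    Default MDT; a customer group whose rate exceeds the configured threshold
    is moved to one of the N configured Data MDTs, if any remain free."""

    def __init__(self, default_mdt, data_mdts, threshold_kbps):
        self.default_mdt = default_mdt     # all PEs in the VPN join this group
        self.free = list(data_mdts)        # the N configured data groups
        self.assigned = {}                 # customer group -> its Data MDT
        self.threshold_kbps = threshold_kbps

    def provider_group(self, cust_group, rate_kbps):
        # Once moved to a Data MDT, a group stays there in this sketch.
        if cust_group in self.assigned:
            return self.assigned[cust_group]
        if rate_kbps > self.threshold_kbps and self.free:
            self.assigned[cust_group] = self.free.pop(0)
            return self.assigned[cust_group]
        return self.default_mdt            # low rate, or no Data MDT left

enc = MvpnEncapsulator("239.1.1.1", ["239.2.1.1", "239.2.1.2"], threshold_kbps=1)
assert enc.provider_group("224.5.6.7", rate_kbps=0.5) == "239.1.1.1"  # stays on Default MDT
assert enc.provider_group("224.9.9.9", rate_kbps=500) == "239.2.1.1"  # moved to a Data MDT
```

Note the fallback in the last line of `provider_group`: once the N data groups are exhausted, further high-rate groups land back on the Default MDT (or share Data MDTs, as a later backup slide discusses).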

9. MVPN example
- CG1 and CG2: low-rate groups, encapsulated on the Default MDT
- CG3: high-data-rate group, encapsulated on a Data MDT
- All PEs join the Default MDT and receive traffic for CG1 and CG2; CE1 drops traffic for CG2, and CE4 drops traffic for CG1
- Only PE1 and PE4 join the Data MDT, so CG3 traffic is delivered only to PE1 and PE4
[Figure: Default MDT spanning all PEs carrying CG1 and CG2; Data MDT for CG3 spanning only PE1 and PE4]

10. Data collection
- SNMP poller:
  - Approximately 5-minute intervals
  - Contacts PEs for Default and Data MDT byte-count MIBs
- About the data:
  - Jan. 4, 2009 to Feb. 8, 2009
  - 88M records collected
  - 25K Data MDT sessions, each characterized by ((S,G), start time, end time, receivers)
- Challenge: we only see behavior at the backbone level (provider groups); customer groups and applications are hidden from this measurement methodology
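Turning the raw 5-minute byte counters into the per-session bitrates analyzed later is a simple delta computation. The sketch below is an assumption about how such a poller's output would be post-processed (the paper does not show this step); the sample values are made up.

```python
def rates_kbps(samples, interval_s=300):
    """Convert successive (timestamp_s, byte_count) SNMP readings for one
    MDT into average bitrates per polling interval, in kbps.

    Assumes a monotonically increasing counter; a real poller must also
    handle counter wraps, device resets, and missed polls."""
    out = []
    for (t0, b0), (t1, b1) in zip(samples, samples[1:]):
        out.append((b1 - b0) * 8 / (t1 - t0) / 1000)  # bytes -> kilobits/sec
    return out

# Three polls at ~5-minute spacing (illustrative counter values)
samples = [(0, 0), (300, 150_000), (600, 450_000)]
print(rates_kbps(samples))  # [4.0, 8.0]
```

Rates like these, thresholded against the "70% of sessions send less than 5 kbps" observation on the next slide, are all that the backbone-level vantage point exposes: the counters aggregate whatever customer groups are riding each provider group.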

11. Results: Data MDTs
Wide variations, but some initial observations:
- 70% of sessions send less than 5 kbps
- 70% of sessions last less than 10 minutes
- 50% of sessions have only 1 receiver (PE)

12. Let's correlate some of these variables: k-means clustering
- Cluster sessions based on: duration, throughput, peak rate, maximum number of receivers, average number of receivers
- Normalize each variable to z-scores
- Use k-means with a simulated annealing / local search heuristic
- Pick a small k such that significant variance is explained
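The core of the procedure (z-score normalization followed by k-means) can be sketched as below. This is plain Lloyd's k-means with a deterministic initialization for reproducibility; the paper's simulated-annealing / local-search refinement is omitted, and the five-feature sample sessions are invented for illustration.

```python
import numpy as np

def zscore(X):
    """Normalize each column to zero mean and unit variance, so that
    features on very different scales (seconds vs. kbps) weigh equally."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

def kmeans(X, k, iters=100):
    """Plain Lloyd's k-means. Initial centers are spread across the data
    deterministically; the paper instead adds a randomized local search."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance)
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # Recompute each center as the mean of its assigned points
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Hypothetical sessions, features: duration (s), avg throughput (kbps),
# peak rate (kbps), max receivers, avg receivers
X = zscore(np.array([[600, 5, 10, 1, 1], [610, 4, 9, 1, 1],
                     [60, 40, 80, 2, 2], [55, 38, 75, 3, 2]], float))
labels, _ = kmeans(X, k=2)
assert labels[0] == labels[1] and labels[2] == labels[3]  # the two session types separate
```

Choosing k then amounts to repeating this for increasing k and stopping once additional clusters explain little further variance, which is how the next slide arrives at k = 4.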

13. Clustering: we pick k = 4, which seems to be the right number of clusters

14. Clustering results: the four categories (some variables removed)

Short name        | Duration   | Throughput  | Avg. receivers | Flows in cluster
------------------|------------|-------------|----------------|-----------------
Unicast-like      | 29 hours   | 12 Mbit/s   | 1.2            | 0.1%
Limited receivers | 72 minutes | 40 kbit/s   | 2.1            | 86.5%
Long lived        | 28 days    | 605 kbit/s  | 3.1            | 0.05%
Well-fitted       | 60 minutes | 21 kbit/s   | 19.6           | 13.3%

15. Conclusion
- First study of enterprise multicast traffic in the wild:
  - A variety of traffic types
  - A significant number of flows have very few receivers
  - Moreover, some of these flows have moderate throughput
- This raises more questions:
  - Why are there such different multicast sessions?
  - What customer applications are behind these multicast sessions?
  - What multicast customer groups are behind these multicast provider groups?
- Future work will drill down into these multicast sessions, by actively joining groups and/or by using DPI monitoring

16. Questions?
Alexandre Gerber, gerber@research.att.com

17. Backup slides

18. MVPN strategy
- Observation: we need to transport customer packets from a single ingress PE to one or more egress PEs, and we already have a mechanism for this: IP multicast
- Solution:
  - Compute multicast distribution trees between PEs
  - Use GRE (Generic Routing Encapsulation; think IP-in-IP) to encapsulate each customer multicast packet in a multicast packet for transport across the provider backbone
  - The receiving PE decapsulates the packet and forwards it to the CE
- The basic idea is simple, but there are many design details and engineering tradeoffs in actually pulling this off

19. MVPN example
- Src sends a multicast packet to customer group 224.5.6.7
- CE2 forwards the customer packet to PE2
- PE2 encapsulates the customer packet in provider group 239.1.1.1
- PE2 transmits the provider packet across the backbone multicast tree
- PE1, PE3, and PE4 decapsulate the customer packet and forward it to CE1, CE3, and CE4, which then forward it to Rcv1, Rcv2, and Rcv3
[Figure: customer packet (dst 224.5.6.7 + payload) wrapped into a provider packet (dst 239.1.1.1) at PE2; egress PEs decapsulate to recover the customer packet]

20. MVPN strawman solution #1: a dedicated provider group per customer group
- CG1 is encapsulated in PG1; CG2 is encapsulated in PG2
- PE1 and PE3 join PG1; PE3 and PE4 join PG2
- The multicast routing trees reach exactly the appropriate PEs
- Advantage: customer multicast is delivered only to PEs that have interested receivers behind attached CEs
- Disadvantage: per-customer-group state in the backbone violates the scalability requirement
[Figure: Src behind CE2/PE2; PG1 reaching PE1 and PE3, PG2 reaching PE3 and PE4]

21. MVPN strawman solution #2: a single provider group per VPN
- CG1 and CG2 are both encapsulated in PG1
- All PEs join PG1, so both customer groups are delivered to every PE
- PEs with no interested receivers behind their attached CEs drop the packets
- Advantage: only a single multicast routing table entry per VPN in the backbone
- Disadvantage: inefficient use of bandwidth, since some traffic is dropped at the PE (e.g., CE4 drops CG1 packets; CE1 drops CG2 packets)
[Figure: Src behind CE2/PE2; a single PG1 tree spanning all PEs]

22. MVPN scalability and performance issue
- High-bandwidth customer data groups are encapsulated in non-default provider groups (Data MDTs), with N groups per VPN
- What happens when there are more than N high-bandwidth customer groups in a VPN?
- Solution: map multiple high-bandwidth groups onto a single provider data group (e.g., CGx and CGy are both encapsulated in PGa)
- Implication: if CGx and CGy cover different sets of PEs (which is likely), some high-bandwidth traffic reaches PEs where it is unwanted (and dropped), wasting bandwidth
- Open question: to what degree will this be a problem? Unknown; potentially big; it will depend on usage patterns, and answering it will need data
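The bandwidth waste described above is easy to quantify for a given mapping of customer groups to shared Data MDTs. The sketch below uses a round-robin mapping purely for illustration (the actual vendor mapping is not specified here), and the group names and PE sets are hypothetical.

```python
def wasted_deliveries(groups, n_data_mdts):
    """groups: customer group -> set of PEs with interested receivers.
    Counts (group, PE) deliveries that are unwanted once groups sharing a
    Data MDT are flooded to the union of their member PE sets."""
    mdt_pes = {}   # data MDT index -> union of PE sets joined to it
    mapping = {}   # customer group -> data MDT index
    for i, (cg, pes) in enumerate(sorted(groups.items())):
        m = i % n_data_mdts          # illustrative round-robin assignment
        mapping[cg] = m
        mdt_pes.setdefault(m, set()).update(pes)
    # A delivery is wasted when the shared MDT reaches a PE that has no
    # interested receivers for this particular customer group.
    return sum(len(mdt_pes[mapping[cg]] - pes)
               for cg, pes in groups.items())

demand = {"CGx": {"PE1", "PE4"}, "CGy": {"PE3", "PE4"}}
assert wasted_deliveries(demand, n_data_mdts=2) == 0  # one MDT each: no waste
assert wasted_deliveries(demand, n_data_mdts=1) == 2  # shared: PE3 gets CGx, PE1 gets CGy
```

Run over real per-group PE membership, a function like this would give a first-order answer to the slide's open question of how much bandwidth group sharing actually wastes.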

23. MVPN characteristics
- Isolation: each customer VPN is assigned its own set of provider groups, isolating traffic from different customers
- Scalability: backbone multicast routing state is a function of the number of VPNs, not the number of customer groups (N data groups + 1 default group per VPN)
- Flexibility: no constraints on customer use of multicast applications or group addresses

