
1 Routing Economics under Big Data
Murat Yuksel (yuksem@cse.unr.edu), Computer Science and Engineering, University of Nevada – Reno, USA

2 Outline
Routing in a Nutshell
BigData Routing Problems
– Economic Granularity
– Routing Scalability
– Bottleneck Resolution
Summary

3 Routing in a Nutshell
Example destination – URL: http://www.youtube.com, IP address: 74.125.224.169, IP prefix: 74.125.224/24.
ISP chain spanning local, regional, and Tier-1 providers: NSHE, Broad-Band One (1951), AT&T (7018), Level3 (3356), Pac-Net (10026), Google (15169). Path: 1951-7018-3356-10026-15169.
The AS-path grows as the route announcement propagates from Google toward NSHE: 15169, then 10026-15169, 3356-10026-15169, 7018-3356-10026-15169, and finally 1951-7018-3356-10026-15169.
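The growing path above is BGP's path-vector prepending as the announcement travels from Google toward NSHE. A minimal sketch of that propagation (the ASN-to-ISP mapping follows the slide; the code itself is illustrative, not part of the talk):

```python
# Illustrative sketch of BGP path-vector prepending along the slide's ISP chain.
# ASNs follow the slide: Google 15169, Pac-Net 10026, Level3 3356, AT&T 7018,
# Broad-Band One 1951; NSHE is the receiving local network.
chain = [("Google", 15169), ("Pac-Net", 10026), ("Level3", 3356),
         ("AT&T", 7018), ("Broad-Band One", 1951)]

prefix = "74.125.224/24"
as_path = []                        # the path-vector carried in the announcement
for name, asn in chain:
    as_path.insert(0, asn)          # each AS prepends its own ASN before re-advertising
    print(f"{name} advertises {prefix} with AS-path "
          + "-".join(str(a) for a in as_path))
# NSHE finally receives the route with AS-path 1951-7018-3356-10026-15169.
```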

4 (Diagram: the Internet hierarchy – backbone ISPs (Tier-1) such as AT&T, Cogent, and Level3 form the Internet core; regional ISPs such as SBC and local ISPs such as NSHE attach below. Links are either customer/provider or peer/peer, and the customers reachable below an ISP form its customer cone.)

5 Routing in a Nutshell
Example ASes: AT&T (7018), Broad-Band One (1951), Level3 (3356).
Inter-domain routing among ISPs: single metric (number of hops/ISPs), partial network information, scalable.
Intra-domain routing within an ISP network: multi-metric (delay, bandwidth, speed, packet loss rate, ...), computationally heavy, complete network information in terms of links, not scalable for large networks.
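To make the multi-metric point concrete, here is a minimal sketch of one common intra-domain heuristic: prune links that cannot meet a bandwidth constraint, then run Dijkstra on delay. The topology, numbers, and function names are hypothetical, not from the talk.

```python
import heapq

# Hypothetical intra-domain topology: link -> (delay_ms, bandwidth_mbps).
links = {
    ("a", "b"): (5, 100), ("b", "d"): (5, 100),   # higher bandwidth, higher delay
    ("a", "c"): (2, 20),  ("c", "d"): (2, 20),    # lower delay, lower bandwidth
}

def min_delay_path(src, dst, need_mbps):
    """Multi-metric heuristic: drop links below the bandwidth need,
    then find the minimum-delay path (Dijkstra) on what remains."""
    adj = {}
    for (u, v), (delay, bw) in links.items():
        if bw >= need_mbps:                       # bandwidth constraint (pruning)
            adj.setdefault(u, []).append((v, delay))
            adj.setdefault(v, []).append((u, delay))
    pq, done = [(0, src, [src])], set()
    while pq:
        d, node, path = heapq.heappop(pq)
        if node == dst:
            return d, path
        if node in done:
            continue
        done.add(node)
        for nxt, delay in adj.get(node, []):
            if nxt not in done:
                heapq.heappush(pq, (d + delay, nxt, path + [nxt]))
    return None

print(min_delay_path("a", "d", need_mbps=10))   # (4, ['a', 'c', 'd'])
print(min_delay_path("a", "d", need_mbps=50))   # (10, ['a', 'b', 'd'])
```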

6 NSHE – Broad-Band One – AT&T – Level3 – Pac-Net – Google: for a flow of a few Mb/s, the per-flow cost/value is negligible. What if the flow is big – really big, 100+ Gb/s between Big-Data Alice and Big-Data Bob? Does that call for flow-aware economics?

7 Problem 1: Economic Granularity
Example: NSHE (3851) buys point-to-anywhere transit from providers such as AT&T (7018) and Level3 (3356).
Point-to-anywhere deals are not automated and have rigid SLAs (6+ months); transit service is seen as a commodity.
In the current value-sharing structure, the edge gets all the money.

8 Contract Routing Architecture
An ISP is abstracted as a set of contract links.
Contract link: an advertisable contract
– between peering/edge points i and j of an ISP
– with the flexibility of advertising different prices for edge-to-edge (g2g) intra-domain paths
Contract components:
– performance component, e.g., capacity
– financial component, e.g., price
– time component, e.g., term
This gives the capability of managing value flows at a finer granularity than point-to-anywhere deals. [Global Internet 2008]
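As a data-structure sketch only (the field names are mine; the slide only names the three components), a contract link might be represented as:

```python
from dataclasses import dataclass

@dataclass
class ContractLink:
    """An advertisable g2g contract between two edge points of an ISP.
    Field names are illustrative; the slide only names the three components."""
    isp: str
    ingress: str            # edge/peering point i
    egress: str             # edge/peering point j
    capacity_mbps: float    # performance component, e.g., capacity
    price_usd: float        # financial component, e.g., price
    term_minutes: int       # time component, e.g., term

# The same ISP can advertise different prices for different g2g paths:
offers = [
    ContractLink("ISP-A", "1", "2", capacity_mbps=30, price_usd=8, term_minutes=30),
    ContractLink("ISP-A", "1", "3", capacity_mbps=10, price_usd=7, term_minutes=20),
]
```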

9 G2G Set-of-Links Abstraction: it can change things a lot even for small scenarios.

10 G2G Set-of-Links Abstraction – Max Throughput Routing [ICC 2012]. (Plot: average over 50 random topologies.)

11 G2G Set-of-Links Abstraction – Min Delay Routing [ICC 2012]. (Plots: average over 50 random topologies; average over 50 BRITE topologies.)

12 Path-Vector Contract Routing
User X issues a path request toward destination 5: [5, 10-30Mb/s, 15-45mins, $10].
Discovery packets accumulate contract-path-vectors as they propagate across ISPs A, B, and C, e.g.:
[5, A, 1-2, 15-30Mb/s, 15-30mins, $8]
[5, A, 1-3, 5-10Mb/s, 15-20mins, $7]
[5, A-B, 1-2-4, 15-20Mb/s, 20-30mins, $4]
Paths to 5 are found and ISP C sends replies to the user with two specific contract-path-vectors:
[A-B-C, 1-2-4-5, 20Mb/s, 30mins]
[A-C, 1-3-5, 10Mb/s, 15mins]
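A minimal sketch of how the pieces of a contract-path-vector could be checked against the user's request. The composition rule here (end-to-end bandwidth and duration as minima over the segments, price as the sum) is an assumption for illustration, as are the helper names.

```python
# Illustrative check of a contract-path-vector against the user's request.
# Request fields mirror the slide: destination, minimum bandwidth, minimum duration, budget.
request = {"dest": 5, "min_mbps": 10, "min_mins": 15, "budget_usd": 10}

# Each segment is one ISP's g2g contract: (ISP, g2g path, Mb/s, minutes, $).
path_vector = [
    ("A", "1-3", 10, 20, 7),
    ("C", "3-5", 15, 15, 2),
]

def satisfies(request, path_vector):
    """Assumed composition rule: end-to-end bandwidth and duration are the minima
    over the segments, and the price is the sum of the segment prices."""
    bw = min(seg[2] for seg in path_vector)
    mins = min(seg[3] for seg in path_vector)
    cost = sum(seg[4] for seg in path_vector)
    return (bw >= request["min_mbps"]
            and mins >= request["min_mins"]
            and cost <= request["budget_usd"])

print(satisfies(request, path_vector))   # True: 10 Mb/s for 15 minutes at $9
```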

13 Results – Path Exploration. Over 80% path exploration success ratio even at 50% discovery-packet filtering, thanks to the diversity of Internet routes. With locality, PVCR achieves near 100% path exploration success. As the budget increases with bTTL and MAXFWD, PVCR becomes robust to filtering. [GLOBECOM 2012]

14 Results – Traffic Engineering. PVCR provides end-to-end coordination mechanisms: no hot-spots or network bottlenecks. [ICC 2012]

15 Problem 2: Routing Scalability
Routing scalability is a burning issue!
– Growing routing state and computational complexity: timely lookups are harder to do, and the control-plane burden grows.
– Growing demand for customizable routing (VPNs), higher forwarding speeds, and path flexibility (policy, quality).

16 Problem 2: Routing Scalability
The cost of routing unit traffic is not scaling well: specialized router designs are getting costlier, currently > $40K. BigData flows mean more packets at faster speeds. How do we scale routing functionality to BigData levels?

17 Offload the Complexity to the Cloud?
Cloud services are getting abundant:
– Closer: delay to the cloud is decreasing [CloudCmp, IMC'10]; bandwidth to the cloud is increasing.
– Cheaper: CPU and memory are becoming commodities at the cloud; cloud prices are declining.
– Computational speedups via parallelization.
– Scalable resources, redundancy.

18 CAR: Cloud-Assisted Routing
Goal: mitigate the growing routing complexity by offloading it to the cloud.
Research question: if we maintain full router functionality at the cloud but only partial functionality at the router hardware, can we solve some of the routing scalability problems?
(Diagram: CAR Router X, hardware with partial routing functions, exchanges updates and packets with Proxy Router X, software with full routing functions, hosted in a cloud that provides CAR services to many routers.)
Use the valuable router hardware for the most-used prefixes and the most urgent computations. Amdahl's Law in action!
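One back-of-the-envelope reading of the Amdahl's Law remark, with illustrative (made-up) numbers: the overall gain is bounded by the fraction of routing work that can actually be offloaded to the cloud.

```python
def amdahl_speedup(offloadable_fraction, cloud_speedup):
    """Classic Amdahl's Law: the part kept on the router hardware is 'serial';
    only the offloadable fraction benefits from the cloud's speedup."""
    f, s = offloadable_fraction, cloud_speedup
    return 1.0 / ((1.0 - f) + f / s)

# Illustrative: if 80% of control-plane work can move to the cloud and the cloud
# does it 10x faster, the router-side remainder caps the overall gain at ~3.6x.
print(round(amdahl_speedup(0.8, 10), 2))   # 3.57
```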

19 CAR: An Architectural View
(Figure: the design space plotted as flexibility – number of configuration parameters, per interface / per flow / per packet – versus scalability in packets/sec, spanning pure SW, hybrid SW/HW, and specialized HW. Systems shown include OpenFlow, Click, RCP, Cisco CSR, PacketShader, RouteBricks, SwitchBlade, NetFPGA, and specialized ASICs (Cisco Catalyst series); finer programmability comes with more platform dependence, and CAR pushes the flexibility/scalability barrier.)

20 CAR: A Sample BGP Peering Scenario
Standard BGP peer establishment: ~400K prefixes exchanged (full table), taking approx. 4-5 minutes, yet only ~4K prefixes are selected as best paths.
BGP peer establishment with CAR: only the ~4K selected prefixes are provided to the routers using Outbound Route Filtering (RFC 5291), taking approx. 1-2 minutes; the selected alternative paths out of the 400K are installed later.
Step 1: full-table exchange between the cloud proxies.
Step 2: ORF list exchange between routers and their proxies.
Step 3: only the selected prefixes are exchanged initially between the routers.
CAR's CPU principle: keep the control plane closer to the cloud; offload heavy computations to the cloud.
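A toy sketch of steps 2-3 (the prefixes and helper names are made up; real ORF is RFC 5291 machinery inside BGP, not a Python API): the router asks only for the prefixes its proxy selected as best paths, so the initial router-to-router exchange shrinks from the full table to the selected subset.

```python
# Toy sketch of Outbound Route Filtering (RFC 5291) in the CAR peering scenario.
# full_table: what the cloud proxies exchanged (step 1); in reality ~400K prefixes.
full_table = {
    "74.125.224.0/24": ["1951 7018 3356 10026 15169"],
    "10.0.0.0/8":      ["1951 7018 3356"],
    "192.0.2.0/24":    ["1951 7018"],
}

# The proxy runs best-path selection and tells the router which prefixes matter;
# here we simply pretend the first two were selected (in reality ~4K out of 400K).
selected_prefixes = {"74.125.224.0/24", "10.0.0.0/8"}   # ORF list (step 2)

def apply_orf(table, orf_list):
    """Step 3: only routes matching the ORF list are sent between the routers."""
    return {p: paths for p, paths in table.items() if p in orf_list}

initial_exchange = apply_orf(full_table, selected_prefixes)
print(len(initial_exchange), "of", len(full_table), "prefixes exchanged initially")
```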

21 CAR: A Sample BGP Peering Scenario
Without CAR, the 400K-prefix full-table exchange takes approx. 4-5 minutes at peak CPU utilization, even though only ~4K prefixes are selected as best paths. With CAR there is potential for a 5x speed-up and a 5x reduction of CPU load during BGP peer establishment.

22 CAR: Caching and Delegation
(Diagram: the router holds a partial FIB/RIB that serves the traffic, exploiting temporal and prefix continuity/spatiality; the cloud proxy holds the full FIB/RIB and sends regular updates and cache replacements to the router.)

23 CAR: Caching and Delegation
Traffic hits the partial FIB/RIB 99.9% of the time; on a miss (0.1%), there are two options:
– 1st option (caching): hold the traffic in large buffers (~150 ms) and resolve the next hop from the cloud proxy.
– 2nd option (delegation): reroute the traffic to the cloud proxy via tunnels.
["Revisiting Route Caching: The World Should Be Flat", PAM 2009]: one tenth of the prefixes account for 97% of the traffic; one fourth of the FIB can achieve a 0.1% miss rate with LRU cache replacement.
CAR's memory principle: keep the data plane closer to the router; keep packet forwarding operations at the router to the extent possible.
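A minimal sketch of the router-side FIB cache with LRU replacement and the two miss-handling options; exact-prefix keys are used for simplicity (a real FIB does longest-prefix match), and all names are illustrative.

```python
from collections import OrderedDict

class FibCache:
    """Partial FIB at the router; the full FIB lives at the cloud proxy."""
    def __init__(self, capacity, cloud_lookup):
        self.capacity = capacity
        self.cloud_lookup = cloud_lookup        # callback into the cloud proxy
        self.cache = OrderedDict()              # prefix -> next hop, in LRU order

    def next_hop(self, prefix, delegate=False):
        if prefix in self.cache:                # hit (~99.9% of packets)
            self.cache.move_to_end(prefix)
            return self.cache[prefix]
        # Miss (~0.1%): either buffer and resolve (option 1), or delegate the
        # packet itself to the cloud proxy via a tunnel (option 2).
        if delegate:
            return "tunnel-to-cloud-proxy"
        nh = self.cloud_lookup(prefix)          # resolve while the packet is buffered
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)      # evict the least-recently-used entry
        self.cache[prefix] = nh
        return nh

fib = FibCache(capacity=2, cloud_lookup=lambda p: "nexthop-for-" + p)
print(fib.next_hop("74.125.224.0/24"))          # miss, resolved from the cloud
print(fib.next_hop("74.125.224.0/24"))          # hit
```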

24 Problem 3: Bottleneck Resolution
Flows lasting a few minutes see effectively fixed network behavior and fixed bottlenecks.
BigData flows run over long time scales (several hours) and see dynamic network behavior and moving bottlenecks.
We need to respond to network dynamics and resolve bottlenecks as the BigData flows run!

25 Intra-Node Bottlenecks – Where is the Bottleneck?
(Diagram: a source end-system, a relay node, and a destination end-system across the Internet, each with its own NICs, disks, and CPUs.)
Multiple parallel streams with inter-node network optimizations, but ignoring intra-node bottlenecks, versus truly end-to-end multiple parallel streams with joint intra- and inter-node network optimizations.

26 Leverage Multi-Core CPUs for Parallelism?
Quality-of-Service (QoS) routing may help! But:
– it is NP-hard to configure optimally
– it causes route flaps
Multi-core CPUs are abundant – how can we leverage them in networking? [CCR'11]
Can we use them to parallelize the protocols: multiple instances of the same protocol, collaborating with each other, each instance working on a separate (overlay) part of the network – a divide-and-conquer? It should be done with minimal disruption.

27 Parallel Routing
(Diagram: a four-node network A-B-C-D with 5 and 10 Mb/s links, sliced into Substrate 1, Substrate 2, and Substrate 3, each with its own link weights for a separate routing instance.)

28 Parallel Routing
Nice! But there is a new complication: how to slice out the substrates?
(Diagram: the same A-B-C-D network with 5 and 10 Mb/s links sliced into three substrates; under this slicing, the A-C and B-D links are maxed out.)
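As a sketch of why the slicing matters (a deliberately naive strategy, not the talk's solution): splitting every link's capacity equally across the substrates already fixes how much each routing instance can push, so a poor split can max out links like A-C and B-D while capacity elsewhere sits idle.

```python
# Naive substrate slicing: divide every link's capacity equally among k substrates.
# Capacities loosely follow the slide's A-B-C-D example; the strategy is illustrative.
links = {("A", "B"): 10, ("A", "C"): 5, ("B", "D"): 5, ("C", "D"): 10}   # Mb/s

def slice_equally(links, k):
    """Each of the k parallel routing instances gets capacity/k on every link."""
    return [{link: cap / k for link, cap in links.items()} for _ in range(k)]

substrates = slice_equally(links, k=3)
for i, sub in enumerate(substrates, 1):
    print(f"Substrate {i}: {sub}")
# A smarter slicer would give more of A-C and B-D to the substrates that need them,
# instead of leaving those links maxed out while A-B and C-D capacity sits unused.
```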

29 Summary
Economic Granularity
– Finer, more flow-aware network architectures
– An idea: Contract-Switching, Contract Routing
Routing Scalability
– Cheaper solutions to routers' CPU and memory complexity
– An idea: CAR
Bottleneck Resolution
– Complex algorithms to better resolve bottlenecks and respond to network dynamics
– An idea: Parallel Routing

30 Thank you! Project website (Google "contract switching"): http://www.cse.unr.edu/~yuksem/contract-switching.htm THE END

31 Collaborators & Sponsors
Faculty
– Mona Hella (hellam@ecse.rpi.edu), Rensselaer Polytechnic Institute
– Nezih Pala (palan@fiu.edu), Florida International University
Students
– Abdullah Sevincer (asev@cse.unr.edu) (Ph.D.), UNR
– Behrooz Nakhkoob (nakhkb@rpi.edu) (Ph.D.), RPI
– Michelle Ramirez (beemyladybug1@yahoo.com) (B.S.), UNR
Alumnus
– Mehmet Bilgi (mbilgi@cse.unr.edu) (Ph.D.), UC Corp.
Acknowledgments: This work was supported by the U.S. National Science Foundation under awards 0721452 and 0721612 and by DARPA under contract W31P4Q-08-C-0080.

32 Computational Scenario
(Diagram: cloud-assisted BGP routers and their cloud proxy routers, each with its own peers, connected across the Internet. 1) Full table exchange between the proxies; 2) outbound route filter exchange between routers and proxies; 3) partial table exchange between the routers.)

33 Delegation Scenario
(Diagram: a cloud-assisted router with a FIB cache peers in an IXP; its cloud proxy router holds the full FIB. Unresolved traffic is delegated to the proxy across the Internet, and the proxy sends cache updates back.)

34 Delegation Scenario
(Diagram: experiment setup with a CAR Click router, a proxy Click router, a traffic generator, and traffic sink nodes, spanning Emulab (Utah) and EC2 (N. Virginia) over IP GRE tunnels.)

35 Delegation Scenario
Cloud-Assisted Click Router
– packet counters for flows forwarded to the cloud and for received packets
– prefix-based miss ratio
– modified radix-trie cache for the forwarding table
– router controller: processes cache updates, clock-based cache-replacement vector
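A minimal sketch of clock-based (second-chance) replacement for a prefix cache; the real implementation sits inside a modified radix trie in Click, and the class and field names here are illustrative.

```python
class ClockCache:
    """Second-chance ('clock') replacement: each slot has a reference bit;
    the hand clears referenced slots and evicts the first unreferenced one."""
    def __init__(self, capacity):
        self.slots = [None] * capacity          # (prefix, next_hop) or None
        self.ref = [False] * capacity           # the clock's reference-bit vector
        self.index = {}                         # prefix -> slot number
        self.hand = 0

    def lookup(self, prefix):
        slot = self.index.get(prefix)
        if slot is None:
            return None                         # miss: resolve via the cloud proxy
        self.ref[slot] = True                   # mark the entry as recently used
        return self.slots[slot][1]

    def insert(self, prefix, next_hop):
        if prefix in self.index:                # refresh an existing entry in place
            slot = self.index[prefix]
            self.slots[slot] = (prefix, next_hop)
            self.ref[slot] = True
            return
        while self.slots[self.hand] is not None and self.ref[self.hand]:
            self.ref[self.hand] = False         # give the entry a second chance
            self.hand = (self.hand + 1) % len(self.slots)
        victim = self.slots[self.hand]
        if victim is not None:
            del self.index[victim[0]]
        self.slots[self.hand] = (prefix, next_hop)
        self.ref[self.hand] = False             # new entries start unreferenced
        self.index[prefix] = self.hand
        self.hand = (self.hand + 1) % len(self.slots)

cache = ClockCache(capacity=2)
cache.insert("74.125.224.0/24", "peer-1")
cache.insert("10.0.0.0/8", "peer-2")
print(cache.lookup("74.125.224.0/24"))          # 'peer-1' (sets its reference bit)
cache.insert("192.0.2.0/24", "peer-3")          # evicts the unreferenced 10.0.0.0/8
print(cache.lookup("10.0.0.0/8"))               # None (miss)
print(cache.lookup("74.125.224.0/24"))          # still 'peer-1': it got a second chance
```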

36 Simulation Results
Random topology
– inter-domain and intra-domain are random
BRITE topology
– BRITE model for inter-domain
– Rocketfuel topologies (Abilene and GEANT) for intra-domain
GT-ITM topology
– GT-ITM model for inter-domain
– Rocketfuel topologies (Abilene and GEANT) for intra-domain

37 Forwarding Mechanisms
bTTL: how many copies of the discovery packet will be made and forwarded; provides a cap on messaging cost.
dTTL: time-to-live, a hop-count limit.
MAXFWD: the maximum number of neighbors a discovery packet is forwarded to.
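One plausible reading of how the three knobs interact (the forwarding logic below is my sketch, not PVCR's specification): dTTL bounds hops, MAXFWD bounds the fan-out per node, and bTTL is a copy budget split among the chosen neighbors.

```python
# Hypothetical AS-level adjacency used only for this sketch.
neighbors = {"A": ["B", "C", "D"], "B": ["E"], "C": ["E", "F"],
             "D": [], "E": [], "F": []}

def forward_discovery(node, dttl, bttl, maxfwd, visited=None):
    """Returns how many discovery-packet copies get created.
    dTTL: hop-count limit; bTTL: budget of copies; MAXFWD: per-node fan-out cap."""
    visited = visited if visited is not None else {node}
    if dttl == 0 or bttl <= 0:
        return 0
    nexts = [n for n in neighbors[node] if n not in visited][:maxfwd]
    nexts = nexts[:bttl]                 # never make more copies than the budget allows
    share = bttl // max(len(nexts), 1)   # split the remaining budget among the copies
    copies = len(nexts)
    for n in nexts:
        visited.add(n)                   # simplified loop/duplicate prevention
        copies += forward_discovery(n, dttl - 1, share - 1, maxfwd, visited)
    return copies

print(forward_discovery("A", dttl=3, bttl=8, maxfwd=2))   # 4 copies, within the budget of 8
```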

38 Evaluation
CAIDA AS-level Internet topology as of January 2010 (33,508 ISPs).
Trials over 10,000 (src, dest) ISP pairs, repeated 101 times.
Various ISP cooperation/participation and packet filtering levels:
– NL: no local information used
– L: local information used (with various filtering levels)
No directional or policy improvements are applied, giving base-case (worst) performance.

39 Results – Diversity. Tens of paths are discovered, favoring multi-path routing and reliability schemes.

40 Results – Path Stretch

41 Results – Messaging Cost. The number of discovery-packet copies stays well below the theoretical bounds, thanks to path-vector loop prevention.

42 Results – Transmission Cost

43 Results – Reliability

44 Many Possibilities
Intra-cloud optimizations among routers receiving the CAR service
– Data plane: forwarding can be done in the cloud
– Control plane: peering exchanges and routing updates can be done in the cloud
Per-AS optimizations
– Data plane: packets do not have to go back to the physical router until the egress point
– Control plane: iBGP exchanges

45 Some Interesting Analogies?
High cloud-router delay
– a CAR miss at the router resembles a page fault
– delegation is preferable: forward the packet to the cloud proxy
Low cloud-router delay
– a CAR miss at the router resembles a cache miss
– caching (i.e., immediate resolution) is preferable: buffer the packet at the router and wait until the miss is resolved via the full router state at the cloud proxy
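The analogy suggests a simple policy sketch (the threshold and names are illustrative; the 150 ms default echoes the buffer size from slide 23): compare the router-to-cloud round-trip time with how long a miss can sit in the router's buffers.

```python
def miss_policy(cloud_rtt_ms, buffer_headroom_ms=150):
    """Pick how to handle a CAR miss, following the page-fault / cache-miss analogy:
    if resolving from the cloud fits within the buffering headroom, buffer and wait
    (cache-miss style); otherwise push the packet itself to the proxy (page-fault style)."""
    if cloud_rtt_ms <= buffer_headroom_ms:
        return "cache: buffer the packet, resolve the next hop from the cloud proxy"
    return "delegate: forward the packet to the cloud proxy via a tunnel"

print(miss_policy(cloud_rtt_ms=40))    # low delay: caching preferable
print(miss_policy(cloud_rtt_ms=300))   # high delay: delegation preferable
```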

46 Intra-Node Bottlenecks – Where is the bottleneck?
(Diagram: an SFO end-system with Disk 0, Disk 1, CPU 0, CPU 1, NIC 0, and NIC 1 sends file-to-NYC.dat and file-to-Miami.dat over the Internet toward NYC and Miami; the access links are 1 Gb/s, while internal component links are 100 Mb/s and 50 Mb/s.)

47 Intra-Node Bottlenecks – Where is the bottleneck?
(Diagram: the inter-node topology without intra-node visibility.)
The network's routing algorithm finds the shortest paths to NYC and Miami, with NIC 0 and NIC 1 as the respective exit points. However, the intra-node topology limits the effective transfer rates.

48 Intra-Node Bottlenecks – Where is the bottleneck?
(Diagram: the integrated topology with the intra-node topology visible.)
When the intra-node topology is included in the routing algorithm's shortest-path calculation, it becomes possible to find better end-to-end combinations of flows and a higher aggregate rate.
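A small sketch of the slide's point using a widest-path (maximum-bottleneck) computation over a graph that includes the intra-node components; the capacities are hypothetical stand-ins for the figure's numbers.

```python
import heapq

# Hypothetical integrated topology: intra-node links (disk/NIC) plus network links.
cap = {
    ("disk0", "nic0"): 100, ("disk0", "nic1"): 50,     # intra-node links (Mb/s)
    ("disk1", "nic0"): 50,  ("disk1", "nic1"): 75,
    ("nic0", "NYC"): 1000,  ("nic1", "Miami"): 1000,   # access/network links
}
adj = {}
for (u, v), c in cap.items():
    adj.setdefault(u, []).append((v, c))

def widest_path(src, dst):
    """Maximum-bottleneck path: Dijkstra variant maximizing the minimum link capacity."""
    best = {src: float("inf")}
    pq = [(-float("inf"), src, [src])]
    while pq:
        neg_bw, node, path = heapq.heappop(pq)
        if node == dst:
            return -neg_bw, path
        for nxt, c in adj.get(node, []):
            bw = min(-neg_bw, c)
            if bw > best.get(nxt, 0):
                best[nxt] = bw
                heapq.heappush(pq, (-bw, nxt, path + [nxt]))
    return 0, []

# With intra-node links visible, each file can pick the disk/NIC pairing with the
# widest end-to-end bottleneck instead of just the shortest network exit.
print(widest_path("disk0", "NYC"))     # (100, ['disk0', 'nic0', 'NYC'])
print(widest_path("disk1", "Miami"))   # (75, ['disk1', 'nic1', 'Miami'])
```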

49 CAR: An Architectural View
Related directions: Routing as a Service (e.g., RCP); managing routers from the cloud (e.g., NEBULA, Cisco CSR); separation of control and forwarding planes (e.g., OpenFlow); parallel architectures (e.g., RouteBricks); clustered commodity hardware (e.g., Trellis, Google); specialized ASICs (e.g., Cisco).
The transition from the current state of routing to a cloud-integrated next-generation Future Internet will be long; Cloud-Assisted Routing (CAR) is a middle ground to realize the architectural transition.

50 Technical Shortcomings
(Diagram: ISP A and ISP B connected at two peering points, 1 and 2; the same AS-path B-C-D is seen with 45 ms delay via one point and 35 ms via the other.)

51 Technical Shortcomings
(Diagram: ISPs A, B, C, D, and E; AS-path B-C-D offers 35 ms delay while the alternative AS-path B-E-D offers 25 ms.)

52 CAR: Caching and Delegation

