
1 Green Networking Jennifer Rexford Computer Science Department Princeton University

2 Router Energy Consumption

3 Internet Infrastructure (figure: routers connected by links)

4 Router Energy Consumption
Millions of routers in the U.S.
– Several terawatt-hours per year
– $2B/year electric bill
(Figure annotations: line cards draw ~100 W; 200-400 W. Source: National Technical Information Service, Department of Commerce, 2000. Figures for 2005 & 2010 are projections.)

5 Opportunities to Save Energy
– Networks over-provisioned with extra capacity
– Diurnal shifts in traffic due to user behavior

6 Powering Down the Network
Equipment is not energy proportional
– Energy draw is nearly independent of load
Turning off parts of the network
– Entire router
– Individual interface card
While avoiding transient disruptions
– Data traffic relies on the underlying network
– Failures lead to transient packet loss and delay
Shut down routers and interfaces without disruptions
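The non-proportionality point is easiest to see with toy numbers (assumed for illustration, not figures from the talk): when a router's draw is dominated by a fixed idle cost, reducing the traffic saves almost nothing, while powering the router off saves the entire draw.

```python
# Toy power model (all numbers are assumed, for illustration only):
# a router draws a large fixed idle power plus a small load-dependent term.
IDLE_W = 750.0      # hypothetical chassis draw at zero load
PER_LOAD_W = 50.0   # hypothetical extra draw at full load

def power_draw(load):
    """Watts drawn at a given load fraction in [0, 1]."""
    return IDLE_W + PER_LOAD_W * load

# Halving the traffic barely helps; powering off saves the whole draw.
saved_by_half_load = power_draw(1.0) - power_draw(0.5)   # only 25 W
saved_by_power_off = power_draw(0.3) - 0.0               # 765 W
```

This is why the talk targets shutting equipment off rather than merely reducing its load.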

7 Brief Background on Routers

8 Router Architecture (figure: line cards connected through a switching fabric form the data plane; a processor runs the control plane)

9 Data, Control, and Management

             Data                    Control                 Management
Time-scale   Packet (nsec)           Event (10 msec to sec)  Human (min to hours)
Tasks        Forwarding, buffering,  Routing, signaling      Analysis, configuration
             filtering, scheduling
Location     Line-card hardware      Router software         Humans or scripts

10 Data Plane: Router Line Cards
Interfacing
– Physical link
– Switching fabric
Packet handling
– Packet forwarding (lookup)
– Decrement time-to-live
– Buffer management
– Link scheduling
– Packet filtering
– Rate limiting
(figure: receive and transmit paths between the link and the switch fabric)
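A minimal sketch of the two packet-handling steps named above, longest-prefix-match forwarding and TTL decrement; the table contents and interface names are invented for illustration, and real line cards do this in hardware:

```python
import ipaddress

# Hypothetical forwarding table: most-specific matching prefix wins.
FIB = {
    ipaddress.ip_network("10.0.0.0/8"): "eth1",
    ipaddress.ip_network("10.1.0.0/16"): "eth2",  # more specific than /8
}

def forward(dst: str, ttl: int):
    """Return (outgoing interface, new TTL), or (None, 0) on TTL expiry."""
    if ttl <= 1:
        return None, 0          # a real router would send ICMP Time Exceeded
    addr = ipaddress.ip_address(dst)
    matches = [net for net in FIB if addr in net]
    if not matches:
        return None, ttl - 1    # no route and no default: drop
    # Longest-prefix match: choose the matching entry with the longest mask.
    best = max(matches, key=lambda n: n.prefixlen)
    return FIB[best], ttl - 1
```

For example, a packet to 10.1.2.3 matches both prefixes but leaves on eth2, the /16's interface.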

11 Control Plane: Routing Protocols
Routing protocol
– Routers talk amongst themselves to compute paths through the network
Routing convergence
– After a topology change, a transient period of disagreement
– Packets lost, delayed, or delivered out-of-order
– Major disruptions to application performance

12 The Rest of the Talk: Two Ideas
Power down networking equipment
– To reduce energy consumption
– While minimizing disruption to applications
Power down a router
– Virtual router migration, similar to virtual machine migration
Power down an interface
– Shutting down cables in a bundled link, similar to dynamic voltage and frequency scaling

13 VROOM: Virtual ROuters On the Move Joint work with Yi Wang, Eric Keller, Brian Biskeborn, and Kobus van der Merwe (AT&T) http://www.cs.princeton.edu/~jrex/papers/vroom08.pdf

14 Virtual ROuters On the Move
Key idea
– Routers should be free to roam around
Useful for many different applications
– Reduce power consumption
– Simplify network maintenance
– Simplify service deployment and evolution
Feasible in practice
– No performance impact on data traffic
– No visible impact on routing protocols

15 The Two Notions of “Router”: the IP-layer logical functionality, and the physical equipment (figure: logical IP-layer view above the physical view)

16 Tight Coupling of Physical & Logical: the root of many network-management challenges (and “point solutions”)

17 VROOM: Breaking the Coupling: re-mapping a logical node to another physical node. VROOM enables this re-mapping of logical to physical through virtual router migration.

18 Case 1: Power Savings
Contract and expand the physical network according to the traffic volume

21 Case 2: Planned Maintenance
No reconfiguration of VRs, no reconvergence
(figure: virtual router VR-1 migrates from physical node A to node B)

24 Case 3: Service Deployment/Evolution
– Move the (logical) router to more powerful hardware
– VROOM guarantees seamless service to existing customers during the migration

26 Virtual Router Migration: Challenges
1. Migrate an entire virtual router instance – all control-plane and data-plane processes and state
2. Minimize disruption – data plane: millions of packets/sec on a 10 Gbps link; control plane: less strict (routing messages are retransmitted)
3. Link migration

30 VROOM Architecture (figure: dynamic interface binding on top of a data-plane hypervisor)

31 VROOM’s Migration Process
Key idea: separate the migration of the control and data planes
1. Migrate the control plane
2. Clone the data plane
3. Migrate the links

32 Control-Plane Migration
Leverage virtual server migration techniques
– Router image: binaries, configuration files, etc.
– Memory: 1st stage, iterative pre-copy; 2nd stage, stall-and-copy (while the control plane is “frozen”)
(figure: the control plane moves from physical router A to physical router B; the data plane stays on A)
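The two-stage memory copy described above can be sketched schematically (a toy model, assuming dict-based “pages” and a caller-supplied dirty-page tracker; real systems such as OpenVZ operate on raw memory pages):

```python
def migrate_memory(source_pages, get_dirty, freeze, max_rounds=30, threshold=8):
    """Two-stage copy: iterative pre-copy while the router keeps running,
    then stall-and-copy once the control plane is frozen."""
    dest = dict(source_pages)              # round 0: copy all pages
    dirty = []
    for _ in range(max_rounds):
        dirty = get_dirty()                # pages written since the last call
        if len(dirty) <= threshold:        # dirty set small enough: stop iterating
            break
        for p in dirty:
            dest[p] = source_pages[p]      # re-copy pages dirtied meanwhile
    freeze()                               # 2nd stage: control plane "frozen"
    for p in set(dirty) | set(get_dirty()):
        dest[p] = source_pages[p]          # copy the final dirty set
    return dest
```

The pre-copy rounds shrink the dirty set so that the freeze (and hence the control-plane downtime) is short.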

35 Data-Plane Cloning
Clone the data plane by repopulation
– Enable migration across different data planes
– Avoid copying duplicate information
(figure: the control plane, now on physical router B, builds DP-new while DP-old remains on physical router A)
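The idea of cloning by repopulation is that the forwarding table is derivable from control-plane state, so it need not be copied byte-for-byte; a toy sketch, with structures and field names assumed for illustration:

```python
def repopulate_fib(rib):
    """rib: {prefix: [(next_hop, preference), ...]} from the control plane.
    Build a fresh FIB holding only each prefix's best next hop, so the
    target router can use entirely different forwarding hardware."""
    fib = {}
    for prefix, candidates in rib.items():
        # Best route = lowest preference value (e.g., administrative distance).
        best_hop, _ = min(candidates, key=lambda c: c[1])
        fib[prefix] = best_hop
    return fib
```

Because only the routes (not the derived FIB state) drive the new data plane, no duplicate information crosses the wire.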

36 Remote Control Plane
Data-plane cloning takes time
– Installing 250k routes takes over 20 seconds
The control plane and the old data plane need to be kept “online”
Solution: redirect routing messages through tunnels
(figure: control plane on physical router B, old data plane on physical router A, new data plane being populated on B)

39 Double Data Planes
At the end of data-plane cloning, both data planes are ready to forward traffic

40 Asynchronous Link Migration
With the double data planes, links can be migrated independently (figure: links move one at a time from node A to node B)

41 Prototype Implementation
– Virtualized operating system: OpenVZ, which supports VM migration
– Routing protocols: Quagga software suite
– Packet forwarding: Linux kernel (software), NetFPGA (hardware)
– Router hypervisor: our extensions for repopulating the data plane, the remote control plane, double data planes, …

42 Experimental Evaluation
Experiments in Emulab on a realistic Abilene (Internet2) topology

43 Experimental Results
Data traffic
– Linux: modest packet delay due to CPU load
– NetFPGA: no packet loss or extra delay
Routing-protocol messages
– Core router migration (OSPF only): inject an unplanned link failure at another router; at most one retransmission of an OSPF message
– Edge router migration (OSPF + BGP): control-plane downtime of 3.56 seconds, within reasonable keep-alive timer intervals; all routing-protocol adjacencies stay up

44 Where To Migrate
Physical constraints
– Latency (e.g., NYC to Washington, D.C.: 2 msec)
– Link capacity: enough remaining capacity for the extra traffic
– Platform compatibility: routers from different vendors
– Router capability: e.g., number of access control lists (ACLs) supported
Constraints simplify the placement problem by limiting the size of the search space

45 Conclusions on VROOM
VROOM is a useful network-management primitive
– Breaks the tight coupling between the physical and the logical
– Simplifies management, enables new applications
Evaluation of the prototype
– No disruption in packet forwarding
– No noticeable disruption in routing protocols
Future work
– Migration scheduling as an optimization problem
– Extensions to the hypervisor for other applications

46 Greening Backbone Networks: Shutting Off Cables in Bundled Links
Joint work with Will Fisher and Martin Suchara
http://www.cs.princeton.edu/~msuchara/publications/GreenNetsBundles.pdf

47 Power Down Links and Routers?
– Larger round-trip time (RTT)
– Slow convergence process

48 Bundled Links in Backbone Networks
Links come in bundles
– Incremental upgrades, equipment costs, …
– Around 2-20 cables per link

49 Powering All Cables is Wasteful
Only power the cables that are needed
– Reduce energy consumption, without disruption
(figure annotation: typical links run at 30-40% utilization)

50 Optimization Problem
Management-plane optimization problem
– Input: network configuration and load
– Output: list of powered cables
Integer linear program; NP-hard, so heuristics are needed
– Minimize the number of powered cables
– Subject to: link loads ≤ capacities; flow conservation; all traffic demands carried
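Spelled out with assumed notation (the symbols below are not on the slide): let n_ij be the number of powered cables on bundle (i,j), c the per-cable capacity, B_ij the bundle size, f_ij^d the flow of demand d on link (i,j), and σ_i^d the net supply of demand d at node i. The integer program then reads:

```latex
\begin{aligned}
\min \quad & \sum_{(i,j) \in E} n_{ij}
  && \text{(number of powered cables)} \\
\text{s.t.} \quad & \sum_{d \in D} f_{ij}^{d} \le c \, n_{ij}
  && \forall (i,j) \in E \quad \text{(link load} \le \text{powered capacity)} \\
& \sum_{j} f_{ij}^{d} - \sum_{j} f_{ji}^{d} = \sigma_{i}^{d}
  && \forall i \in V,\ d \in D \quad \text{(flow conservation, all demands carried)} \\
& n_{ij} \in \{0, 1, \dots, B_{ij}\}
  && \forall (i,j) \in E \quad \text{(cables available in the bundle)}
\end{aligned}
```

The integrality of n_ij is what makes the problem NP-hard; relaxing it yields the fractional problem of the next slide.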

51 Related Tractable Problem
What if energy were proportional to link load?
Minimize the sum of link loads
– Rather than the number of powered cables
– Leads to a fractional linear program
Benefits of this problem
– Computationally tractable
– Gives upper and lower bounds on the power savings
– Starting point for heuristics

52 First Attempt: Naïve Solution
Always “round up” the fractional solution
– Can be up to n times worse than optimal, where n = number of routers

53 Fast Greedy Heuristic
1. Solve the fractional problem and “round up”
2. Identify the link with the most “rounding up”
3. Round down and remove an extra cable
4. Repeat while a feasible solution exists
Other heuristics: explore combinations of links
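The rounding loop of the steps above can be sketched as follows, assuming the fractional cable counts (from the relaxed LP) and a feasibility oracle are given; names are illustrative, and the real heuristic would re-check routing feasibility with an LP at each step:

```python
import math

def fast_greedy(fractional, feasible):
    """fractional: {link: fractional cable count from the relaxed LP}
    feasible: callback testing whether an integer cable assignment
    can still carry all traffic. Returns powered cable counts."""
    powered = {l: math.ceil(x) for l, x in fractional.items()}  # round up
    while True:
        # Links ordered by how much slack rounding up added ("most rounding up").
        candidates = sorted(
            (l for l in powered if powered[l] > 0),
            key=lambda l: powered[l] - fractional[l],
            reverse=True,
        )
        for l in candidates:
            trial = dict(powered)
            trial[l] -= 1                  # try powering down one more cable
            if feasible(trial):
                powered = trial            # keep the removal and repeat
                break
        else:
            return powered                 # no single removal stays feasible
```

For instance, with per-cable capacity 10 and a 35-unit demand across two bundles, the heuristic trims the rounded-up solution until no further cable can be shed.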

54 Experimental Set-Up
Measure: energy savings and computation time
Solving the linear program: AMPL/CPLEX
Varying: offered load and number of cables
Topologies
– Abilene with measured demands
– Waxman graph with synthetic demands

55 Energy Savings in Abilene
Energy savings depend on the bundle size
(figure: energy savings (%) vs. bundle size; the heuristics perform similarly, and all beat turning an entire link on or off)

56 Computation Time
FGH is suited to real-time computation
– Reoptimize the on/off cables during the day
– Other heuristics are expensive for only a small gain

57 Conclusion on Bundled Links
Power down some cables in a bundle
– Minimize energy consumption
– Without disrupting data traffic
Design and evaluation of heuristics
– Significant energy savings
– Low computational complexity
– Simple heuristics are quite effective

58 Conclusion of the Talk
Network energy consumption
– Routers consume a lot of energy
– Routers are not energy proportional
– Selectively powering down equipment is effective
Two main ideas
– New mechanism: virtual router migration
– New optimization: identifying which cables to power down
Future work
– Toward energy-proportional routers
– Network designs that minimize server energy

