Presentation on theme: "SmartRE: An Architecture for Coordinated Network-Wide Redundancy Elimination Ashok Anand, Vyas Sekar, Aditya Akella University of Wisconsin, Madison Carnegie."— Presentation transcript:

1 SmartRE: An Architecture for Coordinated Network-Wide Redundancy Elimination
Ashok Anand, Vyas Sekar, Aditya Akella
University of Wisconsin-Madison / Carnegie Mellon University

2 Redundancy Elimination (RE) for Increasing Network Capacity
Enterprises, mobile users, home users; web content; data centers
Today's point solutions: HTTP caches, WAN optimizers, other services (backup, dedup/archival)
RE: Leverage repeated transmissions
Many "narrow" solutions to improve performance!
Can we generalize this transparently? Benefit both users and ISPs?

3 In-Network RE as a Service
Routers keep a cache of recent pkts
New packets get "encoded" or "compressed" w.r.t. cached pkts
Encoded pkts are "decoded" or "uncompressed" downstream
Key Issues:
1. Performance: Minimize traffic footprint ("byte-hops")
2. Cache Capacity: Can only provision finite DRAM
3. Processing Constraints: Enc/Dec are memory-access limited
RE as an IP-layer service: Generalizes "narrow" deployments
Transparent to users/apps: Democratizes benefits of RE
Benefits ISPs: Better TE / lower load

4 In-Network RE as a Service: Hop-by-Hop (Anand et al., SIGCOMM '08)
Performance (leverage all RE) ✔; Cache constraints ✖; Processing constraints ✖
Same packet encoded and decoded many times; same packet cached many times
Hop-by-hop RE is limited by the encoding bottleneck:
Encoding: ~15 mem. accesses → ~2.5 Gbps (@ 50 ns DRAM)
Decoding: ~3-4 accesses → >10 Gbps (@ 50 ns DRAM)
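The slide's Gbps figures follow from the memory-access counts. A quick sketch of the arithmetic: the slide gives only the access counts, the DRAM latency, and the resulting throughputs, so the average packet size used below (~235 bytes) is an assumed value chosen to make the numbers concrete.

```python
# Back-of-the-envelope throughput for DRAM-limited RE.
DRAM_NS = 50             # ns per DRAM access (from the slide)
AVG_PKT_BITS = 235 * 8   # assumed average packet size (not on the slide)

def throughput_gbps(mem_accesses_per_pkt):
    """Packets/sec limited by memory accesses, converted to Gbps."""
    ns_per_pkt = mem_accesses_per_pkt * DRAM_NS
    pkts_per_sec = 1e9 / ns_per_pkt
    return pkts_per_sec * AVG_PKT_BITS / 1e9

encode = throughput_gbps(15)   # ~15 accesses per encoded packet
decode = throughput_gbps(3)    # ~3-4 accesses per decoded packet
```

With these numbers, encoding lands near the slide's ~2.5 Gbps while decoding clears 10 Gbps, which is why encoders are the bottleneck.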

5 In-Network RE as a Service: At the Edge
Performance (leverage all RE) ✖; Cache constraints ✔; Processing constraints ✔
Can leverage intra-path RE; cannot leverage inter-path RE
Doesn't help ISPs (e.g., traffic engineering)

6 Motivating Question: How can we practically leverage the benefits of network-wide RE optimally?

                                Edge    Hop-by-Hop
Performance (leverage all RE)    ✖         ✔
Cache constraints                ✔         ✖
Processing constraints           ✔         ✖

7 Outline
Background and Motivation
High-level Idea
Design and Implementation
Evaluation

8 SmartRE: High-Level Idea
Don't look at one link at a time; treat RE as a network-wide problem
– Cache Constraints: "Coordinated Caches" – each packet is cached only once downstream
– Processing Constraints: Encode @ ingress, decode @ interior/egress; decode can occur multiple hops after the encoder
– Performance: Network-wide optimization accounting for traffic, routing, constraints, etc.
SmartRE: Coordinated network-wide RE

9 Cache Constraints Example
Packet arrivals: A, B, A, B
Ingress can store 2 pkts; each interior router can store 1 pkt
After the 2nd pkt, the ingress holds {A, B}, but each size-1 interior cache keeps churning between A and B
Total RE savings in network footprint ("byte-hops")?
RE on the first link only; no RE in the interior: 2 * 1 = 2
Can we do better than this?

10 Cache Constraints Example: Coordinated Caching
Packet arrivals: A, B, A, B
Ingress can store 2 pkts; each interior router can store 1 pkt
Coordinated: one interior router caches only A, another caches only B
RE for pkt A saves 2 hops; RE for pkt B saves 3 hops
Total RE savings in network footprint ("byte-hops"): 1 * 2 + 1 * 3 = 5
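The two cache examples above can be replayed in a few lines. This is a toy simulation, not SmartRE's algorithm: the path layout (A decoded 2 links downstream, B decoded 3) is an assumption read off the slide's figure to reproduce its arithmetic.

```python
def hop_by_hop(arrivals):
    # Only the ingress cache (size 2) ever holds both A and B; each
    # size-1 interior cache always holds the wrong packet for an
    # alternating stream, so repeats are eliminated on link 1 only.
    ingress_cache, savings = [], 0
    for pkt in arrivals:
        if pkt in ingress_cache:
            savings += 1          # 1 byte-hop saved (first link)
        else:
            ingress_cache.append(pkt)
    return savings

def coordinated(arrivals):
    # Interior routers split the caching: A is held 2 links in, B is
    # held 3 links in, so each repeat is decoded at its cache point,
    # saving all upstream links. (Assumed placement, per the figure.)
    hops_saved = {"A": 2, "B": 3}
    seen, savings = set(), 0
    for pkt in arrivals:
        if pkt in seen:
            savings += hops_saved[pkt]
        else:
            seen.add(pkt)
    return savings

pkts = ["A", "B", "A", "B"]
```

Running both on the A, B, A, B stream gives 2 byte-hops for hop-by-hop versus 5 for coordinated caching, matching the slides.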

11 Processing Constraints Example
4 mem ops for Enc; 2 mem ops for Dec; 20 mem ops/s budget per node
Hop-by-hop: each node runs at 5 Enc/s and 5 Dec/s
Total RE savings in network footprint ("byte-hops"): 5 * 6 = 30 units/s
Note that even though decoders can do more work, they are limited by encoders
Can we do better than this?

12 Processing Constraints Example: Smarter Approach
4 mem ops for Enc; 2 mem ops for Dec; 20 mem ops/s budget per node
Decode @ core and @ edge: 5 Enc/s at the ingress, 10 Dec/s at a core node, 5 Dec/s at an edge node
Total RE savings in network footprint ("byte-hops"): 10 * 3 + 5 * 2 = 40 units/s
Many nodes are idle, yet it still does better!
Good for partial deployment also
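The processing arithmetic on these two slides, made explicit. A node with a 20 mem-ops/s budget can do at most 5 encodes/s (4 ops each) or 10 decodes/s (2 ops each); savings are (packets/s decoded) × (links spanned between encoder and decoder). The link counts below are inferred from the slide's totals, so treat the topology as an assumption.

```python
BUDGET, ENC_COST, DEC_COST = 20, 4, 2   # from the slide

def max_rate(op_cost, budget=BUDGET):
    """Max operations/s a node can sustain within its memory budget."""
    return budget // op_cost

# Hop-by-hop: encoding repeats at every hop, so the 5 enc/s bottleneck
# caps each of the (assumed) 6 links at 5 pkts/s of RE.
hop_by_hop_savings = max_rate(ENC_COST) * 6          # 30 units/s

# SmartRE: ingress encodes once; a core node spends its whole budget
# decoding (10/s, 3 links downstream) and an edge node decodes 5/s,
# 2 links downstream.
smartre_savings = max_rate(DEC_COST) * 3 + 5 * 2     # 40 units/s
```

The asymmetry in costs is the whole point: one expensive encode can feed several cheap decodes placed where they save the most hops.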

13 Outline
Background and Motivation
High-level Idea
Design and Implementation
Evaluation

14 SmartRE Overview
Network-wide optimization @ NOC
"Encoding configs" to ingresses; "decoding configs" to interiors

15 Ingress/Encoder Operation
Check if this packet needs to be cached (per the encoding config)
Identify candidate packets to encode
Find "compressible" regions w.r.t. cached packets (Spring & Wetherall SIGCOMM '00; Anand et al. SIGCOMM '08)
Shim carries Info(matched pkt), MatchRegionSpec

16 Interior/Decoder Operation
Check if this packet needs to be cached (per the decoding config)
Reconstruct "compressed" regions using reference packets
Shim carries Info(matched pkt), MatchRegionSpec
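The encode/decode path of the last two slides can be sketched end to end. Real RE matches byte regions via fingerprints (Spring & Wetherall '00); here a whole-payload match stands in for region matching, and the shim simply carries (matched packet id, region spec) as on the slides. All class and field names are illustrative, not SmartRE's actual wire format.

```python
class Encoder:
    def __init__(self):
        self.cache = {}   # pkt_id -> payload (packet store)
        self.index = {}   # payload hash -> pkt_id (match lookup)

    def process(self, pkt_id, payload):
        match = self.index.get(hash(payload))
        if match is not None:
            # Shim: reference to the cached packet + matched region
            return ("encoded", match, (0, len(payload)))
        self.cache[pkt_id] = payload
        self.index[hash(payload)] = pkt_id
        return ("raw", pkt_id, payload)

class Decoder:
    def __init__(self):
        self.cache = {}   # reference packets, per the decoding config

    def process(self, msg):
        if msg[0] == "raw":
            _, pkt_id, payload = msg
            self.cache[pkt_id] = payload
            return payload
        _, ref, (start, end) = msg
        return self.cache[ref][start:end]   # reconstruct from reference

enc, dec = Encoder(), Decoder()
out1 = dec.process(enc.process(1, b"hello world"))
out2 = dec.process(enc.process(2, b"hello world"))  # redundant copy
```

The second packet crosses the network as a small shim and is reconstructed byte-for-byte at the decoder.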

17 Design Components
– How do we specify coordinated caching responsibilities?
– Correctness: How do ingresses and interior nodes maintain cache consistency?
– How do ingresses identify candidate packets for encoding?
– What does the optimization entail?

18 How do we "coordinate" caching responsibilities across routers?
Non-overlapping hash ranges per path avoid redundant caching! (from cSamp, NSDI '08)
Example: routers on a path are assigned ranges like [0.1,0.4] and [0.7,0.9]
1. Hash(pkt.header)
2. Get path info for pkt
3. Cache if hash is in this router's range for that path
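The three-step rule on the slide is easy to sketch. The hash function and the concrete ranges below are illustrative stand-ins; the property that matters is that each path's ranges are non-overlapping, so any given packet is cached by at most one router on its path.

```python
import hashlib

def pkt_hash(header: bytes) -> float:
    """Map a packet header deterministically to [0, 1)."""
    d = hashlib.sha1(header).digest()
    return int.from_bytes(d[:8], "big") / 2**64

# ranges[path][router] = (lo, hi); non-overlapping within each path
ranges = {
    "red":   {"R1": (0.1, 0.4), "R2": (0.7, 0.9)},
    "black": {"R1": (0.0, 0.1), "R2": (0.1, 0.3)},
}

def should_cache(router, path, header):
    # Steps 1-3 from the slide: hash the header, look up the range
    # assigned to this (router, path) pair, cache on a hit.
    lo, hi = ranges[path].get(router, (0.0, 0.0))
    return lo <= pkt_hash(header) < hi
```

Because the ranges within a path never overlap, the "cached only once downstream" invariant falls out of the assignment rather than requiring any per-packet coordination.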

19 Design Components
– How do we specify coordinated caching responsibilities?
– Correctness: How do ingresses and interior nodes maintain cache consistency?
– How do ingresses identify candidate packets for encoding?
– What does the optimization entail?

20 What does the "optimization" entail?
Network-wide optimization: a linear program
Inputs:
– Traffic patterns: traffic matrix, redundancy profile (intra + inter)
– Topology: routing matrix, topology map
– Router constraints: processing (mem accesses), cache size
Objective: Maximize footprint reduction (byte-hops), or any ISP objective (e.g., TE)
Output: Encoding manifests and decoding manifests (Path, HashRange)
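To make the LP's shape concrete, here is a tiny two-path instance solved by brute-force grid search rather than an LP solver, so the structure of the objective and constraints is visible without extra dependencies. The variables are per-(path, router) caching fractions (the hash-range sizes); traffic volumes, hop counts, and capacities below are invented for illustration.

```python
from itertools import product

paths = {"P1": ["R1", "R2"], "P2": ["R1", "R3"]}
hops_saved = {("P1", "R1"): 1, ("P1", "R2"): 2,
              ("P2", "R1"): 1, ("P2", "R3"): 2}
traffic = {"P1": 10, "P2": 10}                 # redundant pkts/s per path
capacity = {"R1": 5, "R2": 10, "R3": 10}       # per-router cache/decode cap

steps = [i / 10 for i in range(11)]            # candidate fractions
keys = list(hops_saved)
best, best_val = None, -1.0
for frac in product(steps, repeat=len(keys)):
    x = dict(zip(keys, frac))
    # Hash ranges partition [0,1]: per-path fractions sum to <= 1
    if any(sum(x[(p, r)] for r in paths[p]) > 1 for p in paths):
        continue
    # Per-router capacity constraint
    load = {r: 0.0 for r in capacity}
    for (p, r), f in x.items():
        load[r] += f * traffic[p]
    if any(load[r] > capacity[r] + 1e-9 for r in capacity):
        continue
    # Objective: total byte-hop reduction
    val = sum(f * traffic[p] * hops_saved[(p, r)] for (p, r), f in x.items())
    if val > best_val:
        best_val, best = val, x
```

On this instance the search assigns each path entirely to its deepest router (R2 for P1, R3 for P2), which is exactly the "decode as far downstream as capacity allows" behavior the real LP exhibits.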

21 Design Components
– How do we coordinate caching responsibilities across routers?
– Correctness: How do ingresses and interior nodes maintain cache consistency?
– How do ingresses identify candidate packets for encoding?
– What does the optimization entail?

22 How do ingresses and interior nodes maintain cache consistency?
What if a traffic surge on the red path causes packets on the black path to be evicted?
Create "logical buckets" for every path-interior pair; evict only within buckets
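A minimal sketch of the logical-bucket fix: the packet store is partitioned per (path, interior-router) pair and eviction happens only within a bucket, so a surge on one path cannot push out another path's packets. FIFO eviction and the bucket size are illustrative choices, not SmartRE's stated policy.

```python
from collections import defaultdict, deque

class BucketedCache:
    def __init__(self, bucket_size):
        self.bucket_size = bucket_size
        self.buckets = defaultdict(deque)   # (path, router) -> packets

    def insert(self, path, router, pkt):
        b = self.buckets[(path, router)]
        if len(b) >= self.bucket_size:
            b.popleft()                      # evict within this bucket only
        b.append(pkt)

    def contains(self, path, router, pkt):
        return pkt in self.buckets[(path, router)]

cache = BucketedCache(bucket_size=2)
cache.insert("black", "R1", "A")
for i in range(10):                          # surge on the red path
    cache.insert("red", "R1", f"red{i}")
```

After the surge, the red bucket has churned down to its last two packets, but the black path's packet "A" is untouched, which is exactly the isolation the slide asks for.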

23 SmartRE: Putting the Pieces Together
Network-wide optimization @ NOC: routing, redundancy profile, traffic, device constraints
"Encoding configs" to ingresses; "decoding configs" to interiors
Non-overlapping hash ranges per path avoid redundant caching
Cache consistency: create "logical buckets" for every path-interior pair; evict only within buckets
Candidate packets must be available on the new packet's path

24 Outline
Background and Motivation
High-level Idea
Design and Implementation
Evaluation

25 Reduction in Network Footprint
Setup: real traces from U. Wisconsin, emulated over tier-1 ISP topologies
Processing constraints derived from mem ops & DRAM speed; 2 GB cache per RE device
SmartRE is 4-5X better than the hop-by-hop approach
SmartRE gets 80-90% of ideal unconstrained RE
Results are consistent across redundancy profiles and on synthetic traces

26 More Results
Can we benefit even with partial deployment?
– Even simple strategies work pretty well!
What if redundancy profiles change over time?
– Some "dominant" patterns are stable
– Good performance even with dated configs

27 To Summarize
RE as a network service is a promising vision
– Generalizes specific deployments: benefits all users, apps, and ISPs
SmartRE makes this vision more practical
– Looks beyond the link-local view; decouples encoding and decoding
– Network-wide coordinated approach is 4-5X better than current proposals
– Works even with less-than-ideal/partial deployment
We have glossed over some issues:
– Consistent configs, decoding gaps, packet losses, routing dynamics
Other domains: data center networks, multihop wireless, etc.

28 How do ingresses identify candidate packets for encoding?
Path P1 ranges: [0,0.2], [0.2,0.5], [0.5,0.7]; Path P2 ranges: [0,0.1], [0.1,0.4], [0.4,0.6]
Always safe to encode w.r.t. packets cached on the same path
A packet cached on P2 is a candidate for P1 only if it was cached in routers common to P1 and P2, i.e., hash ∈ [0,0.4]
Candidate packets must be available on the new packet's path
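The candidate check above can be coded directly. The ranges follow the slide: the routers common to P1 and P2 cover [0, 0.4] of P2's hash space, so a P2-cached packet is usable on P1 iff its hash falls there. Taking the hull of the shared routers' ranges assumes those ranges are contiguous, as in the slide's example; function and variable names are illustrative.

```python
def shared_range(cached_path_ranges, common_routers):
    """Hull of the hash ranges assigned to the routers shared by both
    paths (assumes those ranges are contiguous, as on the slide)."""
    los = [cached_path_ranges[r][0] for r in common_routers]
    his = [cached_path_ranges[r][1] for r in common_routers]
    return min(los), max(his)

# P2's per-router ranges, per the slide; R4 is not on P1
p2_ranges = {"R1": (0.0, 0.1), "R2": (0.1, 0.4), "R4": (0.4, 0.6)}
common = ["R1", "R2"]              # routers shared by P1 and P2

def is_candidate(h, cached_path, new_path):
    if cached_path == new_path:
        return True                # same path: always safe
    lo, hi = shared_range(p2_ranges, common)
    return lo <= h < hi

ok = is_candidate(0.3, "P2", "P1")   # cached at R2, on the shared prefix
bad = is_candidate(0.5, "P2", "P1")  # cached at R4, which P1 never visits
```

The point of the check: encoding against a packet the downstream decoders have never cached would leave the shim undecodable, so the ingress must restrict matches to packets it knows are stored along the new packet's path.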

29 Decoding Gaps

30 Traffic Dynamics
Changes in redundancy profiles:
– Ingresses can track load @ interior nodes
– Ingresses track "matches" between paths
– Report back to the NOC
Routing dynamics:
– Recompute
– Send a signal to ingresses to indicate invalid encodings

31 Packet Loss
A problem with most RE, not just SmartRE
– Some performance hit, but not that bad
Avoid encoding retransmissions
– Otherwise this will lead to repeated loss!

32 Memory Efficiency Example
Packet arrival order: A, B, C, D, A, B, C, D
Ingress link memories can store 3 pkts; each interior router can store only 1 pkt
Total RE savings in network footprint ("byte-hops")?
Hop-by-hop: total network footprint = 32 - 4 = 28
"Coordinated": each router caches one distinct packet (A, B, C, or D); total network footprint = 32 - (4 + 3 + 2 + 1) = 22
Can we do better than this?
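The byte-hop arithmetic on this slide, spelled out. Eight packets (A, B, C, D, each sent twice) cross the path, and the uncompressed footprint of 32 implies a 4-link path; that path length, and the per-strategy savings, are read off the slide rather than derived here.

```python
links, pkts = 4, 8                    # 4-link path inferred from 8*4 = 32
baseline = links * pkts               # uncompressed footprint: 32

# Hop-by-hop: the small per-link caches eliminate each repeat on at
# most one link: 4 packet-hops saved in total (slide's figure).
hop_by_hop = baseline - 4             # 28

# Coordinated: routers 1..4 each hold one distinct packet, so the
# repeats of A, B, C, D are decoded 4, 3, 2 and 1 links downstream.
coordinated = baseline - (4 + 3 + 2 + 1)   # 22
```

The same total cache space, spread so that no packet is stored twice, buys 10 saved packet-hops instead of 4.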

33 Throughput Efficiency Example
Device budgets: 40 mem ops/s (core) and 20 mem ops/s (edge)
Hop-by-hop: nodes both encode and decode at every hop; total savings = 5X * 4 + 10X * 2 = 40X
"Coordinated": edges only encode (5 Enc/s each); the core only decodes (20 Dec/s, 0 Enc/s); total savings = 20X * 3 = 60X
Can we do better than this?
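The arithmetic behind this slide, using the encode/decode costs from slide 11 (4 and 2 memory ops). How many ingresses feed the core and how many links each decode spans are inferred from the slide's totals, so the topology here is an assumption.

```python
ENC, DEC = 4, 2                        # mem ops per operation (slide 11)

def decode_rate(budget, enc_rate=0):
    """Decodes/s left after spending ops on `enc_rate` encodes/s."""
    return (budget - enc_rate * ENC) // DEC

# Hop-by-hop: slide total of 5 pkts/s over 4 links plus 10 pkts/s
# over 2 links, limited by re-encoding at every hop.
hop_by_hop = 5 * 4 + 10 * 2            # 40X

# Coordinated: edges spend their whole 20-op budget encoding (5/s
# each); the 40-op core spends its whole budget decoding 20 pkts/s,
# each decode saving 3 links.
coordinated = decode_rate(40) * 3      # 60X
```

Splitting the roles (edges encode, core decodes) lets the big core device spend every memory op on cheap decodes instead of expensive encodes.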

34 In-Network RE as a Service, Version 1: At the Edge
Routers keep a cache of recent pkts
New packets get "encoded" or "compressed" w.r.t. cached pkts
Encoded pkts are "decoded" or "uncompressed" downstream
Key Issues:
1. Performance: Maximize traffic footprint reduction ("byte-hops"); leverage all possible RE (e.g., inter-path)
2. Cache Capacity: Can only provision finite DRAM
3. Processing Constraints: Enc/Dec are memory-access limited
Performance (leverage all RE) ✖; Cache constraints ✔; Processing constraints ✔
Can leverage intra-path RE; cannot leverage inter-path RE
Conceptually very appealing! Generalizes "narrow" deployments
Transparent to users/apps: democratizes benefits of RE
Benefits ISPs: better TE / lower load

35 In-Network RE as a Service, Version 2: Hop-by-Hop (Anand et al., SIGCOMM '08)
Performance (leverage all RE) ✔; Cache constraints ✖; Processing constraints ✖
Same packet encoded and decoded many times; same packet cached many times
Hop-by-hop RE is limited by the encoding bottleneck:
Encoding: ~15 mem. accesses → ~2.5 Gbps (@ 50 ns DRAM)
Decoding: ~3-4 accesses → >10 Gbps (@ 50 ns DRAM)

36 Ingress/Encoder Algorithm
Check if this packet needs to be cached in the content store (encoding config)
Identify candidate packets to encode, i.e., those cached along the path of the new packet
Find "compressible regions" with efficient algorithms (Spring & Wetherall SIGCOMM '00; Anand et al. SIGCOMM '08)
Shim carries Info(matched pkt), MatchRegionSpec
Send compressed packet

37 Interior/Decoder Algorithm
Check if this packet needs to be cached in the content store (decoding config)
Reconstruct compressed regions from reference packets
Shim carries Info(matched pkt), MatchRegionSpec
Send uncompressed packet

