
1 Ditto: A System for Opportunistic Caching in Multi-hop Mesh Networks. Fahad Rafique Dogar. Joint work with Amar Phanishayee, Himabindu Pucha, Olatunji Ruwase, and Dave Andersen. Carnegie Mellon University.

2 Wireless Mesh Networks (WMNs). Cost effective; greater coverage. Testbeds: RoofNet@MIT, MAP@Purdue, … Commercial: Meraki (100,000 users of the San Francisco 'Free the Net' service).

3 Throughput Problem in WMNs. Interference; the gateway (GW) becomes a bottleneck. (Figure: multi-hop paths through relay nodes P1 and P3 toward the gateway.)

4 Exploiting Locality through Caching. On-path + opportunistic caching -> Ditto. Path of the transfer: Alice, P1, P3, GW. P1 and P3 perform on-path caching; P2 can perform opportunistic caching (by overhearing).

5 Ditto: Key Contributions. Built an opportunistic caching system for WMNs. Insights on opportunistic caching: is it feasible, and what are the key factors? Ditto's throughput compared with on-path caching and no caching, evaluated on two testbeds: up to 7x improvement over on-path caching and up to 10x improvement over no caching.

6 Outline: Challenge and Opportunity; Ditto Design; Evaluation; Related Work.

7 Challenge for Opportunistic Caching. Wireless networks experience high loss rates, usually dealt with through link-layer retransmissions. The overhearing node also experiences losses, but unlike P1, P2 cannot ask for retransmissions, so successful overhearing of a large file is unlikely. Main challenge: lossy overhearing.

8 More Overhearing Opportunities. Path of the transfer: Alice, P1, P3, … P2 may benefit from multi-hop transfers, which reduces the problem of lossy overhearing.

9 Outline: Challenge and Opportunity (lossy overhearing; multiple opportunities to overhear); Ditto Design (chunk-based transfers; Ditto proxy; sniffer); Evaluation; Related Work.

10 Chunk-Based Transfers. Motivation: lossy overhearing -> smaller caching granularity. Idea: divide a file into smaller chunks (8-32 KB) and use the chunk as the unit of transfer. Ditto uses the Data Oriented Transfer (DOT) system [1] for chunk-based transfers. [1] Tolia et al., An Architecture for Internet Data Transfer, NSDI 2006.

11 Data Oriented Transfer (DOT). Chunking: foo.txt is split into chunks, and each chunk ID is a cryptographic hash of the chunk's contents. DOT transfer: the app requests foo.txt from the sender, the response names the chunk IDs {A, B, C}, and the receiver's DOT then fetches each chunk from the sender's DOT with chunk requests and chunk responses.
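
A minimal sketch of the DOT-style chunking described above: split a file into fixed-size chunks and name each chunk by a cryptographic hash of its contents. The 16 KB chunk size and SHA-1 hash here are illustrative assumptions within the 8-32 KB range mentioned in the talk, not Ditto's exact parameters.

```python
# Sketch of DOT-style chunking: split a file into fixed-size chunks and
# name each chunk by a cryptographic hash of its contents.
# CHUNK_SIZE and SHA-1 are illustrative assumptions.
import hashlib

CHUNK_SIZE = 16 * 1024  # within the 8-32 KB range used in the talk

def chunk_file(path):
    """Return a list of (chunk_id, chunk_bytes) pairs for the file."""
    chunks = []
    with open(path, "rb") as f:
        while True:
            data = f.read(CHUNK_SIZE)
            if not data:
                break
            chunk_id = hashlib.sha1(data).hexdigest()
            chunks.append((chunk_id, data))
    return chunks

# The sender's DOT would answer a request for foo.txt with just the
# ordered list of chunk IDs; the receiver then fetches each chunk by ID.
if __name__ == "__main__":
    for cid, data in chunk_file("foo.txt"):  # assumes foo.txt exists locally
        print(cid, len(data))
```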

12 An Example Ditto Transfer. Same as DOT, except that the chunk requests and chunk responses between receiver and sender now pass through a Ditto proxy.

13 Next hop based on routing-table information; a separate TCP connection on each hop. On-path caching and opportunistic caching.
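
As a rough illustration of the per-hop proxy behavior above: answer a chunk request from the local cache when possible, otherwise fetch the chunk from the next hop toward the gateway and cache the reply. This is a hedged sketch under assumed names; the networking is stubbed out (in Ditto it would be a chunk request on a separate per-hop TCP connection chosen from routing-table information), and NEXT_HOP_STORE merely stands in for whatever the next hop would return.

```python
# Sketch of a Ditto-proxy-style chunk handler: serve from the local cache
# if possible, otherwise fetch from the next hop and cache the reply.
# NEXT_HOP_STORE and fetch_from_next_hop() are hypothetical stand-ins.

cache = {}                                              # chunk_id -> chunk bytes at this proxy
NEXT_HOP_STORE = {"abc123": b"example chunk payload"}   # stand-in for the next hop's reply

def fetch_from_next_hop(chunk_id):
    # In Ditto this would be a chunk request over a separate TCP
    # connection to the next hop; here we just look it up.
    return NEXT_HOP_STORE[chunk_id]

def handle_chunk_request(chunk_id):
    if chunk_id in cache:               # cached on-path or via overhearing
        return cache[chunk_id]
    data = fetch_from_next_hop(chunk_id)
    cache[chunk_id] = data              # on-path caching for later requests
    return data

print(len(handle_chunk_request("abc123")))  # first request: miss, then cached
print("abc123" in cache)                    # True
```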

14 Sniffer. Path of the transfer: Alice, P1, P3, …, with P2 overhearing. TCP stream identification through (src IP, src port, dst IP, dst port); placement within the stream based on TCP sequence number. Next step: inter-stream chunk reassembly.
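
The sniffer bookkeeping above lends itself to a short sketch: index overheard TCP payloads by the 4-tuple and place each payload at its TCP sequence number, so bytes of the same stream can later be merged even when some segments were missed. Capturing and parsing the packets themselves (e.g. on a monitor-mode interface) is assumed to happen elsewhere, and sequence-number wraparound is ignored for brevity.

```python
# Sketch of the sniffer's bookkeeping: group overheard TCP payloads by the
# (src_ip, src_port, dst_ip, dst_port) 4-tuple and record each payload at
# its TCP sequence number. Packet capture/parsing is assumed elsewhere.
from collections import defaultdict

# streams[four_tuple] maps a starting sequence number to the payload bytes
streams = defaultdict(dict)

def on_overheard_segment(four_tuple, seq, payload):
    """Record one overheard TCP segment; duplicates simply overwrite."""
    if payload:
        streams[four_tuple][seq] = payload

def contiguous_runs(stream):
    """Merge the recorded segments of one stream into (start_seq, bytes)
    runs, leaving gaps where segments were never overheard."""
    runs = []
    for seq in sorted(stream):
        data = stream[seq]
        if runs and runs[-1][0] + len(runs[-1][1]) == seq:
            runs[-1] = (runs[-1][0], runs[-1][1] + data)   # extend current run
        else:
            runs.append((seq, data))                        # start a new run
    return runs
```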

15 Inter-Stream Chunk Reassembly. Look for the Ditto header to find chunk boundaries; exploits multiple overhearing opportunities.
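
Building on the sniffer sketch, inter-stream reassembly can be illustrated as scanning the contiguous byte runs of every overheard stream for a chunk header marking chunk boundaries, and keeping any chunk whose bytes are fully present in at least one stream; a chunk missed in one stream may thus be recovered from another transmission of it. The header layout used here (a 4-byte marker, a 20-byte chunk ID, a 4-byte length) is a made-up illustration, not Ditto's wire format, and in a real implementation the recovered payload would presumably be verified against its cryptographic chunk ID before being cached.

```python
# Sketch of inter-stream chunk reassembly: scan overheard byte runs (from
# any stream) for chunk headers and keep every chunk that is completely
# present in at least one run. Header layout is an illustrative assumption.
import struct

MAGIC = b"DTTO"
HEADER = struct.Struct("!4s20sI")   # marker, chunk ID (e.g. SHA-1), payload length

def extract_chunks(byte_runs):
    """byte_runs: iterable of byte strings, one per contiguous overheard run,
    possibly from different TCP streams. Returns {chunk_id_hex: payload}."""
    recovered = {}
    for run in byte_runs:
        pos = run.find(MAGIC)
        while pos != -1:
            if pos + HEADER.size <= len(run):
                _, chunk_id, length = HEADER.unpack_from(run, pos)
                start = pos + HEADER.size
                if start + length <= len(run):   # chunk fully overheard in this run
                    recovered[chunk_id.hex()] = run[start:start + length]
            pos = run.find(MAGIC, pos + 1)
    return recovered
```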

16 Outline: Challenges and Opportunities; Ditto Design; Evaluation (testbeds; experimental setup; key results; summary); Related Work.

17 Emulab Wireless Testbed.

18 MAP Campus Testbed (Purdue Univ.). (Figure: node map with the gateway marked.)

19 Experimental Setup. Mode: 802.11b; Rate: auto; Other parameters: default; Routing: static; Cross traffic: none; Cache eviction: none; File sizes: 1-5 MB; Chunk sizes: 8 KB, 16 KB, 32 KB.

20 Evaluation Scenarios: Measuring Overhearing Effectiveness. Each node in turn becomes the receiver of a transfer, and every other node acts as an observer; each observer reports the number of chunks it successfully reconstructed. (Figure: an example assignment of transfer, receiver, and observer roles among P1, P2, P3, and the gateway.)

21 Reconstruction Efficiency. Around 60% of the observers don't reconstruct anything; around 30% of the observers reconstruct at least 50% of the chunks.

22 Reconstruction Efficiency. Around 50% of the observers are able to reconstruct at least 50% of the chunks.

23 Zooming In: Campus Testbed. (Figure.)

24 (Figure only; no transcript text.)

25 Zooming In: Campus Testbed. (Figure.)

26 Shield the gateway from becoming a bottleneck.

27 Throughput Evaluation. Leaf nodes request the same file from the gateway (e.g., a software update on all nodes). Different request patterns: sequential, staggered, and random order of receivers. Schemes: Ditto compared with on-path caching and E2E (end-to-end, no caching).

28 Throughput Improvement in Ditto (campus testbed). Median = 540 Kbps.

29 Throughput Improvement in Ditto (campus testbed). Medians = 540 Kbps and 1380 Kbps.

30 Throughput Improvement in Ditto (campus testbed). Medians = 540 Kbps, 1380 Kbps, and 5370 Kbps.

31 Evaluation Summary. Proximity: nodes closer to the gateway have very high reconstruction efficiency and can shield the gateway from becoming a hotspot. Throughput: up to an order of magnitude improvement with Ditto. Inter-stream reassembly: few multiple-overhearing opportunities, but a 10% improvement where applicable. Chunk size: 8-32 KB chunk sizes provide good reconstruction efficiency with low overhead.

32 Related Work. Hierarchical caching [Fan98, Das07, …]: caching is more effective on lossy wireless links, and Ditto's overhearing feature is unique. Packet-level caching [Spring00, Afanasyev08]: Ditto is purely opportunistic and exploits similarity at inter-request timescales. Making the best of broadcast [MORE, ExOR, …]: largely orthogonal.

33 Conclusion. Opportunistic caching works! Key ideas: chunk-based transfer and inter-stream chunk reconstruction. Feasibility established on two testbeds; nodes closer to the gateway can shield it from becoming a bottleneck. Significant benefit to end users: up to 7x throughput improvement over on-path caching and up to 10x over no caching.

34 Thank you!

