
1 Elastic Cloud Caches for Accelerating Service-Oriented Computations Gagan Agrawal Ohio State University Columbus, OH David Chiu Washington State University Vancouver, WA

2 Cloud Computing ‣ Pay-As-You-Go Computing: running 1 machine for 10 hours = running 10 machines for 1 hour ‣ Elasticity: Cloud applications can stretch and contract their resource requirements ‣ “Infinite resources”

3 Outline ‣ Accelerating Data Intensive Services Using the Cloud: Motivating Application; Design of an Elastic Cache ‣ Performance Evaluation: Up-Scaling (cache expansion); Down-Scaling (cache contraction) ‣ Future Work & Conclusion

4 Motivating Application: Data Sources

5 Computing & Storage Resources (Geoinformatics Cyber Infrastructure: Lake Erie)

6 Shared/Proprietary Web Services (figure: Geoinformatics Cyber Infrastructure: Lake Erie, with Web Services marked)

7 Service Interaction with Cyber Infrastructure (figure: the service infrastructure)

8 Service Interaction with Cyber Infrastructure (figure: clients invoke services and the service infrastructure returns results)

9 Problem: Query-Intensive Circumstances (figure: the service infrastructure under heavy query load)

10 Outline ‣ Accelerating Data Intensive Services Using the Cloud: Motivating Application; Design of an Elastic Cache ‣ Performance Evaluation: Up-Scaling (cache expansion); Down-Scaling (cache contraction) ‣ Future Work & Conclusion

11 Designing an Elastic Cache (figure: compute cloud nodes A and B alongside the service infrastructure)

12 Designing an Elastic Cache (figure: cache requests, inserts, and misses routed to nodes A and B via node = (k mod 2))

13 Eventual Overloading (figure: nodes A and B saturate under node = (k mod 2))

14 Scaling Up to Meet Demand (figure: new node C allocated from the compute cloud; routing is still node = (k mod 2))

15 Issues with Naive Hashing (figure: routing becomes node = (k mod 3) once C joins). How do we incorporate node C with the least amount of “disruption”?
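To see why naive modular hashing is so disruptive, consider a small sketch (hypothetical Python, not from the talk): when routing changes from node = (k mod 2) to node = (k mod 3), most cached keys suddenly map to a different node.

```python
# Naive modular hashing: count how many cached keys change
# owners when the node count grows from 2 to 3.
keys = range(100)
moved = sum(1 for k in keys if (k % 2) != (k % 3))
print(moved)  # 66 of the 100 keys now hash to a different node
```

Roughly two-thirds of the cache would have to migrate (or be treated as misses), which motivates the DHT scheme on the following slides.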

16 Distributed Hashtables (DHT) (figure: nodes A and B own hash intervals (buckets) on a ring with positions 8, 25, 75)

17 Distributed Hashtables (DHT) (figure: invoke service(35); h(k) = (k mod 100), so h(35) = (35 mod 100) = 35. Which proxy has the page?)
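The bucket lookup on this slide can be sketched as follows (hypothetical Python; the ring positions 25 and 75 and the successor-owns-interval convention are my reading of the figure, not a stated detail of the system):

```python
import bisect

RING = 100  # hash space, h(k) = k mod 100 as on the slide

# Hypothetical placement: each node owns the interval of the
# ring ending at its position (successor convention).
positions = [25, 75]    # sorted ring positions
owners    = ['B', 'A']  # node sitting at each position

def lookup(k):
    h = k % RING
    i = bisect.bisect_left(positions, h)
    # wrap around the ring when h is past the last position
    return owners[i % len(owners)]

print(lookup(35))  # h(35) = 35 falls in (25, 75] -> node 'A'
```

Lookups are a single binary search over the bucket boundaries, so any proxy can answer “which node has this page?” locally.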

18 DHT to Minimize Hash Disruption when Scaling (figure: node C joins the ring at position 50; only records hashing into (25, 50] need to be moved from A to C!)
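A quick sketch (hypothetical Python, under the same successor-interval convention as the previous slide) confirms that adding C at ring position 50 disturbs only the (25, 50] bucket:

```python
RING = 100

def owner(h, positions, names):
    """Successor convention: a key belongs to the first node at or
    after its ring position, wrapping around the ring."""
    for pos, name in sorted(zip(positions, names)):
        if h <= pos:
            return name
    return sorted(zip(positions, names))[0][1]  # wrap to smallest position

before = {h: owner(h, [25, 75], ['B', 'A']) for h in range(RING)}
after  = {h: owner(h, [25, 50, 75], ['B', 'C', 'A']) for h in range(RING)}

moved = [h for h in range(RING) if before[h] != after[h]]
print(moved)  # exactly the keys in (25, 50]: they move from A to C
```

All other buckets keep their owner, so scaling up touches only one slice of the cache instead of rehashing everything.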

19 That’s Not Completely Elastic ‣ What about relaxing the number of nodes to help save Cloud costs? ‣ First, we need an eviction scheme

20 Exponential Decay Eviction ‣ At eviction time: a decay value is calculated for each data record in the evicted slice. The value is higher:  if the record was accessed more recently  if the record was accessed frequently. If the value is lower than some threshold, the record is evicted.
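The decay idea can be sketched like this (hypothetical Python; the half-life, threshold, and function names are my own illustration, not the paper’s actual formula):

```python
import math

def decay_score(access_times, now, half_life=100.0):
    """Hypothetical decay score: each past access contributes a
    weight that halves every `half_life` seconds, so records that
    were hit recently and frequently score higher."""
    return sum(0.5 ** ((now - t) / half_life) for t in access_times)

THRESHOLD = 0.5  # assumed cutoff, tunable in practice

def should_evict(access_times, now):
    return decay_score(access_times, now) < THRESHOLD

# A record hit often and recently survives; a stale one is evicted.
print(should_evict([900, 950, 990], now=1000))  # False (still hot)
print(should_evict([100], now=1000))            # True  (decayed away)
```

Because old accesses decay exponentially, frequency and recency are folded into a single number that can be compared against one threshold at eviction time.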

21 Outline ‣ Accelerating Data Intensive Services Using the Cloud: Motivating Application; Design of an Elastic Cache ‣ Performance Evaluation: Up-Scaling (cache expansion); Down-Scaling (cache contraction) ‣ Future Work & Conclusion

22 Experimental Configuration ‣ Application: Shoreline Extraction ‣ Takes 23 seconds to complete without the benefit of the cache ‣ Executed on a miss ‣ Amazon EC2 Cloud; each Cloud node:  Small Instance (single-core 1.2 GHz, 1.7 GB RAM, 32-bit)  Ubuntu Linux ‣ Caches start out cold ‣ Data stored in memory only

23 Experimental Configuration ‣ Our approach exploits an elastic Cloud environment ‣ We compare GBA against statically allocated Cloud environments: 2 fixed nodes (static-2); 4 fixed nodes (static-4); 8 fixed nodes (static-8) ‣ Cache overflow → LRU eviction
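The static baselines’ overflow policy, LRU eviction, can be sketched with a minimal cache (hypothetical Python, not the authors’ implementation):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: on overflow, evict the least recently
    used entry -- the baseline policy on this slide."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None  # a miss would trigger the 23 s service call
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # drop the LRU entry

cache = LRUCache(2)
cache.put('a', 1); cache.put('b', 2)
cache.get('a')           # 'a' becomes most recently used
cache.put('c', 3)        # overflow: evicts 'b'
print(list(cache.data))  # ['a', 'c']
```

Unlike the decay scheme, LRU considers only recency, not access frequency, which is one axis of the comparison in the evaluation.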

24 Relative Speedup (Querying Rate: 255 invocations/sec)

25 Cache Expansion/Migration Times (Querying Rate: 255 invocations/sec)

26 Experimental Configuration ‣ Amazon EC2 Cloud; each Cloud node:  Small Instance (single-core 1.2 GHz, 1.7 GB RAM, 32-bit) ‣ Caches start out cold ‣ Data stored in memory ‣ When 2 nodes fall below 30% capacity, they are merged ‣ Sliding Window Configuration: Time Slice: 1 sec; Size: 100 Time Slices

27 Data Eviction: 50/255/50 queries per sec (Sliding Window Size = 100 sec; figure phases: 50 q/sec, 255 q/sec, 50 q/sec)

28 Cache Contraction: 50/255/50 queries per sec

29 Cache Contraction: 50/255/50 queries per sec (continued)

30 Experimental Summary ‣ Caching Web service results significantly reduces mean execution times for our application ‣ Cloud node allocation is a large overhead, but the cost is amortized over average execution times ‣ On average, our approach uses fewer nodes (and thus, lower cost) than statically allocated schemes

31 Outline ‣ Accelerating Data Intensive Services Using the Cloud: Motivating Application; Design of an Elastic Cache ‣ Performance Evaluation: Up-Scaling (cache expansion); Down-Scaling (cache contraction) ‣ Future Work & Conclusion

32 Conclusion ‣ We introduced some challenges in the Cloud: Controlling Cost; Real-time system management (downscaling, upscaling) ‣ We saw how the Cloud’s elasticity can be harnessed to accelerate service-oriented processes

33 Future/Current Work

34 Thank you ‣ Questions and Comments? David Chiu - david.chiu@wsu.edu Gagan Agrawal - agrawal@cse.ohio-state.edu In memory of Prof. Yuri Breitbart (1940 -- 2010)

