Request Dispatching for Cheap Energy Prices in Cloud Data Centers


1 Request Dispatching for Cheap Energy Prices in Cloud Data Centers
Hiroshi Yamada (TUAT), in cooperation with Takumi Sakamoto, Hikaru Horie, and Kenji Kono (Keio Univ.)

2 Cloud Service Providers
Make use of geographically-distributed data centers. Offer users a large amount of resources in a pay-as-you-go manner. Enable us to easily use resources as needed. E.g., Amazon operates tens of DCs around the world.

Cloud platforms are very common. Current cloud service providers such as Amazon, Google, and Microsoft commonly make use of geographically-distributed data centers. They orchestrate these data centers to provide users with a large amount of resources. Cloud users can utilize them in a pay-as-you-go manner, which means they can easily use resources as needed. For example, Amazon operates tens of DCs around the world, with DCs in London, Tokyo, Dallas, Seattle, and so on. If a cloud user needs more resources, he or she can use these DCs' resources to provide the services. (World map: Tokyo, London, Dallas, Miami, Amsterdam, Seattle)

3 Energy cost is EXTREMELY high
Millions of dollars in electricity costs per year [Qureshi+ SIGCOMM '04]: Google spends $38 million on electricity, and other large companies pay tens of millions of dollars. This is a significant financial overhead in management costs. Energy prices are likely to keep rising, due to limited fossil fuels and restrictions on building nuclear power plants ⇒ leading to higher charges for the use of clouds.

In the management of these DCs, energy cost is a significant concern, and it has become extremely high. Reports put energy costs at millions of dollars per year. For example, Google annually spends $38 million on electricity, and other large companies pay tens of millions of dollars. This means that energy cost is a significant financial overhead in management. Unfortunately, energy prices tend to become more and more expensive due to limited fossil fuels and restrictions on building nuclear power plants; this situation is even more severe in Japan. Rising energy costs can lead to higher charges for the use of clouds, so we want to avoid this situation.

4 Existing solution for reducing energy costs
Forwards client requests to DCs in cheaper energy price regions [Qureshi+ SIGCOMM '09]. Average utilization in DCs is 20〜30%. Servers can be replicated in different DCs.

Previous work has tackled this problem. To reduce energy costs, it dispatches client requests to DCs in cheaper energy price regions. The key insight is that energy prices differ from place to place. Moreover, the average utilization in DCs is only 20 to 30% (so that request bursts can be handled), and server instances can easily be replicated among DCs, as on Amazon EC2. (Graph: hourly energy prices in California, the Midwest, and New York; the x-axis is time and the y-axis is the electricity price. Prices fluctuate over time and differ across regions in the US.)

5 Challenges for dispatching client requests in clouds
DCs' capacities: the previous work dispatches requests only to the DC in the cheapest energy price region, even if that DC is overloaded ⇒ service quality can be degraded. Amazon loses 1% of income if response time increases by 10% [Kohavi+ '07]. Various factors must be considered: DC loads, response time, ・・・ ⇒ requests cannot be dispatched based on energy prices alone.

However, the existing solution cannot be applied to clouds directly. There are two challenges in energy-price-driven request dispatching. The first is DCs' capacities. The existing solution considers only the energy price, so it dispatches requests to the single DC in the cheapest energy price region even if that DC is overloaded, which means the services running on it are severely degraded and the DC cannot handle sudden request bursts. This is critical for services: for example, if response time increases by 10% at Amazon, income decreases by 1%. The second challenge is that various factors must be considered when dispatching requests in clouds, such as DC loads and response time. Since the existing work takes only energy prices into account, these demands are not satisfied.

6 Proposal
Dispatching client requests to DCs in cheaper energy price regions, considering: DCs' capacities, and various factors such as DC loads and response time. Assumptions: we know the real-time energy prices, which vary hourly, and we can automatically launch servers inside DCs, as with the Amazon auto scaling function.

To address this issue, we present an approach that dispatches client requests to DCs in cheaper energy price regions while considering DCs' capacities and additional factors such as DC loads and response time. Our dispatcher receives client requests and forwards them to DCs in cheaper energy price regions, satisfying predefined demands such as DC load limits. We make the same assumptions as Qureshi's SIGCOMM paper: first, we know the real-time energy prices, which vary hourly; second, we use a function that automatically launches servers inside DCs, like the Amazon auto scaling function. (Figure: a request dispatcher forwarding requests to DCs priced at $1/MWh, $2/MWh, and $3/MWh.)
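To make this flow concrete, here is a minimal Python sketch of such a dispatcher under the two stated assumptions. It is an illustration only, not the authors' implementation; the helper names (fetch_hourly_prices, choose_dc) and the per-DC fields are hypothetical placeholders.

```python
# A minimal sketch of the proposed dispatcher, not the authors' implementation.
# Assumption 1: real-time energy prices are available and vary hourly.
# Assumption 2: servers already run in every DC thanks to auto scaling,
# so forwarding a request to any DC is always possible.
import time


class Dispatcher:
    def __init__(self, dcs, fetch_hourly_prices, choose_dc):
        self.dcs = dcs                                   # per-DC state: region, ip, load, rtt_ms, price
        self.fetch_hourly_prices = fetch_hourly_prices   # hypothetical price feed ($/MWh per region)
        self.choose_dc = choose_dc                       # dispatching policy (see slide 10)
        self.last_refresh = 0.0

    def handle(self, request):
        # Refresh energy prices at most once per hour (assumption 1).
        if time.time() - self.last_refresh >= 3600:
            prices = self.fetch_hourly_prices()
            for dc in self.dcs:
                dc["price"] = prices[dc["region"]]
            self.last_refresh = time.time()
        # Forward the request to a cheap DC that still satisfies the
        # predefined demands (DC load, response time).
        target = self.choose_dc(self.dcs)
        return target["ip"]
```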

7 Contributions
Propose a mechanism to dispatch requests so as to reduce energy costs while satisfying pre-defined demands. Show an example algorithm that satisfies the pre-defined demands and can be integrated into our dispatcher. Implement a prototype and perform a simulation evaluation using real data.

Our contributions are as follows. We propose a mechanism to dispatch requests that reduces energy costs while keeping pre-defined demands satisfied. We also show a simple algorithm that satisfies the pre-defined demands (DC loads and response time) and can be integrated into our dispatcher. Finally, we implement a prototype and evaluate it in simulation using realistic data.

8 Design of request dispatcher
Requirements: Scalability (has to handle a huge number of requests sent to DCs); the ability to follow frequent redirection changes (energy prices and DC loads change frequently); and availability (request redirection easily fails if the dispatcher is fragile). Solution: mapping nodes that run between clients and DCs in a P2P fashion and satisfy the three requirements.

To design our dispatcher, we have to consider three requirements. The first is scalability: the dispatcher has to handle a huge number of requests sent to DCs. Second, the dispatcher needs to follow frequent redirection changes, since energy prices and DC loads change frequently. Third is availability: if the dispatcher is fragile, client request redirection easily fails. To meet these requirements, we introduce mapping nodes that run between clients and DCs in a P2P fashion.

9 Mapping Node Constructs a distributed hash table (DHT)
Key: a host name; value: a mapping entry. A mapping entry contains the host's IP and an RTT (cached). Returns an IP address, like DNS servers do. Clients use the returned address to connect.

Mapping nodes in detail: mapping nodes work together to construct a distributed hash table (DHT), which achieves the three requirements mentioned earlier (scalability, availability, and so on). They work like a DNS server for clients. A mapping node manages a host name as the key and a mapping entry as the value; the mapping entry is a data object that contains the host's IP and an RTT cached at the mapping node. The mapping node returns an IP address when a client request is received: the client first sends a request to a known mapping node to get an IP address of the server, the mapping node looks up the mapping entry for the given server name, and it returns the IP address to the client once the entry is found.
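The following sketch illustrates this DNS-like lookup path, with a plain dictionary standing in for the DHT (Overlay Weaver provides the real routing in the prototype). The class, method, host, and IP names are illustrative, not the actual API.

```python
# A sketch of the mapping-node lookup path; a plain dict stands in for the DHT.
from dataclasses import dataclass


@dataclass
class MappingEntry:
    ip: str        # IP address of a server replica for this host name
    rtt_ms: float  # round-trip time cached at the mapping node


class MappingNode:
    def __init__(self):
        self.table = {}  # key: host name, value: list of mapping entries

    def put(self, host, entry):
        self.table.setdefault(host, []).append(entry)

    def resolve(self, host, choose):
        """DNS-like lookup: return one IP address for `host`.
        `choose` is the dispatching policy that picks among entries."""
        entries = self.table.get(host)
        if not entries:
            raise KeyError(f"no mapping entry for {host}")
        return choose(entries).ip


# Usage: a client asks a known mapping node for the service's host name
# and then connects to the returned address, just as it would with DNS.
node = MappingNode()
node.put("shop.example.com", MappingEntry(ip="203.0.113.10", rtt_ms=42.0))
node.put("shop.example.com", MappingEntry(ip="198.51.100.7", rtt_ms=65.0))
print(node.resolve("shop.example.com", choose=lambda es: min(es, key=lambda e: e.rtt_ms)))
```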

10 A Request Dispatching Algorithm
Used by mapping nodes to decide which IP addresses to return. Calculates the dispatching request ratio for each DC; the DC to forward to is chosen based on this ratio. Cloud service providers can change the weights. Example: request dispatching based on DC load, response time, and energy prices.

In this work, our mapping nodes employ the following request dispatching algorithm to decide which IP addresses to return. It is very simple, so there is a lot of room for improvement, and novel algorithms from previous work could be borrowed; the important point is that an algorithm of this form is applicable to mapping nodes. The algorithm decides the percentage of requests to send to each DC, following pre-defined thresholds. It considers three factors (DC load, response time, and energy price), and each factor is given a weight that is adjusted depending on which factor is more important. The weights change with the situation: we set the weight functions to reduce a DC's share when its response time or load exceeds the threshold. (Weight functions: DC load, the weight decreases as the DC load grows; response time, the weight is 0 when latency exceeds 70 ms; energy price, the weight decreases smoothly as the price rises.)
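Below is a sketch of the ratio calculation. The slide only fixes the shapes of the weight functions (the load weight shrinks with load, the latency weight drops to zero above 70 ms, the price weight decays smoothly), so the concrete formulas here are assumptions rather than the authors' exact algorithm.

```python
# A sketch of the ratio-based dispatching algorithm, not the authors' exact code.
# The concrete weight functions below are assumptions; only their shapes
# follow the slide.
import math
import random

LOAD_THRESHOLD = 0.8      # DC load must stay below 80 %
LATENCY_THRESHOLD = 70.0  # response time must stay below 70 ms


def w_load(load):
    # Decreases toward 0 as the DC approaches its load threshold.
    return max(0.0, 1.0 - load / LOAD_THRESHOLD)


def w_latency(rtt_ms):
    # Hard cutoff: a DC farther than 70 ms receives no requests.
    return 1.0 if rtt_ms <= LATENCY_THRESHOLD else 0.0


def w_price(price, max_price):
    # Smoothly prefers cheaper energy (exponential decay is an assumption).
    return math.exp(-price / max_price)


def dispatch_ratios(dcs):
    """dcs: list of dicts with 'load', 'rtt_ms', and 'price' keys.
    Returns the fraction of requests each DC should receive."""
    max_price = max(dc["price"] for dc in dcs)
    scores = [w_load(dc["load"]) * w_latency(dc["rtt_ms"]) *
              w_price(dc["price"], max_price) for dc in dcs]
    total = sum(scores)
    if total == 0:  # every DC violates a constraint: fall back to uniform
        return [1.0 / len(dcs)] * len(dcs)
    return [s / total for s in scores]


def pick_dc(dcs):
    # Forward one request by sampling a DC according to its ratio.
    return random.choices(dcs, weights=dispatch_ratios(dcs), k=1)[0]
```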

11 Simulation Evaluation
Implemented a prototype on Overlay Weaver [Shudo+ '06]; DC emulator: CloudSim [Buyya+ '09]. Parameters: energy price data for the first week of August 2010 in the US; workload: an e-commerce DC workload borrowed from [Heller+ NSDI '10]. Constraints (thresholds): DC load below 80%, response time below 70 ms.

We implemented our prototype on Overlay Weaver, a DHT framework, and evaluated it on a DC emulator named CloudSim, with the following parameters: energy price data for the first week of August 2010 in the US, and an e-commerce DC workload borrowed from Heller's NSDI paper. We set the thresholds for DC load and response time to 80% and 70 ms, respectively. For comparison, we prepared four request dispatching policies: Random dispatches requests randomly; Latency & Price considers response time and energy price; Latency & Load considers response time and DC load; and Our Proposal considers response time, DC load, and energy price (the algorithm shown on the previous slide).
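As a rough illustration, the four policies can be viewed as different factor selections fed to the same dispatching algorithm; the policy names come from the slide, while this encoding is an assumption.

```python
# Each policy is just the set of factors it weighs; Random weighs nothing.
# The weight functions from the slide-10 sketch would be applied only to
# the listed factors, with the rest treated as a constant 1.0.
POLICIES = {
    "Random":          [],
    "Latency & Price": ["rtt_ms", "price"],
    "Latency & Load":  ["rtt_ms", "load"],
    "Our Proposal":    ["rtt_ms", "load", "price"],
}
```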

12 Result (Energy price and constraint violation)
Our proposal lowers energy prices without any constraint violation: it is 28% better in energy prices than Random ⇒ saving several hundred dollars if the annual energy cost is several thousand dollars.

The results are as follows. The left graph shows the energy price normalized by Random, and the right graph shows constraint violations, i.e., whether DC load or response time exceeded its threshold (lower is better for both). Our proposal successfully lowers energy prices without constraint violations, whereas Random and Latency & Price violate the constraints. Specifically, our proposal is 28% better in energy prices than Random, which means it saves several hundred dollars if the annual energy cost is several thousand dollars.

13 Result (DC loads) Our proposal successfully regulates DC loads
DC loads under Latency & Price fluctuate severely.

The DC load results are as follows. Our proposal keeps DC loads almost entirely under the threshold. Under Latency & Price, the loads fluctuate severely, since that policy ignores the DC load. (Figure: four panels of DC load over 7 days with the 80% threshold marked; under Random, Latency & Load, and Our Proposal the loads follow the workload, while under Latency & Price they fluctuate sharply.)

14 Related Work
Energy-price-driven request dispatcher [Qureshi+ SIGCOMM '09]: dispatches requests to DCs in cheap energy price regions, but considers only the energy price. Load balancer for DCs [Wendell+ SIGCOMM '10]: dispatches requests to DCs based on their loads, but does not consider energy prices. DHT-based DNS [Ramasubramanian+ SIGCOMM '04]: achieves fault-tolerant, high-performance DNS using a DHT, but does not focus on energy prices or cloud user policies.

Related work. Qureshi proposes an energy-price-driven request dispatcher; however, as mentioned, it considers only energy prices and cannot take other factors such as response time and DC loads into account. The load balancer for DCs dispatches requests to DCs based on their loads, but it does not consider energy prices. A DHT-based DNS has been proposed that achieves fault-tolerant, high-performance DNS using a DHT; we can apply these techniques to our mapping nodes to enhance them.

15 Conclusions
Proposal: an energy-price-driven request dispatcher for clouds that takes response time and DC loads into account and leverages a DHT mechanism. Our simulation results show that our dispatcher saves energy costs without violating pre-defined constraints. Future work: enhance the policies used in mapping nodes; evaluate our prototype in the real world.

To conclude: we propose an energy-price-driven request dispatcher for clouds that takes into account not only energy prices but also other factors such as DC loads and response time. It leverages a DHT mechanism to achieve scalability and availability. Our simulation results show that our dispatcher saves energy costs without violating pre-defined constraints. Future work includes enhancing the request dispatching policies used in mapping nodes and evaluating our prototype in the real world.

16 Result (Mapping node execution time)
Latency                  Mean     Median   90th %
Legacy DNS (original)    382 ms   39 ms    337 ms
Legacy DNS (revised)     246 ms   30 ms    163 ms
Mapping Node             102 ms   52 ms    204 ms

