Presentation transcript: "Data Center Efficiency with Optimized Cooling Control" (Intel Corporation, 2011)

Slide 1: Data Center Efficiency with Optimized Cooling Control
2011 6SigmaDC Data Center Conference
Chuck Rego, Sr. P.E., Chief Datacenter Architect, Cloud Infrastructure Group, Intel Corporation
11/15/2011
* Other names and brands may be claimed as the property of others. Copyright © 2010, Intel Corporation.

Slide 2: The Environmental Challenge
[Chart: Data Center Timeline from Year 1 through mid-life to expected end of life, plotting total design capacity, usable capacity, and operational intent]
Data centers are designed with a total design capacity and an expected end of life (EoL). Rudimentary techniques coupled with "best practices" are currently used in design and carried over to operations: W/sq. ft., kW/cabinet, CFM/kW, cold aisle/hot aisle.

Slide 3: The Environmental Challenge
[Chart: the same Data Center Timeline, with typical operation falling below operational intent, showing "lost" capacity and a "lost" lifespan against total design capacity and usable capacity]
Data centers seem to "lose" capacity over time and reach end of life early.
"The biggest challenge for 80%+ of owner/operators is obtaining the right balance between space, power and cooling." - Gartner
The typical data center operates at only 50% of its potential capacity. - U.S. Department of Energy

Slide 4: The Environmental Challenge
[Diagram: 100 units in from the utility supply; 6 units and 20 units are lost across power distribution and the cooling system before reaching the IT equipment]
Design PUE of 1.5; operational PUE of 2.2.
The real truth: capacity isn't lost, only hidden by low efficiency!
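To make the "hidden, not lost" point concrete, here is a minimal sketch using only the PUE definition (total facility power divided by IT power) and the slide's figures; the function name and printout are illustrative, not from the deck.

```python
# Minimal sketch: how operational PUE "hides" IT capacity relative to design.
# Uses only the PUE definition: PUE = total facility power / IT power.

def usable_it_power(utility_units: float, pue: float) -> float:
    """IT power available when the facility runs at a given PUE."""
    return utility_units / pue

utility_units = 100.0                              # "100 units in"
design = usable_it_power(utility_units, 1.5)       # design PUE
operational = usable_it_power(utility_units, 2.2)  # operational PUE

print(f"Design PUE 1.5      -> {design:.1f} units reach IT")       # 66.7
print(f"Operational PUE 2.2 -> {operational:.1f} units reach IT")  # 45.5
print(f"Capacity hidden by low efficiency: {design - operational:.1f} units")
```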

Slide 5: Current State of Data Center Cooling Control: The Server/Facility Disconnect
[Diagram: IT equipment manufacturer and facilities manager, separated by a disconnect]
The facility and the servers are designed separately, a major cause of performance problems and low efficiency.

Slide 6: The Environmental Challenge
[Images: servers, switches, and storage]
Different technologies and different power densities mean different airflow requirements. Same equipment: one works, one fails.

Slide 7: Air Cooling - Challenges and Opportunities
Variability levers:
- Instrumentation and control
- Higher temperature operations
- Workload distribution and provisioning
- E2E DC architecture
RLU - DC CPM

Slide 8: Integrated Cooling Control
[Diagram: equipment manufacturer and facilities manager, now linked]
A novel approach that ties the facility cooling and airflow to the server demand.

Slide 9: DC Efficiency

\[
\frac{\text{Computation}}{\text{Total Energy}}
= \underbrace{\left(\frac{1}{\text{PUE}}\right)}_{(a)}
\times \underbrace{\left(\frac{1}{\text{SPUE}}\right)}_{(b)}
\times \underbrace{\left(\frac{\text{Computation}}{\text{Total Energy to Electronic Components}}\right)}_{(c)}
\]

Equation 5.1: breaking an energy efficiency metric into three components: a facility term (a) (PUE = Power Usage Effectiveness), a server energy conversion term (b) (SPUE = Server PUE), and the efficiency of the electronic components in performing the computation per se (c). - Warehouse-Scale Computing, Barroso & Hölzle, p. 48
Slide annotations: the facility focus today is on term (a); the future focus is server hardware efficiency (b); "more focus here" points at workload computation efficiency (c).
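Read as a product of three factors, the metric is straightforward to compute. Below is a minimal sketch of Equation 5.1; the function name and the example PUE/SPUE values are assumptions for illustration, not figures from the deck.

```python
# Sketch of Equation 5.1: computation per unit of total energy, factored into
# a facility term (1/PUE), a server term (1/SPUE), and a component term.

def dc_efficiency(pue: float, spue: float, component_eff: float) -> float:
    return (1.0 / pue) * (1.0 / spue) * component_eff

# Assumed example values: facility PUE 1.5, server SPUE 1.2 (PSU and fan
# losses inside the chassis), component efficiency normalized to 1.0.
print(f"{dc_efficiency(pue=1.5, spue=1.2, component_eff=1.0):.2f}")  # 0.56
```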

Slide 10: Cooling Control Today
Data center layouts have moved towards segregation of supply and return, but ACU temperature control strategies are relatively unchanged. Return-air temperature control attempts to achieve a constant ambient temperature within the room (e.g., 25°C) by varying the temperature of the air being supplied.
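For concreteness, here is a minimal sketch of such a return-air control loop, assuming a simple proportional law; the gain, setpoint, and function name are illustrative assumptions, not a documented ACU algorithm.

```python
# Sketch of return-air temperature control: the ACU varies the supply
# temperature so that the return-air reading holds a fixed room setpoint.

def next_supply_temp(supply_c: float, return_c: float,
                     setpoint_c: float = 25.0, gain: float = 0.5) -> float:
    """Proportional step: push supply air colder when return air runs hot."""
    return supply_c - gain * (return_c - setpoint_c)

supply = 14.5
for return_air in (27.0, 26.0, 25.5, 25.0):        # illustrative readings
    supply = next_supply_temp(supply, return_air)
    print(f"return {return_air:.1f} C -> supply {supply:.2f} C")
```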

Slide 11: Supply Temperature Oscillation

Slide 12: High Ambient Data Center Operations
Increasing data center operating temperatures with optimized layouts can potentially reduce overall energy consumption and carbon footprint.
- Optimizing temperature, power, and performance benefits customers and reduces overall capital and operating expenditures.
- The increase in server power (0-6%) is offset by significant savings at the data center level.
- Testing results indicate no performance degradation at higher temperatures.
Trade-offs between facility power and server power must be understood to embrace high ambient DC operation, as sketched below.
* Results have been estimated based on internal Intel analysis and are provided for informational purposes only. Any difference in system hardware or software design or configuration may affect actual performance.
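A minimal sketch of that trade-off, with PUE values of my own choosing (1.8 vs. 1.4) purely for illustration; only the 0-6% server power increase comes from the slide.

```python
# Sketch: higher ambient raises server (fan) power a little, but a lower
# facility PUE can still cut total energy. PUE = total / IT, so total = IT * PUE.

def total_power_kw(it_kw: float, pue: float) -> float:
    return it_kw * pue

baseline     = total_power_kw(it_kw=100.0, pue=1.8)          # cool supply air
high_ambient = total_power_kw(it_kw=100.0 * 1.06, pue=1.4)   # +6% server power

print(f"baseline:     {baseline:.0f} kW")      # 180 kW
print(f"high ambient: {high_ambient:.0f} kW")  # 148 kW, a net saving
```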

Slide 13: Optimizing Data Center Resources
Platform-based sensors at each server vs. third-party sensors in isolated locations: IA enables rack- and row-level management and platform-enabled data center utilization metrics.
1. Collect data: inlet temperature, outlet temperature, airflow.
2. Aggregation.
3. Evaluation: compute (migrate useful work), cooling (adjust temperature, adjust airflow), power (cap servers, balance phases).
4. Manage events: scheduling analytics to modify, migrate, park, and control power.
A minimal sketch of this collect-aggregate-evaluate-act loop follows the list.
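In the sketch below, the dataclass, thresholds, and action strings are my own naming, since the slide does not specify the underlying API.

```python
from dataclasses import dataclass

@dataclass
class ServerReading:
    inlet_c: float      # 1. Collect: inlet temperature
    outlet_c: float     #             outlet temperature
    airflow_cfm: float  #             airflow

def manage_rack(readings: list[ServerReading], inlet_limit_c: float = 27.0) -> str:
    hottest = max(r.inlet_c for r in readings)   # 2. Aggregate to rack level
    if hottest > inlet_limit_c + 3:              # 3./4. Evaluate and act
        return "compute/power event: migrate work, cap server power"
    if hottest > inlet_limit_c:
        return "cooling event: adjust airflow or supply temperature"
    return "no action"

print(manage_rack([ServerReading(24.0, 38.0, 60.0),
                   ServerReading(28.5, 41.0, 75.0)]))
```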

Slide 14: Integrating Data to Optimize Resources
Future data center manageability: power- and thermal-aware scheduling.
[Diagram: power and thermal data feed into power management, thermal management, and configuration management systems]

Slide 15: Server Airflow Management
Using real-time data from a group of servers to optimize data center cooling.
- New sensors are defined to provide server-level information: total airflow through the server and average outlet temperature of the server.
- The new sensors are derived from sensory data already available on the server, computed with new algorithms and mathematical formulas.
- Airflow is derived from the speed (RPM) of each zone fan: Q = f(fan RPM).
- Average outlet temperature is computed from the total power drawn by the server and the airflow: Toutlet = Tinlet + Ptotal * 1.76 / Q.
A sketch of these derived sensors follows.
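In the sketch, the outlet-temperature formula is the one on the slide; the linear RPM-to-CFM mapping is a placeholder assumption, since the real Q = f(fan RPM) curve is platform-specific and must be calibrated.

```python
# Sketch of the derived "virtual sensors" for server airflow management.

def airflow_cfm(zone_fan_rpms: list[float], cfm_per_krpm: float = 8.0) -> float:
    """Total server airflow from zone fan speeds (assumed linear fan curve)."""
    return sum(rpm / 1000.0 * cfm_per_krpm for rpm in zone_fan_rpms)

def outlet_temp_c(inlet_c: float, power_w: float, q_cfm: float) -> float:
    """Average outlet temperature: Toutlet = Tinlet + Ptotal * 1.76 / Q."""
    return inlet_c + power_w * 1.76 / q_cfm

q = airflow_cfm([6000.0, 6200.0, 5800.0])                   # three fan zones
print(f"Q       = {q:.0f} CFM")                             # 144 CFM
print(f"Toutlet = {outlet_temp_c(22.0, 350.0, q):.1f} C")   # ~26.3 C
```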

Slide 16: Compute Data
Current technology and research in hardware-based compute utilization:
1. Operating-system-independent monitoring: no dependencies on OS agents, no impact on the production network, no deployment validation required.
2. Complete resource utilization mapping: correlation of compute, power, and thermal data.
3. Facility-level workload placement: deployment of workloads in a heterogeneous environment.
[Diagram labels: AC upgrades, power upgrades]

Slide 17: Thermal Data
Next-generation technology in thermal airflow management (platform, rack, and room views for the facilities manager):
1. Near-real-time thermal analysis.
2. Enablement of high ambient operating temperature: optimize cooling load, reduce cooling costs.
Findings:
- Airflow reduced from 525 CFM to 501 CFM: 2.4% reduction in fan power consumption.
- Chilled water temperature raised from 14.5°C to 23.9°C: 36% reduction in CW system power (a 1°C increase ~ 20% savings in CW power consumption).
Proof-of-concept participants: Future Facilities, Inc.

Slide 18: Power Data
1. Increase compute density. Rack power allocation: 12,320 W.
   - Initial setup: 32U filled at 385 W per server.
   - Stable rack setup: 36U filled at 342 W per server.
   - Efficient rack setup: 40U filled at 308 W per server.
2. Workload-based power optimization: watts saved per node, by workload.
Proof-of-concept participants: Baidu, China Telecom, BMW, Oracle.
Technology available today in Node Manager and Data Center Manager.
The density arithmetic is checked in the sketch below.
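A minimal check of the rack math: a fixed 12,320 W allocation divided by the capped per-server power (the helper name is mine).

```python
# Check of the rack-density arithmetic from the slide.

RACK_BUDGET_W = 12_320

def servers_per_rack(per_server_w: int, budget_w: int = RACK_BUDGET_W) -> int:
    return budget_w // per_server_w

for label, watts in (("initial", 385), ("stable", 342), ("efficient", 308)):
    print(f"{label:9s}: {watts} W/server -> {servers_per_rack(watts)} servers")
# initial  : 385 W/server -> 32 servers
# stable   : 342 W/server -> 36 servers
# efficient: 308 W/server -> 40 servers
```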

Slide 19: Simulation Results
Supply air of 23.9°C (an increase of 9.4°C from "as-is"). Exhaust air makes its way to the return with minimal mixing.
The changes incorporated have yielded a better airflow path based on the demand of the servers. This is a dynamic system in which the cooling vents regulate the airflow and temperature based on the temperature sensors in the servers.

Slide 20: Potential Savings

Parameter          | "As-Is"                        | Proposed Changes | Estimated Energy Savings
Supply flow        | 525 CFM (assuming 250 W motor) | 501 CFM          | 2.4% fan power reduction
Supply temperature | 14.5°C                         | 23.9°C           | ~36% reduction in CW system (a 1°C rise in CW yields 20% in chiller savings)

Slide 21: Summary
1. Real-time nodal data is integrated with a data center CFD tool.
2. The combination of measurement and simulation gives data center operators the ability to manage a dynamic cooling environment and operate at high ambient temperatures.
3. A dynamic data center cooling environment reacts to server requirements, allowing higher ambient temperatures and lower fan speeds, thereby reducing running costs without impact to IT resilience.

