Presentation transcript:

Slide 1: Data Center Efficiency with Optimized Cooling Control
6SigmaDC Data Center Conference, 11/15/2011
Chuck Rego, Sr. P.E., Chief Datacenter Architect, Cloud Infrastructure Group, Intel Corporation
* Other names and brands may be claimed as the property of others. Copyright © 2010, Intel Corporation.

Slide 2: The Environmental Challenge
[Chart: data center timeline from year 1 through mid-life to the expected end of life, plotting total design capacity, usable capacity, and operational intent.]
Data centers are designed with a total design capacity and an expected end of life (EoL). Rudimentary techniques coupled with "best practices" are currently used in design, and carried over to operations: W/sq. ft., kW/cabinet, CFM/kW, cold aisle/hot aisle.

Slide 3: The Environmental Challenge
[Chart: the same timeline, with typical operation tracking below the operational intent, showing "lost" capacity and a "lost" lifespan as the data center reaches end of life early.]
Data centers seem to "lose" capacity over time and reach end of life early.
"The biggest challenge for 80%+ of owner/operators is obtaining the right balance between space, power and cooling." - Gartner
"The typical data center operates at only 50% of its potential capacity." - U.S. Department of Energy

Slide 4: The Environmental Challenge
[Diagram: energy flow from utility supply through power distribution and the cooling system to the IT equipment; of 100 units in, 20 units are lost in cooling and 6 in power distribution. Design PUE of 1.5; operational PUE of 2.2.]
The real truth: capacity isn't lost, only hidden by low efficiency.
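A minimal sketch of the arithmetic behind these figures, assuming only the definition PUE = total facility energy / energy delivered to IT (the helper name is mine):

```python
# Minimal sketch of the PUE arithmetic behind the slide's energy-flow
# diagram. PUE = total facility energy / energy delivered to IT equipment.

def pue(total_in: float, it_load: float) -> float:
    """Power Usage Effectiveness: facility input divided by IT load."""
    return total_in / it_load

# Design case from the slide: PUE 1.5 means only 1/1.5 ~ 67% of every
# unit drawn from the utility reaches the IT equipment.
design_pue = 1.5
print(f"Design:      {100 / design_pue:.0f} of 100 units reach IT")

# Operational case from the slide: PUE 2.2, so only ~45 of 100 units
# reach IT - the capacity is hidden by low efficiency, not lost.
operational_pue = 2.2
print(f"Operational: {100 / operational_pue:.0f} of 100 units reach IT")
```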

Slide 5: Current State of Data Center Cooling Control: The Server/Facility Disconnect
The facility and the servers are designed separately: the IT equipment manufacturer on one side, the facilities manager on the other. This disconnect is a major cause of performance problems and low efficiency.

Slide 6: The Environmental Challenge
Servers, switches, storage: different technologies and different power densities mean different airflow requirements. Same equipment, one works... one fails.

Slide 7: Air Cooling - Challenges and Opportunities
Variability levers:
- Instrumentation and control
- Higher temperature operations
- Workload distribution and provisioning
- End-to-end (E2E) DC architecture
(RLU - DC CPM)

Slide 8: Integrated Cooling Control
A novel approach that ties facility cooling and airflow to server demand, connecting the equipment manufacturer and the facilities manager.

Slide 9: DC Efficiency

DC Efficiency = Computation / Total Energy
              = (1/PUE) x (1/SPUE) x (Computation / Total Energy to Electronic Components)
                  (a)        (b)                          (c)

EQUATION 5.1: breaking an energy-efficiency metric into three components: (a) a facility term (PUE = Power Usage Effectiveness), (b) a server energy-conversion term (SPUE = Server PUE), and (c) the efficiency of the electronic components in performing the computation per se. The slide's annotations: (a) is today's facility focus; (b), server hardware efficiency, is a future focus; (c), workload computation efficiency, is where more focus is needed.
- Barroso and Hölzle, Warehouse-Scale Computing, p. 48
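To make the composition concrete, here is a small sketch multiplying the three terms of Equation 5.1; the sample PUE, SPUE, and component-efficiency values are illustrative, not from the deck:

```python
# Illustrative composition of Equation 5.1: end-to-end efficiency is the
# product of a facility term (1/PUE), a server term (1/SPUE), and the
# electronic-component efficiency. The sample values below are made up
# to show the multiplication, not taken from the deck.

def dc_efficiency(pue: float, spue: float, component_eff: float) -> float:
    return (1.0 / pue) * (1.0 / spue) * component_eff

# e.g. PUE 2.2, SPUE 1.25, and components converting 60% of their input
# energy into useful computation:
eff = dc_efficiency(pue=2.2, spue=1.25, component_eff=0.60)
print(f"End-to-end efficiency: {eff:.1%}")   # ~21.8%
```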

Slide 10: Cooling Control Today
Data center layouts have moved towards segregation of supply and return air, but ACU temperature control strategies are relatively unchanged.
Return air temperature control attempts to achieve a constant ambient temperature within the room (e.g. 25°C) by varying the temperature of the air being supplied.
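A minimal sketch of what return-air control means in practice, assuming a simple proportional controller; the setpoint is the slide's 25°C, while the gain and supply-temperature limits are hypothetical:

```python
# Minimal sketch of return-air temperature control as described above:
# the ACU varies its supply temperature to hold the room (return) air at
# a fixed setpoint. The proportional gain and limits are hypothetical.

SETPOINT_C = 25.0      # constant ambient target from the slide
KP = 0.8               # hypothetical proportional gain
SUPPLY_MIN_C, SUPPLY_MAX_C = 10.0, 20.0

def next_supply_temp(return_temp_c: float, supply_temp_c: float) -> float:
    """Nudge the supply temperature to pull return air toward the setpoint."""
    error = return_temp_c - SETPOINT_C          # positive = room too warm
    new_supply = supply_temp_c - KP * error     # warmer room -> colder supply
    return max(SUPPLY_MIN_C, min(SUPPLY_MAX_C, new_supply))

print(next_supply_temp(return_temp_c=27.0, supply_temp_c=15.0))  # 13.4
```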

Slide 11: Supply Temperature Oscillation
[Chart only: supply air temperature oscillating over time.]

Slide 12: High Ambient Data Center Operations
Increasing data center operating temperatures with optimized layouts can potentially reduce overall energy consumption and carbon footprint.
- Optimizing temperature, power, and performance benefits customers and reduces overall capital and operating expenditures.
- The increase in server power at higher ambient (0-6%) is offset by significant savings at the data center level.
- Testing results indicate no performance degradation at higher temperatures.
Trade-offs between facility power and server power must be understood to embrace high ambient DC operation.
* Results have been estimated based on internal Intel analysis and are provided for informational purposes only. Any difference in system hardware or software design or configuration may affect actual performance.
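A back-of-envelope sketch of that trade-off; only the 0-6% server power increase comes from the slide, and every other number is a made-up placeholder:

```python
# Back-of-envelope sketch of the facility-vs-server trade-off: raising
# ambient temperature increases server (fan) power slightly but cuts
# facility cooling power. All absolute numbers here are illustrative;
# only the 0-6% server power increase comes from the slide.

it_power_kw = 1000.0          # hypothetical IT load
cooling_power_kw = 500.0      # hypothetical cooling plant load

server_increase = 0.06        # worst case from the slide (6%)
cooling_reduction = 0.30      # assumed savings from warmer setpoints

before = it_power_kw + cooling_power_kw
after = (it_power_kw * (1 + server_increase)
         + cooling_power_kw * (1 - cooling_reduction))
print(f"Net change: {after - before:+.0f} kW")  # -90 kW: facility savings win
```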

Slide 13: Optimizing Data Center Resources
IA enables rack- and row-level management through platform-based sensors at each server, rather than 3rd-party sensors in isolated locations. The loop (a sketch follows below):
1. Collect data: per-server inlet temperature, outlet temperature, and airflow.
2. Aggregation: roll per-server readings up into data center utilization metrics at rack and row level.
3. Evaluation: scheduling and analytics.
4. Manage events:
   - Compute: migrate useful work; modify, migrate, or park workloads.
   - Cooling: adjust temperature and airflow.
   - Power: cap servers, balance phases, control power.
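A sketch of the four-step loop under stated assumptions: all names, thresholds, and actions below are hypothetical stand-ins for whatever the real management stack would use:

```python
# Sketch of the slide's loop: collect per-server sensor data, aggregate
# to rack/row level, evaluate, then act on compute, cooling, or power.
# All names and thresholds here are hypothetical.

from dataclasses import dataclass
from statistics import mean

@dataclass
class ServerReading:
    inlet_c: float
    outlet_c: float
    airflow_cfm: float

def manage_row(readings: list[ServerReading]) -> list[str]:
    actions = []
    # 2. Aggregation: a rack/row-level view from per-server sensors.
    avg_outlet = mean(r.outlet_c for r in readings)
    # 3/4. Evaluation and event management (thresholds are made up).
    if avg_outlet > 45.0:
        actions.append("cooling: increase airflow / lower supply temp")
    if max(r.inlet_c for r in readings) > 35.0:
        actions.append("compute: migrate work off hottest servers")
        actions.append("power: cap affected servers")
    return actions

row = [ServerReading(24.0, 47.0, 60.0), ServerReading(36.0, 50.0, 55.0)]
print(manage_row(row))
```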

Slide 14: Integrating Data to Optimize Resources
Future data center manageability: power- and thermal-aware scheduling, fed by power and thermal data inputs from power management, thermal management, and configuration management systems.

Slide 15: Server Airflow Management
Using real-time data from a group of servers to optimize data center cooling (sketched below).
- New sensors are defined to provide server-level information: total airflow through the server, and average outlet temperature of the server.
- The new sensors are derived from sensory data already available on the server, computed with new algorithms and mathematical formulas:
  - Airflow is derived from the speed (RPM) of each zone fan: Q = f(fan RPM)
  - Average outlet temperature is computed from the total power drawn by the server and the airflow: Toutlet = Tinlet + Ptotal * 1.76 / Q
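A direct transcription of the two derived sensors; the outlet-temperature formula is the slide's, while the RPM-to-CFM fan curve is platform-specific and not given, so the linear coefficient below is a placeholder:

```python
# The two derived sensors from this slide. The RPM-to-CFM mapping
# Q = f(RPM) is platform-specific and not given, so the coefficient
# below is a placeholder; the outlet-temperature formula is the
# slide's (1.76 converts W per CFM to deg C for air).

CFM_PER_RPM = 0.004   # placeholder fan-curve coefficient, per fan

def total_airflow_cfm(zone_fan_rpms: list[float]) -> float:
    """Q = f(fan RPM), summed across zone fans (placeholder linear fit)."""
    return sum(CFM_PER_RPM * rpm for rpm in zone_fan_rpms)

def outlet_temp_c(inlet_c: float, total_power_w: float, q_cfm: float) -> float:
    """Toutlet = Tinlet + Ptotal * 1.76 / Q (from the slide)."""
    return inlet_c + total_power_w * 1.76 / q_cfm

q = total_airflow_cfm([8000, 8000, 9000])          # ~100 CFM
print(outlet_temp_c(inlet_c=24.0, total_power_w=350.0, q_cfm=q))  # ~30.2 C
```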

Slide 16: Compute Data
Current technology and research in hardware-based compute utilization:
1. Operating-system-independent monitoring: no dependencies on OS agents, no impact on the production network, no deployment validation required.
2. Complete resource utilization mapping: correlation of compute, power, and thermal data.
3. Facility-level workload placement: deployment of workloads in a heterogeneous environment.
[Chart: data center utilization plotted against AC and power upgrade limits.]

Slide 17: Thermal Data
Next-generation technology in thermal airflow management, connecting platform, rack, and room views to the facilities manager:
1. Near real-time thermal analysis.
2. Enablement of high ambient operating temperature: optimize cooling load, reduce cooling costs.
Findings:
- Airflow: 525 CFM to 501 CFM - a 2.4% reduction in fan power consumption.
- Chilled water temperature: 14.5°C to 23.9°C - a 36% reduction in CW system power (a 1°C increase yields ~20% savings in CW power consumption).
Proof-of-concept participant: Future Facilities, Inc.

Slide 18: Power Data
1. Increase compute density. Rack power allocation: 12,320 watts.
   - Initial setup: 32U filled at 385 W per server.
   - Stable rack setup: 36U filled at 342 W per server.
   - Efficient rack setup: 40U filled at 308 W per server.
2. Workload-based power optimization (table of watts saved per node by workload).
Proof-of-concept participants: Baidu, China Telecom, BMW, Oracle.
Technology available today in Node Manager and Data Center Manager.
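The rack-density arithmetic from point 1, reproduced as a sketch; the helper is mine, while the budget and per-server caps are the slide's:

```python
# The rack-density arithmetic from this slide: with a fixed rack power
# allocation, lowering the per-server power cap lets more servers fit.

RACK_BUDGET_W = 12_320   # rack power allocation from the slide

def servers_per_rack(per_server_cap_w: float) -> int:
    return int(RACK_BUDGET_W // per_server_cap_w)

for cap in (385, 342, 308):
    print(f"{cap} W/server -> {servers_per_rack(cap)} servers")
# 385 W -> 32, 342 W -> 36, 308 W -> 40, matching the slide's setups
```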

Slide 19: Simulation Results
Supply air of 23.9°C (an increase of 9.4°C from the "as-is" 14.5°C). Exhaust air makes its way to the return with minimal mixing.
The changes incorporated have yielded a better airflow path based on the demand of the servers. This is a dynamic system in which the cooling vents regulate airflow and temperature based on the temperature sensors in the servers.

Slide 20: Potential Savings

                     | "As-Is"                          | Proposed | Estimated energy savings
Supply flow          | 525 CFM (assuming a 250 W motor) | 501 CFM  | 2.4% fan power reduction
Supply temperature   | 14.5°C                           | 23.9°C   | ~36% reduction in CW system power
                     |                                  |          | (a 1°C rise in CW yields 20% in chiller savings)

Slide 21: Summary
1. Real-time nodal data is integrated with a data center CFD tool.
2. The combination of measurement and simulation gives data center operators the ability to manage a dynamic cooling environment and to operate at high ambient temperatures.
3. A dynamic data center cooling environment reacts to server requirements, allowing higher ambient temperatures and lower fan speeds, thereby reducing running costs without impact to IT resilience.