
1 Energy Efficient Data Centers: Strategies from the Save Energy Now Program. Federal Environmental Symposium, June 4, 2008. Dale Sartor, Lawrence Berkeley National Laboratory, DASartor@lbl.gov

2 High Tech Buildings are Energy Hogs:

3 LBNL Feels the Pain!

4 LBNL Super Computer Systems Power:

5 Typical Data Center Energy End Use [Diagram: energy flow among server load/computing operations, cooling equipment, and power conversions & distribution; 100 units in, 33 units delivered to computing, 35 units]

6 Overall Electrical Power Use [Chart courtesy of Michael Patterson, Intel Corporation]

7 Lifetime electrical cost will soon exceed the cost of IT equipment, but the IT equipment load can be controlled:
- Consolidation
- Server efficiency
  – Flops per watt
  – Efficient power supplies
- Software efficiency (virtualization, MAID, etc.)
- Power management
  – Low-power modes
- Redundant power supplies
- Reducing IT load has a multiplier effect (see the sketch below)
  – Equivalent savings (+/-) in infrastructure
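A minimal Python sketch of the multiplier effect described above: every kilowatt trimmed from the IT load also avoids the infrastructure overhead supporting it. The function name and the 1.83 default PUE (the LBNL benchmark average cited later in the talk) are illustrative assumptions.

```python
# Minimal sketch of the "multiplier effect" of IT load reduction (slide 7).
# Assumes a data center near the LBNL benchmark-average PUE of ~1.83.

def facility_savings_kw(it_load_reduction_kw: float, pue: float = 1.83) -> float:
    """Every kW trimmed from the IT load also avoids the cooling and
    power-conversion overhead needed to support it, so total facility
    savings scale roughly with PUE (total power / IT power)."""
    return it_load_reduction_kw * pue

if __name__ == "__main__":
    # Example: consolidating servers to shed 100 kW of IT load.
    print(f"Facility-level savings: {facility_savings_kw(100):.0f} kW")
    # -> about 183 kW saved, since infrastructure overhead shrinks too.
```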

8 Potential Benefits of Improved Data Center Energy Efficiency:
- 20-40% savings are typically possible
- Aggressive strategies can yield better than 50% savings
- Potential to defer the need to build new generating capacity and avoid millions of metric tons of carbon emissions
- Extend the life and capacity of existing data center infrastructure
- But is my center good or bad?

9 Benchmarking for Energy Performance Improvement: Energy benchmarking can identify best practices. As new strategies are implemented (e.g. liquid cooling), benchmarking enables comparison of performance.

10 With funding from PG&E and the CEC, LBNL conducted benchmark studies of 22 data centers:
- Found wide variation in performance
- Identified best practices

11 Your Mileage Will Vary: the relative percentage of energy actually used for computing varied considerably across the benchmarked centers.

12 Data Center Performance Varies in Cooling and Power Conversion
DCiE (Data Center Infrastructure Efficiency) = Energy for IT Equipment / Total Energy for Data Center
- Typical practice: DCiE < 0.5
  – Power and cooling systems are far from optimized
  – Power conversion and cooling systems currently consume half or more of the electricity used in a data center; less than half of the power reaches the servers
- Better practice: DCiE = 0.7
- Best practice: DCiE = 0.85
[Chart: split between server load/computing operations and cooling & power conversions at each DCiE level]

13 High-Level Metric: Data Center Infrastructure Efficiency (DCiE), the ratio of electricity delivered to IT equipment to total data center electricity. Benchmarked average: 0.57 (higher is better). Source: LBNL benchmarking.

14 Alternative High-Level Metric: Power Usage Effectiveness (PUE), the ratio of total data center energy to IT equipment energy. Benchmarked average: 1.83 (lower is better). Source: LBNL benchmarking.
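A small Python sketch of the two high-level metrics on slides 12-14, DCiE and PUE, computed from hypothetical annual meter readings; the numbers are chosen only to land near the benchmark averages quoted above.

```python
# Minimal sketch of the high-level metrics DCiE and PUE (slides 12-14),
# using made-up meter readings purely for illustration.

def dcie(it_kwh: float, total_kwh: float) -> float:
    """DCiE = energy delivered to IT equipment / total data center energy."""
    return it_kwh / total_kwh

def pue(it_kwh: float, total_kwh: float) -> float:
    """PUE = total data center energy / IT equipment energy (= 1 / DCiE)."""
    return total_kwh / it_kwh

if __name__ == "__main__":
    it, total = 1_000_000.0, 1_830_000.0    # hypothetical annual kWh readings
    print(f"DCiE = {dcie(it, total):.2f}")  # ~0.55, near the 0.57 benchmark average
    print(f"PUE  = {pue(it, total):.2f}")   # ~1.83, the benchmark average
```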

15 Save Energy Now online profiling tool: "Data Center Pro"
Inputs:
- Description
- Utility bill data
- System information: IT, cooling, power, on-site generation
Outputs:
- Overall picture of energy use and efficiency
- End-use breakout
- Potential areas for energy efficiency improvement
- Overall energy use reduction potential

16 Other Data Center Metrics:
- Watts per square foot
- Power distribution: UPS efficiency, IT power supply efficiency
  – Uptime Institute: IT Hardware Power Overhead Multiplier (ITac/ITdc)
- HVAC
  – IT total / HVAC total
  – Fan watts/cfm
  – Pump watts/gpm
  – Chiller plant (or chiller, or overall HVAC) kW/ton
- Lighting watts per square foot
- Rack cooling index (fraction of IT within the recommended temperature range)
- Return temperature index: (RAT - SAT) / IT ΔT (a worked sketch of the last two metrics follows below)
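A rough Python sketch of the last two metrics on slide 16. The rack cooling index is implemented literally as the slide defines it (fraction of IT inlet temperatures within the recommended range), which is a simplification of the formal index; the temperature bounds and readings are illustrative assumptions.

```python
# Rough sketch of two slide-16 metrics; temperatures are illustrative, in °F.

def rack_cooling_index(inlet_temps, rec_min=65.0, rec_max=80.0):
    """Fraction of IT inlet temperatures inside the recommended range.
    The 65-80 F bounds are assumptions; substitute your ASHRAE class limits."""
    in_range = sum(rec_min <= t <= rec_max for t in inlet_temps)
    return in_range / len(inlet_temps)

def return_temperature_index(rat, sat, it_delta_t):
    """RTI = (return air temp - supply air temp) / IT equipment delta-T.
    Values near 1.0 indicate little bypass or recirculation."""
    return (rat - sat) / it_delta_t

if __name__ == "__main__":
    inlets = [68, 72, 75, 81, 70, 77]   # hypothetical rack inlet temperatures
    print(f"Rack cooling index: {rack_cooling_index(inlets):.2f}")               # 5/6 ~ 0.83
    print(f"Return temperature index: {return_temperature_index(85, 60, 25):.2f}")  # 1.00
```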

17 DOE Assessment Tools (Under Development):
- Identify and prioritize key performance metrics
  – IT equipment and software
  – Cooling (air management, controls, CRACs, air handlers, chiller plant)
  – Power systems (UPS, distribution, on-site generation)
- Action-oriented benchmarking
  – The tool will identify retrofit opportunities based on a questionnaire and benchmarking results
  – First-order assessment to feed into a subsequent engineering feasibility study

18 Energy Efficiency Opportunities Are Everywhere
Server load / computing operations:
- Load management
- Server innovation
Cooling equipment:
- Better air management
- Better environmental conditions
- Move to liquid cooling
- Optimized chilled-water plants
- Use of free cooling
Power conversion & distribution:
- High-voltage distribution
- Use of DC power
- Highly efficient UPS systems
- Efficient redundancy strategies
Alternative power generation:
- On-site generation
- Waste heat for cooling
- Use of renewable energy / fuel cells

19 Using Benchmark Results to Find Best Practices:
Examination of individual systems and components in the centers that performed well helped identify best practices in:
- Air management
- Right-sizing
- Central plant optimization
- Efficient air handling
- Free cooling
- Humidity control
- Liquid cooling
- Improving the power chain
- UPSs and equipment power supplies
- On-site generation
- Design and M&O processes

20 Air Management:
- Typically, much more air is circulated through computer room air conditioners than is required, due to mixing and short-circuiting of air
- Computer manufacturers now provide ASHRAE data sheets that specify airflow and environmental requirements
- Evaluate airflow from computer room air conditioners compared to server needs (see the sketch below)
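A minimal sketch of the airflow comparison suggested on slide 20, using the common sensible-heat rule of thumb Q[BTU/h] ≈ 1.08 × CFM × ΔT[°F]; the IT load, temperature rise, and CRAC airflow figures are illustrative assumptions.

```python
# Minimal sketch of comparing CRAC airflow against server needs (slide 20).
# Uses the standard sensible-heat rule of thumb Q[BTU/h] ~= 1.08 * CFM * dT[F].

def required_cfm(it_load_kw: float, delta_t_f: float) -> float:
    """Airflow the IT equipment actually needs to carry its heat away."""
    btu_per_hr = it_load_kw * 3412.0
    return btu_per_hr / (1.08 * delta_t_f)

if __name__ == "__main__":
    need = required_cfm(it_load_kw=500.0, delta_t_f=20.0)   # ~79,000 CFM
    supplied = 160_000.0                                    # hypothetical total CRAC airflow
    print(f"Server need: {need:,.0f} CFM; CRACs supply {supplied:,.0f} CFM "
          f"({supplied / need:.1f}x) -- excess points to mixing/short-circuiting")
```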

21 Isolating Hot and Cold:
- Energy-intensive IT equipment needs good isolation of "cold" inlet and "hot" discharge air
- Computer room air conditioner airflow can be reduced if no mixing occurs
- Overall data center temperature can be raised if air is delivered to equipment without mixing
- Coils and chillers are more efficient with higher temperature differences

22 Optimize Air Management:
- Enforce hot aisle / cold aisle arrangement
- Eliminate bypasses and short circuits
- Reduce airflow restrictions
- Proper floor tile arrangement
- Proper location of air handlers

23 Data Center Layout: Underfloor Supply [Diagram: cold aisle / hot aisle arrangement with underfloor supply; only one zone is possible with an underfloor arrangement. © 2004 ASHRAE (www.ashrae.org), reprinted by permission from ASHRAE Thermal Guidelines for Data Processing Environments; not to be copied or distributed without ASHRAE's permission.]

24 Data Center Layout: Overhead Supply [Diagram: cold aisle / hot aisle arrangement with overhead supply; VAV can be included on each branch. © 2004 ASHRAE (www.ashrae.org), reprinted by permission from ASHRAE Thermal Guidelines for Data Processing Environments; not to be copied or distributed without ASHRAE's permission.]

25 Aisle Air Containment: [Diagram: cold aisle caps, end caps, and hot aisle lids isolating cold and hot aisles. © 2004 ASHRAE (www.ashrae.org), reprinted by permission from ASHRAE Thermal Guidelines for Data Processing Environments; © APC, reprinted with permission.]

26 Best Scenario: Isolate Cold and Hot [Diagram: cold supply air at 70-75° delivered to equipment inlets, hot return air at 95-100°, cooled back to 70-75°]

27 Fan Energy Savings of ~75%: if mixing of cold supply air with hot return air can be eliminated, fan speed can be reduced (see the sketch below).
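A short sketch of why the savings are so large: per the fan affinity laws, fan power scales roughly with the cube of speed, so a modest airflow reduction yields a large power reduction. The 63% speed figure is an illustrative assumption chosen to show how roughly 75% savings can arise.

```python
# Rough sketch of fan affinity-law savings behind slide 27.

def fan_power_fraction(speed_fraction: float) -> float:
    """Fraction of full-speed fan power at a given speed fraction (cube law)."""
    return speed_fraction ** 3

if __name__ == "__main__":
    speed = 0.63                      # e.g. airflow reduced to ~63% once mixing is gone
    power = fan_power_fraction(speed)
    print(f"At {speed:.0%} speed, fans draw ~{power:.0%} of full power "
          f"(~{1 - power:.0%} savings)")   # ~25% power, ~75% savings
```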

28 Better Temperature Control Can Allow Raising the Temperature in the Entire Center [Chart: measured temperature ranges during the demonstration compared with the ASHRAE recommended range]

29 Environmental Conditions:
- ASHRAE guidelines represent consensus from all major IT manufacturers on temperature and humidity conditions
- Recommended and allowable ranges of temperature and humidity
- Required airflow

30 Temperature Guidelines at the Inlet to IT Equipment [Chart: ASHRAE allowable maximum and minimum and recommended maximum and minimum inlet temperatures]

31 Best Air Management Practices:
- Arrange racks in hot aisle / cold aisle configuration
- Try to match or exceed server airflow by aisle
  – Get thermal report data from IT if possible
  – Plan for the worst case
- Specify variable-speed or two-speed fans on servers if possible
- Provide variable-airflow fans for AC unit supply
  – Also consider using air handlers rather than CRACs for improved performance
- Use overhead supply where possible
- Provide isolation of hot and cold spaces
- Plug floor leaks and provide blanking plates in racks
- Draw return air from as high as possible
- Use CFD to inform design and operation

32 Right-Size the Design:
- Data center HVAC is often under-loaded
- Ultimate load is uncertain
- Design for efficient part-load operation
  – Modularity
  – Variable-speed fans, pumps, and compressors
- Upsize fixed elements (pipes, ducts)
- Upsize cooling towers

33 Optimize the Central Plant:
- Have one central plant (vs. distributed cooling)
- Medium-temperature chilled water
- Aggressive temperature resets
- Primary-only chilled water with variable flow
- Thermal storage
- Monitor plant efficiency

34 Design for Efficient Central Air Handling:
- Fewer, larger fans and motors
- VAV is easier
- Central controls eliminate fighting
- Outside-air economizers are easier

35 Use Free Cooling:
- Outside-air economizers
  – Can be very effective (24/7 load)
  – Controversial regarding contamination
  – Must consider humidity
- Water-side economizers
  – No contamination question
  – Can be in series with the chiller

36 Improve Humidity Control:
- Eliminate inadvertent dehumidification
  – Computer load is sensible only
  – Medium-temperature chilled water
  – Humidity control at the make-up air handler only
- Use ASHRAE allowable RH and temperature ranges
- Eliminate equipment fighting
  – Coordinated controls on distributed AHUs

37 Use Liquid Cooling of Racks and Computers:
- Water is roughly 3,500x more effective than air on a volume basis (see the check below)
- Cooling distribution is more energy efficient
- Water-cooled racks are available now; liquid-cooled computers are coming
- Heat rejection occurs at a higher temperature
  – Chiller plant is more efficient
  – Water-side economizer is more effective
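A quick check of the 3,500x figure on slide 37, comparing the volumetric heat capacity (density × specific heat) of water and air using standard textbook property values near room conditions.

```python
# Quick check of slide 37's "water is ~3,500x more effective than air" claim,
# using volumetric heat capacity (density * specific heat) at ~room conditions.

WATER_DENSITY = 998.0        # kg/m^3
WATER_CP      = 4186.0       # J/(kg*K)
AIR_DENSITY   = 1.2          # kg/m^3
AIR_CP        = 1005.0       # J/(kg*K)

water_vol_cp = WATER_DENSITY * WATER_CP   # ~4.18e6 J/(m^3*K)
air_vol_cp   = AIR_DENSITY * AIR_CP       # ~1.2e3 J/(m^3*K)

print(f"Water carries ~{water_vol_cp / air_vol_cp:,.0f}x more heat "
      "per unit volume per degree than air")   # ~3,500x
```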

38

39 Improving the Power Chain:
- Increase distribution voltage
- DC distribution
- Improve equipment power supplies
- Improve UPSs

40 Specify Efficient Power Supplies and UPSs:
- Power supplies in IT equipment generate much of the heat; highly efficient supplies can reduce IT equipment load by 15% or more
- UPS efficiency also varies widely (see the sketch below)
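A minimal sketch of how power supply and UPS efficiencies compound along the power chain; the efficiency values below are illustrative assumptions, not measured figures from the benchmarking study.

```python
# Minimal sketch of compounding power-chain losses (slide 40).
# The efficiency values are illustrative assumptions.

def wall_power_kw(useful_it_kw: float, psu_eff: float, ups_eff: float) -> float:
    """Power drawn upstream of the UPS to deliver a given useful IT load."""
    return useful_it_kw / (psu_eff * ups_eff)

if __name__ == "__main__":
    baseline = wall_power_kw(100.0, psu_eff=0.75, ups_eff=0.88)   # ~152 kW
    improved = wall_power_kw(100.0, psu_eff=0.90, ups_eff=0.95)   # ~117 kW
    print(f"Baseline draw: {baseline:.0f} kW")
    print(f"Improved draw: {improved:.0f} kW "
          f"({(1 - improved / baseline):.0%} saved before cooling savings)")
```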

41 Redundancy:
- Understand what redundancy costs; is it worth it?
- Different strategies have different energy penalties (e.g. 2N vs. N+1)
- Redundancy in electrical distribution pushes you down the efficiency curve (see the sketch below)
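A rough sketch of why redundancy pushes UPSs down the efficiency curve: spreading the same load over more online modules lowers each module's load fraction, where efficiency is worse. The module size, load, and especially the efficiency curve are purely illustrative assumptions, not the measured data shown on the next slide.

```python
# Rough sketch of the redundancy efficiency penalty (slides 41-42).
# The efficiency curve below is purely illustrative; real curves vary by model.

def load_fraction_per_module(total_load_kw: float, module_kw: float,
                             modules_online: int) -> float:
    """Fraction of rated capacity each online UPS module carries."""
    return total_load_kw / (modules_online * module_kw)

def assumed_ups_efficiency(load_fraction: float) -> float:
    """Illustrative curve: efficiency falls off at light load."""
    return 0.96 * load_fraction / (load_fraction + 0.02)

if __name__ == "__main__":
    load, module = 800.0, 500.0           # hypothetical kW figures (N = 2 modules)
    for label, n in [("N+1 (3 modules)", 3), ("2N (4 modules)", 4)]:
        lf = load_fraction_per_module(load, module, n)
        print(f"{label}: {lf:.0%} load per module, "
              f"~{assumed_ups_efficiency(lf):.1%} efficiency")
```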

42 Measured UPS Efficiency [Chart: measured UPS efficiency versus load under redundant operation]

43 Consider On-Site Generation:
- Waste heat can be used for cooling
  – Sorption cycles
  – Typically required for cost effectiveness
- Swaps roles with the utility for back-up

44 Improve Design and Operations Processes:
- Get IT and Facilities people to work together
- Use life-cycle total cost of ownership analysis
- Document design intent
- Introduce energy optimization early
- Benchmark existing facilities
- Re-commission as a regular part of maintenance

45 Top best practices identified through benchmarking

46 Design Guidelines Are Available:
- Design guides were developed based on the observed best practices
- Guides are available through the PG&E and LBNL websites
- A self-benchmarking protocol is also available
http://hightech.lbl.gov/datacenters.html

47 Federal Energy Management Program:
- Best practices showcased at Federal data centers
- Pilot adoption of Best-in-Class guidelines at Federal data centers
- Adoption of a to-be-developed industry standard for Best-in-Class at newly constructed Federal data centers
EPA:
- Metrics
- Server performance rating & ENERGY STAR label
- Data center performance benchmarking
Industrial Technologies Program:
- Tool suite & metrics
- Energy baselining
- Training and qualified specialists
- Case studies
- Certification of continual improvement
- Recognition of high energy savers
- Best practice information
- Best-in-Class guidelines
Industry:
- Tools, metrics, training
- Best practice information
- Best-in-Class guidelines
- IT work productivity standard

48 Links to Get Started:
- DOE website (sign up to stay up to date on new developments): www.eere.energy.gov/datacenters
- Lawrence Berkeley National Laboratory (LBNL): http://hightech.lbl.gov/datacenters.html
- LBNL Best Practices Guidelines (cooling, power, IT systems): http://hightech.lbl.gov/datacenters-bpg.html
- ASHRAE data center technical guidebooks: http://tc99.ashraetcs.org/
- The Green Grid Association (white papers on metrics): http://www.thegreengrid.org/gg_content/
- ENERGY STAR program: http://www.energystar.gov/index.cfm?c=prod_development.server_efficiency
- Uptime Institute white papers: www.uptimeinstitute.org

49 Contact Information:
Dale Sartor, P.E.
Lawrence Berkeley National Laboratory, Applications Team
MS 90-3011, University of California, Berkeley, CA 94720
DASartor@LBL.gov
(510) 486-5988
http://Ateam.LBL.gov

