
1 Thermodynamic Feasibility
Anna Haywood, Jon Sherbeck, Patrick Phelan, Georgios Varsamopoulos, Sandeep K. S. Gupta

2 INTRODUCTION
Thermal architecture side: II-EN: BlueTool: Infrastructure for Innovative Cyberphysical Data Center Management Research, NSF-funded award #0855277.
In response to an acknowledged problem: the increasing energy use of data centers, currently about 3% of all US energy consumption, roughly 50% of which is used for cooling the data center.

3 The BlueTool project: http://impact.asu.edu/BlueTool/

4 Objective
The overall objective of the thermal project is to reduce the grid power consumption of the cooling system for data centers.
HOW? Use a heat-driven LiBr absorption chiller to reduce the cooling load on a typical Computer Room Air Conditioner (CRAC); the heat that drives the chiller originates from the data center itself.

5 Challenges
1. Generating enough high-temperature heat from the blade components inside the data center (target heat source = the CPUs).
2. Capturing and transporting that heat effectively and efficiently to a LiBr heat-activated absorption unit, despite the dynamic, fluctuating heat output of the CPUs.

6 Overall Concept: Capture 90% of the CPU heat and send it to the chiller
CPUs dissipate most of the heat on the board, so the high heat fraction (HHF) is large; we target a capture fraction (CF) of 0.90 of the CPU heat.
Low-loss LiBr heat-activated chiller: temperature budget = 70-95 °C.

7 High Heat Fraction
Figure 1. IT server equipment: 42U rack, 7U chassis (10 blades/chassis, 5 chassis/rack), IT blade server, CPUs.
Dell DataCenter Capacity Planner Tool: 103 W/CPU, 294 W/blade, and 19.78 kW/rack.
HHF = CPU heat / blade heat = 206 W / 294 W = 0.7.

8 Target Heat Source
Dell PE 1855 with Intel® Xeon® Nocona processors, 3.20 GHz.
2 CPUs/server blade; 103 W/CPU = 206 W/blade TDP @ 72 °C.

9 How much heat is required from the CPUs to run the chiller at its best performance?
Cooling capacity: 10 tons = 35.2 kW, with COP_C = 0.7.
Goal: 35.2 kW / 0.7 ≈ 50.3 kW of driving heat.
This translates into 269 server blades of the Dell PE 1855 with dual Xeon Nocona CPUs.
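As a quick check of this arithmetic, the sketch below recomputes the required driving heat and the implied blade count; the per-blade figures come from the slides, while the assumption that only the captured fraction (CF = 0.90) of each blade's CPU heat reaches the chiller is ours, which is why it lands near, not exactly on, the slide's 269 blades.

# Minimal sketch: driving heat needed by a 10-ton LiBr chiller at COP_C = 0.7,
# and the number of Dell PE 1855 blades needed to supply it.
COOLING_CAPACITY_KW = 35.2     # 10 tons of cooling (slide value)
COP_CHILLER = 0.7              # optimum COP of the LiBr absorption chiller
CPU_HEAT_PER_BLADE_KW = 0.206  # 2 CPUs x 103 W
CAPTURE_FRACTION = 0.90        # fraction of CPU heat actually delivered (assumed)

q_drive_kw = COOLING_CAPACITY_KW / COP_CHILLER
blades = q_drive_kw / (CPU_HEAT_PER_BLADE_KW * CAPTURE_FRACTION)

print(f"Required driving heat: {q_drive_kw:.1f} kW")  # ~50.3 kW
print(f"Blades needed: {blades:.0f}")  # ~271 here; the slide reports 269,
                                       # presumably from slightly different rounding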

10 Apply Steady-State System Analysis
Data center + cooling system layout.
System equations applied to PUE; apply the equations to gauge system performance and analyze the power effectiveness of the data center.
PUE metric (Power Usage Effectiveness): the ratio of total power delivered to the facility to the power used exclusively for the IT equipment.
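In code form the metric is a one-line ratio; this tiny helper (names are ours, for illustration) is reused conceptually in the numeric checks further below.

# Power Usage Effectiveness: total facility power divided by IT power.
def pue(total_facility_power_kw: float, it_power_kw: float) -> float:
    return total_facility_power_kw / it_power_kw

print(pue(150.0, 100.0))  # 1.5, "Efficient" on the Green Grid scale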

11 System diagram: work and heat flow paths
Contributors: Dr. Phelan, Anna Haywood, Jon Sherbeck, Phani Domalapally

12 PUE is traditionally defined assuming conventional electric supply configurations. For non-conventional configurations, using alternative sources or reusing heat, PUE may fall below 1.0.
Industry-benchmarked PUE values (Green Grid, 2009):
PUE   Level of Efficiency
3.0   Very Inefficient
2.5   Inefficient
2.0   Average
1.5   Efficient
1.2   Very Efficient

13 PUE applied to our system diagram
The expression relates the electric power for the compressor to the heat load on the CRAC, and accounts for the removal of the CPU heat load.

14 Equations relating PUE to HHF, CF, and Q_EXT
Q_ext = HHF × CF × Q_IT: the portion of rack heat driving the chiller, extracted from the CPUs to the storage.
Q_L = Q_IT − COP_C × (Q_ext + Q_EXT): the total heat flow from the data center as a load on the cooling equipment; the chiller's cooling capacity, COP_C × (Q_ext + Q_EXT), reduces the heat load on the CRAC.
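A compact way to read these relations is as a few lines of Python; this sketch (variable and function names are ours) mirrors the equations above, assuming the data center's IT heat output Q_IT equals its electric draw.

# Heat-flow relations behind the PUE model (a sketch, not the authors' code).
HHF = 0.70      # high heat fraction: CPU heat / blade heat
CF = 0.90       # capture fraction of CPU heat delivered to storage
COP_C = 0.7     # absorption chiller COP
COP_CRAC = 3.9  # CRAC (vapor-compression) COP

def crac_load_kw(q_it_kw: float, q_ext_kw: float = 0.0) -> float:
    """Heat left for the CRAC after the chiller's cooling is subtracted."""
    q_capture = HHF * CF * q_it_kw              # rack heat driving the chiller
    q_cooling = COP_C * (q_capture + q_ext_kw)  # chiller's cooling capacity
    return q_it_kw - q_cooling

q_it = 118.70  # 6 racks at 19.78 kW each (slide 16)
print(f"Captured heat: {HHF * CF * q_it:.2f} kWth")  # ~74.8 kWth (slide: 74.77)
print(f"CRAC load, no external heat: {crac_load_kw(q_it):.1f} kW")
print(f"Compressor power: {crac_load_kw(q_it) / COP_CRAC:.1f} kWe")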

15 Rearranging terms and simplifying, PUE can become less than one.
The cooling portion (heat removal) is divided by COP_CRAC to represent electric power: W_in = Q_L / COP_CRAC.
The Q_EXT term suggests that external heating can generate excess cooling that can be "exported," i.e., used to cool adjacent rooms or facilities (power out).
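Under those definitions, the rearrangement can be written out explicitly; the derivation below is our reading of the slide, taking Q_IT equal to the IT electric power P_IT and ignoring the small pump terms.

\[
\mathrm{PUE} = \frac{P_{\mathrm{IT}} + W_{\mathrm{in}}}{P_{\mathrm{IT}}}
= 1 + \frac{Q_{\mathrm{IT}} - \mathrm{COP}_{C}\left(\mathrm{HHF}\cdot\mathrm{CF}\cdot Q_{\mathrm{IT}} + Q_{\mathrm{EXT}}\right)}{\mathrm{COP}_{\mathrm{CRAC}}\, P_{\mathrm{IT}}}
\]

With Q_IT = P_IT this simplifies to

\[
\mathrm{PUE} = 1 + \frac{1 - \mathrm{COP}_{C}\,\mathrm{HHF}\cdot\mathrm{CF}}{\mathrm{COP}_{\mathrm{CRAC}}} - \frac{\mathrm{COP}_{C}\, Q_{\mathrm{EXT}}}{\mathrm{COP}_{\mathrm{CRAC}}\, P_{\mathrm{IT}}},
\]

so a large enough Q_EXT drives PUE below 1; a negative W_in corresponds to the "exported" cooling.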

16 Calculated values for our data center with 6 racks
HHF = 0.70
Total IT power (6 racks): 118.70 kWe
Captured CPU heat: 74.77 kWth
Auxiliary electric powers: 0.55 kWe, 0.50 kWe, 0.21 kWe
Exported cooling credit: -1.88 kWe
Coefficients of performance: COP_CRAC = 3.9 (typical refrigeration COP), COP_C = 0.7 (optimum)
Capture Fraction (CF) = 0.90
Expected PUE = 0.99
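One simple way to reproduce the slide's bottom line is to take the PUE numerator as IT power plus the small auxiliary electric powers, with the exported cooling entering as the -1.88 kWe credit; that bookkeeping is our assumption, but it matches the expected value.

# Numeric check of the expected PUE (assumed bookkeeping, see above).
it_power_kwe = 118.70                 # 6 racks x 19.78 kWe
aux_kwe = [0.55, 0.50, 0.21, -1.88]   # small electric loads; -1.88 is exported cooling
total_kwe = it_power_kwe + sum(aux_kwe)
print(f"PUE = {total_kwe / it_power_kwe:.2f}")  # 0.99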

17 PUE "very efficient" for our data center: PUE = 0.99

18 PUE even better with a solar source added: PUE = 0.81

19 Conclusion
The potential exists to utilize some of the waste heat generated by data centers to drive absorption chillers, which would then relieve some of the cooling load on the conventional computer room air conditioner (CRAC). By reusing data center waste heat and supplementing the high-temperature heat captured from the CPUs with an external source of heating, such as solar energy, it is theoretically possible to achieve a PUE (Power Usage Effectiveness) ratio of less than one.

20 Extra material

21 Example using reused heat
Take an initial PUE of 1.2, so 83% of the facility power (1/1.2) goes to the servers. If 30% of the heat dissipated by the IT equipment can be utilized, that reuse credit drops the effective PUE from 1.2 to 1.2 − 0.3 = 0.9.
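The same arithmetic in a couple of lines of Python (treating the reused heat as a credit against total facility power, per unit of IT power, is our reading of the slide):

# Reused heat as a credit against total facility power (per unit of IT power).
initial_pue = 1.2      # total facility power / IT power
reuse_fraction = 0.30  # fraction of dissipated IT heat that is utilized
effective_pue = initial_pue - reuse_fraction * 1.0  # credit per unit IT power
print(f"IT share of facility power: {1 / initial_pue:.0%}")  # 83%
print(f"Effective PUE: {effective_pue:.1f}")                 # 0.9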

