
1 Power and Cooling at Texas Advanced Computing Center Tommy Minyard, Ph.D. Director of Advanced Computing Systems 42nd HPC User Forum September 8, 2011

2 TACC Mission & Strategy
The mission of the Texas Advanced Computing Center is to enable scientific discovery and enhance society through the application of advanced computing technologies. To accomplish this mission, TACC:
–Evaluates, acquires & operates advanced computing systems
–Provides training, consulting, and documentation to users
–Collaborates with researchers to apply advanced computing techniques
–Conducts research & development to produce new computational technologies
Resources & Services | Research & Development

3 TACC Resources are Terascale, Comprehensive and Balanced
–HPC systems to enable larger simulations and analyses and faster turnaround times
–Scientific visualization resources to enable large data analysis and knowledge discovery
–Data & information systems to store large datasets from simulations, analyses, digital collections, instruments, and sensors
–Distributed/grid/cloud computing servers & software to integrate all resources into computational grids and clouds
–Network equipment for high-bandwidth data movement and transfers between systems

4 Recent History of Systems at TACC
–2001 – IBM Power4 system, 1 TFlop, ~300 kW
–2003 – Dell Linux cluster, 5 TFlops, ~300 kW
–2006 – Dell Linux blade cluster, 62 TFlops, ~500 kW, 16 kW per rack
–2008 – Sun Linux blade cluster, Ranger, 579 TFlops, 2.4 MW, 30 kW per rack
–2011 – Dell Linux blade cluster, Lonestar 4, 302 TFlops, 800 kW, 20 kW per rack
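The figures on this slide can be turned into a rough power-efficiency trend. A minimal sketch, using only the TFlops and power numbers quoted above (the per-system values are taken directly from the slide; the TFlops-per-MW metric itself is an illustrative derivation, not something the slide computes):

```python
# Power efficiency of TACC systems, using the slide's quoted figures.
# (name, peak TFlops, total power in MW)
systems = [
    ("IBM Power4 (2001)",    1.0, 0.30),
    ("Dell cluster (2003)",  5.0, 0.30),
    ("Dell blades (2006)",  62.0, 0.50),
    ("Ranger (2008)",      579.0, 2.40),
    ("Lonestar 4 (2011)",  302.0, 0.80),
]

for name, tflops, mw in systems:
    # TFlops delivered per MW of facility power
    print(f"{name}: {tflops / mw:.1f} TFlops/MW")
```

Each generation delivers markedly more computation per megawatt, even as per-rack density climbs from 16 kW to 30 kW.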

5 TACC Data Centers
Commons Center (CMS)
–Originally built in 1986 with 3,200 sq. ft.
–Designed to house large Cray systems
–Retrofitted multiple times to increase power/cooling infrastructure, ~1 MW total power
–18” raised floor, standard CRAC cooling units
Research Office Complex (ROC)
–Built in 2007 as part of new office building
–6,400 sq. ft., 1 MW original design power
–Refitted to support 4 MW total power for Ranger
–30” raised floor, CRAC and APC In-Row Coolers

6 CMS Data Center Previously

7 CMS Data Center Now

8 Lonestar 4
–Dell Intel 64-bit Xeon Linux cluster
–22,656 CPU cores (302 TFlops)
–44 TB memory, 1.8 PB disk

9 Lonestar 4 Front Row

10 Lonestar 4 End of Rows

11 Lonestar 4 Electrical Panels

12 ROC Data Center
–Houses Ranger, Longhorn, Corral, and other support systems
–Built in 2007 and already nearing capacity

13 Ranger

14 Data Center of the Future
–Exploring flexible and efficient data center designs
–Planning for 50 kW per rack, 10 MW total system power in the near future
–Prefer 480 V power distribution to racks
–Exotic cooling ideas not excluded
 –Thermal storage tanks
 –Immersion cooling
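The planning targets above imply concrete electrical requirements. A back-of-envelope sketch, assuming three-phase 480 V distribution and unity power factor (both assumptions are mine; the slide only states the per-rack power, total power, and preferred voltage):

```python
import math

RACK_KW = 50.0   # planned per-rack power (from slide)
TOTAL_MW = 10.0  # planned total system power (from slide)
VOLTS = 480.0    # preferred distribution voltage (from slide)
PF = 1.0         # assumed unity power factor (illustrative)

# Rack count implied by the total-power and per-rack targets
racks = TOTAL_MW * 1000.0 / RACK_KW

# Per-rack line current for a balanced three-phase feed: I = P / (sqrt(3) * V * pf)
amps = RACK_KW * 1000.0 / (math.sqrt(3) * VOLTS * PF)

print(f"{racks:.0f} racks, ~{amps:.0f} A per rack at {VOLTS:.0f} V three-phase")
```

At 50 kW per rack, a 480 V three-phase feed keeps per-rack current near 60 A; the same rack at 208 V would draw well over twice that, which is one reason higher distribution voltages are preferred at these densities.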

15 Immersion Cooling – Green Revolution Cooling
–Servers suspended in mineral oil
–Improves heat transfer; oil transports heat away more efficiently than air
–Requires retrofit of servers to remove fans
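The "more efficient transport of heat than air" claim can be illustrated with volumetric heat capacity. A rough comparison using typical textbook property values (the specific densities and heat capacities below are illustrative assumptions, not figures from the slide):

```python
# Volumetric heat capacity comparison: mineral oil vs. air.
# Property values are typical textbook figures at room temperature (assumed).
oil_rho, oil_cp = 850.0, 1900.0  # kg/m^3, J/(kg*K) for mineral oil
air_rho, air_cp = 1.2, 1005.0    # kg/m^3, J/(kg*K) for air

# Heat absorbed per unit volume per degree of temperature rise
oil_vol_cap = oil_rho * oil_cp   # ~1.6 MJ/(m^3*K)
air_vol_cap = air_rho * air_cp   # ~1.2 kJ/(m^3*K)

ratio = oil_vol_cap / air_vol_cap
print(f"Oil carries ~{ratio:.0f}x more heat per unit volume than air")
```

The oil's roughly thousand-fold advantage per unit volume is why immersed servers no longer need fans: natural convection and slow pumping move far more heat than high-velocity airflow can.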

16 Summary
–Data center and per-rack power densities are increasing
–The efficiency of delivering power and removing the generated heat is becoming a substantial concern
–Air cooling is reaching the limits of its capability
–Future data centers will require more "exotic" or customized cooling solutions for very high power densities

