02/24/09 Green Data Center project Alan Crosswell.

02/24/09 2 Agenda
–The Data Center Status Quo
–Future State Goals
–Our NYSERDA Advanced Concepts Datacenter proposal

02/24/09 3 The Data Center Status Quo
Architectural
–Built in 1963, updated somewhat in the 1980s.
–5,000 sf (465 m²) raised-floor machine room space.
–1,750 sf (163 m²) raised-floor space, now offices.
–12" raised floor.
–Adequate support spaces nearby: staff, staging, storage, mechanical & fire suppression.

02/24/09 4 The Data Center Status Quo (cont'd)
Electrical
–Supply: 3-phase 208V from an automatic transfer switch.
–Distribution: 208V to wall-mounted panels; 120V to most servers.
–No central UPS; lots of rack-mounted units.
–Generator: 1750 kW, shared with other users. At capacity.
–No metering. (Spot readings every decade or so. :-)
–IT demand load tripled over recent years: from 107 kVA to 335 kVA (2008).
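A quick arithmetic check of the "tripled" claim above. The 107 kVA and 335 kVA figures are from the slide; the power factor used to estimate real power is an assumed illustration, not a value from the presentation.

```python
# Sanity-check the "IT demand load tripled" claim from the slide.
# 107 kVA and 335 kVA come from the slide; the power factor is assumed.

def growth_factor(start_kva: float, end_kva: float) -> float:
    """Ratio of ending to starting apparent power demand."""
    return end_kva / start_kva

def apparent_to_real_kw(kva: float, power_factor: float = 0.9) -> float:
    """Convert apparent power (kVA) to real power (kW) for an assumed PF."""
    return kva * power_factor

factor = growth_factor(107, 335)
print(f"growth factor: {factor:.2f}x")  # ~3.13x, i.e. roughly tripled
print(f"2008 real power at assumed PF 0.9: {apparent_to_real_kw(335):.1f} kW")
```

The ratio comes out near 3.1, consistent with the slide's rounding to "tripled".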

02/24/09 5 The Data Center Status Quo (cont'd)
Mechanical
–On-floor CRAC units served by campus chilled water.
–Also served by backup glycol dry coolers.
–Supplements a central overhead chilled-air system.
–Heat load is shared between the overhead system and the CRAC units.
–No hot/cold aisles; rows are in various orientations.
–Due to the tripling of demand load, the backup (generator-powered) CRAC units lack sufficient capacity.

02/24/09 6 The Data Center Status Quo (cont'd)
IT systems
–A mix of mostly administrative (non-research) systems.
–Most servers dual-corded, 120V power input.
–Many ancient servers.
–Due to the lack of a room UPS, each rack has UPSes taking up 30-40% of its space.
–Lots of spaghetti in the racks and under the floor.

02/24/09 7 The Data Center Status Quo (cont'd)
Other data centers
–Many small school, departmental & research server rooms all over the place.
–Most lack electrical or HVAC backup.
–Many could be better used as academic space (labs, offices, classrooms).
–Growth in research HPC is putting increasing pressure on these server rooms.
–Lots of wasted money building new server rooms for HPC clusters that are part of faculty startup packages, etc.

02/24/09 8 Future State Goals – Next 5 Years
–Begin phased upgrades of the Data Center to improve power and space efficiency. Overall cost ~$20M.
–Consolidate and replace pizza-box servers with blades (& virtualization).
–Consolidate and simplify storage systems.
–Accommodate growing demand for HPC research clusters.
–Accommodate server needs of the new Interdisciplinary Science Building.
–Develop internal cloud services.
–Explore external cloud services.
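To give a feel for why the pizza-box-to-blade consolidation above saves energy, here is a minimal back-of-the-envelope sketch. Every number in it (fleet size, per-server wattage, blade chassis draw) is an assumed illustration, not a figure from the presentation.

```python
# Rough energy-savings estimate for server consolidation.
# All fleet sizes and wattages below are assumed illustrations.

def annual_kwh(watts: float, hours: float = 8760) -> float:
    """Energy drawn over one year (8760 h) at constant power, in kWh."""
    return watts * hours / 1000

# Assumed: 100 old 1U servers at ~300 W each, consolidated via
# virtualization onto 2 blade chassis drawing ~4 kW each.
old_kwh = 100 * annual_kwh(300)
new_kwh = 2 * annual_kwh(4000)
savings_kwh = old_kwh - new_kwh
print(f"old fleet:  {old_kwh:,.0f} kWh/yr")
print(f"new blades: {new_kwh:,.0f} kWh/yr")
print(f"savings:    {savings_kwh:,.0f} kWh/yr")
```

Under these assumptions the consolidated fleet draws roughly a quarter of the original energy, before even counting the reduced cooling load.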

02/24/09 9 Future State Goals – Next 10 Years
–Build a new data center of 10,000-15,000 sf.
–Consolidate many small server rooms.
–Significant use of green-energy cloud computing resources.

02/24/09 10 Our NYSERDA Proposal
New York State Energy Research & Development Authority, Program Opportunity Notice 1206.
–~$1M total ($447K from NYSERDA, awarded pending contract).
–Improve space & power efficiency of primarily administrative servers.
–Contribute to Columbia's PlaNYC carbon-footprint reduction goal.
–Make room for shared research computing in the existing data center.
–Measure and test vendor claims of energy-efficiency improvements.

02/24/09 11 Our NYSERDA Proposal – Specific Tasks
–Identify 30 old servers to replace.
–Instrument server power consumption and data center heat load, in "real time" via SNMP.
–Establish a PUE profile (using the DoE DC Pro survey tool).
–Implement 9 racks of high-density (in-row/in-rack) cooling.
–Implement a proper UPS and high-voltage distribution.
–Compare old and new research clusters' power consumption for the same workload.
–Implement advanced server power management and measure the improvements.
–Review with internal, external, and research faculty advisory groups.
–Communicate results.
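As context for the PUE-profiling task above: Power Usage Effectiveness is total facility power divided by IT equipment power, so 1.0 is the ideal and older rooms often land near 2.0. A minimal sketch (the sample readings are invented illustrations, not Columbia's measurements):

```python
# Computing Power Usage Effectiveness (PUE) from power readings.
# PUE = total facility power / IT equipment power.
# The sample readings below are invented illustrations.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE; 1.0 is ideal (all facility power reaches IT equipment)."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical spot readings: 335 kW of IT load, with cooling and
# distribution losses bringing the facility total to 670 kW.
print(f"PUE = {pue(670, 335):.2f}")
```

Tracking this ratio before and after the in-row cooling and high-voltage distribution work is what lets the project quantify its efficiency gains.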