High Performance Computing (HPC) Data Center Proposal
Imran Latif, Facility Project Manager, Scientific & Enterprise Computing Data Centers at BNL
10/14/2015

2 Quick Facts
- Located on the 1st floor of Building 515
- Built in the 1960s to house the mainframe computers of that era
- 22,000 sq. ft. gross area
- 24 million kWh/year energy usage: 60% to power, 40% to cooling
- 3.2 MW emergency backup power:
  - 1 MW battery UPS (no generator)
  - 2.2 MW flywheel UPS coupled with a 2.3 MW diesel generator
- 1,200 tons of cooling via chilled-water CRAH units; 80% of cooling is unavailable on backup power
- Calculated PUE: 1.9
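As a rough illustration of the arithmetic behind these figures, the sketch below derives the implied IT load from the two numbers quoted on the slide, 24 million kWh/year and a calculated PUE of 1.9, using the standard definition PUE = total facility energy / IT equipment energy. The derived values are implications of those two inputs, not metered B515 readings.

```python
# Minimal PUE arithmetic sketch. Inputs are the two figures quoted on
# the slide; everything derived from them is an implication of those
# numbers, not a measured value.

total_kwh = 24_000_000   # annual facility energy usage (from the slide)
pue = 1.9                # calculated PUE (from the slide)

it_kwh = total_kwh / pue           # PUE = total / IT  =>  IT = total / PUE
overhead_kwh = total_kwh - it_kwh  # cooling + power-distribution losses

print(f"Implied IT equipment energy: {it_kwh / 1e6:.1f}M kWh/yr")
print(f"Implied overhead energy:     {overhead_kwh / 1e6:.1f}M kWh/yr "
      f"({overhead_kwh / total_kwh:.0%} of total)")
```

By this arithmetic, nearly half of the facility's annual energy goes to cooling and distribution overhead rather than to computing, which is the overhead the proposed renovation targets.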

3 Core Networking & Cyber Security Systems

4 NY Blue/NSLS-II/NNDC

5 Blue Gene/Q CFN Clusters

6 RHIC Computing Facility Linux Farm

7 RHIC & ATLAS Computing Facility Servers & Storage

8 Issues and Impacts in the Current Data Center: Limiting Layout & Cooling Deficiencies
- The existing raised floor is only 12" deep and highly congested.
- The physically fragmented floor plan, a product of piecemeal evolution, contributes to inefficiency and prevents optimization of the space.
- The existing cooling systems are obsolete and unreliable.
- No backup chilled-water service exists in B515.
- "Chaos cooling" practices (cooling the room air rather than the equipment) waste energy and make cooling inefficient (see the airflow sketch below).
Impact: limits the ability to meet increasing computing equipment cooling requirements and puts computing equipment and reliability at risk.
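To make the "chaos cooling" point concrete, here is a minimal sensible-heat airflow sketch using the standard rule of thumb Q(BTU/hr) ≈ 1.08 × CFM × ΔT. The 1 MW IT load and the supply/return temperature differences are illustrative assumptions, not B515 measurements; the point is only that when hot and cold air mix in the room, the return-air ΔT collapses and the CRAH fans must move far more air to remove the same heat.

```python
# Rough sensible-heat airflow comparison. Illustrates why cooling the
# room air ("chaos cooling") is less efficient than cooling the
# equipment: mixed air lowers the supply/return delta-T, so more airflow
# (and fan energy) is needed to remove the same heat load.
# All numbers are illustrative assumptions, not measured B515 values.

BTU_PER_KW_HR = 3412  # 1 kW of IT load rejects 3412 BTU/hr of heat
AIR_FACTOR = 1.08     # BTU/hr removed per CFM per deg F (standard air)

def required_cfm(it_load_kw: float, delta_t_f: float) -> float:
    """Airflow (CFM) needed to remove it_load_kw at a given delta-T."""
    return it_load_kw * BTU_PER_KW_HR / (AIR_FACTOR * delta_t_f)

it_load_kw = 1000.0                         # hypothetical 1 MW IT load
contained = required_cfm(it_load_kw, 20.0)  # contained aisles: ~20 F delta-T
chaotic = required_cfm(it_load_kw, 10.0)    # mixed room air: ~10 F delta-T

print(f"Contained aisles: {contained:,.0f} CFM")
print(f"Chaos cooling:    {chaotic:,.0f} CFM "
      f"({chaotic / contained:.1f}x the airflow for the same load)")
```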

9 Issues and Impacts in the Current Data Center: Power System Deficiencies
- Total facility power availability is inadequate for modern computing requirements, leaving no power capacity for growth.
- The existing power infrastructure degrades reliability because UPS power distribution is insufficient.
Impact: limits the ability to meet increasing computing power requirements, prevents growth and the ability to meet future requirements, and puts reliability at risk.

10 Issues and Impacts in the Current Data Center: Inadequate/Limited Physical Space
- Inability to leverage BNL's existing strengths in data-centric and high-throughput computational science, or to capture the efficiency and productivity gains of co-locating computational support staff with multiple user communities.
- Inability to leverage current and future joint ventures with other institutions and general industry.
Impact: limits the ability to meet increasing computing requirements, prevents growth and the ability to meet future requirements, and puts reliability at risk.

11 Existing Infrastructure Deficiencies
- Inappropriately located electrical equipment
- Tape storage (minimal cooling requirements) co-located with high-density equipment
- Inefficient location of power distribution equipment on the computing floor
- Lack of adequate wire management systems
- Stored materials on the data center floor
- Inability to maximize use of available rack space

12 What To Do
- A mission need exists to provide computational and data storage support to current and planned particle physics experiments at both RHIC and the ATLAS detector at CERN.
- A capability gap exists: the current computing facility has become functionally obsolete relative to the power, cooling, and reliability requirements of modern mid-scale computing.
- Failure to close this gap will impact the mission readiness of the RACF and impose risk on research funded by NP and HEP, as well as on other programs that rely on BNL's computational and data storage capabilities.

13 Proposed Data Center: Building 725
Proposed scope for the B725 renovation alternative:
- New cooling infrastructure and backup cooling capability for initial (4-5 yr.) and long-term (15+ yr.) needs:
  - Chillers and cooling towers
  - Secondary chilled-water service (for backup)
  - Internal distribution with provision for incremental growth
- New electrical infrastructure:
  - Switchgear, generators, UPS/flywheel equipment
  - Internal distribution with provision for incremental growth
- Architectural modifications:
  - Demolition and removals
  - New roofing and exterior window systems
  - Raised floors, ceilings, lighting, finishes
  - ADA upgrades
- Life safety:
  - Fire suppression and detection systems
  - Emergency lighting
  - Lightning protection

14 Proposed Layout: 725 Data Center

15 725 Data Center: Preliminary Schedule
Preliminary Critical Decision schedule. CD-2/3A allows for potential early procurement of infrastructure equipment (e.g., chillers, generators, flywheels).
- CD-0 Approve Mission Need: FY 2015
- CD-1 Approve Alternative Selection and Cost Range: FY 2016
- CD-2/3A Approve Performance Baseline: FY 2017
- CD-3B Approve Start of Construction: FY 2018
- CD-4 Approve Project Completion: FY 2020

16 Thank you