PDC Site Update at HP-CAST NTIG, 1 April 2008, Linköping, by Peter Graham

Presentation transcript:

Slide 1: PDC Site Update at HP-CAST NTIG, 1 April 2008, Linköping. Peter Graham, Kungl Tekniska Högskolan.

Slide 2: PDC premises since 2004.

Slide 3: (image slide, no transcript text)

Slide 4: PDC, a Centre for HP Scientific Computing (HP = High Performance). The HP Itanium cluster Lucidor…

Slide 5: …upgraded in 2007 to the HP Itanium cluster Lucidor 2.

Slide 6: Lucidor2 at a glance
- The system contains 106 nodes, each with four Itanium 2 (McKinley) 1.3 GHz CPUs.
- 22 nodes have 48 GB RAM; the rest have 32 GB RAM.
- At least 64 nodes are available to general (i.e. SNIC) users.
- The interconnect, Myrinet 2000, now uses the MX stack, which improves latency.
- The Myrinet M3-E128 switch is populated with 112 ports; each card/port has a data rate of 2+2 Gbit/s, all over 50/125 multi-mode fiber.
- Linux distribution: CentOS 5.
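As a quick sanity check on these figures, here is a minimal Python sketch (not part of the original slides) tallying the aggregate CPU count, memory and theoretical peak; the 4 FLOPs/cycle value for Itanium 2 is an assumption, not something stated on the slide.

```python
# Back-of-the-envelope figures for Lucidor2, based only on this slide.
# FLOPS_PER_CYCLE = 4 (two fused multiply-adds per clock on Itanium 2)
# is an assumption, not stated in the presentation.

NODES = 106
CPUS_PER_NODE = 4
CLOCK_GHZ = 1.3
FLOPS_PER_CYCLE = 4  # assumed for Itanium 2 (McKinley)

total_cpus = NODES * CPUS_PER_NODE                       # 424 CPUs
peak_gflops = total_cpus * CLOCK_GHZ * FLOPS_PER_CYCLE   # ~2205 GFLOP/s
total_ram_gb = 22 * 48 + (NODES - 22) * 32               # 22 fat nodes + 84 standard

print(f"CPUs: {total_cpus}, peak ~{peak_gflops / 1000:.1f} TFLOP/s, RAM {total_ram_gb} GB")
```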

Slide 7: Latest HP addition: "Key", an HP SMP system (donated to KTH in 2008). Key is a shared-memory system consisting of … GHz cores of IA64 (Intel) type with 18 MB cache. The total main memory will be 256 GB. (Named after Ellen Key.)

Slide 8: Current systems
- Lenngren: 442 nodes, Dell 1850
- Lucidor2: 106 nodes, HP Itanium2
- SweGrid: 100 nodes, South Pole Pentium 4
- SBC: 354 nodes, Dell P4 and South Pole Athlon XP
- Hebb: IBM BlueGene/L
New systems:
- Key: HP SMP, 16 nodes
- Ferlin: 680 nodes, Dell M600 blades
- SweGrid2: 90 nodes, Dell M600 blades
- Climate and turbulence system: under joint procurement with NSC, for SMHI, MISU at SU and the Department of Mechanics at KTH
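For orientation, a small Python tally of the node counts listed above; it uses only the figures on this slide, and Hebb is left out because no node count is given for it.

```python
# Tally of node counts from the "Current systems" slide.
# Hebb (IBM BlueGene/L) is omitted: its node count is not listed.
current = {"Lenngren": 442, "Lucidor2": 106, "SweGrid": 100, "SBC": 354}
new = {"Key": 16, "Ferlin": 680, "SweGrid2": 90}

print("current nodes (excl. Hebb):", sum(current.values()))      # 1002
print("new nodes:", sum(new.values()))                           # 786
print("total:", sum(current.values()) + sum(new.values()))       # 1788
```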

Slide 9: Infrastructure, power & cooling
- Change of transformer from 800 kVA to 2 MVA: done
- Upgrade of UPS from 400 kVA to 1100 kVA: in progress
- Diesel generator, 400 kVA: existing
- Upgrade of cooling exchanger: done
- Adding a 300 kW APC cooling hut for Ferlin and SweGrid2
- Addition of a 300 kW chiller for redundant cooling: in progress
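A minimal sketch of the headroom these upgrades buy, using only the figures above; treating kVA as roughly equal to kW (power factor close to 1) is a simplification made here, not something stated on the slide.

```python
# Upgrade ratios for the electrical infrastructure listed above.
# Assumption: kVA ~ kW (power factor close to 1) for rough comparison.
transformer_old_kva, transformer_new_kva = 800, 2000
ups_old_kva, ups_new_kva = 400, 1100

print(f"transformer capacity: x{transformer_new_kva / transformer_old_kva:.2f}")  # x2.50
print(f"UPS capacity:         x{ups_new_kva / ups_old_kva:.2f}")                  # x2.75
print("added cooling: 300 kW APC hut + 300 kW redundant chiller")
```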

Slide 10: Price-performance vs energy
- Power per node: … W
- The energy cost for 300 W over 4 years is nearly 15 kSEK.
- If you pay 15 kSEK per node, you spend an equal amount on investment and on energy.
- Developing more energy-efficient nodes will give a competitive advantage.
- We would prefer to spend money on application experts rather than on energy bills.
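The 15 kSEK figure can be checked with a few lines of Python; the electricity price is not given on the slide, so this sketch back-calculates the implied price per kWh (which may bundle in cooling and other overheads) rather than quoting one.

```python
# Check of the slide's energy-cost estimate: 300 W per node over 4 years.
# The per-kWh price is back-calculated from the ~15 kSEK figure, i.e. it
# is implied by the slide, not quoted on it.
power_w = 300
years = 4
kwh = power_w / 1000 * 24 * 365 * years               # ~10,512 kWh over 4 years

stated_cost_ksek = 15
implied_sek_per_kwh = stated_cost_ksek * 1000 / kwh   # ~1.43 SEK/kWh

print(f"{kwh:,.0f} kWh over {years} years")
print(f"implied electricity price: ~{implied_sek_per_kwh:.2f} SEK/kWh")
```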

Slide 11: Summing up
- PDC is tripling its power capacity to meet the needs of the new systems coming in.
- High-density cooling is required for the new systems, at around or above 20 kW per rack.
- Energy efficiency is becoming more important, both in terms of cost and out of environmental concern.
- Our new patch cables… (for the UPS)
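Purely as an illustration, combining the ~20 kW per rack density mentioned here with the 300 kW APC cooling hut from the infrastructure slide gives a rough rack count; assuming the hut's full capacity goes to rack heat load is a simplification made here.

```python
# Rough rack count: 300 kW cooling hut vs ~20 kW per high-density rack.
# Assumes all hut capacity is available for rack heat load.
hut_kw = 300
per_rack_kw = 20
print(f"~{hut_kw // per_rack_kw} racks at {per_rack_kw} kW each per {hut_kw} kW hut")  # ~15
```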