The GridKa Installation for HEP Computing
2nd Large Scale Cluster Computing Workshop, FermiLab, October 21-22, 2002
Holger Marten
Forschungszentrum Karlsruhe GmbH (in der Helmholtz-Gemeinschaft)
Central Information and Communication Technologies Department
Hermann-von-Helmholtz-Platz 1, Eggenstein-Leopoldshafen, Germany

Helmholtz Foundation of German Research Centres (HGF)
- 15 German research centres
- largest German scientific organization by number of employees
- 6 main research areas: Structure of Matter, Earth & Environment, Traffic & Outer Space, Health, Energy, Key Technologies

Forschungszentrum Karlsruhe
- 40 institutes and divisions
- 13 research programs: Structure of Matter, Earth & Environment, Health, Energy, Key Technologies
- many close collaborations with TU Karlsruhe

Central Information and Communication Technologies Department (Hauptabteilung Informations- und Kommunikationstechnik, HIK) - "Your Key to Success"
HIK provides the institutes of the Research Centre with state-of-the-art high-performance computers and IT solutions for every purpose:
- vector computers, parallel computers, Linux clusters, workstations, ~2500 PCs
- online storage, tape robots, networking infrastructure
- printers and printing services, central software, user support, ...
About 90 persons in 7 departments.

Organisational Structure
Zentralabteilung und Sekretariat HIK (central department and secretariat; K.-P. Mickel): IT innovations, accounting, billing, budgeting, training coordination, other central tasks
Departments:
- DASI: Datendienste, Anwendungen, Systemüberwachung, Infrastruktur (data services, applications, system monitoring, infrastructure) - R. Kupsch
- HLR: Hochleistungsrechnen (high-performance computing) - F. Schmitz
- GIS: Grid-Computing Infrastruktur und Service, GridKa - H. Marten
- GES: Grid-Computing und e-Science Competence Centre - M. Kunze
- NiNa: Netzinfrastruktur und Netzanwendungen (network infrastructure and applications) - K.-P. Mickel
- PC/BK: PC-Betreuung und Bürokommunikation (PC support and office communication) - A. Lorenz
- Repro: Reprografie (reprographics) - G. Dech

Grid Computing Centre Karlsruhe (GridKa) - the mission
- German Tier-1 Regional Centre for the 4 LHC HEP experiments: test phase, main set-up phase, production phase
- German Computing Centre for 4 non-LHC HEP experiments: production environment for BaBar, CDF, D0, Compass

Regional Data and Computing Centre Germany - Requirements
For LHC (Alice, Atlas, CMS, LHCb) + BaBar, Compass, D0, CDF:
- Test Phase
- LHC Setup Phase
- 2007+: Operation + services for other sciences
Starts in 2001! Including resources for the BaBar Tier-A.

Organization of GridKa
- a project with 41 user groups from 19 German institutions
- Project Leader & Deputy: H. Marten, M. Kunze
- Overview Board: controls execution & financing, arbitrates in case of conflicts
- Technical Advisory Board: defines technical requirements

German users of GridKa: 19 institutions, 41 user groups, ~350 scientists

The GridKa Installation

Support for multiple experiments I
- Experiments ask for RedHat 6.2, 7.2, 7.1 or Fermi Linux, SuSE in different environments.
- Experiments ask for Grid (Globus, EDG, ...), batch & interactive login.
Split the CPUs into 8 parts?
- would be an administrative challenge
- likely, the whole machine would be busy only part of the time
Reconfigure for each experiment?
- non-LHC experiments produce and analyse data all the time
- who should define a time schedule?
- what about other sciences?

Support for multiple experiments II
Strategy as a starting point:
- Compute nodes = general-purpose, shared resources; the GridKa Technical Advisory Board agreed on RedHat 7.2.
- Experiment-specific software servers: "arbitrary" development environments at the beginning, pure software servers and/or Globus gatekeepers later on.

Experiment-specific software servers
- 8x dual PIII for Alice, Atlas, BaBar, ..., each with 2 GB ECC RAM, 4x 80 GB IDE RAID-5, 2x Gbit Ethernet
- Linux & basic software on demand: RH 6.2, 7.1.1, 7.2, Fermi Linux, SuSE 7.2
- used as development environment per experiment
- interactive login & Globus gatekeeper per experiment
- basic admin (root) by FZK, specific software installation by the experiment admins

Grid LAN backbone: Extreme BlackDiamond 6808
- redundant power supply, redundant management board
- 128 Gbit/s backplane
- max. 96 Gbit ports, currently 80 ports available

Compute nodes
- 124x dual PIII, each with 1 GHz or 1.26 GHz, 1 GB ECC RAM, 40 GB IDE HDD, 100 Mbit Ethernet
- Total numbers: 5 TB local disk, 124 GByte RAM, R_peak > 270 GFlops
- RedHat 7.2, OpenPBS, automatic installation with NPACI Rocks
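
A small back-of-the-envelope check of the totals quoted above. The assumption of one double-precision floating-point operation per PIII clock cycle and the unknown split between 1.0 and 1.26 GHz nodes are not from the slide, so R_peak is only bounded:

```python
# Consistency check of the compute-node totals (sketch, not from the slide).
# Assumption: 1 double-precision FLOP per cycle for a Pentium III; the
# 1.0/1.26 GHz split per node is not given, so only bounds are computed.

NODES = 124
CPUS_PER_NODE = 2
RAM_PER_NODE_GB = 1
DISK_PER_NODE_GB = 40
FLOPS_PER_CYCLE = 1          # assumed for the PIII

cpus = NODES * CPUS_PER_NODE
print("RAM total :", NODES * RAM_PER_NODE_GB, "GB")              # 124 GB
print("Disk total: %.1f TB" % (NODES * DISK_PER_NODE_GB / 1000))  # ~5 TB
print("R_peak    : %.0f - %.0f GFlops" % (cpus * 1.0 * FLOPS_PER_CYCLE,
                                          cpus * 1.26 * FLOPS_PER_CYCLE))
# 248 - 312 GFlops; the quoted '> 270 GFlops' sits inside this range.
```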

Cluster Installation, Monitoring & Management
(This is for administrators, not for a Grid resource broker.)
- scalability: many nodes to install and maintain (ca. 2000)
- heterogeneity: different (Intel-based?) hardware over time
- consistency: software must be consistent on all nodes
- manpower: administration by only a few persons

Philosophies for Cluster Management
- scalability: hierarchical instead of purely central management; combined push and pull for management information; info & event handling via a separate management network
- heterogeneity: rpm instead of disk cloning
- consistency: distribute software from a central service
- manpower: automatise as much as you can

Architecture for Scalable Cluster Administration
[Diagram] A master on the public network talks to one manager per cabinet (Manager C1 ... Cn) over a separate management network; each manager serves the nodes of its cabinet (Nodes C1 ... Cn), which are connected via the private compute network.
Naming scheme: C...; F01-003, ...

Installation - NPACI Rocks with FZK extensions
(Same master / per-cabinet manager / node hierarchy on the management subnet.)
- reinstall all nodes in < 1 h
- master: DHCP server for the managers C1...Cn; RH kickstart incl. IP configuration for the managers; rpms pushed to the managers
- managers: DHCP server for the compute IPs; install the nodes
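
The two-tier fan-out can be pictured with a minimal sketch: the master only contacts the per-cabinet managers, and each manager reinstalls its own nodes. The hostnames and the `reinstall-nodes` helper are hypothetical placeholders, not GridKa's actual Rocks/FZK tooling:

```python
# Minimal sketch of the hierarchical reinstall described above (assumptions:
# manager hostnames c01-mgr..c08-mgr and a 'reinstall-nodes' helper on each
# manager are hypothetical; GridKa uses NPACI Rocks with FZK extensions).

import subprocess

CABINETS = range(1, 9)   # hypothetical: 8 cabinets

def reinstall_cabinet(cabinet: int) -> None:
    manager = f"c{cabinet:02d}-mgr"
    # The manager acts as DHCP/kickstart server for its own nodes and
    # triggers a network reinstall of all of them.
    subprocess.run(["ssh", manager, "reinstall-nodes", "--all"], check=True)

for c in CABINETS:
    reinstall_cabinet(c)
```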

System Monitoring with Ganglia
- also installed on the fileservers
- CPU usage, bytes I/O, packets I/O, disk space, ...
- published on the Web

System Monitoring with Ganglia - Combined Push-Pull
- a Ganglia daemon on each node pushes its info via multicast within the cabinet (no routing)
- the Ganglia master requests reports from the cabinet managers (pull), writes them into a round-robin DB (~300 kB per node) and publishes them to the Web
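
A sketch of the pull half of this scheme, assuming standard Ganglia behaviour: gmond serves an XML dump of its cluster state on TCP port 8649, which a master-side script can poll and parse. The manager hostname and the two metric names shown are placeholders:

```python
# Poll one cabinet manager's Ganglia daemon and print a few metrics.
# gmond serves its XML state on TCP port 8649 by default; 'c01-mgr' and the
# metric selection are placeholders for illustration only.

import socket
import xml.etree.ElementTree as ET

def poll_gmond(host: str, port: int = 8649) -> ET.Element:
    """Read the full XML dump that gmond sends on connect."""
    chunks = []
    with socket.create_connection((host, port), timeout=5) as s:
        while True:
            data = s.recv(65536)
            if not data:
                break
            chunks.append(data)
    return ET.fromstring(b"".join(chunks))

tree = poll_gmond("c01-mgr")   # hypothetical cabinet manager
for host in tree.iter("HOST"):
    metrics = {m.get("NAME"): m.get("VAL") for m in host.iter("METRIC")}
    print(host.get("NAME"), metrics.get("cpu_user"), metrics.get("disk_free"))
```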

Cluster Management with Nagios - Combined Push-Pull (Nagios is GPL)
- nodes report to their cabinet manager via ping, SNMP and syslog
- the managers analyse the data, handle local events and report to the Nagios master
- the Nagios master analyses the data, handles events and publishes to the Web
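
The push half can be illustrated with the standard Nagios passive-check mechanism: a manager that has already analysed its local data submits the result to the master through Nagios' external command file. The file path below is installation-dependent and the host/service names are placeholders:

```python
# Submit a pre-analysed result to the Nagios master as a passive check
# (sketch; the command-file path and the host/service names are assumptions).

import time

NAGIOS_CMD_FILE = "/usr/local/nagios/var/rw/nagios.cmd"  # path is an assumption

def submit_passive_result(host: str, service: str, state: int, output: str) -> None:
    """state: 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN."""
    line = "[%d] PROCESS_SERVICE_CHECK_RESULT;%s;%s;%d;%s\n" % (
        int(time.time()), host, service, state, output)
    with open(NAGIOS_CMD_FILE, "a") as cmd:
        cmd.write(line)

# Example: node c01-017 reports that its local syslog analysis found disk errors.
submit_passive_result("c01-017", "syslog-scan", 2, "IDE disk errors detected")
```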

Combining Nagios with the Tivoli Enterprise Console?
[Diagram] The Tivoli Enterprise Console (TEC) collects events from the vector computer, parallel computer, file servers, Linux clusters and STK robots via Tivoli, with SMS, mail, an inventory DB and a Remedy workflow attached; the Nagios master of the GridKa cluster could feed into the same console.

Infrastructure
We all want to build a Linux cluster, but do we have the cooling capacity?

Closed rack-based cooling system - a common development of FZK and Knürr
- front/rear views: internal heat exchanger, CPU and disk sections
- 19'' technology, 38 height units usable, 70 x 120 cm floor space
- 10 kW cooling, redundant DC fans, temperature-controlled CPU shutdown

Closed rack-based cooling system - estimated cost reduction > 70% compared to air-conditioning the building; heat is removed via an external heat exchanger.

Build a cluster of clusters (or a campus Grid?)
[Diagram] Racks in buildings A, B, C and D connected via the LAN.

Online Storage: 59 TB gross, 45 TB net capacity
- ~500 disk drives, mixed IDE, SCSI, FC; ext3 & ufs file systems
- DAS: 2.6 TB gross, 7.2k SCSI 120 GB drives, attached to a Sun Enterprise 220R
- SAN: 2x 5.1 TB gross, 10k FC 73.4 GB drives, IBM FAStT500
- NAS: 42.2 TB gross, 19x IDE systems, each dual PIII 1.0/1.26 GHz, dual 3ware RAID controllers, 16x 5.4k IDE 100/120/160 GB drives
- SAN-IDE: 3.8 TB gross, 2 systems, 12x 5.4k IDE 160 GB drives, driven by a Linux PC
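
The gross figures of the four subsystems add up to the quoted ~59 TB; a one-line check:

```python
# Quick consistency check of the gross ("brutto") capacities listed above.
gross_tb = {
    "DAS (SCSI)":          2.6,
    "SAN (FC, 2 boxes)":   2 * 5.1,
    "NAS (IDE, 19 boxes)": 42.2,
    "SAN-IDE (2 boxes)":   3.8,
}
print("%.1f TB gross" % sum(gross_tb.values()))   # 58.8 TB, i.e. the ~59 TB quoted
# The 45 TB net figure presumably reflects RAID parity, file-system overhead and spares.
```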

Available Disk Space for HEP: 36 TB net

Scheme of (disk) storage
[Diagram] Cluster nodes (clients), Grid backbone, file servers, IDE NAS (a few TB per system), MDC, SAN, tape drives, disk subsystems.
Tests with a Linux server & FC/IDE were successful: SAN goes commodity!

Disk Storage & Management - does it scale?
- >600 automount operations per second for 150 processors
- measured IDE NAS throughput: >150 MB/s local read (2x RAID-5 + RAID-0); ... MB/s write/read over NFS; but <10 MB/s with multiple I/O and multiple users (150 jobs write to a single NAS box)
- Linux file system limit of 2 TB; disk volumes of >50 TB with flexible volume management are desirable; a mature system is needed now!
- We will test GFS & GPFS for Linux.
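
A minimal sketch of the kind of multi-writer measurement behind these numbers: N processes write simultaneously to one NFS-mounted NAS volume and the aggregate rate is reported. The mount point, writer count and file sizes are placeholders, not the original benchmark:

```python
# Aggregate-throughput sketch for many concurrent NFS writers (assumptions:
# mount point, writer count and data volume are placeholders).

import os
import time
from multiprocessing import Pool

MOUNT = "/nfs/nas01"        # hypothetical NFS mount of one NAS box
N_WRITERS = 32              # scale up towards the ~150 concurrent jobs quoted
MB_PER_WRITER = 256

def write_stream(i: int) -> float:
    path = os.path.join(MOUNT, f"stress_{i}.dat")
    block = b"\0" * (1 << 20)                 # 1 MiB blocks
    t0 = time.time()
    with open(path, "wb") as f:
        for _ in range(MB_PER_WRITER):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
    return time.time() - t0

if __name__ == "__main__":
    t0 = time.time()
    with Pool(N_WRITERS) as pool:
        pool.map(write_stream, range(N_WRITERS))
    elapsed = time.time() - t0
    print("aggregate: %.1f MB/s" % (N_WRITERS * MB_PER_WRITER / elapsed))
```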

Available Tape Space for HEP: 106 TB native

GridKa Tape System
- IBM 3584 library, ~2400 slots, LTO Ultrium with 100 GB/tape, 106 TB native available
- 8 drives, 15 MByte/s each
- backup/archive with Tivoli Storage Manager
[Diagram] FZK SAN with the FZK Tape 1 and FZK Tape 2 libraries.
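
Putting the tape numbers together (cartridge count and aggregate drive bandwidth) as a quick sanity check:

```python
# Rough numbers implied by the tape slide above.
TAPE_GB = 100
AVAILABLE_TB = 106
SLOTS = 2400
DRIVES, MB_PER_S = 8, 15

cartridges = AVAILABLE_TB * 1000 / TAPE_GB
aggregate = DRIVES * MB_PER_S
print("%d cartridges of ~%d slots" % (cartridges, SLOTS))   # ~1060 of ~2400 slots
print("%d MB/s aggregate streaming rate" % aggregate)        # 120 MB/s
print("days to write 106 TB flat out: %.1f"
      % (AVAILABLE_TB * 1e6 / aggregate / 86400))            # ~10 days
```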

Discussion of Tape Storage Management
HPSS, TSM, SAM-FS, generic HSM, ... do exist
- for vendor-specific file systems
- on vendor-specific disk systems
- for vendor-specific tape systems
File systems, data management & mass storage are strongly coupled.

Tape Storage & Data Management under discussion
[Diagram] GridKa disk storage backed by TSM for robotics, tape error handling, tape recycling and backup; SAM, dCache, DataGrid and Castor are under discussion as data-management layers (?).

Summary I - The GridKa installation
- Gbit backbone
- 250 CPUs this week, + ~60 PIV until April 2003
- experiment-specific servers
- 35 TB net disk, + 40 TB net until April 2003
- 106 TB tape, + ... TB until April 2003 (or on demand)
- a few central servers: Globus, batch, installation, management, ...
- WAN: Gbit test Karlsruhe-CERN
- FZK Grid-CA, testbed
... exclusively for HEP and Grid Computing.

Summary II
There is a whole bunch of open questions: Grid, security, GFS, GPFS, Castor, dCache, stability, reliability, HSM, certification, hundreds of users, commodity low-cost storage, OGSA, authorisation, networks, high throughput, ...
... and a pragmatic solution: start with existing technologies; analyse, learn, evolve.

The Federal Ministry of Education and Research (BMBF) regards the construction of GridKa and the German contribution to a World Wide Grid as a national task.