TRIUMF Site Report – HEPiX/HEPNT – NIKHEF, Amsterdam, May 19-23/2003
Corrie Kost



Overview
- Move to new facility
- Printing moves to CUPS
- Print/Copy/Scan-to-… Integration
- WestGrid components at TRIUMF
- C4-Testbed Proposal

Move to ISAC II Building

New Facilities in ISAC II

Revised LAN

Current LAN

Move to New Facility
- Weekend of March 29/2003 – two Passport 8600s installed
- Air-conditioner failure the afternoon of March 29
- Web/file server disk problems:
  - Old system failed and was replaced
  - New system's disk then failed during backup
  - diskcopy does not work for EXT3
  - dd recovery left ~37,000 lost+found entries
  - Murphy's law in full effect
  - Last gasp: mirrored onto a spare Linux system
- Legacy clean-up / move to racks / change servers
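The dd trouble above is typical of block-level copies of a live ext3 volume: the copy captures a half-written journal, and fsck then sweeps orphaned inodes into lost+found. A file-level copy avoids this entirely. A minimal sketch with a tar pipe, demonstrated on throwaway directories standing in for the real web/file tree:

```shell
#!/bin/sh
# Copy a directory tree file-by-file with a tar pipe instead of a raw
# block copy (dd). A file-level copy never sees the filesystem's
# journal, so there is nothing for fsck to recover afterwards.
# Demo directories below are stand-ins for the real server paths.
SRC=$(mktemp -d)
DST=$(mktemp -d)
mkdir -p "$SRC/htdocs"
echo "site data" > "$SRC/htdocs/index.html"
tar -C "$SRC" -cpf - . | tar -C "$DST" -xpf -   # -p preserves permissions
diff -r "$SRC" "$DST" && echo "trees match"
```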

Common Unix Printing System (CUPS)
Components: CUPS, Foomatic (PPDs), Ghostscript (gs), Samba
- Stage 1: separate Windows and Linux print servers – Print Server A (printcap) and Print Server B (CUPS)
- Stage 2: use Samba so a single Linux print server serves both
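The Stage 2 arrangement hinges on a few smb.conf settings that hand Samba's print path over to CUPS. A sketch of the relevant fragment (share layout and spool path are illustrative, not TRIUMF's actual configuration):

```
# /etc/samba/smb.conf – sketch: Windows clients print through the
# single Linux/CUPS server via Samba
[global]
   printing = cups
   printcap name = cups
   load printers = yes

[printers]
   comment = All CUPS printers
   path = /var/spool/samba
   browseable = no
   printable = yes
   guest ok = yes
```

With this in place, every queue CUPS knows about appears automatically as a Windows printer share, so only one server's queue list has to be maintained.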

Print/Copy/Scan(…) Integration

WestGrid – UBC/TRIUMF
- 1000 CPUs
- 10 TBytes disk
- 75 TBytes tape
Bidders: Sun, HP, IBM, Interdynamix, Dell, Atapa, LinuxNetworX, RackSaver
And the winner is……

WestGrid Solution for TRIUMF/UBC: IBM
- … GHz Xeons, 533 MHz FSB
- 14 blades (28 CPUs) per chassis
- 2 GB memory per node
- 40 GB drive per node
- 36 chassis in 6 racks
- ~20 kW power per rack; ~400,000 BTU/hr (33 tons)
- 40% less heat/power than 1U servers
- ~1500 lbs per rack
- 3.2 TFlop (peak); … TFlops (Linpack)
- 4×6×6 = 144 GigE links into a 232-port Foundry switch (max 756 nodes); 286 Mbit/s unblocked per node
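The quoted 3.2 TFlop peak figure can be sanity-checked from the chassis counts above. The clock rate is elided on the slide, so the 3.06 GHz and 1 flop/cycle used below are assumptions (plausible for 533 MHz FSB Xeons of that era), not numbers from the talk:

```shell
# Back-of-envelope check of the quoted peak performance.
# ghz and flops_per_cycle are assumed values, not from the slide.
awk 'BEGIN {
  chassis = 36; blades = 14; cpus = 2      # 36 chassis x 14 blades x 2 CPUs
  ghz = 3.06; flops_per_cycle = 1          # assumptions
  tflop = chassis * blades * cpus * ghz * flops_per_cycle / 1000
  printf "%d CPUs, %.1f TFlop peak\n", chassis * blades * cpus, tflop
}'
```

This yields 1008 CPUs and roughly 3.1 TFlop, in line with both the "1000 CPUs" procurement target and the 3.2 TFlop peak claim.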

CA*net 4 Testbed Project – Proposed Testbed Network Topology
- OC-192 (10 Gbps) lightpath linking Netherlight and Starlight
- 7+ nodes, each hosting a 1.5-TByte Linux file server with 2 GbE optical interfaces and a 10/100/1000 copper Ethernet port for LAN management
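The pairing of a 1.5 TByte store with two GbE interfaces per node is easy to motivate with a little arithmetic: even at an idealized 2 Gbps line rate, draining one node's full store across the lightpath takes well over an hour.

```shell
# Time to move one node's 1.5 TB store over two bonded GbE links,
# assuming idealized line rate (no protocol overhead).
awk 'BEGIN {
  bytes = 1.5e12          # per-node file server capacity
  bps   = 2e9             # two GbE interfaces, idealized
  secs  = bytes * 8 / bps
  printf "%.0f s (~%.1f hours)\n", secs, secs / 3600
}'
```

That is 6000 seconds, about 1.7 hours, per node per transfer, which is why the proposal emphasizes guaranteed bandwidth and stable, persistent lightpaths rather than best-effort IP.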

CA*net 4 Testbed Project – Software Matrix
Software packages deployed across the disk servers and routers:
- RedHat Linux 7.3 or 8.0
- Globus Toolkit 2.2
- OpenAFS
- Sun Grid Engine
- Sun J2EE SDK
- iperf
- Tsunami
- bbFTP
- Web100
- Net100
- GNU Zebra 0.93a

CA*net 4 Testbed Project
- Project time contributions: ~24 months
- International in scope
- Request a persistent testbed
- End-to-end lightpaths set up by the user – up to 10 Gbps:
  - Quality of service
  - Guaranteed bandwidth
  - Stable network characteristics
- Test long-distance transfer protocols: 10-Gbit Ethernet, Remote Direct Memory Access (RDMA/IP), FibreChannel/IP, serial SCSI, HyperSCSI, etc.
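Instruments such as Web100 and Net100 in the software matrix exist because stock TCP cannot fill a long-distance lightpath until its window reaches the path's bandwidth-delay product. A sketch of the usual Linux knobs, with illustrative values sized for roughly 10 Gbps over a 100 ms RTT (not a configuration taken from the testbed itself):

```
# /etc/sysctl.conf – illustrative TCP buffer limits for a long fat network.
# BDP = 10 Gbit/s x 0.1 s RTT = 125 MB of data in flight; the socket
# buffer ceilings must reach that order of magnitude (128 MB here).
net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
```

Without such tuning a single TCP stream on a 100 ms path stalls at a few hundred Mbit/s regardless of how much optical capacity the lightpath provides.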