TRIUMF SITE REPORT
HEPiX/HEPNT, Vancouver, October 20-24, 2003
Corrie Kost, Head of Scientific Computing, TRIUMF

TRIUMF MANPOWER

Public Computing Services        2.0 FTE
Network Infrastructure           1.5 FTE
Software Support & Security      3.0 FTE
PC Support Group                 2.0 FTE
Data Acquisition Systems         2.5 FTE
Private CPU Farms and WestGrid   0.5 FTE
Conferencing Facilities          0.3 FTE

WestGrid – UBC/TRIUMF Site

- 504 dual 3.06 GHz Xeon blades
- Red Hat Linux 7.3 compute nodes
- PBS scheduling with Maui
- 10 TB disk storage
- 24 TB tape storage
- Direct Gigabit connection between the sites; possibly 10 Gigabit in the future
- Subatomic physics allocation: ~1/3 of compute resources
- TWIST will be a beta user this month
- Is TRIUMF/UBC the first HEP site to install a large (1008-CPU) blade cluster?
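The slide names PBS with the Maui scheduler as the batch system for the blades. A minimal sketch of how a user would submit work, assuming the standard qsub command; the job name, resource request and executable are hypothetical, not taken from the talk:

    import subprocess
    import textwrap

    # Hypothetical PBS job script for the UBC/TRIUMF blade cluster.
    # ppn=2 would match the dual-CPU blades; everything else here is
    # illustrative only.
    job_script = textwrap.dedent("""\
        #!/bin/sh
        #PBS -N twist_sim
        #PBS -l nodes=4:ppn=2,walltime=12:00:00
        cd $PBS_O_WORKDIR
        ./run_simulation
    """)

    with open("twist_sim.pbs", "w") as fh:
        fh.write(job_script)

    # qsub queues the job with PBS; Maui decides when and where it runs.
    subprocess.run(["qsub", "twist_sim.pbs"], check=True)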

WestGrid – UBC/TRIUMF Site

Like the summer rains – two weeks away since August!

- Blade power consumption exceeded expectations: use only 12 of the 14 blades per chassis; replace hot-swap power supplies
- GPFS has some teething problems
- Reduced cabling welcome – wireless would be better!
- Very little infant mortality (no blades failed!)

[Photos: 14 dual-CPU blades in a 7U chassis; side view of a single blade]

[Photo: rear view of the blade chassis]

WestGrid – UBC/TRIUMF Site

[Photo: machine-room racks – storage (1 rack), blades (6 racks), UPS batteries, inverters, switch gear]

First Look at Dual 1.6 GHz Opterons

- SATA IDE drive system from HardData
- 64-bit 1.6 GHz Opterons approximately equal to 32-bit 2.8 GHz P4 Xeons, i.e. roughly 1.75x the per-clock throughput
- Performance details: Konstantin Olchanski

Misc. Developments

- 1 TB RAID-5 disk array attached to Linux (TRSHARE) for Linux & Windows users (via Samba)
- Mosix & PBS clustering – Steve McDonald, 11:30 Wed
- Bandwidth management – Steve McDonald, 11:40 Wed
- Colubris RADIUS server – Steve McDonald, 11:50 Wed
- SPAM fighting at TRIUMF – Andrew Daviel, 13:50 Tue
- Document management system: DocuShare
- Windows 2003 Terminal Service (TNT2K3)
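The TRSHARE arrangement – one Linux-hosted RAID volume visible to both Linux and Windows clients – is the classic use of a Samba export. A minimal server-side sketch; the share name, path and group are illustrative assumptions, not details from the talk:

    # Hypothetical smb.conf fragment for the TRSHARE export.
    [global]
       workgroup = TRIUMF
       security = user

    [trshare]
       comment = 1 TB RAID-5 volume for Linux & Windows users
       path = /data/trshare
       read only = no
       valid users = @staff

Windows clients mount the share over SMB, while Linux clients can reach the same volume on the server directly, so both platforms work from a single copy of the data.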

Misc. Developments

Videoconferencing:
- One older room with limited capability
- A new room with WestGrid-funded equipment
- Several protocols supported: ISDN, H.323, VRVS, AccessGrid
- Participation in the WestGrid Collaborative/Visualization effort to improve videoconferencing quality and tools

First Transatlantic Native 10 Gigabit Ethernet Connection

Catalin Meirosu
CERN and "Politehnica" University of Bucuresti
On behalf of the International GRID Testbed collaboration

The First Transatlantic 10 GE

WAN PHY over an OC-192c circuit, using lightpaths provided by SURFnet and CANARIE.

[Diagram: Geneva – Amsterdam – Chicago – Toronto – Ottawa path built from Cisco ONS multiplexers, Force10 E600 switches, HP/Intel Itanium-2 and Xeon PCs, and Ixia 400T traffic generators, mixing 10GE WAN PHY, 10GE LAN PHY and OC-192c segments]

Results (ITU Telecom World):
- 9.24 Gbps using traffic generators
- 5.5 Gbps using TCP on PCs
- 6 Gbps using UDP on PCs
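One reason TCP on PCs trailed the traffic-generator figure is the bandwidth-delay product of such a long path: the sender must keep an enormous window of unacknowledged data in flight. A back-of-envelope check, assuming a round-trip time on the order of 150 ms for the Geneva-Ottawa path (the RTT is an assumption for illustration, not a figure from the talk):

    # Bandwidth-delay product of the transatlantic 10 GE path.
    link_rate_bps = 10e9   # 10 Gigabit Ethernet
    rtt_s = 0.150          # assumed Geneva-Ottawa round trip (illustrative)

    bdp_bytes = link_rate_bps * rtt_s / 8
    print(f"Window needed to fill the pipe: {bdp_bytes / 2**20:.0f} MiB")
    # ~179 MiB -- far beyond the default TCP windows of 2003-era stacks,
    # which helps explain why TCP on PCs (5.5 Gbps) fell short of the
    # 9.24 Gbps reached by the hardware traffic generators.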

The First Transatlantic 10 GE

[The same testbed diagram repeated, with acknowledgments to the ESTA and IST projects]