OUHEP STATUS
D0 Workshop 2002, Joel Snow, Langston University


OUHEP STATUS: Hardware
Node          CPUs               RAM    Disk
OUHEP0        2x Athlon 1 GHz    2 GB   800 GB RAID
OUHEP1        2x P3 1 GHz        1 GB   70 GB
OUHEP2        2x P3 500 MHz      1 GB   none
OUHEP3        2x P3 500 MHz      1 GB   none
OUHEP4        2x P3 1 GHz        1 GB   40 GB
OUHEP5        2x P3 1 GHz        1 GB   30 GB
OUHEP[6,7,8]  1x P2 233 MHz      64 MB  none

OUHEP STATUS: Hardware (Network)
- University OC12 (622 Mb/s) link to Internet2
- Cluster: 100 Mb/s Internet connectivity and Gigabit interconnect on a private switch
- Gigabit link to OSCER
- Access to OSCER: more than 100 dual P4 1.5 GHz nodes

OUHEP STATUS: Software
- RH Linux 7.2 (2.4.9 kernel)
- D0 software infrastructure
- D0 software releases
- Code development
- Remote data access: SAM station
- MCFarm
- Globus 2
- Condor (a hedged submission sketch follows this list)
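
The slides do not show how jobs were actually queued on this stack; as one minimal illustration of the Condor piece, here is a hypothetical sketch that writes a Condor submit description and hands it to condor_submit. The executable name, file paths, and job count are assumptions for illustration, not taken from the talk.

    import subprocess
    import tempfile

    # Hypothetical Condor submit description; the executable and file
    # names are placeholders, not taken from the OUHEP slides.
    SUBMIT_DESCRIPTION = """\
    universe   = vanilla
    executable = run_d0sim.sh
    arguments  = $(Process)
    output     = d0sim_$(Process).out
    error      = d0sim_$(Process).err
    log        = d0sim.log
    queue 10
    """

    def submit_jobs() -> None:
        # Write the description to a temporary file and pass it to
        # condor_submit, which queues the jobs on the local Condor pool.
        with tempfile.NamedTemporaryFile("w", suffix=".sub", delete=False) as f:
            f.write(SUBMIT_DESCRIPTION)
            path = f.name
        subprocess.run(["condor_submit", path], check=True)

    if __name__ == "__main__":
        submit_jobs()

The vanilla universe is the generic choice for pre-built binaries; a farm like this could equally use the standard universe for checkpointing, which the slides do not specify.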

OUHEP STATUS: MCFarm and Compatibility
MCFarm:
- 1 node, adding more
- Using p10.15.01 from UTA
- Working on problems with p10.15.03
- Remote job submission from UTA (see the Globus sketch below)
Compatibility:
- Facility shared with ATLAS
- Condor for job scheduling
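
The talk does not detail the remote-submission mechanics. As a hedged sketch of what Globus 2 era remote execution looked like, the following assumes a valid grid proxy (created beforehand with grid-proxy-init) and wraps the standard GT2 command-line tool globus-job-run. The gatekeeper hostname and remote command are hypothetical.

    import subprocess

    # Hypothetical gatekeeper contact string; the real OUHEP/UTA
    # hosts are not given in the slides.
    GATEKEEPER = "gatekeeper.example.edu"

    def run_remote(command: str) -> str:
        # globus-job-run ships with Globus Toolkit 2 and executes a
        # single command on the remote gatekeeper, returning its output.
        result = subprocess.run(
            ["globus-job-run", GATEKEEPER, command],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    if __name__ == "__main__":
        # Assumes grid-proxy-init has already been run on this host.
        print(run_remote("/bin/hostname"))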