Presentation transcript:

Cluster currently consists of:
- 1 Dell PowerEdge 2950, 3.6 GHz dual quad-core Xeons (8 cores) and 16 GB of RAM: original GRIDVM, SL4 VMware host
- 1 Dell PowerEdge SC, dual quad-core Opterons (8 cores) and 16 GB of RAM: file server with 8.6 TB of disk space
- 11 Dell PowerEdge SC, dual quad-core Opterons (8 cores each) and 16 GB of RAM: worker nodes
- 16 Dell PowerEdge M605 blades, 2.8 GHz dual six-core Opterons (12 cores each) and 32 GB of RAM: worker nodes
Total: 296 cores
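
As a quick check of the quoted total, a minimal Python tally of the node counts and cores per node taken from the list above:

```python
# Core count tally for the inventory above: (description, nodes, cores per node).
inventory = [
    ("PowerEdge 2950 GRIDVM host",    1,  8),
    ("PowerEdge SC file server",      1,  8),
    ("PowerEdge SC worker nodes",    11,  8),
    ("PowerEdge M605 blade workers", 16, 12),
]

total_cores = sum(nodes * cores for _, nodes, cores in inventory)
print(total_cores)  # 296, matching the total above
```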

25 June 2013, DST/NRF Research Infrastructure. UJ-ATLAS: ATHENA installed; using the Pythia event generator to study various Higgs scenarios.
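
The actual Athena job options are not shown on the slide; purely as an illustration of the kind of event generation involved, here is a minimal sketch using the standalone Pythia 8 Python bindings (not the ATLAS Athena framework), with the beam energy, process and decay channel chosen arbitrarily:

```python
# Illustration only: standalone Pythia 8 via its Python bindings, not the
# Athena job options actually used for the UJ-ATLAS Higgs studies.
import pythia8

pythia = pythia8.Pythia()
pythia.readString("Beams:eCM = 8000.")        # pp collisions at 8 TeV (assumed)
pythia.readString("HiggsSM:gg2H = on")        # gluon-fusion SM Higgs production
pythia.readString("25:onMode = off")          # switch off all Higgs decay channels...
pythia.readString("25:onIfMatch = 5 -5")      # ...except H -> b bbar
pythia.init()

n_generated = 0
for _ in range(100):                          # generate 100 events
    if pythia.next():
        n_generated += 1
pythia.stat()                                 # print cross sections and error counts
print("generated", n_generated, "events")
```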

Diamond Ore Sorting (Mineral-PET). S. Ballestrero, S.H. Connell, M. Cook, M. Tchonang, Mz. Bhamjee + Multotec. GEANT4 Monte Carlo, online diamond detection, Monte Carlo simulation.

Diamond Ore Sorting (Mineral-PET). Simulation of radiation dose as a function of position from a body of radioactive material. PET point-source image with automatic detector parameter tweaking; figure panel labels: misaligned (before optimisation), simplified numerical model, after optimisation, full-physics Monte Carlo.
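
As a rough illustration of the PET point-source imaging idea referenced above (and emphatically not the group's GEANT4 model), a toy Monte Carlo in Python with an invented 2D detector-ring geometry:

```python
# Toy sketch only: a drastically simplified 2D stand-in for the full GEANT4
# Mineral-PET simulation. Back-to-back 511 keV annihilation photons are emitted
# from a point source inside a hypothetical detector ring, and the lines of
# response are naively backprojected; the image peaks at the true source position.
import numpy as np

rng = np.random.default_rng(0)
R = 30.0                               # cm, invented detector-ring radius
src = np.array([2.0, -1.0])            # cm, true point-source position

n = 50_000
phi = rng.uniform(0.0, np.pi, n)       # emission direction of each photon pair
d = np.stack([np.cos(phi), np.sin(phi)], axis=1)

# Chord endpoints on the ring: solve |src + t*d| = R for t.
b = d @ src
half_chord = np.sqrt(b**2 + R**2 - src @ src)
t_lo, t_hi = -b - half_chord, -b + half_chord

# Naive backprojection: deposit one point uniformly along each line of response.
u = rng.uniform(t_lo, t_hi)
pts = src + u[:, None] * d
img, xedges, yedges = np.histogram2d(pts[:, 0], pts[:, 1],
                                     bins=60, range=[[-6.0, 6.0], [-6.0, 6.0]])
ix, iy = np.unravel_index(img.argmax(), img.shape)
print(0.5 * (xedges[ix] + xedges[ix + 1]),
      0.5 * (yedges[iy] + yedges[iy + 1]))   # close to the source at (2.0, -1.0)
```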

Monte Carlo (GEANT4) Particle Tracking – Accelerator Physics, Detector Physics

The stellar astrophysics group at UJ. Astrophysics projects on the UJ fast computing cluster: a large group of researchers, co-led by Chris Engelbrecht at UJ and involving three SA institutions, an Indian institution and eight postgraduate students in total, is working on important corrections to current theories of stellar structure and evolution. To this end, we use the UJ cluster for two essential functions:
1. Performing Monte Carlo simulations of random processes in order to determine the statistical significance of purported eigenfrequency detections in telescope data (see the sketch below). A typical run takes about 48 hours on the majority of the cluster nodes.
2. Running stellar models containing new physics in the stellar structure codes (not in use yet; implementation expected later in 2013).
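
As a hedged illustration of item 1, a toy noise-only Monte Carlo in Python; the cadence, noise level, frequency grid and trial count are all invented, and the group's production runs use their own codes and take about 48 hours on the cluster:

```python
# Toy sketch only: estimate a false-alarm threshold for the highest periodogram
# peak by simulating many pure-noise light curves on the same time sampling.
import numpy as np

rng = np.random.default_rng(1)
n_obs, sigma = 1000, 5.0                 # invented: observations and noise (mmag)
n_trials = 1000                          # invented: number of noise-only realisations

t = np.sort(rng.uniform(0.0, 30.0, n_obs))     # 30 nights of uneven sampling (days)
freqs = np.linspace(0.5, 40.0, 2000)           # search grid in cycles/day
phases = np.exp(2j * np.pi * freqs[:, None] * t[None, :])  # precomputed DFT kernel

def max_peak(y):
    """Highest peak of a simple discrete Fourier periodogram on uneven sampling."""
    power = np.abs(phases @ y) ** 2 / len(y)
    return power.max()

# Distribution of the highest peak when the data contain nothing but noise.
null_peaks = np.array([max_peak(rng.normal(0.0, sigma, n_obs))
                       for _ in range(n_trials)])
threshold = np.quantile(null_peaks, 0.99)
print(f"99% false-alarm threshold on peak power: {threshold:.1f}")
# A peak in the real data above this threshold has <1% probability of arising
# from noise alone (under these toy assumptions).
```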

Successful test, 20 September 2012: CHAIN interoperability test, in which some SAGrid sites participated (UJ, UFS, UCT and CHPC). Shown below: gLite sites in SA.

Features of the UJ Research Cluster: maintain interoperability on two grids, OSG and gLite, using virtual machines (a compute element and a user interface for each platform). Shown below: OSG sites.

Currently in the middle of an upgrade: nodes and virtual machines are running a spread of Scientific Linux 4, 5 and 6 to keep services online. The system administrator is a South African currently based at CERN in Europe, able to administer the cluster using remote tools. Using PXE and Puppet, a node can be rebooted and reinstalled to any version of Scientific Linux and EMI (European Middleware Initiative) within 45 minutes.
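
As a small example of the kind of remote administration mentioned above (not the cluster's actual tooling, and the hostnames are placeholders), a Python sketch that reads each node's Scientific Linux release over SSH:

```python
# Illustration only: check which Scientific Linux release each node reports,
# useful while the cluster runs a mix of SL4, SL5 and SL6 during the upgrade.
# Node names below are placeholders, not the real UJ hostnames.
import subprocess

nodes = ["node01", "node02", "node03"]        # placeholder hostnames

for node in nodes:
    result = subprocess.run(
        ["ssh", node, "cat /etc/redhat-release"],
        capture_output=True, text=True, timeout=30,
    )
    release = result.stdout.strip() or result.stderr.strip()
    print(f"{node}: {release}")
```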

Trying to maintain usability for:
- SAGrid
- ATLAS (Large Hadron Collider)
- ALICE (Large Hadron Collider)
- e-NMR (bio-molecular)
- OSG
ATLAS jobs have been running for the last 9 months, and in the production queue (in test mode) for the last 4 weeks. It is difficult to keep both OSG and gLite running: when one demands an upgrade, the other breaks. It is important, though: grids are all about joining computers, and we are helping to keep compatibility between the two big physics grids. Currently on the to-do list:
- Finish the partially completed Scientific Linux upgrade
- Return OSG to functional status
- Set up an IPMI implementation to allow complete remote control at a lower level than the OS (see the sketch below)
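
For the last to-do item, a sketch of what an IPMI setup would enable: out-of-band power control and PXE-boot selection via standard ipmitool commands, wrapped here in Python; the BMC address and credentials are placeholders:

```python
# Illustration only: the kind of out-of-band control IPMI would add, using
# standard ipmitool commands. The BMC host, user and password are placeholders.
import subprocess

BMC_HOST = "node01-bmc.example"   # placeholder BMC address
BMC_USER = "admin"                # placeholder credentials
BMC_PASS = "secret"

def ipmi(*args):
    """Run one ipmitool command against the node's BMC over the LAN interface."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
           "-U", BMC_USER, "-P", BMC_PASS, *args]
    subprocess.run(cmd, check=True)

# Force the next boot to PXE (so the node reinstalls), then power-cycle it.
ipmi("chassis", "bootdev", "pxe")
ipmi("power", "cycle")
```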