Site Report – US CMS T2 Workshop
Samir Cury, on behalf of the T2_BR_UERJ Team


Servers' hardware profile
SuperMicro machines
– 2 x Intel Xeon dual-core 2.0 GHz
– 4 GB RAM
– RAID – … GB HDs

Node hardware profile (40)
Dell PowerEdge 2950
– 2 x Intel Xeon quad-core 2.33 GHz
– 16 GB RAM
– RAID 0 – 6 x 1 TB hard drives
CE resources
– 8 batch slots – 66.5 kHS06 – 2 GB RAM / slot
SE resources
– 5.8 TB usable, for dCache or Hadoop
Private network only

Node hardware profile (2+5)
Dell R710
– 2 of them are Xen servers, not worker nodes
– 2 x Intel Xeon quad-core 2.4 GHz
– 16 GB RAM
– RAID 0 – 6 x 2 TB hard drives
CE resources
– 8 batch slots (or more?) – … kHS06 – 2 GB RAM / slot
SE resources
– 11.8 TB for dCache or Hadoop
Private network only

First-phase node profile (82)
SuperMicro server
– 2 x Intel Xeon single-core 2.66 GHz
– 2 GB RAM
– 500 GB hard drive & 40 GB hard drive
CE resources
– Not used – old CPUs & low RAM per node
SE resources
– 500 GB per node

Plans for the future – hardware
Buying 5 more Dell R710
Deploying the 5 R710s when the disks arrive
– 80 more cores
– 120 TB more storage
– 1244 kHS06 more
Total
– CE: 40 PE 2950 + R710s = 400 cores || 3.9 kHS06
– SE: 405 TB
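As a quick sanity check of the core count quoted above, here is a minimal sketch. It assumes 8 batch slots per PowerEdge 2950 and per R710 (as listed on the earlier hardware slides) and that only the non-Xen R710s count toward the CE.

    # Sanity check of the CE core count quoted on this slide.
    # Assumption: 8 slots per PE 2950 and per R710; the 2 Xen R710s are excluded.
    pe2950_nodes, slots_per_node = 40, 8
    r710_ce_nodes = (7 - 2) + 5          # (2+5) existing R710s minus the 2 Xen servers, plus 5 new ones

    total_cores = (pe2950_nodes + r710_ce_nodes) * slots_per_node
    print("Total CE cores:", total_cores)  # -> 400, matching the slide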

Software profile – CE
OS – CentOS (… bits)
2 OSG gatekeepers
– Both running OSG …x
– Maintenance tasks eased by redundancy – fewer downtimes
GUMS
Condor for job scheduling

Software profile – SE
OS – CentOS (… bits)
dCache 1.8 – 4 GridFTP servers
PNFS 1.8
PhEDEx 3.2.0

Plans for the future – software/network
SE migration
– Right now we use dCache/PNFS
– We plan to migrate to BeStMan/Hadoop (some effort has already produced results)
– Next steps: add the new nodes to the Hadoop SE, migrate the data, and test in a real production environment with jobs and users accessing it (see the sketch below)
Network improvement
– RNP (our network provider) plans to deliver a 10 Gbps link to us before the next SuperComputing conference
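The transcript does not say how the migration will be validated; the following is only a minimal sketch of a consistency check one could run while copying data from the dCache/PNFS namespace into the Hadoop SE. It assumes both namespaces are mounted locally; the mount points are hypothetical, not the actual UERJ paths.

    #!/usr/bin/env python
    """Illustrative only: compare per-file sizes between the dCache/PNFS
    namespace and the Hadoop SE while data is being migrated."""

    import os

    PNFS_ROOT = "/pnfs/uerj.br/data/cms"   # hypothetical dCache/PNFS mount
    HADOOP_ROOT = "/mnt/hadoop/store"      # hypothetical HDFS FUSE mount


    def file_sizes(root):
        """Map each file path (relative to root) to its size in bytes."""
        sizes = {}
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                sizes[os.path.relpath(path, root)] = os.path.getsize(path)
        return sizes


    def main():
        old = file_sizes(PNFS_ROOT)
        new = file_sizes(HADOOP_ROOT)

        missing = sorted(set(old) - set(new))
        mismatched = sorted(f for f in set(old) & set(new) if old[f] != new[f])

        print("files in dCache : %d" % len(old))
        print("files in Hadoop : %d" % len(new))
        print("not yet copied  : %d" % len(missing))
        print("size mismatches : %d" % len(mismatched))
        for f in mismatched[:10]:
            print("  %s  %d -> %d bytes" % (f, old[f], new[f]))


    if __name__ == "__main__":
        main()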

T2 analysis model & associated physics groups
We have reserved 30 TB for each of these groups:
– Forward Physics
– B-Physics
We are studying the possibility of reserving space for Exotica
The group has several MSc & PhD students who have been working on CMS analysis for a long time
– They get good support
Some Grid users submit jobs, occasionally run into trouble and give up without asking for support

Developments – Condor
A suspend-based mechanism to give priority to a very small pool of important users (a configuration sketch follows):
– 1 pair of batch slots per core
– When a priority user's job arrives, it pauses the normal job on the other slot of the pair
– Once the priority job finishes and vacates its slot, the paired job automatically resumes
– Documentation can be made available to anyone interested
– Developed by Diego Gomes
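The slide only describes the policy at a high level. Below is a rough, hypothetical condor_config sketch of how such a slot-pairing/suspend policy can be written; the knob values, user list and fixed slot1/slot2 pairing are illustrative assumptions, not the actual mechanism developed at UERJ.

    # Hypothetical sketch only, not the actual UERJ configuration.

    # Advertise two slots per physical core, so every core has a slot pair.
    NUM_CPUS = 2 * $(DETECTED_CORES)

    # The small pool of priority users (illustrative names).
    PRIORITY_USERS = "user1,user2"

    # Make each slot's claim information visible to the other slots.
    STARTD_SLOT_ATTRS = State, RemoteUser

    # For a pair slot1/slot2: suspend the ordinary job on one slot while its
    # partner runs a job owned by a priority user, and resume it afterwards.
    SUSPEND = ( (SlotID == 1) && (Slot2_RemoteUser =!= UNDEFINED) && \
                stringListMember(Slot2_RemoteUser, $(PRIORITY_USERS)) ) || \
              ( (SlotID == 2) && (Slot1_RemoteUser =!= UNDEFINED) && \
                stringListMember(Slot1_RemoteUser, $(PRIORITY_USERS)) )
    CONTINUE     = ($(SUSPEND)) =!= True
    WANT_SUSPEND = True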

Developments – Condor4Web
Web interface to visualize the Condor queue
– Shows grid DNs
– Useful for Grid users who want to know how their jobs are being scheduled inside the site
– Available on …
– Still has much room to evolve, but already works
– Developed by Samir
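Condor4Web's own code is not shown in the transcript; as a rough illustration of the underlying idea only, the sketch below pulls the job ID, status and grid DN out of the Condor queue with standard condor_q options, which is the kind of information such a web page would render.

    #!/usr/bin/env python
    """Rough illustration of the idea behind Condor4Web (not its actual code):
    list every grid job in the Condor queue together with the DN from its proxy."""

    import subprocess

    # Only grid jobs carry a proxy subject (DN), so restrict the query to them.
    cmd = [
        "condor_q", "-global",
        "-constraint", "x509userproxysubject =!= undefined",
        "-format", "%d.", "ClusterId",
        "-format", "%d|", "ProcId",
        "-format", "%d|", "JobStatus",
        "-format", "%s\n", "x509userproxysubject",
    ]

    STATUS = {1: "Idle", 2: "Running", 3: "Removed", 4: "Completed", 5: "Held"}

    output = subprocess.check_output(cmd).decode()
    for line in output.splitlines():
        job_id, status, dn = line.split("|", 2)
        print("%-12s %-10s %s" % (job_id, STATUS.get(int(status), status), dn))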

CMS UERJ
During LISHEP 2009 (January) we inaugurated a small control room for CMS at UERJ:

CMS Centre
Our computing team has participated in tutorials and we now have four potential CSP shifters

CMS Centre (quick) profile
Hardware
– 4 Dell workstations with 22” monitors
– 2 x 47” TVs
– Polycom SoundStation
Software
– All conferences, including those with the other CMS Centres, are done via EVO

Cluster & Team
– Alberto Santoro (General supervisor)
– Eduardo Revoredo (Hardware coordinator)
– Samir Cury (Site admin)
– Douglas Milanez (Trainee)
– Andre Sznajder (Project coordinator)
– Jose Afonso (Software coordinator)
– Fabiana Fortes (Site admin)
– Raul Matos (Trainee)

2009/2010 goals
In 2009 we worked mostly on:
– Getting rid of infrastructure problems
  – Insufficient electrical power
  – Air conditioning
  – Many downtimes due to these; they are solved now
– Besides those problems, we were also:
  – Running official production on small workflows
  – Doing private production & analysis for local and Grid users
2010 goals
– Use the new hardware and infrastructure for a more reliable site
– Run heavier workflows and increase participation and presence in official production

Thanks!
I want to formally thank Fermilab, USCMS and OSG for their financial help in bringing a UERJ representative here.
I also want to thank USCMS for this very useful meeting.

Questions? Comments?