São Paulo Regional Analysis Center: SPRACE Status Report, 22/Aug/2006

Presentation transcript:

São Paulo Regional Analysis Center: SPRACE Status Report, 22/Aug/2006

Who We Are Now
São Paulo High Energy Physics Group:
– Sérgio Novaes: Professor at UNESP (coordinator)
– Eduardo Gregores: Professor at UFABC
– Sérgio Lietti: Postdoc at UNESP
– Pedro Mercadante: Postdoc at UNESP
– Alexandre Alves: Postdoc at USP
– Gustavo Pavani: Postdoc at UNESP
– Rogerio Iope: Graduate Student / System Manager at USP
– Thiago Tomei: Graduate Student at UNESP
– Marco Dias: System Manager at UNESP
– Wescley Teixeira: Undergraduate Student at USP
DZero Experiment (active since 1999):
– Hardware: Forward Proton Detector.
– Analysis: New Phenomena (search for Large Extra Dimensions).
– Distributed computing infrastructure (SAM and SAMGrid): started official Monte Carlo production, with more than 10 million events produced so far; operational SAMGrid site.
CMS Experiment (active since 2005):
– LCG computing element operating as a CMS T2 under OSG.

SPRACE Computing Infrastructure
SPRACE Cluster:
– 20 dual Xeon 2.4 GHz workers with 1 GB RAM, since March.
– 32 dual Xeon 3.0 GHz workers with 2 GB RAM, since June.
– 32 dual Xeon dual-core (Woodcrest) 2.4 GHz workers with 4 GB RAM, added this week.
– 2 head nodes (SAMGrid + OSG), 1 disk server, 1 dCache server.
– 12 TB on 4 RAID modules (SCSI Ultra K).
– 232 Condor batch slots with 320 kSpecInt2k of overall computing power (see the slot-count cross-check after this list).
– Extra 16 TB on local disks to be deployed soon, making 28 TB in total.
SPRACE Connectivity:
– Internal Gigabit connection between all cluster elements.
– Gigabit connection shared with USP up to the WHREN-LILA Giga link to Abilene.
– Exclusive Gigabit lambda in the next couple of weeks (1.2 → 2.5 Gbps).
SPRACE Configuration:
– 2 separate clusters: the DZero/SamGrid cluster and the CMS/OSG cluster.
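The 232-slot figure can be cross-checked against the node counts above. A minimal sketch, assuming 2 batch slots per dual-CPU node and 4 per dual-CPU dual-core (Woodcrest) node:

```python
# Cross-check of the SPRACE worker and batch-slot totals quoted on the slide.
# Assumption: 2 slots per dual-CPU node, 4 slots per dual-CPU dual-core node.
worker_groups = [
    {"nodes": 20, "slots_per_node": 2},  # dual Xeon 2.4 GHz, 1 GB RAM
    {"nodes": 32, "slots_per_node": 2},  # dual Xeon 3.0 GHz, 2 GB RAM
    {"nodes": 32, "slots_per_node": 4},  # dual Xeon dual-core (Woodcrest), 4 GB RAM
]

total_workers = sum(g["nodes"] for g in worker_groups)
total_slots = sum(g["nodes"] * g["slots_per_node"] for g in worker_groups)

print(f"workers: {total_workers}")    # 84 worker nodes
print(f"batch slots: {total_slots}")  # 232, matching the number on the slide
```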

SamGrid at SPRACE
SamGrid Cluster:
– RH 7.3 on all SamGrid computing elements.
– 1 head node (SamStation and the JIM suite).
– 31 workers in the Condor pool (see the submission sketch after this list).
Monte Carlo Production for DZero
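For illustration only, a minimal sketch of how a Monte Carlo job could be handed to the local Condor pool. The wrapper script run_d0_mc.sh and its argument are hypothetical placeholders; in practice the SamGrid/JIM suite generates and submits the real DZero production jobs:

```python
# Minimal sketch: write a Condor submit description and pass it to condor_submit.
# The executable and arguments are placeholders; SamGrid/JIM drives the real
# DZero Monte Carlo submissions at the site.
import subprocess
import textwrap

submit_description = textwrap.dedent("""\
    universe   = vanilla
    executable = run_d0_mc.sh
    arguments  = --events 250
    output     = mc_$(Cluster).$(Process).out
    error      = mc_$(Cluster).$(Process).err
    log        = mc_$(Cluster).log
    queue 10
""")

with open("d0_mc.sub", "w") as handle:
    handle.write(submit_description)

# condor_submit is the standard Condor job submission command.
subprocess.run(["condor_submit", "d0_mc.sub"], check=True)
```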

OSG/CMS Setup at T2_SPRACE
Compute Element
Head Node: spgrid.if.usp.br
– OSG:
  Globus – basic grid job handling system.
  MonALISA – monitoring tool.
  GUMS/PRIMA – Grid User Management Service: maps and authenticates VO-registered users to local accounts (see the mapping sketch after this list).
  GIP – OSG Generic Information Provider, based on the GLUE schema.
  BDII – Berkeley Database Information Index, for LCG interfacing.
– Condor batch system: distributes jobs to the workers.
– Ganglia: cluster monitoring system.
– NFS: exports OSG and Condor to the workers.
Work Nodes:
– 21 workers.
– NFS access to OSG, Condor, and the VOs' application area.
– Work done on a local scratch partition.
Storage Element
Head Node: spdc00.if.usp.br
– PNFS: locally distributed file system.
– dCache: local storage file catalogue system.
– SRM: local storage resource management system.
– PhEDEx: CMS file transfer and catalogue system.
– Squid: caching proxy for the CMS calibration database used by analysis jobs.
dCache Pool Nodes:
– Each node runs its own transfer agent and has its own WAN IP for enhanced overall connectivity.
– dCache pool and SRM clients.
– File server: spraid.if.usp.br, a RAID array server (12 TB); exports the VOs' application area to the whole cluster.
– Work nodes: spdcNN.if.usp.br; use the Compute Element worker-node hardware with local dedicated high-capacity SATA disks.
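To make the GUMS/PRIMA role concrete, here is a deliberately simplified sketch of the DN-to-local-account mapping idea. The VO names, account pools, and DN are invented placeholders; the real GUMS service answers such mapping requests over a web-service interface queried by PRIMA at the gatekeeper, not from a local table like this:

```python
# Simplified illustration of a GUMS-style mapping: a grid certificate DN plus
# VO membership is translated into a local Unix account. All names below are
# invented placeholders for illustration only.

VO_ACCOUNT_POOLS = {
    "cms":   ["cms001", "cms002", "cms003"],
    "dzero": ["dzero01", "dzero02"],
}

assigned = {}  # DN -> local account, so a user keeps the same mapping


def map_user(dn: str, vo: str) -> str:
    """Return the local account for a VO-registered user, or refuse unknown VOs."""
    if vo not in VO_ACCOUNT_POOLS:
        raise PermissionError(f"VO {vo!r} is not authorized at this site")
    if dn not in assigned:
        pool = VO_ACCOUNT_POOLS[vo]
        used = sum(1 for account in assigned.values() if account in pool)
        assigned[dn] = pool[used % len(pool)]  # round-robin over the VO's pool
    return assigned[dn]


if __name__ == "__main__":
    print(map_user("/DC=org/DC=example/CN=Some Physicist", "cms"))  # -> cms001
```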

OSG Jobs and Data Transfers

SamGrid – OSG Integration
Only one cluster for both grid infrastructures.
One head fits all:
– Compatible up to a certain OSG release.
– No longer works: the VDT versions are now incompatible.
A two-headed cluster:
– One head node for SamGrid (like the present one).
– Local batch submission to the OSG Condor pool.
– Could not make it work: Condor version incompatibilities.
Forwarding SamGrid -> OSG station:
– One station per region (OSG-OUHEP for the Americas, CCIN2P3 for Europe).
– About 3 GB downloaded per batch slot from the station.
– A download of about 700 GB to fill our farm (a back-of-envelope check follows below)!
– Solution: a local stager on a VOBox.
– Under implementation with help from the SamGrid people.
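The quoted 700 GB is consistent with the slot count given earlier in the report; a quick back-of-envelope check:

```python
# Back-of-envelope check of the staging volume: roughly 3 GB pulled from the
# forwarding station per batch slot, for all 232 slots of the farm.
batch_slots = 232
download_per_slot_gb = 3

total_download_gb = batch_slots * download_per_slot_gb
print(f"~{total_download_gb} GB to fill the farm")  # ~696 GB, i.e. about 700 GB
```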