Overview of ATLAS Israel Computing
RECFA Meeting, Tel Aviv University, Israel, April 2014
Lorne Levinson, Weizmann Institute of Science

Organization

We are ~1.3% of ATLAS. Our pledge is 1.3% of ATLAS Tier-2 resources
– plus, we provide resources for use by Israeli ATLAS, ~40 users
ATLAS Israel provides a distributed Tier-2 and Tier-3:
– Tier-0 = CERN
– Tier-1: 10, at large national labs: Europe, Taiwan, USA, Canada
– Tier-2: ~30, one per country, or per region of a large country
– Tier-3: university group, for local computing only
Clusters at Technion, Tel Aviv, Weizmann:
– each cluster combines Tier-2 and Tier-3 resources
– all resources are shared flexibly between Tier-2 and Tier-3
Single management; single budget and purchasing. The three sites are as identical as possible.
Steering Committee for overall policy; Management & Operations team for the three sites.

The bandwidth of our connection to Europe lags because undersea cables are more expensive.

Resources

                   Tel Aviv   Technion   Weizmann    Total
Storage (net TB)       …          …          …       …,230
Work nodes             …          …          …          …
Job slots             784      1,108      1,611      3,503

The storage above sits on a high-performance storage system (DDN 9900):
– a cluster-wide global file system (Lustre), serving both Tier-2 and Tier-3
– connected to each cluster via 2 or 3 10 Gbit/s links
System administration manpower: 2.5 FTEs
Steering Committee: Ehud Duchovni, Erez Etzion, Lorne Levinson, Yoram Rozen
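As a rough illustration of what those storage links imply per job: with 3 × 10 Gbit/s links and all of a site's job slots streaming at once, each slot gets only a few MB/s. The sketch below uses the Weizmann job-slot count from the table; the three-link case and the all-slots-streaming scenario are assumptions for illustration, not figures from the slides.

```python
# Back-of-envelope: shared storage bandwidth per job slot.
# Assumes the 3-link case and that every slot does I/O simultaneously
# (both are illustrative assumptions, not figures from the slides).

links = 3
link_gbit_s = 10          # per-link rate, Gbit/s
job_slots = 1_611         # Weizmann job slots, from the table above

total_gb_s = links * link_gbit_s / 8            # Gbit/s -> GByte/s
per_slot_mb_s = total_gb_s * 1000 / job_slots   # GByte/s -> MByte/s per slot

print(f"Aggregate storage bandwidth: {total_gb_s:.2f} GB/s")
print(f"Per-slot share if all slots stream: {per_slot_mb_s:.1f} MB/s")
```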

Pledged resources

[Chart: pledged Storage (TB) and CPU (HEPspec) by year]

Our pledge has increased about 15% per year, and tracks at least 1.3% of the ATLAS Tier-2 declared requirements:
– 2014: storage 1.96%, CPU 2.46% of ATLAS Tier-2 (ATLAS barely increased its requirements, but we increased our pledge)
– 2013: storage 1.71%, CPU 1.54% of ATLAS Tier-2
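For a sense of what ~15% annual growth compounds to, here is a quick sketch; the 2011 base value is hypothetical (the slide's absolute pledge numbers live in the chart, not the text):

```python
# Compound growth of a pledge at ~15% per year.
# base_storage_tb is a hypothetical value for illustration only.

base_year, base_storage_tb = 2011, 500

for year in range(base_year, 2015):
    pledge = base_storage_tb * 1.15 ** (year - base_year)
    print(f"{year}: {pledge:,.0f} TB")

# 15%/year roughly doubles a pledge every five years (1.15**5 ≈ 2.01).
```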

Our contribution to ATLAS

We are 1.3% of ATLAS, and our pledge is 1.3% of ATLAS Tier-2 resources. We deliver more than that share, despite most sites also contributing more than their pledge, and despite depending on Nikhef/SARA to send us jobs and files.

[Chart: Israel's delivered fraction of ATLAS Tier-2 computing]
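Why over-delivery still shows up as an above-pledge share: a site's delivered share is what it delivers divided by what all sites deliver, so it only exceeds the pledge fraction if the site over-delivers by more than the average site does. A sketch with made-up numbers:

```python
# Delivered share vs. pledged share when every site over-delivers.
# All figures below are made up for illustration.

pledge_share = 0.013          # Israel's pledge: 1.3% of Tier-2
tier2_pledged = 100_000       # hypothetical total pledged capacity
our_factor = 2.0              # we deliver 2.0x our pledge (hypothetical)
avg_factor = 1.4              # the average site delivers 1.4x (hypothetical)

delivered_share = (pledge_share * tier2_pledged * our_factor) \
                  / (tier2_pledged * avg_factor)
print(f"Delivered share: {delivered_share:.2%}")  # 1.86% > 1.3% pledge,
# but only because our over-delivery factor beats the average site's.
```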

Other users

We also support other grid Virtual Organizations, at the level of a few percent or when ATLAS is not busy:
– ILC
– LHCb
We also host local groups who buy equipment and place it in our clusters: HEP phenomenologists, the Phenix heavy-ion experiment, the Xenon dark-matter experiment, condensed-matter theorists, and geneticists. They benefit from:
– our system management and user support
– joining our tenders when purchasing
– storage space and fast networking
– the possibility to burst a large number of jobs at once and then disappear for days or weeks
On average these groups use less than they contribute, but they can get bursts of power 10+ times what they purchased (see the fair-share sketch below).
Guests are occasionally granted access for a short term: Computer Science, Electrical Engineering, Biology, …
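The bursting pattern above is what a fair-share batch scheduler enables: each group's priority is its purchased share minus its recent, exponentially decayed usage, so a group that has been idle for weeks briefly outranks everyone and can grab far more than its share. The slides do not name the batch system; the sketch below is a generic model of that priority rule, with hypothetical half-life, shares, and usage figures.

```python
# Fair-share priority sketch: a group's priority is its purchased share
# minus its recent usage share, with usage decaying over a half-life.
# All numbers here are hypothetical illustrations.

HALF_LIFE_DAYS = 7.0

def decayed_usage(samples):
    """samples: list of (age_in_days, slot_hours). Older usage counts less."""
    return sum(h * 0.5 ** (age / HALF_LIFE_DAYS) for age, h in samples)

def priority(purchased_share, samples, cluster_slot_hours):
    return purchased_share - decayed_usage(samples) / cluster_slot_hours

cluster = 100_000.0   # recent cluster capacity in slot-hours (hypothetical)
atlas = priority(0.80, [(1, 70_000), (8, 80_000)], cluster)  # runs constantly
guest = priority(0.05, [(30, 20_000)], cluster)              # idle for a month
print(f"ATLAS: {atlas:+.3f}  guest: {guest:+.3f}")
# The long-idle guest outranks ATLAS, so its burst of jobs starts first.
```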

End