Israel Cluster Structure

Outline
The local cluster
Local analysis on the cluster
–Program location
–Storage
–Interactive analysis & batch analysis
–PBS
–Grid-UI

The Local Cluster

Program Location
All the software is installed in the local software area. The software directory can be accessed using $ATLAS_INST_PATH. The directory structure of $ATLAS_INST_PATH:
–setup
  environment – general scripts that define all the necessary environment variables for each of the installed programs. A general script, setupEnv.sh, sources all the other scripts (see the sketch after this slide).
  initialization – general scripts that build up the user environment. For example, 'setupAthena' will build the Athena environment, create a default requirements file and create an init.sh script (see previous tutorial).
  swInstallation – installation scripts, relevant for the software administrator only.
–athena
  releases – the Athena kits for the different release versions.
  nightly – nightlies that are downloaded regularly during the night (not in use for now).
  groupArea – group areas for certain Athena projects. For now only the tutorial groupArea is installed.
–atlantis – one version of Atlantis is installed locally. It should be updated to the latest version from time to time.
–gridTools
  dq2 – the latest version of dq2 only. It should be updated on a regular basis.
  ganga – a version of Ganga (see the Grid tutorial). It should be updated from time to time.
–generators – the different generators in use.
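A new session could then be set up roughly as follows (a sketch only: the exact location of setupEnv.sh under the setup area is an assumption, so verify the path on the cluster):

  # Sketch of a session setup; the path to setupEnv.sh is assumed, not confirmed.
  source $ATLAS_INST_PATH/setup/environment/setupEnv.sh   # define all environment variables
  ls $ATLAS_INST_PATH                                      # inspect the installed software areas

Because the script is sourced rather than executed, the variables it defines stay available in the current shell for the rest of the session.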

Storage
Home directories are backed up. Backup is expensive and we are charged per MB. Backups are kept for 6 months, so even if you delete the data the next day we keep paying for 6 months. So keep your data on the large disks (Panasas or Thumper) and only your analysis code in your home directory.
Delete old data. It is very easy to overload the disks with old MC samples. But be careful: after deletion there is no turning back.
Data management and data control guidelines will be provided at a later time.

Interactive analysis & Batch analysis
The local cluster holds:
–~140 CPUs (tau: ~56 CPUs)
–~70 TB of storage
–1-2 interactive workstations
Interactive work
–First look at the data – most of the time with ARA/pyAthena/matlab/root
–Code development
–Testing jobs before submitting them in batch mode
Batch mode
–Anything that takes more than ~1 hr is probably best sent as a batch job. Batch jobs can run either on the Grid or on the local cluster.
–Grid – all jobs that need datasets stored on the Grid must run on the Grid. Do not copy data to the local cluster! The Grid also suits long jobs that can be split into several small jobs.
–Local – short jobs, or jobs on locally stored data.

PBS
PBS (Portable Batch System) is the batch system installed on the local cluster. Jobs are submitted to queues, and the queue is chosen according to the job length. On lxplus there is a similar system called LSF (Load Sharing Facility). A minimal submission example is sketched below.
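For example, a short local batch job might be submitted like this (a sketch: the queue name "short", the wall-time limit and the analysis script are placeholders; list the queues actually defined on the cluster with qstat -q). A hypothetical job script, myjob.pbs:

  #!/bin/bash
  #PBS -q short
  #PBS -l walltime=01:00:00
  #PBS -N my_analysis
  # "short" is a placeholder queue name and the walltime is illustrative;
  # pick the queue that matches the expected job length.
  cd $PBS_O_WORKDIR             # start in the directory the job was submitted from
  ./runAnalysis.sh              # placeholder for the actual analysis command

Submitting and monitoring it:

  qsub myjob.pbs                # submit the job; prints the job id
  qstat -u $USER                # monitor your own jobs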

Grid-UI
The Grid-UI software is installed on the workstations. After initialising a proxy it is possible to send jobs to the Grid and to retrieve their output and datasets.
–ganga – software developed at CERN that provides a Python-based environment for sending jobs to the different Grids and to the local job manager (PBS/LSF).
–dq2 – software for dataset manipulation on the Grid.
–pathena – an Athena plug-in that sends Athena jobs to the Grid. Unlike ganga it can send jobs only to the Panda grid (US).
A short example session is sketched below.
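A typical session might look roughly like this (a sketch: the dataset names, job options and output dataset name are placeholders, and the exact option names can differ between tool versions):

  voms-proxy-init --voms atlas                  # create a Grid proxy for the ATLAS VO
  dq2-ls 'user.SomeUser.mydataset*'             # look up datasets on the Grid (placeholder pattern)
  pathena myJobOptions.py \
      --inDS some.input.dataset \
      --outDS user.SomeUser.myoutput            # send an Athena job to Panda (placeholder names)
  ganga                                         # start ganga and submit through its Python interface

Inside ganga, jobs are created and submitted with its Python job objects; see the Grid tutorial for details.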