JetWeb on the Grid
Ben Waugh (UCL), GridPP6, 2003-01-30
– What is JetWeb?
– How can JetWeb use the Grid?
– Progress report
– The Future
– Conclusions

JetWeb
A WWW Interface and Database for Monte Carlo Tuning and Validation
– See J. M. Butterworth and S. Butterworth, hep-ph/ (also submitted to Comput. Phys. Commun.)
– Based on HzTool (J. Bromley et al., Future Physics at HERA, vol. 1)
– Database of data, MC and comparisons
– Web interface allows access to the DB and submission of jobs to generate more MC plots

What is JetWeb for?
– The final state in collisions (especially hadron-hadron) is poorly understood: hadronization is not calculable in perturbative QCD.
– Monte Carlo generators (e.g. Pythia, Herwig) are valuable, but have many free parameters.
– How do we know which predictions to trust when planning for future colliders?
– Tune to existing data, but which data? Different models (fail to) describe different measurements.
– Automate the procedure: allow comparison of a new MC (or a new set of parameters) with experimental results stored in a database (the comparison is sketched below).
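The automated comparison boils down to a goodness-of-fit measure between a generated histogram and the measured points. A minimal sketch of the idea in Java; all names here are invented for illustration, and the real JetWeb fit machinery is not shown on the slide:

```java
/**
 * Minimal sketch of the comparison JetWeb automates: a chi-squared
 * between a Monte Carlo histogram and the measured data points.
 * Class and method names are hypothetical illustrations.
 */
public class ChiSquared {

    /**
     * @param mc    MC prediction per bin
     * @param data  measured value per bin
     * @param error total uncertainty per bin
     * @return chi-squared summed over bins
     */
    public static double compute(double[] mc, double[] data, double[] error) {
        if (mc.length != data.length || mc.length != error.length) {
            throw new IllegalArgumentException("Histogram binnings differ");
        }
        double chi2 = 0.0;
        for (int i = 0; i < mc.length; i++) {
            double diff = mc[i] - data[i];
            chi2 += (diff * diff) / (error[i] * error[i]);
        }
        return chi2;
    }

    public static void main(String[] args) {
        // Toy three-bin histograms, purely illustrative.
        double[] mc    = {10.2, 8.1, 5.0};
        double[] data  = { 9.8, 8.5, 4.6};
        double[] error = { 0.5, 0.4, 0.3};
        System.out.printf("chi2/ndf = %.2f%n",
                compute(mc, data, error) / mc.length);
    }
}
```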

HzTool
– Developed in the HERA Workshop to enable comparison of data with existing and future MC generators.
– A routine is written in Fortran for each analysis: it fills HBOOK histograms from generated events to compare with measurements.
– A range of data is already included: H1, ZEUS, UA5, OPAL, CDF, D0. Contributing authors also come from ATLAS and the Linear Collider community.
– More analyses from more experiments still need to be included.
– Longer-term: move to an OO framework? (A sketch of what that might look like follows.)
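On the "move to an OO framework" question, one way to picture it is one class per published measurement, replacing the per-analysis Fortran routine. The Java sketch below is purely hypothetical: the Histogram class and the event representation are invented stand-ins, not HzTool's actual interface.

```java
import java.util.Arrays;
import java.util.List;

/**
 * Hypothetical sketch of one HzTool-style analysis in an OO framework:
 * a class per measurement, filling a histogram from generated events
 * for later comparison with the measured points.
 */
public class DijetAnalysis {

    /** Minimal fixed-range 1D histogram, standing in for HBOOK. */
    static class Histogram {
        final double lo, hi;
        final double[] bins;
        Histogram(int n, double lo, double hi) {
            this.lo = lo; this.hi = hi; this.bins = new double[n];
        }
        void fill(double x, double weight) {
            if (x < lo || x >= hi) return;  // ignore over/underflow
            int i = (int) ((x - lo) / (hi - lo) * bins.length);
            bins[i] += weight;
        }
    }

    private final Histogram etSpectrum = new Histogram(20, 0.0, 100.0);

    /**
     * Called once per generated event, mirroring the Fortran event loop.
     * An "event" here is just a list of (eta, Et) pairs plus a weight.
     */
    public void analyze(List<double[]> particles, double weight) {
        for (double[] p : particles) {
            double eta = p[0], et = p[1];
            if (Math.abs(eta) < 2.5) {      // an illustrative acceptance cut
                etSpectrum.fill(et, weight);
            }
        }
    }

    public double[] result() { return etSpectrum.bins; }

    public static void main(String[] args) {
        DijetAnalysis analysis = new DijetAnalysis();
        analysis.analyze(List.of(new double[]{0.3, 45.0},
                                 new double[]{3.1, 20.0}), 1.0);
        System.out.println(Arrays.toString(analysis.result()));
    }
}
```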

What does JetWeb add to HzTool?
– Easier access via a WWW interface.
– An expanding database of existing data, predictions and comparisons.
– Reduced duplication of effort and computing resources.
– A scalable design to keep up with new data and models.

JetWeb home page

JetWeb Search Form

JetWeb Search Results

A JetWeb Fit

JetWeb Plots

The JetWeb Server
– Java object model
– Java servlets running in a Tomcat container
– Data underlying the model stored in a MySQL database (a query sketch follows):
  MC: Model, Logparms, Logfile
  Data: Paper, Plot, DataPoint
  Comparison: Fit
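To make the object model concrete, here is a hedged sketch of how a servlet layer might pull comparison results out of the tables named above. The column names and the join are assumptions for illustration, not the actual JetWeb schema; only the table names Fit and Paper come from the slide.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

/**
 * Sketch of the servlet/database boundary: look up stored Fit records
 * for a given Monte Carlo model. Schema details are hypothetical.
 */
public class FitQuery {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/jetweb"; // assumed DB name
        try (Connection conn = DriverManager.getConnection(url, "jetweb", "secret");
             PreparedStatement stmt = conn.prepareStatement(
                 "SELECT p.title, f.chi2, f.ndf " +
                 "FROM Fit f JOIN Paper p ON f.paper_id = p.id " +
                 "WHERE f.model_id = ?")) {
            stmt.setInt(1, Integer.parseInt(args[0])); // model ID from CLI
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("%s: chi2/ndf = %.1f/%d%n",
                            rs.getString(1), rs.getDouble(2), rs.getInt(3));
                }
            }
        }
    }
}
```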

The JetWeb Server

JetWeb on the Grid
Processing power:
– Currently submit jobs to separate batch farms at Manchester and UCL.
– CPU intensive: as use increases, need more power.
– The Grid should enable transparent access to a wider range of resources.
– Small(ish) self-contained executable: can run almost anywhere.
– Users could submit jobs, using their own certificates, to any resource they are entitled to use.
Storage:
– Make the database accessible as a resource in its own right.
– Use Grid mechanisms to mirror data for faster and more reliable access.

The Story so Far
– JetWeb writes out Grid job scripts as well as PBS/NQS ones (a sketch of this step follows below).
– "Semi-automatic" procedure:
  – a shell script submits jobs (sometimes)
  – output is retrieved by hand
– Limited success:
  – teething troubles with scripts
  – frequent failures of Grid components (Resource Broker, Logging & Bookkeeping, VO server)
  – difficult to configure a Grid node (CE, SE) correctly
– Four jobs run so far on the GridPP testbed via the IC and CERN Resource Brokers. (At UCL, Manchester, Oxford – thanks!) Many more to come.
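Writing out Grid scripts alongside PBS/NQS ones amounts to generating a different text wrapper around the same executable. A minimal Java illustration; the file contents (the PBS directive and the JDL attributes) are plausible stand-ins, not JetWeb's actual output:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

/**
 * Sketch of emitting the same job either as a PBS batch script or as
 * an EDG-style JDL description. Attribute choices are illustrative.
 */
public class JobWriter {

    /** Write a PBS wrapper; the cput limit is an assumed example. */
    static void writePbs(Path out, String exe) throws IOException {
        Files.writeString(out, String.join("\n",
                "#!/bin/sh",
                "#PBS -l cput=12:00:00",
                exe, ""));
    }

    /** Write a JDL file describing the same executable. */
    static void writeJdl(Path out, String exe) throws IOException {
        Files.writeString(out, String.join("\n",
                "Executable    = \"" + exe + "\";",
                "StdOutput     = \"job.out\";",
                "StdError      = \"job.err\";",
                "OutputSandbox = {\"job.out\", \"job.err\", \"histograms.hbook\"};",
                ""));
    }

    public static void main(String[] args) throws IOException {
        // "run-pythia.sh" is a hypothetical generator wrapper script.
        writePbs(Path.of("jetweb.pbs"), "run-pythia.sh");
        writeJdl(Path.of("jetweb.jdl"), "run-pythia.sh");
    }
}
```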

The Future
– More automated job submission and output retrieval.
– Running jobs with user-provided proxy certificates:
  – something similar done in "GUIDO"?
– Grid storage and database access:
  – Spitfire?
  – OGSA-DAI?
  – Combine the JetWeb DB with the more general DB of results (Durham).

Conclusions
– Gathering useful experience, but progress is slow.
– Hard work getting anything to run:
  – lack of documentation → hard for a non-expert to use
  – failure of Grid components (but more stable now)
– Need more (wo)manpower, more powerful web and DB servers, more expertise!
– Well, it is a TESTbed, and things should become easier as we move towards a production Grid.