The Particle Physics Data Grid Collaboratory Pilot Richard P. Mount For the PPDG Collaboration DOE SciDAC PI Meeting January 15, 2002.

sin(2  ) = 0.59 ± 0.14 (statistical) ± 0.05 (systematic) Observation of CP violation in the B 0 meson system. (Announced July 5, 2001) 32 million B 0 – anti-B 0 pairs studied: these are the July 2001 plots after months of analysis

The Top Quark Discovery (1995)

Quarks Revealed: structure inside protons and neutrons. 1990 Nobel Prize in Physics: Richard Taylor (SLAC).

Scope and Goals

Who:
- OASCR (Mary Anne Scott) and HENP (Vicky White)
- Condor, Globus, SRM, SRB (PI: Miron Livny, U. Wisconsin)
- High Energy and Nuclear Physics experiments: ATLAS, BaBar, CMS, D0, JLAB, STAR (PIs: Richard Mount, SLAC and Harvey Newman, Caltech)
- Project Coordinators: Ruth Pordes, Fermilab and Doug Olson, LBNL

Experiment data handling requirements today: petabytes of storage, Teraops/s of computing, thousands of users, hundreds of institutions, 10+ years of analysis ahead.

Focus of PPDG:
- Vertical integration of Grid middleware components into HENP experiments' ongoing work
- Pragmatic development of common Grid services and standards: data replication, storage and job management, monitoring and planning

The Novel Ideas
- End-to-end integration and deployment of experiment applications using existing and emerging Grid services.
- Deployment of Grid technologies and services in production (24x7) environments with stressful performance needs.
- Collaborative development of Grid middleware and extensions between application and middleware groups, leading to pragmatic and least-risk solutions.
- HENP experiments extend their adoption of common infrastructures to higher layers of their data analysis and processing applications.
- Much attention paid to integration, coordination, interoperability and interworking, with emphasis on incremental deployment of increasingly functional working systems.

Impact and Connections

Impact:
- Make Grids usable and useful for the real problems facing international physics collaborations and for the average scientist in HENP.
- Improve the robustness, reliability and maintainability of Grid software through early use in production application environments.
- Common software components that have general applicability, and contributions to standard Grid middleware.

Connections:
- DOE Science Grid will deploy and support Certificate Authorities and develop policy documents.
- Security and Policy for Group Collaboration provides the Community Authorization Service.
- SDM/SRM working with PPDG on common storage interface APIs and software components.
- Connections with other SciDAC projects (HENP and non-HENP).

Challenge and Opportunity

The Growth of "Computational Physics" in HENP
[Slide diagram with labels: Detector and Computing Hardware; Feature Extraction and Simulation; Large Scale Data Management; Physics Analysis and Results; Worldwide Collaboration (Grids); ~10 people and ~500 people (BaBar); ~100k LOC and ~7 million lines of code (BaBar).]

The Collaboratory Past

30 years ago an HEP "collaboratory" involved:
- Air freight of bubble chamber film (e.g. CERN to Cambridge)

20 years ago:
- Tens of thousands of tapes
- 100 physicists from all over Europe (or US)
- Air freight of tapes, 300 baud modems

10 years ago:
- Tens of thousands of tapes
- 500 physicists from US, Europe, USSR, PRC ...
- 64 kbps leased lines and air freight

The Collaboratory Present and Future

Present:
- Tens of thousands of tapes
- 500 physicists from US, Europe, Japan, FSU, PRC ...
- Dedicated intercontinental links at up to 155/622 Mbps
- Home-brewed, experiment-specific data/job distribution software (if you're lucky)

Future (~2006):
- Tens of thousands of tapes
- 2000 physicists in a worldwide collaboration
- Many links at 2.5/10 Gbps
- The Grid

End-to-End Applications & Integrated Production Systems
to allow thousands of physicists to share data and computing resources for scientific processing and analyses. Operators & users; resources: computers, storage, networks.

PPDG Focus:
- Robust data replication
- Intelligent job placement and scheduling
- Management of storage resources
- Monitoring and information of global services

Relies on Grid infrastructure:
- Security and policy
- High-speed data transfer
- Network management

These are the challenges, put to good use by the experiments.
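The "robust data replication" focus above comes down to retrying and verifying wide-area transfers until a replica is confirmed at the destination site. Below is a minimal Python sketch of that pattern, assuming a GridFTP client such as globus-url-copy is installed and on PATH; the gsiftp URLs are hypothetical placeholders, not real PPDG endpoints.

```python
import subprocess
import time

def replicate(source_url, dest_url, attempts=3, backoff_s=30):
    """Copy one file between Grid storage elements, retrying on failure.

    Assumes the globus-url-copy client (GridFTP) is available; the URLs
    passed in below are placeholders, not actual experiment endpoints.
    """
    for attempt in range(1, attempts + 1):
        result = subprocess.run(
            ["globus-url-copy", source_url, dest_url],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            return True          # transfer reported success
        print(f"attempt {attempt} failed: {result.stderr.strip()}")
        time.sleep(backoff_s)    # back off before retrying
    return False

if __name__ == "__main__":
    ok = replicate(
        "gsiftp://source.site.example/data/run1234/events.root",
        "gsiftp://dest.site.example/data/run1234/events.root",
    )
    print("replica created" if ok else "replication failed")
```

In production the experiments relied on GDMP, SRB, or the Globus replication services for this rather than ad hoc scripts; the sketch only illustrates the retry-until-confirmed idea.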

Project Activities to Date – "One-to-One" Experiment – Computer Science Developments

Replicated data sets for science analysis:
- BaBar – SRB
- CMS – Globus, European Data Grid
- STAR – Globus
- JLAB – SRB

Distributed Monte Carlo simulation job production and management:
- ATLAS – Globus, Condor
- D0 – Condor
- CMS – Globus, Condor, EDG (SC2001 demo)

Storage management interfaces:
- STAR – SRM
- JLAB – SRB
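The distributed Monte Carlo production listed above amounts to fanning one large simulation request out into many independent jobs (same executable, different random seeds) and handing them to a scheduler such as Condor-G for execution at remote Globus gatekeepers. The Python sketch below shows the fan-out step only; the submit-file keywords follow the Condor-G idiom of that era as an assumption, and the gatekeeper host and simulate.sh executable are placeholders, not the experiments' actual production setup.

```python
import subprocess
from pathlib import Path

# Hypothetical submit-file template in the Condor-G style of the era;
# the gatekeeper host and executable are placeholders, not real sites.
SUBMIT_TEMPLATE = """\
universe        = globus
globusscheduler = gatekeeper.site.example/jobmanager-pbs
executable      = simulate.sh
arguments       = --seed {seed} --events {events}
output          = mc_{seed}.out
error           = mc_{seed}.err
log             = mc_{seed}.log
queue
"""

def submit_production(total_events: int, events_per_job: int) -> None:
    """Split a Monte Carlo request into independent jobs and submit each."""
    n_jobs = (total_events + events_per_job - 1) // events_per_job
    for seed in range(n_jobs):
        submit_file = Path(f"mc_{seed}.sub")
        submit_file.write_text(
            SUBMIT_TEMPLATE.format(seed=seed, events=events_per_job)
        )
        # condor_submit is the standard Condor client command; whether the
        # remote gatekeeper accepts the job depends on site configuration.
        subprocess.run(["condor_submit", str(submit_file)], check=False)

if __name__ == "__main__":
    submit_production(total_events=1_000_000, events_per_job=10_000)
```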

Cross-Cut (All-Collaborator) Activities
- Certificate Authority policy and authentication: working with the SciDAC Science Grid, SciDAC Security and Policy for Group Collaboration, and ESnet to develop policies and procedures. PPDG experiments will act as early testers and adopters of the CA.
- Monitoring of networks, computers, storage and applications: collaboration with GriPhyN. Developing use cases and requirements; evaluating and analysing existing systems with many components (D0 SAM, Condor pools, etc.).
- SC2001 demo: architecture components and interfaces, in collaboration with GriPhyN. Defining services and interfaces for analysis, comparison, and discussion with other architecture definitions such as the European Data Grid.
- International test beds: iVDGL and experiment applications.
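At its simplest, the monitoring activity above is periodic probing of Grid services at the collaborating sites and publication of the results. The toy Python probe loop below illustrates the idea under that assumption; the host names are made up, and real PPDG/GriPhyN monitoring used dedicated tools rather than a script like this.

```python
import socket
import time

# Hypothetical endpoints to watch; real deployments monitored gatekeepers,
# GridFTP servers and storage systems at the collaborating sites.
SERVICES = [
    ("gatekeeper.site-a.example", 2119),   # standard Globus gatekeeper port
    ("gridftp.site-b.example", 2811),      # standard GridFTP control port
]

def probe(host: str, port: int, timeout_s: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    while True:
        for host, port in SERVICES:
            status = "up" if probe(host, port) else "DOWN"
            print(f"{time.strftime('%H:%M:%S')} {host}:{port} {status}")
        time.sleep(60)   # poll once a minute
```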

Common Middleware Services

Robust file transfer and replica services:
- SRB replication services
- Globus replication services
- Globus robust file transfer
- GDMP application replication layer: common project between European Data Grid Work Package 2 and PPDG

Distributed job scheduling and resource management:
- Condor-G, DAGMan, GRAM; SC2001 demo with GriPhyN

Storage resource interface and management:
- Common API with EDG, SRM

Standards committees:
- Internet2 HENP Working Group
- Global Grid Forum
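The replica services listed above all revolve around one mapping: a logical file name to the physical copies registered at different sites. The toy in-memory sketch below shows that mapping in Python; the file and site names are invented, and real deployments used the Globus replica catalog or SRB rather than a dictionary.

```python
from collections import defaultdict

class ReplicaCatalog:
    """Minimal logical-name -> physical-replica mapping (illustration only)."""

    def __init__(self) -> None:
        self._replicas: dict[str, list[str]] = defaultdict(list)

    def register(self, logical_name: str, physical_url: str) -> None:
        """Record that a physical copy of a logical file exists."""
        if physical_url not in self._replicas[logical_name]:
            self._replicas[logical_name].append(physical_url)

    def lookup(self, logical_name: str) -> list[str]:
        """Return all known physical copies of a logical file."""
        return list(self._replicas.get(logical_name, []))

if __name__ == "__main__":
    catalog = ReplicaCatalog()
    # Hypothetical entries: one logical dataset replicated at two sites.
    catalog.register("lfn:/babar/run1234/events.root",
                     "gsiftp://site-a.example/data/run1234/events.root")
    catalog.register("lfn:/babar/run1234/events.root",
                     "gsiftp://site-b.example/data/run1234/events.root")
    print(catalog.lookup("lfn:/babar/run1234/events.root"))
```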

Grid Realities
[Chart: BaBar Offline Computing Equipment, bottom-up cost estimate (December 2001); based only on costs we already expect, to be revised annually.]

Grid Realities

PPDG World
[Diagram: an experiment, PPDG, the HENP Grid, and SciDAC connections.]