Presentation transcript:

Other servers and clients: a Java client, ROOT (analysis tool), IGUANA (CMS visualization tool), the ROOT-CAVES client (analysis sharing tool), … any application that can make XML-RPC/SOAP calls (see the client sketch below).

LHC Data Grid Hierarchy (diagram, developed at Caltech): the experiment's online system feeds the Tier 0+1 center at CERN (petabytes of disk, tape robot, physics data cache at ~PByte/sec), which connects at ~10-40 Gbps to Tier 1 centers (FNAL, IN2P3, INFN, RAL), and on through Tier 2 centers to Tier 3 and Tier 4 institute workstations at 0.1 to 10 Gbps. Tens of petabytes of data are expected at the start, and an exabyte ~5-7 years later. CERN/outside resource ratio ~1:2; Tier0/(ΣTier1)/(ΣTier2) ~1:1:1. Emerging vision: a richly structured, global dynamic system.

GRID Analysis Environment for LHC Particle Physics (diagram): clients connect over http/https to web servers that front services and third-party applications.

GAE development (services):
- MCPS: policy-based job submission and workflow management portal, developed in collaboration with FNAL and UCSD.
- JobStatus: access to job status information through Clarens and MonALISA, developed in collaboration with NUST.
- JobMon: a secure, authenticated method for users to access running Grid jobs, developed in collaboration with FNAL.
- BOSS: uniform job submission layer, developed in collaboration with INFN.
- SPHINX: Grid scheduler, developed at UFL.
- CAVES: analysis code sharing environment, developed at UFL.

Core services (Clarens): discovery, authentication, proxy, remote file access, access control management, virtual organization (VO) management. (Diagram: service modules for VO management, authentication, authorization, logging, remote file access, shell and key escrow, used by clients such as MonALISA (monitoring), ROOT (analysis), the Clarens portal and IGUANA (visualization).)

Clarens Grid Portal: secure certificate-based access to services through a browser. Grid Enabled Analysis: the user's view of a collaborative desktop.

This work is partly supported by the Department of Energy as part of the Particle Physics DataGrid project (DOE/DHEP and MICS) and by the National Science Foundation (NSF/MPS and CISE).
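As the list above notes, any application that can make XML-RPC/SOAP calls can act as a GAE client. The sketch below shows, in Python, roughly what such a call looks like; the endpoint URL and the remote method names are illustrative placeholders rather than the actual Clarens API, and a real deployment would authenticate with a grid certificate.

    # Minimal sketch of an XML-RPC client for a Clarens-style service.
    # The URL and method names below are hypothetical examples.
    import xmlrpc.client

    PORTAL_URL = "https://clarens.example.org:8443/clarens/"  # placeholder endpoint

    server = xmlrpc.client.ServerProxy(PORTAL_URL)

    # Standard XML-RPC introspection: ask the server which methods it exposes,
    # analogous in spirit to the Clarens "discovery" core service.
    for method in server.system.listMethods():
        print(method)

    # Illustrative call in the spirit of the remote file access service.
    # listing = server.file.ls("/store/user/example")

A Java client, ROOT (via the Clarens plug-in) or any other XML-RPC/SOAP-capable application would issue the same kind of calls over http/https.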
Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the Department of Energy or the National Science Foundation.

More information: GAE web page; Clarens web page; MonALISA; SPHINX.

Scientific Exploration at the High Energy Physics Frontier: physics experiments consist of large collaborations; CMS and ATLAS each encompass 2000 physicists from approximately 150 institutes (including physicists at some 30 institutes in the US).

HEP Challenges: Frontiers of Information Technology
- Rapid access to petabyte/exabyte data stores.
- Secure, efficient, transparent access to heterogeneous, worldwide-distributed computing and data.
- A collaborative, scalable distributed environment that enables physics analysis for thousands of physicists.
- Tracking the state and usage patterns of computing and data resources, to make possible rapid turnaround and efficient utilization of resources.

Clarens provides a ROOT plug-in that allows the ROOT user to access Grid services via the portal, for example to access ROOT files at remote locations.

The Clarens Web Service Framework: a portal system providing a common infrastructure for deploying Grid-enabled web services.
- Features: access control to services; session management; service discovery and invocation; virtual organization management; PKI-based security; good performance (over 1400 calls per second).
- Role in GAE: connects clients to Grid or analysis applications, and acts in concert with other Clarens servers to form a peer-to-peer network of service providers.
- Two implementations: Python/C using the Apache web server, and Java using Tomcat servlets.
(A minimal sketch of this service pattern appears after the architecture overview below.)

Monitoring (figure: the SC04 Bandwidth Challenge, 101 Gbps): MonALISA is a distributed monitoring service system using Jini/Java and WSDL/SOAP technologies. It acts as a dynamic service system and provides the functionality to be discovered and used by any other services or clients that require such information. It can integrate existing monitoring tools and procedures to collect parameters describing computational nodes, applications and network performance, and it delivers the monitoring information from large, distributed systems to a set of loosely coupled "higher-level services" in a flexible, self-describing way. This is part of a loosely coupled service architectural model for effective resource utilization in large, heterogeneous distributed centers.

Policy-based access to workflows (diagram): a Tier 2 site with workflow execution, network, compute site, scheduler, catalogs, Grid services and a web server; an execution priority manager, a grid-wide execution service and data management; fully-abstract, partially-abstract and fully-concrete planners; virtual data, replica, metadata, applications and monitoring catalogs; components such as Sphinx, MonALISA, Clarens, BOSS, ORCA, ROOT and FAMOS communicating over HTTP and SOAP, with discovery, ACL management and certificate-based access.

The GAE Architecture (diagram): generic GAE components (workflow definitions, storage, reservation, planning, monitoring, global command and control, clients) are paired with implementations developed within the physics and CS communities (BOSS, Runjob, MCPS, JobMon, DCache, MonALISA, JobStatus), communicating via XML-RPC, JSON and RMI; MonALISA provides a global view of the system and is proactive in minimizing Grid traffic jams.
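To make the web-service-framework role concrete, here is a minimal sketch, using Python's standard xmlrpc.server module, of the pattern Clarens implements: a server exposing a function as a named service and gating it with an access-control check. The service name, ACL layout and handler are invented for illustration; the real framework adds PKI-based authentication, session management, VO management and discovery on top of Apache (Python/C) or Tomcat (Java).

    # Toy XML-RPC service with a simple access-control check (illustration only).
    from xmlrpc.server import SimpleXMLRPCServer

    # Hypothetical ACL: which identities may invoke which service.
    ACL = {"job.submit": {"/DC=org/DC=example/CN=Some Physicist"}}

    def submit_job(identity, job_description):
        """Accept a job only if the caller's identity is on the ACL for 'job.submit'."""
        if identity not in ACL.get("job.submit", set()):
            raise PermissionError("access denied for %s" % identity)
        return {"status": "queued", "job": job_description}

    server = SimpleXMLRPCServer(("localhost", 8080), allow_none=True)
    server.register_function(submit_job, "job.submit")  # expose under a dotted service name
    server.register_introspection_functions()           # enables system.listMethods discovery
    # server.serve_forever()                             # start serving (commented out here)

In the GAE, comparable requests would be routed behind the Clarens portal to services such as MCPS or BOSS rather than handled in the portal itself.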
Analysis clients talk standard protocols to the Clarens Grid Service Portal, enabling selection of workflows (e.g. Monte Carlo simulation, data transfer, analysis). Generated jobs are submitted to the scheduler, which creates a plan based on monitoring information (a toy planning sketch is given at the end of this transcript); users submit jobs and receive feedback on job status. MonALISA-based monitoring services provide global views of the system, and MonALISA-based components proactively manage sites and networks based on monitoring information. The Clarens portal and the MonALISA clients hide the complexity of the Grid services from the client, but can expose it in as much detail as required, e.g. for monitoring. Other clients: web browser, ROOT (analysis tool), Python, Cojac (detector visualization), IGUANA (CMS visualization tool), the "Analysis Flight Deck", JobMon client, JobStatus client, MCPS client.

Grid Analysis Environment (GAE): the "acid test" for Grids, crucial for the LHC experiments.
- A large, diverse, distributed community of users.
- Support for hundreds to thousands of analysis tasks, shared among dozens of sites, with widely varying task requirements and priorities.
- Need for priority schemes, robust authentication and security.
- Operates in a severely resource-limited and policy-constrained global system, dominated by collaboration policy and strategy.
- Requires real-time monitoring, task and workflow tracking; decisions often based on a global system view.
- Where physicists learn to collaborate on analysis across the country and across world regions.
The focus is on the LHC CMS experiment, but the architecture and services can potentially be used in other (physics) analysis environments.
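As described above, jobs are handed to a scheduler that builds a plan from monitoring information, under priority schemes set by collaboration policy. The toy Python sketch below illustrates that planning step; the site names, metrics, scoring rule and priority cut are all invented for illustration, whereas in the GAE this role is played by Sphinx and BOSS using MonALISA monitoring data.

    # Toy planner: combine (simulated) monitoring data with a policy-driven
    # priority to choose a site for a job. Purely illustrative.
    from dataclasses import dataclass

    @dataclass
    class SiteStatus:
        name: str
        free_cpus: int        # idle CPUs reported by monitoring
        queue_length: int     # jobs already waiting at the site
        network_mbps: float   # measured bandwidth towards the data

    def score(site: SiteStatus) -> float:
        """Higher is better: reward free CPUs and bandwidth, penalize long queues."""
        return site.free_cpus * site.network_mbps / (1 + site.queue_length)

    def plan(job_priority: int, sites: list) -> SiteStatus:
        """High-priority jobs may target any site; low-priority jobs are restricted
        to sites with idle CPUs (a crude stand-in for a collaboration policy)."""
        candidates = sites if job_priority > 5 else [s for s in sites if s.free_cpus > 0]
        return max(candidates, key=score)

    sites = [SiteStatus("Tier2_A", free_cpus=120, queue_length=40, network_mbps=900.0),
             SiteStatus("Tier2_B", free_cpus=0,   queue_length=5,  network_mbps=2000.0)]
    print(plan(job_priority=3, sites=sites).name)   # -> Tier2_A

A real plan would also consult catalogs and data locations, and would be re-evaluated as MonALISA reports new site and network conditions.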