Open Science Grid. Ruth Pordes, Fermilab, July 17th 2006. Outline: What is OSG; Where Networking fits; Middleware; Security; Networking & OSG.

2 OSG - goals and scope. OSG is a Consortium joining efforts from disparate projects. OSG is a national distributed facility for many sciences, providing access to heterogeneous computing, network and storage resources, and common operational services, software stack, policies and procedures. Stakeholders include US HEP and NP experiments and Fermilab users, LIGO, Astrophysics… and core Grid technology groups. The OSG infrastructure is a core piece of the WLCG and delivers accountable resources and cycles for LHC experiment production and analysis. OSG relies on many external projects for development and support, e.g. for network infrastructures. OSG federates with other grids - especially TeraGrid, Campus Grids and EGEE. OSG must deliver to LHC and LIGO milestones and schedules, i.e. the 2007 and 2008 throughput and capacity needs.

3 OSG Consortium: US LHC, LIGO, Tevatron Run II, STAR, SDSS; DOE Labs - BNL, Fermilab, JLab, LBNL, SLAC; Multidisciplinary Campus Grids - GLOW, GROW, Crimson Grid; Bio-Informatics - GADU, FMRI; Condor, Globus, Storage Resource Manager.

4 OSG's world is flat - a Grid of Grids - from Local to Global. Diagram layers: Local Campus and Regional Grids (e.g. FermiGrid, NWICG); National CyberInfrastructures for Science (e.g. OSG, TeraGrid); Global Science Community Systems (e.g. CMS, D0).

5 OSG's world is flat - a Grid of Grids - from Local to Global (same diagram as the previous slide): Local Campus and Regional Grids (e.g. FermiGrid, NWICG); National CyberInfrastructures for Science (e.g. OSG, TeraGrid); Global Science Community Systems (e.g. CMS, D0). People Working Together!

6 Monitored "OSG jobs" - July OSG deployment. Off to a running start, but lots more to do. Routinely exceeding 1 Gbps at 3 sites; need to scale by x4 by 2008, with many more sites. Routinely exceeding 1000 running jobs per client; need to scale by at least x10 by 2008. Have reached a 99% success rate for 10,000-job-per-day submission; need to reach this routinely, even under heavy load.
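To make the scaling targets above concrete, here is a minimal back-of-the-envelope sketch (plain arithmetic, not OSG tooling) that derives the implied 2008 goals from the 2006 figures quoted on this slide.

```python
# Back-of-the-envelope: 2006 baselines are the numbers quoted on the slide;
# the multipliers (x4 throughput, x10 running jobs) are the stated 2008 goals.
baseline_2006 = {
    "site_throughput_gbps": 1.0,      # routinely exceeded at 3 sites
    "running_jobs_per_client": 1000,  # routinely exceeded
    "jobs_per_day": 10_000,           # at 99% success rate
}

targets_2008 = {
    "site_throughput_gbps": baseline_2006["site_throughput_gbps"] * 4,
    "running_jobs_per_client": baseline_2006["running_jobs_per_client"] * 10,
    # the slide asks for the same 99% success rate to hold routinely,
    # so the daily-job figure is kept as a floor rather than scaled
    "jobs_per_day": baseline_2006["jobs_per_day"],
}

for metric, value in targets_2008.items():
    print(f"2008 target - {metric}: {value}")
```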

7 Operations Model. In practice, support organizations often play multiple roles. Lines represent communication paths and, in our model, agreements; we have not progressed very far with agreements yet. Gray shading indicates that OSG Operations is composed of effort from all the support centers.

8 Where Networking fits - OSG Network People. TG-Networks - Shawn McKee and Don Petravick. TG-Storage currently covers data distribution and management issues also. LHC Tier-1 network groups. MonALISA monitoring & accounting service.

9 Network Connectivity - purview of the Sites & VOs (Labs and Universities). OSG uses commodity networks - ESnet, campus LANs. Some sites are well provisioned with networking - connected to StarLight etc. Some OSG sites are also on TeraGrid. Connectivity of resources ranges from full-duplex, to outgoing-only, to fully behind firewalls.

10 CMS Experiment - example of a global community grid. Data & jobs moving locally, regionally & globally within the CMS grid, transparently across grid boundaries from campus to the world. Map of CMS sites spanning OSG and EGEE: CERN, Caltech, Florida, MIT, Purdue, UCSD, UNL, Wisconsin, and sites in France, Germany, Italy, Taiwan and the UK.

11 CMS Global Operations. Job submission: 16,000 jobs per day submitted across EGEE & OSG via the INFN RB. Data movement to 39 sites worldwide for the CMS data transfer challenge; peak transfer rates of ~5 Gbps are reached. All 7 CMS OSG sites have reached the 5 TB/day goal; Caltech, Florida and UCSD exceed 10 TB/day.
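As a rough cross-check of these numbers (not part of the original slide), the daily-volume goals can be converted to sustained rates; the sketch below assumes decimal terabytes and a uniform 24-hour transfer window.

```python
# Convert TB/day goals to average sustained Gbps, assuming decimal units
# (1 TB = 10**12 bytes) and transfers spread evenly over 24 hours.
SECONDS_PER_DAY = 24 * 60 * 60

def tb_per_day_to_gbps(tb_per_day: float) -> float:
    """Convert a daily transfer volume in terabytes to an average rate in Gbps."""
    bits_per_day = tb_per_day * 1e12 * 8
    return bits_per_day / SECONDS_PER_DAY / 1e9

print(f"5 TB/day  ~ {tb_per_day_to_gbps(5):.2f} Gbps sustained")   # ~0.46 Gbps
print(f"10 TB/day ~ {tb_per_day_to_gbps(10):.2f} Gbps sustained")  # ~0.93 Gbps
```

Under these assumptions the ~5 Gbps peaks quoted above sit roughly an order of magnitude above the average rates implied by the daily goals, as one would expect for bursty transfer challenges.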

12 ATLAS Global Data Distribution. Deploys "DQ" data management on an "Edge Service": VO-specific hardware at the edge of each site. OSG plans to deploy such Edge Services based on Xen VMs to enable better site control and auditing.

13 Middleware & Service Principles: Me - My friends - The Grid. Me: thin user layer. My friends: VO services, VO infrastructure, VO admins. The Grid: anonymous sites & admins, common to all. Me & My friends are usually domain-science specific.

14 OSG Middleware Layering (from infrastructure up to applications). NSF Middleware Initiative (NMI): Condor, Globus, MyProxy. Virtual Data Toolkit (VDT) common services: NMI + VOMS, CEMon (common EGEE components), MonALISA, Clarens, AuthZ, Squid. OSG Release Cache: VDT + configuration, validation, VO management. Application layers on top: LHC Services & Framework; LIGO Data Grid; CDF, D0 SAMGrid & Framework; Bio Services & Framework; …
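For readers who prefer a structural view, here is a small illustrative sketch (not OSG code; the layer and component names are simply those listed on the slide) of the layering expressed as an ordered stack:

```python
# Illustrative only: the OSG middleware layering from the slide above,
# expressed as an ordered stack from base infrastructure to applications.
osg_middleware_stack = [
    {"layer": "NSF Middleware Initiative (NMI)",
     "components": ["Condor", "Globus", "MyProxy"]},
    {"layer": "Virtual Data Toolkit (VDT) common services",
     "components": ["NMI", "VOMS", "CEMon", "MonALISA", "Clarens", "AuthZ", "Squid"]},
    {"layer": "OSG Release Cache",
     "components": ["VDT", "configuration", "validation", "VO management"]},
    {"layer": "Applications",
     "components": ["LHC Services & Framework", "LIGO Data Grid",
                    "CDF/D0 SAMGrid & Framework", "Bio Services & Framework"]},
]

# Print the stack from bottom (infrastructure) to top (applications).
for level in osg_middleware_stack:
    print(f'{level["layer"]}: {", ".join(level["components"])}')
```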

15 OSG Middleware Deployment. Domain science requirements. OSG stakeholders and middleware developer (joint) projects. External providers: Condor, Globus, EGEE etc. Integrate into the VDT release. Test on a "VO-specific grid". Deploy on the OSG integration grid. Provision in the OSG release & deploy to OSG production. Test interoperability with EGEE and TeraGrid.

16 Integration Testbed (ITB) - a Grid for System Testing & Component Validation. Software providers deliver components and/or services to OSG applications or the facility. Activities have a responsible technical lead. Provide a Readiness Plan to Integration. Identify a Support Center. Submit a request for inclusion in the VDT - agree to the support model, supply test programs, etc. Validate on a VO-specific grid, then on the Integration Testbed. The Integration Coordinator recommends when a new component is ready for provisioning to production.

17 Security - Integrated & End-To-End. The OSG Security Program is risk based; it covers OSG assets and agreements with & between VOs, sites, and grids; we are looking at residual risks to identify priorities for work. Documents: Incident Response Plan, AUPs, Risk Assessment, Security Plan. Scope includes operational security, auditing, quality assurance, documented policies, collaboration with peers, and trust controls. We will provide common references for sites, VOs and grids (otherwise the program does not scale). Scope includes software: baseline, auditing, patching. Good collaboration with IGTF, EGEE (JSPG, MSWG), TeraGrid, the Security for Open Science CET, etc.

18 e.g. User and VO Management. A VO registers with the Operations Center: it provides the URL of its VOMS service, to be propagated to the sites; several VOMS are shared with EGEE as part of WLCG. A user registers through VOMRS or a VO administrator: the user is added to the VOMS of one or more VOs; the VO is responsible for having its users sign the AUP and for supporting its VOMS service. A site registers with the Operations Center: it signs the Service Agreement, decides which VOs to support (striving for default admit), populates GUMS from the VOMSes of all VOs, and chooses an account/UID policy for each VO & role. VOs and sites provide a Support Center contact and joint operations. For WLCG: the US ATLAS and US CMS Tier-1s are directly registered to WLCG; other support centers are propagated through the OSG GOC to WLCG.
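To illustrate the site-side mapping step described above, here is a minimal, hypothetical sketch of how a GUMS-style policy might map a VOMS (VO, role) attribute to a local account; the VO names, roles and account names are invented, and a real GUMS deployment is a Java service driven by its own policy configuration, not this Python.

```python
from typing import Optional

# Hypothetical per-site policy: which VOs are supported and how each
# (vo, role) pair maps to a local group or pool account. All names invented.
site_account_policy = {
    ("cms", "production"): {"type": "group", "account": "cmsprod"},
    ("cms", None):         {"type": "pool",  "prefix": "cms"},
    ("atlas", None):       {"type": "pool",  "prefix": "atlas"},
}

def map_to_local_account(vo: str, role: Optional[str], pool_index: int = 1) -> str:
    """Return the local account for a VOMS (vo, role) pair, or raise if unsupported."""
    # An exact (vo, role) match takes precedence over the VO-wide default.
    policy = site_account_policy.get((vo, role)) or site_account_policy.get((vo, None))
    if policy is None:
        raise PermissionError(f"VO '{vo}' is not supported at this site")
    if policy["type"] == "group":
        return policy["account"]                  # shared group account
    return f'{policy["prefix"]}{pool_index:03d}'  # pool account, e.g. cms001

print(map_to_local_account("cms", "production"))  # cmsprod
print(map_to_local_account("atlas", None, 7))     # atlas007
```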

19 e.g. Software: Fast & Critical Updates - already exercised for MySQL vulnerabilities.

20 e.g. Risk Assessment; Incident Response Plan.

21 OSG & Networking needs. OSG needs end-to-end performance and reliability of the networks it uses. Thus OSG needs access to comprehensive and usable network monitoring, capacity and performance data. OSG needs both "real time" and historical information, including: network weather maps; troubleshooting tools; performance analysis and diagnosis. Network characteristics are part of the Information Service, which aggregates static and dynamic resource information and uses an LDAP-based collection service.
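As a sketch of what publishing network characteristics in an LDAP-based Information Service could look like from a client's point of view, the example below uses the ldap3 library to query a hypothetical endpoint for network-related attributes; the host name, base DN, filter and attribute names are all invented, since the slide does not specify the schema.

```python
from ldap3 import Server, Connection, ALL

# Hypothetical endpoint and schema: the slide only says the Information
# Service is LDAP-based, so every name below is an assumption.
INFO_SERVICE_HOST = "is.example-grid.org"   # invented host name
BASE_DN = "mds-vo-name=local,o=grid"        # invented base DN

def query_network_characteristics(site_name: str):
    """Fetch illustrative network attributes for one site from the info service."""
    server = Server(INFO_SERVICE_HOST, port=2170, get_info=ALL)
    conn = Connection(server, auto_bind=True)  # anonymous, read-only bind
    conn.search(
        search_base=BASE_DN,
        search_filter=f"(&(objectClass=GridSite)(siteName={site_name}))",  # invented filter
        attributes=["networkCapacityGbps", "measuredThroughputGbps", "rttMs"],  # invented attrs
    )
    return [entry.entry_attributes_as_dict for entry in conn.entries]

if __name__ == "__main__":
    for record in query_network_characteristics("EXAMPLE-SITE"):
        print(record)
```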

22 OSG & Networking needs, cont. OSG does not include development activities or specific effort for networking; thus we rely on external projects for this development and support - help! OSG will provide common reference configurations for end points, e.g. for storage services. We need to document reference, performant site and campus network configurations; e.g., firewalls affect the throughput and capabilities of sites. Resource management at multiple interfaces is becoming more important as OSG expands. We need to provide the middleware and applications with information on the availability and allocation of network bandwidth for these services.

23 The End