1 Open Science Grid. Ruth Pordes, Fermilab. http://www.opensciencegrid.org

2 What is OSG? A shared, common, distributed infrastructure supporting access to contributed processing, disk, and tape resources, over production and research networks, and open for use by science collaborations.

3 OSG Snapshot: 96 resources across production & integration infrastructures; 20 Virtual Organizations plus 6 operations VOs, including 25% non-physics; ~20,000 CPUs (per-site counts from 30 to 4,000); ~6 PB of tape; ~4 PB of shared disk. Sustained through OSG submissions: 3,000-4,000 simultaneous jobs, ~10K jobs/day, ~50K CPU-hours/day, with peak test loads of 15K jobs a day, using production & research networks. [Chart: snapshot of jobs running on OSG.]

4 OSG - a Community Consortium. DOE laboratories and DOE, NSF, and other university facilities contributing computing farms and storage resources, infrastructure and user services, and user and research communities. Grid technology groups: Condor, Globus, Storage Resource Management, NSF Middleware Initiative. Global research collaborations: high energy physics (including the Large Hadron Collider), gravitational wave physics (LIGO), nuclear and astro physics, bioinformatics, nanotechnology, CS research... Partnerships with peers, development, and research groups: Enabling Grids for E-sciencE (EGEE), TeraGrid, regional & campus grids (NYSGrid, NWICG, TIGRE, GLOW...). Education: I2U2/QuarkNet sharing cosmic ray data, grid schools... [Timeline, 1999-2009: PPDG (DOE); GriPhyN and iVDGL (NSF); Trillium; Grid3; OSG (DOE+NSF).]

5 OSG sits in the middle of an environment of a Grid-of-Grids, from local to global infrastructures: inter-operating and co-operating campus, regional, community, national, and international grids, with virtual organizations doing research & education.

6 This is overlaid by virtual computational environments, serving single researchers up to large groups, from local to worldwide in scope.

7 OSG Core Activities. Integration: software, systems, and end-to-end environments; production, integration, and test infrastructures. Operations: common support mechanisms, security protections, troubleshooting. Inter-operation: across administrative and technical boundaries. OSG principles and characteristics: guaranteed and opportunistic access to shared resources; a heterogeneous environment; interfacing and federation across campus, regional, and national/international grids while preserving local autonomy; new services and technologies developed external to OSG. Each activity includes technical work with collaborators in the US and elsewhere.

8 OSG Middleware Infrastructure. A layered stack, bottom to top: existing operating systems, batch systems, and utilities; core grid technology distributions (Condor, Globus, MyProxy), shared with TeraGrid and others; the Virtual Data Toolkit (VDT), the core technologies plus software needed by stakeholders, with many components shared with EGEE; the OSG Release Cache, holding OSG-specific configurations, utilities, etc.; VO middleware (HEP: data and workflow management, etc.; biology: portals, databases, etc.; astrophysics: data replication, etc.); and, on top, user science codes and interfaces.

9 What is the VDT? A collection of software: grid software (Condor, Globus, and lots more); the Virtual Data System (the origin of the name "VDT"); utilities for monitoring, authorization, and configuration; built for >10 flavors/versions of Linux, with automated build and test for integration and regression testing. An easy installation: push a button and everything just works, with quick update processes. Responsive to user needs: a process to add new components based on community needs. A support infrastructure: front-line software support, and triaging between users and software providers for deeper issues.
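
The "automated build and test" step can be pictured as a matrix run over platforms and components. The sketch below is a minimal illustration only; the platform names, component list, and make targets are all invented, and the real VDT build infrastructure worked differently:

```python
import subprocess

# Hypothetical platform matrix; the real VDT built on >10 Linux flavors/versions.
PLATFORMS = ["rhel3-x86", "rhel4-x86", "debian3.1-x86", "sl3-x86"]
COMPONENTS = ["condor", "globus", "myproxy"]

def build_and_test(platform: str, component: str) -> bool:
    """Build one component for one platform, then run its regression tests.
    Stand-in make targets, not the actual VDT build commands."""
    build = subprocess.run(["make", f"PLATFORM={platform}", component])
    if build.returncode != 0:
        return False
    tests = subprocess.run(["make", "test", f"PLATFORM={platform}", component])
    return tests.returncode == 0

if __name__ == "__main__":
    failures = [(p, c) for p in PLATFORMS for c in COMPONENTS
                if not build_and_test(p, c)]
    print(f"{len(failures)} failing platform/component pairs: {failures}")
```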

10 Middleware to Support Security. Identification and authorization based on X.509 extended attribute certificates, in common with Enabling Grids for E-sciencE (EGEE). Addresses the need for roles of groups of researchers to control access and set access policies. Operational auditing across core OSG assets.
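
As a rough picture of the role-based idea: the extended attributes carried in a certificate (VO, group, role) map to site-local privileges. The sketch below is a toy model, with invented attribute strings and policy table, not the actual OSG or EGEE authorization interfaces:

```python
# Toy model of attribute-based authorization. A VOMS-style attribute has the
# form /VO/group/Role=role; each site maps attributes to local privileges.
SITE_POLICY = {
    "/cms/production/Role=production": {"submit", "write_storage"},
    "/cms/Role=user": {"submit"},
    "/ligo/Role=user": {"submit"},
}

def authorize(attributes: list[str], action: str) -> bool:
    """Grant the action if any certificate attribute maps to it in site policy."""
    return any(action in SITE_POLICY.get(attr, set()) for attr in attributes)

# A proxy carrying a CMS production role may both submit jobs and write storage:
assert authorize(["/cms/production/Role=production"], "write_storage")
# An ordinary LIGO member may submit jobs but not write to storage:
assert authorize(["/ligo/Role=user"], "submit")
assert not authorize(["/ligo/Role=user"], "write_storage")
```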

11 OSG Active in Control and Understanding of Risk. Security process modelled on NIST management, operational, and technical controls. Security incidents: when, not if. Organizations control their own activities: sites, communities, grids. Coordination between the operations centers of participating infrastructures. End-to-end troubleshooting involves people, software, and services from multiple infrastructures and organizations.

12 High energy physicists analyze today's data worldwide. [Figure: worldwide data flows in PB/month; the legend distinguishes high-impact and production network paths; labelled sites include the University of Science and Technology of China.]

13 Physics needs in 2008: 20-30 petabytes of tertiary automated tape storage at 12 centers worldwide, serving physics and other scientific collaborations; high availability (365x24x7) and high data access rates (1 GByte/sec) locally and remotely; evolving and scaling smoothly to meet evolving requirements, e.g. the CMS computing model with its Tier-0, Tier-1, and Tier-2 centers.
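
To put those numbers in scale, a quick back-of-the-envelope calculation (decimal units assumed):

```python
PB = 1e15    # bytes, decimal convention
rate = 1e9   # the 1 GByte/sec access rate quoted on the slide

seconds_per_petabyte = PB / rate                       # 1,000,000 s
days_per_petabyte = seconds_per_petabyte / 86_400
print(f"Reading 1 PB at 1 GB/s takes {days_per_petabyte:.1f} days")  # ~11.6

# 20-30 PB of tape across 12 centers is roughly 1.7-2.5 PB per center:
for total_pb in (20, 30):
    print(f"{total_pb} PB / 12 centers = {total_pb / 12:.1f} PB per center")
```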

14 OSG data transfer, storage, and access: GBytes/sec, 365 days a year, for CMS & ATLAS. Sustained rates of ~600 MB/sec across ~7 Tier-1s, CERN, and the Tier-2s need to reach roughly 3x within 1 year. Beijing is a Tier-2 in this set.
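
What a 3x ramp-up from the quoted 600 MB/sec implies, again as rough decimal arithmetic:

```python
current = 600e6        # bytes/sec, the sustained rate quoted on the slide
target = 3 * current   # the ~3x goal within one year

daily_tb_now = current * 86_400 / 1e12
daily_tb_target = target * 86_400 / 1e12
print(f"Now:    {current/1e9:.1f} GB/s ~= {daily_tb_now:.0f} TB/day")
print(f"Target: {target/1e9:.1f} GB/s ~= {daily_tb_target:.0f} TB/day")
# Now:    0.6 GB/s ~= 52 TB/day;  Target: 1.8 GB/s ~= 156 TB/day
```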

15 An aggressive program of end-to-end network performance: complex end-to-end routes; monitoring, configuration, and diagnosis; automated redundancy and recovery.

16 Submitting locally, executing remotely: 15,000 jobs/day across 27 sites from a handful of submission points, plus test jobs at 55K/day.
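
The "submit locally, execute remotely" pattern can be sketched, in today's terms, with HTCondor's Python bindings. These bindings postdate this talk (submissions then went through plain condor_submit description files, with Condor-G handling remote execution), so treat this as a modern illustration of the same workflow:

```python
import htcondor  # HTCondor Python bindings; a modern convenience for what
                 # condor_submit description files did at the time of this talk

# Describe one job; glide-ins/Condor-G then route it to a remote OSG site.
sub = htcondor.Submit({
    "executable": "/bin/hostname",    # trivial payload: report where we ran
    "output": "run.$(ProcId).out",
    "error": "run.$(ProcId).err",
    "log": "run.log",
})

schedd = htcondor.Schedd()              # the local submission point
result = schedd.submit(sub, count=10)   # queue 10 copies from this one point
print("Submitted cluster", result.cluster())
```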

17 Applications cross infrastructures, e.g. OSG and TeraGrid.

18 The OSG Model of Federation. [Diagram: OSG and a(nother) grid, e.g. NAREGI, each offer some Service-X (security, data, jobs, operations, information, accounting, ...); an adaptor between OSG-X and AGrid-X, behind a common interface to Service-X, lets a VO or user act across both grids.]
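
Independent of any real grid API, the adaptor idea reduces to a familiar pattern: the cross-grid VO codes against one interface, and an adaptor translates to the foreign grid's native calls. Everything below (class names, methods, return shapes) is invented for illustration:

```python
from typing import Protocol

class JobService(Protocol):
    """The 'Service-X' interface a cross-grid VO codes against (here: jobs)."""
    def submit(self, executable: str) -> str: ...

class OSGJobService:
    def submit(self, executable: str) -> str:
        return f"osg-job-{hash(executable) % 1000}"   # stand-in for a real call

class NaregiClient:
    """A foreign grid's native API, with a different shape (invented)."""
    def enqueue_task(self, spec: dict) -> dict:
        return {"task_id": 42}

class NaregiJobAdaptor:
    """The adaptor between OSG-X and AGrid-X: presents the common interface,
    translates calls to the other grid's native API."""
    def __init__(self, client: NaregiClient) -> None:
        self.client = client
    def submit(self, executable: str) -> str:
        reply = self.client.enqueue_task({"exe": executable})
        return f"naregi-{reply['task_id']}"

# A VO that acts across grids uses the one interface for both:
for service in (OSGJobService(), NaregiJobAdaptor(NaregiClient())):
    print(service.submit("/bin/hostname"))
```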

19 A local grid with an adaptor to the national grid, e.g. FermiGrid at Fermilab. [Diagram, before FermiGrid: separate resources, each a head node plus workers, for astrophysics, particle physics, theory, and common use, each with its own users; after: an existing common gateway and central services front all resources, including guest users.] Central campus-wide grid services enable efficiencies and sharing across internal farms and storage while maintaining the autonomy of individual resources. Next step: Campus Infrastructure Days, a new activity of OSG, Internet2, and TeraGrid.

20 Interoperation increasing in scope: information & monitoring; storage interfaces.

21 Summary of OSG today. Providing core services, software, and a distributed facility for an increasing set of research communities. Helping Virtual Organizations access resources on many different infrastructures. Reaching out to others to collaborate and contribute our experience and efforts.

