TeraGrid CTSS Plans and Status
Dane Skow for Lee Liming and JP Navarro
OSG Consortium Meeting, 22 August 2006

Common TeraGrid Software & Services (CTSS)
[Slide diagram: Science Gateways and users reach TeraGrid resources at three service levels (Level 1: Login Access, Level 2: Grid Service, Level 3: TeraGrid Service); TG Compute, Data, and Viz resources each run CTSS.]

CTSS v3 Status
GT4-based services
– Description at
– Cutover to GT4 services in production on June 19
– Moving aggressively to WS services
Continue to work through deployment configuration issues
– Primarily MPI related
– 15 unique platforms to build
– Difficulties keeping up with resource upgrades and with new 64-bit architectures

Focus Areas
Community Software Areas
– Persistent area for community-supported software
– Very useful for multi-grid work
MDS4
– Initial focus is support of user portal overview of resources
– Extend to general service registry
INCA, Gateways
Coordination with VDT Build & Test

Planned TeraGrid/NMI Automated Build Framework
[Slide diagram showing the planned build pipeline: a TG submit host ("grandcentral") with TG build tools and build scripts from TeraGrid CVS; the NMI Build/Test Framework (Condor / Condor-G / DAGMan); TG build hosts running an NMI framework wrapper, Condor startd, and TG software build scripts; TeraGrid software sources, binaries, build specs, and build job info feeding the NMI results DB. Legend: existing NMI components, evolved TG components, new TG components.]
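To make the planned pipeline more concrete, here is a minimal sketch of how one build could be expressed as Condor-G jobs chained by a DAGMan workflow, the job-management layer the slide names. The host name, script paths, and package names are hypothetical placeholders, not the actual TeraGrid/NMI configuration.

```python
"""Minimal sketch: emit a Condor-G submit description and a DAGMan DAG for
one TeraGrid software build. Host names, jobmanager names, and script paths
are hypothetical, not the real TeraGrid/NMI setup."""

from pathlib import Path

BUILD_HOST = "tg-build.example.org"  # hypothetical TG build host

SUBMIT_TEMPLATE = """\
universe      = grid
grid_resource = gt2 {host}/jobmanager-fork
executable    = {script}
arguments     = {args}
output        = {name}.out
error         = {name}.err
log           = build.log
queue
"""

def write_submit(name: str, script: str, args: str) -> Path:
    """Write a Condor-G submit description for one build step."""
    path = Path(f"{name}.sub")
    path.write_text(SUBMIT_TEMPLATE.format(
        host=BUILD_HOST, script=script, args=args, name=name))
    return path

def write_dag(steps: list[str]) -> Path:
    """Write a DAGMan file that runs the steps strictly in order."""
    lines = [f"JOB {s} {s}.sub" for s in steps]
    lines += [f"PARENT {a} CHILD {b}" for a, b in zip(steps, steps[1:])]
    path = Path("build.dag")
    path.write_text("\n".join(lines) + "\n")
    return path

if __name__ == "__main__":
    # Hypothetical build-then-test sequence for one software component.
    write_submit("build_gridftp", "/usr/local/tg/build.sh", "gridftp")
    write_submit("test_gridftp", "/usr/local/tg/test.sh", "gridftp")
    write_dag(["build_gridftp", "test_gridftp"])
    # Submit the workflow with: condor_submit_dag build.dag
```

DAGMan then enforces the build-before-test ordering and records job outcomes, which is the role the NMI framework plays in the diagram.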

General Character of CTSSv4
To meet project milestones, CTSS changes must accelerate in the coming years.
Process
– Process will be the focus of CTSSv4.
– Significant changes in who and how, not so much in what.
– Process changes now will enable us to more effectively manage content changes in the future.
Content
– Newer component versions that include features we need
– More allowable versions
– Support for more platforms

CTSS 4 Process Goals
Change the focus from software packages to capabilities.
– Software should be deployed to meet user capability requirements, not to satisfy abstract agreements.
– Which capabilities ought to be coordinated, and why?
Be explicit about which capabilities are expected to be on which systems.
– The CTSS core (mandatory capabilities) is radically smaller.
– Each RP explicitly decides which additional capabilities it will provide, based on the intended purpose of each system.
Make the process of defining CTSS more open and inclusive and more reflective of the TeraGrid project structure (GIG + multiple RPs, working groups, RATs, etc.).
– GIG/RP working groups and areas have an open mechanism for defining, designing, and delivering CTSS capabilities.
– Expertise is distributed, so the process should be distributed also.

Goals, Continued
Improve coordination significantly.
– Changes are coordinated more explicitly with more TeraGrid sub-teams.
– Each sub-team has a part in change planning.

CTSS 4 Strategy
Break the CTSS monolith into multiple capability modules.
Employ a formal change planning process.

CTSS “Kits”
Reorganize CTSS into a series of kits.
– A kit provides a small set of closely related capabilities (job execution service, dataset hosting service, high-performance data movement, global filesystem, etc.).
– A kit is (as much as possible) independent from other kits.
Each kit includes:
– a definition of the kit that focuses on purpose, requirements, and capabilities, including a problem statement and a design;
– a set of software packages that RP administrators can install on their system(s) in order to implement the design;
– documentation for RP administrators on how to install and test the software;
– Inca tests that check whether a given system satisfies the stated requirements;
– softenv definitions that allow users to access the software.
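To make the kit structure concrete, the sketch below models a kit description as plain data. The schema, package names, test name, softenv key, and URL are hypothetical illustrations of the elements listed above, not the real CTSS kit metadata format.

```python
"""Sketch of a capability-kit description as plain data. Field names and
example values are hypothetical, not the actual CTSS kit registration format."""

from dataclasses import dataclass

@dataclass
class CapabilityKit:
    name: str                 # human-readable kit name
    capabilities: list[str]   # what the kit lets users do
    packages: list[str]       # software RP administrators install
    inca_tests: list[str]     # tests that verify the stated requirements
    softenv_keys: list[str]   # keys users add to access the software
    admin_docs: str = ""      # install/test documentation location

data_movement = CapabilityKit(
    name="Data Movement",
    capabilities=["high-performance file transfer"],
    packages=["gridftp-server", "rft"],                    # hypothetical names
    inca_tests=["gridftp_transfer_check"],                 # hypothetical test
    softenv_keys=["+ctss-data-movement"],                  # hypothetical key
    admin_docs="https://example.org/ctss/data-movement",   # placeholder URL
)
```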

The “Core” Kit
Provides the capabilities that are absolutely necessary for a resource to meet the most basic integrative requirements of the TeraGrid.
– Common authentication, authorization, auditing, and accounting capabilities
– A system-wide registry of capabilities and service information
– A verification & validation mechanism for capabilities
– System-wide usage reporting capabilities
This is much smaller than the current set of “required” CTSSv3 components.
Unlike other capability kits, the Core Kit is focused on TeraGrid operations, as opposed to user capabilities.

Core Kit Provides Integrative Services
Authentication, Authorization, Auditing, Accounting Mechanisms
– Supports TeraGrid allocation processes
– Allows coordinated use of multiple systems
– Supports TeraGrid security policies
– Goal: Forge a useful link between campus authentication systems, science gateway authentication systems, and TeraGrid resources
Service Registry
– Goal: Provide a focal point for registering the presence of services and capabilities on TeraGrid resources
– Goal: Support documentation, testing, automatic discovery, and automated configuration for distributed services (tgcp)
Verification & Validation
– Independently verifies the availability of capabilities on each resource
– Goal: Focus more clearly on the specific capabilities each resource is intended to offer
Usage Reporting
– Goal: Support the need to monitor and track usage of TeraGrid capabilities
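As a minimal illustration of the verification & validation idea (not the actual Inca test suites), the sketch below simply probes whether a resource's service ports accept TCP connections. The host name and the capability-to-port mapping are assumptions made for the example.

```python
"""Minimal verification sketch: check that a resource's service endpoints
accept TCP connections. This only illustrates independent capability
verification; the real CTSS verification relied on Inca test suites."""

import socket

# Hypothetical mapping of capabilities to service ports for this example.
CAPABILITY_PORTS = {
    "gridftp": 2811,
    "gsissh": 2222,
    "ws-gram": 8443,
}

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def verify(host: str) -> dict[str, bool]:
    """Probe each capability's port and report pass/fail per capability."""
    return {cap: port_open(host, port) for cap, port in CAPABILITY_PORTS.items()}

if __name__ == "__main__":
    # "tg-login.example.org" is a placeholder, not a real TeraGrid host.
    for capability, ok in verify("tg-login.example.org").items():
        print(f"{capability}: {'OK' if ok else 'UNREACHABLE'}")
```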

CTSS Capability Kits
Each CTSS capability kit is an opportunity for resource providers to deploy a specific capability in coordination with other RPs.
– Focal point for collecting and clarifying user requirements (via a RAT)
– Focal point for designing, documenting, and implementing a capability (via a WG)
– Focal point for deploying the capability (via the software WG)
RPs can explicitly decide and declare which capabilities they intend to provide on each resource.
– What is appropriate for each resource?
– What is the RP’s strategy for delivering service to the community?
TeraGrid’s system-wide registry tracks which CTSS capabilities are provided by each resource.
– By declaration, by registration, and by verification
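The following sketch is a hypothetical illustration of how a registry might reconcile what each RP declares with what verification actually finds; the resource names, capability names, and data structures are invented for the example and do not represent the actual TeraGrid registry service.

```python
"""Hypothetical sketch of reconciling declared vs. verified capabilities per
resource. All names and data are illustrative only."""

# What each resource provider declares it intends to provide.
DECLARED = {
    "rp-cluster-a": {"core", "remote-compute", "data-movement"},
    "rp-viz-b": {"core", "remote-login"},
}

# What independent verification last reported as working.
VERIFIED = {
    "rp-cluster-a": {"core", "remote-compute"},
    "rp-viz-b": {"core", "remote-login"},
}

def registry_report(declared, verified):
    """For each resource, list capabilities declared but not yet verified."""
    report = {}
    for resource, caps in declared.items():
        missing = caps - verified.get(resource, set())
        report[resource] = sorted(missing)
    return report

if __name__ == "__main__":
    for resource, missing in registry_report(DECLARED, VERIFIED).items():
        status = ("all declared capabilities verified" if not missing
                  else "not verified: " + ", ".join(missing))
        print(f"{resource}: {status}")
```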

CTSS Capability Kits
Kits may be defined, designed, implemented, packaged, documented, and supported by a broad range of people:
– RATs
– Working groups
– GIG areas
– Resource providers
– Other communities
The key feature of a CTSS capability is that its deployment is coordinated among RPs.

CTSS 4 Capability Kits
– TeraGrid Core Capabilities: authorization, usage reporting, service registration, V & V
– Application Development & Runtime: Globus libraries, compilers, softenv, MPIs, etc.
– Remote Compute: GRAM, WS-GRAM
– Remote Login: GSI-SSH, clients
– Science Workflow Support: Condor (G), GridShell
– Data Management: RLS, SRB
– Data Movement: GridFTP, RFT
– Wide Area Filesystem: GPFS-WAN

Change Coordination Process
Motivation
– The new CTSS kit structure results in more potential sources of changes.
– Scaling of resources (in both number and diversity) results in more potential points of confusion and coordination failure.
– In general, we’d like to do this better.
Goals
– Clarity of purpose for changes
– Help with documentation
– Help in identifying points requiring coordination
– Tracking of deployment steps and progress
– Easy to use for small changes, helpful for large changes

Change Coordination Structure
Change Description Data Sheet
– Collects the basic facts about a proposed change
– Provides everyone with the information needed to understand what is planned and who is involved, and to identify potential risks
– Will provide a record of changes
Change Planning Checklist
– Helps the planning team brainstorm about the necessary points of coordination
– Provides opportunities for recording the coordination plans for each sub-team (docs, user services, RP admins, security, etc.)
– Helps get coordination started early
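As a hedged illustration of the kind of record these documents might capture, the sketch below models a change description and a planning checklist as simple data. All field names, the sub-team list, and the example change are hypothetical, not the actual TeraGrid forms.

```python
"""Sketch of a change-description record and a planning checklist, as a
hypothetical illustration of the documents described above."""

from dataclasses import dataclass, field

@dataclass
class ChangeDescription:
    title: str                      # short name of the proposed change
    affected_kits: list[str]        # which CTSS kits the change touches
    affected_resources: list[str]   # which RP systems must deploy it
    motivation: str                 # why the change is needed
    risks: list[str]                # identified risks
    deployment_steps: list[str] = field(default_factory=list)

# Sub-teams the planning checklist walks through (illustrative list).
COORDINATION_CHECKLIST = [
    "documentation",
    "user services",
    "RP administrators",
    "security",
    "operations",
]

example = ChangeDescription(
    title="Upgrade data-movement kit",      # hypothetical change
    affected_kits=["Data Movement"],
    affected_resources=["rp-cluster-a"],    # hypothetical resource
    motivation="newer component version with needed features",
    risks=["service downtime during cutover"],
)
```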