
1 Grid Computing
Ian Foster
Mathematics and Computer Science Division, Argonne National Laboratory
and Department of Computer Science, The University of Chicago
http://www.mcs.anl.gov/~foster
Seminar at Fermilab, October 31st, 2001

2 Grid Computing

3 Issues I Will Address
• Grids in a nutshell
  – Problem statement
  – Major Grid projects
• Grid architecture
  – Globus Project™ and Toolkit™
• HENP data grid projects
  – GriPhyN, PPDG, iVDGL

4 The Grid Problem
Resource sharing & coordinated problem solving in dynamic, multi-institutional virtual organizations

5 Grid Communities & Applications: Data Grids for High Energy Physics
[Diagram: the tiered LHC computing model. The online system feeds the CERN Computer Centre (Tier 0, offline processor farm ~20 TIPS) at ~100 MBytes/sec from a ~PBytes/sec physics data cache; Tier 1 regional centres (FermiLab ~4 TIPS; France, Italy, Germany) connect at ~622 Mbits/sec (or air freight, deprecated); Tier 2 centres (~1 TIPS each, e.g., Caltech) connect at ~622 Mbits/sec; institutes (~0.25 TIPS) and physicist workstations (Tier 4) receive ~1 MBytes/sec. 1 TIPS is approximately 25,000 SpecInt95 equivalents.]
• There is a "bunch crossing" every 25 nsec; there are 100 "triggers" per second; each triggered event is ~1 MByte in size
• Physicists work on analysis "channels": each institute will have ~10 physicists working on one or more channels; data for these channels should be cached by the institute server
Image courtesy Harvey Newman, Caltech

6 Grid Communities and Applications: Network for Earthquake Eng. Simulation
• NEESgrid: national infrastructure to couple earthquake engineers with experimental facilities, databases, computers, & each other
• On-demand access to experiments, data streams, computing, archives, collaboration
NEESgrid: Argonne, Michigan, NCSA, UIUC, USC (www.neesgrid.org)

7 Access Grid
• Collaborative work among large groups
• ~50 sites worldwide
• Use Grid services for discovery, security
• www.scglobal.org
[Photo labels: ambient mic (tabletop), presenter mic, presenter camera, audience camera]
Access Grid: Argonne, others (www.accessgrid.org)

8 Why Grids?
• A biochemist exploits 10,000 computers to screen 100,000 compounds in an hour
• 1,000 physicists worldwide pool resources for petaop analyses of petabytes of data
• Civil engineers collaborate to design, execute, & analyze shake table experiments
• Climate scientists visualize, annotate, & analyze terabyte simulation datasets
• An emergency response team couples real-time data, weather model, population data

9 Why Grids? (contd)
• A multidisciplinary analysis in aerospace couples code and data in four companies
• A home user invokes architectural design functions at an application service provider
• An application service provider purchases cycles from compute cycle providers
• Scientists working for a multinational soap company design a new product
• A community group pools members' PCs to analyze alternative designs for a local road

10 Elements of the Problem
• Resource sharing
  – Computers, storage, sensors, networks, …
  – Sharing always conditional: issues of trust, policy, negotiation, payment, …
• Coordinated problem solving
  – Beyond client-server: distributed data analysis, computation, collaboration, …
• Dynamic, multi-institutional virtual organizations
  – Community overlays on classic org structures
  – Large or small, static or dynamic

11 A Little History
• Early 90s
  – Gigabit testbeds, metacomputing
• Mid to late 90s
  – Early experiments (e.g., I-WAY), software projects (e.g., Globus), application experiments
• 2001
  – Major application communities emerging
  – Major infrastructure deployments are underway
  – Rich technology base has been constructed
  – Global Grid Forum: >1000 people on mailing lists, 192 orgs at last meeting, 28 countries

12 The Grid World: Current Status
• Dozens of major Grid projects in scientific & technical computing/research & education
  – Deployment, application, technology
• Considerable consensus on key concepts and technologies
  – Membership, security, resource discovery, resource management, …
• Global Grid Forum has emerged as a significant force
  – And first "Grid" proposals at IETF

13 Selected Major Grid Projects (Name; URL & sponsors; Focus)
• Access Grid (www.mcs.anl.gov/FL/accessgrid; DOE, NSF): Create & deploy group collaboration systems using commodity technologies
• BlueGrid (IBM): Grid testbed linking IBM laboratories
• DISCOM (www.cs.sandia.gov/discom; DOE Defense Programs): Create operational Grid providing access to resources at three U.S. DOE weapons laboratories
• DOE Science Grid (sciencegrid.org; DOE Office of Science): Create operational Grid providing access to resources & applications at U.S. DOE science laboratories & partner universities
• Earth System Grid (ESG) (earthsystemgrid.org; DOE Office of Science): Delivery and analysis of large climate model datasets for the climate research community
• European Union (EU) DataGrid (eu-datagrid.org; European Union): Create & apply an operational grid for applications in high energy physics, environmental science, bioinformatics

14 Selected Major Grid Projects (continued)
• EuroGrid, Grid Interoperability (GRIP) (eurogrid.org; European Union): Create technologies for remote access to supercomputer resources & simulation codes; in GRIP, integrate with Globus
• Fusion Collaboratory (fusiongrid.org; DOE Office of Science): Create a national computational collaboratory for fusion research
• Globus Project (globus.org; DARPA, DOE, NSF, NASA, Microsoft): Research on Grid technologies; development and support of Globus Toolkit; application and deployment
• GridLab (gridlab.org; European Union): Grid technologies and applications
• GridPP (gridpp.ac.uk; U.K. eScience): Create & apply an operational grid within the U.K. for particle physics research
• Grid Research Integration Development & Support Center (grids-center.org; NSF): Integration, deployment, support of the NSF Middleware Infrastructure for research & education

15 Selected Major Grid Projects (continued)
• Grid Application Development Software (GrADS) (hipersoft.rice.edu/grads; NSF): Research into program development technologies for Grid applications
• Grid Physics Network (GriPhyN) (griphyn.org; NSF): Technology R&D for data analysis in physics experiments: ATLAS, CMS, LIGO, SDSS
• Information Power Grid (ipg.nasa.gov; NASA): Create and apply a production Grid for aerosciences and other NASA missions
• International Virtual Data Grid Laboratory (iVDGL) (ivdgl.org; NSF): Create international Data Grid to enable large-scale experimentation on Grid technologies & applications
• Network for Earthquake Eng. Simulation Grid (NEESgrid) (neesgrid.org; NSF): Create and apply a production Grid for earthquake engineering
• Particle Physics Data Grid (PPDG) (ppdg.net; DOE Science): Create and apply production Grids for data analysis in high energy and nuclear physics experiments

16 Selected Major Grid Projects (continued)
• TeraGrid (teragrid.org; NSF): U.S. science infrastructure linking four major resource sites at 40 Gb/s
• UK Grid Support Center (grid-support.ac.uk; U.K. eScience): Support center for Grid projects within the U.K.
• Unicore (BMBFT): Technologies for remote access to supercomputers
Also many technology R&D projects: e.g., Condor, NetSolve, Ninf, NWS
See also www.gridforum.org

17 Grid Architecture & Globus Toolkit™
• The question:
  – What is needed for resource sharing & coordinated problem solving in dynamic virtual organizations (VOs)?
• The answer:
  – Major issues identified: membership, resource discovery & access, …
  – Grid architecture captures core elements, emphasizing pre-eminent role of protocols
  – Globus Toolkit™ has emerged as de facto standard for major protocols & services

18 Layered Grid Architecture (By Analogy to Internet Architecture)
• Fabric: "Controlling things locally" – access to, & control of, resources
• Connectivity: "Talking to things" – communication (Internet protocols) & security
• Resource: "Sharing single resources" – negotiating access, controlling use
• Collective: "Coordinating multiple resources" – ubiquitous infrastructure services, app-specific distributed services
• Application
[Shown alongside the Internet protocol architecture: Link, Internet, Transport, Application]
For more info: www.globus.org/research/papers/anatomy.pdf
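
To make the layering concrete, here is a minimal sketch (purely illustrative): the per-layer component lists follow the protocols and services named later in this talk (GSI, GRAM, GridFTP, MDS, CAS); anything beyond those names is an assumption, not part of the architecture papers.

```python
# Minimal sketch of the hourglass layering described above. The components
# listed per layer follow the talk's later slides; anything else is illustrative.
GRID_LAYERS = {
    "Fabric":       ["computers", "storage systems", "networks", "sensors"],
    "Connectivity": ["IP/DNS", "GSI (authentication, delegation)"],
    "Resource":     ["GRAM (compute)", "GridFTP (data)", "GRIP/GRRP (information)"],
    "Collective":   ["MDS index services", "replica management", "CAS", "co-allocation"],
    "Application":  ["data analysis frameworks", "portals"],
}

def layer_of(component: str) -> str:
    """Return the layer a named component belongs to in this sketch."""
    for layer, members in GRID_LAYERS.items():
        if any(component.lower() in m.lower() for m in members):
            return layer
    raise KeyError(component)

if __name__ == "__main__":
    for layer, members in GRID_LAYERS.items():
        print(f"{layer:12s} {', '.join(members)}")
    print(layer_of("GridFTP"))   # -> Resource
```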

19 Grid Services Architecture (1): Fabric Layer
• Just what you would expect: the diverse mix of resources that may be shared
  – Individual computers, Condor pools, file systems, archives, metadata catalogs, networks, sensors, etc.
• Few constraints on low-level technology: connectivity and resource level protocols form the "neck in the hourglass"
• Globus Toolkit provides a few selected components (e.g., bandwidth broker)

20 Grid Services Architecture (2): Connectivity Layer Protocols & Services
• Communication
  – Internet protocols: IP, DNS, routing, etc.
• Security: Grid Security Infrastructure (GSI)
  – Uniform authentication & authorization mechanisms in multi-institutional setting
  – Single sign-on, delegation, identity mapping
  – Public key technology, SSL, X.509, GSS-API (several Internet drafts document extensions)
  – Supporting infrastructure: Certificate Authorities, key management, etc.
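
A toy model of the single sign-on and delegation idea behind GSI follows. It is a sketch only: real GSI rests on X.509 certificates, SSL/TLS, and GSS-API, none of which is modeled here; the Credential fields and helper functions are hypothetical.

```python
# Toy model of GSI-style single sign-on and delegation (illustration only:
# no X.509, SSL, or GSS-API; names and fields are hypothetical).
from dataclasses import dataclass
import time

@dataclass
class Credential:
    subject: str            # identity asserted, e.g. "/O=Grid/CN=Jane Doe"
    issuer: str             # signer (a CA, or a parent credential for proxies)
    expires: float          # Unix time after which it is invalid
    restrictions: tuple = ()  # e.g. ("read-only",) for a restricted proxy

def sign_proxy(parent: Credential, lifetime_s: int = 12 * 3600,
               restrictions: tuple = ()) -> Credential:
    """Derive a short-lived proxy credential from a longer-lived one."""
    return Credential(subject=parent.subject + "/proxy",
                      issuer=parent.subject,
                      expires=min(parent.expires, time.time() + lifetime_s),
                      restrictions=parent.restrictions + restrictions)

def accept(cred: Credential, trusted_subject: str) -> bool:
    """A resource accepts a credential if it chains to a trusted identity
    and has not expired (certificate-path validation is elided)."""
    return cred.expires > time.time() and trusted_subject in cred.issuer

# Single sign-on: one long-lived credential, many derived proxies.
user = Credential("/O=Grid/CN=Jane Doe", issuer="CA", expires=time.time() + 365 * 86400)
proxy = sign_proxy(user)                                      # created once per session
restricted = sign_proxy(proxy, restrictions=("read-only",))   # delegated to a remote job
print(accept(restricted, "/O=Grid/CN=Jane Doe"))              # True while unexpired
```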

21 GSI in Action: "Create Processes at A and B that Communicate & Access Files at C"
[Diagram: the user performs single sign-on via a "grid-id" and generates a proxy credential (or retrieves one from an online repository). The user proxy issues remote process creation requests, with mutual authentication, to GSI-enabled GRAM servers at Site A (Kerberos) and Site B (Unix); each site authorizes the request, maps it to a local id (e.g., a Kerberos ticket at A), creates the process, and generates a restricted proxy credential for it. The two processes then communicate and issue a remote file access request, again with mutual authentication, to a GSI-enabled FTP server at Site C (Kerberos), which authorizes the request, maps it to a local id, and grants access to the file.]

22 Grid Services Architecture (3): Resource Layer Protocols & Services
• Resource management: GRAM
  – Remote allocation, reservation, monitoring, control of [compute] resources
• Data access: GridFTP
  – High-performance data access & transport
• Information: MDS (GRRP, GRIP)
  – Access to structure & state information
• & others emerging: catalog access, code repository access, accounting, …
• All integrated with GSI

23 GRAM Resource Management Protocol
• Grid Resource Allocation & Management
  – Allocation, monitoring, control of computations
  – Secure remote access to diverse schedulers
• Current evolution
  – Immediate and advance reservation
  – Multiple resource types: manage anything
  – Recoverable requests, timeout, etc.
  – Evolve to Web Services
  – Policy evaluation points for restricted proxies
Karl Czajkowski, Steve Tuecke, others
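
For flavor, a sketch of what a GRAM job request can look like: GT2-era GRAM accepts requests written in the RSL relation syntax, conjunctions of (attribute=value) pairs. The rsl() and submit() helpers below are hypothetical, and the contact string and attribute set are a best-effort reconstruction rather than a verbatim specification.

```python
# Sketch: building a GRAM-style job request in RSL relation syntax
# (a best-effort reconstruction; submit() is hypothetical and only prints
# what a real client would send to the remote gatekeeper over GSI).
def rsl(**attrs) -> str:
    """Render keyword arguments as an RSL conjunction,
    e.g. &(executable=/bin/hostname)(count=4)."""
    return "&" + "".join(f"({k}={v})" for k, v in attrs.items())

def submit(contact: str, request: str) -> None:
    # A real client speaks the GRAM protocol to the gatekeeper at `contact`.
    print(f"to {contact}: {request}")

job = rsl(executable="/bin/hostname", count=4)
submit("gatekeeper.example.org/jobmanager-pbs", job)
# -> to gatekeeper.example.org/jobmanager-pbs: &(executable=/bin/hostname)(count=4)
```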

24 Data Access & Transfer
• GridFTP: extended version of popular FTP protocol for Grid data access and transfer
• Secure, efficient, reliable, flexible, extensible, parallel, concurrent, e.g.:
  – Third-party data transfers, partial file transfers
  – Parallelism, striping (e.g., on PVFS)
  – Reliable, recoverable data transfers
• Reference implementations
  – Existing clients and servers: wuftpd, ncftp
  – Flexible, extensible libraries
Bill Allcock, Joe Bester, John Bresnahan, Steve Tuecke, others
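
The "parallel" and "partial file transfer" features can be pictured with a toy transfer loop; this is not the GridFTP protocol or its client API, just the idea of fetching a file as several concurrent extents and reassembling it.

```python
# Illustration of parallel and partial transfers in the GridFTP sense, using a
# toy in-memory "server" rather than the real protocol (no real API modeled).
from concurrent.futures import ThreadPoolExecutor

REMOTE_FILE = bytes(range(256)) * 4096   # stand-in for a remote dataset

def get_range(offset: int, length: int) -> bytes:
    """Fetch one extent of the remote file (a partial transfer)."""
    return REMOTE_FILE[offset:offset + length]

def parallel_get(size: int, streams: int = 4) -> bytes:
    """Fetch a file as `streams` concurrent extent transfers and reassemble."""
    chunk = -(-size // streams)          # ceiling division
    offsets = range(0, size, chunk)
    with ThreadPoolExecutor(max_workers=streams) as pool:
        parts = pool.map(lambda off: get_range(off, chunk), offsets)
    return b"".join(parts)

assert parallel_get(len(REMOTE_FILE)) == REMOTE_FILE
```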

25 Grid Services Architecture (4): Collective Layer Protocols & Services
• Community membership & policy
  – E.g., Community Authorization Service
• Index/metadirectory/brokering services
  – Custom views on community resource collections (e.g., GIIS, Condor Matchmaker)
• Replica management and replica selection
  – Optimize aggregate data access performance
• Co-reservation and co-allocation services
  – End-to-end performance
• Etc.
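
Replica selection, for example, can be pictured as a small optimization over a replica catalog; the catalog entries and bandwidth estimates below are invented for illustration.

```python
# Sketch of a collective-layer replica selection decision: given several
# physical copies of a logical file, pick the one expected to arrive fastest.
# Catalog contents and bandwidth figures are made up.
REPLICA_CATALOG = {
    "lfn://cms/run42/events.dat": [
        ("gsiftp://cern.ch/data/run42/events.dat",      8.0),   # (URL, est. MB/s)
        ("gsiftp://fnal.gov/data/run42/events.dat",    35.0),
        ("gsiftp://caltech.edu/data/run42/events.dat", 20.0),
    ],
}

def select_replica(logical_name: str, size_mb: float) -> str:
    """Return the physical URL minimizing estimated transfer time."""
    replicas = REPLICA_CATALOG[logical_name]
    best_url, _ = min(replicas, key=lambda r: size_mb / r[1])
    return best_url

print(select_replica("lfn://cms/run42/events.dat", size_mb=1000.0))
```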

26 The Grid Information Problem
• Large numbers of distributed "sensors" with different properties
• Need for different "views" of this information, depending on community membership, security constraints, intended purpose, sensor type

27 Globus Toolkit Solution: MDS-2
• Registration & enquiry protocols, information models, query languages
  – Provides standard interfaces to sensors
  – Supports different "directory" structures supporting various discovery/access strategies
Karl Czajkowski, Steve Fitzgerald, others
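
The registration-and-enquiry pattern can be sketched as a toy index service with soft-state registrations and attribute queries; real MDS-2 is LDAP-based (GRIS/GIIS servers), so the class and attribute names here are hypothetical.

```python
# Toy index service illustrating the MDS-2 pattern of resource registration
# (GRRP) plus enquiry (GRIP). Conceptual sketch only; attributes are made up.
import time

class IndexService:
    def __init__(self, ttl_s: float = 300.0):
        self.ttl_s = ttl_s
        self.entries = {}   # resource name -> (attributes, registration time)

    def register(self, name: str, **attributes) -> None:
        """Soft-state registration: entries expire unless refreshed."""
        self.entries[name] = (attributes, time.time())

    def query(self, **constraints):
        """Return live entries whose attributes satisfy all constraints."""
        now = time.time()
        return [name for name, (attrs, t) in self.entries.items()
                if now - t < self.ttl_s
                and all(attrs.get(k) == v for k, v in constraints.items())]

giis = IndexService()
giis.register("cluster1.uchicago.edu", arch="ia64", scheduler="pbs", free_cpus=128)
giis.register("pool1.mcs.anl.gov", arch="ia32", scheduler="condor", free_cpus=64)
print(giis.query(scheduler="pbs"))   # -> ['cluster1.uchicago.edu']
```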

28 Community Authorization (Prototype shown August 2001)
[Diagram: the flow among User, CAS server, and Resource]
1. CAS request, with resource names and operations; the CAS asks "does the collective policy authorize this request for this user?" using user/group membership, resource/collective membership, and collective policy information
2. CAS reply, with capability and resource CA info
3. Resource request, authenticated with capability; the resource asks "is this request authorized for the CAS?" and "is this request authorized by the capability?" using local policy information
4. Resource reply
Laura Pearlman, Steve Tuecke, Von Welch, others
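
The two-level decision in that flow, community policy at the CAS and local policy at the resource, can be sketched as follows; the policies, identities, and capability format are made up for illustration.

```python
# Sketch of CAS-style community authorization: the CAS issues a capability
# according to collective policy; the resource checks that it trusts the CAS,
# that the capability covers the operation, and its own local policy.
COLLECTIVE_POLICY = {   # (user, resource) -> operations the community allows
    ("jane", "gsiftp://fnal.gov/cms"): {"read"},
    ("miron", "gsiftp://fnal.gov/cms"): {"read", "write"},
}
LOCAL_POLICY = {"gsiftp://fnal.gov/cms": {"read", "write"}}  # what the site grants the community
TRUSTED_CAS = {"cas.griphyn.org"}

def cas_issue(cas_id, user, resource, ops):
    """CAS step: grant only the operations the collective policy allows."""
    allowed = COLLECTIVE_POLICY.get((user, resource), set())
    return {"issuer": cas_id, "user": user, "resource": resource, "ops": set(ops) & allowed}

def resource_authorize(capability, resource, op):
    """Resource step: trust the CAS, honor the capability, apply local policy."""
    return (capability["issuer"] in TRUSTED_CAS
            and capability["resource"] == resource
            and op in capability["ops"]
            and op in LOCAL_POLICY.get(resource, set()))

cap = cas_issue("cas.griphyn.org", "jane", "gsiftp://fnal.gov/cms", {"read", "write"})
print(resource_authorize(cap, "gsiftp://fnal.gov/cms", "read"))    # True
print(resource_authorize(cap, "gsiftp://fnal.gov/cms", "write"))   # False: not in collective policy
```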

29 HENP Related Data Grid Projects
• Funded (or about to be funded) projects
  – PPDG I (USA, DOE): $2M, 1999-2001
  – GriPhyN (USA, NSF): $11.9M + $1.6M, 2000-2005
  – EU DataGrid (EU, EC): €10M, 2001-2004
  – PPDG II (USA, DOE): $9.5M, 2001-2004
  – iVDGL (USA, NSF): $13.7M + $2M, 2001-2006
  – DataTAG (EU, EC): €4M, 2002-2004
• Proposed projects
  – GridPP (UK, PPARC): >$15M?, 2001-2004
• Many national projects of interest to HENP
  – Initiatives in US, UK, Italy, France, NL, Germany, Japan, …
  – EU networking initiatives (Géant, SURFNet)
  – US Distributed Terascale Facility ($53M, 12 TFLOPS, 40 Gb/s network)

30 Background on Data Grid Projects
• They support several disciplines
  – GriPhyN: CS, HEP (LHC), gravity waves, digital astronomy
  – PPDG: CS, HEP (LHC + current expts), Nuc. Phys., networking
  – DataGrid: CS, HEP (LHC), earth sensing, biology, networking
• They are already joint projects
  – Multiple scientific communities
  – High-performance scientific experiments
  – International components and connections
• They have common elements & foundations
  – Interconnected management structures
  – Sharing of people, projects (e.g., GDMP)
  – Globus infrastructure
  – Data Grid Reference Architecture
• HENP Intergrid Coordination Board

31 Grid Physics Network (GriPhyN)
Enabling R&D for advanced data grid systems, focusing in particular on the Virtual Data concept
Participating experiments: ATLAS, CMS, LIGO, SDSS

32 Virtual Data in Action
• A data request may:
  – Access local data
  – Compute locally
  – Compute remotely
  – Access remote data
• Scheduling & execution subject to local & global policies
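
One way to picture this request planning is as a cost minimization over the four options, subject to both local and global policy; everything in the sketch (costs, sites, policy predicates) is invented for illustration.

```python
# Sketch of the virtual-data decision above: satisfy a request either from an
# existing (local or remote) copy or by (re)computing it, picking the cheapest
# option that both local and global policy allow. All values are illustrative.
OPTIONS = [
    # (label, estimated cost in arbitrary units, site where the work/data is)
    ("access local data",   1.0, "local"),
    ("compute locally",     5.0, "local"),
    ("access remote data",  3.0, "fnal.gov"),
    ("compute remotely",    4.0, "cern.ch"),
]

def allowed(option, local_policy, global_policy):
    """Both the requesting site's policy and the community policy must agree."""
    label, _, site = option
    return local_policy(label, site) and global_policy(label, site)

def plan(options, local_policy, global_policy):
    feasible = [o for o in options if allowed(o, local_policy, global_policy)]
    return min(feasible, key=lambda o: o[1]) if feasible else None

# Example policies (purely made up): the local site forbids local recomputation;
# the community policy rules out pulling from cern.ch right now.
local_policy  = lambda label, site: label != "compute locally"
global_policy = lambda label, site: site != "cern.ch"
print(plan(OPTIONS, local_policy, global_policy))   # -> ('access local data', 1.0, 'local')
```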

33 GriPhyN Status, October 2001
• Data Grid Reference Architecture defined
  – v1: core services (Feb 2001)
  – v2: request planning/mgmt, catalogs (RSN)
• Progress on ATLAS, CMS, LIGO, SDSS
  – Requirements statements developed
  – Testbeds and experiments proceeding
• Progress on technology
  – DAGMan request management
  – Catalogs, security, policy
• Virtual Data Toolkit v1.0 out soon

34 GriPhyN/PPDG Data Grid Architecture
[Diagram: an Application submits requests to a Planner, which produces a DAG for an Executor (DAGMan, Kangaroo). These draw on Catalog Services (MCAT; GriPhyN catalogs), Information Services (MDS), Policy/Security (GSI, CAS), Monitoring (MDS), Replica Management (GDMP), and a Reliable Transfer Service, which in turn drive Compute Resources (GRAM) and Storage Resources (GridFTP; GRAM; SRM). A legend marks Globus components and those for which an initial solution is operational.]
Ewa Deelman, Mike Wilde
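
The planner/executor split can be sketched as a small DAG run in dependency order, which is what Condor DAGMan does for real jobs; the toy below just calls Python functions and does not use DAGMan's actual submit-file format.

```python
# Conceptual sketch of the planner/executor split: a "plan" is a DAG of tasks
# with dependencies, and the executor runs each task once its parents are done.
from graphlib import TopologicalSorter

def stage_in():     print("stage input files (GridFTP)")
def simulate():     print("run simulation jobs (GRAM)")
def reconstruct():  print("run reconstruction (GRAM)")
def register():     print("register outputs in replica catalog")

DAG = {                      # task -> set of tasks it depends on
    simulate:    {stage_in},
    reconstruct: {simulate},
    register:    {reconstruct},
}

for task in TopologicalSorter(DAG).static_order():
    task()                   # a real executor would submit to remote resources
```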

35 Catalog Architecture
[Diagram: the GriPhyN catalog architecture, providing transparency with respect to materialization and with respect to location.
– Transparency wrt materialization: a Derived Metadata Catalog maps application-specific attributes to derived-data ids (e.g., i2, i10); a Derived Data Catalog records, for each id, the transformation, parameters, and name that produce it (e.g., i1: F(X) → F.X; i2: F(Y) → F.Y; i10: G with parameters P, Y → G(P).Y), and is updated upon materialization; a Transformation Catalog maps transformation names to program-storage URLs and costs (F at URL:f, cost 10; G at URL:g, cost 20).
– Transparency wrt location: a Metadata Catalog maps object names (GCMS) to logical object names (X → logO1, Y → logO2, F.X → logO3, G(1).Y → logO4); a Replica Catalog maps logical container names to physical file URLs (logC1 → URL1; logC2 → URL2, URL3; logC3 → URL4; logC4 → URL5, URL6); physical file storage holds the files themselves.]
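
A sketch of how the two transparencies compose: resolve a request through the metadata and replica catalogs if a copy already exists, otherwise consult the derived data and transformation catalogs to materialize it. All catalog contents below are invented for illustration and do not reproduce the real catalog schemas.

```python
# Sketch of the two transparencies above: location (find any physical copy of
# a logically named object) and materialization (derive it on demand if no
# copy exists). Contents are made up; real schemas differ.
METADATA_CATALOG = {"F.Y": "logC2", "G(P).Y": "logC9"}            # object name -> logical container
REPLICA_CATALOG = {"logC2": ["URL2", "URL3"]}                     # logical container -> physical URLs
DERIVED_DATA_CATALOG = {"G(P).Y": ("G", "P")}                     # derived object -> (transformation, parameter)
TRANSFORMATION_CATALOG = {"G": {"program": "URL:g", "cost": 20}}  # transformation -> program location, cost

def resolve(name: str) -> list[str]:
    logical = METADATA_CATALOG[name]
    if logical not in REPLICA_CATALOG:                   # not materialized anywhere yet
        trans, param = DERIVED_DATA_CATALOG[name]
        prog = TRANSFORMATION_CATALOG[trans]["program"]
        print(f"materializing {name}: run {trans} ({prog}) on {param}")
        REPLICA_CATALOG[logical] = [f"URL:derived/{name}"]   # update catalogs upon materialization
    return REPLICA_CATALOG[logical]

print(resolve("F.Y"))      # already materialized: ['URL2', 'URL3']
print(resolve("G(P).Y"))   # derived on demand, then registered
```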

36 [Diagram: the D0 SAM data handling system mapped onto the Grid architecture layers. Names in "quotes" are SAM-given software component names; legend symbols mark components that will be replaced, or added/enhanced using PPDG and Grid tools.
– Client applications: web, Python codes, Java codes, command line, D0 Framework C++ codes; a Request Formulator and Planner.
– Fabric: tape storage elements, disk storage elements, compute elements, LANs and WANs, code repository.
– Connectivity and Resource: authentication and security (GSI; SAM-specific user, group, node, station registration; bbftp 'cookie'); CORBA and UDP; file transfer protocols (ftp, bbftp, rcp, GridFTP); mass storage system protocols (e.g., encp, HPSS); batch systems (LSF, FBS, PBS, Condor); data mover.
– Collective services: catalog protocols; significant event logger; naming service; database manager; catalog manager; SAM resource management; job services; storage manager; job manager; cache manager; request manager; resource and services catalog; replica catalog; metadata catalog. SAM-given component names appearing here include "Dataset Editor", "File Storage Server", "Project Master", "Station Master", "Stager", and "Optimiser".]

37 Early GriPhyN Challenge Problem: CMS Data Reconstruction
[Diagram: a master Condor job running at Caltech, submitted from a Caltech workstation, drives a pipeline spanning the Wisconsin Condor pool and NCSA.]
2) Launch secondary job on the Wisconsin (WI) pool; input files via Globus GASS
3) 100 Monte Carlo jobs run on the Wisconsin Condor pool
4) 100 data files, ~1 GB each, transferred via GridFTP
5) Secondary job reports complete to master
6) Master starts reconstruction jobs via the Globus jobmanager on the NCSA Linux cluster
7) GridFTP fetches data from NCSA UniTree (a GridFTP-enabled FTP server)
8) Processed Objectivity database stored to UniTree
9) Reconstruction job reports complete to master
Scott Koranda, Miron Livny, others

38 Trace of a Condor-G Physics Run
[Chart: timeline of pre-processing / simulation / post-processing jobs on the UW Condor pool and of ooHits and ooDigis production at NCSA; one gap is annotated "delay due to script error".]

39 The 13.6 TF TeraGrid: Computing at 40 Gb/s
[Diagram: site resources and external networks at the four TeraGrid/DTF sites: NCSA/PACI (8 TF, 240 TB, UniTree), SDSC (4.1 TF, 225 TB, HPSS), Caltech, and Argonne, interconnected at 40 Gb/s.]
TeraGrid/DTF: NCSA, SDSC, Caltech, Argonne (www.teragrid.org)

40 International Virtual Data Grid Laboratory
[Map: planned iVDGL facilities worldwide; legend distinguishes Tier0/1, Tier2, and Tier3 facilities and 10+ Gbps, 2.5 Gbps, 622 Mbps, and other links.]

41 Grids and Industry
• Scientific/technical apps in industry
  – "IntraGrid": aerospace, pharmaceuticals, …
  – "InterGrid": multi-company teams
  – Globus Toolkit provides a starting point
    > Compaq, Cray, Entropia, Fujitsu, Hitachi, IBM, Microsoft, NEC, Platform, SGI, Sun all porting, etc.
• Enable resource sharing outside S&TC
  – "Grid Services": extend Web Services with capabilities provided by Grid protocols
    > Focus of IBM-Globus collaboration
  – Future computing infrastructure based on Grid-enabled xSPs & applications?

42 Acknowledgments
• Globus R&D is joint with numerous people
  – Carl Kesselman, Co-PI; Steve Tuecke, principal architect at ANL; others acknowledged below
• GriPhyN R&D is joint with numerous people
  – Paul Avery, Co-PI; Mike Wilde, project coordinator; Carl Kesselman, Miron Livny, CS leads; numerous others
• PPDG R&D is joint with numerous people
  – Richard Mount, Harvey Newman, Miron Livny, Co-PIs; Ruth Pordes, project coordinator
• Numerous other project partners
• Support: DOE, DARPA, NSF, NASA, Microsoft

43 Summary
• "Grids": resource sharing & problem solving in dynamic virtual organizations
  – Many projects now working to develop, deploy, apply relevant technologies
• Common protocols and services are critical
  – Globus Toolkit a source of protocol and API definitions, reference implementations
• Rapid progress on definition, implementation, and application of Data Grid architecture
• First indications of industrial adoption

