APAC National Grid - A technology response to diverse requirements
1 APAC National Grid - A technology response to diverse requirements
Ian Atkinson, James Cook University (www.jcu.edu.au) and Queensland Cyberinfrastructure Foundation (www.qcif.edu.au)
Lindsay Hood, Australian Partnership for Advanced Computing (www.apac.edu.au)

2 Australian Partnership for Advanced Computing
Partners:
–Australian Centre for Advanced Computing and Communications (ac3) in NSW
–CSIRO
–iVEC, The Hub of Advanced Computing in Western Australia
–Queensland Cyber Infrastructure Foundation (QCIF)
–South Australian Partnership for Advanced Computing (SAPAC)
–The Australian National University (ANU)
–The University of Tasmania (TPAC)
–Victorian Partnership for Advanced Computing (VPAC)
4500 CPUs, 3PB storage
“providing national advanced computing, data management and grid services for eResearch”

3

4 Recent Review APAC in the future must be regarded not just as the National Facility, but as the sum of its component parts comprising: […] The National Grid […] That the APAC National Grid must be the pre-eminent grid in Australia and continue extending its coverage to include capabilities wherever they exist or develop. It must also nurture and support scientific research teams, NCRIS infrastructure and international partnerships

5 Concept of the APAC National Grid
APAC National Grid: a virtual system of computing, data storage and visualisation facilities
[Diagram: research teams, data centres, instruments and sensor networks connected through the APAC National Grid, which in turn links to other institutional and international grids]

6 NCRIS - National Collaborative Research Infrastructure Scheme
National plan to invest AU$500M in medium-scale, collaborative-access research infrastructure across 5 years, 2007-2011
15 investment areas of interest including:
–bioinformatics, biosecurity, geosciences, astronomy, marine and terrestrial observation systems, structural characterisation
APAC will now be funded via NCRIS
–APAC and the National Grid must directly support the NCRIS investment areas as a high priority
–NCRIS investments are expected to develop and execute plans to ensure e-Research (cyberinfrastructure) tools and practices are embedded into their practices and data management
Data management is now a hot topic in Australia!

7 NCRIS Platforms for Collaboration Vision

8 APAC National Grid Core Services – built on Globus
Partners: ANU, ac3, CSIRO, iVEC, QCIF (JCU), SAPAC, TPAC, VPAC
Network: AARNet; APAC private network? (AARNet)
Security: APAC CA (PKI), Grix, MyProxy, VOMRS
Portal tools: GridSphere
Info services: MDS 2/4, MIP
Systems: APAC National Facility, gateways, partners’ systems
<15 staff to deliver all services!

9 Some requirements
Non-dedicated resources (at partner sites)
Varied middleware requirements (many domains to support)
Complex virtual organisation structure
Distributed data and workflows
Simplified interfaces
This turns out to be hard!

10 Requirements Analysis, circa mid-2006

11 Gateways
Rapid churn in the middleware
–You need lots of test machines
Different communities want different middleware
–GT2, GT4, GridSphere, SRB …
Minimises interaction with non-grid-dedicated production systems
Virtualisation is a well-understood technology
Xen has a nice price

12 The gateway concept Grid middleware evolving Security across firewalls & institutional policies are problems Using gateway virtual machines to isolate production compute/storage elements from all this change – ng1 - Globus2 – ng2 - Globus4 – ngdata - gridFTP, other data (SRB) – ngportal - web application portals – Others are easy to build and deploy But some parts of GT2 especially assume they are running on the cluster mgmt/head node...
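The gateway VMs above might be defined with a Xen domU configuration along these lines. This is an illustrative sketch only: the VM name matches the ng1 (Globus 2) gateway mentioned above, but the memory sizing, disk volume, bridge and kernel paths are all assumptions, not the actual APAC configuration.

```
# /etc/xen/ng1.cfg — hypothetical config for a GT2 gateway domU
name   = "ng1"                              # Globus 2 gateway VM
memory = 1024                               # MB; sizing is illustrative
vcpus  = 1
disk   = ["phy:/dev/vg0/ng1-root,xvda,w"]   # assumed LVM-backed root volume
vif    = ["bridge=xenbr0"]                  # bridged onto the site network
kernel = "/boot/vmlinuz-xen"                # assumed Xen-capable guest kernel
```

Sibling VMs (ng2, ngdata, ngportal) would differ only in name, disk and sizing, which is what makes new gateway stacks easy to build and deploy.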

13 Gateways on a National Backbone
Gateway servers installed at all grid sites, using VM technology to support multiple grid stacks
Gateways supporting GT2, GT4, LCG, grid portals, and experimental grid stacks
High-bandwidth, dedicated, secure layer-3 private network between grid sites
Consistency, ease of implementation, performance?
[Diagram: gateway servers fronting clusters, datastores and HPC systems across the private network]

14 Virtual Organisations
International use tends to be large VOs
Australia demands small, dynamic VOs
VOMS/VOMRS has problems
–Admin security model
–MyProxy interaction
–gridmapfile – a user can be in only one VO
Adopting PRIMA/GUMS
–Still complicated and not especially dynamic
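The gridmapfile limitation above can be made concrete with a small sketch. A grid-mapfile is a flat list of certificate DN to local-account lines; modelling it as a dictionary shows why one DN ends up with exactly one mapping. The DNs and account names here are invented for illustration, and the "later entry wins" behaviour describes this sketch, not a claim about any particular Globus version.

```python
def parse_gridmap(text):
    """Parse grid-mapfile-style lines of the form: "DN" local_account."""
    mapping = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        dn, account = line.rsplit(" ", 1)
        # One key per DN: a second line for the same DN just overwrites
        # the first, so a user cannot be mapped into two VOs at once.
        mapping[dn.strip('"')] = account
    return mapping

gridmap = '''
"/C=AU/O=APACGrid/OU=JCU/CN=Jane Researcher" astro001
"/C=AU/O=APACGrid/OU=JCU/CN=Jane Researcher" chem002
'''
mapping = parse_gridmap(gridmap)
print(mapping)
```

The same DN appears twice (once per would-be VO account), yet only one mapping survives, which is the structural problem that attribute-based systems like PRIMA/GUMS try to solve.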

15 “Workflows”
Many existing HPC users have significant shell scripts and queue commands (PBS)
WS-GRAM, JSDL, BPEL may be human-readable, but they are not human-writable!
Abstraction of HPC systems is tough
–E.g. SGI’s profile.pl doesn’t handle cpusets correctly
Working on a gsub that will take the majority of batch scripts and run them on the grid
–The user doesn’t have to learn JSDL, WS-GRAM …
A Unicore-like client GUI would be neat
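The gsub idea above can be sketched as a translator: lift the common #PBS directives out of an existing batch script, then map them onto a generic job description so the user never touches JSDL or WS-GRAM. Everything below is illustrative; the real gsub tool's behaviour and the field names in the output dictionary are assumptions.

```python
import re

# Matches directives such as "#PBS -N myjob" or "#PBS -l walltime=00:10:00"
PBS_DIRECTIVE = re.compile(r"^#PBS\s+-(\w+)\s+(.+)$")

def parse_pbs_script(text):
    """Split a PBS batch script into its directives and payload commands."""
    directives, commands = {}, []
    for line in text.splitlines():
        m = PBS_DIRECTIVE.match(line.strip())
        if m:
            flag, value = m.groups()
            directives[flag] = value.strip()
        elif line.strip() and not line.startswith("#"):
            commands.append(line.rstrip())
    return directives, commands

def to_job_spec(directives, commands):
    """Map PBS directives onto a generic, JSDL-like job dictionary."""
    resources = dict(part.split("=", 1)
                     for part in directives.get("l", "").split(",") if "=" in part)
    return {
        "name": directives.get("N", "unnamed-job"),
        "queue": directives.get("q"),
        "walltime": resources.get("walltime"),
        "cpus": int(resources.get("ncpus", 1)),
        "executable": "/bin/sh",
        "arguments": ["-c", "; ".join(commands)],
    }

script = """#PBS -N gnuplot-run
#PBS -q workq
#PBS -l walltime=00:10:00,ncpus=4
cd $PBS_O_WORKDIR
gnuplot control.txt
"""
spec = to_job_spec(*parse_pbs_script(script))
print(spec["name"], spec["cpus"], spec["walltime"])
```

A backend would then serialise this dictionary into JSDL or a WS-GRAM job description and submit it through a gateway, keeping the user's original script intact.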

16 JSDL (1)
<jsdl:JobDefinition xmlns="http://www.example.org/"
  xmlns:jsdl="http://schemas.ggf.org/jsdl/2005/11/jsdl"
  xmlns:jsdl-posix="http://schemas.ggf.org/jsdl/2005/11/jsdl-posix"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <jsdl:JobDescription>
    <jsdl:JobIdentification>
      <jsdl:JobName>My Gnuplot invocation</jsdl:JobName>
      <jsdl:Description>
        Simple application invocation: User wants to run the application
        'gnuplot' to produce a plotted graphical file based on some data
        shipped in from elsewhere (perhaps as part of a workflow). A
        front-end application will then build the plots into an animation
        of spinning data. The front-end application knows the URL for the
        data file, which must be staged in. The front-end application wants
        to stage in a control file that it specifies directly, which directs
        gnuplot to produce the output files. In case of error, messages
        should be produced on stderr (also to be staged on completion) and
        no images are to be transferred.
      </jsdl:Description>
    </jsdl:JobIdentification>

17 JSDL (2)
    <jsdl:Application>
      <jsdl:ApplicationName>gnuplot</jsdl:ApplicationName>
      <jsdl-posix:POSIXApplication>
        <jsdl-posix:Executable>/usr/local/bin/gnuplot</jsdl-posix:Executable>
        <jsdl-posix:Argument>control.txt</jsdl-posix:Argument>
        <jsdl-posix:Input>input.dat</jsdl-posix:Input>
        <jsdl-posix:Output>output1.png</jsdl-posix:Output>
      </jsdl-posix:POSIXApplication>
    </jsdl:Application>
    <jsdl:Resources>
      <jsdl:IndividualPhysicalMemory>
        <jsdl:LowerBoundedRange>2097152.0</jsdl:LowerBoundedRange>
      </jsdl:IndividualPhysicalMemory>
      <jsdl:TotalCPUCount>
        <jsdl:ExactValue>1.0</jsdl:ExactValue>
      </jsdl:TotalCPUCount>
    </jsdl:Resources>

18 JSDL (3)
    <jsdl:DataStaging>
      <jsdl:FileName>control.txt</jsdl:FileName>
      <jsdl:CreationFlag>overwrite</jsdl:CreationFlag>
      <jsdl:DeleteOnTermination>true</jsdl:DeleteOnTermination>
      <jsdl:Source>
        <jsdl:URI>http://foo.bar.com/~me/control.txt</jsdl:URI>
      </jsdl:Source>
    </jsdl:DataStaging>
    <jsdl:DataStaging>
      <jsdl:FileName>input.dat</jsdl:FileName>
      <jsdl:CreationFlag>overwrite</jsdl:CreationFlag>
      <jsdl:DeleteOnTermination>true</jsdl:DeleteOnTermination>
      <jsdl:Source>
        <jsdl:URI>http://foo.bar.com/~me/input.dat</jsdl:URI>
      </jsdl:Source>
    </jsdl:DataStaging>
    <jsdl:DataStaging>
      <jsdl:FileName>output1.png</jsdl:FileName>
      <jsdl:CreationFlag>overwrite</jsdl:CreationFlag>
      <jsdl:DeleteOnTermination>true</jsdl:DeleteOnTermination>
      <jsdl:Target>
        <jsdl:URI>rsync://spoolmachine/userdir</jsdl:URI>
      </jsdl:Target>
    </jsdl:DataStaging>
  </jsdl:JobDescription>
</jsdl:JobDefinition>
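Documents like the gnuplot example above are exactly what tools should generate on the user's behalf. As a sketch of that, the snippet below builds a minimal JSDL fragment with Python's standard ElementTree; it covers only JobName, the POSIX application and stage-in elements, and the helper name and its argument shape are invented for this example.

```python
import xml.etree.ElementTree as ET

JSDL = "http://schemas.ggf.org/jsdl/2005/11/jsdl"
POSIX = "http://schemas.ggf.org/jsdl/2005/11/jsdl-posix"

def make_jsdl(name, executable, arguments, stage_ins):
    """Build a minimal JSDL JobDefinition element (subset of the schema)."""
    job = ET.Element(f"{{{JSDL}}}JobDefinition")
    desc = ET.SubElement(job, f"{{{JSDL}}}JobDescription")
    ident = ET.SubElement(desc, f"{{{JSDL}}}JobIdentification")
    ET.SubElement(ident, f"{{{JSDL}}}JobName").text = name
    app = ET.SubElement(desc, f"{{{JSDL}}}Application")
    posix = ET.SubElement(app, f"{{{POSIX}}}POSIXApplication")
    ET.SubElement(posix, f"{{{POSIX}}}Executable").text = executable
    for arg in arguments:
        ET.SubElement(posix, f"{{{POSIX}}}Argument").text = arg
    for filename, uri in stage_ins:
        staging = ET.SubElement(desc, f"{{{JSDL}}}DataStaging")
        ET.SubElement(staging, f"{{{JSDL}}}FileName").text = filename
        src = ET.SubElement(staging, f"{{{JSDL}}}Source")
        ET.SubElement(src, f"{{{JSDL}}}URI").text = uri
    return job

doc = make_jsdl("My Gnuplot invocation", "/usr/local/bin/gnuplot",
                ["control.txt"],
                [("control.txt", "http://foo.bar.com/~me/control.txt"),
                 ("input.dat", "http://foo.bar.com/~me/input.dat")])
print(ET.tostring(doc, encoding="unicode")[:80])
```

The point is the division of labour: users supply a few plain values, and the unreadable-but-machine-friendly XML is emitted by code.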

19 Portals
Web browser is the interface to everything…
Presents a simple interface to the underlying resource
Good model for many users and applications
GridSphere adopted as the web grid standard
–GT4-based ngportal VM
Java CoG Kit for standalone “portal” apps
–Chemistry Java application with molecular editor being developed
–Desktop job submission tool from iVEC
http://www.grid.apac.edu.au/Services/ProductionPortals

20 Data services
Data is hard
–Different communities have different needs
–Complex access controls
We have GridFTP between sites
–Network consistency is interesting …
–Data staging has to use SRB today; iRODS later
dCache, SRM, Gfarm as communities require
Credible use cases for a global file system?
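Inter-site staging over GridFTP, as described above, typically reduces to a list of transfer commands around the job run. The sketch below builds `globus-url-copy` command lines from stage-in/stage-out declarations; the host name `ngdata.apac.edu.au`, the paths and the helper itself are illustrative assumptions, not the actual staging machinery.

```python
def staging_commands(job_dir, stage_ins, stage_outs):
    """Turn staging declarations into globus-url-copy command lines.

    stage_ins / stage_outs are (local_filename, remote_url) pairs.
    """
    cmds = []
    for filename, src in stage_ins:
        # Pull remote inputs into the job's working directory before the run.
        cmds.append(["globus-url-copy", src, f"file://{job_dir}/{filename}"])
    for filename, dst in stage_outs:
        # Push results back out after the run completes.
        cmds.append(["globus-url-copy", f"file://{job_dir}/{filename}", dst])
    return cmds

cmds = staging_commands(
    "/scratch/job42",
    [("input.dat", "gsiftp://ngdata.apac.edu.au/data/input.dat")],
    [("output1.png", "gsiftp://ngdata.apac.edu.au/results/output1.png")],
)
for c in cmds:
    print(" ".join(c))
```

Swapping the URL scheme (gsiftp, srb, http) is then a per-community policy decision rather than a change to the job itself.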

21 Registry services
MDS2, MDS4 running
About to deploy the Modular Information Provider (MIP) to present site and aggregated information more easily
Using the GLUE schema, but it’s far from satisfactory for describing real-world production HPC resources
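To make the GLUE/MIP discussion concrete, an information provider essentially emits a flat attribute record per compute element, rendered as LDIF for MDS. The attribute names below follow the GLUE compute-element schema's naming style, but the host, queue, DN layout and values are invented for illustration.

```python
def glue_ce_ldif(host, queue, attrs):
    """Render one GLUE-style compute-element record as LDIF lines."""
    dn = (f"dn: GlueCEUniqueID={host}:2119/jobmanager-pbs-{queue},"
          "mds-vo-name=local,o=grid")
    lines = [dn, "objectClass: GlueCE"]
    for key, value in sorted(attrs.items()):
        lines.append(f"{key}: {value}")
    return "\n".join(lines)

ldif = glue_ce_ldif("ng1.example.edu.au", "workq", {
    "GlueCEInfoTotalCPUs": 128,
    "GlueCEStateFreeCPUs": 17,
    "GlueCEStateWaitingJobs": 4,
})
print(ldif)
```

The awkwardness the slide complains about shows up here: a real production machine has shared queues, cpusets and HSM-backed storage that do not map cleanly onto one flat record like this.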

22 Improved AAA Services
NCRIS will require e-research services for a much wider community than traditional HPC
–PKI doesn’t scale and is conceptually difficult for non-IT-focused users
Australian Access Federation funded (2007-)
IAM Suite from MELCOE
–Shibboleth authentication plus appropriate attributes generates a short-lived certificate
–Tools for users to easily create shared workspaces and manage attribute release
Only a few people will need real certificates
But probably a year away from being ready for prime time

23 IAM Suite
[Diagram: users search and log in via their IdP through the federation; a GridSphere portal with a federation SP and group module receives assertions and a short-lived proxy certificate via MyProxy; VO components include VO-IdP, VO-WAYF, AuthN/AuthZ managers, ShARPE, Autograph, presence, people picker and calendar; VO service providers include forum, wiki, LMS and Fedora repositories (internal or external, e.g. an IR); grid toolkit adaptors cover storage, clusters, equipment, domain-specific tools and an AFS adaptor]
Macquarie University’s E-Learning Centre of Excellence (MELCOE), Erik Vullings
www.federation.org.au

24 APAC National Grid Status
Essentially operational
–core services implemented: APAC CA and MyProxy, VOMRS, GT2, GT4, GridSphere, SRB
–some applications close to ‘production’ mode
–See http://goc.grid.apac.edu.au and http://www.grid.apac.edu.au
Systems coverage
–users can access all systems at APAC partners via gateways from the desktop
–about 4600 processors and 100’s of Tbytes of disk
–around 3 Pbytes of disk-cached HSM systems

25 Future Strategies
Expand the user base
–NCRIS, Merit Allocation Scheme, Partners
–Open access to core grid services
Expand the services
–Workflow engines and tools – Kepler, Taverna
–Data management: metadata support
Expand the facilities
–Include major data centres, data from instruments, government agencies
–Include institutional systems and repositories
Resulting changes:
–Policies: acceptable service provision
–Organisation: coordinated user support
–Architecture: scaling gateways
–Technologies: attribute-based authorisation

26 Changing User Base
National Collaborative Research Infrastructure Strategy
–Ambitious plan to hand out $0.5B of federal money to fund research infrastructure collaboratively
Evolving Biomolecular Platforms and Informatics – $50.0M
Integrated Biological Systems – $40.0M
Characterisation – $47.7M
Fabrication – $41.0M
Biotechnology Products – $35.0M
Networked Biosecurity Framework – $25.0M
Optical and Radio Astronomy – $45.0M
Integrated Marine Observing System – $55.2M
Structure and Evolution of the Australian Continent – $42.8M
Terrestrial Ecosystem Research Network – $20.0M
Population Health and Clinical Data Linkage – $20.0M
Platforms for Collaboration – $75.0M

27 Summer in Australia?

28 New Names and Structures
National Compute Infrastructure (NCI)
–APAC National Facility: >1600-cpu Altix, shoulder clusters
Interoperation and Collaboration Services (ICS)
–Old APAC Grid
Australian National Data Service (ANDS)
–Federation of mass data stores
–Long-term archiving and curation
Australian Access Federation (AAF); AREN – network
National Coordination Council – e-Research services

29 Bringing it all together - real applications

30

31 Future Strategies
Expand the user base
–NCRIS, Merit Allocation Scheme, Partners
–Open access to core grid services
Expand the services
–Workflow engines and tools – Kepler, Taverna
–Data management: metadata support, collections registry
Expand the facilities
–Include major data centres, data from instruments, government agencies
–Include institutional systems and repositories
Resulting changes:
–Policies: acceptable service provision
–Organisation: coordinated user support
–Architecture: scaling gateways
–Technologies: attribute-based authorisation

32

33 Source: Office of Integrative Activities, NSF

34

35

36

37

38 Three “views” of the Grid
“The grid lets me run lots of jobs all over the place” – Nimrod, Gridbus
“The grid lets me build a workflow that uses distributed resources” – Kepler, Taverna
“The grid lets me scale my workstation model to a supercomputer seamlessly” – DEISA
So grid means different things to different communities
Must we deliver production-quality services for all of them?

39 Grid Collaboration
Beyond data and compute there is the AccessGrid
Australia has had a long-term commitment to the AG
Small, highly dispersed population
–Queensland has the most distributed population in Australia
AG is still burdened by its “antique” media tools, but the concept is essential
–Skype and IP video conferencing are insufficient

40 Access Grid - ATP, Sydney 1st in Australia … participating in SC-Global (Nov 2001)

41 Australia’s 2nd AG node – in Qld: JCU, Townsville, April 2002; Minister Hon. Paul Lucas

42 AccessGrid
AccessGrid is now very widely available in Australia
–Most universities have several nodes
–Extensive use in teaching (AMSI)
New tools being developed
–HD codecs (Chris Willing, UQ)
–SRB data grid integration with AccessGrid (Atkinson / Willing)
–International Quality Assurance Program
–Better multicast / unicast integration

43 SRB Browser – Connecting the DataGrid to the AccessGrid (Nigel Bajema)
SRB Browser runs as an AG shared application alongside the AG VenueClient, vic video and rat audio tools
New SRB Java/Python interface library written (now part of SRB)
All AG clients can share in data from the SRB data store
Cross-platform; exposes SRB metadata
Files moved between SRB and the AG data manager

44 Acknowledgment Lindsay Hood, APAC Grid Manager Rhys Francis, Former APAC Grid Manager David Bannon, VPAC Gateway project manager Rob Woodcock, CSIRO Minerals Exploration

