GridLab WP-2: Cactus GAT (CGAT)
Ed Seidel, AEI & LSU; Co-chair, GGF Apps RG; Gridstart Apps TWG
Gabrielle Allen, Robert Engel, Tom Goodale, Thomas Radke


GridLab WP-2: Cactus GAT (CGAT)
Ed Seidel, AEI & LSU; Co-chair, GGF Apps RG; Gridstart Apps TWG
Gabrielle Allen, Robert Engel, Tom Goodale, Thomas Radke
Others from WP-1

WP2 Vision
Goal: enable all Cactus applications to use the GAT for Grid scenarios.
- A very important WP: prototyped in the first year, now being fully developed as a GAT implementation in the first full Cactus release
- Cactus is a leading application framework, used by dozens of groups in astrophysics (EU Network), climate, CFD, numerical relativity, and bioinformatics
- Used for many years in (pre-)Grid environments, and commonly deployed for demonstrations of Grid use
- Previous work used hand-crafted Cactus modules and ad-hoc mechanisms, relying on the authors' extensive Grid computing experience
- The CGAT work aims to replace these with functionality available through the GAT API
- Exemplar: modules for other Cactus applications, and a worked example for other projects

WP2 Dynamic Grid Computing
Migration: the "Cactus Worm", demonstrated at SC00
- Launch job
- Run a while, write checkpoint
- Migrate itself to the next site
- Register new location; user tracks/steers
- Proof of concept, but a dirty hack
- Created our community!
Spawning: demonstrated at SC01
- User invokes the "Spawner"
- Analysis tasks outsourced
- Globus-enabled login and data transfer
- It worked!
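The Worm's migration cycle can be sketched in a few lines. This is a toy simulation of the control flow only, not real Cactus/GAT code: `run_worm`, the site names, and the in-memory "checkpoint" are all illustrative stand-ins.

```python
# Hypothetical sketch of the "Cactus Worm" migration loop (SC00 demo):
# run a while, checkpoint, migrate to the next site, register the new
# location so the user can track/steer. All names here are invented.

def run_worm(sites, iterations_per_site=3):
    locations = []                 # stands in for the tracking portal
    state = {"iteration": 0}
    for site in sites:
        for _ in range(iterations_per_site):
            state["iteration"] += 1        # run a while
        checkpoint = dict(state)           # write checkpoint
        locations.append(site)             # register new location
        state = checkpoint                 # restart from it at the next site
    return locations, state["iteration"]

locations, final_iter = run_worm(["golm", "lecce", "poznan"])
# locations lists every site visited; final_iter carries across migrations
```

The point of the sketch is that the simulation state survives each hop because the checkpoint, not the process, is what moves between sites.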


WP2 CGAT Conceptual View
CGAT consists of a set of "thorns", linked to the GAT engine, which provide services to Cactus applications. The vast majority of Cactus thorns are unaware of CGAT or the GAT.
[Diagram: application thorns connect to the Cactus flesh; CGAT thorns bind the flesh to the GAT, which talks to GridLab services]
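The layering in the diagram can be illustrated with a toy service registry. This is not the Cactus flesh API; `Flesh`, `register`, `call`, and `gat_advertise` are hypothetical names, sketching only the idea that application thorns go through the flesh and never touch the GAT directly.

```python
# Toy sketch of the CGAT layering: only CGAT thorns bind GAT functions
# (stubbed here); application thorns stay GAT-unaware by calling named
# services through the flesh. All class/function names are illustrative.

class Flesh:
    def __init__(self):
        self._services = {}

    def register(self, name, fn):
        # a CGAT thorn registers a GAT-backed service under a plain name
        self._services[name] = fn

    def call(self, name, *args):
        # an application thorn invokes the service by name only
        return self._services[name](*args)

def gat_advertise(entry):
    # stand-in for a GAT advert/replica call
    return f"advertised:{entry}"

flesh = Flesh()
flesh.register("advertise", gat_advertise)   # done once, by a CGAT thorn

# an ordinary application thorn, with no GAT dependency:
msg = flesh.call("advertise", "checkpoint.h5")
```

Swapping the GAT stub for a different backend changes nothing on the application-thorn side, which is the portability claim the slide makes.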

WP2 CGAT Functionality
- Remotely trigger an application checkpoint, retrieve the checkpoint file, and stage it to a new host
- Provide performance and other data to external applications using the GAT monitoring infrastructure
- Export the list of application-created files via GAT advert and/or replica functionality, through a generic advertising service
- Query information about the current machine, such as cache size, memory size, file-system sizes, and machine name
- Spawn tasks, e.g. for task farming, and monitor their status
- Automated/triggered announcement of application events, such as startup, reaching a particular iteration, termination, etc.
- Etc.: working with application communities (GGF, Gridstart TWG, other projects) to determine needs
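The first item, remote-triggered checkpointing, works by steering a parameter from outside the run. A minimal sketch of that mechanism, assuming invented names throughout (`checkpoint_now`, `steer`, the JSON checkpoint format are all illustrative, not the real Cactus parameter interface):

```python
# Sketch of remote-triggered checkpointing via a steerable parameter.
# An external broker would flip the parameter through the GAT; here we
# call steer() directly. All names and the file format are hypothetical.
import json
import os
import tempfile

class Simulation:
    def __init__(self):
        self.params = {"checkpoint_now": False}
        self.iteration = 0
        self.checkpoint_file = None

    def steer(self, name, value):
        # entry point a remote broker would use
        self.params[name] = value

    def step(self):
        self.iteration += 1
        if self.params["checkpoint_now"]:
            fd, path = tempfile.mkstemp(suffix=".chkpt")
            with os.fdopen(fd, "w") as f:
                json.dump({"iteration": self.iteration}, f)
            self.checkpoint_file = path          # file can now be staged
            self.params["checkpoint_now"] = False  # report done, reset trigger

sim = Simulation()
sim.step()                          # normal evolution step
sim.steer("checkpoint_now", True)   # remote trigger arrives
sim.step()                          # checkpoint written during this step
```

The checkpoint file produced this way is what the migration scenario then retrieves and stages to the new host.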

WP2 Status
SC03: thorns were written against the prototype GAT engine to enable the GridLab migration scenario:
- Remote monitoring of the status of the running Cactus application
- Triggering of Cactus checkpointing
- Advertisement of Cactus checkpoint data
Now: thorns converted to use the new GAT implementation and the specified GAT API
- Will be demonstrated at this review and at GGF this week
- Any Cactus application can take advantage of this without any other modification, e.g. black holes on regular meshes, CFD on unstructured meshes (planned), ocean-atmosphere modeling

WP2 Some Specifics
Gridmake: distribute and compile source code on an arbitrary number of machines
- Needed for GridLab migration, remote Cactus testing, and creating executables for MPI simulations across multiple machines
- Good for codes with configurable make environments, machine configuration scripts, use of CVS, etc.
- Developed using public-key infrastructure; soon to be a Grid service
- To be incorporated as a GridSphere portlet
Thorn_cgat:
- Initializes the GAT
- Registers the application as checkpointable with GRMS
- Receives requests from GRMS (or any broker)
- Steers a Cactus parameter to initiate a checkpoint
- Reports on the success of the checkpoint
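Gridmake's fan-out of one build across many machines can be sketched as a parallel command runner. In this sketch, local subprocesses stand in for the ssh-over-public-key logins the slide mentions, `echo` stands in for the real `cvs update && make`, and the hostnames are invented:

```python
# Sketch of Gridmake-style fan-out: run the same build step on many hosts
# in parallel and collect per-host results. Local `echo` subprocesses
# stand in for ssh + cvs + make; hostnames are illustrative.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def build_on(host):
    # real Gridmake would do roughly: ssh <host> 'cvs update && make <config>'
    result = subprocess.run(["echo", f"built on {host}"],
                            capture_output=True, text=True)
    return host, result.returncode, result.stdout.strip()

def gridmake(hosts):
    # one thread per host so slow machines don't serialize the build
    with ThreadPoolExecutor() as pool:
        return {h: (rc, out) for h, rc, out in pool.map(build_on, hosts)}

results = gridmake(["origin.aei.mpg.de", "helix.lsu.edu"])
```

Collecting the return code per host is what lets a tool like this report which machines failed to build before a cross-machine MPI run is attempted.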

WP2 Near Future
- Review current thorns, make production versions, distribute
- Add remaining functionality
- Work with the AEI/LSU numerical relativity group: ensure correct functionality, train on use of the CGAT infrastructure, develop task-farming infrastructure for physics surveys
- Deploy across the GridLab testbed and the LSU-AEI-KISTI Grid
- Work with other Cactus application groups in astrophysics, climate, CFD, bioinformatics, and others; NSF and DOE projects in the US
- New, experienced personnel just added now that the GAT is ready