Dynamic Grid Simulations for Science and Engineering Ed Seidel Max-Planck-Institut für Gravitationsphysik (Albert Einstein Institute) NCSA, U of Illinois

Einstein's Equations and Gravitational Waves
Two major motivations for numerical relativity:
Exploring Einstein's General Relativity
– Want to develop a theoretical lab to probe this fundamental theory of physics (gravity)
– Among the most complex equations of physics: dozens of coupled, nonlinear hyperbolic-elliptic equations with 1000's of terms
– Barely have the capability to solve them after a century
– Predict black holes, gravitational waves, etc., but we want much more
Exciting new field about to be born: Gravitational Wave Astronomy
– LIGO, VIRGO, GEO, LISA, … ~$1 Billion worldwide!
– Fundamentally new information about the Universe
– A last major test of Einstein's theory: do the waves exist?
– Eddington: "Gravitational waves propagate at the speed of thought"
One century later, both of these developments are happening at the same time: a very exciting coincidence!

Gravitational Wave Astronomy: new field, fundamental new information about the Universe. Requires multi-Teraflop computation, AMR, elliptic-hyperbolic numerical relativity.

Computational Needs for 3D Numerical Relativity: can't fulfill them now, but that is about to change...
(figure: evolution snapshots at t=0 and t=100)
Explicit finite difference codes
– ~10^4 Flops/zone/time step, ~100 3D arrays
– Require on the order of 1000^3 zones or more: ~1000 Gbytes
– Double resolution: 8x memory, 16x Flops
Multi-TFlop, TByte machine essential: coming!
Parallel AMR, I/O essential
Initial data: 4 coupled nonlinear elliptic equations. Evolution: hyperbolic evolution coupled with elliptic equations; choose gauge; interpret physics.
A code that can do this could be useful to other projects (we said this in all our grant proposals)!
Last few years devoted to making this useful across disciplines…
All tools used for these complex simulations available for other branches of science, engineering...
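The memory and flop figures above follow from simple arithmetic; the short C program below reproduces them, taking the approximate values quoted on the slide (about 100 double-precision 3D arrays, ~10^4 Flops per zone per step, and a grid of order 1000^3 zones, which is what the ~1000 Gbyte figure implies).

/* Back-of-the-envelope memory/flop estimate for a 3D finite-difference
   evolution, using the approximate figures quoted on this slide.        */
#include <stdio.h>

int main(void)
{
    const double zones  = 1000.0 * 1000.0 * 1000.0; /* ~1000^3 grid zones     */
    const double arrays = 100.0;                    /* ~100 3D arrays         */
    const double bytes  = 8.0;                      /* double precision       */
    const double flops  = 1e4;                      /* ~10^4 Flops/zone/step  */

    printf("memory ~ %.0f Gbytes\n", zones * arrays * bytes / 1e9);
    printf("flops  ~ %.1e per time step\n", zones * flops);
    /* Doubling resolution in 3D: 8x zones -> 8x memory; with ~2x more
       time steps required for stability, ~16x total Flops.              */
    return 0;
}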

Any Such Computation Requires an Incredible Mix of Varied Technologies and Expertise!
Many scientific/engineering components: physics, astrophysics, CFD, engineering, ...
Many numerical algorithm components: finite difference methods? Elliptic equations: multigrid, Krylov subspace, preconditioners, ... Mesh refinement?
Many different computational components: parallelism (HPF, MPI, PVM, ???), architecture efficiency (MPP, DSM, vector, PC clusters, ???), I/O bottlenecks (gigabytes per simulation, checkpointing…), visualization of all that comes out!
The scientist/engineer wants to focus on the first item, but all of them are required for results...
Such work cuts across many disciplines and areas of CS…

Grand Challenge Simulations: Science and Engineering Go Large Scale, Needs Dwarf Capabilities
NSF Black Hole Grand Challenge: 8 US institutions, 5 years; solve the problem of colliding black holes (try…)
NASA Neutron Star Grand Challenge: 5 US institutions; solve the problem of colliding neutron stars (try…)
EU Network Astrophysics: 10 EU institutions, 3 years; try to finish these problems…
Entire community becoming Grid-enabled
Examples of the future of science & engineering:
– Require large scale simulations, beyond the reach of any single machine
– Require large geo-distributed, cross-disciplinary collaborations
– Require Grid technologies, but not yet using them!
– Both apps and Grids are dynamic…

Collaboration technology needed: a scientist's view of a large scale computational problem:
Initial data, very efficient evolution algorithms, complex analysis routines ("better be Fortran")
"Big mesh sizes", "Easy job submission", "Large data output", "Parallel would be great"
Scientists cannot be required to become experts in computer science.

Collaboration technology needed: a computer scientist's view of the same problem:
Next-generation high-speed communication layers, high-performance parallel I/O, code instrumentation + steering, load scheduling, interactive visualization, metacomputing: "Programmers, use this!"
Computer scientists will not write the applications that make use of their technology.

Cactus
New concept in community-developed simulation code infrastructure
Developed as a response to the needs of large scale projects
Numerical/computational infrastructure to solve PDEs
Freely available, Open Source community framework: the spirit of GNU/Linux
Many communities contributing to Cactus
Cactus is divided into a "Flesh" (core) and "Thorns" (modules, or collections of subroutines)
Multilingual: user apps can be Fortran, C, C++; automated interface between them
Abstraction: the Cactus Flesh provides an API for virtually all CS-type operations
– Storage, parallelization, communication between processors, etc.
– Interpolation, reduction
– I/O (traditional, socket based, remote viz and steering…)
– Checkpointing, coordinates
"Grid Computing": Cactus team and many collaborators worldwide, especially NCSA, Argonne/Chicago, LBL. Revolution coming...

Modularity of Cactus...
(diagram: applications, sub-apps, legacy apps and a symbolic manipulation app plug into the Cactus Flesh, which sits on swappable layers for AMR (GrACE, etc.), MPI, I/O, unstructured meshes, remote steering, MDS/remote spawning, and Globus metacomputing services)
User selects desired functionality… code created... abstractions...

Cactus Driver API
Cactus provides standard interfaces for parallelization, interpolation, reduction, I/O, etc. (e.g. CCTK_MyProc, CCTK_Reduce, ...)
A reduction operation across processors, CCTK_Reduce(...), is implemented by whichever driver layer is compiled in: MPI/Globus (thorn PUGH), PVM, OpenMP, or nothing at all (single processor).
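To make the abstraction concrete, a thorn routine only ever talks to the flesh API; a minimal sketch (invented thorn and routine names, standard CCTK_* calls) is shown below. The same calls work unchanged whichever driver sits underneath.

/* Minimal sketch of a thorn routine using the flesh API; the thorn and
   routine names are invented for illustration.  The calls behave the
   same whether the driver is MPI, MPICH-G2/Globus, PVM, OpenMP, or a
   single processor.                                                    */
#include "cctk.h"
#include "cctk_Arguments.h"

void ExampleThorn_Report(CCTK_ARGUMENTS)
{
  DECLARE_CCTK_ARGUMENTS;

  int myproc = CCTK_MyProc(cctkGH);   /* this processor's rank        */
  int nprocs = CCTK_nProcs(cctkGH);   /* total number of processors   */

  /* A global reduction would go through CCTK_Reduce(...) in the same
     driver-independent way.                                           */
  CCTK_VInfo(CCTK_THORNSTRING, "running as processor %d of %d",
             myproc, nprocs);
}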

Cactus Community (diagram of contributing and collaborating groups): DLR, Geophysics (Bosl), Numerical Relativity Community, Cornell, Crack propagation, NASA NS GC, Livermore, SDSS (Szalay), Intel, Microsoft, Clemson, "Egrid", NCSA, ANL, SDSC, AEI Cactus Group (Allen), NSF KDI (Suen, Foster, ES), EU Network (Seidel), Astrophysics (Zeus), US Grid Forum, DFN Gigabit (Seidel), "GRADS" (Kennedy, Foster, Dongarra, et al.), ChemEng (Bishop), San Diego, GMD, Cornell, Berkeley

Future view: much of it here already...
Scale of computations much larger: complexity approaching that of Nature
– Simulations of the Universe and its constituents: black holes, neutron stars, supernovae
– Airflow around advanced planes, spacecraft
– Human genome, human behavior
Teams of computational scientists working together
– Must support efficient, high-level problem description
– Must support collaborative computational science
– Must support all different languages
Ubiquitous Grid computing
– Very dynamic simulations, deciding their own future
– Apps find the resources themselves: distributed, spawned, etc...
– Must be tolerant of dynamic infrastructure (variable networks, processor availability, etc…)
– Monitored, viz'ed, controlled from anywhere, with colleagues anywhere else...

Our Team Requires Grid Technologies, Big Machines for Big Runs
Sites: WashU, NCSA, AEI, ZIB, Hong Kong, Thessaloniki, Paris
How do we: maintain/develop the code? manage computer resources? carry out/monitor the simulation?

Grid Simulations: a new paradigm
Computational resources scattered across the world: compute servers, file servers, networks, handhelds, Playstations, cell phones, etc…
How to take advantage of this for scientific/engineering simulations?
– Harness multiple sites and devices
– Simulations at a new level of complexity and scale

Many Components for Grid Computing: all have to work for real applications
1. Resources: Egrid, a "Virtual Organization" in Europe for Grid computing; over a dozen sites across Europe; many different machines
2. Infrastructure: the Globus Metacomputing Toolkit, which develops the fundamental technologies needed to build computational grids: security (logins, data transfer), communication, information services (GRIS, GIIS)

Components for Grid Computing, cont.
3. Grid-aware applications (Cactus example): Grid-enabled modular toolkits for parallel computation, provided to the scientist/engineer… plug your science/engineering applications in!
Must provide many Grid services:
– Ease of use: automatically find resources, given some need!
– Distributed simulations: use as many machines as needed!
– Remote viz and steering, tracking: watch what happens!
Collaborations of groups with different expertise: no single group can do it! The Grid is natural for this…

Cactus & the Grid Cactus Application Thorns Distribution information hidden from programmer Initial data, Evolution, Analysis, etc Grid Aware Application Thorns Drivers for parallelism, IO, communication, data mapping PUGH: parallelism via MPI (MPICH-G2, grid enabled message passing library) Grid Enabled Communication Library MPICH-G2 implementation of MPI, can run MPI programs across heterogenous computing resources Standard MPI Single Proc

A Portal to Computational Science: The Cactus Collaboratory
Cactus Computational Toolkit: science, Autopilot, AMR, PETSc, HDF, MPI, GrACE, Globus, remote steering...
1. User has a science idea; composes/builds code components with the interface; selects appropriate resources; steers the simulation and monitors performance; collaborators log in to monitor
Want to integrate and migrate this technology to the generic user…

Grid Applications so far... (SC93 - SC2000)
Typical scenario:
– Find remote resource (often using multiple computers)
– Launch job (usually static, tightly coupled)
– Visualize results (usually in-line, fixed)
Example: Metacomputing the Einstein Equations, connecting T3Es in Berlin, Garching, San Diego
Need to go far beyond this:
– Make it much, much easier (portals, Globus, standards)
– Make it much more dynamic, adaptive, fault tolerant
– Migrate this technology to the general user

Supercomputing super difficult Consider simplest case: sit here, compute there Accounts for one AEI user (real case): berte.zib.de denali.mcs.anl.gov golden.sdsc.edu gseaborg.nersc.gov harpo.wustl.edu horizon.npaci.edu loslobos.alliance.unm.edu mcurie.nersc.gov modi4.ncsa.uiuc.edu ntsc1.ncsa.uiuc.edu origin.aei-potsdam.mpg.de pc.rzg.mpg.de pitcairn.mcs.anl.gov quad.mcs.anl.gov rr.alliance.unm.edu sr8000.lrz-muenchen.de 16 machines, 6 different usernames, 16 passwords,...

Cactus Portal (Michael Russell, et al) KDI ASC Project Technology: Globus, GSI, Java, DHTML, Java CoG, MyProxy, GPDK, TomCat, Stronghold Allows submission of distributed runs Used for the ASC Grid Testbed (SDSC, NCSA, Argonne, ZIB, LRZ, AEI) Driven by the need for easy access to machines

Distributed Computation: Harnessing Multiple Computers
Why would anyone want to do this? Capacity, throughput
Issues: bandwidth, latency, communication needs, topology, communication/computation ratio
Techniques to be developed: overlapping communication with computation, extra ghost zones, compression, algorithms to do this for the scientist…
Experiments: 3 T3Es on 2 continents; last week, a joint NCSA/SDSC test with 1500 processors

Distributed Terascale Test
SDSC IBM SP: 1024 procs, 5x12x17 = 1020
NCSA Origin Array: 5x12x(4+2+2) = 480
Networks: Gig-E (100 MB/sec); OC-12 line between sites (but only 2.5 MB/sec available)
Solved Einstein's equations for gravitational waves (real code)
Tightly coupled: communication required through derivatives; must communicate 30 MB/step between machines; each time step takes 1.6 sec
Used 10 ghost zones along the direction between machines: communicate every 10 steps
Compression/decompression on all data passed in this direction
Achieved 70-80% scaling, 200 GF (only 20% scaling without these tricks)
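The extra-ghost-zone trick used in this run can be sketched in plain MPI: widen the overlap between machines so that neighbours only synchronise every G steps instead of every step. The sketch below is illustrative only (1D decomposition, toy stencil), not the actual Cactus/PUGH driver code.

/* Sketch of the "extra ghost zones" idea: with G ghost cells per side,
   a width-1 stencil can take G steps between neighbour exchanges.
   1D decomposition, illustrative only.                                  */
#include <mpi.h>
#include <stdlib.h>

#define NLOCAL 64   /* interior points per process                */
#define G      10   /* ghost width = steps between exchanges      */

static void exchange_ghosts(double *u, int rank, int size, MPI_Comm comm)
{
    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;
    /* send my left-edge interior to the left, fill my right ghosts      */
    MPI_Sendrecv(&u[G],          G, MPI_DOUBLE, left,  0,
                 &u[G + NLOCAL], G, MPI_DOUBLE, right, 0, comm, MPI_STATUS_IGNORE);
    /* send my right-edge interior to the right, fill my left ghosts     */
    MPI_Sendrecv(&u[NLOCAL],     G, MPI_DOUBLE, right, 1,
                 &u[0],          G, MPI_DOUBLE, left,  1, comm, MPI_STATUS_IGNORE);
}

int main(int argc, char **argv)
{
    int rank, size, total = NLOCAL + 2 * G;
    double *u   = calloc(total, sizeof *u);
    double *tmp = calloc(total, sizeof *u);

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (int step = 0; step < 100; step++) {
        if (step % G == 0)                    /* communicate every G steps only */
            exchange_ghosts(u, rank, size, MPI_COMM_WORLD);
        for (int i = 1; i < total - 1; i++)   /* simple 3-point update           */
            tmp[i] = 0.25 * u[i-1] + 0.5 * u[i] + 0.25 * u[i+1];
        for (int i = 1; i < total - 1; i++)
            u[i] = tmp[i];
    }

    MPI_Finalize();
    free(u); free(tmp);
    return 0;
}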

Remote Visualization: must be able to watch any simulation live…
IsoSurfaces and geodesics: computed inline with the simulation, only geometry sent across the network
Raster images: to a web browser
Arbitrary grid functions: streaming HDF5 to Amira, LCA Vision, OpenDX
Works NOW!! Any app plugged into Cactus

Remote Visualization - Issues
Parallel streaming: Cactus can do this, but readers are not yet available on the client side
Handling of port numbers: clients currently have no method for finding the port number that Cactus is using for streaming; development of an external meta-data server is needed (ASC/TIKSL)
Generic protocols: need to develop them, for Cactus and the Grid
Data server: Cactus should pass data to a separate server that will handle multiple clients without interfering with the simulation; TIKSL provides middleware (streaming HDF5) to implement this; output parameters for each client

Remote Steering
(diagram: remote viz data flows via XML, HTTP and HDF5 to Amira or any viz client; steering changes any steerable parameter: physics, algorithms, performance)

Remote Steering
Stream parameters from the Cactus simulation to a remote client, which changes parameters (GUI, command line, viz tool) and streams them back to Cactus, where they change the state of the simulation.
Cactus has a special STEERABLE tag for parameters, indicating that it makes sense to change them during a simulation and that there is support for changing them. Examples: I/O parameters, frequency, fields, timestep, debugging flags.
Current protocols:
– XML (HDF5) to a standalone GUI
– HDF5 to viz tools (Amira)
– HTTP to a web browser (HTML forms)
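On the simulation side, steering ultimately amounts to setting a parameter by name through the flesh; a minimal sketch, assuming the CCTK_ParameterSet call of the flesh API (the thorn and parameter names here are illustrative, not prescriptive):

/* Sketch: programmatically steering a parameter from inside a thorn.
   Only parameters declared STEERABLE in the owning thorn's param.ccl
   may be changed once the simulation is running.                      */
#include "cctk.h"
#include "cctk_Arguments.h"

void MyThorn_ReduceOutput(CCTK_ARGUMENTS)
{
  DECLARE_CCTK_ARGUMENTS;

  /* hypothetical example: ask an I/O thorn to write less frequently */
  if (CCTK_ParameterSet("out_every", "IOHDF5", "100") < 0)
  {
    CCTK_WARN(1, "could not steer out_every (is it declared STEERABLE?)");
  }
}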

Thorn HTTPD
A thorn which allows any simulation to act as its own web server
Connect to the simulation from any browser anywhere
Monitor the run: parameters, basic visualization, ...
Change steerable parameters
See running example at ...
Wireless remote viz, monitoring and steering

Remote Offline Visualization
(diagram: a viz client (Amira) reads through an HDF5 VFD backed by DataGrid (Globus), DPSS, FTP and HTTP, talking to DPSS, FTP, web and remote data servers; viz in Berlin of 4TB distributed across NCSA/ANL/Garching, transferring only what is needed)
Accessing remote data for local visualization
Should allow downsampling, hyperslabbing, etc.
Grid World: file pieces left all over the world, but logically one file…
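Hyperslabbing in this context is the standard HDF5 mechanism for reading a strided sub-block of a dataset, so a client can pull a downsampled slice instead of the whole multi-terabyte file. A small local sketch is below (the dataset name is hypothetical, and a remote driver/VFD would stand in for the local file opened here).

/* Sketch: read a downsampled hyperslab (every 4th point of one z-slice)
   from an HDF5 dataset.  A remote VFD would replace the local file; the
   dataset name "grid_function" is made up for the example.              */
#include <stdio.h>
#include <hdf5.h>

int main(void)
{
    hsize_t start[3]  = {0, 0, 16};   /* offset into the 3D dataset      */
    hsize_t stride[3] = {4, 4, 1};    /* take every 4th point in x and y */
    hsize_t count[3]  = {64, 64, 1};  /* number of points to read        */
    hsize_t mdims[2]  = {64, 64};
    static double buf[64][64];

    hid_t file  = H5Fopen("data.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
    hid_t dset  = H5Dopen(file, "grid_function", H5P_DEFAULT);
    hid_t fspec = H5Dget_space(dset);
    hid_t mspec = H5Screate_simple(2, mdims, NULL);

    H5Sselect_hyperslab(fspec, H5S_SELECT_SET, start, stride, count, NULL);
    H5Dread(dset, H5T_NATIVE_DOUBLE, mspec, fspec, H5P_DEFAULT, buf);
    printf("sample value: %g\n", buf[0][0]);

    H5Sclose(mspec); H5Sclose(fspec); H5Dclose(dset); H5Fclose(file);
    return 0;
}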

Dynamic Distributed Computing
The static grid model works only in special cases; we must make apps able to respond to a changing Grid environment...
Many new ideas. Consider: the Grid IS your computer:
– Networks, machines, devices come and go
– Dynamic codes, aware of their environment, seeking out resources
– Rethink algorithms of all types
– Distributed and Grid-based thread parallelism
Scientists and engineers will change the way they think about their problems: think global, solve much bigger problems
Many old ideas: the 1960's all over again; how to deal with dynamic processes, processor management, memory hierarchies, etc.

New Paradigms for Dynamic Grids: a lot of work to be done to make this happen
The code should be aware of its environment:
– What resources are out there NOW, and what is their current state?
– What is my allocation?
– What is the bandwidth/latency between sites?
The code should be able to make decisions on its own:
– A slow part of my simulation can run asynchronously… spawn it off!
– New, more powerful resources just became available… migrate there!
– Machine went down… reconfigure and recover!
– Need more memory… get it by adding more machines!
The code should be able to publish this information to a central server for tracking, monitoring, steering…
– Unexpected event… notify users!
– Collaborators from around the world all connect, examine the simulation.

Grid Scenario (cartoon)
"We see something, but too weak. Please simulate to enhance the signal!"  "OK!"
Resource Estimator: "Need 5TB, 2TF. Where can I do this?"
Resource Broker: "LANL is the best match…"
Resource Broker: "NCSA + Garching OK, but need 10Gbit/sec…"
(sites in the cartoon: WashU, Potsdam, Thessaloniki, RZG, NCSA, Hong Kong, LANL; 1 Tbit/sec links)

New Grid Applications
Dynamic staging: move to a faster/cheaper/bigger machine ("Cactus Worm")
Multiple universe: create a clone to investigate a steered parameter ("Cactus Virus")
Automatic convergence testing: from initial data, or initiated during the simulation
Look ahead: spawn off a coarser-resolution run to predict the likely future
Spawn independent/asynchronous tasks: send to a cheaper machine, main simulation carries on
Thorn profiling: best machine/queue; choose resolution parameters based on the queue
….

New Grid Applications (2)
Dynamic load balancing: inhomogeneous loads, multiple grids
Portal: resource choosing, simulation launching, management
Intelligent parameter surveys: farm out to different machines
Make use of:
– Running with management tools such as Condor, Entropia, etc.
– Scripting thorns (management, launching new jobs, etc.)
– Dynamic use of, e.g., MDS for finding available resources

Dynamic Grid Computing (diagram)
Go! Find best resources; free CPUs!! Clone job with steered parameter; queue time over, find new machine; add more resources; look for horizon; found a horizon, try out excision; calculate/output gravitational waves; calculate/output invariants; archive data
(sites: NCSA, SDSC, RZG, LRZ)

User’s View... simple!

Cactus Worm: illustration of the basic scenario
A Cactus simulation (could be anything) starts, launched from a portal
It queries a Grid Information Server and finds available resources
It migrates itself to the next site, according to some criterion
It registers its new location with the GIS and terminates the old simulation
The user tracks/steers, using http, streaming data, etc...
It continues around Europe…
If we can do this, much of what we want can be done!

Grid Application Development Toolkit
The application developer should be able to build simulations with tools that easily enable dynamic grid capabilities
Want to build a programming API to easily allow:
Querying an information server (e.g. GIIS)
– What's available for me? What software? How many processors?
Network monitoring
Decision thorns
– How to decide? Cost? Reliability? Size?
Spawning thorns
– Now start this up over here, and that up over there
Authentication server
– Issues commands, moves files on your behalf (can't pass on a Globus proxy)

Grid Application Development Toolkit (2)
Information server
– What is running where? Where to connect for viz/steering? What and where are other people in the group running?
– Spawn hierarchies
– Distribute/load balance
Data transfer
– Use whatever method is desired: gsi-ssh, gsi-ftp, streamed HDF5, scp, GASS, etc…
LDAP routines for simulation codes
– Write simulation information in LDAP format
– Publish to an LDAP server
Stage executables
– CVS checkout of new codes that become connected, etc…
Etc…
If we build this, we can get developers and users!
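This toolkit was a plan rather than existing software at the time; purely to illustrate the shape of the API these two slides ask for, a hypothetical C header might group the services like this (every name below is invented for the sketch).

/* grid_app_toolkit.h -- hypothetical sketch of the API the slide envisions.
   None of these names exist; they only group the services listed above.    */
#ifndef GRID_APP_TOOLKIT_H
#define GRID_APP_TOOLKIT_H

#include <stddef.h>

typedef struct gat_resource gat_resource;      /* one machine/queue offer    */

/* Query an information server (e.g. a GIIS): what is available for me?     */
int gat_query_resources(const char *giis_url, const char *requirements,
                        gat_resource **out, size_t *n_out);

/* Decision support: rank offers by cost, reliability, size, ...            */
int gat_rank_resources(gat_resource *offers, size_t n, const char *policy);

/* Spawning: start an independent task on another resource.                 */
int gat_spawn(const gat_resource *where, const char *executable,
              const char *arguments);

/* Data transfer by whatever method is configured (gsi-ftp, scp, ...).      */
int gat_copy(const char *src_url, const char *dst_url);

/* Publish simulation metadata (e.g. in LDAP format) for tracking.          */
int gat_publish_info(const char *server_url, const char *ldif_record);

#endif /* GRID_APP_TOOLKIT_H */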

Example Toolkit Call: Routine Spawning
(diagram: scheduling bins ID, EV, IO, AN)
schedule AHFinder at Analysis
{
  EXTERNAL = yes
  LANG     = C
} "Finding Horizons"

Many groups trying to make this happen
EU Network Proposal: AEI, Lecce, Poznan, Brno, Amsterdam, ZIB-Berlin, Paderborn, Compaq, Sun, Chicago, ISI, Wisconsin
Developing this technology…

Grid Related Projects
ASC: Astrophysics Simulation Collaboratory
– NSF funded (WashU, Rutgers, Argonne, U. Chicago, NCSA)
– Collaboratory tools, Cactus Portal; starting to use the Portal for production runs
E-Grid: European Grid Forum (GGF: Global Grid Forum)
– Working Group for Testbeds and Applications (chair: Ed Seidel)
– Test application: Cactus+Globus; demos at Dallas SC2000
GrADS: Grid Application Development Software
– NSF funded (Rice, NCSA, U. Illinois, UCSD, U. Chicago, U. Indiana...)
– Application driver for grid software

Grid Related Projects (2)
Distributed runs
– AEI (Thomas Dramlitsch), Argonne, U. Chicago
– Working towards running on several computers, 1000's of processors (different processors, memories, OSs, resource management, varied networks, bandwidths and latencies)
TIKSL/GriKSL
– German DFN funded: AEI, ZIB, Garching
– Remote online and offline visualization, remote steering/monitoring
Cactus Team
– Dynamic distributed computing…
– Grid Application Development Toolkit

Summary
Science/engineering drives and demands Grid development
– Problems very large, but practical, and fundamentally connected to industry
Grids will fundamentally change research
– Enable problem scales far beyond present capabilities
– Enable larger communities to work together (they'll need to)
– Change the way researchers/engineers think about their work
The dynamic nature of the Grid makes the problem much more interesting
– Harder
– Matches the dynamic nature of the problems being studied
More info: ...