Albert-Einstein-Institut www.aei-potsdam.mpg.de Cactus: Developing Parallel Computational Tools to Study Black Hole, Neutron Star (or Airplane...) Collisions.

Presentation transcript:

Albert-Einstein-Institut Cactus: Developing Parallel Computational Tools to Study Black Hole, Neutron Star (or Airplane...) Collisions. Solving Einstein’s Equations, Black Holes, and Gravitational Wave Astronomy. Cactus, a new community simulation code framework –Toolkit for many PDE systems –Suite of solvers for Einstein and astrophysics systems. Recent Simulations using Cactus –Black Hole Collisions, Neutron Star Collisions –Collapse of Gravitational Waves –Aerospace test project. Metacomputing for the general user: what a scientist really wants and needs –Distributed Computing Experiments with Cactus/Globus. Ed Seidel, Albert-Einstein-Institut MPI-Gravitationsphysik & NCSA/U of IL

Albert-Einstein-Institut Einstein’s Equations and Gravitational Waves. Einstein’s General Relativity –Fundamental theory of Physics (Gravity) –Among the most complex equations of physics: dozens of coupled, nonlinear hyperbolic-elliptic equations with 1000’s of terms; we barely have the capability to solve them after a century –Predict black holes, gravitational waves, etc. Exciting new field about to be born: Gravitational Wave Astronomy –Fundamentally new information about the Universe –What are gravitational waves? Ripples in spacetime curvature, caused by matter motion, causing distances to change: h = Δs/s ~ 10⁻²¹! A last major test of Einstein’s theory: do they exist? –Eddington: “Gravitational waves propagate at the speed of thought” –1993 Nobel Prize Committee: Hulse-Taylor Pulsar (indirect evidence) –20xx Nobel Committee: ??? (for actual detection…) Colliding BH’s and NS’s...

Albert-Einstein-Institut Teraflop Computation, AMR, Elliptic-Hyperbolic, ??? Numerical Relativity Waveforms: We Want to Compute What Happens in Nature... [Image: PACS Virtual Machine Room]

Albert-Einstein-Institut Black Holes: Excellent source of waves. Need cosmic cataclysms to provide strong waves! BH’s have very strong gravity, collide near the speed of light, and have ~ solar masses! May collide “frequently” –Not very often in any local region of space, but... –Perhaps ~3 per year within 200 Mpc, the range of detectors… Need to have some idea what the signals will look like if we are to detect and understand them…

Albert-Einstein-Institut Einstein Equations: New Formulations, New Capabilities. Einstein Eqs.: G_μν(γ_ij) = 8π T_μν. Traditional Evolution Equations: ADM –∂_tt γ = S(γ) (Think Maxwell: ∂E/∂t = ∇×B, ∂B/∂t = −∇×E); S(γ) has thousands of terms (very ugly!) –4 nonlinear elliptic constraints (Think Maxwell: ∇·B = ∇·E = 0) –4 gauge conditions (often elliptic) (Think Maxwell: the freedom to choose the potentials φ, A) –Numerical methods “ad hoc”, not manifestly hyperbolic. NEW: First Order Symmetric Hyperbolic: ∂_t u + ∂_i F^i(u) = S(u) –u is a vector of many fields, typically of order 50 –Complete set of eigenfields (under certain conditions…) –Many variations on these formulations, dozens of papers since 1992 –Elliptic equations still there…

Albert-Einstein-Institut Computational Needs for 3D Numerical Relativity. Explicit finite difference codes –~10⁴ Flops/zone/time step –~100 3D arrays. Require 1000³ zones or more –~1000 GBytes –Double resolution: 8x memory, 16x Flops. TFlop, TByte machine required. Parallel AMR, I/O essential. A code that can do this could be useful to other projects (we said this in all our grant proposals)! –Last 2 years devoted to making this useful across disciplines… –All tools used for these complex simulations available for other branches of science, engineering... Initial data: 4 coupled nonlinear elliptics. Evolution: hyperbolic evolution coupled with elliptic eqs. [Figure: evolution from t=0 to t=100]
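A quick back-of-the-envelope check of these figures (assuming ~100 double-precision 3D arrays and an explicit scheme whose CFL-limited time step scales with the grid spacing):

```latex
% Memory estimate and resolution scaling (assumptions as stated above)
\text{Memory} \;\approx\; 1000^{3}\ \text{zones} \times 100\ \text{arrays} \times 8\ \text{bytes}
            \;\approx\; 0.8\ \text{TByte}
\qquad
\text{Doubling resolution:}\quad
\text{memory} \times 2^{3} = 8,\qquad
\text{work} \times \underbrace{2^{3}}_{\text{zones}} \times \underbrace{2}_{\text{steps}} = 16 .
```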

Albert-Einstein-Institut Any Such Computation Requires Incredible Mix of Varied Technologies and Expertise! Many Scientific/Engineering Components –Physics, astrophysics, CFD, engineering,... Many Numerical Algorithm Components –Finite difference methods? Unstructured meshes? –Elliptic equations: multigrid, Krylov subspace, preconditioners,... –Mesh Refinement? Many Different Computational Components –Parallelism (HPF, MPI, PVM, ???) –Architecture Efficiency (MPP, DSM, Vector, PC Clusters, ???) –I/O Bottlenecks (generate gigabytes per simulation, checkpointing…) –Visualization of all that comes out! Scientist wants to focus on top bullet, but all required for results...

Albert-Einstein-Institut Clearly we need teams, with a huge expertise base, to attack such problems... In fact, we need collections of communities to solve such problems... But how can they work together effectively? We need a simulation code environment that encourages this... These are the fundamental issues addressed by Cactus: providing advanced computational science to scientists/engineers, and providing collaborative infrastructure for large groups.

Albert-Einstein-Institut Grand Challenges: NSF Black Hole and NASA Neutron Star Projects. NSF Black Hole project: University of Texas (Matzner, Browne), NCSA/Illinois/AEI (Seidel, Saylor, Smarr, Shapiro, Saied), North Carolina (Evans, York), Syracuse (G. Fox), Cornell (Teukolsky), Pittsburgh (Winicour), Penn State (Laguna, Finn). NASA Neutron Star project: NCSA/Illinois/AEI (Saylor, Seidel, Swesty, Norman), Argonne (Foster), Washington U (Suen), Livermore (Ashby), Stony Brook (Lattimer). NEW! EU Network

Albert-Einstein-Institut What we learn from Grand Challenges Successful, but also problematic… –No existing infrastructure to support collaborative HPC –Many scientists are bad Fortran programmers, and NOT computer scientists (especially physicists…like me…) –Many sociological issues of large collaborations and different cultures –Many language barriers... –Applied mathematicians, computational scientists, physicists have very different concepts and vocabularies… –Code fragments, styles, routines often clash –Successfully merged code (after years) often impossible to transplant into more modern infrastructure (e.g., add AMR or switch to MPI…) Many serious problems... Parlez-vous C? Nein! Nur Fortran!

Albert-Einstein-Institut Cactus: a new concept in community-developed simulation code infrastructure. Developed as a response to the needs of these projects. Numerical/computational infrastructure to solve PDEs. Freely available, open community source code, in the spirit of GNU/Linux. Cactus is divided into the “Flesh” (core) and “Thorns” (modules, or collections of subroutines) –User apps can be Fortran, C, C++; automated interface between them –Parallelism abstracted and hidden (if desired) from the user –User specifies flow: when to call thorns; the code switches memory on/off. Many parallel utilities/features enabled by Cactus. (Nearly) all architectures supported: –Dec Alpha / SGI Origin 2000 / T3E / Linux clusters + laptops / Hitachi / NEC / HP / Windows NT / SP2 / Sun –Code portability, migration to new architectures very easy!
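To make the Flesh/Thorn split concrete, here is a minimal sketch of a user routine in a hypothetical thorn (the thorn, variable, and parameter names are illustrative; the CCTK_ARGUMENTS / DECLARE_CCTK_ARGUMENTS pattern is the standard Cactus C interface generated from the thorn's CCL files):

```c
/* Hypothetical thorn routine: the Flesh generates the argument list from
   interface.ccl, so the routine sees its grid functions (phi, phi_p), the
   local grid sizes (cctk_lsh), and the time step without any explicit MPI
   calls; the driver thorn handles domain decomposition and ghost zones. */
#include "cctk.h"
#include "cctk_Arguments.h"
#include "cctk_Parameters.h"

void WaveToy_Evolve(CCTK_ARGUMENTS)
{
  DECLARE_CCTK_ARGUMENTS;   /* grid functions, grid sizes, time, ... */
  DECLARE_CCTK_PARAMETERS;  /* parameters declared in param.ccl */

  const int npoints = cctk_lsh[0] * cctk_lsh[1] * cctk_lsh[2];
  int i;

  for (i = 0; i < npoints; i++)
  {
    /* toy update using the previous time level (phi_p) and the time step;
       'amplitude' is a hypothetical parameter from param.ccl */
    phi[i] = phi_p[i] + CCTK_DELTA_TIME * amplitude;
  }
}
```

The same source runs unchanged on a laptop or a 1024-processor T3E, because parallelism lives in the driver thorn, not in the physics routine.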

Albert-Einstein-Institut Modularity of Cactus... [Diagram: applications (1a, 1b, 2, sub-apps) plug into the Cactus Flesh, which connects to interchangeable service layers: AMR (GrACE, etc.), MPI layer, I/O layer, remote steering, Globus metacomputing services.] The user selects the desired functionality...

Albert-Einstein-Institut Computational Toolkit: provides parallel utilities (thorns) for computational scientist Cactus is a framework or middleware for unifying and incorporating code from Thorns developed by the community –Choice of parallel library layers (Native MPI, MPICH, MPICH-G(2), LAM, WMPI, PACX and HPVM) –Portable, efficient (T3E, SGI, Dec Alpha, Linux, NT Clusters…) –3 mesh refinement schemes: Nested Boxes, GrACE, HLL (coming…) –Parallel I/O (Panda, FlexIO, HDF5, etc…) –Parameter Parsing –Elliptic solvers (Petsc, Multigrid, SOR, etc…) –Visualization Tools, Remote steering tools, etc… –Globus (metacomputing/resource management) –Performance analysis tools (Autopilot, PAPI, etc…) –INSERT YOUR CS MODULE HERE...

Albert-Einstein-Institut PAPI. Standard API for accessing the hardware performance counters on most microprocessors. Useful for tuning, optimization, debugging, benchmarking, etc. A Java GUI is available for monitoring the metrics. Cactus thorn: CactusPerformance/PAPI
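For illustration, a minimal sketch of what a performance thorn can do with the PAPI counter API (the wrapper function and the choice of counters here are illustrative, not the thorn's actual code):

```c
/* Count floating-point operations and cycles around a kernel call using
   the PAPI low-level API. */
#include <stdio.h>
#include <papi.h>

void count_flops(void (*kernel)(void))
{
  int eventset = PAPI_NULL;
  long long values[2];

  PAPI_library_init(PAPI_VER_CURRENT);
  PAPI_create_eventset(&eventset);
  PAPI_add_event(eventset, PAPI_FP_OPS);   /* floating-point operations */
  PAPI_add_event(eventset, PAPI_TOT_CYC);  /* total cycles */

  PAPI_start(eventset);
  kernel();                                /* e.g. one evolution step */
  PAPI_stop(eventset, values);

  printf("FP ops: %lld, cycles: %lld\n", values[0], values[1]);
}
```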

Albert-Einstein-Institut GrACE Parallel/distributed AMR via C++ library Abstracts Grid Hierarchies, Grid Functions and Grid Geometries CactusPAGH will include a driver thorn which uses GrACE to provide AMR (KDI ASC Project)

Albert-Einstein-Institut How to use Cactus: Avoiding the MONSTER code syndrome... [Optional: Develop thorns, according to some rules –e.g. specify variables through interface.ccl –Specify the calling sequence of the thorns for a given problem and algorithm (schedule.ccl)] Specify which thorns are desired for the simulation (Einstein equations + special method 1 + HRSC hydro + wave finder + AMR + live visualization module + remote steering tool…); see the thorn-list sketch below. The specified code is then created, with only those modules, those variables, those I/O routines, this MPI layer, that AMR system,…, that are needed. Subroutine calling lists are generated automatically. Automatically configured for the desired computer architecture. Run it…
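For illustration, the thorn list handed to the build system is just a plain text file naming arrangement/thorn pairs (the selection below is a hypothetical sketch; the two "MyPhysics" thorns are made-up names standing in for the user's own modules):

```
# ThornList: pick exactly the modules this simulation needs
CactusBase/CartGrid3D        # coordinates and grid setup
CactusBase/Boundary          # boundary conditions
CactusBase/IOUtil            # common I/O infrastructure
CactusPUGH/PUGH              # the parallel (MPI) driver layer
CactusPUGHIO/IOHDF5          # parallel HDF5 output
CactusElliptic/EllSOR        # an elliptic solver
MyPhysics/EinsteinEvolve     # your evolution thorn (hypothetical)
MyPhysics/WaveExtract        # your analysis thorn (hypothetical)
```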

Albert-Einstein-Institut Cactus Computational Tool Kit. Flesh (core) written in C. Thorns (modules) grouped in packages, written in F77, F90, C, C++. Thorn-Flesh interface fixed in 3 files written in CCL (Cactus Configuration Language): –interface.ccl: Grid Functions, Arrays, Scalars (integer, real, logical, complex) –param.ccl: Parameters and their allowed values –schedule.ccl: Entry points of routines, dynamic memory and communication allocations (see the sketch below). Object-oriented features for thorns (public, private, protected variables, implementations, inheritance) for clearer interfaces. Compilation: –Perl parses the CCL files and creates the flesh-thorn interface code at compile time –Particularly important for the Fortran-C interface: Fortran argument lists must be known at compile time, but depend on the thorn list
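A concrete sketch of the three CCL files for a hypothetical "wavetoy" thorn (the declarations follow standard CCL syntax, but the names, ranges, and defaults are illustrative):

```
# interface.ccl -- declare the thorn's grid variables
implements: wavetoy
inherits: grid

CCTK_REAL scalarfield TYPE=GF TIMELEVELS=2
{
  phi
} "The evolved scalar field"

# param.ccl -- parameters, allowed ranges, and defaults
REAL amplitude "Amplitude of the initial Gaussian"
{
  0.0:* :: "Any non-negative value"
} 1.0

# schedule.ccl -- when the Flesh calls the thorn's routines
STORAGE: scalarfield

schedule WaveToy_Evolve at evol
{
  LANG: C
} "Evolve the scalar field one time step"
```

At compile time Perl reads these files and generates the glue code, including the Fortran/C argument lists mentioned above.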

Albert-Einstein-Institut High performance: Full 3D Einstein Equations solved on NCSA NT Supercluster, Origin 2000, T3E Excellent scaling on many architectures –Origin up to 256 processors –T3E up to 1024 –NCSA NT cluster up to 128 processors Achieved 142 Gflops/s on 1024 node T3E-1200 (benchmarked for NASA NS Grand Challenge) But, of course, we want much more… metacomputing, meaning connected computers...

Albert-Einstein-Institut Cactus Development Projects [diagram of connected projects and communities: AEI Cactus Group (Allen), NASA “Round 2” (Saylor), Round 3??, NSF KDI (Suen), EU Network (Seidel), DFN Gigabit (Seidel), Microsoft, “GRADS”, “Egrid”, NCSA, Grid Forum, DLR, Numerical Relativity, Astrophysics, Geophysics]

Albert-Einstein-Institut Applications Neutron Stars –Developing capability to do full GR hydro –Now can follow full orbits! DLR project: working to explore capabilities for aerospace industry Black Holes (prime source for GW) –Increasingly complex collisions: now doing full 3D grazing collisions Gravitational Waves –Study linear waves as testbeds –Move on to fully nonlinear waves –Interesting Physics: BH formation in full 3D!

Albert-Einstein-Institut Evolving Pure Gravitational Waves Einstein’s equations nonlinear, so low amplitude waves just propagate away, but large amplitude waves may… –Collapse on themselves under their own self-gravity and actually form black holes Use numerical relativity: Probe GR in highly nonlinear regime –Form BH?, Critical Phenomena in 3D?, Naked singularities? –… Little known about generic 3D behavior Take “Lump of Waves” and evolve –Large amplitude: get BH to form! –Below critical value: disperses and can evolve “forever” as system returns to flat space We are seeing hints of critical phenomena, known from nonlinear dynamics

Albert-Einstein-Institut Comparison: sub- vs. super-critical solutions. Newman-Penrose Ψ₄ (showing gravitational waves) with the lapse α underneath. Subcritical: no BH forms. Supercritical: BH forms!

Albert-Einstein-Institut Numerical Black Hole Evolutions. Binary IVP: Multiple Wormhole Model, other models. Black Holes are good candidates for Gravitational Wave Astronomy –~3 events per year within 200 Mpc –But what are the waveforms? GW astronomers want to know! [Diagram: two holes with spins S1, S2 and momenta P1, P2]

Albert-Einstein-Institut Now try the first 3D “Grazing Collision”. Big step: spinning, “orbiting”, unequal-mass BHs merging. Evolution of Ψ₄ in the x-z plane (rotation plane of the BHs). Horizon merger. Alcubierre et al. results: 384³, 100 GB simulation, the largest production relativity simulation, on a 256-processor Origin 2000

Albert-Einstein-Institut Our Team Requires Grid Technologies, Big Machines for Big Runs. [Map of collaborating sites: WashU, NCSA, Hong Kong, AEI, ZIB, Thessaloniki, Paris; PACS Virtual Machine Room] How do we: maintain/develop the code? manage computer resources? carry out/monitor simulations?

Albert-Einstein-Institut Cactus Portal, Distributed Simulation under active development at NASA-Ames. Deutsches Luft- und Raumfahrtzentrum (DLR) Pilot Project –a CFD code (Navier-Stokes with turbulence model, or Euler) with a special extension to calculate turbine streams; can be used for “normal” CFD problems as well –based on finite volume discretization on a block-structured regular Cartesian grid –currently has simple MPI parallelization –Plugging into Cactus to evaluate aerospace applications

Albert-Einstein-Institut What we need and want in simulation science: a Portal to provide the following... Got an idea? Write a Cactus module, link to other modules, and… Find resources –Where? NCSA, SDSC, Garching, Boeing…??? –How many computers? Distribute simulations? –Big jobs: a “Fermilab” at your disposal: must get it right while the beam is on! Launch the simulation –How do we get the executable there? –How do we store data? –What are the local queue structures/OS idiosyncrasies? Monitor the simulation –Remote visualization live while running Limited bandwidth: compute viz. inline with the simulation High bandwidth: ship data to be visualized locally –Visualization server: all privileged users can log in and check status/adjust if necessary Are parameters screwed up? Very complex! Call in an expert colleague…let her watch it too Steer the simulation –Is memory running low? AMR! What to do? Refine selectively or acquire additional resources via Globus? Delete unnecessary grids? Postprocessing and analysis –1 TByte of output at NCSA, research groups in St. Louis and Berlin…how to deal with this?

Albert-Einstein-Institut A Portal to Computational Science: The Cactus Collaboratory [Workflow diagram: 1. the user has a science idea; composes/builds code components with the interface; selects appropriate resources; steers the simulation and monitors performance; collaborators log in to monitor. Underneath sits the Cactus Computational Toolkit: science thorns, Autopilot, AMR, PETSc, HDF, MPI, GrACE, Globus, remote steering...] We want to integrate and migrate this technology to the generic user...

Albert-Einstein-Institut Remote Visualization. IsoSurfaces and Geodesics; Contour plots (download); Grid Functions; Streaming HDF5. Clients: Amira, LCA Vision, OpenDX

Albert-Einstein-Institut Remote Visualization Tools under Development. Live data streaming from a Cactus simulation to a viz client –Clients: OpenDX, Amira, LCA Vision, Xgraph. Protocols –Precomputed viz run inline with the simulation: isosurfaces, geodesics –HTTP: parameters, xgraph data, JPEGs, viewed and controlled from any web browser –Streaming HDF5: sends raw data from the resident memory of the supercomputer; HDF5 provides downsampling and hyperslabbing (see the sketch below) for all of the above data, and all possible HDF5 data (e.g. 2D/3D); two different technologies: –Streaming Virtual File Driver (I/O rerouted over a network stream) –XML wrapper (HDF5 calls wrapped and translated into XML)
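A minimal sketch of the downsampling idea using HDF5 hyperslab selection (file name, dataset name, and sizes are hypothetical; this is the generic HDF5 mechanism rather than the project's actual reader):

```c
/* Read every 4th point of a hypothetical 256^3 grid function "phi" from an
   HDF5 file: the hyperslab stride does the downsampling, so only 64^3 values
   ever have to cross the network or disk. */
#include <hdf5.h>

int main(void)
{
  hsize_t start[3]  = {0, 0, 0};
  hsize_t stride[3] = {4, 4, 4};          /* keep every 4th zone: 64x less data */
  hsize_t count[3]  = {64, 64, 64};       /* downsampled extent of a 256^3 source */
  static double buf[64 * 64 * 64];

  hid_t file   = H5Fopen("phi.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
  hid_t dset   = H5Dopen(file, "phi");    /* 1.6-era call; newer HDF5: H5Dopen2
                                             or compile with -DH5_USE_16_API */
  hid_t fspace = H5Dget_space(dset);
  hid_t mspace = H5Screate_simple(3, count, NULL);

  H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, stride, count, NULL);
  H5Dread(dset, H5T_NATIVE_DOUBLE, mspace, fspace, H5P_DEFAULT, buf);

  H5Sclose(mspace); H5Sclose(fspace); H5Dclose(dset); H5Fclose(file);
  return 0;
}
```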

Albert-Einstein-Institut Remote Steering [Diagram: remote viz data streams from the simulation via XML, HTTP, and HDF5 to Amira or any viz client]

Albert-Einstein-Institut Remote Steering. Stream parameters from the Cactus simulation to a remote client, which changes parameters (GUI, command line, viz tool) and streams them back to Cactus, where they change the state of the simulation. Cactus has a special STEERABLE tag for parameters, indicating that it makes sense to change them during a simulation and that there is support for them to be changed (see the sketch below). Example: I/O parameters, frequency, fields. Current protocols: –XML (HDF5) to a standalone GUI –HDF5 to viz tools (Amira, OpenDX, LCA Vision, etc.) –HTTP to a Web browser (HTML forms)
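For example, a steerable I/O parameter might be declared in param.ccl roughly like this (a sketch: the parameter name, range, and default are illustrative; the STEERABLE tag is the mechanism described above):

```
# param.ccl -- a parameter the user may change while the run is in progress
INT out_every "How often (in iterations) to write output" STEERABLE = ALWAYS
{
  1:* :: "Any positive number of iterations"
} 10
```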

Albert-Einstein-Institut Remote Offline Visualization [Diagram: a viz client (Amira, e.g. in Berlin) reads remote data through the HDF5 VFD and DataGrid (Globus), via DPSS, FTP, or HTTP servers fronting the remote data server (2 TByte at NCSA); downsampling and hyperslabs reduce what must cross the network]

Albert-Einstein-Institut Metacomputing: harnessing power when and where it is needed. Einstein equations are typical of apps that require extreme memory and speed; the largest supercomputers are too small! Networks are very fast! –vBNS, etc. in the US –DFN Gigabit testbed: 622 Mbit/s Potsdam-Berlin-Garching, connecting multiple supercomputers –International gigabit networking possible –Connect workstations to make a supercomputer. Acquire resources dynamically during the simulation! –AMR, analysis, etc... Seamless computing and visualization from anywhere. Many metacomputing experiments in progress –Current ANL/SDSC/NCSA/NERSC/… experiment in progress...

Albert-Einstein-Institut Metacomputing the Einstein Equations: Connecting T3E’s in Berlin, Garching, San Diego Want to migrate this technology to the generic user...

Albert-Einstein-Institut Grand Picture [Diagram: simulations launched from the Cactus Portal; Grid-enabled Cactus runs on distributed machines (Origin at NCSA, T3E at Garching); remote steering and monitoring from an airport; remote viz in St. Louis; remote viz and steering from Berlin; viz of data from previous simulations in an SF café; connected via DataGrid/DPSS, downsampling, Globus, HTTP, HDF5, isosurfaces]

Albert-Einstein-Institut The Future. Gravitational wave astronomy is almost here: we must be able to solve Einstein’s equations in detail to understand the new observations. New codes, strong collaborations, bigger computers, new formulations of Einstein’s equations: together enabling much new progress. The Cactus Computational Toolkit, developed originally for Einstein’s equations, is available now for many applications (NOT an astrophysics code!) –Useful as a parallel toolkit for many applications; provides portability from laptop to many parallel architectures (e.g. a cluster of iPaqs!) –Many advanced collaborative tools, and a portal for code composition, resource selection, computational steering, and remote viz, under development. Advanced Grid-based metacomputing tools are maturing...

Albert-Einstein-Institut Further details... Cactus – – Movies, research overview (needs major updating) – Simulation Collaboratory/Portal Work: – Remote Steering, high speed networking – – EU Astrophysics Network –