
1 Grid Computing
The AEI Numerical Relativity Group has access to high-end resources in over ten centers in Europe/USA. They want: bigger simulations, more simulations and faster throughput; intuitive I/O at the local workstation; no new systems/techniques to master!!
How to make best use of these resources?
Provide easier access … no one can remember ten usernames, passwords, batch systems, file systems, … a great start!!!
Combine resources for larger production runs (more resolution badly needed!)
Dynamic scenarios … automatically use what is available
Many other reasons for Grid Computing for computer scientists, funding agencies, supercomputer centers …

2 Grand Picture
Viz of data from previous simulations in SF café
Remote viz in St Louis
Remote steering and monitoring from airport
Remote viz and steering from Berlin
DataGrid/DPSS, downsampling, isosurfaces, HTTP, HDF5
T3E: Garching / Origin: NCSA, coupled via Globus
Simulations launched from Cactus Portal
Grid-enabled Cactus runs on distributed machines
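The downsampling mentioned above is what makes remote viz over ordinary networks feasible: ship every k-th grid point instead of the full field, cutting data volume by roughly k³. A minimal sketch, using plain nested lists in place of the real HDF5 grid data:

```python
# Downsample a 3D field before sending it to a remote visualization client.
# Pure-Python nested lists stand in for the simulation's real grid arrays.

def downsample(field, stride):
    """Return every stride-th point of a nested-list 3D field, per axis."""
    return [[row[::stride] for row in plane[::stride]]
            for plane in field[::stride]]

# Toy 8x8x8 field with value = x + y + z at each point
n = 8
field = [[[x + y + z for z in range(n)] for y in range(n)] for x in range(n)]

coarse = downsample(field, 2)
print(len(coarse), len(coarse[0]), len(coarse[0][0]))  # 4 4 4
print(coarse[1][1][1])  # original point (2,2,2) -> 6
```

With stride 2 the 8³ field shrinks to 4³, an 8× reduction; isosurface extraction on the client then works on the coarse field.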

3 Thorn HTTPD
Thorn which allows any simulation to act as its own web server
Connect to the simulation from any browser anywhere
Monitor the run: parameters, basic visualization, ...
Change steerable parameters
See running example at
Wireless remote viz, monitoring and steering
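The idea behind thorn HTTPD can be sketched in a few lines: the simulation process embeds a tiny web server, so any browser (or script) can read its state and change steerable parameters while it runs. The sketch below is illustrative only; the endpoint, JSON format, and the parameter name "courant_factor" are assumptions, not the thorn's real interface:

```python
# Minimal sketch of a simulation acting as its own web server:
# GET returns current parameters, POST steers them mid-run.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Steerable parameters exposed by the "simulation" (illustrative names)
params = {"courant_factor": 0.5, "iteration": 0}

class SimHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Report current state as JSON
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(params).encode())

    def do_POST(self):
        # A body like {"courant_factor": 0.25} steers the run
        length = int(self.headers["Content-Length"])
        params.update(json.loads(self.rfile.read(length)))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep output quiet

server = HTTPServer(("127.0.0.1", 0), SimHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/params"

# "Browser" side: monitor, then steer
before = json.loads(urllib.request.urlopen(url).read())
req = urllib.request.Request(url, data=b'{"courant_factor": 0.25}', method="POST")
urllib.request.urlopen(req)
after = json.loads(urllib.request.urlopen(url).read())
print(before["courant_factor"], after["courant_factor"])
server.shutdown()
```

In the real thorn the server runs inside the parallel simulation and serves HTML pages, but the monitor/steer round trip is the same pattern.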

4 User's View

5 User Portal
Find resources: automatically finds machines with a user allocation (group aware!); continuously monitors resources, network, etc.
Authentication: single login, no need to remember lots of usernames/passwords
Launch simulation: automatically create the executable on the chosen machine; write data to an appropriate storage location; negotiate local queue structures
Monitor/steer simulations: access remote visualization and steering while the simulation is running; collaborative … choose who else can look in and/or steer; performance … how efficient is the simulation?
Archiving: store thorn lists, parameter files, output locations, configurations, …
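The "find resources" step above can be sketched as a simple filter over monitoring data: keep machines where the user's group holds an allocation and enough processors are free, preferring the shortest queue wait. All field names and the sample machines below are illustrative, not the portal's real schema:

```python
# Sketch of group-aware resource selection from testbed monitoring data.

def find_machines(machines, group, procs_needed):
    """Machines usable by `group` with capacity, best queue first."""
    eligible = [m for m in machines
                if group in m["allocations"] and m["free_procs"] >= procs_needed]
    return sorted(eligible, key=lambda m: m["queue_wait_min"])

# Illustrative monitoring snapshot
testbed = [
    {"name": "T3E Garching", "allocations": {"aei-nr"}, "free_procs": 128, "queue_wait_min": 45},
    {"name": "Origin NCSA",  "allocations": {"aei-nr"}, "free_procs": 64,  "queue_wait_min": 10},
    {"name": "SP2 SDSC",     "allocations": {"other"},  "free_procs": 256, "queue_wait_min": 5},
]

choices = find_machines(testbed, "aei-nr", 64)
print([m["name"] for m in choices])  # ['Origin NCSA', 'T3E Garching']
```

SP2 SDSC is skipped despite the shortest queue, because this group has no allocation there; that is the "group aware" part.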

6 Cactus Portal KDI ASC Project
Technology: Globus, GSI, Java Beans, DHTML, Java CoG, MyProxy, GPDK, TomCat, Stronghold
Allows submission of distributed runs
Accesses the ASC Grid Testbed (SDSC, NCSA, Argonne, ZIB, AHPCC, WashU, AEI)
Undergoing testing by users now!
Main difficulty now is that it requires everything to work …
But it is going to revolutionise our use of computing resources
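A run submitted (and later archived) through the portal is described by a thorn list and a Cactus parameter file. The fragment below is an illustrative sketch of the format; the thorn and parameter names are examples, not a tested production configuration:

```
# Illustrative Cactus parameter file (names are examples only)
ActiveThorns = "PUGH CartGrid3D WaveToyC IOBasic HTTPD"

driver::global_nsize = 64      # grid points per dimension
cactus::cctk_itlast  = 1000    # number of iterations

httpd::port = 8080             # thorn HTTPD: monitor/steer from a browser

IO::out_dir            = "wave_output"
IOBasic::outInfo_every = 10
```

Storing these files centrally is what lets the portal recreate or move a run to whatever testbed machine is available.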

7 ASC: Astrophysics Simulation Collaboratory
NSF-funded Knowledge and Distributed Intelligence (KDI) project
Institutes: WashU, Rutgers, Argonne, U. Chicago, NCSA
Aim: develop a collaboratory for the astrophysics community, providing the capability for massively parallel computation including AMR, interactive visualization and metacomputing.
ASC Portal: general simulation portal interfacing to Cactus
Technology: Globus, GSI, Java Beans, DHTML, Java CoG, MyProxy, GPDK, TomCat, Stronghold
Version 1 included code assembly, compilation, job submission
Version 2 now being developed, based on user feedback

