Slide 1: Terascale Computing for FLASH
The University of Chicago Center on Astrophysical Thermonuclear Flashes
Rusty Lusk, Ian Foster, Rick Stevens, Bill Gropp

Slide 2: Outline
- Goals: requirements and objectives for FLASH computations
- Strategy: experiments, development, and research
- Accomplishments: results, tools, prototypes, and demonstrations
- Interactions: universities, ASCI labs, other ASCI centers, students

Slide 3: Why Does FLASH Need Terascale Computing?
- Complex non-linear physics on 10^9 zones
- Problem size determined by:
  - the 3D nature of the physical problem (required by turbulence and magnetic field evolution)
  - the extended dynamic range required to distinguish microphysics from large-scale physics
- Current methods require multiple teraflops of work per time step on grids of this size, for tens of thousands of time steps
- 1 TFLOP/s sustained is required to complete a full calculation (50,000 time steps) in ~60 hours (see the estimate below), and it will generate TBs of output data
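A quick back-of-envelope check of these figures (my arithmetic, not from the slide): 60 hours is about 2.2 x 10^5 seconds, so 50,000 time steps leave roughly 4 seconds per step. A machine sustaining 1 TFLOP/s therefore delivers about 4 x 10^12 floating-point operations per step, i.e. a few thousand operations per zone on a 10^9-zone grid, which is consistent with the "multiple teraflops of work per time step" estimate above.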

Slide 4: Requirements for Scientific Progress
- Apply a scientific approach to code development for FLASH-1
- Scalable performance of the astrophysics simulation code in a next-generation computing environment
- Develop and test on high-end machines
- Use scalable system and math libraries
- Use scalable I/O and standard data formats
- Scalable tools for converting output into scientific insight through advanced visualization and data management
- Ease of use for scientists in an environment with distributed resources
(Speakers: Rusty Lusk, Ian Foster, Rick Stevens)

Slide 5: Near-Term Strategy for Code Development
- Capitalize on an existing sophisticated astrophysics simulation code: ASTRO3D, from U. of C. Astrophysics
  - already 3D, parallel, and producing visualization output
  - not portable, not instrumented for performance studies
- Use ASTRO3D as an immediate tool for experimentation, to connect astrophysicists and computer scientists:
  - "probe" the ASCI machines
  - use as a template and data source for new visualization work and the distributed computing framework
  - use as a test case for portability and code-management experiments and performance visualization tools

Slide 6: Long-Term Strategy for Scientific Code Development
- Tools work in preparation for the FLASH-1 code:
  - scalable performance visualization
  - convenient and secure distributed computing
  - advanced visualization, standard data representations
  - adapting numerical libraries (e.g., PETSc) as necessary (see the sketch below)
  - adaptive mesh refinement research
  - studies and implementation work for standard parallel I/O
- Research into fundamental questions for the future code:
  - meshes, AMR schemes, and discretization strategies
  - multiresolution volume visualization
  - programming models for near-future architectures
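To illustrate what "adapting numerical libraries such as PETSc" looks like at the code level, here is a minimal sketch of a PETSc linear solve in C. It uses today's PETSc KSP interface (the 1998-era API differed in details), and the tridiagonal test system is purely illustrative, not a FLASH equation.

```c
#include <petscksp.h>

int main(int argc, char **argv)
{
    Mat A;  Vec x, b;  KSP ksp;
    PetscInt i, Istart, Iend, n = 100;   /* illustrative 1D problem size */

    PetscInitialize(&argc, &argv, NULL, NULL);

    /* Assemble a simple tridiagonal (1D Laplacian-like) matrix in parallel. */
    MatCreate(PETSC_COMM_WORLD, &A);
    MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
    MatSetFromOptions(A);
    MatSetUp(A);
    MatGetOwnershipRange(A, &Istart, &Iend);
    for (i = Istart; i < Iend; i++) {
        MatSetValue(A, i, i, 2.0, INSERT_VALUES);
        if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
        if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
    }
    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

    /* Right-hand side and solution vectors. */
    VecCreate(PETSC_COMM_WORLD, &b);
    VecSetSizes(b, PETSC_DECIDE, n);
    VecSetFromOptions(b);
    VecSet(b, 1.0);
    VecDuplicate(b, &x);

    /* Krylov solve; solver and preconditioner are selectable at run time,
       e.g. -ksp_type cg -pc_type jacobi. */
    KSPCreate(PETSC_COMM_WORLD, &ksp);
    KSPSetOperators(ksp, A, A);
    KSPSetFromOptions(ksp);
    KSPSolve(ksp, b, x);

    KSPDestroy(&ksp);
    VecDestroy(&x);  VecDestroy(&b);  MatDestroy(&A);
    PetscFinalize();
    return 0;
}
```

The point of wrapping solves behind an interface like this is that the solver and preconditioner (including the multigrid work mentioned in the FY99 plans) can be changed from the command line without touching the simulation code.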

Slide 7: FY98 Accomplishments
- ASTRO3D
  - message-passing component ported to MPI (Andrea Malagoli, Paul Plassmann, Bill Gropp, Henry Tufo)
  - I/O ported to MPI-IO, for portability and performance (Rajeev Thakur); see the sketch below
  - testing of source code control and configuration management (Bill Gropp)
- Using large machines (more on Tuesday)
  - use of all three ASCI machines (Henry Tufo, Lori Freitag, Anthony Chan, Debbie Swider)
  - use of large machines at ANL, NCSA, Pittsburgh, and elsewhere
  - scalability studies on the ASCI machines using ASTRO3D and SUMAA3d (scalable unstructured mesh computations) (Lori Freitag)
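For readers unfamiliar with what porting I/O to MPI-IO buys, the sketch below shows the basic pattern in C: every rank writes its block of a distributed array into a single shared restart file with one collective call, instead of writing per-process files. This is a generic illustration under assumed names (restart.dat, local_n), not the actual ASTRO3D I/O code.

```c
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    const int local_n = 1 << 20;     /* doubles owned by this rank (illustrative) */
    double *local_data;
    MPI_File fh;
    MPI_Offset offset;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Each rank fills its own contiguous block of the global array. */
    local_data = malloc(local_n * sizeof(double));
    for (int i = 0; i < local_n; i++)
        local_data[i] = rank + i * 1e-6;

    /* One shared file; each rank writes at its own offset, collectively. */
    MPI_File_open(MPI_COMM_WORLD, "restart.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    offset = (MPI_Offset)rank * local_n * sizeof(double);
    MPI_File_write_at_all(fh, offset, local_data, local_n,
                          MPI_DOUBLE, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    free(local_data);
    MPI_Finalize();
    return 0;
}
```

The collective MPI_File_write_at_all call lets the MPI-IO layer aggregate requests across processes, which is where most of the portability and performance benefit comes from.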

Slide 8: Accomplishments (continued)
- MPI-related work
  - MPICH, a portable implementation of MPI with extra features
  - improving the handling of datatypes
  - the parallel part of MPI-2 on all ASCI machines
  - MPICH-G, integrating MPICH and Globus
- Program visualization for understanding performance in detail
  - Jumpshot: a new Web-based system for examining logs (see the logging sketch below)
  - new effort on the scalability of program visualization
  - joint project with IBM, motivated by Livermore requirements
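To make the Jumpshot item concrete: Jumpshot displays log files produced by the MPE logging library distributed with MPICH. Below is a minimal sketch of manual MPE instrumentation in C, assuming the MPE interface as shipped with MPICH; the state name, color, dummy work loop, and log-file name are illustrative, and real codes often rely instead on MPE's automatic logging of MPI calls at link time.

```c
#include <mpi.h>
#include "mpe.h"

int main(int argc, char **argv)
{
    int rank, ev_compute_start, ev_compute_end;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPE_Init_log();

    /* Define a "compute" state so Jumpshot can draw it as a colored bar. */
    ev_compute_start = MPE_Log_get_event_number();
    ev_compute_end   = MPE_Log_get_event_number();
    if (rank == 0)
        MPE_Describe_state(ev_compute_start, ev_compute_end, "compute", "red");

    for (int step = 0; step < 10; step++) {
        MPE_Log_event(ev_compute_start, step, "begin compute");
        volatile double s = 0.0;            /* stand-in for real work */
        for (int i = 0; i < 1000000; i++)
            s += i * 1e-9;
        MPE_Log_event(ev_compute_end, step, "end compute");
    }

    /* Write the merged log file; view it afterwards with Jumpshot. */
    MPE_Finish_log("astro3d_demo");
    MPI_Finalize();
    return 0;
}
```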

Slide 9: FY99 Plans
- Apply lessons learned with ASTRO3D to the emerging FLASH-1 code
- Incorporate multigrid computations in PETSc
- Continue research into discretization issues
- Explore a component approach to building the FLASH code (see the sketch below), with the FLASH code as the motivator for flexible experimentation:
  - multiple meshing packages (DAGH, Paramesh, SUMAA3d, MEGA)
  - a variety of discretization approaches
  - multiple solvers
  - multiple physics modules
- MPI-2: beyond the message-passing model
- Scalable performance visualization (with IBM and LLNL)
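A hypothetical sketch of what a "component approach" could look like at the code level: each mesh package (DAGH, Paramesh, SUMAA3d, MEGA, ...) and each physics module sits behind a small table of functions, so the driver can swap them without being rewritten. All names here are illustrative and are not taken from the actual FLASH-1 design.

```c
#include <stdio.h>

/* Hypothetical component interfaces for mesh packages and physics modules. */
typedef struct {
    const char *name;
    int  (*init)(void);
    int  (*refine)(void);              /* adapt the mesh */
    void (*finalize)(void);
} mesh_ops;

typedef struct {
    const char *name;
    int (*advance)(double dt);         /* advance this module one time step */
} physics_ops;

/* Driver: knows nothing about which mesh or physics packages it is using. */
static int run_simulation(const mesh_ops *mesh, const physics_ops **physics,
                          int nphys, int nsteps, double dt)
{
    if (mesh->init() != 0) return 1;
    for (int step = 0; step < nsteps; step++) {
        for (int p = 0; p < nphys; p++)
            physics[p]->advance(dt);   /* e.g. hydro, nuclear burning */
        mesh->refine();                /* AMR pass after each step */
    }
    mesh->finalize();
    return 0;
}

/* Trivial stand-in implementations so the sketch runs. */
static int  dummy_init(void)       { puts("mesh: init");   return 0; }
static int  dummy_refine(void)     { puts("mesh: refine"); return 0; }
static void dummy_finalize(void)   { puts("mesh: done");            }
static int  dummy_hydro(double dt) { printf("hydro dt=%g\n", dt); return 0; }

int main(void)
{
    mesh_ops    mesh  = { "dummy-mesh",  dummy_init, dummy_refine, dummy_finalize };
    physics_ops hydro = { "dummy-hydro", dummy_hydro };
    const physics_ops *modules[] = { &hydro };
    return run_simulation(&mesh, modules, 1, 3, 0.01);
}
```

A concrete mesh_ops or physics_ops instance would be filled in by a thin wrapper around each package, which is what makes side-by-side experimentation with meshes, solvers, and physics modules practical.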

Slide 10: FLASH Center Computer Science Interactions
- With ASCI labs
  - LLNL: MPICH development, MPICH-G, MPI-IO for HPSS, PETSc with PVODE
  - LANL: MPICH with TotalView, MPI-IO on SGI, visualization
  - SNL: SUMAA3d with CUBIT, URB with Allegra, MPI-IO
- With other ASCI centers
  - Caltech Level 1 center: parallel I/O
  - Utah Level 1 center and AVTC: visualization
  - Princeton Level 2 center: visualization
  - Northwestern Level 2 center: data management
  - Old Dominion Level 2 center: parallel radiation transport
- With university groups
  - NCSA: HDF5 data formats group, parallel I/O for DMF
  - ISI: Globus
- With vendors
  - IBM, SGI, HP, Dolphin

Slide 11: A Course in Tools for Scientific Computing
- CS-341: Tools for High-Performance Scientific Computing
- Graduate and advanced undergraduate; expected 10 students, got 35
- Students from the Chicago departments of Physics, Chemistry, Computer Science, Social Sciences, Astrophysics, Geophysical Sciences, Mathematics, and Economics
- Hands-on (half of each class is in the computer lab)
- Taught primarily by the Argonne team
- Features tools used by, and in many cases written by, Argonne computer scientists

Slide 12: Visualization and Data Management
Mike Papka, Randy Hudson, Rick Stevens, Matt Szymanski
Futures Laboratory, Argonne National Laboratory and FLASH Center

Slide 13: Visualization and Data Management
- Requirements for FLASH-1 simulation output
  - large-scale 3D datasets (X128 :-))
  - a variety of data formats and data management scenarios
  - binary restart files → HDF5 and MPI-IO
- Our strategy for FLASH scientific visualization
  - scaling visualization performance and function
    - parallelism, faster surface and volume rendering
    - higher-resolution displays and immersion tests
    - improving the ability to visualize multiresolution data
  - improving the ability to manage TB-class datasets
    - standard data and I/O formats
    - interfaces to hierarchical storage managers
    - strategies for high-speed navigation

Slide 14: FLASH Visualization and Data Management Accomplishments
- Taught a UC course on visualization, CS-334: Scientific Visualization Tools and Technologies
- Developed a parallel multipipe volume renderer*
- Developed a scalable isosurface renderer*
- Developed an HDF/netCDF I/O exchange module
- Leveraging AVTC developments for FLASH
- Integrated the vtk library with the CAVE environment*
- Desktop integration with high-end visualization tools*
- Developed a prototype tiled wall display*
- Captured FLASH seminars with FL-Voyager
* funded in part by the ASCI Advanced Visualization Technology Center

Slide 15: UC Course on Visualization
- CS-334, Spring Quarter 1998
- 17 students, about half undergraduate and half graduate
- The course provides a base for more advanced work in VR and visualization
- Students constructed VR and visualization applications
- Students used the high-end environment at ANL and workstations at UC
[Photo: Argonne FL group]

Slide 16: Scientific Visualization for FLASH
- Created a FLASH dataset repository
  - currently five datasets in the repository
  - used as challenge problems for rendering and visualization research
- Rendered all FLASH-related datasets
  - ASTRO3D (multiple runs)
  - PROMETHEUS (currently the largest-scale dataset)
- Provided design input on visualization interfaces for the FLASH-1 code design
- FY99: work closely with the FLASH groups to produce visualizations of all large-scale computations

Slide 17: Developed a Parallel Multipipe Volume Renderer
- Accelerates volume rendering of 3D datasets using multiple InfiniteReality hardware pipes
- Integrated into the CAVE/Idesk environment
- Provides software for use of:
  - the SGI Reality Monster (FY98)
  - a commodity graphics cluster (FY99)
- Performance experiments
- FY99 goals:
  - real-time exploration at ~256^3
  - offline movies up to ~…
[Image: ASTRO3D jet]

Slide 18: Developed a Scalable Isosurface Renderer
- Designed to scale to …× N datasets
  - surface-rendered movies
- Uses remote compute resources to compute isosurfaces in real time
- Uses Globus
- FY99: plan to integrate with ASCI compute resources via Globus
  - FLASH dataset tests
  - other ASCI data
[Image: ASTRO3D]

Slide 19: Integrated the vtk Library with the CAVE Environment
- Enables high-functionality visualizations
- Builds on the 600+ classes in the vtk library
- Enables exploration of immersive visualization within vtk applications
- Enables very high-resolution offline rendering
- FY98: basic prototype (collaboration with LANL); demonstrated on ASTRO3D/PROMETHEUS runs
- FY99: parallel objects, performance tuning

Slide 20: HDF4/5 and netCDF I/O Exchange Modules
- FY98: developed two prototype interface modules
- Support portable I/O for visualization (see the HDF5 sketch below)
- FY99: plan to integrate these modules with the FLASH codes to ease visualization
[Diagram: visualization/data-management chains — FY98: Sim → restart files → filters A/B → viz; FY00: Sim → FDF → viz]
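To illustrate the kind of portable output these exchange modules target, here is a minimal HDF5 write of one 3D field in C. It uses the current HDF5 C API (the 1998 prototype interface differed), and the file name, dataset name, and grid size are illustrative rather than taken from the FLASH modules.

```c
#include <stdlib.h>
#include "hdf5.h"

int main(void)
{
    const hsize_t dims[3] = { 64, 64, 64 };        /* illustrative grid size */
    const size_t  n = (size_t)dims[0] * dims[1] * dims[2];
    double *density = malloc(n * sizeof(double));
    for (size_t i = 0; i < n; i++)
        density[i] = (double)i / n;                /* dummy field values */

    /* Create file, dataspace, and dataset, then write the whole array. */
    hid_t file  = H5Fcreate("checkpoint.h5", H5F_ACC_TRUNC,
                            H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(3, dims, NULL);
    hid_t dset  = H5Dcreate(file, "density", H5T_NATIVE_DOUBLE, space,
                            H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
    H5Dwrite(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, H5P_DEFAULT, density);

    H5Dclose(dset);
    H5Sclose(space);
    H5Fclose(file);
    free(density);
    return 0;
}
```

A visualization tool (or the netCDF counterpart of the module) can then read the "density" dataset without knowing anything about the simulation's internal binary restart layout.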

Slide 21: Integrated Visualization Tools with Desktop Tools for Remote Visualization
- Provides a desktop video view of immersive visualizations
- Enables remote desktop/CAVE/Idesk collaboration
- FY99: plans to tie into the high-end visualization suite

Slide 22: Developed a Prototype High-Resolution Tiled Wall Display
- ActiveMural project (AVTC funded), in collaboration with Princeton (Kai Li's group)
- Eight-projector prototype, 2500 x 1500 pixels (operational to date)
- Twenty-projector design, 4000 x 3000 pixels (up January '99)
- FY99: tie into visualization tools; validate on high-resolution output

Slide 23: Use of the Voyager Media Recorder to Capture FLASH Seminars
- Enables remote collaboration (ANL-UC)
- Asynchronous playback for FLASH members
- FY99: make FLASH seminars available to the ASCI labs
[Diagram: Voyager architecture — Java-based Voyager user interface, Voyager server with recording metadata, distributed multimedia filesystem nodes; RTSP control streams via CORBA, database calls, and RTP-encoded audio/video streams over the network]

Slide 24: Terascale Distance and Distributed Computing
Ian Foster, Joe Insley, Jean Tedesco, Steve Tuecke
Distributed Systems Laboratory & FLASH ASAP Center, Argonne National Laboratory

Slide 25: Distance and Distributed Computing
- Future simulation science (including FLASH & ASCI) requires "virtual" computers integrating distant resources
  - scientists, computers, storage systems, etc., are rarely colocated!
  - hence the need for a "simulation grid" to overcome barriers of distance, heterogeneity, and scale
- Argonne, via its Globus toolkit and GUSTO efforts, provides access to considerable expertise & technology
- Many opportunities for productive interactions with ASCI
  - access to distant terascale computer and data resources
  - end-to-end resource management ("distance corridors")
  - security, instrumentation, communication protocols, etc.
  - high-performance execution on distributed systems

Slide 26: FLASH Distance and Distributed Computing Strategy
- Build on capabilities provided by the Globus grid toolkit and the GUSTO grid testbed
- Use desktop access to ASTRO3D as the initial model problem
  - resource location, allocation, authentication, data access
- Use remote navigation of terabyte datasets as an additional research and development driver
  - data-visualization pipelines, protocols, scheduling
- Outreach effort to the DP labs: LANL, LLNL, SNL-A, SNL-L

Slide 27: Globus Project Goals (joint with USC/ISI [Caltech ASAP])
- Enable high-performance applications that use resources from a "computational grid"
  - computers, databases, instruments, people
- Via:
  - research in grid-related technology
  - development of the Globus toolkit: core services for grid-enabled tools and applications
  - construction of a large grid testbed: GUSTO
  - extensive application experiments
[Diagram: Globus layered architecture — applications (astrophysics, shock tube); tools (DUROC, Nimrod, MPICH-G, PAWS, RIO, PPFS, Metro, CAVERNsoft); core services (resource allocation, resource location, security, QoS, code management, communication, remote I/O, instrumentation, directory, fault detection); platforms (IP, MPI, shm, SGI, SP, Kerberos, PKI, LSF, PBS, NQE)]

Slide 28: Model Problem: Remote Execution of ASTRO3D
- Prototype "global shell" that allows us to:
  - sign on once via public-key technology
  - locate available computers
  - start a computation on an appropriate system
  - monitor the progress of the computation
  - get [subsampled] output files
  - manipulate them locally

Slide 29: Performance Driver: Remote Browsing of Large Datasets
- Problem: interactive exploration of very large (TB+) datasets
- Interactive client (VRUI) with view-management support
- Data reduction at the remote client (subsampling)
- Use of Globus to authenticate, transfer data, and access data
- Future driver for protocol and quality-of-service issues

Slide 30: Outreach to ASCI Labs
- Globus deployed at LANL, LLNL, SNL-L
  - Pete Beckman, LANL: remote visualization
  - Mark Seager, Mary Zosel, LLNL: multi-method MPICH
  - Robert Armstrong, Robert Clay, SNL-L: clusters
- Visits between ANL and LANL, LLNL, SNL-A, SNL-L
- DP lab participation in the Globus user meeting
- Extensive work on multi-cluster MPI (MPICH-G) for Blue Pacific
  - multi-method communication (shared memory, MPI, IP): demonstrated better performance than IBM's MPI
  - scalable startup for thousands of nodes

Slide 31: Challenges and Next Steps
- Can we use Globus to obtain access to DP lab resources?
  - numerous enthusiasts within the labs
  - but it is clearly "different" and requires buy-in
  - smart card support may help with acceptance
- Push further on the "desktop ASTRO3D" driver; use it to drive deployment
- Use interactive analysis of remote TB datasets as a performance driver
- Incorporate additional Globus features: quality of service, smart cards, instrumentation, etc.

Slide 32: END


Download ppt "The University of Chicago Center on Astrophysical Thermonuclear Flashes Terascale Computing for FLASH Rusty Lusk Ian Foster, Rick Stevens Bill Gropp."

Similar presentations


Ads by Google