
1 National Center for Supercomputing Applications, University of Illinois at Urbana–Champaign. Visualization Support for XSEDE and Blue Waters. DOE Graphics Forum 2014.

2 Organization: Blue Waters, XSEDE, Advanced Digital Services

3 Common Blue Waters and XSEDE functions
- User support (including visualization)
- Network infrastructure
- Storage
- Operations
- Support for future NCSA efforts
- Resources managed by service level agreements

4 Advanced Digital Services Visualization Group: support for data analysis and visualization
- XSEDE: Dave Bock, Mark VanMoer
- Blue Waters: Dave Semeraro, Rob Sisneros

5 Visualization Support
- Software: ParaView, VisIt, yt, IDL (coming soon); also ncview, matplotlib, ffmpeg, ... (see the sketch after this list)
- OpenGL driver on Blue Waters XK nodes
- Direct user support
- Data analysis / custom rendering
- Scaling analysis tools / parallel I/O
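
A minimal sketch of how one of these tools might be used in batch on the system, using yt (one of the listed packages); the dataset filename and field are hypothetical placeholders, not taken from the presentation:

    import yt

    # Load a simulation output (hypothetical filename).
    ds = yt.load("plt00100")

    # Axis-aligned slice of gas density, rendered without a display and saved to PNG.
    slc = yt.SlicePlot(ds, "z", ("gas", "density"))
    slc.save("density_slice.png")

The same kind of scripted, display-free workflow applies to ParaView (pvbatch) and VisIt batch sessions.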

6 XSEDE

7 Blue Waters example visualizations: stellar magnetic and temperature field; supernova ignition bubble; atmospheric downburst.

8 Blue Waters Compute System
Total Peak Performance: 13.34 PF
Total System Memory: 1.476 PB
XE Bulldozer Cores*: 362,240
XK Bulldozer Cores* (CPU): 33,792
XK Kepler Accelerators (GPU): 4,224
Interconnect Architecture: 3D Torus
Topology: 24x24x24
Compute nodes per Gemini: 2
Storage: 26.4 PB
Bandwidth: > 1 TB/sec

9 Blue Waters Visualization System: same system table as above; visualization runs directly on the XK nodes (33,792 XK Bulldozer CPU cores, 4,224 Kepler GPUs).

10 Blue Waters Allocations
- GLCPC: 2%
- PRAC: over 80%
- Illinois: 7%
- Education: 1%
- Project Innovation and Exploration
- Industry

11 GLCPC: Great Lakes Consortium for Petascale Computing
GLCPC Mission: "...facilitate and coordinate multi-institutional efforts to advance computational and computer science engineering, and technology research and education and their applications to outstanding problems of regional or national importance..."
- 2% allocation
- 501(c)(3) organization*
- 28 charter members**
- Executive Committee, Allocations Committee, Education Committee
* State 501(c)(3) filing complete, federal filing in progress
** The 28 charter members represent over 80 universities, national laboratories, and other education agencies

12 Industry S&E Teams
High interest shared by partner companies in:
- Scaling capability of a well-known and validated CFD code
- Temporal and transient modeling techniques and understanding
Two example cases under discussion:
- NASA OVERFLOW at scale for CFD flows
- Temporal modeling techniques using the freezing of H2O molecules as a use case, both to conduct large-scale single runs and to gain significant insight by reducing uncertainty
Industry can participate in the NSF PRAC process; 5+% of the allocation can be dedicated to industrial use.
Specialized support by the NCSA Private Sector Program (PSP) staff; Blue Waters staff will support the PSP staff as needed.
Potential to provide specialized services within Service Level Agreement parameters, e.g. throughput, extra expertise, specialized storage provisions, etc.

13 Impact of OpenGL on XK
NCSA, following S&E team suggestions, convinced Cray and NVIDIA to deploy a Kepler driver that enables OpenGL applications, allowing visualization tools to run directly on the XK nodes. First Cray system to do this?
Two early impacts:
- Schulten (NAMD): 10x to 50x rendering speedup in VMD. OpenGL rendering serves as a backup to ray tracing, used to fill in failed ray-traced frames (see the sketch below). Potential for interactive remote display.
- Woodward: eliminates the need to move data. Created movies of a large simulation in days rather than a year.
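
A loose illustration of the frame-backfill idea above: a short Python sketch that scans a ray-traced frame sequence for missing images, so that just those frames can be re-rendered with the OpenGL backend. The directory name, file-name pattern, and frame count are hypothetical:

    import os

    TOTAL_FRAMES = 10000    # hypothetical movie length
    FRAME_DIR = "frames"    # hypothetical output directory

    # Indices of frames whose ray-traced image never appeared on disk.
    missing = [i for i in range(TOTAL_FRAMES)
               if not os.path.exists(os.path.join(FRAME_DIR, f"frame_{i:05d}.png"))]

    print(f"{len(missing)} frames to re-render with the OpenGL backend")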

14 10,560³-grid inertial confinement fusion (ICF) calculation with multifluid PPM. Rendered 13,688 frames at 2048x1080 pixels, 4 panels per view and 2 views per stereo image at 4096x2160 pixels. The stereo movie is 1,711 frames.

Real Improvement in Time to Solution

                                  Local (Minnesota)                      Remote (NCSA)
Raw data transfer time            26 TB @ 20 MB/s = 15 days              0 sec
Rendering time (13,688 frames)    est. 33 days (6 nodes, 1 GPU/node)     24 hours (128 GPUs)
Visualized data transfer          0 sec                                  38 GB @ 20 MB/s = 32 minutes
Total time                        33 to 48 days                          24.5 hours

About 40x speedup, plus better analysis (arithmetic check below).
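
The transfer-time entries follow from the data sizes and the quoted 20 MB/s wide-area rate; a quick check, assuming decimal units (an assumption, not stated in the presentation):

    RATE = 20e6            # 20 MB/s

    raw = 26e12 / RATE     # 26 TB of raw simulation data
    print(raw / 86400)     # about 15.0 days

    vis = 38e9 / RATE      # 38 GB of rendered output
    print(vis / 60)        # about 31.7 minutes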


