1 Science on the TeraGrid
Daniel S. Katz (d.katz@ieee.org)
Director of Science, TeraGrid GIG
Senior Computational Researcher, Computation Institute, University of Chicago & Argonne National Laboratory
Affiliate Faculty, Center for Computation & Technology, LSU
Adjunct Associate Professor, Electrical and Computer Engineering Department, LSU

2 What is the TeraGrid?
World's largest open scientific discovery infrastructure
Leadership-class resources at eleven partner sites combined to create an integrated, persistent computational resource:
– High-performance computers (>2 Pflops, >150,000 cores) and a Condor pool (w/ ~13,000 CPUs)
– Medium- to long-term storage (>3 PB disk, >60 PB of tape)
– Visualization systems
– Data collections (>30 PB, >100 discipline-specific databases)
– Science gateways (~35)
– High-performance networks
– Unified user services: portal, help desk, training, advanced application support
Allocated to US researchers and their collaborators through a national peer-review process
– Generally, a review of computing, not science
Extremely user-driven
– MPI jobs, ssh or grid (GRAM) access, etc. (see the sketch below)
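As an illustration of the MPI usage mode, here is a minimal sketch of an MPI job, assuming mpi4py is available on the resource; production TeraGrid codes are typically C or Fortran MPI applications, but the pattern of launching many ranks under the batch system is the same:

```python
# Minimal MPI sketch, assuming mpi4py is installed on the resource.
# Launched under the batch system, e.g.: mpiexec -n 64 python hello_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD        # communicator spanning all ranks in the job
rank = comm.Get_rank()       # index of this process
size = comm.Get_size()       # total number of processes

print(f"Hello from rank {rank} of {size}")

# A small collective: gather one value per rank onto rank 0.
values = comm.gather(rank, root=0)
if rank == 0:
    print("Gathered ranks:", values)
```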

3 Governance
11 Resource Providers (RPs) funded under separate agreements with NSF
– Different start and end dates
– Different goals
– Different agreements
– Different funding models
1 Coordinating Body: the Grid Integration Group (GIG)
– University of Chicago/Argonne National Laboratory
– Subcontracts to all RPs and six other universities
– 7-8 Area Directors
– Working groups with members from many RPs
TeraGrid Forum with Chair

4 TG New Large Resources
Ranger@TACC
– First NSF 'Track 2' HPC system
– 504 TF
– 15,744 quad-core AMD Opteron processors
– 123 TB memory, 1.7 PB disk
Kraken@NICS (UT/ORNL)
– Second NSF 'Track 2' HPC system
– 1 PF Cray XT5 system: 10,000+ compute sockets, 100 TB memory, 2.3 PB disk
Blue Waters@NCSA
– NSF Track 1, 10 PF peak, coming in 2011

5 Who Uses TeraGrid (2007)
– Molecular Biosciences 31%
– Chemistry 17%
– Physics 17%
– Astronomical Sciences 12%
– Materials Research 6%
– Chemical, Thermal Systems 5%
– All 19 Others 4%
– Atmospheric Sciences 3%
– Earth Sciences 3%
– Advanced Scientific Computing 2%

6 How TeraGrid Is Used (2006 data)
Use modality and community size (rough estimate of number of users):
– Batch Computing on Individual Resources: 850
– Exploratory and Application Porting: 650
– Workflow, Ensemble, and Parameter Sweep: 250
– Science Gateway Access: 500
– Remote Interactive Steering and Visualization: 35
– Tightly-Coupled Distributed Computation: 10

7 Science Gateways
A natural extension of the Internet & Web 2.0
The idea resonates with scientists
– Researchers can imagine scientific capabilities provided through a familiar interface
– Mostly web portals, or web or client-server programs
Designed by communities; provide interfaces understood by those communities
– Also provide access to greater capabilities (back end), without the user needing to understand the details of those capabilities
– Scientists know they can undertake more complex analyses, and that is all they want to focus on
– TeraGrid provides tools to help the developer (a minimal sketch of the pattern follows below)
Seamless access doesn't come for free
– It hinges on a very capable developer
Slide courtesy of Nancy Wilkins-Diehr
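A minimal sketch of the gateway pattern, assuming a Flask web front end and a PBS back end reached via qsub; the endpoint name, job template, and parameters are illustrative assumptions, not the implementation of any particular TeraGrid gateway:

```python
# Hypothetical gateway front end: accept a domain-specific request over
# the web and submit a batch job to a back-end HPC resource on the user's
# behalf, hiding the scheduler details. Flask and a PBS-style "qsub" are
# assumptions for this sketch.
import subprocess
import tempfile

from flask import Flask, jsonify, request

app = Flask(__name__)

# Illustrative job template; a real gateway would validate inputs and run
# under a community account with per-user accounting.
JOB_TEMPLATE = """#!/bin/bash
#PBS -l nodes=1:ppn=8,walltime=01:00:00
cd $PBS_O_WORKDIR
./run_analysis --input {input_file}
"""

@app.route("/submit", methods=["POST"])
def submit():
    params = request.get_json()
    script = JOB_TEMPLATE.format(input_file=params["input_file"])
    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(script)
        script_path = f.name
    # The gateway, not the scientist, talks to the scheduler.
    result = subprocess.run(["qsub", script_path], capture_output=True, text=True)
    return jsonify({"job_id": result.stdout.strip()})

if __name__ == "__main__":
    app.run(port=8080)
```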

8 TG App: Predicting Storms
Hurricanes and tornadoes cause massive loss of life and damage to property
TeraGrid supported the spring 2007 NOAA and University of Oklahoma Hazardous Weather Testbed
– Major goal: assess how well ensemble forecasting predicts thunderstorms, including the supercells that spawn tornadoes
– Nightly reservation at PSC, spawning jobs at NCSA as needed for finer detail
– Input, output, and intermediate data transfers (see the staging sketch below)
– Delivers "better than real time" prediction
– Used 675,000 CPU hours for the season
– Used 312 TB of HPSS storage at PSC
Slide courtesy of Dennis Gannon, ex-IU, and the LEAD Collaboration
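A hypothetical sketch of the nightly pattern described on this slide: stage the day's input with GridFTP, then spawn one forecast job per ensemble member. The hostnames, paths, script names, and member count are invented for illustration; the real LEAD system drove this through its own workflow orchestration services:

```python
# Sketch only: nightly staging plus ensemble spawning. All names below
# (hosts, paths, forecast.pbs, member count) are illustrative assumptions.
import subprocess

MEMBERS = 10  # illustrative ensemble size

def stage_input(date_tag: str) -> None:
    """Copy the day's initial conditions to the compute site via GridFTP."""
    src = f"gsiftp://data.example.org/lead/input/{date_tag}.tar"
    dst = f"gsiftp://compute.example.org/scratch/lead/{date_tag}.tar"
    subprocess.run(["globus-url-copy", src, dst], check=True)

def launch_members(date_tag: str) -> None:
    """Submit one forecast job per perturbed ensemble member."""
    for member in range(MEMBERS):
        subprocess.run(
            ["qsub", "-v", f"DATE={date_tag},MEMBER={member}", "forecast.pbs"],
            check=True,
        )

if __name__ == "__main__":
    stage_input("20070501")
    launch_members("20070501")
```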

9 TG App: SCEC-PSHA
Part of SCEC (Tom Jordan, USC)
Using the large-scale simulation data, estimate probabilistic seismic hazard (PSHA) curves for sites in southern California: the probability that ground motion will exceed some threshold over a given time period
Used by hospitals, power plants, schools, etc. as part of their risk assessment
For each location, need a CyberShake run followed by roughly 840,000 parallel short jobs (420,000 rupture forecasts, 420,000 extractions of peak ground motion)
– Parallelize across locations, not within individual workflows
Completed 40 locations to date; targeting 200 in 2009 and 2,000 in 2010
Managing these runs requires effective grid workflow tools for job submission, data management, and error recovery, using Pegasus (ISI) and DAGMan (Wisconsin); a sketch of the idea follows below
Information/image courtesy of Phil Maechling
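A minimal, hypothetical sketch of expressing one location's many short jobs as a DAG for Condor DAGMan. The real CyberShake workflows are generated and managed by Pegasus; the submit-file names, site code, and tiny job count here are illustrative only:

```python
# Sketch: write a DAGMan .dag file pairing each rupture-forecast job with
# a peak-ground-motion extraction job that depends on it. File names and
# counts are placeholders, not the actual CyberShake workflow.
N_RUPTURES = 5  # illustrative; real runs involve ~420,000 per location

def write_dag(site: str, path: str) -> None:
    with open(path, "w") as dag:
        for i in range(N_RUPTURES):
            dag.write(f"JOB forecast_{i} forecast.sub\n")
            dag.write(f'VARS forecast_{i} site="{site}" rupture="{i}"\n')
            dag.write(f"JOB extract_{i} extract.sub\n")
            dag.write(f'VARS extract_{i} site="{site}" rupture="{i}"\n')
            # Extraction of peak ground motion runs after its forecast.
            dag.write(f"PARENT forecast_{i} CHILD extract_{i}\n")

if __name__ == "__main__":
    write_dag("SITE01", "site01.dag")
    # The DAG would then be run with: condor_submit_dag site01.dag
```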

10 App: GridChem
Different licensed applications with different queues
Will be scheduled for workflows
Slide courtesy of Joohyun Kim

11 TG Apps: Genius and Materials
Genius: HemeLB on LONI, modeling blood flow before (during?) surgery
Materials: LAMMPS on TeraGrid, fully-atomistic simulations of clay-polymer nanocomposites
Why cross-site / distributed runs?
1. Rapid turnaround: conglomeration of idle processors to run a single large job
2. Run big-compute and big-memory jobs not possible on a single machine
Slide courtesy of Steven Manos and Peter Coveney

12 App: PolarGrid
Goal: work with the Center for Remote Sensing of Ice Sheets (CReSIS)
Requirements:
– View CReSIS data sets, run filters, and view results through Web map interfaces
– See/share a user's events in a calendar
– Update results to a common repository with appropriate access controls
– Post the status of computational experiments
– Support collaboration and information exchange by interfacing to blogs and discussion areas
Solution: a Web 2.0 enabled PolarGrid portal
– Login screen: interface to create new users and log in using existing accounts, integrated with the OpenID API for authentication
Slide courtesy of Raminder Singh, Gerald Guo, Marlon Pierce

13 App: PolarGrid
Home page with a set of gadgets such as Google Calendar, Picasa, Facebook, blog, and Twitter
Slide courtesy of Raminder Singh, Gerald Guo, Marlon Pierce

14 Future App: Real-Time High-Resolution Radar Data
Delivering 3D visualization of radar data via a Google gadget: LiveRadar3D
– Super-high-resolution, real-time NEXRAD data
– Continuously updated as new data arrive
– 3D rendering that includes multiple stations across the US
– Significant processing (high throughput) and rendering supported by TG systems
– To be released next spring
Slide courtesy of Carol Song

15 TeraGrid -> XD Future
Current RP agreements end in March 2011
– Except Track 2 centers (current and future)
TeraGrid XD (eXtreme Digital) starts in April 2011
– Era of potential interoperation with OSG and others
– New types of science applications?
Current TG GIG continues through July 2011
– Allows four months of overlap in coordination
– Probable overlap between GIG and XD members
Blue Waters (Track 1) in production in 2011

16 TeraGrid: Both Operations and Research
Operations
– Facilities/services on which researchers rely
– Infrastructure on which other providers build
AND R&D
– Learning how to do distributed, collaborative science on a global, federated infrastructure
– Learning how to run multi-institution shared infrastructure
http://teragrid.org/

