
Slide 1: State of TeraGrid in Brief
TeraGrid '10, August 2-5, 2010, Pittsburgh, PA
John Towns, TeraGrid Forum Chair
Director of Persistent Infrastructure, National Center for Supercomputing Applications, University of Illinois
jtowns@ncsa.illinois.edu

Slide 2: TeraGrid Objectives
DEEP Science: Enabling Terascale and Petascale Science
– make science more productive through an integrated set of very-high-capability resources
– address key challenges prioritized by users
WIDE Impact: Empowering Communities
– bring TeraGrid capabilities to the broad science community
– partner with science community leaders via "Science Gateways"
OPEN Infrastructure, OPEN Partnership
– provide a coordinated, general-purpose, reliable set of services and resources
– partner with campuses and facilities

Slide 3: What is the TeraGrid?
An instrument that delivers high-end IT resources and services: computation, storage, visualization, and data services
– a computational facility: over two PFlops of parallel computing capability
– a collection of Science Gateways: a new idiom for accessing HPC resources via discipline-specific web-portal front-ends (a minimal sketch of the gateway model follows below)
– a data storage and management facility: over 20 PetaBytes of storage (disk and tape), over 100 scientific data collections
– a high-bandwidth national data network
A service: help desk and consulting, Advanced Support for TeraGrid Applications (ASTA), and education and training events and resources
Available freely to research and education projects with a US lead
– research accounts allocated via peer review
– Startup and Education accounts granted automatically
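To make the Science Gateway model concrete: a gateway's web front-end collects a user's inputs, and its back end submits a batch job to an HPC resource on the user's behalf, so the end user never touches the batch system directly. The Python sketch below illustrates that pattern under stated assumptions; the queue directives, application path, and helper names are hypothetical placeholders, not an actual TeraGrid gateway API (the template mimics the PBS-style batch schedulers common on TeraGrid systems).

```python
# Minimal sketch of a gateway back end submitting a web user's job to
# an HPC resource. All names (application path, resource limits) are
# hypothetical placeholders, not actual TeraGrid configuration.
import subprocess
import tempfile

PBS_TEMPLATE = """#!/bin/bash
#PBS -N {job_name}
#PBS -l nodes={nodes}:ppn={ppn}
#PBS -l walltime={walltime}
cd $PBS_O_WORKDIR
mpirun -np {nprocs} {application} {input_file}
"""

def submit_gateway_job(job_name, application, input_file,
                       nodes=4, ppn=8, walltime="02:00:00"):
    """Write a PBS script for a web user's request and submit it."""
    script = PBS_TEMPLATE.format(
        job_name=job_name, nodes=nodes, ppn=ppn, walltime=walltime,
        nprocs=nodes * ppn, application=application,
        input_file=input_file)
    with tempfile.NamedTemporaryFile(
            "w", suffix=".pbs", delete=False) as f:
        f.write(script)
        script_path = f.name
    # qsub prints the new job's identifier on success
    result = subprocess.run(["qsub", script_path],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

# e.g.: submit_gateway_job("phylo-run", "/apps/raxml", "alignment.phy")
```

Because the gateway holds the allocation and brokers every job, a single portal can serve many researchers who never log in to the HPC systems themselves.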

Slide 4: 11 Resource Providers, One Facility
[U.S. map of TeraGrid sites and network hubs; legend reconstructed below]
– Resource Providers (RP): SDSC, TACC, UC/ANL, NCSA, ORNL, PU, IU, PSC, NCAR, NICS, LONI
– Software Integration Partners: Caltech, USC/ISI, UNC/RENCI, UW
– Grid Infrastructure Group: UChicago

Slide 5: How is TeraGrid Organized?
TG is set up like a large cooperative research group
– evolved from many years of collaborative arrangements between the centers
– still evolving!
Federation of 12 awards
– Resource Providers (RPs): provide the computing, storage, and visualization resources
– Grid Infrastructure Group (GIG): central planning, reporting, coordination, facilitation, and management group
Leadership provided by the TeraGrid Forum
– made up of the PIs from each RP and from the GIG
– led by the TG Forum Chair (an elected position, currently held by John Towns), who is responsible for coordinating the group and for the strategic decision making that affects the collaboration
Day-to-day functioning via Working Groups (WGs)
– each WG operates under a GIG Area Director (AD), includes RP representatives and/or users, and focuses on a targeted area of TeraGrid

Slide 6: TeraGrid Resources and Services
Computing: more than 2 PFlops of aggregate computing power today, and growing (a quick sanity check on this figure follows below)
– Ranger: 579 TFlop Sun Constellation resource at TACC
– Kraken: 1.03 PFlop Cray XT5 at NICS/UTK
Remote visualization servers and software
– Spur: 128-core, 32-GPU cluster connected to Ranger's interconnect
– Longhorn: 2048-core, 512-GPU cluster directly connected to Ranger's parallel file system
– Nautilus: 1024-core, 16-GPU, 4 TB SMP directly connected to a parallel file system shared with Kraken
Data
– allocation of data storage facilities
– over 100 scientific data collections
Central allocations process
– single process to request access to (nearly) all TG resources and services
Core/central services
– documentation
– User Portal
– EOT program
Coordinated technical support
– central point of contact for support of all systems
– Advanced Support for TeraGrid Applications (ASTA)
– education and training events and resources
– over 30 Science Gateways
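As a quick sanity check on the aggregate figure, the two flagship systems named on this slide already account for most of it. A minimal sketch using only the peak numbers quoted above:

```python
# Sanity check on the ">2 PFlops aggregate" claim using only the
# peak figures quoted on this slide (in TFlops).
flagships = {"Ranger (TACC)": 579, "Kraken (NICS/UTK)": 1030}
subtotal_pflops = sum(flagships.values()) / 1000.0
print(f"Flagship subtotal: {subtotal_pflops:.2f} PFlops")  # ~1.61
# The remaining RP systems supply the rest, pushing the total past 2.
```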

Slide 7: Resources Evolving
Recent and anticipated resources
– Track 2D awards: Dash/Gordon (SDSC), Keeneland (GaTech), FutureGrid (Indiana)
– XD Visualization and Data Analysis resources: Spur (TACC), Nautilus (UTK)
– "NSF DCL"-funded resources: PSC, NICS/UTK, TACC, SDSC
– Other: Ember (NCSA)
Continuing resources
– Ranger, Kraken
Retiring resources
– most other resources in TeraGrid today will retire in 2011
Attend the BoFs for more on this:
– New Compute Systems in the TeraGrid Pipeline (Part 1), Tuesday, 5:30-7:00pm in Woodlawn I
– New Compute Systems in the TeraGrid Pipeline (Part 2), Wednesday, 5:15-6:45pm in Stoops Ferry

Slide 8: Impacting Many Agencies (CY2008 data)
[Two pie charts: supported research funding by agency, and resource usage by agency]

Agency        | Supported Research Funding | Resource Usage
NSF           | 52%                        | 49%
DOE           | 13%                        | 11%
NIH           | 19%                        | 15%
NASA          | 10%                        | 9%
DOD           | 1%                         | 5%
International | 0%                         | 3%
University    | 2%                         | 1%
Other         | 2%                         | 6%
Industry      | 1%                         | 1%

Totals: $91.5M in direct support of funded research; 10B NUs delivered.
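The "NU" (Normalized Unit) totals here and on the next slide are TeraGrid's cross-machine accounting unit: CPU-hours delivered on each resource are scaled by a per-resource, benchmark-derived factor so that usage can be summed across very different machines. Below is a minimal sketch of the idea; the conversion factors are made-up placeholders, not the actual TeraGrid values.

```python
# Illustration of normalized-unit (NU) accounting: CPU-hours on each
# resource are scaled by a benchmark-derived factor so usage can be
# summed across machines. These factors are made-up placeholders,
# NOT actual TeraGrid conversion values.
HYPOTHETICAL_NUS_PER_CPU_HOUR = {
    "ranger": 18.0,   # placeholder
    "kraken": 21.0,   # placeholder
}

def to_nus(resource: str, cpu_hours: float) -> float:
    """Convert local CPU-hours on one resource into NUs."""
    return cpu_hours * HYPOTHETICAL_NUS_PER_CPU_HOUR[resource]

usage = [("ranger", 5_000_000), ("kraken", 12_000_000)]
total_nus = sum(to_nus(r, h) for r, h in usage)
print(f"Total delivered: {total_nus / 1e9:.2f}B NUs")  # 0.34B NUs
```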

Slide 9: Across a Range of Disciplines
>27B NUs delivered in 2009, by discipline:
– Physics: 26%
– Molecular Biosciences: 18%
– Astronomical Sciences: 14%
– Atmospheric Sciences: 8%
– Chemistry: 7%
– Chemical, Thermal Systems: 6%
– Materials Research: 6%
– Advanced Scientific Computing: 6%
– Earth Sciences: 5%
– 19 others: 4%

Slide 10: Ongoing Impact
More than 1,200 projects supported
– 54 examples highlighted in the most recent TG Annual Report: atmospheric sciences, biochemistry and molecular structure/function, biology, biophysics, chemistry, computational epidemiology, environmental biology, earth sciences, materials research, advanced scientific computing, astronomical sciences, computational mathematics, computer and computation research, global atmospheric research, molecular and cellular biosciences, nanoelectronics, neurosciences and pathology, oceanography, physical chemistry
2009 TeraGrid Science and Engineering Highlights
– 16 focused stories
– http://tinyurl.com/TeraGridSciHi2009-pdf
2009 EOT Highlights
– 12 focused stories
– http://tinyurl.com/TeraGridEOT2009-pdf

Slide 11: Continued Growth in TeraGrid
20% increase in the active user community
– over 4,800 active users annually
– in the past year (Sep '09), added the 10,000th new user since the beginning of the project
255% growth in delivered compute resources
– more than 27B NUs delivered in the past year
Over 45 ASTA projects currently in progress
– each quarterly TRAC receives about 15 ASTA requests
New phylogenetics science gateway (CIPRES, www.phylo.org) has more researchers running jobs than all gateways combined had in 2009
– cited in 35+ publications
– 3× the initially projected usage; jobs from institutions in 17 EPSCoR states
Campus Champions Program continues as a very successful and growing outreach activity
– now with 91 Champions, up from ~60 last year
– 50 are here at TG'10!
Very successful student program at the TG'xy conferences
– initiated at TG'09 with ~130 students
– continued at TG'10 with ~100 students

Slide 12: Continued Deployment of New Capabilities
Ongoing deployment of new compute and data resources
– referred to earlier; more information at the BoFs
Completed deployment of advanced scheduling capabilities
– metascheduling, co-scheduling, and on-demand scheduling (a toy sketch of the metascheduling idea follows below)
Expanded deployment of globally accessible file systems
– Data Capacitor provides ~700 TB to most current production TG sites
– a second Lustre-based wide-area file system is in development
TeraGrid joined the InCommon federation
– deployed a prototype service allowing users at 26 of the 171 participating universities to access TeraGrid using their local campus credentials
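Of the scheduling capabilities above, metascheduling is the easiest to illustrate: given a job's requirements, route it to whichever candidate resource promises the shortest wait. The Python sketch below is a toy under that assumption; the resource names, queue-wait estimates, and selection rule are hypothetical simplifications, not TeraGrid's actual metascheduler.

```python
# Toy illustration of metascheduling: route a job to the candidate
# resource with the shortest estimated queue wait. Names and figures
# are hypothetical placeholders, not TeraGrid's actual metascheduler.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    free_cores: int
    est_wait_hours: float  # e.g., from recent queue statistics

def pick_resource(resources, cores_needed):
    """Among resources that can run the job, pick the shortest wait."""
    eligible = [r for r in resources if r.free_cores >= cores_needed]
    if not eligible:
        raise RuntimeError("no resource can satisfy this request")
    return min(eligible, key=lambda r: r.est_wait_hours)

grid = [Resource("siteA", 4096, 3.5),
        Resource("siteB", 512, 0.25),
        Resource("siteC", 16384, 8.0)]
print(pick_resource(grid, 1024).name)  # -> siteA
```

Co-scheduling and on-demand scheduling extend the same idea: instead of picking the one site with the shortest wait, reserve compatible time slots at several sites simultaneously, or start a job immediately for workloads that cannot queue.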

Slide 13: TeraGrid: A Project in Transition
Currently in a one-year extension of the project
– start of XD for CMS/AUSS/TEOS deferred for one year (to April 2011)
TeraGrid Extension funded to bridge to the XD program
– 12 months of funding to support most GIG functions and some non-Track 2 RP resources
– still some uncertainty in the sequence/timing of events
All activity areas have effort reserved for the TeraGrid → XD transition as appropriate
– planned period for the transition: April-July 2011
– transition issues exist for nearly all areas
Continued support of users during the transition is our highest priority
More information on this tomorrow morning:
– "The Transition from TeraGrid to XD", 8:30am Wednesday in the Grand Station room

Slide 14: Questions?

