
1 An Introduction to the TeraGrid. Jeffrey P. Gardner, Pittsburgh Supercomputing Center

2 The National Science Foundation TeraGrid: the world's largest collection of supercomputers

3 Pittsburgh Supercomputing Center
- Founded in 1986
- A joint venture between Carnegie Mellon University, the University of Pittsburgh, and Westinghouse Electric Co.
- Funded by several federal agencies as well as private industry; the main source of support is the National Science Foundation

4 Pittsburgh Supercomputing Center
- PSC is the third-largest NSF-sponsored supercomputing center,
- BUT we provide over 60% of the computer time used by NSF research,
- AND PSC most recently had the most powerful supercomputer in the world (for unclassified research).

5 The Terascale Computing System (TCS) at the Pittsburgh Supercomputing Center
- SCALE: 3,000 processors
- SIZE: 1 basketball court
- COMPUTING POWER: 6 TeraFlops (6 trillion floating-point operations per second); it will do in 3 hours what a PC will do in a year (a quick sanity check follows below)
- Upon entering production in October 2001, the TCS was the most powerful computer in the world for unclassified research
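A quick sanity check of the "3 hours vs. a year" claim, as a minimal Python sketch. The ~2 GFlops sustained figure for a circa-2001 PC is an assumption, not something stated on the slide:

    # Rough check: TCS (6 TFlops) running for 3 hours vs. a typical PC running for a year.
    tcs_flops = 6e12                      # 6 TeraFlops (from the slide)
    pc_flops = 2e9                        # assumed ~2 GFlops sustained for a circa-2001 PC

    tcs_ops = tcs_flops * 3 * 3600        # operations completed in 3 hours
    pc_ops = pc_flops * 365 * 24 * 3600   # operations completed in 1 year

    print(f"TCS, 3 hours: {tcs_ops:.1e} operations")   # ~6.5e16
    print(f"PC, 1 year:   {pc_ops:.1e} operations")    # ~6.3e16

Both totals land around 6 x 10^16 operations, so the slide's comparison is roughly right under that assumption.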

6 The Terascale Computing System (TCS) at the Pittsburgh Supercomputing Center
- HEAT GENERATED: 2.5 million BTUs per hour (equivalent to burning 169 lbs of coal per hour; a quick check of these equivalences follows below)
- AIR CONDITIONING: 900 gallons of water per minute (equivalent to 375 room air conditioners)
- BOOT TIME: ~3 hours
- Upon entering production in October 2001, the TCS was the most powerful computer in the world for unclassified research
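The heat and cooling equivalences on this slide can be checked with simple arithmetic; a hedged sketch follows (the implied per-pound and per-unit figures are derived from the slide's own numbers, not stated in it):

    # Check the implied conversion factors behind the TCS heat-load equivalences.
    heat_btu_per_hr = 2.5e6                         # 2.5 million BTUs per hour (from the slide)

    coal_lbs_per_hr = 169                           # slide: 169 lbs of coal per hour
    print(heat_btu_per_hr / coal_lbs_per_hr)        # ~14,800 BTU per lb, plausible for hard coal

    room_ac_units = 375                             # slide: 375 room air conditioners
    print(heat_btu_per_hr / room_ac_units)          # ~6,700 BTU/hr each, a typical small window unit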

7 Pittsburgh Supercomputing Center

8 NCSA: National Center for Supercomputing Applications
The TeraGrid cluster Mercury at NCSA:
- SCALE: 1,774 processors
- ARCHITECTURE: Intel Itanium 2
- COMPUTING POWER: 10 TeraFlops

9 TACC: Texas Advanced Computing Center
The TeraGrid cluster LoneStar at TACC:
- SCALE: 1,024 processors
- ARCHITECTURE: Intel Xeon
- COMPUTING POWER: 6 TeraFlops

10 Before the TeraGrid: Supercomputing the Old-Fashioned Way
- Each supercomputer center was its own independent entity.
- Users applied for time at a specific supercomputer center.
- Each center supplied its own: compute resources, archival resources, accounting, and user support.

11 The TeraGrid Strategy
Create a unified national HPC infrastructure that is both heterogeneous and extensible.
Creating a unified user environment...
- Single user support resources
- Single authentication point
- Common software functionality
- Common job management infrastructure
- Globally accessible data storage
...across heterogeneous resources:
- 7+ computing architectures
- 5+ visualization resources
- diverse storage technologies

12 The TeraGrid Strategy
- A major paradigm shift for HPC resource providers
- Make NSF resources useful to a wider community
- Strength through uniformity! Strength through diversity!
- TeraGrid Resource Partners

13 TeraGrid Components
Compute hardware:
- Intel/Linux clusters
- Alpha SMP clusters
- IBM POWER3 and POWER4 clusters
- SGI Altix SMPs
- Sun visualization systems
- Cray XT3 (PSC, July 20)
- IBM Blue Gene/L (SDSC, Oct 1)

14 TeraGrid Components
- Large-scale storage systems: hundreds of terabytes for secondary storage
- Very high-speed network backbone (40 Gb/s): bandwidth for rich interaction and tight coupling (a rough transfer-time estimate follows below)
- Grid middleware: Globus, data management, ...
- Next-generation applications
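To put the 40 Gb/s backbone in perspective, here is a rough transfer-time estimate; the 1 TB dataset size is an arbitrary example, and the calculation ignores protocol overhead:

    # Time to move a 1 TB dataset across the TeraGrid backbone at full line rate.
    dataset_bits = 1e12 * 8                 # 1 TB expressed in bits
    backbone_bps = 40e9                     # 40 Gb/s backbone
    site_link_bps = 10e9                    # a typical 10 Gb/s site connection

    print(dataset_bits / backbone_bps)      # ~200 seconds on the 40 Gb/s backbone
    print(dataset_bits / site_link_bps)     # ~800 seconds over a single 10 Gb/s site link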

15 Building a System of Unprecedented Scale
- 40+ teraflops of compute
- 1+ petabyte of online storage
- 10-40 Gb/s networking

16 TeraGrid Resources (by site)
- ANL/UC: Itanium2 (0.5 TF), IA-32 (0.5 TF); 20 TB online storage; 30 Gb/s to the Chicago hub
- Caltech CACR: Itanium2 (0.8 TF); 155 TB online storage; 1.2 PB mass storage; 30 Gb/s to the LA hub
- IU: Itanium2 (0.2 TF), IA-32 (2.0 TF); 32 TB online storage; 10 Gb/s to the Chicago hub
- NCSA: Itanium2 (10 TF), SGI SMP (6.5 TF), IA-32 (0.3 TF); 600 TB online storage; 3 PB mass storage; 30 Gb/s to the Chicago hub
- ORNL: 1 TB online storage; 10 Gb/s to the Atlanta hub
- PSC: XT3 (10 TF), TCS (6 TF), Marvel (0.3 TF); 150 TB online storage; 2.4 PB mass storage; 30 Gb/s to the Chicago hub
- Purdue: heterogeneous cluster (1.7 TF); 10 Gb/s to the Chicago hub
- SDSC: Itanium2 (4.4 TF), Power4 (1.1 TF); 540 TB online storage; 6 PB mass storage; 30 Gb/s to the LA hub
- TACC: IA-32 (6.3 TF), Sun visualization system; 50 TB online storage; 2 PB mass storage; 10 Gb/s to the Chicago hub
- Selected sites also host data collections, visualization resources, and scientific instruments.

17 Grid-Like Usage Scenarios Currently Enabled by the TeraGrid
- Traditional massively parallel jobs: tightly coupled interprocessor communication, storing vast amounts of data remotely, remote visualization
- Thousands of independent jobs: automatically scheduled amongst many TeraGrid machines, using data from a distributed data collection (a conceptual sketch follows below)
- Multi-site parallel jobs: computing on many TeraGrid sites simultaneously
- TeraGrid is working to enable more!
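As a purely conceptual illustration of the "thousands of independent jobs" scenario (this is not the actual TeraGrid middleware; the site names and the round-robin policy are invented for the sketch):

    # Conceptual sketch: spread many independent jobs across several TeraGrid machines.
    # Real submissions go through the grid middleware (e.g. Globus); this only shows the idea.
    from itertools import cycle

    sites = ["ncsa-mercury", "tacc-lonestar", "psc-tcs", "sdsc-cluster"]   # illustrative names
    jobs = [f"analyze chunk {i}" for i in range(1000)]                     # independent work items

    assignments = {site: [] for site in sites}
    for site, job in zip(cycle(sites), jobs):
        assignments[site].append(job)

    for site, queued in assignments.items():
        print(f"{site}: {len(queued)} jobs")     # 250 jobs per site with 4 sites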

18 Allocations Policies
- Any US researcher can request an allocation.
- Policies/procedures posted at:
- Online proposal submission: https://pops-submit.paci.org/

19 Allocations Policies
Different levels of review for different allocation sizes:
- DAC (Development Allocation Committee): up to 30,000 Service Units (SUs; 1 SU is roughly 1 CPU-hour, see the worked example below); only a one-paragraph abstract required; must focus on developing an MRAC or NRAC application; accepted continuously!
- MRAC (Medium Resource Allocation Committee): under 200,000 SUs/year; reviewed every 3 months; next deadline July 15, 2005 (then October 21)
- NRAC (National Resource Allocation Committee): over 200,000 SUs/year; reviewed every 6 months; next deadline July 15, 2005 (then January 2006)
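A small worked example of how Service Units add up and which review level a request would fall under, using the thresholds on this slide (the job sizes are invented for illustration):

    # Service Unit arithmetic: 1 SU is roughly 1 CPU-hour.
    # Invented example: 200 runs, each using 128 processors for 8 wall-clock hours.
    runs, procs, hours = 200, 128, 8
    sus_needed = runs * procs * hours
    print(sus_needed)                       # 204,800 SUs

    if sus_needed <= 30_000:
        print("A DAC (development) allocation would be enough")
    elif sus_needed < 200_000:
        print("Apply through MRAC (medium) review")
    else:
        print("Apply through NRAC (national) review")   # this example lands here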

20 Accounts and Account Management
- Once a project is approved, the PI can add any number of users by filling out a simple online form.
- User account creation usually takes 2-3 weeks.
- TG accounts are created on ALL TG systems for every user: a single US mail packet arrives for each user, and accounts and usage are synched through a centralized database.

21 Roaming and Specific Allocations
- R-Type (roaming) allocations can be used on any TG resource; usage is debited to a single (global) allocation maintained in a central database (an illustrative sketch follows below).
- S-Type (specific) allocations can only be used on the specified resource. (All S-only awards come with 30,000 roaming SUs to encourage roaming usage of the TG.)
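A minimal, purely illustrative sketch of the roaming vs. specific accounting model described above; the data structures and function are invented here, while the real TG accounting lives in a centralized database:

    # Illustrative only: debit a job's SUs against a specific (per-site) award if one exists,
    # otherwise against the roaming (global) allocation.
    allocations = {
        "roaming": 30_000,                        # R-type SUs, usable on any TG resource
        "specific": {"psc-tcs": 100_000},         # S-type SUs, tied to one named resource
    }

    def debit(site: str, sus: int) -> None:
        if site in allocations["specific"]:
            allocations["specific"][site] -= sus
        else:
            allocations["roaming"] -= sus

    debit("psc-tcs", 5_000)      # charged against the PSC-specific award
    debit("ncsa-mercury", 250)   # no specific award there, so charged to roaming SUs
    print(allocations)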

22 Useful Links
- TeraGrid website
- Policies/procedures posted at:
- TeraGrid user information overview
- Summary of TG resources
- Summary of machines, with links to site-specific user guides (just click on the name of each site)

