SURA Regional HPC Grid Proposal
Ed Seidel, LSU
With Barbara Kucera, Sara Graves, Henry Neeman, Otis Brown, and others

Basic Plan
Strengthen SURAgrid to create the leading regional HPC environment
–Deploy numerous supercomputers across the region
–Leverage regional investments in optical networks (SURA, NLR, RONs)
–1 Gbit to many sites makes regional and national integration possible as never before (a back-of-the-envelope transfer estimate follows this slide)
–Coordinate deployment and operations
Major impact across the region
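The network claim above can be checked with a quick back-of-the-envelope calculation; the 1 TB dataset size below is an assumed figure for illustration, not a number from the proposal.

```python
# Rough transfer-time estimate for a 1 Gbit/s regional link (dataset size assumed).
dataset_bits = 1e12 * 8          # a 1 TB dataset expressed in bits
link_rate_bps = 1e9              # 1 Gbit/s
seconds = dataset_bits / link_rate_bps
print(f"{seconds / 3600:.1f} hours at full line rate")   # ~2.2 hours
```

At roughly two hours per terabyte, routine data sharing and load balancing between sites becomes practical rather than exceptional.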

Operational Plan
Tight integration of HPC systems
–Globally shared file system
–Common base software stack
–Metascheduling
Machines respond to both local and regional needs (a toy policy sketch follows this slide)
–Majority of cycles locally controlled
–Some fraction available for regional use, coordinated training, and preparation of codes to run at national centers
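A minimal sketch of the cycle-sharing policy on this slide, assuming a hypothetical 20% regional share and made-up site names and sizes; it is illustrative only and not tied to any real SURAgrid scheduler or API.

```python
# Toy sketch of the cycle-sharing policy described above: each site keeps the
# majority of its cycles under local control and reserves a fraction (20% here,
# an assumed figure) for regional SURAgrid jobs. A metascheduler routes a
# regional request to the site with the most regional headroom. Site names,
# CPU counts, and the share figure are hypothetical.

from dataclasses import dataclass

@dataclass
class Site:
    name: str
    total_cpus: int        # CPUs in the machine
    regional_share: float  # fraction of cycles reserved for regional use
    regional_in_use: int   # CPUs currently running regional jobs

    def regional_headroom(self):
        """CPUs still free for regional jobs without touching the local share."""
        return int(self.total_cpus * self.regional_share) - self.regional_in_use

def pick_site(sites, cpus_needed):
    """Pick the site with the most regional headroom that can still fit the job."""
    candidates = [s for s in sites if s.regional_headroom() >= cpus_needed]
    return max(candidates, key=lambda s: s.regional_headroom(), default=None)

if __name__ == "__main__":
    sites = [
        Site("lsu-power5", 512, 0.20, 40),   # hypothetical Power5 cluster
        Site("uky-cluster", 256, 0.20, 10),
    ]
    chosen = pick_site(sites, cpus_needed=32)
    print(chosen.name if chosen else "no regional capacity available")
```

The same headroom calculation also shows where locally controlled cycles end and the regional allocation begins, which is the boundary the metascheduler must respect.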

Primary Advantages
HPC resource sharing, load balancing
Regional (SURA-sponsored) and national training
Compatibility with national HPC centers
–SURA is underrepresented among:
–Existing centers (NCSA, SDSC, NERSC, TACC, etc.)
–Future centers: LSU proposal, many others
Specific projects
–SCOOP, LEAD, Dynacode
–Event-driven computing (see the sketch after this slide)
–Other projects much easier to develop with regional HPC support
IBM partnership
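As a rough illustration of the event-driven computing item above (e.g., a SCOOP-style storm-surge scenario), the sketch below polls a hypothetical alert file and submits a pre-staged Condor job when an event appears. The file path, submit file name, and polling interval are all assumptions; a real deployment would use whatever submission mechanism the sites provide (Condor-G, Globus GRAM, SPRUCE urgent-computing tokens, etc.).

```python
# Hedged sketch of the event-driven computing pattern: when an external alert
# arrives (here, a non-empty alert file at an assumed location), a high-priority
# forecast run is submitted to the regional grid via a pre-staged Condor submit file.

import time
import subprocess

ALERT_FILE = "/var/spool/scoop/storm_alert"   # assumed alert drop location

def alert_pending():
    """Check the (hypothetical) alert drop point for a new event."""
    try:
        with open(ALERT_FILE) as f:
            return f.read().strip() != ""
    except FileNotFoundError:
        return False

def submit_forecast_job():
    """Submit the pre-staged forecast job; the submit file is assumed to exist."""
    subprocess.run(["condor_submit", "surge_forecast.sub"], check=True)

if __name__ == "__main__":
    while True:               # poll for events every 5 minutes
        if alert_pending():
            submit_forecast_job()
            break
        time.sleep(300)
```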

Software Deployment
Open source (a stack-check sketch follows this slide)
–Linux
–Globus, Condor, Cactus, SAGA, MPICH, etc.
–Eclipse
–SPRUCE, TeraGrid CTSS
IBM
–AIX
–GPFS-WAN
–HPC cluster software
–ESSL
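A minimal sketch of how a site might sanity-check the common base software stack before joining the grid. The command names chosen here (globus-url-copy, condor_q, mpicc) are representative of the open-source packages listed above; Cactus and SAGA are built per-application and are not covered by a simple binary check, so the list is illustrative rather than complete.

```python
# Check that representative commands from the common base software stack are on
# the PATH of a node. The mapping below is an illustrative subset, not the full
# stack described on this slide.

import shutil

REQUIRED_COMMANDS = {
    "Globus (GridFTP client)": "globus-url-copy",
    "Condor (queue tool)": "condor_q",
    "MPICH (compiler wrapper)": "mpicc",
}

def check_stack():
    ok = True
    for package, command in REQUIRED_COMMANDS.items():
        path = shutil.which(command)
        print(f"{package:28s} {command:18s} {path or 'MISSING'}")
        ok = ok and path is not None
    return ok

if __name__ == "__main__":
    raise SystemExit(0 if check_stack() else 1)
```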

IBM Partnership
Hardware
–Power5, Power6: very responsive
Software
–Metascheduling, load balancing, migration of LPARs, MPI jobs
–Development environment: Eclipse, Cactus, ESSL, Portals
Usage scenarios
–Event-driven, DDDAS
Other HPC systems and software welcome and encouraged
–TeraGrid model applies: all vendors connected

Financials
Very unusual value from a major vendor
–Price down to commodity levels
–$1.2M system for $350K, including 3 years of maintenance (roughly $112K)
SURA contribution likely if strong regional support is seen
–Both hardware and personnel support possible
Some sites willing to help administer
–LSU, others

Participating Groups
Expecting to participate: Kentucky, LSU, Oklahoma, UAH, Miami
Considering participation: TAMU, Houston, TACC, RENCI
Others