OAK RIDGE NATIONAL LABORATORY / U.S. DEPARTMENT OF ENERGY
Center for Computational Sciences

Slide 1: State of the CCS
SOS 8, April 13, 2004
James B. White III (Trey), virtual Buddy Bland

Slide 2: State of the CCS
 CCS as a user facility
 CCS as a DOE Advanced Computing Research Testbed (ACRT)
 Future plans

Slide 3: Facilities
 Computer facility
   40,000 ft² over two floors
   36" raised floor (lower floor)
   8 MW power, 3600 tons cooling
 Office space for 450
 Classrooms and training areas
 Labs for visualization, computer science, and networking
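A rough consistency check on the facility numbers above, a sketch not taken from the slides: using the standard conversion of one ton of refrigeration to about 3.517 kW of heat removal, 3600 tons of cooling comfortably covers the 8 MW of delivered power.

```python
# Sanity check on the facility figures (conversion factor is a standard
# engineering constant, not from the slides).
TON_TO_KW = 3.517      # kW of heat removal per ton of refrigeration
power_mw = 8.0         # delivered power, from the slide
cooling_tons = 3600    # cooling capacity, from the slide

cooling_mw = cooling_tons * TON_TO_KW / 1000.0
print(f"cooling {cooling_mw:.1f} MW vs. power {power_mw} MW")
# -> cooling 12.7 MW vs. power 8.0 MW: ample headroom
```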

Slide 4: User Facility
 CCS designated by DOE as a user facility
 Supports users from academia and industry
 Pursuing agreements with Boeing and Dow Chemical

Slide 5: User Community
 70% of usage is from users outside of ORNL
 Users come from all around the country

Slide 6: FY03 Usage by Discipline
(chart not preserved in the transcript)

Slide 7: CCS Usage Model
 Small number of large projects
 CCS supports liaisons for large projects
 Center can be dedicated to a single task of national importance
   Human genome
   HFIR restart
   IPCC

Slide 8: Advanced Computing Research Testbed
 ACRT examines promising new computer architectures for the DOE Office of Science (SC)
 Determines usability for SC applications
 Works with vendors to improve systems
 Application-based evaluations

Slide 9: Past Evaluations
 KSR
 Intel iPSC systems
 Intel Paragon MP XP/S
 Intel Paragon XP/S
 Compaq AlphaServer SC (2000)
 SRC Prototype (1999)
 GSN Switch (2000)
 IBM Winterhawk and Nighthawk (1999)
 IBM SP
 IBM Power4 and Federation

Slide 10: Current Evaluations
 Cray X1 - scalable vector
 SGI Altix - large shared memory
 IBM Federation Cluster - interconnect

Slide 11: Cray X1
 World's largest X1
 8 cabinets
 256 MSPs, 3.2 TF
 1 TB memory
 32 TB local disk
 Cabinets half populated to test topology and facilitate upgrade
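The quoted peak follows directly from the MSP count, assuming the usual X1 figure of 12.8 GF peak per MSP (800 MHz, 4 SSPs, 4 flops per cycle); only the 256 MSPs and the 3.2 TF total come from the slide.

```python
# Peak-performance check for the Cray X1 slide.
msps = 256                 # from the slide
gf_per_msp = 12.8          # assumed X1 peak per MSP, not stated on the slide

peak_tf = msps * gf_per_msp / 1000.0
print(f"peak: {peak_tf:.2f} TF")   # -> 3.28 TF, quoted as 3.2 TF
```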

Slide 12: SGI Altix
 Large memory, single-system image
 256 Itanium2 processors
   1.5 GHz, 6 GF, 6 MB cache
 1.5 TF peak
 2 TB shared memory (NUMA)
 Targeting biology apps and data analysis
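The per-processor and aggregate figures on this slide are self-consistent: a 1.5 GHz Itanium2 retiring 4 floating-point ops per cycle (two FMA units, an assumption not stated on the slide) gives the quoted 6 GF, and 256 such processors give roughly the quoted 1.5 TF.

```python
# Consistency check for the Altix slide.
ghz = 1.5                  # from the slide
flops_per_cycle = 4        # assumed: 2 FMA units x 2 flops each
cpus = 256                 # from the slide
mem_tb = 2                 # shared memory, from the slide

gf_per_cpu = ghz * flops_per_cycle            # 6.0 GF, as quoted
peak_tf = cpus * gf_per_cpu / 1000.0          # 1.536 TF, quoted as 1.5 TF
gb_per_cpu = mem_tb * 1024 / cpus             # 8 GB of shared memory per CPU
print(gf_per_cpu, peak_tf, gb_per_cpu)
```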

Slide 13: IBM Federation Cluster
 27 p690s with Power4 processors
   864 processors total
 8 p655s with Power4 processors
   Login and GPFS nodes
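A quick check of the node count against the processor total: 864 processors across 27 p690s implies 32-way nodes, which matches the maximum p690 configuration (the 32-way figure is an inference, not stated on the slide).

```python
# Node-size check for the Federation cluster slide.
p690_nodes = 27            # from the slide
total_cpus = 864           # from the slide

cpus_per_node = total_cpus // p690_nodes
print(f"{cpus_per_node} Power4 processors per p690")   # -> 32
```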

Slide 14: Evaluation Plans
 Cray X series
   Upgrade X1 to 512 MSPs
   Upgrade to 1024 X1E MSPs
   Black Widow
 Red Storm
   10.5 TF in 2004
   21 TF in 2005
 Blue Gene at Argonne
 Cray XD1 (OctigaBay)
 SRC FPGA systems
 IBM Power5
 SGI Altix (larger images)
 ADIC StorNext
 Lustre

Slide 15: Questions?
James B. White III (Trey)