
1 Overview of High Performance Computing at KFUPM. Khawar Saeed Khan, ITC, KFUPM

2 Agenda ► KFUPM HPC Cluster Details ► Brief look at RHEL and Windows 2008 HPC Environments ► Dual Boot Configuration ► Job Scheduling ► Current and Soon to be available Software ► Expectations from users

3 Why Cluster Computing and Supercomputing? ► Some Problems Are Larger Than a Single Computer Can Handle ► Memory Space (>> 4-8 GB) ► Computation Cost ► More Iterations and Large Data Sets ► Data Sources (Sensor Processing) ► National Pride ► Technology Migrates to Consumers

4 How Fast Are Supercomputers? ► The Top Machines Can Perform Tens of Trillions of Floating-Point Operations per Second (TeraFLOPS) ► They Can Store Trillions of Data Items in RAM! ► Example: 1 km grid over the USA ► 4000 x 2000 x 100 = 800 million grid points ► If each point has 10 values, and each value takes 10 ops to compute => 80 billion ops per iteration ► If we want 1-hour timesteps for 10 years, that is 87,600 iterations ► More than 7 peta-ops total!
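To make the slide's back-of-the-envelope numbers easy to check, here is a minimal C sketch that reproduces the arithmetic; every constant comes from the bullet points above, nothing is measured.

```c
/* Minimal sketch reproducing the slide's estimate: a 1 km grid over the USA,
   10 values per point, 10 ops per value, hourly timesteps for 10 years.
   All constants are taken from the slide above. */
#include <stdio.h>

int main(void) {
    double grid_points   = 4000.0 * 2000.0 * 100.0;     /* 800 million points  */
    double ops_per_point = 10.0 * 10.0;                  /* 10 values x 10 ops  */
    double ops_per_iter  = grid_points * ops_per_point;  /* 80 billion ops      */
    double iterations    = 10.0 * 365.0 * 24.0;          /* 87,600 hourly steps */
    double total_ops     = ops_per_iter * iterations;    /* > 7e15 (7 peta-ops) */

    printf("ops per iteration: %.2e\n", ops_per_iter);
    printf("total ops        : %.2e\n", total_ops);
    return 0;
}
```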

5 Lies, Damn Lies, and Statistics ► Manufacturers Claim Ideal Performance ► 2 FP Units @ 3 GHz => 6 GFLOPS ► Dependences Mean We Won't Get That Much! ► How Do We Know Real Performance? ► Top500.org Uses High-Performance LINPACK ► http://www.netlib.org/benchmark/hpl ► Solves a Dense Set of Linear Equations ► Much Communication and Parallelism ► Not Necessarily Reflective of Target Apps
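As a concrete illustration of the gap the slide warns about, the sketch below computes the vendor-style peak from the slide's own example figures (2 FP units at 3 GHz); the measured number for a real machine comes from running HPL from the URL above, not from this formula.

```c
/* Vendor-style "ideal" peak, using the example figures from the slide.
   Real applications fall short of this because of data dependences,
   memory stalls, and communication overhead. */
#include <stdio.h>

int main(void) {
    double fp_units_per_cycle = 2.0;   /* FP operations issued per clock cycle */
    double clock_ghz          = 3.0;   /* clock rate in GHz                    */
    printf("claimed peak: %.1f GFLOPS per core\n",
           fp_units_per_cycle * clock_ghz);   /* 2 x 3 GHz = 6 GFLOPS */
    return 0;
}
```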

6 HPC in Academic Institutions ► HPC cluster resources are no longer a research topic but a core part of the research infrastructure. ► Researchers are using HPC clusters and are dependent on them ► Increased competitiveness ► Faster time to research ► Prestige, to attract talent and grants ► Cost-effective infrastructure spending

7 Top Universities Using HPC Clusters ► National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, United States ► Texas Advanced Computing Center, University of Texas at Austin, United States ► National Institute for Computational Sciences, University of Tennessee, United States ► Information Technology Center, The University of Tokyo, Japan ► New York Center for Computational Sciences, Stony Brook University/BNL, United States ► GSIC Center, Tokyo Institute of Technology, Japan ► University of Southampton, UK ► University of Cambridge, UK ► Oklahoma State University, US

8 Top Research Institutes Using HPC Clusters ► DOE/NNSA/LANL, United States ► Oak Ridge National Laboratory, United States ► NASA/Ames Research Center/NAS, United States ► Argonne National Laboratory, United States ► NERSC/LBNL, United States ► NNSA/Sandia National Laboratories, United States ► Shanghai Supercomputer Center, China

9 KFUPM HPC Environment

10 HPC @ KFUPM ► Planning & survey started in early 2008 ► Procured in October 2008 ► Cluster installation and testing during Nov-Dec-Jan ► Applications such as Gaussian with Linda, DL_POLY, and ANSYS were tested on the cluster setup ► Test problems were provided by professors of the Chemistry, Physics, and Mechanical Engineering departments ► More applications, e.g. GAMESS-UK, will be installed on the cluster shortly

11 KFUPM Cluster Hardware ► IBM HPC Cluster 1350: 128 nodes, 1024 cores ► Master nodes: 3x quad-core Xeon E5405, 8 GB RAM, 2x 500 GB HD (mirrored) ► Compute nodes: 128 nodes (IBM 3550, rack mounted); each node has two quad-core Xeon E5405 processors (2 GHz) and 8 GB RAM; 64 TB total local storage ► Interconnect: 10 Gigabit Ethernet; uplink 1000Base-T Gigabit ► Operating systems for compute nodes (dual boot): Windows HPC Server 2008 and Red Hat Enterprise Linux 5.2
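For a rough sense of scale, the sketch below estimates the cluster's theoretical peak from the node count, cores per node, and clock rate listed above. The 4 FLOPs per core per cycle figure is an assumption typical of that processor generation, not a number stated in the presentation, so treat the result as an estimate only.

```c
/* Back-of-the-envelope theoretical peak for the cluster described above.
   Node count, cores per node, and clock rate come from the slide; the
   FLOPs-per-cycle figure is an ASSUMPTION, not stated in the presentation. */
#include <stdio.h>

int main(void) {
    double nodes           = 128.0;
    double cores_per_node  = 8.0;    /* 2 sockets x quad-core Xeon E5405 */
    double clock_ghz       = 2.0;
    double flops_per_cycle = 4.0;    /* assumed double-precision rate    */

    double peak_gflops = nodes * cores_per_node * clock_ghz * flops_per_cycle;
    printf("theoretical peak: %.0f GFLOPS (about %.1f TFLOPS)\n",
           peak_gflops, peak_gflops / 1000.0);  /* roughly 8 TFLOPS */
    return 0;
}
```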

12 Dual Boot Clusters ► Choosing the right operating system for an HPC cluster can be a very difficult decision. ► This choice usually has a big impact on the total cost of ownership (TCO) of the cluster. ► Parameters such as the needs of multiple users, application environment requirements, and security policies combine with human factors such as training, maintenance, and support planning; together they determine the risk to the final return on investment (ROI) of the whole HPC infrastructure. ► Dual-boot HPC clusters provide two environments (Linux and Windows in our case) for the price of one.

13 Key Takeaways ► Mixed clusters provide a low barrier to leveraging HPC hardware, software, storage, and other infrastructure investments better: "optimize, flexibility of infrastructure" ► Maximize the utilization of the compute infrastructure by expanding the pool of users accessing the HPC cluster resources: "ease of use and familiarity breeds usage"

14 Possibilities with HPC ► Computational Fluid Dynamics ► Simulation and Modeling ► Seismic Tomography ► Nano Sciences ► Visualization ► Weather Forecasting ► Protein / Compound Synthesis

15 Available Software ► Gaussian with Linda ► ANSYS ► FLUENT ► Distributed MATLAB ► Mathematica ► DL_POLY ► MPICH ► Microsoft MPI SDK ► The following software will also be made available in the near future: Eclipse, GAMESS-UK, GAMESS-US, VASP, and NWChem
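Since MPICH and the Microsoft MPI SDK are listed above, a minimal MPI program is the natural starting point for new users. The sketch below is a generic example; the exact compile and launch commands depend on which environment the nodes are booted into and on the cluster's scheduler setup.

```c
/* Minimal MPI example of the kind MPICH (listed above) compiles and runs.
   Each process reports its rank; how the program is launched depends on
   the scheduler and on whether the Linux or Windows image is booted. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);                  /* start the MPI runtime      */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id          */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes  */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                          /* shut down cleanly          */
    return 0;
}
```

With MPICH on the Linux side this would typically be built with `mpicc hello.c -o hello` and launched with `mpiexec -n <processes> ./hello`, though the KFUPM scheduler setup may wrap these commands.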

16 Initial Results of Beta Testing ► A few applications, such as Gaussian, have been beta-tested, and considerable speed-up in computing times has been reported ► MPI program runs tested on the cluster showed considerable speed-up compared to serial runs on a single server

17 HPC @ KFUPM ► Several Firsts ► Dual-Boot Cluster ► Supports Red Hat Enterprise Linux 5.2 and Windows HPC Server 2008 ► Capability to support a variety of applications ► Parallel Programming Support ► Advanced Job Scheduling options

18 Expectations ► Own the system ► Respect others' jobs ► Assist the ITC HPC team by researching and sending complete installation, software procurement, and licensing requirements ► Help other users by sharing your experience ► Use the vBulletin forum at http://hpc.kfupm.edu.sa/

