
Introduction

Demand for Computational Speed
There is a continual demand for greater computational speed than computer systems can currently deliver. Areas requiring great computational speed include numerical modeling and simulation of scientific and engineering problems, where computations must be completed within a “reasonable” time period.

Grand Challenge Problems
A grand challenge problem is one that cannot be solved in a reasonable amount of time with today’s computers; obviously, an execution time of ten years is always unreasonable. Examples include:
- Modeling large DNA structures
- Global weather forecasting
- Modeling the motion of astronomical bodies (see the sketch below)
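
To get a feel for the scale, consider the last example: a direct N-body simulation evaluates on the order of N^2 pairwise gravitational forces at every time step, so the work grows quadratically with the number of bodies. The following is a minimal serial sketch of that inner loop; the body count, masses, and softening constant are illustrative assumptions, not figures from these slides.

    /* Minimal serial N-body force step: O(N*N) pair interactions.
       All figures (N, masses, softening) are illustrative assumptions. */
    #include <math.h>
    #include <stddef.h>
    #include <stdio.h>

    #define N 1024            /* illustrative body count */
    #define G 6.674e-11       /* gravitational constant */

    typedef struct { double x, y, z, mass; } Body;

    /* Accumulate the gravitational acceleration on each body.
       Every step touches all N*(N-1) ordered pairs, which is why
       very large body counts are intractable on a serial machine. */
    void compute_accelerations(const Body b[N],
                               double ax[N], double ay[N], double az[N]) {
        for (size_t i = 0; i < N; i++) {
            ax[i] = ay[i] = az[i] = 0.0;
            for (size_t j = 0; j < N; j++) {
                if (i == j) continue;
                double dx = b[j].x - b[i].x;
                double dy = b[j].y - b[i].y;
                double dz = b[j].z - b[i].z;
                double r2 = dx*dx + dy*dy + dz*dz + 1e-9; /* softening avoids divide-by-zero */
                double inv_r3 = 1.0 / (r2 * sqrt(r2));
                ax[i] += G * b[j].mass * dx * inv_r3;
                ay[i] += G * b[j].mass * dy * inv_r3;
                az[i] += G * b[j].mass * dz * inv_r3;
            }
        }
    }

    int main(void) {
        static Body b[N];
        static double ax[N], ay[N], az[N];
        for (size_t i = 0; i < N; i++) {
            b[i].x = (double)i; b[i].y = 0.0; b[i].z = 0.0;
            b[i].mass = 1.0e24;  /* arbitrary illustrative mass */
        }
        compute_accelerations(b, ax, ay, az);
        printf("a[0].x = %g\n", ax[0]);
        return 0;
    }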

Global Weather Forecast Example
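
A common back-of-the-envelope version of this example (the figures below follow the well-known illustration in Wilkinson & Allen’s textbook and are assumptions rather than measurements): divide the atmosphere into about 5 x 10^8 cells, assume each cell needs roughly 200 floating-point operations per time step, and compute a 7-day forecast in 1-minute steps. The arithmetic, worked out in C:

    /* Back-of-the-envelope forecast cost; all figures are illustrative. */
    #include <stdio.h>

    int main(void) {
        double cells          = 5e8;            /* ~1-mile cells up to ~10 miles altitude */
        double flops_per_cell = 200.0;          /* work per cell per time step */
        double steps          = 7.0 * 24 * 60;  /* 7-day forecast, 1-minute steps */
        double total_flops    = cells * flops_per_cell * steps;
        double machine_flops  = 1e9;            /* a 1 Gflops machine */
        printf("total work : %.2e flops\n", total_flops);
        printf("serial time: %.2e s (~%.1f days)\n",
               total_flops / machine_flops,
               total_flops / machine_flops / 86400.0);
        return 0;
    }

The total is about 10^15 operations, i.e. roughly 11 days of computing on a 1 Gflops machine to produce a 7-day forecast, which is precisely why such problems demand parallel machines.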

Parallel Computing
Using more than one computer, or a computer with more than one processor, to solve a problem. The chief motive is faster computation: the very simple idea is that n computers operating simultaneously can achieve the result n times faster. In practice it will not be n times faster, for reasons such as the inherently serial portions of the work and the overhead of communication between processors (see the sketch below).
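
A standard way to make “it will not be n times faster” precise is Amdahl’s law: if a fraction f of a program is inherently serial, then n processors give a speedup of at most 1/(f + (1-f)/n). A minimal sketch, with the serial fraction and processor counts chosen purely for illustration:

    /* Amdahl's law: speedup with a serial fraction f on n processors. */
    #include <stdio.h>

    static double amdahl(double f, int n) {
        return 1.0 / (f + (1.0 - f) / n);
    }

    int main(void) {
        int counts[] = {2, 8, 64, 1024};
        double f = 0.05;  /* assume 5% of the work is inherently serial */
        for (int i = 0; i < 4; i++)
            printf("n = %4d -> speedup = %.1f\n", counts[i], amdahl(f, counts[i]));
        /* Even with 1024 processors the speedup is capped near 1/f = 20. */
        return 0;
    }

Even a 5% serial fraction caps the speedup near 1/f = 20, no matter how many processors are added.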

CSI’s HPC Systems
“SALK” is a Cray XE6m with a total of 1,280 processor cores. Salk is reserved for large parallel jobs, particularly those requiring more than 64 cores; the emphasis is on applications in the environmental sciences and astrophysics. Salk is named in honor of Dr. Jonas Salk, developer of the first polio vaccine and a City College alumnus.
“ANDY” is an SGI cluster with 744 processor cores and 96 NVIDIA Fermi GPU accelerators. Andy is for jobs using 64 cores or fewer, for jobs using the Fermi accelerators, and for Gaussian jobs. Andy is named in honor of Dr. Andrew S. Grove, a City College alumnus and one of the founders of the Intel Corporation.
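
As an illustration of the kind of distributed-memory parallel job a machine like Salk is built for, here is a minimal MPI program in C. It is a generic sketch, not code taken from the CSI systems or their documentation; on a cluster it would typically be launched across the requested cores with something like mpirun -n 128 ./sum.

    /* Minimal MPI program: each of the n ranks works on its share
       of a problem, and the partial results are combined at the end. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank sums its slice of 1..1000000; the partial sums
           are combined with a single collective reduction. */
        long long local = 0, total = 0;
        for (long long i = rank + 1; i <= 1000000; i += size)
            local += i;
        MPI_Reduce(&local, &total, 1, MPI_LONG_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum = %lld (computed by %d processes)\n", total, size);
        MPI_Finalize();
        return 0;
    }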

CSI’s HPC Systems
“BOB” is a Dell cluster with 232 processor cores. Bob is for jobs using 64 cores or fewer and for parallel Matlab jobs. Bob is named in honor of Dr. Robert E. Kahn, a City College alumnus who, along with Vinton G. Cerf, invented the TCP/IP protocols.
“KARLE” is a Dell shared-memory system with 24 processor cores. Karle is used for serial jobs, Matlab, SAS, parallel Mathematica, and certain ArcView jobs. Karle is named in honor of Dr. Jerome Karle, a City College alumnus who was awarded the Nobel Prize in Chemistry in 1985.

CSI’s HPC Systems
“Zeus” is a Dell cluster with 88 processor cores; it is used for Gaussian jobs.
“Neptune” serves as a gateway to the HPCC systems above; it is not used for computations.
