Supercomputers
Zhao Lixing

- A supercomputer is a computer at the frontline of current processing capacity, particularly speed of calculation.
- Supercomputers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC).
- Supercomputers using custom CPUs traditionally gained their speed over conventional computers through innovative designs that let them perform many tasks in parallel, as well as careful detail engineering.

- CDC's early machines were simply very fast scalar processors (a scalar processor handles one data item at a time), some ten times the speed of the fastest machines offered by other companies.
- In the 1970s most supercomputers were built around a vector processor (a single instruction operates simultaneously on multiple data items), and many newer players developed their own such processors at lower prices to enter the market.
- In the late 1980s and 1990s, attention turned from vector processors to massively parallel processing systems (computers with many independent arithmetic units or entire microprocessors running in parallel) containing thousands of "ordinary" CPUs, some off-the-shelf units and others custom designs.
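
The scalar-versus-vector distinction can be sketched with a toy model. The sketch below (plain Python, with an assumed vector length of 64 elements, a figure not taken from any particular machine) counts how many "instructions" each style issues for the same element-wise addition:

```python
def scalar_add(a, b):
    """Scalar style: one add instruction per pair of operands."""
    out = []
    for x, y in zip(a, b):
        out.append(x + y)          # one 'instruction' per element
    return out, len(a)             # (result, instruction count)

def vector_add(a, b, vlen=64):
    """Vector style: one instruction per vector-register-sized chunk."""
    out, instructions = [], 0
    for i in range(0, len(a), vlen):
        out.extend(x + y for x, y in zip(a[i:i + vlen], b[i:i + vlen]))
        instructions += 1          # one 'instruction' covers vlen elements
    return out, instructions
```

For 128 elements the scalar model issues 128 instructions while the vector model issues 2, which is the essence of the speed advantage vector machines held in the 1970s.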

- Today, parallel designs are based on off-the-shelf server-class microprocessors such as the PowerPC, Opteron, or Xeon.
- Most modern supercomputers are highly tuned computer clusters that combine commodity processors with custom interconnects.

- The top ten supercomputers on the TOP500 list share the same top-level architecture: each is a cluster of MIMD (Multiple Instruction, Multiple Data) multiprocessors, each processor of which is SIMD (Single Instruction, Multiple Data).
- The machines vary radically in the number of multiprocessors per cluster, the number of processors per multiprocessor, and the number of simultaneous instructions per SIMD processor.
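
Because the levels of this hierarchy multiply, peak concurrency is the product of cluster size, processors per multiprocessor, and SIMD width. A minimal sketch (the example figures are hypothetical, not taken from any TOP500 entry):

```python
def peak_concurrent_ops(multiprocessors, processors_per_mp, simd_width):
    """Operations that can be in flight at once across the whole
    MIMD-of-SIMD hierarchy: each level multiplies the previous one."""
    return multiprocessors * processors_per_mp * simd_width
```

For example, a hypothetical cluster of 1024 multiprocessors, each with 4 processors issuing 4 simultaneous SIMD operations, can have 16,384 operations in flight at once.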

- The base language of supercomputer code is Fortran or C, using special libraries to share data between nodes.
- Environments such as PVM (Parallel Virtual Machine) and MPI (Message Passing Interface) are used for loosely connected clusters, and OpenMP (Open Multi-Processing) for tightly coordinated shared-memory machines.
- Significant effort is required to prevent any of the CPUs from wasting time waiting on data from other nodes.
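
A common node-to-node data-sharing pattern in such codes is the halo (ghost-cell) exchange. The sketch below models it in plain Python, with direct list copies standing in for the MPI_Send/MPI_Recv calls a real C or Fortran code would make:

```python
def halo_exchange(domains):
    """Each 'node' owns a slice of a 1-D grid with one ghost cell at
    each end (indices 0 and -1). Neighbouring nodes exchange their
    boundary values so every node can compute its next step locally."""
    last = len(domains) - 1
    for rank, local in enumerate(domains):
        if rank > 0:      # 'receive' the left neighbour's rightmost interior cell
            local[0] = domains[rank - 1][-2]
        if rank < last:   # 'receive' the right neighbour's leftmost interior cell
            local[-1] = domains[rank + 1][1]
    return domains
```

Minimizing how long each node blocks in exchanges like this is exactly the "waiting on data from other nodes" problem noted above.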

- Supercomputer operating systems today are most often variants of Linux.
- Supercomputers are priced at millions of dollars and sold to a very small market, so the R&D budget for a custom OS was often limited.
- The advent of Unix and Linux allows reuse of conventional desktop software and user interfaces.

- A supercomputer generates large amounts of heat and must be cooled; cooling most supercomputers is a major HVAC (Heating, Ventilation, and Air Conditioning) problem.
- In modern supercomputers built from many conventional CPUs running in parallel, latencies of 1-5 microseconds to send a message between CPUs are typical.
- Supercomputers consume and produce massive amounts of data in a very short period of time, so much work on external storage bandwidth is needed to ensure the information can be transferred quickly and stored/retrieved correctly.
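
The cost of that message latency can be put in perspective with a little arithmetic. Assuming a hypothetical core sustaining 10^9 floating-point operations per second (the rate is an illustrative assumption, not from the slide):

```python
def flops_lost_to_latency(latency_seconds, flops_per_second):
    """Floating-point operations a core could have completed in the
    time it spends blocked waiting for a single message to arrive."""
    return latency_seconds * flops_per_second
```

At the 5-microsecond end of the range, a 1 GFLOPS core forfeits roughly 5,000 operations per message it waits on, which is why overlapping communication with computation matters so much.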

- Supercomputers are used for highly calculation-intensive tasks:
  - quantum mechanical physics
  - weather forecasting
  - climate research
  - molecular modeling (structures and properties of chemical compounds)
  - physical simulations
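
Why such tasks need supercomputers shows up even in a toy physical simulation: direct N-body force evaluation costs O(n^2) work per time step. The sketch below is a deliberately naive 1-D model (unit masses, no softening), an illustration rather than any production code:

```python
def nbody_step(pos, vel, dt=0.01, g=1.0):
    """One explicit-Euler step of a 1-D gravitational N-body system.
    The doubly nested force loop is the O(n^2) hot spot that makes
    realistic particle counts a supercomputer workload."""
    n = len(pos)
    acc = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i != j:
                dx = pos[j] - pos[i]
                acc[i] += g * dx / abs(dx) ** 3   # inverse-square attraction
    vel = [v + a * dt for v, a in zip(vel, acc)]
    pos = [p + v * dt for p, v in zip(pos, vel)]
    return pos, vel
```

At a million particles, one step of this loop already requires on the order of 10^12 force evaluations, which is why production simulations distribute the work across thousands of processors.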

- IBM is developing the Cyclops64 architecture, intended to create a "supercomputer on a chip".
- Meanwhile, IBM is constructing a 20 PFLOPS supercomputer named Sequoia at Lawrence Livermore National Laboratory, which is scheduled to go online in …
- Erik P. DeBenedictis of Sandia National Laboratories theorizes that a zettaflops (10^21 FLOPS) computer is required to accomplish full weather modeling that could cover a two-week time span accurately. Such systems might be built around 2030.
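
The gap DeBenedictis describes is easy to quantify from the figures in this slide (Sequoia's 20 PFLOPS against a 10^21 FLOPS target):

```python
def speedup_needed(target_flops, current_flops):
    """How many times faster machines must become to reach a target rate."""
    return target_flops / current_flops

ZETTAFLOPS = 1e21   # 10^21 FLOPS, the full-weather-modeling target
SEQUOIA = 20e15     # 20 PFLOPS, the Sequoia figure from this slide
```

A zettaflops machine would need to be about 50,000 times faster than Sequoia, which gives a sense of why the estimate lands around 2030.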

Reference:
- Vaughan-Nichols, S.J., "New trends revive supercomputing industry", Computer, vol. 37, no. 2, Feb. 2004.