
1 Who uses a supercomputer anyway? Jim Greer

2 Start of Digital Computer: ENIAC Built in 1943-45 at the Moore School of the University of Pennsylvania for the war effort by John Mauchly and J. Presper Eckert. The Electronic Numerical Integrator And Computer (ENIAC) was one of the first general-purpose electronic digital computers.

3 Programming the Computer Early computers were programmed by wiring cables and flipping switches.

4 MANIAC The MANIAC had a memory of 1K 40-bit words. Multiplication took a millisecond.

5 THE JOURNAL OF CHEMICAL PHYSICS, VOLUME 21, NUMBER 6, JUNE 1953. Equation of State Calculations by Fast Computing Machines. NICHOLAS METROPOLIS, ARIANNA W. ROSENBLUTH, MARSHALL N. ROSENBLUTH, AND AUGUSTA H. TELLER, Los Alamos Scientific Laboratory, Los Alamos, New Mexico, AND EDWARD TELLER, Department of Physics, University of Chicago, Chicago, Illinois (Received March 6, 1953). A general method, suitable for fast computing machines, for investigating such properties as equations of state for substances consisting of interacting individual molecules is described. The method consists of a modified Monte Carlo integration over configuration space. Results for the two-dimensional rigid-sphere system have been obtained on the Los Alamos MANIAC and are presented here. These results are compared to the free volume equation of state and to a four-term virial coefficient expansion. This algorithm by Metropolis et al. from 1953 has been cited as among the top 10 algorithms having the "greatest influence on the development and practice of science and engineering in the 20th century." The machine is long out of date; the methods and scientific approach remain relevant today. It is the basis for simulated annealing.
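The heart of the method is the Metropolis acceptance rule: a trial move that lowers the energy is always accepted, while an uphill move is accepted only with probability exp(-dE/kT). Here is a minimal sketch in C, using a toy one-dimensional harmonic potential rather than the hard-sphere system of the 1953 paper; the potential, step size, and temperature are illustrative assumptions.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy potential; the 1953 paper used hard spheres, this is just a stand-in. */
static double energy(double x) { return x * x; }

static double uniform01(void) { return rand() / (RAND_MAX + 1.0); }

/* One Metropolis step: propose a random move, accept it always if the
 * energy drops, otherwise accept with probability exp(-beta * dE). */
static int metropolis_step(double *x, double step, double beta)
{
    double x_new = *x + step * (2.0 * uniform01() - 1.0);
    double dE = energy(x_new) - energy(*x);
    if (dE <= 0.0 || uniform01() < exp(-beta * dE)) {
        *x = x_new;
        return 1;                /* accepted */
    }
    return 0;                    /* rejected: keep the old configuration */
}

int main(void)
{
    double x = 0.0;
    int accepted = 0, n = 100000;
    for (int i = 0; i < n; i++)
        accepted += metropolis_step(&x, 0.5, 1.0);
    printf("acceptance rate: %.3f\n", (double)accepted / n);
    return 0;
}

Repeating this step generates configurations distributed according to the Boltzmann factor, which is what makes the method useful for equation-of-state calculations; lowering the temperature gradually during the run turns the same rule into simulated annealing.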

6 Year | Computer Name | Power (Watts) | Performance (adds/sec) | Memory (kByte) | Price (US dollars)
1951 | UNIVAC I | 124,500 | 1,900 | 48 | $1,000,000
1964 | IBM S/360 | 10,000 | 500,000 | 64 | $1,000,000
1965 | PDP-8 | 500 | 330,000 | 4 | $16,000
1976 | Cray-1 | 60,000 | 166,000,000 | 32,768 | $4,000,000
1981 | IBM PC | 150 | 240,000 | 256 | $3,000
1991 | HP 9000/750 | 500 | 50,000,000 | 16,384 | $7,400
- | IBM notebook | 20 | 1,000,000,000 | 512,000 | $1,900

7 What do the machines look like nowadays? When HPCx first came into service in 2002 it was one of the top 10 fastest supercomputers in the world. Despite an upgrade in 2004, it has since slipped to 59th place in the Top 500 supercomputers list. At the moment the most potent machine in the world is IBM's Blue Gene/L at the Lawrence Livermore National Laboratory in California, where it is used to ensure that the US nuclear weapons stockpile remains safe and reliable. It is the only machine to have pushed through the 100-teraflop barrier, performing a staggering several hundred trillion calculations per second: 367 teraflops at peak.

8 Seymour Cray - plumber. Beowulf is perhaps the best-known type of parallel-processing cluster today. Donald Becker and Thomas Sterling designed the first Beowulf prototype in 1994 for NASA. It consisted of 16 DX4 processors connected by channel-bonded Ethernet. The next Beowulf clusters were built around 16 Pentium Pro (P6) 200-MHz processors connected by Fast Ethernet adapters and switches.

9 Beowulf is a design for high-performance parallel computing clusters on inexpensive personal computer hardware. Originally developed at NASA, Beowulf systems are now deployed worldwide, chiefly in support of scientific computing. A Beowulf cluster is a group of usually identical PC computers running a Free and Open Source Software (FOSS) Unix-like operating system, such as Linux or BSD. They are networked into a small TCP/IP LAN, and have libraries and programs installed which allow processing to be shared among them. There is no particular piece of software that defines a cluster as a Beowulf. Commonly used parallel processing libraries include MPI (Message Passing Interface) and PVM (Parallel Virtual Machine). Both of these permit the programmer to divide a task among a group of networked computers, and recollect the results of processing. - Wikipedia
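As a concrete, hypothetical example of the divide-and-recollect pattern that MPI supports, the following minimal C program splits a summation across the processes of a cluster and gathers the partial results with MPI_Reduce; the task and the problem size are illustrative choices, not taken from the presentation.

#include <stdio.h>
#include <mpi.h>

/* Minimal MPI sketch: each process sums its own share of 1..N,
 * then MPI_Reduce collects the partial sums on rank 0. */
int main(int argc, char **argv)
{
    int rank, size;
    long long N = 1000000, local = 0, total = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank handles the numbers i with (i - 1) % size == rank. */
    for (long long i = rank + 1; i <= N; i += size)
        local += i;

    MPI_Reduce(&local, &total, 1, MPI_LONG_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of 1..%lld = %lld\n", N, total);

    MPI_Finalize();
    return 0;
}

Compiled with mpicc and launched with mpirun, the same binary runs unchanged on 2 PCs or 200, which is what makes the Beowulf approach attractive.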

10 From PlayStation to Supercomputer for $50,000 New York Times | By JOHN MARKOFF As perhaps the clearest evidence yet of the computing power of sophisticated but inexpensive video-game consoles, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign has assembled a supercomputer from an army of Sony PlayStation 2's. The resulting system, with components purchased at retail prices, cost a little more than $50,000. The center's researchers believe the system may be capable of a half trillion operations a second, well within the definition of supercomputer, although it may not rank among the world's 500 fastest supercomputers.

11 Latency and Bandwidth

Interconnect | Lowest measured latency (smaller is better)
PathScale InfiniPath | 1.31 microseconds
Cray RapidArray | 1.63 microseconds
Quadrics | 4.89 microseconds
NUMAlink | 5.79 microseconds
Myrinet | 19.00 microseconds
Gigabit Ethernet | 42.23 microseconds
Fast Ethernet | microseconds
Source: HPC Challenge, November

Commodity processors: the interconnect makes the supercomputer.
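One-way latencies like those in the table are commonly obtained from a ping-pong test: two processes bounce a small message back and forth many times and halve the average round-trip time. Below is a minimal MPI sketch of such a measurement; the message size, repetition count, and output format are illustrative choices, not the HPC Challenge benchmark itself.

#include <stdio.h>
#include <mpi.h>

/* Ping-pong latency sketch: ranks 0 and 1 exchange a 1-byte message
 * REPS times; half the average round trip approximates the one-way
 * latency of the interconnect. Run with at least two processes,
 * e.g. mpirun -np 2. */
int main(int argc, char **argv)
{
    const int REPS = 10000;
    char byte = 0;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double t0 = MPI_Wtime();
    for (int i = 0; i < REPS; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("one-way latency ~ %.2f microseconds\n",
               (t1 - t0) / (2.0 * REPS) * 1e6);

    MPI_Finalize();
    return 0;
}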

12 TOP 10 Sites for June 2006

Site | System Family
1. DOE/NNSA/LLNL, United States | Blue Gene/L (IBM)
2. IBM Watson, United States | Blue Gene/L (IBM)
3. DOE/NNSA/LLNL, United States | ASC Purple, p-series (IBM)
4. NASA/Ames, United States | Altix (SGI)
5. CEA, France | Tera-10, SMP cluster (Bull)
6. Sandia Nat. Lab., United States | PowerEdge (Dell)
7. Tokyo Inst. Tech., Japan | Sun Fire (NEC/Sun)
8. FZ Juelich, Germany | Blue Gene/L (IBM)
9. Sandia Nat. Lab., United States | XT3 (Cray)
10. Earth Simulator Center, Japan | NEC Vector (NEC)




16 Scientific Computing: Newton's equation (with damping), the diffusion or heat equation, the Fourier transform, and the Schrödinger equation.
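In standard textbook form (the exact notation on the slide may differ), these equations are:

\[ m\ddot{x} + \gamma\,\dot{x} = F(x,t) \]   (Newton's equation with damping)
\[ \frac{\partial u}{\partial t} = D\,\nabla^{2} u \]   (diffusion / heat equation)
\[ \hat{f}(k) = \int_{-\infty}^{\infty} f(x)\, e^{-ikx}\, dx \]   (Fourier transform)
\[ i\hbar\,\frac{\partial \psi}{\partial t} = -\frac{\hbar^{2}}{2m}\,\nabla^{2}\psi + V\psi \]   (Schrödinger equation)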

17 Computational Needs: Biology & Bioinformatics

Problem Component | Computing Speed | Storage
Genome Assembly | >10 TeraFlops sustained to keep up with expected sequencing rates | 300 TB of trace files per genome
Protein Structure Prediction | >100 TeraFlops per protein set in one microbial genome | Petabytes
Classical Molecular Dynamics | 100 TeraFlops per DNA-protein interaction | 10s of Petabytes
First Principles Molecular Dynamics | 1 PetaFlops per reaction in enzyme active site | 100s of Petabytes
Simulations of Biological Networks | >1 TeraFlops for simple correlation analyses of small biological networks | 1000s of Petabytes

18 Grand Challenges Grand Challenges are the leading problems in science and engineering that can be solved only with the help of the fastest, most powerful computers. They address areas of great societal impact, such as biomedicine, the environment, economic competitiveness, and the military.

19 Examples of Grand Challenge problems
Forecasting weather, predicting global climate change, and modeling pollution dispersion
Determining molecular, atomic, and nuclear structures
Understanding the structure of biological macromolecules
Improving communication networks for research and education
Developing and understanding the nature of new materials
Building more energy-efficient cars and airplanes
Understanding how galaxies are formed
Designing new pharmaceuticals

20 Blood flow in the heart, computed from the Navier-Stokes equations. NIH

21 Brain Chemistry: a bilayer sandwich of lipids, with nitrogen (blue) and phosphorus (gold) heads facing outward on both sides of filamentary tails (gray). Patti Reggio, University of North Carolina, Greensboro; Diane Lynch, Kennesaw State University

22 This thin-slice snapshot through the simulation volume, about 3 million light years thick by 4.5 billion light years on each side, shows the filamentary structure of dark-matter clusters. Brightness corresponds to density. Paul Bode and Jeremiah Ostriker of Princeton University

23 The infant universe hatching from its structureless shell. This map represents Edmund Bertschinger's simulations on the CRAY T3D at Pittsburgh. It shows negative (blue) and positive (red) fluctuations in temperature (degrees K). The simulation assumed a mixed hot and cold dark matter model with a 5 eV neutrino mass.

24 Von Mises stress on the F-16 structure (increasing from dark blue to light blue to red) during an aeroelastic simulation.

25 Mach contours and streamlines for an aeroelastic simulation at Mach 0.9.



28 Kelvin K. Droegemeier, University of Oklahoma at Norman.

29 Animation of a simulated earthquake in the San Fernando valley. Color depicts the peak magnitude of ground displacement. The simulation covered a 54 x 33 kilometer area, superimposed here on a satellite view of topography, to a depth of 15 km. Los Angeles can be seen in the southeast corner. Carnegie Mellon University


31 Oxygen and Temperature in Turbine Combustion. These snapshots from a simulation of turbine combustion show how oxygen decreases (previous slide) and temperature increases (this slide) as markers of combustion in the turbine flow. DOE and Westinghouse


33 Adsorption of molecules at SAMs (self-assembled monolayers) studied by theoretical/simulation modelling

34 Parallel computing allows you to do calculations in new ways. [Flow chart: an initial guess feeds parallel streams of matrix generation, matrix diagonalization, and vector screening; the screened vectors are merged and tested for convergence; if not converged, vector expansion starts the next cycle, otherwise stop.]
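As a hedged illustration of this kind of generate / combine / test-convergence cycle (a much simpler relative of the eigensolver on the slide, not the method itself), here is a minimal distributed power iteration in C with MPI: each rank builds and applies its own block of rows of a toy matrix, the partial results are merged with MPI_Allgather, and every rank tests convergence of the dominant eigenvalue. The matrix formula, its size, and the assumption that the number of ranks divides the dimension are all illustrative choices.

#include <stdio.h>
#include <math.h>
#include <mpi.h>

#define N 64                      /* global matrix dimension (toy size) */

/* Distributed power iteration: each rank owns a block of rows of a
 * symmetric test matrix, forms its part of A*v, and the pieces are
 * merged with MPI_Allgather before the convergence test.
 * Assumes the number of ranks divides N (run with 1, 2, 4, ... ranks). */
int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int rows = N / size;
    double v[N], w_local[N], w[N], lambda = 0.0;

    for (int j = 0; j < N; j++) v[j] = 1.0 / sqrt((double)N);  /* initial guess */

    for (int iter = 0; iter < 200; iter++) {
        /* "Matrix generation" + local multiply: A(i,j) = 1 / (1 + |i-j|). */
        for (int i = 0; i < rows; i++) {
            int gi = rank * rows + i;          /* global row index */
            w_local[i] = 0.0;
            for (int j = 0; j < N; j++)
                w_local[i] += v[j] / (1.0 + fabs((double)(gi - j)));
        }

        /* "Vector merge": gather everyone's rows into the full vector. */
        MPI_Allgather(w_local, rows, MPI_DOUBLE, w, rows, MPI_DOUBLE,
                      MPI_COMM_WORLD);

        /* Eigenvalue estimate: for a near-eigenvector of unit length,
         * ||A v|| approaches the dominant eigenvalue. Then normalise. */
        double norm = 0.0;
        for (int j = 0; j < N; j++) norm += w[j] * w[j];
        norm = sqrt(norm);
        for (int j = 0; j < N; j++) v[j] = w[j] / norm;

        /* Convergence test on the dominant eigenvalue. */
        if (fabs(norm - lambda) < 1e-10) break;
        lambda = norm;
    }

    if (rank == 0) printf("largest eigenvalue ~ %.8f\n", lambda);
    MPI_Finalize();
    return 0;
}

Run with, for example, mpirun -np 4; the generation, merge, and convergence stages correspond loosely to the boxes in the flow chart above.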

35 Conclusions
Lotsa uses for supercomputers
Commodity processors, but a lot of 'em
Very fast customised interconnects
Single winning architecture not decided
Power remains a big concern: back to plumbing
