AMS 290B Advanced Topics in the Numerical Solution of PDEs Topic 2008: An Introduction to Parallel Computation and Large CFD Codes Nic Brummell


1 AMS 290B Advanced Topics in the Numerical Solution of PDEs Topic 2008: An Introduction to Parallel Computation and Large CFD Codes Nic Brummell (831) Applied Math & Statistics (AMS) University of California, Santa Cruz

2 Maybe we want to model something complicated …

3 Global simulations: Differential rotation/Dynamo Brun, Brown, Browning, Clune, Elliot, Miesch, Toomre (University of Colorado) Largest global simulations in the world – spherical harmonic degree l ~ O(1000) Problem: resolves from largest scale down => diffuses smallest scale ~ supergranules

4 Local simulations: Convection zone and tachocline

5 Local simulations: Penetrative convection movie Brummell, Clune, Toomre

6 Local simulations: Magnetic transport? Tobias, Brummell, Clune, Toomre

7 Magnetic buoyancy?: 1D shear (grad student: Geoff Vasil) Success? But this was all very laminar. How about a much more turbulent case? Success? Much harder to produce clean rising tubes. The magnetic field feeds back strongly on the shear that manufactures the field.

8 Side view / Top view: 3D simulation (Cattaneo, 1999)

9 3D simulation (Cattaneo, 1999): temperature fluctuations near the surface (top); B at mid-depth (middle)

10 Cost! One simulation costs 60,000 supercomputing hours = 2,500 days = 83 months ≈ 7 years! How come we’ve done so many? Are we that old? NO! (don’t be cheeky) PARALLEL COMPUTERS => 60,000 supercomputing hours can be done in 1 wall-clock hour on 60,000 processors (or 120 wall-clock hours = 5 days on 500 processors)
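The slide's arithmetic assumes ideal strong scaling: wall-clock time = total CPU-hours / processor count. A minimal sketch of that calculation (the `wall_clock_hours` helper is illustrative, not from the course materials):

```python
def wall_clock_hours(cpu_hours: float, n_processors: int) -> float:
    """Ideal wall-clock time under perfect parallel scaling (no overheads)."""
    return cpu_hours / n_processors

# The 60,000 CPU-hour simulation from the slide:
print(wall_clock_hours(60_000, 60_000))  # 1.0 hour on 60,000 processors
print(wall_clock_hours(60_000, 500))     # 120.0 hours = 5 days on 500 processors
```

Real codes never scale perfectly, of course; communication overheads and the resulting departures from this ideal are exactly what the performance-analysis part of the course addresses.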

11 Computing power steadily increases! Moore’s Law: (Gordon E. Moore, Intel co-founder, “Electronics Magazine”, 1965) “Number of transistors that can be placed cheaply on an integrated circuit will double every two years” ~ computing power doubles every two years = exponential rate of increase! Can link almost any computational capability to Moore’s Law: Processing speed Memory capacity Resolution of digital cameras!
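Moore's-law growth is just exponential doubling. A quick sketch (the `transistors` helper and the ~2,300-transistor Intel 4004 baseline from 1971 are illustrative assumptions, not figures from the slide):

```python
def transistors(initial_count: float, years: float,
                doubling_period: float = 2.0) -> float:
    """Project a count that doubles every `doubling_period` years."""
    return initial_count * 2 ** (years / doubling_period)

# Illustration: ~2,300 transistors (Intel 4004, 1971) projected 30 years
# ahead gives ~75 million -- within an order of magnitude of real CPUs
# circa 2001 (tens of millions of transistors).
print(f"{transistors(2_300, 30):,.0f}")
```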

12 Computing power steadily increases! [Chart: number of transistors vs. year for Intel processors (286 through Pentium, Pentium II, Pentium III, Itanium 2, Itanium 2 + cache), against “double every 2 years” and “double every 18 months” trend lines]

13 Computing power steadily increases! However … 1. Exponentially improving hardware does not mean exponentially improving software to go with it! Wirth’s Law: “Software gets slower faster than hardware gets faster” Software must get larger and more complex = slower! 2. Moore’s Law can’t go on forever (even Moore acknowledged this). Technological singularity?? Eventually transistors will become the size of an atom. Krauss & Starkman: 600 years. 3. BUT new technologies may come along and replace integrated circuits? (R. Kurzweil: “The Law of Accelerating Returns”)

14 Computing power steadily increases!

15 Cray-1A vs IBM POWER5+:

Specification         | Cray-1A            | IBM POWER5+                 | Approx ratio
----------------------|--------------------|-----------------------------|-------------
Year installed        |                    |                             |
Architecture          | Vector processor   | Dual cluster of scalar CPUs |
Number of CPUs        | 1                  | ~5,000                      | 5,000:1
Clock speed           | 12.5 nsec (80 MHz) | 0.53 nsec (1.9 GHz)         | 24:1
Peak perf per CPU     | 160 MFLOPS         | 7.6 GFLOPS                  | 48:1
Peak perf per system  | 160 MFLOPS         | ~38 TFLOPS                  | 250,000:1
Sustained performance | ~50 MFLOPS         | ~4 TFLOPS                   | 80,000:1
Memory                | 8 Mbytes           | ~10 Tbytes                  | 1,000,000:1
Disk space            | 2.5 Gbytes         | ~100 Tbytes                 | 40,000:1

16 Computing power steadily increases!

17 Technology over the years: 1942 ENIAC kops

18 Technology over the years: 1947 Whirlwind 3

19 Technology over the years: 1951 Univac 250 kops

20 Technology over the years: IBM 1950s IBM 650 IBM 701 IBM 704

21 Technology over the years: 1964 IBM 360

22 Technology over the years: 1965 CDC Mflops

23 Technology over the years: 1976 Cray Mflops

24 Technology over the years: 1978 VAX

25 Technology over the years: 1982 Cray X-MP 941 Mflops

26 Technology over the years: 1985 TM CM-1 1 Gflops

27 Technology over the years: 1990 NEC SX2/3 23 Gflops

28 Technology over the years: 1993 Cray T3D 1993 also: TM CM-5 ~ 65 Gflops, Intel Paragon ~ 143 Gflops

29 Technology over the years: 2002 NEC ESS. Fastest in the world until 2004. Holistic simulations of global earth climate and oceans down to 10 km resolution. 5,120 processors, 35 Tflops.

30 Technology over the years: 2007 IBM Blue Gene/L. Fastest machine in the world! ~213,000 CPUs; 596 Tflops peak, 478 sustained.

31 Technology over the years: 2007 Cray XT5

32 What is the point of this course? Learn how to harness all this computing power! In particular, learn how to program MASSIVELY PARALLEL MACHINES Elusive mixture of theory and practice … Highly interdisciplinary! These days need: Engineering skills: architecture, hardware, network knowledge Mathematical skills: applied math methods, numerical analysis Computer science skills: algorithm design, software engineering, compilers, operating systems, ftp, … Scientific skills: design of problem, setup of models/equations, analysis of results Artistic skills: visualisation, movies, …

33 The syllabus
PART A: CONCEPTS
- Intro to parallel computing: parallel machine models; parallel programming models
- Designing parallel algorithms: partitioning, communication, agglomeration, mapping; performance analysis
PART B: TOOLS
- MPI
- OpenMP
- Debugging, performance, analysis (IDL, VAPOR)
PART C: CASE STUDIES
- Spectral methods (HPS)
- Finite-volume
PART D: SPECIAL TOPICS IN TURBULENCE
- DNS vs LES/SGS
- AMR
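As a small taste of the partitioning step in Part A, here is a sketch of a 1-D block decomposition, the kind of split used to distribute a grid across MPI ranks (the `block_partition` helper is an illustrative assumption, not code from the course):

```python
def block_partition(n_points: int, n_procs: int, rank: int) -> tuple[int, int]:
    """Return (start index, local count) for one rank of a 1-D block split.

    The first (n_points % n_procs) ranks each take one extra point, so the
    load is balanced to within a single grid point.
    """
    base, extra = divmod(n_points, n_procs)
    count = base + (1 if rank < extra else 0)
    start = rank * base + min(rank, extra)
    return start, count

# Splitting 10 grid points over 4 processes: ranks get 3, 3, 2, 2 points.
print([block_partition(10, 4, r) for r in range(4)])
```

Every point is owned by exactly one rank, and consecutive ranks own consecutive blocks; the communication step then only needs to exchange the boundary ("halo") points between neighbouring ranks.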

34 Expectations
Course is BRAND NEW = an EXPERIMENT! Materials are “in preparation” (sorry, guinea pigs!)
Expectations of the student:
- Learn something useful!
- There will be exercises and tasks
- But very seldom “right” and “wrong” answers
- Often I won’t know the answer!
- The expectation is only that the student try to make use of what they’ve learned.

35 Baseline TEST!

