High Performance Computing and Atmospheric Modeling


1 High Performance Computing and Atmospheric Modeling
John Michalakes, Mesoscale and Microscale Meteorology Division, National Center for Atmospheric Research. Presented at Colorado State University, November 26, 2007.

2 Outline
Part 1: HPC and atmospheric simulation; characteristics of atmospheric models. Part 2: The Weather Research and Forecast (WRF) model; towards petascale.

3 High Performance Computing and Weather
The original “HPC application.” John von Neumann, Jule Charney, Carl-Gustav Rossby, and others produced the first computerized weather forecast using ENIAC in 1950 (a few hundred grid points, 700 km resolution). Joint Numerical Weather Prediction Unit established in 1954.
Computational capability has gone hand in hand with numerical weather prediction (NWP) from its beginnings, and advances in computer power have had a direct impact on increasing forecast capability and skill. Discussions of the history of NWP usually begin with L. F. Richardson, who outlined a vision for predicting the weather using parallel processing – unfortunately, before the development of computers that could actually do the job. Instead, he envisioned a great room filled with teams of human calculators; anticipating the load imbalance, he imagined a central controller who could direct teams around the room to speed up or slow down using colored lights. The first application of actual computing hardware to NWP was undertaken at Princeton by John von Neumann, Charney, and others in the late 1940s with very simple models of the atmosphere. Over time, and with increases in available computer power, the sophistication and complexity of the models increased until a watershed point in the 1960s, when the skill of numerically produced weather forecasts began to consistently exceed the skill of human forecasters.
The chart shown here is from a paper by Frederic Shuman in a 1989 issue of Weather and Forecasting, “History of Numerical Weather Prediction at the NMC.” NMC is now the NOAA National Centers for Environmental Prediction. The chart demonstrates a consistent improvement in forecasts over three and a half decades in direct correlation with increases in computer power, shown as hardware milestones along the time axis; the milestones represent advances in supercomputer vector technology. The chart stops at 1990, so it does not show a crucial shift in the relationship between NWP software and computational hardware: beginning in the early 1990s, the breadth and diversity of the computer architecture landscape expanded to encompass both vector machines and microprocessors, and different models for parallelism, shared and distributed memory. At the same time, the costs of software development and maintenance, the fact that software typically outlives hardware by several generations, and the fact that community models by their nature must adapt to the range of platforms in use present a dilemma between the need for performance and the need for maintainable software.
References: Shuman, F. G., “History of Numerical Weather Prediction at the NMC,” Weather and Forecasting, 1989. Edwards, P. N., “Before 1955: Numerical Models and the Prehistory of AGCMs,” U. Michigan. Grcar, J., “John von Neumann and the Origins of Scientific Computing,” 2007.

4 High Performance Computing and Weather
The original “HPC application,” continued. The 50th anniversary of the JNWPU was celebrated at the University of Maryland in June 2004, attended by representatives of the National Weather Service, Air Force Weather Agency, and Navy Fleet Numerical.

5 High Performance Computing and Weather
Today: a key component of atmospheric research. Higher resolutions: 10^9 grid points. Speeds: 10^13 operations/second. More complex physics, direct simulation, ensembles, coupled model simulations. 5% of Top 500® systems are used for weather and climate.
Figure: precipitable water field from a 5-day global WRF forecast at 20 km resolution (100 million cells).

6 Number of Weather and Climate Systems in November Top500® Listings
Source: Top 500 Supercomputing Sites Copyright (c) TOP500.Org

7 Weather and Climate Systems in November Top500® Listings
Source: Top 500 Supercomputing Sites Copyright (c) TOP500.Org

8 Challenges for Petascale Atmospheric Models
Conventional wisdom revisited (the “View from Berkeley”): transistors are cheap, power is expensive; flops are cheap, memory access is expensive; performance will continue to increase, but through parallelism, not clock rate. The sketch below gives a back-of-the-envelope illustration of the flops-versus-memory point.
Other limits to parallelism were not so easy to fix, either because they were a feature of the current WRF model design or because of state-of-the-art engineering issues on very large parallel systems such as the Blue Gene or the Cray XT. We lived with these for the work here, but these are all issues that will need to be addressed as weather and climate modeling moves to petascale.
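To make the “flops cheap, memory access expensive” point concrete, here is a minimal back-of-the-envelope sketch in Python. The flop and byte counts are illustrative assumptions for a generic explicit stencil update, not measurements from WRF, and the per-core peak and bandwidth figures are likewise assumed.

```python
# Illustrative arithmetic-intensity estimate for an explicit stencil update.
# All counts below are assumptions chosen for illustration, not WRF measurements.

flops_per_point = 50          # assumed floating-point operations per grid point per update
bytes_per_point = 10 * 4      # assumed: ~10 single-precision values read/written per point

arithmetic_intensity = flops_per_point / bytes_per_point   # flops per byte of memory traffic

# A machine with, say, 10 GFLOP/s peak per core and 2 GB/s sustained memory
# bandwidth per core (again, illustrative numbers) is balanced at:
machine_balance = 10e9 / 2e9   # flops per byte the hardware can sustain

print(f"code arithmetic intensity : {arithmetic_intensity:.2f} flops/byte")
print(f"machine balance           : {machine_balance:.2f} flops/byte")
print("memory-bound" if arithmetic_intensity < machine_balance else "compute-bound")
```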

9 Characteristics of Atmospheric Models
Fundamentally CFD: numerical approximation of solutions to the PDEs for the primitive equations of mass, momentum, and thermodynamics, but with additional constraints and characteristics. Domains: spherical (global models), subject to the pole problem; rectangular (limited-area models), subject to lateral boundary conditions. Boundaries: physical (land, ocean, upper); regional models also have lateral boundaries. Predominantly structured grids, small by CFD standards: Cartesian coordinates (including lat/lon, isotropic, reduced Cartesian) and others such as icosahedral and hybrids (cubed-sphere, yin-yang, etc.). Grids range from 10^5 to 10^9 cells; a typical problem size today is around 10^8 cells (see the memory estimate below).
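As a rough check on what a 10^8-cell problem implies for memory, here is a small sketch using the figure of more than 100 single-precision variables per cell quoted later for the WRF Nature Run; treat it as an order-of-magnitude estimate rather than an exact footprint.

```python
# Order-of-magnitude memory estimate for a typical present-day problem size.
cells = 1e8                 # ~10^8 grid cells (typical problem size cited above)
variables_per_cell = 100    # >100 state variables per cell (figure quoted for the Nature Run)
bytes_per_value = 4         # single precision

state_bytes = cells * variables_per_cell * bytes_per_value
print(f"model state: ~{state_bytes / 1e9:.0f} GB")   # ~40 GB, before halos, I/O buffers, etc.
```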

10 Characteristics of Atmospheric Models
Grid types (figure): lat/lon, icosahedral, cubed-sphere, composite yin-yang, channel with polar caps.

11 Characteristics of Atmospheric Models
Continued… Numerical considerations: the fastest modes – gravity waves and acoustic waves – are of little interest for the solution but must be resolved without overly constraining the time step, either by time-split (explicit) finite-difference approximation or by elliptic solvers (implicit, spectral, semi-Lagrangian); a minimal time-splitting sketch follows below. More than half of the code, data, and computation is non-CFD (e.g., physics). There is a large amount of state per grid cell (on the order of 100 variables) and thousands of computations per cell per step, along with solution-induced load imbalance. In Berkeley “dwarf” terms: structured grids (dwarf 5) for the explicit dynamics; dense linear algebra and spectral methods (dwarfs 1 and 3) for the implicit/spectral solvers.
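The bullet on time-split explicit integration is worth a small illustration. The sketch below, in Python, shows the generic pattern – slow tendencies computed once per large step, fast (acoustic-like) modes advanced with several small sub-steps – for a toy 1-D system. It is a schematic of the technique only, not WRF's actual split-explicit scheme; the wave speeds, grid, and difference operators are assumptions chosen for clarity.

```python
import numpy as np

# Toy split-explicit step: slow advection handled once per big step,
# fast (acoustic-like) wave terms sub-cycled with a smaller time step.
# Periodic 1-D domain; all parameters below are illustrative assumptions.

nx, dx = 200, 1.0e3            # grid points and spacing [m]
u_adv, c_fast = 10.0, 300.0    # slow advection speed and fast wave speed [m/s]
dt_slow = 30.0                 # large (advective) time step [s]
n_sub = 10                     # fast-mode sub-steps per large step
dt_fast = dt_slow / n_sub

def ddx(f):
    """Centered difference with periodic boundaries."""
    return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

# Prognostic variables: a scalar h and a velocity perturbation w.
x = np.arange(nx) * dx
h = np.exp(-((x - 0.5 * nx * dx) / (10 * dx)) ** 2)
w = np.zeros(nx)

def split_explicit_step(h, w):
    # 1) Slow tendency (upwind advection), evaluated once and held fixed over the big step.
    slow_h = -u_adv * (h - np.roll(h, 1)) / dx
    # 2) Sub-cycle the fast linear wave terms with the small time step (forward-backward).
    for _ in range(n_sub):
        h = h + dt_fast * (slow_h - c_fast * ddx(w))
        w = w + dt_fast * (-c_fast * ddx(h))
    return h, w

for _ in range(100):
    h, w = split_explicit_step(h, w)
print("max |h| after 100 steps:", float(np.abs(h).max()))
```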

12 Characteristics of Atmospheric Models
Figure: workload characterization of physics-induced load imbalance in WRF – convective precipitation field versus per-processor load.

13 Characteristics of Atmospheric Models
Weather versus climate simulation: the codes and techniques for weather and climate simulation are basically identical, but they differ fundamentally in their requirements for HPC. Weather forecasting: forecast length is bounded by predictability to one or two weeks, but the problem is unbounded in complexity – it can usefully employ high-resolution, high-cost physics to “scale its way out” of Amdahl limits. Climate prediction: effectively unbounded in time (multiple decades to centuries); high resolution and sophisticated physics are desirable, but simulation speed in years/day is paramount (see the Amdahl's-law sketch below). Can giant supercomputers with thousands of processors be useful for weather? For climate?
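To see why the climate requirement of throughput in simulated years per day is the harder one, a minimal Amdahl's-law calculation helps. The serial fraction and processor counts below are illustrative assumptions, not WRF measurements.

```python
# Amdahl's law: speedup on p processors when a fraction f of the work is serial.
def amdahl_speedup(f_serial, p):
    return 1.0 / (f_serial + (1.0 - f_serial) / p)

f_serial = 0.01            # assumed 1% non-parallelizable work (illustrative)
for p in (128, 1024, 16384):
    s = amdahl_speedup(f_serial, p)
    print(f"p={p:6d}  speedup={s:7.1f}  (limit = {1.0 / f_serial:.0f}x)")

# Weather can raise resolution and physics cost per step (more parallel work per processor),
# but climate needs the same fixed-size problem to run faster in wall-clock terms,
# so it runs into the 1/f_serial ceiling directly.
```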

14 Weather as a Petascale Application
Weather is a petascale application only to the extent that higher resolution can be usefully employed. Exquisitely detailed but incorrect forecasts are not the answer; instead, very high resolution research simulations improve understanding, which in turn improves the skill of lower-resolution operational forecasts. For example, cloud-resolving (Δh ~ O(100 m)) simulations help us understand and improve parameterizations of cloud dynamics.


16 Summary (part 1) Atmospheric models are rooted in CFD but carry additional constraints, costs, and requirements. Weather and climate present different challenges for HPC, especially entering the petascale era: weather is difficult, climate may be problematic. Successfully enabling new science using HPC depends on careful consideration of numerous, often conflicting requirements in the design of atmospheric modeling software.

17 Weather Research and Forecast Model

18 WRF Overview Large collaborative effort to develop next-generation community model with direct path to operations Limited area, high-resolution Structured (Cartesian) with mesh-refinement (nesting) High-order explicit dynamics Software designed for HPC 4000+ registered users Applications Numerical Weather Prediction Atmospheric Research Coupled modeling systems Air quality research/prediction High resolution regional climate A quick overview of the WRF model that is being used for these experiments... ... A new development in WRF is extending the model, designed as a limited-area regional model, to operating as a high-resolution non-hydrostatic model on the global domain. The animation shows a “modest” 20km global simulation from this past July running on 128 processors of the IBM Power5+ system at NCAR, at a speed of about 4x real time. Increases in computing power and basic dynamics simulations such as this Nature Run research will feed directly into enabling global resolutions at very high-resolution prediction at the petascale.

19 WRF Overview Hurricane Katrina
Figures: Hurricane Katrina – observations (radar) versus WRF simulated reflectivity using a 4 km vortex-following moving nest; 5-day global WRF forecast at 20 km horizontal resolution, running at 4x real time on 128 processors of an IBM Power5+ (blueice.ucar.edu).

20 WRF Overview Large collaborative effort to develop next-generation community model with direct path to operations Limited area, high-resolution Structured (Cartesian) with mesh-refinement (nesting) High-order explicit dynamics Software designed for HPC 4000+ registered users Applications Numerical Weather Prediction Atmospheric Research Coupled modeling systems Air quality research/prediction High resolution regional climate A quick overview of the WRF model that is being used for these experiments... ... A new development in WRF is extending the model, designed as a limited-area regional model, to operating as a high-resolution non-hydrostatic model on the global domain. The animation shows a “modest” 20km global simulation from this past July running on 128 processors of the IBM Power5+ system at NCAR, at a speed of about 4x real time. Increases in computing power and basic dynamics simulations such as this Nature Run research will feed directly into enabling global resolutions at very high-resolution prediction at the petascale. WRF-CHEM 27km 36 hour NO+NO2 forecast 29-31 January 2005 Courtesy Georg Grell

21 WRF Overview Large collaborative effort to develop next-generation community model with direct path to operations Limited area, high-resolution Structured (Cartesian) with mesh-refinement (nesting) High-order explicit dynamics Software designed for HPC 4000+ registered users Applications Numerical Weather Prediction Atmospheric Research Coupled modeling systems Air quality research/prediction High resolution regional climate A quick overview of the WRF model that is being used for these experiments... ... A new development in WRF is extending the model, designed as a limited-area regional model, to operating as a high-resolution non-hydrostatic model on the global domain. The animation shows a “modest” 20km global simulation from this past July running on 128 processors of the IBM Power5+ system at NCAR, at a speed of about 4x real time. Increases in computing power and basic dynamics simulations such as this Nature Run research will feed directly into enabling global resolutions at very high-resolution prediction at the petascale. Precipitable water field from 2-year NCAR Climate-WRF simulation using 720 processors IBM Power5

22 WRF Software Framework
Hierarchical design, multi-level parallelism, performance-portable. Layered architecture (figure): a driver layer (top-level control, memory management, nesting, parallelism, external APIs), a mediation layer, and a model layer containing the ARW and NMM solvers with plug-compatible physics behind common physics interfaces. A sketch of the plug-compatible physics idea follows below.
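To illustrate what “plug-compatible physics” means in practice, here is a minimal Python sketch of a physics-interface layer in which schemes that conform to a common call signature can be swapped by configuration. The class, function, and field names are invented for illustration; they are not WRF's actual Fortran interfaces.

```python
# Minimal sketch of a plug-compatible physics interface (illustrative names only).
from typing import Callable, Dict

# Every physics scheme takes the model state (a dict of fields) plus the time step
# and returns tendencies keyed by field name -- the "plug-compatible" contract.
PhysicsScheme = Callable[[Dict[str, float], float], Dict[str, float]]

def toy_microphysics(state, dt):
    # toy stand-in: condense a fraction of vapor into cloud water
    dq = 0.1 * state["qvapor"]
    return {"qvapor": -dq, "qcloud": +dq}

def toy_boundary_layer(state, dt):
    # toy stand-in: relax temperature toward a reference value
    return {"theta": 0.05 * (300.0 - state["theta"])}

REGISTRY: Dict[str, PhysicsScheme] = {
    "microphysics": toy_microphysics,
    "pbl": toy_boundary_layer,
}

def call_physics(selected, state, dt):
    """Mediation-layer style dispatch: call each selected scheme, accumulate tendencies."""
    tendencies: Dict[str, float] = {}
    for name in selected:
        for field, tend in REGISTRY[name](state, dt).items():
            tendencies[field] = tendencies.get(field, 0.0) + tend
    return tendencies

state = {"qvapor": 0.01, "qcloud": 0.0, "theta": 295.0}
print(call_physics(["microphysics", "pbl"], state, dt=30.0))
```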

23 WRF Software Framework
Parallel decomposition (figure): a logical domain is decomposed into patches, one per distributed-memory process, with inter-processor (halo) communication between neighboring patches; each patch is further divided into multiple tiles for shared-memory (threaded) parallelism. Hierarchical design, multi-level parallelism, performance-portable. A small decomposition and halo-exchange sketch follows below.
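The sketch below illustrates the patch-plus-halo idea on a 1-D decomposition, using numpy arrays in place of message passing; in a real distributed-memory run the halo copies would be MPI sends and receives between processes. The sizes and the 3-point smoother are illustrative assumptions, not WRF code.

```python
import numpy as np

# Toy 1-D patch decomposition with a one-cell halo on each side.
# Shared-memory arrays stand in for MPI communication between patches.

nx_global, n_patches, halo = 16, 4, 1
nx_local = nx_global // n_patches

# Each patch stores its interior plus halo cells on either side.
patches = [np.zeros(nx_local + 2 * halo) for _ in range(n_patches)]
for p, patch in enumerate(patches):
    patch[halo:-halo] = np.arange(p * nx_local, (p + 1) * nx_local, dtype=float)

def exchange_halos(patches):
    """Copy edge cells between neighboring patches (periodic); models an MPI halo exchange."""
    n = len(patches)
    for p in range(n):
        left, right = patches[(p - 1) % n], patches[(p + 1) % n]
        patches[p][:halo] = left[-2 * halo:-halo]    # receive from left neighbor's interior edge
        patches[p][-halo:] = right[halo:2 * halo]    # receive from right neighbor's interior edge

def smooth(patch):
    """3-point average over the interior; needs valid halo values to be correct at patch edges."""
    n = patch.size
    interior = slice(halo, n - halo)
    patch[interior] = (patch[halo - 1:n - halo - 1] +
                       patch[interior] +
                       patch[halo + 1:n - halo + 1]) / 3.0

exchange_halos(patches)
for patch in patches:
    smooth(patch)

print(np.concatenate([p[halo:-halo] for p in patches]))
```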


25 WRF Supported Platforms
This slide shows the systems WRF has been ported to and, in most cases, is running on out in the user community. A few recent developments: an experimental implementation on the IBM BG/L, and Apple G5 workstation ports that are not yet distributed-memory parallel. The list includes many variations of UNIX, both from the major vendors and from a number of integrators and turnkey systems, as well as home-grown research-department varieties. Linux is probably the widest distribution in terms of number of CPUs running WRF, and also probably the most vexing in terms of community model support. Figure callout: petascale precursor systems.

26 WRF Supported Platforms
The University of São Paulo has a unique low-cost supercomputer installation: 2.2 GHz IA32 motherboards and power supplies attached to 1'x4' wooden planks, four motherboards per plank, suspended by window-sash cord from the ceiling of their space-limited computer room. They call it a “clothesline” computer (actually, the Brazilian Portuguese word for clothesline). The planks hang from pulleys and can be individually lowered and raised for maintenance and cleaning. The blue cables are gigabit Ethernet, over which they run MPICH. Very ingenious (assuming their machine room is beaver- and termite-proof). University of São Paulo “Clothesline Computer.” Figure callout: petascale precursor systems.

27 Towards Petascale

28 WRF Nature Run – SC07 Gordon Bell HPC Challenge Finalist
New scientific insights and ultimately better prediction will be enabled by atmospheric modeling at petascale. Goal: establish a baseline for studying atmospheric dynamics at very high resolution – a 5 km hemispheric domain, first few hours of a 90-day Nature Run. Computational: 12 tera-ops per time step (5800 ops per cell-step). Memory footprint: more than 100 variables per cell (single precision) times roughly 2x10^9 cells. Interprocessor communication: 160 MPI Send/Recv pairs per time step per processor, averaging 120 KB per exchange (one way). I/O: 200 GB input per restart, 40 GB output per hourly write. Goals: record parallelism and scaling on a petascale problem; record computational rate with output; and, most importantly, a new scientific result and a step towards new understanding of predictability in the earth's atmosphere. (The per-step operation count is checked in the sketch below.)
Modeling of the atmosphere was the first high-performance computing application, when John von Neumann, Jule Charney, Carl-Gustav Rossby, and others performed the first simulations on ENIAC in the 1950s, and atmospheric modeling for weather and climate continues to benefit from steady increases in computing power, now entering the petascale era. The power of petascale computing – not here yet, by the way – will enable unprecedented problem sizes and resolutions for explicit resolution of atmospheric dynamics and other processes at global scale, improving understanding and ultimately the predictive power of earth system models. Atmospheric models – at their core, CFD – stress every aspect of high-performance computing systems.
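As a quick consistency check on the numbers quoted above (and on the 11.6 trillion operations per step cited in the next slide's notes), here is the arithmetic, treating 2x10^9 cells as an approximate count.

```python
# Consistency check of the Nature Run per-step operation count.
ops_per_cell_step = 5800        # quoted above
cells = 2.0e9                   # ~2 x 10^9 cells (5 km hemispheric domain)

ops_per_step = ops_per_cell_step * cells
print(f"operations per time step: {ops_per_step:.2e}")
# ~1.16e13 operations, i.e. the ~12 tera-ops (11.6 trillion) per step quoted in the talk.
```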

29 Floating Point Rate
Here are performance results for four 3½-hour simulations conducted at different processor counts on two of the test systems, both with and without hourly I/O. This slide shows the computation-only floating point rates obtained for our successful runs on Franklin, the Cray XT4 at NERSC, and on NY Blue, the Blue Gene/L system at Stony Brook University and Brookhaven National Laboratory. The plot shows floating point rate in TF/s on the left-hand axis as a function of number of processors; the simulation rate – the number of hours of simulation performed per hour of machine time – is shown on the right-hand axis. Floating point rate is based on an operation count of 11.6 trillion floating point operations per time step, as measured using the Blue Gene performance counters. The XT4 reaches 8.77 TF/s on 12,090 processors, or 14% (12%) of peak; the BG/L reaches 3.4 TF/s on 15,360 processors, or 8% (7.4%) of peak (see the percent-of-peak check below). BG runs used coprocessor (CO) mode. Because of the size and cost of these runs, only one run was conducted for each data point, so we cannot present error bars; from what we observed in our other attempted runs, the computational measurements and floating point rates were repeatable, and there would not be much variance to report even if we had been able to report on multiple runs. As for balance, the Blue Gene output bandwidth, while slower, was better able to keep up with the computational performance of the model: on 15K Blue Gene processors, output was about 290 MB/s, slightly better than 240 MB/s at 6K processors, and the I/O penalty on 15K processors is only 6.4 percent – a little high, but in the acceptable range even for an ordinary WRF run. On 8K processors, the Cray output rate was 430 MB/s and the I/O penalty was also acceptable, about 7.7% (the difference between 6.2 and 5.7 TF/s). On 12K processors, bandwidth dropped to 290 MB/s and the cost of output rose to 15%, the difference between 8.8 and 7.5 TF/s – more than a teraflop lost to output performance. (I've worked in this field long enough to consider losing a teraflop almost acceptable!)
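The Blue Gene percent-of-peak figure can be reproduced from numbers given elsewhere in the talk: 2.8 GF/s peak per Blue Gene/L processor, quoted on the NY Blue slide. The XT4 per-processor peak is not stated here, so that check is omitted.

```python
# Reproduce the Blue Gene/L percent-of-peak figure from numbers quoted in the talk.
sustained_tflops = 3.4          # measured rate on BG/L, computation only
processors = 15_360
peak_per_proc_gflops = 2.8      # per-processor peak quoted for NY Blue (BG/L)

peak_tflops = processors * peak_per_proc_gflops / 1000.0
print(f"aggregate peak : {peak_tflops:.1f} TF/s")
print(f"percent of peak: {100.0 * sustained_tflops / peak_tflops:.1f}%")   # ~7.9%, i.e. ~8%
```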

30 Initial simulation results
For comparison: real conditions (July 22, 2007), Northern and Southern Hemispheres, versus the WRF Nature Run at 5 km (idealized). Capturing large-scale structure already (Rossby waves); small-scale features spinning up (next slide).
Looking at some initial scientific results from the experiments: the plot on the right (actually an animation) is wind speed at 500 millibars during the first 3½ hours of the Nature Run simulation. The atmosphere is not changing much at the global scale over that short period. We see large-scale structures – Rossby waves that were present in the initial data, which was spun up at lower resolution – and the intensification is most likely the model adjusting to the increase in resolution over this initial 3.5-hour period. On the left are plots from a lower-resolution (20 km) global WRF real-data forecast using initial data from July 22, 2007. Note the closer similarity between the Nature Run's large-scale structure and the Southern Hemisphere rather than the Northern Hemisphere: the Southern Hemisphere is in winter, when the Rossby wave features are stronger, and it has more water and less topographic influence than the Northern Hemisphere. But we are not after large-scale fidelity...

31 Initial simulation results
Kinetic energy spectrum (figure), with k^-3 and k^-5/3 reference slopes. The large scales are already present (previously spun up on a coarser grid), while smaller scales are not yet spun up: at 3:30 h into the simulation, the mesoscales are still spinning up and filling in the spectrum.

32 New Directions GPU and other non-traditional architectures
Converted the standalone WRF microphysics to CUDA; it produces the same output as the original Fortran, to within roundoff. Speed of the original on the host CPU (Xeon): 330 milliseconds. On the NVIDIA GPU: theoretical peak would be about 0.3 milliseconds (1000x); measured time on the GPU was 26 milliseconds (roughly 12x speedup), or 37 milliseconds including data transfers (roughly 9x). Preliminary, but even though the current implementation is getting only about 1% of peak on the GPU, it is still doing roughly 10x better than the host CPU. (The speedup arithmetic is spelled out below.)
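A small check of the quoted speedups, using only the timings given on the slide:

```python
# Check the GPU speedup figures quoted above.
cpu_ms = 330.0          # original Fortran on host Xeon
gpu_kernel_ms = 26.0    # CUDA kernel time only
gpu_total_ms = 37.0     # including host<->device data transfers

print(f"kernel-only speedup    : {cpu_ms / gpu_kernel_ms:.1f}x")   # ~12.7x
print(f"speedup with transfers : {cpu_ms / gpu_total_ms:.1f}x")    # ~8.9x
print(f"transfer overhead      : {gpu_total_ms - gpu_kernel_ms:.0f} ms per call")
```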

33 Summary Atmospheric modeling, one of the original HPC-enabled applications, is moving to exploit petascale computing. As always, there are significant engineering and scientific challenges and opportunities. WRF web page: http://www.wrf-model.org. My contact info:

34 Thank you WRF web page: http://www.wrf-model.org
My contact info:

35 WRF Nature Run Computational record for an atmospheric model?
AFES on the Earth Simulator still holds the record at 27 TF/s; 8.76 TF/s (7.47 TF/s with I/O) is a WRF record, and perhaps a record for a model designed to run at high, non-hydrostatic resolution with scale-appropriate dynamics. Parallelism and scaling? 15K processors at 7.8% of peak (7.2% with I/O) – we think yes. I/O performance at scale: a 6.4% penalty for I/O on Blue Gene, 14.75% on the XT4; needs improvement, but science is enabled in the meantime. Science: important new steps towards understanding the behavior and predictability of the atmosphere through frontier simulation.
New York Blue is an 18-rack IBM Blue Gene/L massively parallel supercomputer located at Brookhaven National Laboratory (BNL) in Upton, Long Island, New York. It is the centerpiece of the New York Center for Computational Sciences (NYCCS), a cooperative effort between BNL and Stony Brook University that will also involve universities throughout the state of New York. Each of the 18 racks consists of 1024 compute nodes (18,432 nodes in total), with each node containing two 700 MHz PowerPC 440 core processors and 1 GB of memory (36,864 processors and 18.4 TB of memory in total); that is about 2.8 GF/s peak per processor. The racks are arranged as six rows of three racks each. With a peak performance of about 100 teraflops (trillion floating-point calculations per second), New York Blue allows computations critical for research in biology, medicine, materials science, nanoscience, renewable energy, climate science, finance, and technology.

