Model and Data Hierarchies for Simulating and Understanding Climate
Marco A. Giorgetta

Demands on next generation dynamical solvers

(My) wish list for the dynamical solver
- Conserves mass, tracer mass (and energy)
- Numerical consistency between continuity and transport equation
- Well-behaved dynamics
  - Low numerical noise
  - Low numerical diffusion
  - Physically reasonable dispersion relationship
- Accurate
- Grid refinements (of different kinds)
- Fast

Numerical consistency between continuity and transport equation

Example: ECHAM
- A spectral transform dynamical core solving for:
  - Relative vorticity
  - Divergence
  - Temperature
  - Log(surface pressure)
  ... using a 3-time-level "leapfrog" time integration scheme
- A hybrid "eta" vertical coordinate:
  - Pressure at the interface between layers: p_i(k,t) = a(k) + b(k) * p_s(t)
  - Mass of air in a layer times g: dp(k,t) = p_i(k+1,t) - p_i(k,t)
- A flux-form transport scheme for q, cloud water, cloud ice (and tracers for chemistry or aerosols) ... using a 2-time-level scheme
→ PROBLEM: tracer mass is not conserved, because dynamics and transport use different time levels and hence inconsistent mass fluxes.
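For concreteness, a minimal Python sketch of the hybrid "eta" coordinate arithmetic above; the a(k) and b(k) values are invented for illustration and are not ECHAM's actual coefficient tables:

```python
import numpy as np

# Illustrative hybrid "eta" coefficients for a 4-layer column (5 interfaces).
# In ECHAM the a(k) [Pa] and b(k) [-] are fixed model tables; these values
# are made up for demonstration only.
a = np.array([0.0, 2000.0, 8000.0, 11000.0, 0.0])   # pressure part [Pa]
b = np.array([0.0, 0.0,    0.05,   0.4,     1.0])   # sigma part [-]

def interface_pressure(ps):
    """p_i(k,t) = a(k) + b(k) * p_s(t) at the layer interfaces."""
    return a + b * ps

def layer_mass_times_g(ps):
    """dp(k,t) = p_i(k+1,t) - p_i(k,t): air mass per unit area times g."""
    return np.diff(interface_pressure(ps))

ps = 101325.0                       # surface pressure [Pa]
print(interface_pressure(ps))       # monotone from model top to surface
print(layer_mass_times_g(ps))       # the layer masses used by the continuity equation
```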

Illustration: ICON dynamical core + transport scheme
- Triangular grid
- Hydrostatic dynamics
- Hybrid vertical "eta" coordinate
- 2-time-level semi-implicit time stepping
- Flux-form semi-Lagrangian transport scheme

Jablonowski-Williamson test
- Initial state = zonally symmetric, but dynamically unstable flow
- Initial perturbation → baroclinic wave develops over ~10 days
- 4 tracers, of which Q4(x,y,z,t=0) = 1; a consistent scheme must keep this tracer uniform (see the sketch below)

(Figure: Daniel Reinert, DWD)
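To make the consistency requirement concrete, here is a minimal 1D sketch (illustrative code, not the ICON or ECHAM implementation) showing that a uniform tracer stays exactly uniform only if the transport scheme uses the same mass fluxes as the continuity equation:

```python
import numpy as np

# F[i] is the mass flux across the right edge of cell i (positive rightward,
# chosen > 0 everywhere so the upwind value is simply q[i]). All numbers
# are illustrative.
n = 64
k = np.arange(n)
dp = np.full(n, 1000.0)                                  # air mass x g per cell
q = np.ones(n)                                           # uniform tracer, cf. Q4 = 1
F = 5.0 + 2.0 * np.sin(2 * np.pi * k / n)                # divergent mass flux
F_other = 5.0 + 2.0 * np.sin(2 * np.pi * (k + 0.5) / n)  # slightly different flux,
                                                         # e.g. from another time level

def one_step(dp, q, F_mass, F_tracer):
    dp_new = dp + np.roll(F_mass, 1) - F_mass            # continuity equation
    Fq = F_tracer * q                                    # upwind tracer flux (F > 0)
    dpq_new = dp * q + np.roll(Fq, 1) - Fq               # flux-form transport
    return dp_new, dpq_new / dp_new

_, q_cons = one_step(dp, q, F, F)        # consistent: same mass flux
_, q_inc = one_step(dp, q, F, F_other)   # inconsistent: different flux
print(np.max(np.abs(q_cons - 1.0)))      # 0.0: tracer stays exactly uniform
print(np.max(np.abs(q_inc - 1.0)))       # > 0: spurious sources/sinks appear
```

With consistent fluxes the uniform tracer is preserved to the last bit; with a flux from a different time level it immediately drifts, which is exactly the non-conservation problem noted for ECHAM above.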

Grid refinements (of different kinds)

Options/Questions
- Grid refinement
  - Static or dynamic/adaptive?
  - Re-distribute grid points, or create/destroy grid points?
  - 2-d or 3-d? Boundary layer, troposphere, stratosphere, mesosphere
  - Single time integration scheme, or recursive schemes?
  - Conservation properties?
- Dynamical core
  - Adjust scheme to expected errors (→ FE schemes)
- Parameterizations
  - Submodels: embedded dynamical models, "super-parameterizations"
- Cost function
  - How to predict the need for refinement, and what for? Target/goal?
  - How to confine computational costs?

Generating the icosahedral triangular grid
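A minimal sketch of the standard construction (assumed here for illustration; this is not the actual ICON grid generator): start from the 20 icosahedron faces and recursively bisect triangle edges, projecting the new vertices back onto the unit sphere, so 1 triangle → 4 triangles per level:

```python
import numpy as np

def subdivide(tri, levels):
    """tri: (3, 3) array of unit vectors; returns a list of refined triangles."""
    if levels == 0:
        return [tri]
    a, b, c = tri
    # Edge midpoints, renormalized onto the sphere.
    ab = (a + b) / np.linalg.norm(a + b)
    bc = (b + c) / np.linalg.norm(b + c)
    ca = (c + a) / np.linalg.norm(c + a)
    out = []
    for child in ([a, ab, ca], [ab, b, bc], [ca, bc, c], [ab, bc, ca]):
        out += subdivide(np.array(child), levels - 1)
    return out

# One icosahedron face (vertices from the golden-ratio construction),
# normalized onto the unit sphere.
phi = (1 + 5 ** 0.5) / 2
face = np.array([[0, 1, phi], [0, -1, phi], [phi, 0, 1]], dtype=float)
face /= np.linalg.norm(face, axis=1, keepdims=True)
print(len(subdivide(face, 3)))   # 4**3 = 64 triangles from one face
```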

Other kinds of grid refinements: (A) hexagonal grid, (B) re-distributing cells

Refining the grid
Grid refinement
- Refinement by bisection of triangle edges: 1 triangle → 4 triangles
- 1 or more refined regions
- 1 or more refinement levels per region

Two-way nesting (see the sketch below)
1. Compute one time step on the parent domain → dX/dt
2. Interpolate the tendencies to the lateral boundary of the nested domain
3. Perform 2 time steps(*) on the nested domain
4. Feed back the increments

(*) For more levels → apply recursion → numerical discontinuities!

Leonidas Linardakis, MPI-M
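The recursion can be sketched as follows; the Domain class and the three step routines are hypothetical stand-ins for the actual ICON machinery and only trace the call sequence:

```python
# Schematic, runnable sketch of the two-way nesting recursion above.
class Domain:
    def __init__(self, name, children=()):
        self.name, self.children = name, list(children)

def step_dynamics(domain, dt):
    print(f"step {domain.name} with dt={dt}")
    return "tendencies dX/dt"           # placeholder for the real tendencies

def interpolate_boundary(tendencies, nest):
    pass                                # 2. lateral boundary data for the nest

def feedback_increments(nest, parent):
    pass                                # 4. two-way feedback to the parent

def advance(domain, dt):
    tend = step_dynamics(domain, dt)    # 1. one time step on this domain
    for nest in domain.children:
        interpolate_boundary(tend, nest)
        advance(nest, dt / 2)           # 3. two steps at half the time step;
        advance(nest, dt / 2)           #    the recursion handles deeper levels
        feedback_increments(nest, domain)

# One global domain with a nest that itself contains a second-level nest.
advance(Domain("global", [Domain("nest1", [Domain("nest2")])]), dt=240.0)
```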

Mountain-induced Rossby wave
[Figure panels: (A) global resolution 140 km, (B) global resolution 35 km, (C) global resolution 140 km with regional resolution 35 km]
- Vorticity at ~3 km MSL after 20 days of simulation
- 2000-m circular mountain at 30°N/90°E
- Initial condition: zonal flow with maximum speed of 20 m/s
- Experiments:
  - (A) Global with 140 km resolution
  - (B) Global with 35 km resolution
  - (C) As (A), but with a 2-step refined circular region at the resolution of (B)

Günther Zängl, DWD

High Performance Computing

How to get faster:
- Faster CPUs
- Faster connections between CPU, memory, and disk
- Parallelization over more CPUs
  - CPUs share memory
  - CPUs have their own memory
- Modify code design to account for the architecture of the CPUs
  - Scalar/vector CPUs
  - Sizes of intermediate, fast-access memories ("caches")

The past was dominated by improved CPUs; the future will be dominated by more CPUs.

Parallelization
Distribute the work to many CPUs:
- Works well for local computations: cells, columns, (levels), ...
- Works badly for non-local tasks: integrals, global organization, ...
→ Serial and parallel sections in a code
[Figure: timeline of serial and parallel code sections on 1 CPU vs. on 4 CPUs]

→ Amdahl's law
- P = fraction of the work that can be parallelized
- 1 − P = remainder, which cannot be parallelized
- Speedup on N processors: S(N) = 1 / ((1 − P) + P/N)
- Maximum speedup for N → ∞: S = 1 / (1 − P)

(Figure: Wikipedia)

The serial fraction of work limits the maximum speedup!
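A few lines of Python make the limit tangible; the 8000-core figure anticipates the DKRZ machine described on the next slide:

```python
def speedup(P, N):
    """Amdahl's law: S(N) = 1 / ((1 - P) + P / N)."""
    return 1.0 / ((1.0 - P) + P / N)

for P in (0.5, 0.9, 0.99):
    # Speedup on 8000 cores vs. the asymptotic limit 1 / (1 - P).
    print(P, speedup(P, 8000), 1.0 / (1.0 - P))
# Even with 99 % parallel work, 8000 cores yield a speedup below 100.
```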

For illustration: computer at DKRZ
IBM Power6 CPUs
- 250 nodes
- 16 CPUs/node → total = 4000 CPUs
- 2 cores/CPU → total = 8000 cores
- 2 floating point units/core → 64 parallel processes/node

ECHAM GCM at ~1° resolution
- Scales "well" up to 20 nodes = 640 cores (with parallel I/O)
- Problem: the spectral transform method used for the dynamical core requires transformations between spherical harmonics and grid point fields → global data exchange, transpositions

Future: ~10^5 cores → new model necessary

Strategies
Select a numerical scheme which is
- Sufficiently accurate with respect to your problem
- Computationally efficient
  - Fast on single CPUs
  - Minimizes global data exchange (transformations, "fixers", I/O; see the halo-exchange sketch below)
  - Allows an optimal distribution of the work

Practical issues:
- Optimize the code for the main computer platform
- Account for strengths/weaknesses of the available compilers
- Avoid "tricks" that prevent the code from working on other platforms
- Optimize the most expensive parts first
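As an illustration of "minimize global data exchange", here is a hypothetical mpi4py sketch (mpi4py and the 1D periodic decomposition are assumptions for illustration, not ECHAM or ICON code): a local stencil needs only its neighbours' boundary cells via point-to-point halo exchange, whereas a spectral transform would require global transpositions.

```python
# Run with e.g.: mpirun -n 4 python halo.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size   # periodic 1D decomposition

u = np.full(16, float(rank))        # this rank's slice of a global field
halo_left = np.empty(1)
halo_right = np.empty(1)

# Exchange boundary cells with the two neighbours only (no global traffic).
comm.Sendrecv(u[-1:], dest=right, recvbuf=halo_left, source=left)
comm.Sendrecv(u[:1],  dest=left,  recvbuf=halo_right, source=right)

# Local 3-point diffusion stencil, now computable without any global data.
u_new = u.copy()
u_new[1:-1] += 0.1 * (u[2:] - 2 * u[1:-1] + u[:-2])
u_new[0]    += 0.1 * (u[1] - 2 * u[0] + halo_left[0])
u_new[-1]   += 0.1 * (halo_right[0] - 2 * u[-1] + u[-2])
```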

Other problems in HPC
Data storage:
- Disk capacities grow more slowly than computing power
- Bandwidth between computer and storage system
- ESMs can produce HUGE amounts of data
- Finite lifetime of disks or tapes → backups or re-computing?

Data accessibility:
- Bandwidth between disks/tapes and the post-processing computer
- Post-processing software must be parallelized

Data description:
- Documentation of model, experimental setup, formats, etc.

Climate models are no longer a driver for HPC development.

END