Welcome To the Inaugural Meeting of the WRF Software Training and Documentation Team Jan. 26-28, 2004 NCAR, MMM Division.


Welcome To the Inaugural Meeting of the WRF Software Training and Documentation Team Jan. 26-28, 2004 NCAR, MMM Division

Monday, January 26, 2004
– Introduction
– Software Overview
– WRF Software Tutorial, June 2003
– Data and data structures
– Parallel infrastructure

Introduction
History
– Requirements emphasize flexibility over a range of platforms, applications, and users
– WRF develops rapidly: first released Dec 2000; last beta release, 1.3, in May; official 2.0 release coming in May 2004
– Circa 2003, "arcane" was used to describe WRF: adj., known or understood by only a few; mysterious

Introduction
Purpose of WRF Tiger Team effort
– Extend knowledge of WRF software to a wider base of software developers
– Create a comprehensive developer document
– Team approach to both objectives
– Streamlining and code improvement as byproducts in the coming months, but not the subject of this meeting

Introduction
This meeting
– Review of WRF software structure and function:
  – Phenomenological: this is the code as it exists
  – Incomplete: time to prepare and present is limited, but what we're looking for now is a roadmap through the code for producing the comprehensive documentation
– Develop an outline for the developer documentation
– Writing assignments and work plan over the next 9 months

Some terms
– WRF Architecture: scheme of software layers and interface definitions
– WRF Framework: the software infrastructure; also the "driver layer" in the WRF architecture
– WRF Model Layer: the computational routines that are specifically WRF
– WRF Model: a realization of the WRF architecture comprising the WRF model layer with some framework
– WRF: a set of WRF architecture-compliant applications, of which the WRF Model is one

WRF Software Overview

Weather Research and Forecast Model Goals: Develop an advanced mesoscale forecast and assimilation system, and accelerate research advances into operations. (Figure: 12km WRF simulation of a large-scale baroclinic cyclone, Oct. 24, 2001.)

WRF Software Requirements…
– Fully support the user community's needs: nesting, coupling, contributed physics code, and multiple dynamical cores – but keep it simple
– Support every computer, but make sure scaling and performance are optimal on the computers we use
– Leverage community infrastructure, computational frameworks, contributed software – but please, no opaque code
– Implement by committee of geographically remote developers
– Adhere to the union of all software process models
– Fully test, document, support
– Free ;-)

WRF Software Requirements (for real)
Goals
– Community model
– Good performance
– Portable across a range of architectures
– Flexible, maintainable, understandable
– Facilitate code reuse
– Multiple dynamics/physics options
– Run-time configurable
– Nested
– Package independent
Aspects of Design
– Single-source code
– Fortran90 modules, dynamic memory, structures, recursion
– Hierarchical software architecture
– Multi-level parallelism
– CASE: Registry
– Package-neutral APIs: I/O, data formats, communication
– Scalable nesting/coupling infrastructure

Aspects of WRF Software Design

Model Coupling

ESMF

Performance

Structural Aspects
– Directory structure and relationship to software hierarchy
– File nomenclature and conventions
– USE association

Directory Structure

WRF Model Directory Structure (page 5, WRF D&I Document). (Diagram: directory layers labeled driver, mediation, model.)

WRF File Taxonomy and Nomenclature

Module Conventions and USE Association
– Modules are named module_something
– The name of the file containing a module is module_something.F
– If a module includes an initialization routine, that routine should be named init_module_something()
Typically:
– Driver and model layers are made up of modules
– The mediation layer is not (rather, bare subroutines), except for physics drivers in the phys directory
– This gives the benefit of modules while avoiding cycles in the USE-association graph
(Diagram: example USE relationships among module_this, module_that, module_whatcha, and module_macallit across the driver, mediation, and model layers.)
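The naming convention above is mechanical enough to state as code. A minimal sketch follows (in Python rather than WRF's Fortran, purely for illustration; wrf_module_names is a hypothetical helper, not part of WRF):

```python
# Hypothetical helper illustrating the WRF naming convention:
# module_<something> lives in module_<something>.F and, if it has an
# initialization routine, that routine is init_module_<something>().
def wrf_module_names(something):
    """Return (module name, file name, init routine) for 'something'."""
    module = "module_" + something
    return module, module + ".F", "init_" + module + "()"

print(wrf_module_names("domain"))
# ('module_domain', 'module_domain.F', 'init_module_domain()')
```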

WRF S/W Tutorial, June 2003

Tutorial Presentation
– Parallel infrastructure
– Registry details
– I/O architecture and mechanism
– Example of coding a new package into the framework

Data and Data Structures

Session 3: Data Structures
– Overview
– Representation of domain
– Special representations: Lateral Boundary Conditions; 4D Tracer Arrays

Data Overview
WRF Data Taxonomy
– State data
– Intermediate data type 1 (I1)
– Intermediate data type 2 (I2)
– Heap storage (COMMON or Module data)

State Data
– Persist for the duration of a domain
– Represented as fields in the domain data structure
– Arrays are represented as dynamically allocated pointer arrays in the domain data structure
– Declared in the Registry using the state keyword
– Always memory dimensioned; always thread shared
– Only state arrays can be subject to I/O and interprocessor communication

I1 Data
– Data that persists for the duration of one time step on a domain and is then released
– Declared in the Registry using the i1 keyword
– Typically automatic storage (program stack) in the solve routine
– Typical usage is for tendency arrays in the solver
– Always memory dimensioned and thread shared
– Typically not communicated or subject to I/O

I2 Data
– I2 data are local arrays that exist only in model-layer subroutines and only for the duration of the call to the subroutine
– I2 data is not declared in the Registry, never communicated, and never input or output
– I2 data is tile dimensioned and thread local; over-dimensioning within the routine for redundant computation is allowed
  – the responsibility of the model-layer programmer
  – should always be limited to thread-local data

Heap Storage
– Data stored on the process heap is not thread-safe and is generally forbidden anywhere in WRF
  – COMMON declarations
  – Module data
– Exception: if the data object is
  – completely contained and private within a model-layer module, and
  – set once and then read-only ever after, and
  – has no decomposed dimensions

Grid Representation in Arrays
– Increasing indices in WRF arrays run:
  – West to East (X, or I-dimension)
  – South to North (Y, or J-dimension)
  – Bottom to Top (Z, or K-dimension)
– Storage order in WRF is IKJ, but this is a WRF Model convention, not a restriction of the WRF Software Framework

Grid Representation in Arrays The extent of the logical or domain dimensions is always the "staggered" grid dimension. That is, from the point of view of a non-staggered dimension, there is always an extra cell on the end of the domain dimension.
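The staggering rule can be made concrete with a small sketch (Python for illustration, not WRF code; loop_bounds is a hypothetical helper): domain dimensions are declared at the staggered extent, and loops over an unstaggered (mass-point) field stop one short.

```python
# Hypothetical sketch: domain dimensions are sized for the staggered
# grid; unstaggered fields use one fewer point in that dimension.
def loop_bounds(ds, de, staggered):
    """Inclusive loop bounds over a domain dimension ds..de."""
    return (ds, de) if staggered else (ds, de - 1)

ids, ide = 1, 5                       # staggered extent in x
print(loop_bounds(ids, ide, True))    # staggered u in x: (1, 5)
print(loop_bounds(ids, ide, False))   # mass points: (1, 4), i.e. ids..ide-1
```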

Grid Indices Mapped onto Array Indices (C-grid example)
(Diagram: a 4x4 mass-point grid m, with u staggered to 5 points in x and v staggered to 5 points in y; ids = 1, ide = 5; jds = 1, jde = 5.)
Computation over mass points runs only ids..ide-1 and jds..jde-1. Likewise, vertical computation over unstaggered fields runs kds..kde-1.

LBC Arrays
– State arrays, declared in the Registry using the b modifier in the dimension field of the entry
– Store specified forcing data on domain 1, or forcing data from the parent on a nest
– All four boundaries are stored in the array; the last index is over: P_XSB (western), P_XEB (eastern), P_YSB (southern), P_YEB (northern)
– These are defined in module_state_description.F

LBC Arrays
LBC arrays are declared as follows:

  em_u_b(max(ide,jde),kde,spec_bdy_width,4)

– Globally dimensioned in the first index as the maximum of the x and y dimensions
– Second index is over the vertical dimension
– Third index is the width of the boundary (namelist spec_bdy_width)
– Fourth index is which boundary
Note: LBC arrays are globally dimensioned, not fully dimensioned, so they are still scalable in memory; this preserves a global address space for dealing with LBCs and makes input trivial (just read and broadcast)
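The dimensioning of em_u_b above can be restated as a one-line sketch (Python for illustration; lbc_shape is a hypothetical helper summarizing the slide, not WRF code):

```python
# Hypothetical sketch of the LBC array shape from the slide:
# em_u_b(max(ide,jde), kde, spec_bdy_width, 4)
def lbc_shape(ide, jde, kde, spec_bdy_width):
    """First index: max of x and y extents (global, not decomposed);
    second: vertical; third: boundary width; fourth: which boundary
    (P_XSB, P_XEB, P_YSB, P_YEB)."""
    return (max(ide, jde), kde, spec_bdy_width, 4)

print(lbc_shape(ide=100, jde=80, kde=35, spec_bdy_width=5))
# (100, 35, 5, 4)
```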

LBC Arrays
(Diagrams: the boundary storage regions P_YSB, P_YEB, P_XEB with unused corner areas, shown first for a given domain and then for a subdomain that includes a domain boundary; jds, jde, ids, ide, and spec_bdy_width are marked.)

Four Dimensional Tracer Arrays
– State arrays, used to store arrays of 3D fields such as moisture tracers, chemical species, ensemble members, etc.
– First 3 indices are over grid dimensions; the last dimension is the tracer index
– Each tracer is declared in the Registry as a separate state array, but with f (and optionally also t) modifiers in the dimension field of the entry
– The field is then added to the 4D array whose name is given by the use field of the Registry entry

Four Dimensional Tracer Arrays
– Fields of a 4D array are input and output separately and appear as any other 3D field in a WRF dataset
– The extent of the last dimension of a tracer array is from PARAM_FIRST_SCALAR to num_tracername; both are defined in the Registry-generated frame/module_state_description.F
  – PARAM_FIRST_SCALAR is a defined constant (2)
  – num_tracername is computed at run time in set_scalar_indices_from_config (module_configure)
  – The calculation is based on which of the tracer arrays are associated with which specific packages in the Registry, and on which of those packages is active at run time (namelist.input)

Four Dimensional Tracer Arrays
– Each tracer index (e.g. P_QV) into the 4D array is also defined in module_state_description and set in set_scalar_indices_from_config
– Code should always test that a tracer index is greater than or equal to PARAM_FIRST_SCALAR before referencing the tracer (inactive tracers have an index of 1)
– Loops over tracer indices should always run from PARAM_FIRST_SCALAR to num_tracername
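The index guard described above can be sketched as follows (Python for illustration; the index values and the active_tracers helper are hypothetical, but PARAM_FIRST_SCALAR = 2 and the inactive index of 1 come from the slides):

```python
# Inactive tracers have index 1; valid slots in the 4D array start at
# PARAM_FIRST_SCALAR (a defined constant, 2) and run to num_tracername.
PARAM_FIRST_SCALAR = 2

def active_tracers(indices, num_tracer):
    """Return tracer names whose index refers to a real 4D-array slot."""
    return [name for name, p in indices.items()
            if PARAM_FIRST_SCALAR <= p <= num_tracer]

# hypothetical run-time configuration: qv and qc active, qr inactive
idx = {"P_qv": 2, "P_qc": 3, "P_qr": 1}
print(active_tracers(idx, num_tracer=3))   # ['P_qv', 'P_qc']
```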

Parallel Infrastructure

Distributed memory parallelism
– Some basics
– API: module_dm.F routines; Registry interface (gen_comms.c)
– Data decomposition
– Communications
Shared memory parallelism
– Tiling
– Threading directives
– Thread safety

Some Basics on DM Parallelism
Principal types of explicit communication:
– Halo exchanges
– Periodic boundary updates
– Parallel transposes
– Special-purpose scatter/gather for nesting
Also:
– Broadcasts
– Reductions (missing; using MPI directly)
– Patch-to-global and global-to-patch
– Built-in I/O server mechanism

Some Basics on DM Parallelism
– All DM comm operations are collective
– Semantics for specifying halo exchanges, periodic boundary updates, and transposes allow message agglomeration (bundling)
– Halos and periods allow fields to have varying-width stencils within the same operation
– Efficient implementation is up to the external package implementing communications
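To make the halo-exchange idea concrete, here is a deliberately simplified 1D sketch (Python for illustration only; real WRF halo exchanges are multi-dimensional, bundled across fields, and implemented by the external comm package, e.g. RSL):

```python
# Simplified 1D halo exchange: each patch's ghost cells are filled
# from its neighbors' edge cells; 'width' plays the role of the
# stencil width, which may differ per field in a real bundled exchange.
def exchange_halo(patches, width):
    """patches: list of lists, one per process, left to right.
    Domain-edge ghost cells are zero-filled in this sketch."""
    out = []
    for i, p in enumerate(patches):
        left = patches[i - 1][-width:] if i > 0 else [0] * width
        right = patches[i + 1][:width] if i < len(patches) - 1 else [0] * width
        out.append(left + p + right)   # ghost | owned | ghost
    return out

print(exchange_halo([[1, 2, 3], [4, 5, 6]], width=1))
# [[0, 1, 2, 3, 4], [3, 4, 5, 6, 0]]
```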

DM Comms API
– The external package provides a number of subroutines in module_dm.F (a partial API specification is available)
– Actual invocation of halos, periods, and transposes provided by a specific external package is through #include files in the inc directory; this provides greater flexibility and latitude to the implementer than a subroutine interface
– The package implementer may define comm-invocation include files manually, or they can be generated automatically by the Registry by providing a routine external/package/gen_comms.c for inclusion in the Registry program

A few notes on RSL implementation
– RSL maintains descriptors for domains and the operations on the domains
– An operation such as a halo exchange is a collection of logical "messages", one per point on the halo's stencil
– Each message is a collection of fields that should be exchanged for that point
– RSL stores up this information in tables, then compiles an efficient communication schedule the first time the operation is invoked for a domain
(Diagram: a 24-point stencil with msg1{ u, v }, msg2{ t, w, ps }, ...)

Example HALO_EM_D2_5
Defined in Registry:

  halo HALO_EM_D2_5 dyn_em 48:u_2,v_2,w_2,t_2,ph_2;\
                           24:moist_2,chem_2;\
                           4:mu_2,al

Used in dyn_em/solve_em.F:

  #ifdef DM_PARALLEL
    IF      ( h_mom_adv_order <= 4 ) THEN
  #   include "HALO_EM_D2_3.inc"
    ELSE IF ( h_mom_adv_order <= 6 ) THEN
  #   include "HALO_EM_D2_5.inc"
    ELSE
      WRITE(wrf_err_message,*)'solve_em: invalid h_mom_adv_order '
      CALL wrf_error_fatal ( TRIM(wrf_err_message) )
    ENDIF
  # include "PERIOD_BDY_EM_D.inc"
  # include "PERIOD_BDY_EM_MOIST2.inc"
  # include "PERIOD_BDY_EM_CHEM2.inc"
  #endif

Example HALO_EM_D2_5
Defined in Registry:

  halo HALO_EM_D2_5 dyn_em 48:u_2,v_2,w_2,t_2,ph_2;\
                           24:moist_2,chem_2;\
                           4:mu_2,al

Generated include file:

  !STARTOFREGISTRYGENERATEDINCLUDE 'inc/HALO_EM_D2_5.inc'
  !
  ! WARNING This file is generated automatically by use_registry
  ! using the data base in the file named Registry.
  ! Do not edit. Your changes to this file will be lost.
  !
  IF ( grid%comms( HALO_EM_D2_5 ) == invalid_message_value ) THEN
    CALL wrf_debug ( 50, 'set up halo HALO_EM_D2_5' )
    CALL setup_halo_rsl( grid )
    CALL reset_msgs_48pt
    CALL add_msg_48pt_real ( u_2, (glen(2)) )
    CALL add_msg_48pt_real ( v_2, (glen(2)) )
    CALL add_msg_48pt_real ( w_2, (glen(2)) )
    CALL add_msg_48pt_real ( t_2, (glen(2)) )
    CALL add_msg_48pt_real ( ph_2, (glen(2)) )
    if ( P_qv .GT. 1 ) CALL add_msg_24pt_real ( moist_2 ( grid%sm31,grid%sm32,grid%sm33,P_qv), glen(2) )
    if ( P_qc .GT. 1 ) CALL add_msg_24pt_real ( moist_2 ( grid%sm31,grid%sm32,grid%sm33,P_qc), glen(2) )
    if ( P_qr .GT. 1 ) CALL add_msg_24pt_real ( moist_2 ( grid%sm31,grid%sm32,grid%sm33,P_qr), glen(2) )
    if ( P_qi .GT. 1 ) CALL add_msg_24pt_real ( moist_2 ( grid%sm31,grid%sm32,grid%sm33,P_qi), glen(2) )
    if ( P_qs .GT. 1 ) CALL add_msg_24pt_real ( moist_2 ( grid%sm31,grid%sm32,grid%sm33,P_qs), glen(2) )
    if ( P_qg .GT. 1 ) CALL add_msg_24pt_real ( moist_2 ( grid%sm31,grid%sm32,grid%sm33,P_qg), glen(2) )
    CALL add_msg_4pt_real ( mu_2, 1 )
    CALL add_msg_4pt_real ( al, (glen(2)) )
    CALL stencil_48pt ( grid%domdesc, grid%comms ( HALO_EM_D2_5 ) )
  ENDIF
  CALL rsl_exch_stencil ( grid%domdesc, grid%comms( HALO_EM_D2_5 ) )

Notes on Period Communication
(Diagrams over the C-grid example: updating the mass-point periodic boundary transfers the m1,* and m4,* columns into the opposite-side ghost regions; updating the u-staggered periodic boundary transfers the u2,* and u4,* columns, and the end column of the staggered dimension is replicated.)

Welcome To the Inaugural Meeting of the WRF Software Training and Documentation Team Jan. 26-28, 2004 NCAR, MMM Division

Tuesday, January 27, 2004
– Detailed code walk-through
– I/O
– Misc. topics: Registry, error handling, time management, build mechanism

Detailed WRF Code Walkthrough

Detailed Code Walkthrough
– The walkthrough was conducted using the following set of Notes
– The WRF Code Browser was used to peruse the code and dive down at various points
– The walkthrough began with the main/wrf.F routine (when you bring up the browser, this should be in the upper right-hand frame; if not, click the link WRF in the lower left-hand frame, under Programs)

I/O

– Concepts
– I/O software stack
– I/O and model coupling API

WRF I/O Concepts
– The WRF model has multiple input and output streams that are bound to a particular format at run time
– Different formats (NetCDF, HDF, binary I/O) are implemented behind a standardized WRF I/O API
– Lower levels of the WRF I/O software stack allow expression of a dataset open as a two-stage operation: OPEN BEGIN and then OPEN COMMIT
  – Between the OPEN BEGIN and OPEN COMMIT the program performs the sequence of writes that will constitute one frame of output, to "train" the interface
  – An implementation of the API is free to use this information for optimization/bundling/etc., or to ignore it
– Higher levels of the WRF I/O software stack provide a BEGIN/TRAIN/COMMIT form of an OPEN as a single call
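The two-stage open can be sketched as follows (Python for illustration; the Dataset class and its method names are hypothetical stand-ins for the WRF I/O API's OPEN BEGIN / training writes / OPEN COMMIT sequence):

```python
# Hypothetical sketch: between open_begin() and open_commit(), writes
# only record ("train") the frame layout; an implementation may use the
# recorded plan for bundling or optimization, or ignore it.
class Dataset:
    def __init__(self):
        self.training = False
        self.frame_plan = []   # field names recorded during training
        self.frames = []       # actual written data
    def open_begin(self):
        self.training = True
    def write_field(self, name, data=None):
        if self.training:
            self.frame_plan.append(name)      # training write: record only
        else:
            self.frames.append((name, data))  # real write
    def open_commit(self):
        self.training = False                 # plan fixed; writes are real now

ds = Dataset()
ds.open_begin()
ds.write_field("U"); ds.write_field("V")      # one frame's worth of writes
ds.open_commit()
print(ds.frame_plan)   # ['U', 'V'] -- the trained frame layout
```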

I/O Software Stack
– Domain I/O
– Field I/O
– Package-independent I/O API
– Package-specific I/O API

Domain I/O
Routines in share/module_io_domain.F: high-level routines that apply to operations on a domain and a stream
– open and define a stream for writing in a single call that contains the OPEN FOR WRITE BEGIN, the series of "training writes" to a dataset, and the final OPEN FOR WRITE COMMIT
– read or write all the fields of a domain that make up a complete frame on a stream (as specified in the Registry) with a single call
– some WRF-model-specific file name manipulation routines

Field I/O
Routines in share/module_io_wrf.F
– Many of the routines here duplicate the routines in share/module_io_domain.F and are an example of unnecessary layering in the WRF I/O software stack
– However, this file does contain the base output_wrf and input_wrf routines, which are what all the stream-specific wrappers (duplicated in the two layers) ultimately call

Field I/O
output_wrf and input_wrf
– Contain hard-coded WRF-specific metadata puts (for output) and gets (for input)
  – Whether metadata is output or input is controlled by a flag in the grid data structure
  – Metadata output is turned off when output_wrf is being called as part of a "training write" within a two-stage open; it is turned on when it is called as part of an actual write
– Contain a Registry-generated series of calls to the WRF I/O API to write or read individual fields

Package-independent I/O API
frame/module_io.F
– These routines correspond to the WRF I/O API specification
– They start with the wrf_ prefix (package-specific routines start with ext_package_)
– The package-independent routines here contain logic for:
  – selecting between formats (package-specific) based on what stream is being written and what format is specified for that stream
  – calling the external package as a parallel package (each process passes its subdomain) or collecting data and calling on a single WRF process
  – passing the data off to the asynchronous quilt-servers instead of calling the I/O API from this task

Package-specific I/O API
Format-specific implementations of I/O:
– external/io_netcdf/wrf_io.F90
– external/io_int/io_int.F90
– external/io_phdf5/wrf-phdf5.F90
– external/io_mcel/io_mcel.F90
The NetCDF version contains a small program, diffwrf.F90, that uses the API to read and then generate an ASCII dump of a field readable by HMV (a small plotting program we use in-house for debugging and quick output). diffwrf is also useful as a small example of how to use the I/O API to read a WRF data set.

Misc. Topics

– Registry
– Error handling
– Time management
– Build mechanism

Registry
– Overview of the Registry program
– Survey of what is autogenerated

Registry Source Files (in tools/)
– registry.c: main program
– reg_parse.c: parser; reads the Registry file and builds the AST
– gen_allocs.c: generate allocate statements
– gen_args.c: generate argument lists
– gen_comms.c: generate comms (STUBS or PACKAGE SPECIFIC)
– gen_config.c: generate namelist handling code
– gen_defs.c: generate variable/dummy arg declarations
– gen_interp.c: generate nest interpolation code
– gen_mod_state_descr.c: generate frame/module_state_description.F
– gen_model_data_ord.c: generate inc/model_data_ord.inc
– gen_scalar_derefs.c: generate grid dereferencing code for non-arrays
– gen_scalar_indices.c: generate code for 4D array indexing
– gen_wrf_io.c: generate calls to the I/O API for fields
– misc.c, my_strtok.c: utilities used in the registry program
– data.c: abstract syntax tree routines
– sym.c, symtab_gen.c: symbol table (used by parser and AST)
– type.c: type handling, derived data types, misc.

What the Registry Generates
Include files in the inc directory…

WRF Error Handling
frame/module_wrf_error.F
Routines for:
– incremental debugging output: WRF_DEBUG
– producing diagnostic messages: WRF_MESSAGE
– writing an error message and terminating: WRF_ERROR_FATAL

WRF Time Management
– Implementation of the ESMF Time Manager
– Defined in external/esmf_time_f90
– Objects: Clocks, Alarms, Time Instances, Time Intervals

WRF Time Management
Operations on ESMF time objects
– For example: +, -, and other arithmetic is defined for time intervals and instances
– I/O intervals are specified by setting alarms on clocks that are stored for each domain; see share/set_timekeeping.F
– The I/O operations are called when these alarms "go off"; see MED_BEFORE_SOLVE_IO in share/mediation_integrate.F
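The alarm-driven I/O pattern can be sketched as follows (Python for illustration; the Alarm class is a hypothetical stand-in for ESMF alarms set on each domain's clock, not the ESMF API):

```python
# Hypothetical sketch: an alarm with a ring interval is checked each
# model step; when it "goes off", the I/O operation runs and the alarm
# is reset to the next interval.
class Alarm:
    def __init__(self, interval):
        self.interval = interval
        self.next_ring = interval
    def is_ringing(self, now):
        return now >= self.next_ring
    def reset(self):
        self.next_ring += self.interval

history_alarm = Alarm(interval=3)     # e.g. write history every 3 steps
rang_at = []
for step in range(1, 10):             # the integration loop
    if history_alarm.is_ringing(step):
        rang_at.append(step)          # here the model would write a frame
        history_alarm.reset()
print(rang_at)   # [3, 6, 9]
```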

WRF Build Mechanism
Structure
Scripts:
– configure
  – Determines the architecture using 'uname', then searches the arch/configure.defaults file for the list of possible compile options for that system. Typically the choices involve compiling for single-threaded, pure shared memory, pure distributed memory, or hybrid; there may be other options too
  – Creates the file configure.wrf, included by Makefiles
– compile [scenario]
  – Checks for the existence of configure.wrf
  – Checks the environment for core-specific settings such as WRF_EM_CORE or WRF_NMM_CORE
  – Invokes the make command on the top-level Makefile, passing it information about the specific targets to be built depending on the scenario argument to the script
– clean [-a]
  – Cleans the code, or really cleans the code

WRF Build Mechanism
Structure (continued)
– arch/configure.defaults: file containing settings for various architectures
– test directory
  – Contains a set of subdirectories, one for each scenario; includes idealized cases as well as directories for running real-data cases
  – The compile script requires the name of one of these directories (for example "compile em_real"), and based on that it compiles wrf.exe and the appropriate preprocessor (for example real.exe) and creates symbolic links from the test/em_real subdirectory to these executables

WRF Build Mechanism
Structure (continued)
– Top-level Makefile and Makefiles in subdirectories
– The compile script invokes the top-level Makefile as "make scenario"
– The top-level Makefile, including rules and targets from the configure.wrf file generated by the configure script, then recursively invokes Makefiles in subdirectories in order:
  – external (external packages, based on configure.wrf)
  – tools (builds registry)
  – frame (invokes registry and then builds framework)
  – shared (mediation layer and other modules and subroutines)
  – physics (physics packages)
  – dyn_* (core-specific code)
  – main (main routine and link to produce executables)