1 Some Coding Structure in WRF

2 Software Architecture
- F90 with structures and dynamic memory allocation
- Modules
- Run-time configurable
- Hierarchical software design
Features
- Multi-level parallel decomposition: shared-, distributed-, and hybrid-memory

3 Multi-level parallel decomposition
Model domains are decomposed for parallelism on two levels:
- Patch: section of the model domain allocated to a distributed-memory node. Distributed-memory parallelism is over patches.
- Tile: section of a patch allocated to a shared-memory processor within a node; this is also the scope of a model-layer subroutine. Shared-memory parallelism is over tiles within patches.
A logical domain is 1 patch, divided into multiple tiles. A single version of the code executes efficiently on distributed-memory, shared-memory, and hybrid-memory configurations.

4 Three Sets of Dimensions
- Domain size: ids, ide, jds, jde, kds, kde
- Memory size: ims, ime, jms, jme, kms, kme
- Tile size: its, ite, jts, jte, kts, kte
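The relationship between the three dimension sets can be illustrated with a small index-arithmetic sketch. This is illustrative Python, not WRF code; the even-split rule and the halo width of 2 are assumptions made for the example:

```python
# Sketch of WRF's two-level decomposition index arithmetic (illustrative
# Python, not WRF code). Even-split rule and halo width are assumptions.

HALO = 2  # assumed halo (ghost-cell) width

def decompose(n, nparts, part):
    """1-based start/end indices for `part` of `n` points split `nparts` ways."""
    base, rem = divmod(n, nparts)
    start = part * base + min(part, rem) + 1
    size = base + (1 if part < rem else 0)
    return start, start + size - 1

def patch_dims(ids, ide, npatch, p):
    """Patch (its:ite) and memory (ims:ime) ranges for patch p in one direction."""
    ips, ipe = decompose(ide - ids + 1, npatch, p)
    return (ips, ipe), (ips - HALO, ipe + HALO)  # memory dims pad by the halo

# Domain of 100 points in i, 4 distributed-memory patches; patch 0:
(ips, ipe), (ims, ime) = patch_dims(1, 100, 4, 0)
# Tiles subdivide the patch for shared-memory threads (patch-relative ranges):
tiles = [decompose(ipe - ips + 1, 2, t) for t in range(2)]
```

Note how the memory range extends beyond the patch by the halo width on each side; this is exactly why dummy arguments are dimensioned with memory dimensions while loops run over tile dimensions.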

5 Template for a model-layer subroutine

  SUBROUTINE model (                        &
             arg1, arg2, arg3, ..., argn,   &
             ids, ide, jds, jde, kds, kde,  & ! Domain dims
             ims, ime, jms, jme, kms, kme,  & ! Memory dims
             its, ite, jts, jte, kts, kte )   ! Tile dims
    IMPLICIT NONE
    ! Define arguments (S and I1 data)
    REAL, DIMENSION (ims:ime,kms:kme,jms:jme) :: arg1, ...
    REAL, DIMENSION (ims:ime,jms:jme)         :: arg7, ...
    ! Define local data (I2)
    REAL, DIMENSION (its:ite,kts:kte,jts:jte) :: loc1, ...
    ! Executable code; loops run over tile dimensions
    DO j = jts, jte
      DO k = kts, kte
        DO i = MAX(its,ids), MIN(ite,ide)
          loc1(i,k,j) = arg1(i,k,j) + ...
        END DO
      END DO
    END DO

Domain dimensions: size of the logical domain; used for boundary tests, etc.

6 Template for a model-layer subroutine (same template as slide 5, shown against the logical patch)

Domain dimensions: size of the logical domain; used for boundary tests, etc.

7 Distributed Memory Communications (dyn_eh/module_diffusion.F)

  SUBROUTINE horizontal_diffusion_s (tendency, rr, var, ...
    DO j = jts, jte
      DO k = kts, ktf
        DO i = its, ite
          mrdx = msft(i,j)*rdx
          mrdy = msft(i,j)*rdy
          tendency(i,k,j) = tendency(i,k,j) -                      &
              (mrdx*0.5*((rr(i+1,k,j)+rr(i,k,j))*H1(i+1,k,j) -    &
                         (rr(i-1,k,j)+rr(i,k,j))*H1(i,k,j)) +      &
               mrdy*0.5*((rr(i,k,j+1)+rr(i,k,j))*H2(i,k,j+1) -    &
                         (rr(i,k,j-1)+rr(i,k,j))*H2(i,k,j)) -      &
               msft(i,j)*(H1avg(i,k+1,j)-H1avg(i,k,j) +            &
                          H2avg(i,k+1,j)-H2avg(i,k,j))/dzetaw(k))
        ENDDO
  ...

This is an example of a code fragment that requires communication between patches. Note the tell-tale +1 and -1 expressions in the indices of the rr and H1 arrays on the right-hand side of the assignment. These are horizontal data dependencies: the indexed operands may lie in the patch of a neighboring processor, and that neighbor's updates to those array elements won't be seen on this processor. We have to communicate.
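The remedy for such dependencies is a halo (ghost-cell) exchange before the stencil loop. A minimal 1-D sketch of the idea, in illustrative Python — WRF's real halo communications are generated from Registry "halo" entries, not hand-coded like this:

```python
# Minimal 1-D halo-exchange sketch (illustrative; WRF generates its real
# halo communications from Registry "halo" entries).

def exchange_halos(patches, halo=1):
    """Fill each patch's ghost cells from its neighbors' interior points.
    Each patch is a list: [left ghosts][interior][right ghosts]."""
    n = len(patches)
    for p in range(n):
        left, right = patches[(p - 1) % n], patches[(p + 1) % n]
        # left ghost cells <- left neighbor's rightmost interior points
        patches[p][:halo] = left[-2 * halo:-halo]
        # right ghost cells <- right neighbor's leftmost interior points
        patches[p][-halo:] = right[halo:2 * halo]

# Global field 0..7 split into two patches of 4 interior points each,
# padded with one ghost cell on each side (periodic domain):
a = [None, 0, 1, 2, 3, None]
b = [None, 4, 5, 6, 7, None]
exchange_halos([a, b], halo=1)
# After the exchange, a stencil such as x[i-1] + x[i+1] is valid at
# every interior point of each patch.
```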

8 Template for a model-layer subroutine (same template as slide 5, shown against the logical patch)

Domain dimensions: size of the logical domain; used for boundary tests, etc.

9 Template for a model-layer subroutine (same template as slide 5)

Diagram: one node's logical patch with its halo (ims..ime, jms..jme).
Domain dimensions: size of the logical domain; used for boundary tests, etc.
Memory dimensions: used to dimension dummy arguments; do not use for local arrays.

10 Template for a model-layer subroutine (same template as slide 5)

Diagram: a tile (its..ite, jts..jte) inside the patch memory (ims..ime, jms..jme) with its halo.
Domain dimensions: size of the logical domain; used for boundary tests, etc.
Memory dimensions: used to dimension dummy arguments; do not use for local arrays.
Tile dimensions: local loop ranges; local array dimensions.

11 WRF Data Taxonomy
- State data
- Intermediate data type 1 (I1)
- Intermediate data type 2 (I2)

12 State data
- Persists for the duration of a domain
- Represented as fields in the domain data structure
- Arrays are represented as dynamically allocated pointer arrays in the domain data structure
- Declared in the Registry using the state keyword
- Always memory dimensioned; always thread shared
- Only state arrays can be subject to I/O and interprocessor communication

13 Template for a model-layer subroutine (same template as slide 5)

Diagram: one node's logical patch with its halo (ims..ime, jms..jme).
Domain dimensions: size of the logical domain; used for boundary tests, etc.
Memory dimensions: used to dimension dummy arguments; do not use for local arrays.

14 I1 Data
- Persists for the duration of one time step on a domain and is then released
- Declared in the Registry using the i1 keyword
- Typically automatic storage (program stack) in the solve routine
- Typical usage is for tendency or temporary arrays in the solver
- Always memory dimensioned and thread shared
- Typically not communicated or subject to I/O

15 I2 Data
- I2 data are local arrays that exist only in model-layer subroutines, and only for the duration of the call to the subroutine
- Not declared in the Registry; never communicated; never input or output
- Tile dimensioned and thread local; over-dimensioning within the routine for redundant computation is allowed
- The responsibility of the model-layer programmer
- Should always be limited to thread-local data

16 Template for a model-layer subroutine (same template as slide 5)

Diagram: a tile (its..ite, jts..jte) inside the patch memory (ims..ime, jms..jme) with its halo.
Domain dimensions: size of the logical domain; used for boundary tests, etc.
Memory dimensions: used to dimension dummy arguments; do not use for local arrays.
Tile dimensions: local loop ranges; local array dimensions.

17 The Registry
An "active data dictionary" for managing WRF data structures.
A database describing the attributes of model state, intermediate, and configuration data:
- Dimensionality, number of time levels, staggering
- Association with physics
- I/O classification (history, initial, restart, boundary)
- Communication points and patterns
- Configuration lists (e.g. namelists)
A program for auto-generating sections of WRF from the database — 570 Registry entries generate about 30 thousand lines of WRF code:
- Allocation statements for state data and I1 data
- Argument lists for driver-layer/mediation-layer interfaces
- Interprocessor communications: halo and periodic boundary updates, transposes
- Code for defining and managing run-time configuration information
- Code for forcing, feedback, and interpolation of nest data
Benefits: automates time-consuming, repetitive, error-prone programming; insulates programmers and code from package dependencies; allows rapid development; documents the data.

18 Registry database
Currently implemented as a text file: Registry/Registry. Types of entry:
- State – describes state variables and arrays in the domain structure
- Dimspec – describes dimensions that are used to define arrays in the model
- I1 – describes local variables and arrays in solve
- Typedef – describes derived types that are subtypes of the domain structure
- Rconfig – describes a configuration (e.g. namelist) variable or array
- Package – describes attributes of a package (e.g. physics)
- Halo – describes halo-update interprocessor communications
- Period – describes communications for periodic boundary updates
- Xpose – describes communications for parallel matrix transposes

19 State/I1 Entry (Registry)

Example:
  # Type Sym Dims Use     Tlev Stag IO  Dname   Descrip
  # definition of a 3D, two-time-level, staggered state array
  state real ru  ikj dyn_em 2   X    irh "RHO_U" "X WIND COMPONENT"
  i1    real ww1 ikj dyn_em 1   Z

Elements:
- Entry: the keyword "state"
- Type: the type of the state variable or array (real, double, integer, logical, character, or derived)
- Sym: the symbolic name of the variable or array
- Dims: a string denoting the dimensionality of the array, or a hyphen (-)
- Use: a string denoting association with a solver or 4D scalar array, or a hyphen
- NumTLev: an integer indicating the number of time levels (for arrays), or a hyphen (for variables)
- Stagger: a string indicating the staggered dimensions of the variable (X, Y, Z, or hyphen)
- IO: a string indicating whether and how the variable is subject to I/O and nesting
- DName: metadata name for the variable
- Descrip: metadata description of the variable
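The fixed, whitespace-separated layout of these entries makes them straightforward to tokenize. A toy parser for the state-entry shape above (illustrative Python only; WRF's real Registry program handles many more entry types and fields):

```python
# Toy parser for Registry state entries (illustrative only; the real
# Registry tool is far more elaborate).
import shlex

FIELDS = ["entry", "type", "sym", "dims", "use", "ntl", "stag", "io",
          "dname", "descrip"]

def parse_state_entry(line):
    # shlex keeps the quoted DName/Descrip fields intact even though
    # they contain spaces; comments=True skips '#' comment lines
    toks = shlex.split(line, comments=True)
    return dict(zip(FIELDS, toks))

e = parse_state_entry(
    'state real ru ikj dyn_em 2 X irh "RHO_U" "X WIND COMPONENT"')
# e["sym"] -> "ru", e["ntl"] -> "2", e["dname"] -> "RHO_U"
```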

20 State Entry – different output times

Example. In the Registry:
  state real ru ikj dyn_em 2 X irh0 1 "RHO_U" "XX"

In namelist.input:
  auxhist1_outname    = 'pm_output_d _ '
  auxhist1_interval   = 10000, 10000, 5
  frames_per_auxhist1 = 30, 30, 24
  auxhist1_begin_y    = 0
  auxhist1_begin_mo   = 0
  auxhist1_begin_d    = 1
  auxhist1_begin_h    = 0
  auxhist1_begin_m    = 0
  auxhist1_begin_s    = 0
  io_form_auxhist1    = 2,

This gives a five-minute output interval on domain 3, starting after 1 day of simulation.

22 Rconfig Entry (Registry)
This defines namelist entries.

Example:
  # Type        Sym     How set            Nentries Default
  rconfig integer dyn_opt namelist,namelist_

Elements:
- Entry: the keyword "rconfig"
- Type: the type of the namelist variable (integer, real, logical, string)
- Sym: the name of the namelist variable or array
- How set: indicates how the variable is set, e.g. namelist or derived; if namelist, which block of the namelist it is set in
- Nentries: specifies the dimensionality of the namelist variable or array. If 1 (one), it is a variable and applies to all domains; otherwise specify max_domains (an integer parameter defined in module_driver_constants.F)
- Default: the default value of the variable, used if none is specified in the namelist; hyphen (-) for no default
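The Nentries/max_domains convention can be sketched as follows. This is illustrative Python; the MAX_DOMAINS value is a stand-in, and the expansion rule is inferred from the slide's description rather than taken from WRF source:

```python
# Sketch of how an rconfig default might be expanded per domain
# (illustrative; MAX_DOMAINS is a stand-in for the parameter in
# module_driver_constants.F, and the rule is inferred from the slide).

MAX_DOMAINS = 21  # assumed value for this sketch

def expand_rconfig(nentries, default, user_values=None):
    """Nentries == 1: one value applies to all domains;
    otherwise one value per domain, up to MAX_DOMAINS."""
    n = 1 if nentries == 1 else MAX_DOMAINS
    vals = [default] * n
    for i, v in enumerate(user_values or []):
        vals[i] = v          # namelist.input overrides, domain by domain
    return vals

scalar = expand_rconfig(1, 0)  # one value for all domains
per_dom = expand_rconfig("max_domains", 10000, [10000, 10000, 5])
# per_dom[2] is domain 3's setting; unlisted domains keep the default
```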

23 Package Entry (Registry)

Example:
  # specification of microphysics options
  package passiveqv     mp_physics==0 - moist:qv
  package kesslerscheme mp_physics==1 - moist:qv,qc,qr
  package linscheme     mp_physics==2 - moist:qv,qc,qr,qi,qs,qg
  package ncepcloud3    mp_physics==3 - moist:qv,qc,qr
  package ncepcloud5    mp_physics==4 - moist:qv,qc,qr,qi,qs
  # namelist entry that controls the microphysics option
  rconfig integer mp_physics namelist,namelist_04 max_domains 0

Elements:
- Entry: the keyword "package"
- Package name: the name of the package, e.g. "kesslerscheme"
- Associated rconfig choice: the name of an rconfig variable and the value of that variable that chooses this package
- Package state vars: unused at present; specify a hyphen (-)
- Associated 4D scalars: the names of 4D scalar arrays and the fields within those arrays that this package uses

24 Comm entries: halo and period

Elements:
- Entry: the keyword "halo" or "period"
- Commname: the name of the communication operation
- Description: defines the halo or period operation
  - For halo:   npts:f1,f2,...[;npts:f1,f2,...]*
  - For period: width:f1,f2,...[;width:f1,f2,...]*

Example:
  # first exchange in eh solver
  halo HALO_EH_A dyn_em 24:u_2,v_2,ru_1,ru_2,rv_1,rv_2,w_2,t_2;4:pp,pip
  # a periodic boundary update
  period PERIOD_EH_A dyn_em 2:u_1,u_2,ru_1,ru_2,v_1,v_2,rv_1,rv_2,rw_1,rw_2
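The description string groups fields by stencil size (or period width), separated by semicolons. A small parser sketch for that string format (illustrative Python, not part of WRF):

```python
# Parse a Registry halo/period description string of the form
# "npts:f1,f2,...;npts:f1,f2,..." (illustrative sketch, not WRF code).

def parse_comm_desc(desc):
    groups = {}
    for part in desc.split(";"):
        npts, fields = part.split(":")
        groups[int(npts)] = fields.split(",")
    return groups

g = parse_comm_desc("24:u_2,v_2,w_2,t_2;4:pp,pip")
# g maps stencil size -> list of fields exchanged with that stencil
```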

25 4D Tracer Arrays
- State arrays used to store arrays of 3D fields such as moisture tracers, chemical species, ensemble members, etc.
- The first 3 indices are over grid dimensions; the last dimension is the tracer index
- Each tracer is declared in the Registry as a separate state array, but with the f (and optionally also t) modifiers in the dimension field of the entry
- The field is then added to the 4D array whose name is given by the use field of the Registry entry

26 State Entries for the moist 4D Tracer Array (Registry)

  state real qv ikjft moist 2 - \
    i01rhusdf=(bdy_interp:dt,rqv_b,rqv_bt) "QVAPOR" "Water vapor mixing ratio" "kg kg-1"
  state real qc ikjft moist 2 - \
    i01rhusdf=(bdy_interp:dt,rqc_b,rqc_bt) "QCLOUD" "Cloud water mixing ratio" "kg kg-1"
  state real qr ikjft moist 2 - \
    i01rhusdf=(bdy_interp:dt,rqr_b,rqr_bt) "QRAIN" "Rain water mixing ratio" "kg kg-1"
  state real qi ikjft moist 2 - \
    i01rhusdf=(bdy_interp:dt,rqi_b,rqi_bt) "QICE" "Ice mixing ratio" "kg kg-1"
  state real qs ikjft moist 2 - \
    i01rhusdf=(bdy_interp:dt,rqs_b,rqs_bt) "QSNOW" "Snow mixing ratio" "kg kg-1"
  state real qg ikjft moist 2 - \
    i01rhusdf=(bdy_interp:dt,rqg_b,rqg_bt) "QGRAUP" "Graupel mixing ratio" "kg kg-1"

27 4D Tracer Arrays (continued)
- The extent of the last dimension of a tracer array runs from PARAM_FIRST_SCALAR to num_tracername
- Both are defined in the Registry-generated frame/module_state_description.F
- PARAM_FIRST_SCALAR is a defined constant (2)
- num_tracername is computed at run time in set_scalar_indices_from_config (module_configure)
- The calculation is based on which tracer arrays are associated with which packages in the Registry, and on which of those packages is active at run time (namelist.input)

28 4D Tracer Arrays (continued)
- Each tracer index (e.g. P_QV) into the 4D array is also defined in module_state_description and set in set_scalar_indices_from_config
- Code should always test that a tracer index is greater than or equal to PARAM_FIRST_SCALAR before referencing the tracer (inactive tracers have an index of 1)
- Loops over tracer indices should always run from PARAM_FIRST_SCALAR to num_tracername
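The run-time index assignment and the guard can be sketched together. This is illustrative Python: the package table is a toy patterned on the slides, not WRF's actual set_scalar_indices_from_config:

```python
# Sketch of run-time tracer-index assignment for a 4D scalar array
# (illustrative; patterned on the slides, not WRF's actual routine).

PARAM_FIRST_SCALAR = 2  # index 1 is reserved for inactive tracers

PACKAGES = {  # mp_physics value -> fields active in the "moist" array
    1: ["qv", "qc", "qr"],                    # kesslerscheme
    2: ["qv", "qc", "qr", "qi", "qs", "qg"],  # linscheme
}

def set_scalar_indices(mp_physics, all_fields):
    active = PACKAGES[mp_physics]
    idx, nxt = {}, PARAM_FIRST_SCALAR
    for f in all_fields:
        if f in active:
            idx[f] = nxt     # active tracer gets the next real slot
            nxt += 1
        else:
            idx[f] = 1       # inactive tracer points at the dummy slot
    return idx, nxt - 1      # per-field indices, and num_moist

idx, num_moist = set_scalar_indices(1, ["qv", "qc", "qr", "qi", "qs", "qg"])

# Guard before use, as the coding rules require:
if idx["qi"] >= PARAM_FIRST_SCALAR:
    pass  # only then is it safe to reference the qi slice of moist
```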

29 4D Tracer Array Example
The 4D moisture field is moist_1(i,k,j,?), where ? is one of:
- P_QV (water vapor)
- P_QC (cloud water)
- P_QI (cloud ice)
- P_QR (rain)
- P_QS (snow)
- P_QG (graupel)

  IF ( qi_flag ) THEN   ! the memory for cloud ice is allocated
  ...

30 Registry Directory Structure

31 WRF Mass-Coordinate Model Integration Procedure (WRFV3/dyn_em/solve_em.F)

Begin time step
  Runge-Kutta loop (steps 1, 2, and 3)
    (i) advection, p-grad, buoyancy
    (ii) if step 1 (first_rh_part1/part2): physics, save for steps 2 and 3
    (iii) assemble dynamics tendencies
    Acoustic step loop
      (i) advance U, V, then w
      (ii) time-average U, V
    End acoustic loop
    Advance scalars using time-averaged U, V
  End Runge-Kutta loop
  Other physics (currently microphysics)
End time step
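The three Runge-Kutta steps follow the Wicker–Skamarock RK3 pattern, advancing with dt/3, dt/2, and dt in turn. A scalar sketch of just that time-stepping skeleton (illustrative Python; the acoustic substeps nested inside each RK step are omitted):

```python
# Skeleton of the three-step Runge-Kutta (Wicker-Skamarock RK3) update,
# shown for a scalar ODE dphi/dt = f(phi). Illustrative Python; the
# acoustic substeps inside each RK step are omitted.
import math

def rk3_step(phi, f, dt):
    phi1 = phi + dt / 3.0 * f(phi)   # RK step 1: advance by dt/3
    phi2 = phi + dt / 2.0 * f(phi1)  # RK step 2: advance by dt/2
    return phi + dt * f(phi2)        # RK step 3: full-dt update

# Example: exponential decay f(phi) = -phi, exact solution exp(-t)
phi, dt = 1.0, 0.01
for _ in range(100):                 # integrate to t = 1
    phi = rk3_step(phi, lambda p: -p, dt)
```

Note that each step restarts from the time-level value phi, using the tendency evaluated at the previous provisional state; this matches the slide's "save physics for steps 2 and 3" structure, since the physics tendency is computed once in step 1 and reused.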

32 [Call-tree diagram: WRF → phy_init (INIT) and solve_em (DYNAMICS, part 1) → phy_prep / moist_physics_prep → microphysics_driver, radiation_driver, cumulus_driver, surface_driver, pbl_driver]

33 Physics
Calculate decoupled variable tendencies; update decoupled variables directly.
- Cumulus parameterization
- Boundary layer parameterization
- Radiation parameterization
- Microphysics

34 Physics three-level structure

  solve_em
     |
  physics_driver
     SELECT CASE (CHOICE)
       CASE ( NOPHY )
       CASE ( SCHEME1 )
         CALL XXX
       CASE ( SCHEME2 )
         CALL YYY
       ...
       CASE DEFAULT
     END SELECT
     |
  individual physics scheme ( XXX )

35 Rules for WRF physics

Naming rules: module_yy_xxx.F (module)
- xxx = individual scheme, e.g. module_cu_grell.F
- yy: ra is for radiation, bl is for PBL, sf is for surface and surface layer, cu is for cumulus, mp is for microphysics

36 Rules for WRF physics

Naming rules: RXXYYTEN (tendencies)
- XX = variable (th, u, v, qv, qc, ...)
- YY: ra is for radiation, bl is for PBL, cu is for cumulus
- e.g. RTHBLTEN
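The naming rule is mechanical enough to express as a one-liner (illustrative Python; the helper function itself is made up for this sketch, not part of WRF):

```python
# Build a WRF physics tendency name from the naming rule RXXYYTEN
# (illustrative helper; the function itself is not part of WRF).

def tendency_name(var, scheme):
    """var: th, u, v, qv, qc, ...; scheme: ra, bl, cu, ..."""
    return f"R{var.upper()}{scheme.upper()}TEN"

# potential-temperature tendency from the PBL scheme:
name = tendency_name("th", "bl")
```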

37 Rules for WRF physics
- Naming rules
- Coding rules (later)
- One scheme, one module

38 WRF Physics Features
Unified global constants (module_model_constants.F):

  REAL, PARAMETER :: r_d = 287.
  REAL, PARAMETER :: r_v =
  REAL, PARAMETER :: cp  = 7.*r_d/2.
  REAL, PARAMETER :: cv  = cp-r_d
  ...

39 WRF Physics Features
- Unified global constants (module_model_constants.F)
- Vertical index (kms is at the bottom)
- Unified common calculations (e.g. saturation mixing ratio)

40 Implement a new physics scheme
- Prepare your code
- Create a new module
- Declare new variables and a new package in the Registry
- Modify solve_em.F
- Do initialization
- Modify the namelist
- Modify phy_prep

41 Implement a new physics scheme (continued)
- Modify cumulus_driver.F (using cumulus parameterization as the example)
- Modify calculate_phy_ten
- Modify phy_cu_ten (module_physics_addtendc.F)
- Modify the Makefile
- Compile and test

42 [Call-tree diagram: WRF → phy_init (INIT) and solve_em (DYNAMICS, part 1) → phy_prep / moist_physics_prep → microphysics_driver, radiation_driver, cumulus_driver, surface_driver, pbl_driver]

43 Prepare your code
1. F90
a) Replace continuation characters in the 6th column with the F90 continuation `&` at the end of the previous line.

F77:
        Subroutine kessler(QV, T,
       +                   its,ite,jts,jte,kts,kte,
       +                   ims,ime,jms,jme,kms,kme,
       +                   ids,ide,jds,jde,kds,kde)

F90:
  Subroutine kessler(QV, T, ...          &
                     its,ite,jts,jte,kts,kte, &
                     ims,ime,jms,jme,kms,kme, &
                     ids,ide,jds,jde,kds,kde )
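This mechanical rewrite can be scripted. A minimal sketch (illustrative Python; it handles only the plain column-6 case, ignoring strings, comment lines, and other fixed-form subtleties):

```python
# Convert fixed-form (F77) column-6 continuation lines to free-form (F90)
# trailing '&' continuations. Minimal sketch: ignores strings, comments,
# and other fixed-form subtleties.

def f77_to_f90_continuations(lines):
    out = []
    for line in lines:
        # a fixed-form continuation line has a non-blank, non-'0' character
        # in column 6 and blanks in columns 1-5
        if len(line) > 5 and line[5] not in " 0" and line[:5].strip() == "":
            out[-1] = out[-1] + " &"          # mark the previous line
            out.append(line[:5] + " " + line[6:])  # drop the column-6 marker
        else:
            out.append(line)
    return out

src = ["      call kessler(qv, t,",
       "     +     its, ite, jts, jte)"]
new = f77_to_f90_continuations(src)
```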

44 Prepare your code
1. F90
b) Replace `C` in the 1st column (comment) with `!`.

F77:
  c This is a test
F90:
  ! This is a test

45 Prepare your code
1. F90
2. No common blocks — pass data through argument lists instead:

  common/var1/ T, q, p, ...

becomes, in WRF:

  Subroutine sub(T, q, p, ...)
  real, intent(out), &
       dimension(ims:ime,kms:kme,jms:jme) :: T, q, p

46 Prepare your code
1. F90  2. No common blocks
3. Use "implicit none"
4. Use "intent"

  Subroutine sub(T, q, p, ...)
  implicit none
  real, intent(out), &
       dimension(ims:ime,kms:kme,jms:jme) :: T
  real, intent(in), &
       dimension(ims:ime,kms:kme,jms:jme) :: q
  real, intent(inout), &
       dimension(ims:ime,kms:kme,jms:jme) :: p

47 Prepare your code
1. F90  2. No common blocks  3. Use "implicit none"  4. Use "intent"
5. Variable dimensions

  Subroutine sub(global, ...)
  implicit none
  real, intent(out), &
       dimension(ims:ime,kms:kme,jms:jme) :: global
  real, dimension(its:ite,kts:kte,jts:jte) :: local

48 Prepare your code
1. F90  2. No common blocks  3. Use "implicit none"  4. Use "intent"  5. Variable dimensions
6. Do loops

  do j = jts, jte
    do k = kts, kte
      do i = its, ite
        ...
      enddo
    enddo
  enddo

49 Implement a new physics scheme
- Create a new module, e.g. module_cu_exp.F (plug in all your code)
- Go to the Registry and declare a new package (and new variables) (WRFV1/Registry):

  package expscheme cu_physics==3 - -
  package kfscheme  cu_physics==1 - -
  package bmjscheme cu_physics==2 - -

50 Implement a new physics scheme (continued)
Cloud microphysics packages, for comparison:

  package kesslerscheme mp_physics==1 - moist:qv,qc,qr
  package linscheme     mp_physics==2 - moist:qv,qc,qr,qi,qs,qg
  package wsm3          mp_physics==3 - moist:qv,qc,qr
  package wsm5          mp_physics==4 - moist:qv,qc,qr,qi,qs

51 Implement a new physics scheme (continued)
- Modify namelist.input and assign cu_physics = 3

52 [Call-tree diagram: WRF init → start_domain_em (dyn_em, start_em.F) → phy_init (phys, module_physics_init.F) → cu_init; solve_em (dyn_em)]

53 Pass the new variables down to cu_init (phys/module_physics_init.F)

54 Go to subroutine cu_init (phys/module_physics_init.F): include the new module and create a new SELECT case.

55 phys/module_physics_init.F

  Subroutine cu_init(...)
  ...
  USE module_cu_kf
  USE module_cu_bmj
  USE module_cu_exp        ! new: the experimental scheme's module
  ...
  cps_select: SELECT CASE(config_flags%cu_physics)
    CASE (KFSCHEME)        ! case names match the package names in Registry
      CALL kfinit(...)
    CASE (BMJSCHEME)
      CALL bmjinit(...)
    CASE (EXPSCHEME)
      CALL expinit(...)    ! expinit goes into module_cu_exp.F
    CASE DEFAULT
  END SELECT cps_select

56 [Call-tree diagram: WRF → phy_init (INIT) and solve_em (DYNAMICS, part 1) → phy_prep / moist_physics_prep → microphysics_driver]

57 phy_prep / moist_physics_prep
- Calculate required variables
- Convert variables from the C grid to the A grid

58 [Call-tree diagram as in slide 32, with the experimental scheme (expcps) now called from cumulus_driver]

59 Three-level structure (as in slide 34): solve_em → physics_driver (SELECT CASE over schemes) → individual physics scheme ( XXX )

60 cumulus_driver.F
Go to the physics driver (cumulus_driver.F):
- Include the new module and create a new SELECT CASE in the driver
- Check the available variables in the driver (variables are explained inside the drivers)

61 module_cumulus_driver.F

  MODULE module_cumulus_driver
  CONTAINS
    Subroutine cumulus_driver (...)
    ...
    !-- RQICUTEN  Qi tendency due to cumulus scheme precipitation (kg/kg/s)
    !-- RAINC     accumulated total cumulus scheme precipitation (mm)
    !-- RAINCV    cumulus scheme precipitation (mm)
    !-- NCA       counter of the cloud relaxation time in KF cumulus scheme (integer)
    !-- u_phy     u-velocity interpolated to theta points (m/s)
    !-- v_phy     v-velocity interpolated to theta points (m/s)
    !-- th_phy    potential temperature (K)
    !-- t_phy     temperature (K)
    !-- w         vertical velocity (m/s)
    !-- moist     moisture array (4D - last index is species) (kg/kg)
    !-- dz8w      dz between full levels (m)
    !-- p8w       pressure at full levels (Pa)

62 module_cumulus_driver.F

  MODULE module_cumulus_driver
  CONTAINS
    Subroutine cumulus_driver
    ...
    USE module_cu_kf
    USE module_cu_bmj
    USE module_cu_exp       ! new: the experimental scheme's module
    ...
    cps_select: SELECT CASE(config_flags%cu_physics)
      CASE (KFSCHEME)       ! case names match the package names in Registry
        CALL KFCPS(...)
      CASE (BMJSCHEME)
        CALL BMJCPS(...)
      CASE (EXPSCHEME)
        CALL EXPCPS(...)    ! EXPCPS goes into module_cu_exp.F
      CASE DEFAULT
    END SELECT cps_select

63 [Call-tree diagram as in slide 32: WRF → phy_init (INIT), solve_em (DYNAMICS, part 1) → phy_prep / moist_physics_prep → physics drivers]

64 [Flow diagram: solve_em → phy_prep → cumulus_driver → expcps (part 1); then calculate_phy_tend → update_phy_ten → phy_cu_ten (part 2); message passing?]

65 phys/module_physics_addtendc.F

  Subroutine phy_cu_ten (...)
  ...
  CASE (BMJSCHEME)
  ...
  CASE (EXPSCHEME)
    CALL add_a2a  (rt_tendf, RTHCUTEN, ... )
    CALL add_a2c_u(ru_tendf, RUBLTEN, ... )
    CALL add_a2c_v(rv_tendf, RVBLTEN, ... )
    ...
    if ( QI_FLAG ) &
      CALL add_a2a(moist_tendf(ims,kms,jms,P_QV), RQVCUTEN, .. &
                   ids, ide, jds, jde, kds, kde,               &
                   ims, ime, jms, jme, kms, kme,               &
                   its, ite, jts, jte, kts, kte )
  ...

