-SELFE Users Training Course- (3) SELFE code
Joseph Zhang, CMOP, OHSU
SELFE flow chart

[Flow chart: Compute geometry -> Initialization or hot start -> time-stepping loop (forcings, backtracking, turbulence closure, update levels & eqstate, transport, preparation (wave continuity), w, momentum eq. & solver, output) -> Completed? -> end]

All numbers are based on the most recent CORIE run using serial version 1.5k2. Approximate cost per step as labeled on the chart: forcings 4%, backtracking 40%, output 3%, turbulence closure 6%, update levels & eqstate 1%, transport ~10%, preparation (wave continuity) 11%, w <1%, momentum eq. 21%, solver <1%.
Serial code structure (1)

Dimensioning parameters - static allocation of memory:

  ! Dimensioning parameters
  integer, parameter :: mnp=31000
  integer, parameter :: mne=60000
  integer, parameter :: mns=91000
  integer, parameter :: mnv=54
  ! user-defined tracer part
  integer, parameter :: ntracers=0
  ! end user-defined tracer part
  integer, parameter :: mntr=max(2,ntracers)
  integer, parameter :: mnei=20 !neighbor
  integer, parameter :: mne_kr=100 !max. # of elements used in Kriging
  integer, parameter :: mnei_kr=6 !max. # of pts used in Kriging
  integer, parameter :: mnope=20 !# of open bnd segments
  integer, parameter :: mnond=1000 !max. # of open-bnd nodes on each segment
  integer, parameter :: mnland=220 !# of land bnd segments
  integer, parameter :: mnlnd=10000 !max. # of land nodes on each segment
  integer, parameter :: mnbfr=15 !# of forcing freqs.
  integer, parameter :: itmax=5000 !# of iterations for itpack solvers; used for dimensioning
  integer, parameter :: nwksp=6*mnp+4*itmax !available work space for itpack solvers
  integer, parameter :: nbyte=4
  integer, parameter :: mnout=100 !max. # of output var.
  integer, parameter :: mirec= !max. record # to prevent output ~> 4GB

How to read the code:
  Array indices: tr_el(mnv,mne,mntr)
  Loops are organized around elements, nodes, and sides

Major blocks:
  read in vgrid: SZ
  param.in, part I: CPP projection center for hgrid
  hgrid & bnd info
  compute connectivity info: ball of nodes; side info; open bnd sides

[Diagram: local numbering of nodes and sides within an element, and the local frame (x', y')]
Serial code structure (2)

  Side neighborhood info for filter and hvis; geometry check, and end of ipre=1
  param.in, part II: remaining parameters
  Kriging cloud and inversion of matrices with LAPACK
  Initialization:
    bottom index
    cold start: i.c. for S,T & tracers; wind field; nudging fields (S&T); GOTM
    hotstart: read in all state variables; reset time=0 for ihot=1; find position in all *.th, atmos. and nudging inputs
    write header of outputs
    initial vgrid: vertical indices; z-coord.; wet/dry; bed deformation
    initial eqstate at nodes
    initialize heat exchange
  Time stepping:
    bed deformation
    bottom drag; horizontal viscosity; tidal potential; wind stress and atmos. fluxes; S,T nudging values
    read in *.th; compute vel. b.c. based on discharges (except uv3D.th)
    backtracking (more later)
    density gradients at nodes and sides using cubic spline
    turbulence closure: at nodes

A typical loop over sides:

  do j=1,ns
    if(idry_s(j)==1) then
      do k=1,nvrt
        su2(k,j)=0
        sv2(k,j)=0
      enddo !k
      cycle
    endif
    ! Wet sides
    node1=isidenode(j,1)
    node2=isidenode(j,2)
    ......
  enddo !j
Serial code structure (3)

  Time stepping (cont.):
    continuity eq.: evaluation of FE matrices; compute sparse matrix (I1-5) and impose essential b.c.; solver: CG with Jacobi pre-conditioner
    momentum eq.: at sides; mismatch of Z layers: volume conservation; filter
    vertical velocity: at elements
    transport eq.:
      ELM: nodes and sides (need btrack values)
      upwind & TVD: prisms (no btrack); do_transport_tvd()
      conversion to nodes for output and for eqstate (closure)
      tracers
      nudging and b.c.
    new vgrid: wet/dry (more later)
    new (u,v,w) at nodes
    eqstate
    calculation of fluxes and conservation check (using internal variables)
    global outputs
Vertical indices: levels0()

  Compute z-coordinates at nodes, sides and elements
  Set wet/dry flags for elements:
    an element is "dry" if any of its nodes is dry
    an element is wet if all its nodes are wet (and then all its sides are wet as well)
  Weed out isolated wet nodes: a node is wet if and only if at least one surrounding element is wet
  A side is wet if and only if at least one adjacent element is wet
  In short: a wet element has 3 wet nodes and 3 wet sides; a dry element may have wet or dry nodes/sides
  Consistency of these indices is fundamental in the code
  Rewetted nodes: calculate u,v (average); S & T (tracking)
  Rewetted sides: calculate u,v (average); S & T (tracking)
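The flagging rules above compress into a few loops. Below is a minimal sketch, not the actual levels0() code: the connectivity tables nm and isdel and all sizes are illustrative stand-ins; only idry, idry_e and idry_s mirror names seen in the code excerpts elsewhere in these slides.

  program wetdry_sketch
    implicit none
    integer, parameter :: ne=4, np=5, ns=7
    integer :: idry(np), idry_e(ne), idry_s(ns)
    integer :: nm(ne,3)     !element-node table (triangles assumed)
    integer :: isdel(ns,2)  !side-element table; 0 = no neighbor element
    integer :: i,j,n
    logical :: haswet

    nm=1; isdel=0; idry=0   !placeholder connectivity and initial node flags

    ! An element is dry if any of its nodes is dry
    do i=1,ne
      idry_e(i)=0
      do j=1,3
        if(idry(nm(i,j))==1) idry_e(i)=1
      enddo
    enddo

    ! Weed out isolated wet nodes: wet iff >=1 surrounding wet element
    do n=1,np
      haswet=.false.
      do i=1,ne
        do j=1,3
          if(nm(i,j)==n .and. idry_e(i)==0) haswet=.true.
        enddo
      enddo
      if(.not.haswet) idry(n)=1
    enddo

    ! A side is wet iff at least one adjacent element is wet
    do i=1,ns
      idry_s(i)=1
      do j=1,2
        if(isdel(i,j)/=0) then
          if(idry_e(isdel(i,j))==0) idry_s(i)=0
        endif
      enddo
    enddo
  end program wetdry_sketch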
Vertical indices: levels1()

  Suitable for fine grid resolution
  Iterate to find interfacial nodes
  Go along the shoreline and make surrounding elements wet or dry
  Enforce consistency in wet/dry flags
  Compute side vel. at newly wetted sides based on the average of neighbors
  Final extrapolation of elevation into the dry zone
  Weed out isolated wet nodes
  Reset vel. at dry sides to 0
  Reset elevations at dry nodes below MSL to add a thin wet layer
  Carry on with the rest as in levels0()
Backtracking (1): quicksearch()

  Inputs: initial position (x0,y0), nnel0, jlev0, total time, estimated destination (xt,yt)
  Outputs: updated destination (xt,yt), nnel1, jlev1, ratios for 3D interpolation; nfl - abnormal flag
  Iteration - track the intersecting sides:
    check if (xt,yt) is already inside nnel0
    nudge (x0,y0) to avoid underflow problems
    compute the first intersecting side
  Abnormal cases: wet/dry interface; horizontal bnd; too many iterations ("death trap")
    wet/dry interface or horizontal bnd: continue with the tangential velocity (if it is too small, exit with nfl=1 and the most recent position for (xt,yt))
    death trap: exit with nfl=1
  Tricky underflow problems (check if inside the element ...)
  Moving-target search - role of clock time (trm)
  Complex logic, but efficient

[Diagram: a trajectory from (x0,y0) in element nnel0 toward (xt,yt), crossing element sides]
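The "is (xt,yt) already inside the element" test can be done with signed areas, with a small area-scaled tolerance to sidestep the underflow issues mentioned above. A minimal sketch, assuming counterclockwise node ordering; signed_area() and all values are illustrative, not SELFE code.

  program inside_sketch
    implicit none
    real(8) :: x(3),y(3),xt,yt,a1,a2,a3,ar,eps
    x=(/0.d0,1.d0,0.d0/); y=(/0.d0,0.d0,1.d0/)  !element vertices (CCW)
    xt=0.25d0; yt=0.25d0                        !candidate destination
    ar=signed_area(x(1),y(1),x(2),y(2),x(3),y(3))
    a1=signed_area(xt,yt,x(2),y(2),x(3),y(3))
    a2=signed_area(x(1),y(1),xt,yt,x(3),y(3))
    a3=signed_area(x(1),y(1),x(2),y(2),xt,yt)
    eps=1.d-12*abs(ar)    !tolerance scaled by element area
    if(a1>=-eps.and.a2>=-eps.and.a3>=-eps) then
      print*,'inside; area coordinates =',a1/ar,a2/ar,a3/ar
    else
      print*,'outside'
    endif
  contains
    real(8) function signed_area(x1,y1,x2,y2,x3,y3)
      real(8),intent(in)::x1,y1,x2,y2,x3,y3
      signed_area=0.5d0*((x2-x1)*(y3-y1)-(x3-x1)*(y2-y1))
    end function
  end program inside_sketch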
Backtracking (2): Euler

  Subroutine btrack()
  Inputs: starting position (x00,y00,z00, ielem etc.), vel., and algorithm flag (iadvf)
  Outputs: final destination, element, level, vel., and interpolated S,T
  Euler tracking:
    first-order accuracy
    sub-step computed from the local gradient
    use quicksearch() to find the end point of each sub time step
    do 3D interpolation at the end points for the new vel. (interpol.gr3 for vertical): a blend of continuous and discontinuous vel.
    if nfl=1 during the stepping, exit
  After tracking:
    Kriging (optional): do vertical interpolation first to obtain Kriging data; use the inverted matrix
    S,T at the foot of the char. line: needs S,T at nodes & sides
      split linear
      quadratic: upwind bias in the vertical; use a quadratic shape function in 3D; constrain near bottom or surface
    Record extrema among the values used in the interpolation (6x3)
    Impose bounds on the interpolated values (alleviates dispersion)

[Diagram: char. line with constrained vs. normal interpolation cases]
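A minimal sketch of the Euler sub-stepping idea, tracking backward along the characteristic line with a fixed substep. Here vel2d() is a stand-in for the 3D velocity interpolation plus the quicksearch() call that the real btrack() performs at each substep.

  program euler_btrack_sketch
    implicit none
    integer, parameter :: nsub=10
    real(8) :: x,y,u,v,dt,dtb
    integer :: i
    dt=150.d0; dtb=dt/nsub   !substep (cf. dtb_max1 in param.in)
    x=0.d0; y=1.d0           !starting position
    do i=1,nsub
      call vel2d(x,y,u,v)    !stand-in for 3D interpolation + quicksearch()
      x=x-u*dtb              !backward in time along the char. line
      y=y-v*dtb
    enddo
    print*,'foot of characteristic line:',x,y
  contains
    subroutine vel2d(x,y,u,v) !stand-in velocity field (solid rotation)
      real(8),intent(in)::x,y
      real(8),intent(out)::u,v
      u=-y; v=x
    end subroutine
  end program euler_btrack_sketch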
Backtracking (3): 5th-order adaptive Runge-Kutta

  Based on 4th-order R-K, but with an adaptive step to increase accuracy and efficiency
  Error estimator: the embedded 4th/5th-order difference; given a desired accuracy D0 and an error estimate D1, the appropriate time step follows the standard controller h_new = h (D0/D1)^{1/5}
  In the code, the adaptivity loop is limited to 5 iterations
  No abnormal exit from quicksearch()
  After the foot of the char. line is found, proceed as before
  Efficiency: often larger time steps are invoked; moderately more expensive than Euler
  Accuracy: conservation
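A minimal sketch of the step-size control described above, with the retry loop capped at 5 iterations as on this slide. Here estimate_error() is a placeholder for the embedded 4th/5th-order error estimate, and the 0.9 safety factor is a common choice, not taken from SELFE.

  program adapt_rk_sketch
    implicit none
    real(8) :: h, err, d0, safety
    integer :: iter
    d0=1.d-6           !desired accuracy D0
    safety=0.9d0       !conventional safety factor (assumption)
    h=15.d0            !initial (max.) step, cf. dtb_max1
    do iter=1,5        !adaptivity loop limited to 5 iterations
      err=estimate_error(h)       !embedded 4th/5th-order estimate
      if(err<=d0) exit            !step accepted
      h=safety*h*(d0/err)**0.2d0  !h_new = 0.9 h (D0/D1)^{1/5}
    enddo
    print*,'accepted step:',h
  contains
    real(8) function estimate_error(h) !placeholder error model ~ h^5
      real(8),intent(in)::h
      estimate_error=1.d-10*h**5
    end function
  end program adapt_rk_sketch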
Parallel version: domain decomposition

  The domain is first partitioned into non-overlapping sub-domains (in the element sense); a sub-domain may not be contiguous
  Each sub-domain is then augmented with 1 layer of ghost elements, where the exchange of info occurs
    residents (elements, sides, nodes) vs. ghosts
  aquire_hgrid(.false.) in partition_hgrid(): initial partition based on a naïve ordering of elements
  ParMetis lib: an MPI-based parallel library implementing a variety of algorithms for partitioning and repartitioning unstructured graphs, based on the multilevel partitioning and fill-reducing ordering algorithms of the serial package METIS
    optimal partitioning is obtained by minimizing the edge cuts
    adjustable parameters for optimal performance
  aquire_hgrid(.true.): final partitioning
Parallel version: nomenclature

  Each process (sub-domain) has its own set of variables
    global-to-local and local-to-global mappings
    resident vs. augmented (i.e., resident + ghost)
  Linked lists: iegl()%next%next%next...
    each process has its own version of this list, with the local element number at the head (if inside the aug. domain), followed by the rank numbers in ascending order
    ipgl() and isgl(): interfacial nodes/sides are resident in more than 1 domain
    consistency among processes (e.g., elevations): follow the smallest rank
  Local vs. ball info: need for exchange of info
  Computation is usually done inside the resident domain; communication is done in the ghost domain (expensive)
  Communication: MPI standard; bundle up before posting
  Boundary info: all global info, for simplicity

  Counts per category:
              Global      Local (non-augmented)   Ghost   Augmented
    Elements  ne_global   ne                      neg     nea
    Nodes     np_global   np                      npg     npa
    Sides     ns_global   ns                      nsg     nsa

[Diagram: a linked list from head to tail, terminated by NULL]
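A minimal sketch of such a linked list in Fortran: the component layout (rank, local id, next) follows the description above, but the type name and values are illustrative, not the actual SELFE declarations.

  program llist_sketch
    implicit none
    type llist
      integer :: rank, id              !owning rank and local element number
      type(llist), pointer :: next => null()
    end type
    type(llist), pointer :: head, p
    ! Build a 2-entry list: local copy on rank 0 (local id 17), also owned by rank 2
    allocate(head);      head%rank=0; head%id=17
    allocate(head%next); head%next%rank=2; head%next%id=0
    ! Traverse, as in iegl(ie)%next%next...
    p=>head
    do while(associated(p))
      print*,'rank',p%rank,'  local id',p%id
      p=>p%next
    enddo
  end program llist_sketch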
Parallel version: communication (1)

  ! Count and index sends
  nbtsend=0
  do i=1,nbts
    irank=iegrpv(btsendq(i)%iegb)
    inbr=ranknbr(irank)
    if(inbr==0) then
      write(errmsg,*) 'INTER_BTRACK: bt to non-neighbor!',irank
      call parallel_abort(errmsg)
    endif
    nbtsend(inbr)=nbtsend(inbr)+1
    if(nbtsend(inbr)>mnbt) call parallel_abort('bktrk_subs: overflow (1)')
    ibtsend(nbtsend(inbr),inbr)=i-1 !displacement
  enddo

  ! Set MPI bt send datatypes
  do inbr=1,nnbr
    if(nbtsend(inbr)/=0) then
#if MPIVERSION==1
      call mpi_type_indexed(nbtsend(inbr),bbtsend,ibtsend(1,inbr),bt_mpitype, &
                            btsend_type(inbr),ierr)
#elif MPIVERSION==2
      call mpi_type_create_indexed_block(nbtsend(inbr),1,ibtsend(1,inbr),bt_mpitype, &
                                         btsend_type(inbr),ierr)
#endif
      if(ierr/=MPI_SUCCESS) call parallel_abort('INTER_BTRACK: create btsend_type',ierr)
      call mpi_type_commit(btsend_type(inbr),ierr)
      if(ierr/=MPI_SUCCESS) call parallel_abort('INTER_BTRACK: commit btsend_type',ierr)
    endif
  enddo !inbr

  ! Post recvs for bt counts
  do inbr=1,nnbr
    call mpi_irecv(nbtrecv(inbr),1,itype,nbrrank(inbr),700,comm,btrecv_rqst(inbr),ierr)
    if(ierr/=MPI_SUCCESS) call parallel_abort('INTER_BTRACK: irecv 700',ierr)
  enddo

  ! Post sends for bt counts
  do inbr=1,nnbr
    call mpi_isend(nbtsend(inbr),1,itype,nbrrank(inbr),700,comm,btsend_rqst(inbr),ierr)
    if(ierr/=MPI_SUCCESS) call parallel_abort('INTER_BTRACK: isend 700',ierr)
  enddo
Parallel version: communication (2)

  ! Wait for recvs to complete
  call mpi_waitall(nnbr,btrecv_rqst,btrecv_stat,ierr)
  if(ierr/=MPI_SUCCESS) call parallel_abort('INTER_BTRACK: waitall recv 700',ierr)

  ! Wait for sends to complete
  call mpi_waitall(nnbr,btsend_rqst,btsend_stat,ierr)
  if(ierr/=MPI_SUCCESS) call parallel_abort('INTER_BTRACK: waitall send 700',ierr)
! write(30,*)'Mark 4.5'

  ! Set MPI bt recv datatypes
  i=0     !total # of recvs from all neighbors
  nnbrq=0 !# of "active" neighbors
  do inbr=1,nnbr
    if(nbtrecv(inbr)/=0) then
      nnbrq=nnbrq+1
      if(nbtrecv(inbr)>mnbt) call parallel_abort('bktrk_subs: overflow (3)')
      do j=1,nbtrecv(inbr); ibtrecv(j,inbr)=i+j-1; enddo !displacement
      i=i+nbtrecv(inbr)
#if MPIVERSION==1
      call mpi_type_indexed(nbtrecv(inbr),bbtrecv,ibtrecv(1,inbr),bt_mpitype, &
                            btrecv_type(inbr),ierr)
#elif MPIVERSION==2
      call mpi_type_create_indexed_block(nbtrecv(inbr),1,ibtrecv(1,inbr),bt_mpitype, &
                                         btrecv_type(inbr),ierr)
#endif
      if(ierr/=MPI_SUCCESS) call parallel_abort('INTER_BTRACK: create btrecv_type',ierr)
      call mpi_type_commit(btrecv_type(inbr),ierr)
      if(ierr/=MPI_SUCCESS) call parallel_abort('INTER_BTRACK: commit btrecv_type',ierr)
!     write(30,*)'recev from',nbrrank(inbr),nbtrecv(inbr)
    endif
  enddo !inbr

  ! Check bound for btrecvq
  if(i>mnbt*nnbr) call parallel_abort('bktrk_subs: overflow (2)')
! write(30,*)'Mark 5'

  ! Post sends for bt data
  do inbr=1,nnbr
    if(nbtsend(inbr)/=0) then
      ! btsendq(1)%rank is the starting address
      call mpi_isend(btsendq(1)%rank,1,btsend_type(inbr),nbrrank(inbr),701, &
                     comm,btsend_rqst(inbr),ierr)
      if(ierr/=MPI_SUCCESS) call parallel_abort('INTER_BTRACK: isend 701',ierr)
    else
      btsend_rqst(inbr)=MPI_REQUEST_NULL
    endif
  enddo

  ! Post recvs for bt data
  do inbr=1,nnbr
    if(nbtrecv(inbr)/=0) then
      call mpi_irecv(btrecvq(1)%rank,1,btrecv_type(inbr),nbrrank(inbr),701, &
                     comm,btrecv_rqst(inbr),ierr)
      if(ierr/=MPI_SUCCESS) call parallel_abort('INTER_BTRACK: irecv 701',ierr)
    else
      btrecv_rqst(inbr)=MPI_REQUEST_NULL
    endif
  enddo
Inter-subdomain backtracking: inter_btrack()

  Outer loop: all ranks communicate until every trajectory is done
  Inner loop: btrack() on trajectories inside the aug. domain; finished ones go to btdone%
  Send and recv lists (structures): btlist%, btsendq%, btrecvq%, bttmp%
  quicksearch() - abnormal cases:
    nfl=2: exit aug. domain upon entry
    nfl=3: exit aug. domain during the iteration (redo)

[Flow diagram: trajectories exiting the aug. domain are queued in btsendq%, exchanged among all ranks (mpi_waitsome on send%/recv%), re-tracked on the receiving rank, and accumulated in btdone% until all are done]
Solver: Jacobi Conjugate Gradient (JCG)

  Each rank stores the entire rows of A for its resident nodes
  Iteration loop (do ... end do): new guess x -> residual -> Jacobi pre-conditioner -> new gradient (search direction)
  Halo (interfacial) nodes:
    zero out halo nodes before the matrix-vector multiplication
    exchange ghosts whenever they are updated
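For reference, a serial sketch of Jacobi-preconditioned CG on a tiny dense system (the names tolerance/mxitn echo param.in). The parallel solver differs exactly as noted above: halo entries are zeroed before the matvec and ghosts are exchanged after each update. None of this code is from SELFE.

  program jcg_sketch
    implicit none
    integer, parameter :: n=3
    real(8) :: a(n,n), b(n), x(n), r(n), z(n), p(n), q(n)
    real(8) :: rho, rho_old, alpha, tol
    integer :: it, i, mxitn
    ! A small symmetric positive-definite test system
    a=reshape((/4.d0,1.d0,0.d0, 1.d0,3.d0,1.d0, 0.d0,1.d0,2.d0/),(/n,n/))
    b=(/1.d0,2.d0,3.d0/); x=0.d0
    tol=1.d-12; mxitn=1000          !cf. tolerance, mxitn in param.in
    r=b-matmul(a,x)                 !initial residual
    rho=0.d0
    do it=1,mxitn
      do i=1,n
        z(i)=r(i)/a(i,i)            !Jacobi preconditioner: z = D^{-1} r
      enddo
      rho_old=rho
      rho=dot_product(r,z)
      if(it==1) then
        p=z
      else
        p=z+(rho/rho_old)*p         !new search direction (gradient)
      endif
      q=matmul(a,p)                 !parallel code: zero halos, exchange ghosts here
      alpha=rho/dot_product(p,q)
      x=x+alpha*p                   !new guess
      r=r-alpha*q                   !new residual
      if(sqrt(dot_product(r,r))<tol) exit
    enddo
    print*,'iterations:',it,'  solution:',x
  end program jcg_sketch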
Using SELFE
Joseph Zhang, CMOP, OHSU
Grid generation

  Unstructured grids offer maximum flexibility:
    resolve geometry & bathymetry
    represent features
    local refinement & coarsening
  The grid must be conformal: the intersection of any 2 elements is either the empty set, a vertex, or an edge
  Methods: quadtree; advancing front; Delaunay
  Software: Gredit; Shewchuk's Triangle; ARGUS; SMS
SELFE inputs

  *.gr3: grid-like files
  *.in: vgrid; hotstart; parameters; nudging files
  *.th: time history (b.c.)
  *.ic: initial condition for T,S
  sflux/: atmos. forcings
  gotmturb.inp: GOTM turbulence closure inputs
  hgrid.ll: lon/lat form of the horizontal grid

Bare minimum (mandatory) for pre-processing (ipre=1):
  hgrid.gr3 (and hgrid.ll if you are using some modules): no need for correct bathymetry & boundaries yet
  vgrid.in
  param.in

Pre-processing tips:
  Prepare the 3 input files, and set ipre=1 in param.in (only the parts up to the CPP projection are needed)
  Run SELFE with 1 CPU only
  Check fort.11 for all violating sides (whose nodes are listed there) if you are using indvel=0 or -1 (filter)
  Edit the grid and repeat until pre-processing is successful

Other mandatory inputs: interpol.gr3 & a drag input (drag.gr3 or rough.gr3)
Inputs that differ between the serial and parallel versions: hotstart.in, *nu.in, *3D.th
.gr3 and .ll files

  Layout:
    comment line
    # of elements, # of nodes
    node table (node #, x, y, depth)
    element table
  Except for hgrid.gr3, only the depth info is used
  Boundary info (hgrid.gr3 only):
    2 = Number of open boundaries
    69 = Total number of open boundary nodes
    4 = Number of nodes for open boundary 1
    1
    2
    3
    4
    .........
    12 = number of land boundaries
    1756 = Total number of land boundary nodes
    737 0 = Number of nodes for land boundary 1
    29368
    29421
    29420
    29467
    .........
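A minimal sketch of reading the header, node table, and element table of this layout. It is illustrative: the element records are assumed to carry a node count (e.g., "id 3 n1 n2 n3" for triangles), and a real reader would go on to parse the boundary blocks.

  program read_gr3_sketch
    implicit none
    integer :: ne, np, i, j, id
    real(8), allocatable :: x(:), y(:), dp(:)
    integer, allocatable :: i34(:), nm(:,:)
    character(len=100) :: comment
    open(14,file='hgrid.gr3',status='old')
    read(14,'(a)') comment      !line 1: free comment
    read(14,*) ne, np           !line 2: # of elements, # of nodes
    allocate(x(np),y(np),dp(np),i34(ne),nm(ne,4))
    do i=1,np
      read(14,*) id, x(i), y(i), dp(i)              !node #, x, y, depth
    enddo
    do i=1,ne
      read(14,*) id, i34(i), (nm(i,j), j=1,i34(i))  !element connectivity
    enddo
    ! boundary blocks follow (hgrid.gr3 only)
    close(14)
  end program read_gr3_sketch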
vgrid.in

  Sample structure:
    ... !hs
    Z levels
    -5000.
    -3000.
    ........
    S levels
    ... !hc, qb, qf

  Remember: you can get a sneak preview of sample z-coordinates at various depths by running the pre-processor

[Diagram: SZ vertical grid: a pure S zone where depth < hs; in the SZ zone, S levels (s=0 at the surface down to s=-1 at level kz) overlie Z levels (kz-1 down to Nz); hc and hs are marked]
param.in (1)

Free format rules (a parser sketch follows the excerpt below):
  Lines beginning with "!" are comments; blank lines are ignored
  One line per parameter, in the format: keyword = value
  Keywords are case sensitive
  Spaces are allowed between the keyword, "=", and the value
  Comments starting with "!" are allowed after the value
  A value is an integer, a double, or a 2-char string; for a double, any of the usual formats (decimal or scientific notation) is acceptable; a decimal point in integers works but is discouraged
  The order of parameters is not important

  !
  ! Model configuration parameters
  !
  ! Pre-processing option. Useful for checking grid violations and getting sample z coordinates (sample_z.out)
  ipre = 0 !Pre-processor flag (1: on; 0: off)

  ! # of passive tracers; need to update bctides.in accordingly.
  ntracers = 0

  ! Bed deformation option (1: on; 0: off). If imm=1, bdef.gr3 is needed.
  imm = 0
  ! ibdef = 10 !needed if imm=1; # of steps used in deformation
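A sketch of a reader honoring these rules; it is illustrative, not the actual SELFE parser, and simply echoes each keyword/value pair it finds.

  program parse_sketch
    implicit none
    character(len=200) :: line, key, val
    integer :: ios, ieq, icm
    open(15,file='param.in',status='old')
    do
      read(15,'(a)',iostat=ios) line
      if(ios/=0) exit
      line=adjustl(line)
      if(len_trim(line)==0 .or. line(1:1)=='!') cycle  !skip blank/comment lines
      ieq=index(line,'=')
      if(ieq==0) cycle
      key=adjustl(line(1:ieq-1))                       !case-sensitive keyword
      val=adjustl(line(ieq+1:))
      icm=index(val,'!')                               !strip trailing comment
      if(icm>0) val=val(1:icm-1)
      print*, trim(key),' = ',trim(val)
    enddo
    close(15)
  end program parse_sketch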
param.in (2)

  !
  ! Coordinate option: 1: Cartesian; 2: lon/lat (but outputs are CPP projected
  ! to Cartesian). If ics=2, the CPP projection center is given by (cpp_lon,cpp_lat).
  ics = 1 !Coordinate option
  cpp_lon = !CPP projection center: lon
  cpp_lat = !CPP projection center: lat

  ! Baroclinic/barotropic option. If ibcc=0 (baroclinic model), itransport is not used.
  ibcc = 0 !Baroclinic option
  itransport = 1
  nrampbc = 1 !ramp-up flag for baroclinic force
  drampbc = 1. !not used if nrampbc=0

  ! Hotstart option. 0: cold start; 1: hotstart with time reset to 0; 2:
  ! continue from the step in hotstart.in
  ihot = 1

  !
  ! Physical parameters
  !
  ! Horizontal viscosity option; if ihorcon=1, horizontal viscosity is given in hvis.gr3.
  ihorcon = 0

  ! Horizontal diffusivity option. If ihdif=1, horizontal diffusivity is given in hdif.gr3.
  ihdif = 1
param.in (3)

  !
  ! Bottom drag formulation option. If idrag=1, linear drag is used (in this case, itur<0
  ! and bfric=0); if idrag=2 (default), the quadratic drag formulation is used.
  idrag = 2

  ! Bottom friction. bfric=0: drag coefficients specified in drag.gr3; bfric=1:
  ! bottom roughness (in meters) specified in rough.gr3
  bfric = 0 !nchi in code
  !Cdmax = 0.01 !needed if bfric=1

  ! Coriolis. If ncor=-1, specify "lattitude" (in degrees); if ncor=0,
  ! specify the Coriolis parameter in "coriolis"; if ncor=1, the model uses
  ! lat/lon in hgrid.ll for the beta-plane approximation, and in this case
  ! the latitude specified in the CPP projection ('cpp_lat') is used.
  ncor = 1
  !lattitude = 46 !if ncor=-1
  !coriolis = 1.e-4 !if ncor=0

  !
  ! Numerical parameters
  !
  ! Initial condition. This value only matters for ihot=0 (cold start).
  ! If icst=1, the initial T,S field is read in from temp.ic and salt.ic (horizontally varying).
  ! If icst=2, the initial T,S field is read in from ts.ic (vertically varying).
  icst = 1
param.in (4)

  !
  ! Mean T,S profile option. If ibcc_mean=1 (or ihot=0 and icst=2), the mean profile
  ! is read in from ts.ic, and is removed when calculating the baroclinic force.
  ! No ts.ic is needed if ibcc_mean=0.
  ibcc_mean = 0

  ! Methods for computing velocity at nodes. If indvel=-1, a non-conformal
  ! linear shape function is used for velocity; if indvel=0, a conformal
  ! linear shape function is used; if indvel=1, an averaging method is used.
  ! For indvel<=0, a Shapiro filter is used for the side velocity.
  indvel = 0
  shapiro = 0.5 !default is 0.5

  ! Max. horizontal velocity magnitude, used mainly to prevent problems in
  ! the bulk aerodynamic module
  rmaxvel = 10.

  ! Min. vel. for backtracking
  velmin_btrack = 1.e-3
param.in (5)

  !
  ! Wetting and drying. If ihhat=1, \hat{H} is made non-negative to enhance
  ! robustness near wetting and drying; if ihhat=0, no restriction is imposed on
  ! this quantity.
  ! inunfl=0 is used for normal cases; inunfl=1 (not available yet) is used for more
  ! accurate wetting and drying if the grid resolution is sufficiently fine.
  ihhat = 1
  inunfl = 0
  h0 = 0.01 !min. water depth for wetting/drying

  ! Implicitness factor (0.5<thetai<=1).
  thetai = 0.6

  ! Run time and ramp option
  rnday = 42 !total run time in days
  nramp = 1 !ramp-up option (1: on; 0: off)
  dramp = 2. !needed if nramp=1; ramp-up period in days
  dt = 150. !Time step in sec

  ! Solver options. JCG is used presently.
  slvr_output_spool = 50 !output spool for solver info
  mxitn = 1000 !max. iterations allowed
  tolerance = 1.e-12 !error tolerance
param.in (6)

  !
  ! Advection (ELM) option. If nadv=1, backtracking is done using the Euler method, and
  ! 'dtb_max1' is the _minimum_ step used ('dtb_max2' is not needed). If nadv=2,
  ! backtracking is done using the 5th-order Runge-Kutta method and 'dtb_max1' is
  ! the max. step used. If nadv=0, advection in momentum is turned off/on in adv.gr3
  ! (the depths=0,1, or 2 there also control the backtracking method as above);
  ! in this case, 'dtb_max1' is the _minimum_ step used in Euler (depth=1) and 'dtb_max2' is
  ! the max. step used in 5th-order R-K (depth=2).
  nadv = 1
  dtb_max1 = 15.
  dtb_max2 = 15.

  ! Interpolation methods in ELM for S,T and velocity. If inter_st=1, split linear
  ! is used for T,S at the foot of the char. line. If inter_st=2, quadratic interpolation
  ! is used there. If inter_st=0, the interpolation method is specified in lqk.gr3.
  ! If inter_mom=0, linear interpolation is used for velocity at the foot of the char. line.
  ! If inter_mom=1, Kriging is used, and the choice of covariance function is
  ! specified in 'kr_co'.
  ! For velocity, additional controls are available in 'blend_internal' and 'blend_bnd',
  ! two parameters specifying how continuous and discontinuous velocities are blended
  ! for internal and boundary sides.
  inter_st = 1 !formerly lq
  inter_mom = 0
  kr_co = 1 !not used if inter_mom=0
  blend_internal = 0.
  blend_bnd = 0.
param.in (7)

  !
  ! Transport method. If iupwind_t=0, ELM is used for T and S. If
  ! iupwind_t=1, the upwind method is used. If iupwind_t=2, the
  ! 2nd-order TVD method is used. If iupwind_t>0, the interpolation
  ! method above ('inter_st') does not affect T (or S).
  iupwind_t = 1
  !tvd_mid = AA
  !flimiter = SB

  ! Atmos. option. If nws=0, no atmos. forcing is applied. If nws=1, atmos.
  ! variables are read in from wind.th. If nws=2, atmos. variables are
  ! read in from sflux_ files.
  ! If nws>0, 'iwindoff' can be used to scale the wind speed (with windfactor.gr3).
  nws = 2
  wtiminc = 150. !needed if nws/=0; time step for atmos. forcing
  nrampwind = 1 !ramp-up option for atmos. forcing
  drampwind = 1. !needed if nrampwind/=0; ramp-up period in days
  iwindoff = 0 !needed only if nws/=0

  ! Heat and salt exchange. isconsv=1 needs ihconsv=1; ihconsv=1 needs nws=2.
  ! If isconsv=1, need to compile with the precip/evap module turned on.
  ihconsv = 1 !heat exchange option
  isconsv = 0 !evaporation/precipitation model
param.in (8)

  !
  ! Turbulence closure.
  itur = 3
  ! dfv0 = 0
  ! dfh0 = 0
  turb_met = KL
  turb_stab = KC

  ! Nudging options for T,S. If inu_st=0, no nudging is used. If inu_st=1,
  ! nudge T,S to the initial condition according to the relaxation constants specified
  ! in t_nudge.gr3 and s_nudge.gr3. If inu_st=2, nudge T,S to the values in temp_nu.in
  ! and salt_nu.in (with step 'step_nu') according to t_nudge.gr3 and s_nudge.gr3.
  inu_st = 2 !nudging option
  step_nu = !in sec; only used if inu_st=2
  vnh1 = 400 !vertical nudging; disabled at the moment
  vnf1 = 0 !vertical nudging; disabled at the moment
  vnh2 = 401 !vertical nudging; disabled at the moment
  vnf2 = 0. !vertical nudging; disabled at the moment

  ! Cut-off depth for cubic spline interpolation near the bottom when computing the baroc. force.
  ! If depth > depth_zsigma, a min. (e.g., the max. bottom z-cor for the element)
  ! is imposed in the spline; otherwise constant extrapolation below the bottom is used.
  depth_zsigma = 100.

  ! Dimensioning parameters for inter-subdomain btrack.
  s1_mxnbt = 0.5
  s2_mxnbt = 3.0
param.in (9)

  !
  ! Global output options.
  iwrite = 0 !not used
  nspool = 6 !output step spool
  ihfskip = 576 !stack spool; every ihfskip steps will be put into 1_*, 2_*, etc.
  elev.61 = 1 !0: off; 1: on
  pres.61 = 1
  airt.61 = 1
  shum.61 = 1
  srad.61 = 1
  flsu.61 = 1
  fllu.61 = 1
  radu.61 = 1
  radd.61 = 1
  flux.61 = 1
  evap.61 = 0
  prcp.61 = 0
  wind.62 = 1
  wist.62 = 0
  dahv.62 = 0
  vert.63 = 1
  temp.63 = 1
  salt.63 = 1
  conc.63 = 0
  tdff.63 = 1
  vdff.63 = 0
  kine.63 = 0
  mixl.63 = 0
  zcor.63 = 1
  hvel.64 = 1
  testout = 0
param.in (10)

  !
  ! Option for hotstart outputs
  hotout = 1 !1: output *_hotstart every 'hotout_write' steps
  hotout_write = 4032 !must be divisible by ihfskip

  ! Conservation check option. If consv_check=1, some fluxes are computed
  ! in the regions specified in fluxflag.gr3.
  consv_check = 0
Tides in bctides.in

  Sample structure (numeric values partly elided):
    04/15/.... ..:00:00 PST  (start time)
    0 40. (ntip)
    9 (nbfr)
    Z0, O1, K1, Q1, P1, K2, N2, M2, S2 (each followed by its frequency/nodal-factor line)
    4 (nope)
    !Georgia: ... 1.e-3 T relaxation
    !ocean: amplitude/phase pairs per constituent
  Phases don't matter for Z0
interpol.gr3: interpolation method

  Preferred method in the S zone: along the S plane
  Method in the SZ zone: along the Z plane
Hotstart.in (unformatted binary)

  ! Element data
  do i=1,ne_global
    read(36) iegb, idry_e, (we(j,i), tsel(1:2,j,i), (trel0(l,j,i), trel(l,j,i), l=1,ntracers), j=1,nvrt)
  enddo !i=1,ne_global
  ! Side data
  do i=1,ns_global
    read(36) isgb, idry_s, (su2(j,i), sv2(j,i), tsd(j,i), ssd(j,i), j=1,nvrt)
  enddo
  ! Node data
  do i=1,np_global
    read(36) ipgb, eta2, idry, (tnd(j,i), snd(j,i), tem0(j,i), sal0(j,i), q2(i,j), xl(i,j), &
             dfv(i,j), dfh(i,j), dfq1(i,j), dfq2(i,j), j=1,nvrt)
  enddo

Nudging inputs (temp_nu.in and salt_nu.in; unformatted binary):

  do it=1,n_nudging_steps
    read(35) time_stamp
    do i=1,np_global
      read(35) (temp_nu(j,i),j=1,nvrt)
    enddo !i
  enddo !it
*.th: ASCII

  flux.th (negative for inflow), temp.th, salt.th, elev.th, wind.th
  Corresponds to b.c. flag = 1
  Easy to plot
  Format of each line: time (sec), value 1 (1st bnd), value 2 (2nd bnd), ..., value N (Nth bnd)
  The time step is the main time step as in param.in, except for wind.th (whose step is specified in param.in)
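A minimal sketch of generating a flux.th for a run with 2 open boundaries; the discharge values and step count are illustrative (negative = inflow, as noted above).

  program write_th_sketch
    implicit none
    real(8) :: dt, time
    integer :: it, nsteps
    dt=150.d0; nsteps=5               !main time step as in param.in
    open(17,file='flux.th',status='replace')
    do it=1,nsteps
      time=it*dt
      write(17,'(3e16.8)') time, -5000.d0, -300.d0  !time, bnd 1, bnd 2
    enddo
    close(17)
  end program write_th_sketch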
*3D.th: binary

  elev3D.th, temp3D.th, salt3D.th, uv3D.th (velocity, not discharge!)
  Corresponds to b.c. flag = +-4

  Elevation (h):
    do it=1,nsteps
      write(18,rec=irec_out+1) time,((th(i,j),j=1,nond(i)),i=1,nope)
    enddo
  S,T:
    do it=1,nsteps
      write(18,rec=irec_out+1) time,(((th(i,j,l),l=1,nvrt),j=1,nond(i)),i=1,nope)
    enddo
  u,v:
    do it=1,nsteps
      write(18,rec=irec_out+1) time,(((u(i,j,l),v(i,j,l),l=1,nvrt),j=1,nond(i)),i=1,nope)
    enddo
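For the direct-access writes above to work, the record length must match the full record. A sketch of computing recl for a temp3D.th; the boundary counts are illustrative, nbyte=4 as in the dimensioning parameters, and note that recl units are bytes with most compilers but may be 4-byte words with some (e.g., ifort by default).

  program open_3dth_sketch
    implicit none
    integer, parameter :: nbyte=4
    integer :: nope, nond(2), nvrt, nrec, i
    nope=2; nond=(/4,65/); nvrt=54   !illustrative boundary/level counts
    nrec=nbyte                       !the time stamp
    do i=1,nope
      nrec=nrec+nbyte*nvrt*nond(i)   !one value per node per level (S or T)
    enddo
    open(18,file='temp3D.th',access='direct',recl=nrec,status='replace')
    close(18)
  end program open_3dth_sketch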
*.ic: initial condition for T,S

  icst=1 in param.in: needs temp.ic and salt.ic (.gr3 format)
  icst=2 or ibcc_mean=1 in param.in: needs ts.ic (mean density removed)

  Sample ts.ic:
    43 !total # of vertical levels
    !level #, z-coordinate, temperature, salinity
    ......
sflux: netcdf files reformatted from GFS, NARR, NAM, MM5...

Three types:
  sflux_air_1_*.nc: wind speed (u,v) (10 m above MSL), air pressure (MSL), surface air T (2 m above MSL), and specific humidity (2 m above MSL)
  sflux_rad_1_*.nc: downward long-wave (infrared) and short-wave (solar) radiation fluxes
  sflux_prc_1_*.nc: surface precipitation rate (kg/m2/s) - optional

Starting point: NARR files in the Utility Lib
  Download NARR files for your application period. Each NARR file covers ~1 day (e.g., narr_air.1981_01_28.nc is for Jan. 28, 1981).
  In your run directory, mkdir sflux and inside it create symbolic links to the NARR files. E.g., if your run starts June 10, 2004 and ends June 20, 2004:
    sflux_air_1.001.nc --> narr_air.2004_06_10.nc
    sflux_air_1.002.nc --> narr_air.2004_06_11.nc
    ...
    sflux_air_1.011.nc --> narr_air.2004_06_20.nc
    sflux_air_1.012.nc --> narr_air.2004_06_21.nc (extra day to account for the time zone difference)
  Similarly for sflux_rad_*.nc and sflux_prc_*.nc. The number "1" after "air_" denotes the first data set used.
  In sflux/, copy the file sflux_inputs.txt and edit it to reflect the start time. The field "utc_start" is hours behind UTC.
  Create your own netcdf files: see gen_atmos.f90 in the Utility Lib for the details of the format.

Some tips:
  The maximum numbers of input files and time steps (used in the atmospheric forcing) are set in the module netcdf_io (search for it in the sflux*.F90 code) as max_files and max_times. If your application requires larger values, simply increase them in the module.
  The grids used in the atmospheric forcings (usually structured) can be different from hgrid.gr3, but must encompass the latter for successful interpolation, which is done in SELFE at run time (both in space and in time).
fluxflag.gr3

  Compute various fluxes, volume and mass in the flagged regions

[Map: regions flagged 1 and 2]
[t,s]_nudge.gr3

  Horizontal nudging parameters for T,S
Web: http://www.ccalmr.ogi.edu/CORIE/modeling/selfe
Mailing list archive:
Miscellaneous issues

  vgrid:
    pure S first
    spacing parameters
    try not to use unevenly spaced S-coordinates
  Time step: smaller is not necessarily better!
  Implicitness factor: implications for dissipation
  indvel and Kriging: a battle between diffusion and dispersion
  Horizontal b.c. for velocity: uv3D.th
  Datum issues in estuaries and rivers: initially "dry" river
  ELM transport: freshening of the estuary

[Diagram: a river extended upstream, with depths of 2 m and 10 m marked; Diffmin = 0.01 m2/s]
Post-processing and visualization

  Combining binary files (MPI): perl script
  Extracting time series: vertical profiles, horizontal slabs, transects
    linear interpolation in 3D
    calculation of z-coordinates
    wet/dry treatment
  *Model API (netcdf)
  Residuals, harmonic analysis
  Visualization: xmgredit; xmvis6 & g3; matlab; *VisTrail
User experiences: open discussions