Slide 1: I/O server work at ICHEC
Alastair McKinstry, IS-ENES workshop, 2013

Slide 2: The Plan
- Components for a "Common I/O server".
- Single I/O server, targeting EC-Earth for testing:
  - Write both NetCDF and GRIB2.
  - Work efficiently "everywhere", e.g. on low-memory nodes (BlueGene).
  - Use XIOS post-processing.
- Components to be usable for other models.

Slide 3: Basic Ideas
- CDI-2-XIOS: a CDI user (application) could use the XIOS "back-end" functions without changing their source.
  - An intermediate interface library would replace the CDI back-end and call the XIOS back-end instead (see the sketch below).
  - This would give CDI applications access to the memcache and post-processing features in XIOS.
- XIOS-2-CDI: an XIOS user (application) could use the CDI "back-end" functions without changing their source.
  - XIOS applications could then use GRIB and the other formats supported by CDI.
- Which (if any) of these would be useful?
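
A minimal sketch of the CDI-2-XIOS layering, assuming the 2013-era CDI C signatures (streamOpenWrite, streamWriteVar); the xios_backend namespace below is only a stand-in for the real XIOS calls listed in the tentative mapping on slide 8, not the actual XIOS API:

    // CDI-compatible entry points re-implemented on top of an XIOS back-end,
    // so a CDI application links against this shim instead of native CDI.
    #include <cstdio>
    #include <string>

    namespace xios_backend {  // placeholders for xios_add_file / xios_send_field
      int add_file(const std::string& path) {
        std::printf("would call xios_add_file for %s\n", path.c_str());
        return 0;  // illustrative handle
      }
      void send_field(int varID, const double* data) {
        (void)data;
        std::printf("would call xios_send_field for var %d\n", varID);
      }
    }

    extern "C" int streamOpenWrite(const char* path, int filetype) {
      (void)filetype;  // format selection handled by the XIOS file definition
      return xios_backend::add_file(path);
    }

    extern "C" void streamWriteVar(int streamID, int varID,
                                   const double* data, int nmiss) {
      (void)streamID; (void)nmiss;
      xios_backend::send_field(varID, data);
    }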

Slide 4: Original workplan
- CDI interface for IFS.
  - Gives GRIB2 output; compare with existing output.
- CDI->XIOS mapping.
  - "Trivial".
  - Allows XIOS transport layer, post-processing.
- "Memcache" code for XIOS.
  - To support low-memory nodes.
- Delayed by staffing issues at ICHEC; PRACE project extended.

Slide 5: IFS I/O: refactoring
- IOSTREAM: I/O is not well structured; the codebase intermixes RAW, GRIB, ARPEGE, FDB, ... I/O.
- IOSTREAM_MIX.F90 refactored:
  - Use it as the API; add LCDIF, LXIOSF flags for CDI and XIOS output (see the dispatch schematic below).
  - Routines: IO_PUT, IO_GET, IO_INQUIRE, IOSTREAM_SETUP, IOREQUEST_SETUP, ...
  - Common code from the existing IOSTREAM_MIX and IFS factored into IOSTREAM_COMMON.
- Pass model information (grid, vertical axes, variables, etc.) down to the CDI and XIOS interfaces.
  - And on to a CMOR or CF-compliant file?
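
A schematic of the flag-based dispatch in IO_PUT, written in C++ for brevity (the real routine is Fortran in IOSTREAM_MIX.F90); the write_via_* helpers are hypothetical stand-ins for the CDI, XIOS and legacy output paths:

    #include <cstdio>

    struct IoRequest { int varID; const double* data; int nvalues; };

    static void write_via_cdi (const IoRequest& r) { std::printf("CDI path, var %d\n",  r.varID); }
    static void write_via_xios(const IoRequest& r) { std::printf("XIOS path, var %d\n", r.varID); }
    static void write_legacy  (const IoRequest& r) { std::printf("legacy RAW/GRIB/FDB path, var %d\n", r.varID); }

    // Shared set-up (grids, vertical axes, variable metadata) is assumed to have
    // been handled already by the IOSTREAM_COMMON layer.
    void io_put(const IoRequest& req, bool lcdif, bool lxiosf) {
      if (lcdif)  write_via_cdi(req);            // new CDI interface
      if (lxiosf) write_via_xios(req);           // new XIOS interface
      if (!lcdif && !lxiosf) write_legacy(req);  // existing IFS output code
    }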

Slide 6: IFS Interface
- Initial work done on CDI:
  - Using the CDI serial interface on rank 0 (see the sketch below).
  - CDI-pio calls added but not used yet.
- Testing: which configurations?
  - EC-Earth3 test cases.
  - FullPOS to be always used?
- XIOS interface started.
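
A minimal sketch of serial CDI output from rank 0 only, assuming the 2013-era CDI C API (FILETYPE_GRB2, integer nmiss); the grid size, coordinates, variable name, dates and file name are illustrative, not taken from IFS, and the gather of the field onto rank 0 is elided:

    #include <mpi.h>
    #include <vector>
    #include "cdi.h"

    int main(int argc, char** argv) {
      MPI_Init(&argc, &argv);
      int rank = 0;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      // ... the model would gather (or already hold) the global field on rank 0 ...

      if (rank == 0) {
        const int nlon = 8, nlat = 4;
        std::vector<double> lons(nlon), lats(nlat);
        for (int i = 0; i < nlon; ++i) lons[i] = i * 360.0 / nlon;
        for (int j = 0; j < nlat; ++j) lats[j] = -67.5 + j * 45.0;

        int gridID = gridCreate(GRID_LONLAT, nlon * nlat);
        gridDefXsize(gridID, nlon);  gridDefXvals(gridID, lons.data());
        gridDefYsize(gridID, nlat);  gridDefYvals(gridID, lats.data());

        int zaxisID = zaxisCreate(ZAXIS_SURFACE, 1);
        int vlistID = vlistCreate();
        int varID   = vlistDefVar(vlistID, gridID, zaxisID, TIME_VARIABLE);
        vlistDefVarName(vlistID, varID, "tas");      // illustrative variable

        int taxisID = taxisCreate(TAXIS_ABSOLUTE);
        taxisDefVdate(taxisID, 20130101);
        taxisDefVtime(taxisID, 0);
        vlistDefTaxis(vlistID, taxisID);

        int streamID = streamOpenWrite("out.grb2", FILETYPE_GRB2);
        streamDefVlist(streamID, vlistID);

        std::vector<double> field(nlon * nlat, 273.15);  // dummy data
        streamDefTimestep(streamID, 0);                  // first time step
        streamWriteVar(streamID, varID, field.data(), 0);

        streamClose(streamID);
        vlistDestroy(vlistID);
        gridDestroy(gridID);
        zaxisDestroy(zaxisID);
        taxisDestroy(taxisID);
      }
      MPI_Finalize();
      return 0;
    }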

Slide 7: CDI / XIOS differences
- CDI-2-XIOS users would need to create XML files (essential for XIOS).
- Some CDI features have no counterpart in the XIOS XML, e.g. grid types GRID_GAUSSIAN, GRID_LCC, etc.; z-axis types ZAXIS_SIGMA, ZAXIS_PRESSURE, etc.
- Sequential jobs are possible in CDI, not in XIOS.
  - The same is possibly true for MPI communicators.
- Some XIOS features (attributes, groups, sub-domain structures) may have no counterpart in CDI.
- Few CDI or XIOS functions have a simple one-to-one mapping between them.

Slide 8: Tentative Mapping

  CDI                 XIOS
  ---                 ----
  namespace           context
  pioInit             xios_initialize, xios_init_server, xios_context_initialize
  pioEndDef           xios_close_context_definition
  gridCreate          xios_add_grid
  zaxisCreate         xios_add_axis, xios_add_axisgroup
  streamOpenWrite     xios_add_file
  streamDefTimestep   xios_set_timestep
  streamWriteVar      xios_send_field
  gridDestroy         xios_context_finalize
  ...                 ...

Slide 9: XIOS memory cache node
- Implement the XIOS transport twice, so that the existence of the XIOS cache is "transparent" to the rest of XIOS.
- Receive from compute nodes as the priority; the cache is drained by I/O.

Slide 10: Current XIOS
1. Client and server each read the context definitions from the iodef XML.
2. Client adds/modifies definitions, then registers the context with the server, sends the definitions and closes them.
3. Client iterates over timesteps, sending data to the server, then finalizes the context.
4. Server writes the data to disk as netCDF.

Slide 11: Memcache version
1. Client, memcache and server each read the context definitions from the iodef XML.
2. Client adds/modifies definitions, registers context (CLIENT) with the memcache, sends the definitions and closes them.
3. Memcache creates context (IO), creates/copies definitions from context (CLIENT), registers context (IO) + other objects with the server, sends the definitions and closes them.
4. Client iterates over timesteps, sending data to the memcache, then finalizes.
5. Memcache iterates over timesteps, sending data to the server, then finalizes.
6. Server writes the data to disk as netCDF.

Slide 12: Memcache Configuration
iodef.xml variable settings (ids as in the standard NEMO iodef.xml):

    <!-- We must have buffer_size > jpi*jpj*jpk*8 (with jpi and jpj the subdomain size) -->
    <variable id="buffer_size"               type="integer">25000000</variable>
    <variable id="buffer_server_factor_size" type="integer">2</variable>
    <variable id="info_level"                type="integer">0</variable>
    <variable id="using_server"              type="boolean">true</variable>
    <variable id="using_oasis"               type="boolean">false</variable>
    <variable id="oasis_codes_id"            type="string">oceanx</variable>
    <variable id="..."                       type="integer">1</variable>  <!-- the "Memcache value" (memcache node count); id not given here -->

Slide 13: Implementation
- New class: CMemcache, with only static methods (see the sketch below).
- At colour attribution, a number of XIOS servers are put into the memcacheNodes list of MPI ranks to be used as cache nodes; the other servers are used as normal server nodes.
- CMemcache::isMemcacheNode() returns true if the calling process is in the memcacheNodes list.
- Static methods of the CMemcache class are then called from within the server methods if said "server" process is a memcache node.
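
A minimal sketch of the CMemcache helper as described above; only the class name, the memcacheNodes list and isMemcacheNode() come from the slide, while the container type, the set-up method and the use of MPI_COMM_WORLD (rather than the XIOS global communicator, CXios::globalComm) are assumptions:

    #include <mpi.h>
    #include <set>

    class CMemcache {
    public:
      // Called during colour attribution: record which server ranks act as
      // memcache nodes; the remaining servers stay normal server nodes.
      static void setMemcacheNodes(const std::set<int>& ranks) {
        memcacheNodes = ranks;
      }

      // True if the calling process is one of the cache nodes.
      static bool isMemcacheNode() {
        int rank = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // the real code would use CXios::globalComm
        return memcacheNodes.count(rank) > 0;
      }

    private:
      CMemcache();                          // static methods only, never instantiated
      static std::set<int> memcacheNodes;   // MPI ranks used as cache nodes
    };

    std::set<int> CMemcache::memcacheNodes;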

Slide 14: Current Status
- Works on XIOS 428 (Fortran test client).
- Updated to XIOS 447 to integrate with NEMO 3.5 beta.
- Works on XIOS 447 (Fortran test client).
- Crashes on XIOS 447 (NEMO 3.5 beta client, ORCA2_LIM experiment).
  - Mainly trouble with the memcache node properly duplicating the client elements.
  - Still works with the NEMO client if the Memcache value is set to 0.
- The "non-intrusive" design works as expected.

Slide 15: Sending attributes from cache to server, good timing?

    void CDomain::recvLonLat(CEventServer& event)
    {
      list<CEventServer::SSubEvent>::iterator it;
      int srank;
      MPI_Comm_rank(CXios::globalComm, &srank);
      cout << " test in Domain:recvLonLat srank= " << srank << endl;
      string domainId;
      for (it = event.subEvents.begin(); it != event.subEvents.end(); ++it)
      {
        CBufferIn* buffer = it->buffer;
        *buffer >> domainId;
        get(domainId)->recvLonLat(*buffer);
        cout << " test in Domain:recvLonLat srank= " << srank
             << " DomID= " << domainId << endl;
      }
      if (CMemcache::isMemcacheNode())
      {
        CContext* context = CContext::getCurrent();
        CMemcache::PassAttributes(context);
      }
    }

- Ideally, send immediately after receiving each element.
- In practice, need to wait until the cache node has all it needs: lonvalue_srv, latvalue_srv are critical...
- Inheritance issues?

Slide 16: Current work and plans
- Memcache:
  - Use a single memcache node per client: minimize client memory usage.
  - Merge client domains on the memcache; useful for efficient transposes in CDI.
    - Needed for GRIB with large domains.
- IFS: complete the XIOS interface.
- XIOS: add CDI GRIB writing.

