
1 Experiment Workflow Pipelines at APS: Message Queuing and HDF5
Claude Saunders, Nicholas Schwarz, John Hammonds
Software Services Group, Advanced Photon Source, Argonne National Laboratory
Wednesday, February 1, 2012

2 Data Volume and Rates
Tomography
–2K x 2K pixels per frame (assume 16 bits/pixel)
–Up to 1500 projections per sample
–~13GB per sample
–PCO.dimax can do 1300fps until 36GB RAM is full, so 2-3 samples
–Camera RAM dumped to detector controller disk at 160MB/s
–GridFTP can move 80MB/s from detector controller disk to cluster
–Several minutes reconstruction time on 16 cores
XPCS
–1K x 1K pixels per frame
–3000 frames per sample (varies)
–~6GB per sample
–30fps acquisition rate
–2 minutes processing time on 16 cores
HEDM
–1TB per day expected
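The per-sample figures follow directly from the frame geometry. A quick arithmetic check (assuming uncompressed pixels, and 16 bits/pixel for XPCS as well, which matches the quoted total):

```python
# Back-of-envelope check of the per-sample volumes quoted above,
# assuming uncompressed 16-bit pixels.
tomo_frame = 2048 * 2048 * 2        # bytes per 2K x 2K projection
tomo_sample = tomo_frame * 1500     # -> "~13GB per sample"
xpcs_frame = 1024 * 1024 * 2        # bytes per 1K x 1K frame
xpcs_sample = xpcs_frame * 3000     # -> "~6GB per sample"
print(tomo_sample / 1e9, xpcs_sample / 1e9)  # ~12.6, ~6.3
```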

3 Abstracted Workflow
Acquisition
→ gridFTP: detector to cluster
→ Reduction/Analysis job
→ Data Management System: ingest metadata
→ gridFTP: cluster to visualization
→ Re-run job with new parameters (loop back)
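One way to picture how these stages map onto named queues is a simple ordered list; the queue names below are hypothetical, chosen only for illustration:

```python
# Hypothetical mapping of the abstracted workflow onto named queues.
# Each stage's actor consumes from its queue and, on success,
# publishes the HDF5 file path to the next queue in the list.
PIPELINE = [
    "/queue/gridftp.detector_to_cluster",
    "/queue/reduction.analysis",
    "/queue/dm.ingest_metadata",
    "/queue/gridftp.cluster_to_viz",
]

def next_queue(current):
    """Return the queue following `current`, or None at the end of the pipeline."""
    i = PIPELINE.index(current)
    return PIPELINE[i + 1] if i + 1 < len(PIPELINE) else None
```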

4 Workflow Stages and Automation
Data acquisition is bursty
Detector controller disks can fill up
gridFTP speed varies with network usage and endpoint load
Cluster disks can fill up, or the IO system can be loaded
Cluster queue could be occupied at any given time (shared resource)
Cluster node can go down
Database for saving metadata is periodically unavailable
The workflow should proceed reliably in spite of all this and provide clear status to the beamline staff and users.

5 Message Queuing
Producer and Consumer are temporally decoupled
Message broker guarantees delivery of message
Lots of production quality message brokers to choose from
–We picked Apache ActiveMQ
Can build all manner of pipelines with this
[Diagram: a Producer puts a message on a named queue at the Message broker; a Consumer takes it off. The message consists of Headers/Props (name=value pairs) and a Body carrying the HDF5 file path. A Pipeline Actor is both consumer and producer: consume a message, do some work, queue a message to the next queue in the pipeline.]
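A minimal sketch of this producer/consumer pattern over ActiveMQ's STOMP port, using the Python stomp.py library (broker host, queue name, and file path are placeholders; the stomp.py 8.x API is assumed):

```python
import time
import stomp  # pip install stomp.py (8.x API assumed)

# --- Producer: announce a new HDF5 file by queuing its path ---
conn = stomp.Connection([("broker.example.aps.anl.gov", 61613)])
conn.connect("user", "password", wait=True)
conn.send(
    destination="/queue/tomo.reconstruct",  # named queue on the broker
    body="/data/tomo/sample_001.h5",        # a reference only, never the data
    headers={"technique": "tomography"},    # Headers/Props: name=value
)

# --- Consumer: receive the reference and act on it ---
class Actor(stomp.ConnectionListener):
    def on_message(self, frame):
        print("would process", frame.body, frame.headers)

conn.set_listener("actor", Actor())
conn.subscribe(destination="/queue/tomo.reconstruct", id=1, ack="auto")
time.sleep(5)  # keep the connection open long enough to receive the message
conn.disconnect()
```

Because the broker persists the message, the consumer could just as well start minutes later; this is the temporal decoupling the slide describes.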

6 HDF5 as Manufacturing Traveler
Single HDF5 file takes the role of a manufacturing traveler
–Used to track all steps of a manufacturing process applied to some input material, and is typically physically attached to the material.
Use HDF5 to store raw data at acquisition – the material
Also store metadata at acquisition time
Add results of subsequent reduction/analysis to the same file
Provenance
–Record steps of the pipeline as they occur
–Success/Failure of each step is also recorded
Format is a work in progress, but close:
–Data Exchange – see https://confluence.aps.anl.gov/display/NX
–Also see the more formal document produced by Francesco DeCarlo

7 Scientific Data and Metadata
Scientific data and metadata are stored in their own high-level HDF5 groups and datasets.
Scientific data are stored in /exchange_n groups
Each scientific analysis process identifies its own input(s) and output(s), and its own process-specific parameters

Data Exchange File Layout:
/exchange_1
    /data
/exchange_2
    /data
/sample
    …details…
/reconstruction
    /input/exchange_1
    /output/exchange_2
    …reconstruction-specific parameters…
/spectromicroscopy
    …
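A sketch of writing this layout with h5py follows; the array shapes, the sample metadata, and the use of HDF5 soft links to encode the input/output references are all illustrative assumptions:

```python
import h5py
import numpy as np

# Illustrative sketch of the Data Exchange layout above.
# Shapes, metadata values, and parameter names are placeholders.
with h5py.File("sample_001.h5", "w") as f:
    # Raw data stored at acquisition time ("the material")
    f.create_dataset("exchange_1/data",
                     data=np.zeros((3, 64, 64), dtype="uint16"))
    f.create_group("sample").attrs["name"] = "sample_001"  # ...details...

    # A reconstruction step adds its output to the same file
    f.create_dataset("exchange_2/data",
                     data=np.zeros((3, 64, 64), dtype="float32"))
    recon = f.create_group("reconstruction")
    recon.create_group("input")["exchange_1"] = h5py.SoftLink("/exchange_1")
    recon.create_group("output")["exchange_2"] = h5py.SoftLink("/exchange_2")
    recon.attrs["filter"] = "shepp-logan"  # a reconstruction-specific parameter
```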

8 Provenance
Metadata about the sequence of pipeline processes.
A record of data movement and processing stored in a separate (optional) HDF5 group.

Data Exchange Provenance Layout:
/provenance
    /nextprocess_2
    /process_1
        /status RUNNING
        /reference /provenance/gridftp
        /message
    /process_2
        /status PENDING
        /reference /reconstruction
        /message
    /gridftp
        /sourceURI: AreaDetectorSystem
        /destinationURI: Cluster
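An actor might record such an entry with h5py as sketched below; the record_step helper is hypothetical, and whether each field is a dataset or an attribute is an assumption:

```python
import h5py

# Hypothetical helper: record one pipeline step in the optional
# /provenance group. Field encoding (attributes here) is an assumption.
def record_step(path, name, status, reference, message=""):
    with h5py.File(path, "a") as f:
        step = f.require_group("provenance/" + name)
        step.attrs["status"] = status        # e.g. PENDING, RUNNING, SUCCESS, FAILURE
        step.attrs["reference"] = reference  # the group this step reads or writes
        step.attrs["message"] = message      # free-form detail on success/failure

record_step("sample_001.h5", "process_1", "RUNNING", "/provenance/gridftp")
record_step("sample_001.h5", "process_2", "PENDING", "/reconstruction")
```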

9 Goals of Design
Message queue infrastructure supplies a simple, reliable sequencing and triggering mechanism.
–Key: the message does not contain scientific data/metadata, just a reference to an HDF5 file on an accessible filesystem
Scientist writes code that will read the provided HDF5 file, do something, and write results back to the same file. Completely independent of this infrastructure.
Software people (or the scientist!) write a lightweight actor (see the sketch after this list) that:
–Waits for incoming triggering message
–Passes the HDF5 file reference to the scientist's code to execute
–Waits for completion, notes success or failure in the provenance group of the HDF5 file
–Produces message for next queue(s) in pipeline
Re-usable, shared actors
–Data transfer (gridFTP), data management ingest, etc.
Technique- and algorithm-specific actors
–Tomography reconstruction, XPCS auto-correlation, etc.
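Putting the pieces together, a lightweight actor might look like the sketch below. The queue names and the scientist_code hook are hypothetical, record_step is the helper from the provenance sketch above, and the stomp.py 8.x API is again assumed:

```python
import time
import stomp
# record_step() comes from the provenance sketch above.

IN_QUEUE = "/queue/reconstruct"   # hypothetical queue names
NEXT_QUEUE = "/queue/dm.ingest"

class PipelineActor(stomp.ConnectionListener):
    """Wait for a trigger, run the scientist's code on the referenced
    HDF5 file, note the outcome in /provenance, and forward the reference."""

    def __init__(self, conn, scientist_code):
        self.conn = conn
        self.scientist_code = scientist_code  # any callable taking an HDF5 path

    def on_message(self, frame):
        path = frame.body  # reference to the HDF5 file, never the data itself
        try:
            self.scientist_code(path)
            record_step(path, "process_1", "SUCCESS", "/reconstruction")
            self.conn.send(destination=NEXT_QUEUE, body=path)  # trigger next stage
        except Exception as exc:
            record_step(path, "process_1", "FAILURE", "/reconstruction",
                        message=str(exc))

conn = stomp.Connection([("broker.example.aps.anl.gov", 61613)])
conn.connect("user", "password", wait=True)
conn.set_listener("actor", PipelineActor(conn, scientist_code=print))
conn.subscribe(destination=IN_QUEUE, id=1, ack="auto")
while True:
    time.sleep(1)  # stay alive; messages arrive on the listener thread
```

The actor itself knows nothing about tomography or XPCS; swapping in a different scientist_code callable is all it takes to reuse it for another technique.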

10 Big Picture: APS gridFTP
[Diagram: end-to-end view tying the pieces together. Components: Tomo Client, Beamline Scheduling, Scan Engine, VME IOC, Detector Controller (local disk), Area Detector (with HDF5 filewriter plugin), gridFTP servers (transfers conducted by GlobusOnline), Message Queue, Pipeline Actors, HPC Cluster with Lustre, Data Management System, User@APS and User@home (local config/scan scripts). Flow labels: user identifies scheduled activity and selects DAQ params; a scan command sequence triggers acquisition; HDF5 files and metadata move via gridFTP; actors monitor queues, run gridFTP, delete source, queue reconstruct, and queue ingest of metadata; users find experiment datasets in the Data Mgmt. Sys.]

