
1 Simultech 2011, 29-31 July 2011, Noordwijkerhout, The Netherlands
Component Approach to Distributed Multiscale Simulations
Katarzyna Rycerz(1,2), Marian Bubak(1,3)
(1) AGH University of Science and Technology, Institute of Computer Science, Mickiewicza 30, 30-059 Kraków, Poland
(2) ACC Cyfronet AGH, ul. Nawojki 11, 30-950 Kraków, Poland
(3) University of Amsterdam, Institute for Informatics, Amsterdam, The Netherlands

2 Outline
- Requirements of multiscale simulations
- Motivation for a component model for such simulations
- HLA-based component model: idea, design challenges and solutions
- Experiment with the Multiscale Multiphysics Scientific Environment (MUSE)
- Execution in GridSpace VL (demo)
- Summary

3 Multiscale Simulations
- Consist of modules of different scales
- Examples: the virtual physiological human initiative, reacting gas flows, capillary growth, colloidal dynamics, stellar systems and many more...
(Figure caption: the recurrence of stenosis, a narrowing of a blood vessel, leading to restricted blood flow)

4 Multiscale Simulations - Requirements
- Actual connection of two or more models together
  - obeying the laws of physics (e.g. conservation laws)
  - advanced time management: the ability to connect modules with different time scales and internal time management (sketched below)
  - support for connecting models of different space scales
- Composability and reusability of existing models of different scales
  - finding the existing models needed and connecting them either together or to new models
  - ease of plugging models into and unplugging them from a running system
  - standardized model connections + many users sharing their models = more chances for general solutions
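
The time-management requirement can be made concrete with a small, self-contained sketch. This is not code from the presentation; the module and method names below are invented purely for illustration and only show the pattern of a fine-scale module taking many internal steps per coarse-scale step before the two exchange data.

```ruby
# Generic, illustrative sketch: two toy modules with different time steps.
class MacroModule
  attr_reader :time
  def initialize(dt)
    @time = 0.0
    @dt = dt
  end
  def step
    @time += @dt
    { coupling_value: Math.sin(@time) }   # toy data handed to the finer scale
  end
end

class MesoModule
  attr_reader :time
  def initialize(dt)
    @time = 0.0
    @dt = dt
  end
  def step_until(t_target, input)
    # many small steps per one coarse step; `input` would feed the fine model
    @time += @dt while @time + @dt <= t_target
    input[:coupling_value]
  end
end

macro = MacroModule.new(1.0)    # coarse time step
meso  = MesoModule.new(0.01)    # fine time step

3.times do
  data = macro.step
  meso.step_until(macro.time, data)
  puts format("macro t=%.2f  meso t=%.2f", macro.time, meso.time)
end
```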

5 Motivation
- To wrap simulations into recombinant components that can be selected and assembled in various combinations to satisfy the requirements of multiscale simulations
  - mechanisms specific to distributed multiscale simulation
  - adaptation of one of the existing solutions for distributed simulations - our choice: the High Level Architecture (HLA)
  - support for long-running simulations: setup and steering of components should also be possible at runtime
  - possibility to wrap legacy simulation kernels into components
- Need for an infrastructure that facilitates cross-domain exchange of components among scientists
  - need for support for the component model using Grid solutions (e-infrastructures) for crossing administrative domains

6 Related Work
- Model Coupling Toolkit
  - message-passing (MPI) style of communication between simulation models
  - domain data decomposition of the simulated problem
  - support for advanced data transformations between different models
  - J. Larson, R. Jacob, E. Ong, "The Model Coupling Toolkit: A New Fortran90 Toolkit for Building Multiphysics Parallel Coupled Models," Int. J. High Perf. Comp. App., 19(3), 277-292, 2005.
- Multiscale Multiphysics Scientific Environment (MUSE), now AMUSE, the Astrophysical Multi-Scale Environment
  - a scripting approach (Python) is used to couple models together
  - models include stellar evolution, hydrodynamics, stellar dynamics and radiative transfer
  - S. Portegies Zwart, S. McMillan, et al., "A Multiphysics and Multiscale Software Environment for Modeling Astrophysical Systems," New Astronomy, 14(4), 369-378, 2009.
- The Multiscale Coupling Library and Environment (MUSCLE)
  - a software framework to build simulations according to complex automata theory
  - concept of kernels that communicate via unidirectional pipelines dedicated to passing a specific kind of data from/to a kernel (asynchronous communication)
  - J. Hegewald, M. Krafczyk, et al., "An agent-based coupling platform for complex automata," ICCS 2008, LNCS 5102, pp. 227-233, Springer, 2008.

7 Why the High Level Architecture (HLA)?
- Introduces the concept of simulation systems (federations) built from distributed elements (federates)
- Supports joining models of different time scales: the ability to connect simulations with different internal time management in one system
- Supports data management (publish/subscribe mechanism)
- Separates the actual simulation from the communication between federates
- Partial support for interoperability and reusability (Simulation Object Model (SOM), Federation Object Model (FOM), Base Object Model (BOM))
- Well-known IEEE standard (IEEE 1516), including the Object Model Template (OMT)
- Reference implementation: the HLA Runtime Infrastructure (HLA RTI)
- Open source implementations available, e.g. CERTI, ohla
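
To make the HLA vocabulary on this slide concrete, here is a minimal stub in Ruby (the language of the GridSpace snippets later in the talk). There is no standard Ruby HLA binding, so FakeRTI and all of its method names are hypothetical stand-ins for the RTI services named above, not a real API.

```ruby
# Illustrative stub only: echoes calls to make the federation/federate,
# publish/subscribe and time-management vocabulary tangible.
class FakeRTI
  def create_federation(name, fom); puts "federation #{name} created (FOM: #{fom})"; end
  def join(federate, federation);   puts "#{federate} joined #{federation}";         end
  def publish(object_class);        puts "publishing #{object_class}";               end
  def subscribe(object_class);      puts "subscribing to #{object_class}";           end
  def enable_time_regulation;       puts "this federate now paces constrained federates"; end
  def enable_time_constrained;      puts "this federate is constrained by regulating ones"; end
end

rti = FakeRTI.new
rti.create_federation("muse", "muse.fed")   # the distributed simulation system
rti.join("evolution", "muse")               # one simulation module joins it
rti.publish("Star")                         # data management: declare what is produced
rti.enable_time_regulation                  # time management: pace the other federates
```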

8 HLA Component Model
- The model differs from common component models (e.g. CCA): no direct connections, no remote procedure calls (RPC) between components
- Components run concurrently and communicate using HLA mechanisms
- Components use HLA facilities (e.g. time and data management)
- Differs from the original HLA mechanism:
  - interactions can be dynamically changed at runtime by a user
  - a change of state is triggered from outside of any federate
(Figures: CCA model vs. HLA component model)

9 HLA Components - Design Challenges
- Transfer of control between many layers:
  - requests from the Grid layer outside the component
  - the simulation code layer
  - the HLA RTI layer
- The component should be able to efficiently process concurrently:
  - the actual simulation, which communicates with other simulation components via the RTI layer
  - external requests changing the state of the simulation in the HLA RTI layer
(Figure: HLA component layers - Simulation Code, CompoHLA library, HLA RTI, Grid platform (H2O); external requests: start/stop, join/resign, set time policy, publish/subscribe)

10 HLA RTI Concurrent Access Control
- Uses the concurrent access exception handling available in HLA
- Transparent to the developer
- Synchronous mode: requests are processed as they come; the simulation runs in a separate thread
- Dependent on the implementation of concurrency control in the HLA RTI used
- Concurrency difficult to handle effectively, e.g. starvation of requests, which causes overhead in the simulation execution
(Figure: Simulation Code, CompoHLA library, HLA RTI with concurrent access control, Grid platform (H2O), external requests)
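
A sketch of this synchronous mode, under stated assumptions: StubRTI and ConcurrentAccessError are invented stand-ins, and the retry handling that the CompoHLA library would hide from the developer is written out explicitly here.

```ruby
class ConcurrentAccessError < StandardError; end

# Stand-in for an RTI that rejects concurrent calls.
class StubRTI
  def initialize
    @lock = Mutex.new
  end
  def call(what)
    raise ConcurrentAccessError unless @lock.try_lock
    begin
      sleep 0.01                       # pretend the RTI call takes some time
      puts "RTI: #{what}"
    ensure
      @lock.unlock
    end
  end
end

# Retry on concurrent access; in the component this would sit inside the
# CompoHLA library, transparent to the simulation developer.
def with_retry(rti, what)
  rti.call(what)
rescue ConcurrentAccessError
  sleep 0.005                          # back off and try again (risk: starvation)
  retry
end

rti = StubRTI.new

# The simulation runs in a separate thread...
simulation = Thread.new do
  5.times { with_retry(rti, "time advance request"); sleep 0.005 }
end

# ...while external requests are processed as they come, concurrently.
["set time policy", "subscribe to Star"].each do |req|
  with_retry(rti, "external request: #{req}")
end

simulation.join
```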

11 Advanced Solution - Use the Active Object Pattern
- Requires calling only a single routine in the simulation loop
- Asynchronous mode: separates invocation from execution
- Requests are processed when the scheduler is called from the simulation loop
- Independent of the behavior of the HLA implementation
- Concurrency easy to handle
- JNI used for communication between the Simulation Code, the Scheduler and the CompoHLA library
(Figure: Simulation Code, CompoHLA library with Scheduler and request Queue, HLA RTI, Grid platform (H2O), external requests)
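
The Active Object idea can be sketched in a few lines of plain Ruby. This is only an illustration of the pattern; in the prototype the scheduler sits in the CompoHLA layer and talks to the simulation code over JNI, and none of the names below are the real API.

```ruby
# Sketch of the Active Object pattern: external calls only enqueue work,
# which is executed when the simulation loop itself calls the scheduler.
class RequestScheduler
  def initialize
    @queue = Queue.new               # thread-safe FIFO of pending requests
  end

  # called from the grid/component layer: invocation only, no execution
  def enqueue(name, &action)
    @queue << [name, action]
  end

  # called by the simulation code once per loop iteration: execution
  def process_pending
    until @queue.empty?
      name, action = @queue.pop(true)
      puts "executing request: #{name}"
      action.call
    end
  end
end

scheduler = RequestScheduler.new

# external requests arriving asynchronously (e.g. from the grid layer)
Thread.new do
  scheduler.enqueue("set time policy") { :regulating }
  scheduler.enqueue("subscribe")       { :star_objects }
end.join

# simulation loop: one scheduler call per step, so the RTI is never entered
# concurrently by the simulation and by external requests
3.times do |step|
  scheduler.process_pending
  puts "simulation step #{step}"     # the actual model computation goes here
end
```

Compared with the synchronous mode, the only requirement on the simulation code is the single scheduler call per loop iteration, which is why concurrency becomes easy to handle.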

12 Interactions between Components
- Modules taken from the Multiscale Multiphysics Scientific Environment (MUSE): a multiscale simulation of dense stellar systems
- Two modules of different time scales:
  - stellar evolution (macro scale)
  - stellar dynamics - an N-body simulation (meso scale)
- Data management:
  - masses of changed stars are sent from evolution (macro scale) to dynamics (meso scale)
  - no data is needed from dynamics to evolution
  - the data flow affects the whole dynamics simulation
- Dynamics takes more steps than evolution to reach the same point of simulation time
- Time management: the regulating federate (evolution) regulates the progress in time of the constrained federate (dynamics); the maximal point in time which the constrained federate can reach (LBTS) at a given moment is calculated dynamically from the position of the regulating federate on the time axis (see the sketch below)
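
The time coupling between the two MUSE modules can be illustrated with a heavily simplified sketch. The constants, the toy data and the way the bound is computed (regulating federate's time plus a lookahead) are assumptions for illustration only, not the actual HLA LBTS algorithm.

```ruby
# Simplified coupling loop: the constrained federate (dynamics, small steps)
# may only advance up to a bound derived from the regulating federate
# (evolution, large steps); the bound plays the role of the LBTS here.
EVOLUTION_DT = 1.0     # macro-scale step
DYNAMICS_DT  = 0.1     # meso-scale step
LOOKAHEAD    = 0.0     # kept at zero for simplicity

evolution_time = 0.0
dynamics_time  = 0.0

3.times do
  # regulating federate advances and publishes updated star masses
  evolution_time += EVOLUTION_DT
  star_masses = { star_1: 1.0 / (1.0 + evolution_time) }   # toy data
  lbts = evolution_time + LOOKAHEAD

  # constrained federate takes many small steps, but never beyond the LBTS
  while dynamics_time + DYNAMICS_DT <= lbts
    dynamics_time += DYNAMICS_DT
    # ... an N-body step using the subscribed star_masses would go here ...
  end
  puts format("evolution t=%.1f  dynamics t=%.1f  (LBTS=%.1f, m=%.3f)",
              evolution_time, dynamics_time, lbts, star_masses[:star_1])
end
```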

13 Experiment Results
- Concurrent execution, conservative approach, with dynamics and evolution as HLA components (total time 18.3 s):
  - pure calculations of the more computationally intensive (dynamics) component: 17.6 s
  - component architecture overhead:
    - request processing (through the grid and component layers): 4-14 ms, depending on request type
    - request realisation (scheduler): 0.6 s
  - HLA-based distribution overhead: synchronization with the evolution component: 7 ms
- H2O v2.1 as the Grid platform and CERTI v3.2.4 (open source) as the HLA implementation
- Experiment run on DAS-3 grid nodes in:
  - Delft (MUSE sequential version and the dynamics component)
  - Amsterdam UvA (evolution component)
  - Leiden (component client)
  - Amsterdam VU (RTIexec control process)
- Detailed results in the paper

14 HLA Components in GridSpace VL - Demo

15 Demo experiment – allocation of resources
- Ruby script (snippet 1): runs a PBS job, which allocates nodes and starts H2O kernels
- PBS runs the job (starting an H2O kernel) on the allocated nodes
(Figure: GridSpace user -> Ruby script (snippet 1) -> PBS -> H2O kernel on node A, H2O kernel on node B)
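
A hedged sketch of what "snippet 1" might look like is shown below. The job script name, the bookkeeping file and the use of plain qsub are assumptions made for illustration; only the general pattern (submit a PBS job that starts H2O kernels) is taken from the slide.

```ruby
# Hypothetical sketch of snippet 1: "start_h2o_kernels.pbs" and
# "current_job.txt" are placeholders, not files from the actual demo.
pbs_script = "start_h2o_kernels.pbs"    # job script that would launch an H2O
                                        # kernel on every allocated node
job_id = `qsub #{pbs_script}`.strip     # PBS prints a job identifier on submission
abort "PBS submission failed" unless $?.success?

puts "submitted PBS job #{job_id}; H2O kernels will start on the allocated nodes"
File.write("current_job.txt", job_id)   # remembered so the clean-up snippet can delete the job
```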

16 Demo experiment – simulation setup
- JRuby script (snippet 2):
  - creates the components (Evolution HLAComponent and Dynamics HLAComponent on the H2O kernels)
  - asks the selected components to join the simulation system (federation)
  - asks the selected components to publish or subscribe to data objects (stars)
  - asks the components to set their time policy (evolution: regulating, dynamics: constrained)
  - determines where output/error streams should go (set streaming)
- The components then interact via HLA communication
(Figure: GridSpace user -> JRuby script (snippet 2) -> Evolution/Dynamics HLAComponents on H2O kernel nodes A and B)
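
A hypothetical sketch of "snippet 2" follows. H2OClient, create_component and the component operations are invented placeholders for the real GridSpace/H2O calls; a tiny echoing stub is included only so the sketch runs on its own.

```ruby
# Stand-in for the real client library; every "remote call" is just echoed.
class H2OClient
  class ComponentProxy
    def initialize(id); @id = id; end
    def method_missing(op, *args)
      puts "#{@id}: #{op} #{args.join(' ')}"
    end
    def respond_to_missing?(*_args); true; end
  end
  def self.create_component(kernel, name)
    ComponentProxy.new("#{name}@#{kernel}")
  end
end

evolution = H2OClient.create_component("nodeA", "EvolutionHLAComponent")
dynamics  = H2OClient.create_component("nodeB", "DynamicsHLAComponent")

[evolution, dynamics].each { |c| c.join_federation("muse") }  # join the simulation system
evolution.publish("Star")                # evolution sends star masses ...
dynamics.subscribe("Star")               # ... dynamics receives them
evolution.set_time_policy(:regulating)   # macro scale paces the simulation
dynamics.set_time_policy(:constrained)   # meso scale must not overtake it
[evolution, dynamics].each { |c| c.set_streaming(:stdout, :stderr) }  # where out/err go
```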

17 Demo experiment – execution
- JRuby scripts (snippets 3 and 4): ask the components to start, alter the time policy at runtime (unset regulation, unset constrained) and stop the components
- Star data objects are exchanged via HLA communication
- Output/error streams are shown in the dynamics and evolution views
(Figure: GridSpace user -> JRuby scripts (snippets 3 and 4) -> Evolution/Dynamics HLAComponents on H2O kernel nodes A and B)
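
Again purely illustrative, since snippets 3 and 4 are described only at this level of detail in the talk: the stub below just echoes the calls, and all method names are assumptions.

```ruby
# Echoing stub so the sketch runs standalone; not the real component API.
ComponentStub = Struct.new(:name) do
  def method_missing(op, *args); puts "#{name}: #{op} #{args.join(' ')}"; end
  def respond_to_missing?(*_args); true; end
end

evolution = ComponentStub.new("evolution")
dynamics  = ComponentStub.new("dynamics")

# snippet 3: start both components; star data objects now flow over HLA
[evolution, dynamics].each(&:start)

# at runtime the user may change how the components interact,
# e.g. decouple the two time scales again:
evolution.unset_regulation
dynamics.unset_constrained

# snippet 4: stop the components and collect their output/error streams
[evolution, dynamics].each(&:stop)
```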

18 Demo experiment – cleaning up
- Ruby script (snippet 5): deletes the PBS job, which stops the H2O kernels and releases the nodes
(Figure: GridSpace user -> Ruby script (snippet 5) -> PBS delete job -> H2O kernels on nodes A and B stopped)
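
A matching sketch of "snippet 5", with the same caveats as the allocation sketch (the bookkeeping file name is a placeholder introduced there, not part of the demo).

```ruby
# Hypothetical sketch of snippet 5: delete the PBS job recorded by the
# allocation sketch, which stops the H2O kernels and releases the nodes.
job_id = File.read("current_job.txt").strip
if system("qdel", job_id)
  puts "PBS job #{job_id} deleted; H2O kernels stopped, nodes released"
else
  warn "qdel failed for job #{job_id}"
end
```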

19 Recorded demo: HLA Components in GridSpace VL

20 Summary
- The presented HLA component model enables the user to dynamically compose and decompose distributed simulations from multiscale elements residing on the Grid
- The architecture of the HLA component supports steering of interactions with other components during simulation runtime
- The presented approach differs from the original HLA, where all decisions about actual interactions are made by the federates themselves
- The functionality of the prototype is shown on the example of a multiscale simulation of a dense stellar system (the MUSE environment)
- Experiment results show that the grid and component layers do not introduce much overhead
- HLA components can be run and managed within the GridSpace Virtual Laboratory

21 For more information see:
- http://dice.cyfronet.pl
- https://gs2.cyfronet.pl
- http://www.mapper-project.eu

22 Experiment Results
- Comparison of:
  - concurrent execution, conservative approach, with dynamics and evolution as HLA components
  - sequential execution (MUSE)
- Timing of:
  - request processing (through the grid and component layers)
  - request realisation (scheduler)
- H2O v2.1 as the Grid platform and CERTI v3.2.4 (open source) as the HLA implementation
- Experiment run on DAS-3 grid nodes in:
  - Delft (MUSE sequential version and the dynamics component)
  - Amsterdam UvA (evolution component)
  - Leiden (component client)
  - Amsterdam VU (RTIexec control process)

