Manchester Computing Supercomputing, Visualization & e-Science Stephen Pickles GridLab meeting, Eger, Hungary, 1 April 2003.

Presentation transcript:

1 RealityGrid
Stephen Pickles, http://www.realitygrid.org
GridLab meeting, Eger, Hungary, 1 April 2003
Mission: "Using grid technology to closely couple high performance computing, high throughput experiment and visualization, RealityGrid will move the bottleneck out of the hardware and back into the human mind."

2 Partners
Academic:
– University College London
– Queen Mary, University of London
– Imperial College
– University of Manchester
– University of Edinburgh
– University of Oxford
– Loughborough University
Industrial:
– Schlumberger
– Edward Jenner Institute for Vaccine Research
– Silicon Graphics Inc
– Computation for Science Consortium
– Advanced Visual Systems
– Fujitsu

3 The RealityGrid project
Aims:
– to predict the realistic behavior of matter using diverse simulation methods (lattice Boltzmann, molecular dynamics, Monte Carlo, ...) spanning many time and length scales
– to discover new materials through integrated experiments.

4 Project Structure
– Led by materials scientists
  – Prof Peter Coveney, now of UCL, is Principal Investigator
– 10 materials science FTEs
  – 1 Edinburgh, 1 Oxford, 1 Loughborough, 1 QMUL, rest at UCL
  – Retain responsibility for their own application codes
– 10 "computer science" FTEs (includes software engineers)
  – 4+2 in Manchester, 1 EPCC, 2 Imperial College, 1 Loughborough (HCI)
– Twin-track approach
  – "Fast track": feeding useful tools to users (materials scientists) early; mostly computational steering and visualization to date
  – "Deep track": breaking new ground in component frameworks (ICENI) and feedback-based performance control (CNC)

5 RealityGrid Characteristics
– Grid-enabled (Globus, UNICORE)
– Component-based, service-oriented
  – Close correspondence between coarse-grained components and OGSA services
– Steering is central
  – Computational steering
  – On-line visualisation of large, complex datasets
  – Feedback-based performance control
  – Remote control of novel, grid-enabled instruments (LUSI)
– Advanced human-computer interfaces (Loughborough)
– Everything is (or should be) distributed and collaborative
– High performance computing, visualization and networks
– All in a materials science domain
  – multiple length scales, many "legacy" codes (Fortran90, C, C++, mostly parallel)

6 Access Grid
– Access Grid used for regular project meetings
  – UK Grid Engineering Task Force and other distributed projects rely on it heavily
  – Can't live without it, even in the UK, which is geographically compact
– Want to extend it for collaborative steering/visualization
[Photo] 1st AG node in the UK: early users

7 Computational Steering: Current Technology

8 Steering: the aim of the game
– Large-scale simulations (and experiments) can generate in days data that takes months to understand
– Problem: to efficiently explore and understand the parameter spaces of materials science problems
– Computational steering aims to short-circuit post facto analysis
  – Brute-force parameter sweeps create a data-mining problem
  – Instead, computational steering enables the scientist to navigate through interesting regions of parameter space
  – Simultaneous on-line visualization develops and engages the scientist's intuition
  – Avoids wasting cycles/experiment time exploring barren regions, or even doing the wrong calculation

9 "Fast Track" Demonstration
Jens Harting at the UK e-Science All Hands Meeting, September 2002

10 Insight from steering
The "Aha!" moment

11 "Fast Track" Steering Demo, UK e-Science AHM 2002
[Architecture diagram] Bezier (SGI Onyx at Manchester) runs vtk + VizServer; Dirac (SGI Onyx at QMUL) runs LB3D with the RealityGrid Steering API; a laptop at the SHU Conference Centre runs the SGI OpenGL VizServer client and the steering GUI. UNICORE Gateways and NJSs at Manchester and QMUL (behind firewalls) carry steering messages (XML), and simulation data flows to the visualization. Hosting: The Mind Electric GLUE web service hosting environment with OGSA extensions. Single sign-on using UK e-Science digital certificates.

12 Steering architecture today
[Diagram] A steering client, linked against the steering library, attaches to both the simulation and the visualization; data transfer runs between simulation and visualization.
Communication modes:
– Shared file system
– Files moved by UNICORE daemon
– GLOBUS-IO

13 Computational steering – how?
– We instrument (add "knobs" and "dials" to) simulation codes through a steering library, written in C
  – Can be called from F90, C and C++
  – Library distribution includes F90 and C examples
– Library provides:
  – Pause/resume
  – Checkpoint and restart
  – Set values of steerable parameters
  – Report values of monitored (read-only) parameters
  – Emit "samples" to remote systems, e.g. for on-line visualization
  – Consume "samples" from remote systems, e.g. for resetting boundary conditions
  – Automatic emit/consume with steerable frequency
  – No restrictions on parallelisation paradigm
– Images can be displayed at sites remote from the visualization system, using e.g. SGI OpenGL VizServer
  – Interactivity (rotate, pan, zoom) and shared controls are important
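The "automatic emit with steerable frequency" idea above can be sketched in a few lines of C. This is an illustrative sketch only, not the RealityGrid library API: the names reg_should_emit and run_steps are hypothetical stand-ins.

```c
/* Illustrative sketch of "automatic emit with steerable frequency".
 * NOT the RealityGrid steering library API: the names below are
 * hypothetical stand-ins for the real calls. */

/* Decide whether a sample should be emitted at this timestep; freq is a
 * steerable parameter the client may change while the code runs. */
int reg_should_emit(int step, int freq)
{
    return freq > 0 && step % freq == 0;
}

/* Main-loop shape: do some physics, then let the library decide whether
 * to ship a sample (e.g. a volume dataset) to the attached visualization.
 * Returns how many samples were emitted, for illustration. */
int run_steps(int nsteps, int freq)
{
    int emitted = 0;
    for (int step = 1; step <= nsteps; step++) {
        /* ... do some physics here ... */
        if (reg_should_emit(step, freq))
            emitted++;          /* the real code would emit the sample here */
    }
    return emitted;
}
```

Because the frequency is itself a steerable parameter, the scientist can throttle output up while watching an interesting transient and back down afterwards, without touching the simulation code.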

14 Steering client
– Built using C++ and the Qt library; executables currently available for Linux and IRIX
– Attaches to any steerable RealityGrid application
– Discovers what commands are supported
– Discovers steerable and monitored parameters
– Constructs appropriate widgets on the fly

15 Implementing steering
Steps required to instrument a code for steering:
– Register supported commands (e.g. pause/resume, checkpoint)
  – steering_initialize()
– Register samples
  – register_io_types()
– Register steerable and monitored parameters
  – register_params()
– Inside the main loop
  – steering_control()
  – Reverse communication model: user code actions, in sequence, each command in the list returned
  – Support routines provided (e.g. emit_sample_slice)
  – When you write a checkpoint, register it
– When finished
  – steering_finalize()
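The reverse-communication model in the steps above can be sketched as follows: each pass through the main loop, steering_control hands back a list of commands and the user code acts on each in sequence. Only the routine names come from the talk; the C types and signatures here are invented for illustration, and the steering_control stub simply fakes a client that requests a checkpoint at step 3 and a stop at step 5.

```c
/* Sketch of the reverse-communication model: the library returns a list
 * of pending commands; the application decides how to act on them.
 * Types and signatures are hypothetical illustrations. */

enum reg_cmd { REG_CMD_NONE = 0, REG_CMD_PAUSE, REG_CMD_CHECKPOINT, REG_CMD_STOP };

#define MAX_CMDS 8

/* Stub: pretend the steering client asked for a checkpoint at step 3
 * and a stop at step 5. */
static int steering_control(int step, enum reg_cmd cmds[MAX_CMDS])
{
    int n = 0;
    if (step == 3) cmds[n++] = REG_CMD_CHECKPOINT;
    if (step == 5) cmds[n++] = REG_CMD_STOP;
    return n;   /* number of commands received this step */
}

/* Run the instrumented main loop; returns the step at which we stopped. */
int run_simulation(int max_steps)
{
    enum reg_cmd cmds[MAX_CMDS];
    for (int step = 1; step <= max_steps; step++) {
        int ncmds = steering_control(step, cmds);
        for (int i = 0; i < ncmds; i++) {
            switch (cmds[i]) {
            case REG_CMD_CHECKPOINT: /* write and register a checkpoint */ break;
            case REG_CMD_STOP:       return step;
            default:                 break;
            }
        }
        /* ... do some physics here ... */
    }
    return max_steps;
}
```

The appeal of reverse communication is that the library never calls back into user code, so it imposes no restrictions on the code's structure or parallelisation paradigm.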

16 Steering – a look ahead
– NAMD & VMD
– Steering in the OGSA
– Wishlist

17 NAMD & VMD
– NAMD: package for classical MD
  – Scales well on large parallel machines (Charm++ parallelisation scheme)
  – Suitable for tackling simulation of large molecules (e.g. DNA fragments)
  – Source code (C++) available
  – Scriptable using Tcl
– VMD: package for visualisation of NAMD output
  – Enables on-line visualisation of a NAMD simulation
  – The scientist can interact with the simulation by using the mouse to apply forces
  – Communicates with NAMD using a socket
– We want to be able to use this software within the RealityGrid framework...

18 NAMD & VMD continued...
The architecture remains the same:
– Build the steering library into both NAMD and VMD
– Library used to communicate VMD-specified forces back to NAMD as a type of "sample"
– Retain existing functionality
– Gain RealityGrid steering facilities and can build on future developments (e.g. checkpoint/restart functionality)
[Diagram] NAMD (simulation) and VMD (visualization), each with the steering library, plus a steering client; atomic positions flow from NAMD to VMD, and forces flow back from VMD to NAMD.
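The "forces as a sample" exchange can be sketched as below: forces consumed from the visualization side get folded into the simulation's integration step. The integrator is a bare Euler step with unit mass, purely for illustration; apply_forces is a hypothetical helper, not a NAMD or steering-library routine.

```c
/* Sketch of consuming VMD-specified forces in the simulation.
 * apply_forces() is a hypothetical illustration; the real exchange
 * would arrive via the steering library's "sample" mechanism. */
#include <stddef.h>

/* Fold consumed forces f into velocities v and positions x (unit mass,
 * simple explicit Euler step of size dt over n degrees of freedom). */
void apply_forces(double *x, double *v, const double *f,
                  double dt, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        v[i] += f[i] * dt;   /* kick from the user-applied force */
        x[i] += v[i] * dt;   /* drift */
    }
}
```

Treating forces as just another sample type is what lets the existing architecture carry the interaction: nothing in the simulation loop changes except that one more sample stream is consumed each step.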

19 Steering in an OGSA framework
[Diagram] The simulation and the visualization (each linked against the steering library) connect to Steering Grid Services, which publish to a Registry; the steering client finds them via the registry and binds to the Steering GSs; large data transfer runs directly between simulation and visualization.

20 Steering in OGSA continued...
– Each application has an associated OGSI-compliant "Steering Grid Service" (SGS)
– The SGS provides the public interface to the application
  – Use standard grid service technology to do steering
  – Easy to publish our protocol
  – Good for interoperability with other steering clients/portals
  – Future-proofed: the next step is to move away from file-based steering
– The application still need only make calls to the steering library
– SGSs used to set up direct inter-component connections for large data transfers (e.g. using globus_io)

21 Additional steering functionality
– Logging of all steering activity
  – Not just checkpoints
  – As a record of the investigative process
  – As a basis for scripted steering
– Scripted steering
  – Breakpoints (IF (temperature > TOO_HOT) STOP)
  – Replay of previous steering actions
– Integrate performance control into the steering library
  – SGS persists while the job migrates to a different system, architecture, and number of processors
  – Use Service Data on the SGS to re-configure connected components
– Advanced checkpoint management to support exploration of parameter space (and code development)
  – Imagine a tree where nodes are checkpoints and branches are choices made through the steering interface (cf. GRASPARC)
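A breakpoint like the slide's IF (temperature > TOO_HOT) STOP can be sketched as a monitored parameter, a threshold, and a command to inject when the threshold is crossed. The struct and check_breakpoint helper below are hypothetical illustrations of the idea, not part of the steering library.

```c
/* Sketch of a scripted-steering breakpoint: when a monitored (read-only)
 * parameter crosses a threshold, inject a steering command.  Types and
 * the check_breakpoint() helper are hypothetical illustrations. */

enum reg_cmd { REG_CMD_NONE = 0, REG_CMD_STOP };

struct breakpoint {
    const double *monitored;   /* monitored (read-only) parameter      */
    double        threshold;   /* e.g. TOO_HOT                         */
    enum reg_cmd  command;     /* command injected when the bp trips   */
};

/* Returns the command to inject, or REG_CMD_NONE if not tripped. */
enum reg_cmd check_breakpoint(const struct breakpoint *bp)
{
    return (*bp->monitored > bp->threshold) ? bp->command : REG_CMD_NONE;
}
```

Because the breakpoint only reads monitored parameters and emits ordinary steering commands, it slots into the same reverse-communication loop used for interactive steering.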

22 Steering Summary
– The current version of the steering library provides useful functionality with relatively little coding effort
– The amount of steering functionality is related to how much code the scientist wishes to write
  – Low barrier to overcome
– Value-added functionality
  – Automatic emit/consume of samples and checkpoints
  – Checkpoint logging
  – Scripted steering (in the future)
– Two application codes instrumented so far; NAMD/VMD to come
– Will be prototyping the OGSA approach during the next couple of months

23 End Matter

24 1st impressions of GridLab
– Impressed by the breadth and depth of R&D activities
– Surprised by the low profile of OGSI
– Clearly strong synergy between the goals and philosophies of GridLab (esp. GAT) and RealityGrid
– Did not get a clear picture of deliverables and release schedules from the presentations

25 RealityGrid and GridLab?
GridLab work packages: 1 GAT, 2 CGAT, 3 TGAT, 4 Grid Portals, 5 Testbed, 6 Security, 7 Adaptivity info, 8 Data & Vis., 9 GRMS, 10 Info. services, 11 Monitoring
RealityGrid needs: steering, visualization, portal, performance control, component frameworks, instrumentation, UK e-Science Grid / ETF

26 Challenges
– RealityGrid will stretch the performance envelope at many levels
  – Computation: must scale to 100s of processors
  – Networks: projected need for 1 Gbps sustained
  – Visualization: must keep up with the simulation
– Interoperability and integration
  – Modular Visualization Environments are hard to integrate into distributed, heterogeneous, Grid-enabled applications
– Advance reservation and co-allocation are key
  – Especially when there's a human in the loop
  – Need better support from scheduling infrastructure
  – Hence RealityGrid's involvement in the GRAAP-WG at GGF

27 Acknowledgments
Cliff Addison, John Brooke, Prof Mike Cates, Jonathan Chin, Prof Peter Coveney, Jean-Christophe Desplat, Simon Clifford, Prof John Darlington, Rupert Ford, Prof John Gurd, Jens Harting, Matt Harvey, Shantenu Jha, Prof Roy Kalawsky, Steven Kenny, Peter Love, Soenke Lorenz, Mikel Lujan, Ken Mayes, Anthony Mayer, Andrew Murdoch, Simon Nee, Steven Newhouse, Stephen Pickles, Robin Pinning, Gavin Pringle, Andrew Porter, Sue Ramsden, Graham Riley, Christophe Ramshorn, Dave Snelling, Jim Stanton, Kevin Stratford, Carlos Sanchez-Navarre, Tiffany Walsh, Jennifer Williams, Yong Xie ...and others!

28 Questions?

29 Index
This talk:
– Overview
– Computational Steering: current technology, future plans
– End matter
Additional material:
– Materials Science
– Instrumentation: LUSI, XMT
– Making an application steerable
– Visualization
– Co-allocation

30 Computational Materials Science

31 Computational Materials Science
RealityGrid uses HPC for large-scale simulation work in various areas:
– Electronic structure studies of condensed matter and materials (clays, clay-polymer nanocomposites): plane-wave DFT
– Atomistic/molecular simulation: molecular dynamics (NAMD, LAMMPS, Moldyn, ...)
– Mesoscale simulation: lattice gas and lattice-Boltzmann (LB3D, LUDWIG, ...); dissipative particle dynamics
– Multiscale/hybrid methods

32 Bridging length and time scales
[Diagram] Microscopic (reversible): Molecular Dynamics → Mesoscopic (irreversible): Boltzmann equation, Lattice Gas, Lattice-Boltzmann, Dissipative Particle Dynamics → Macroscopic (irreversible): Computational/Continuum Fluid Dynamics

33 Lattice gas methods
3D lattice gas method: binary immiscible phase separation.
[Images] Beta=0.03 (just below the spinodal point); Beta=0.04.

34 Lattice gas methods
3D lattice gas method: binary and ternary immiscible phase separation.
[Images] Invasion of a porous medium with residing fluid; only oil and water [1]. Ternary system: two immiscible fluids plus surfactant, only oil density shown; shear flow, lattice size 64^3, shear rate 0.25, reduced density 0.18 [2].
[1] Love P J, Maillet J-B, Coveney P V, Phys Rev E 64, 61302 (2001)
[2] Love P J and Coveney P V, Phil Trans R Soc London A360, 357 (2002)

35 Lattice Boltzmann methods
[Movie] Lattice Boltzmann simulation of phase separation in an initially homogeneous mixture of two immiscible fluids. Experimentally this occurs when a fluid mixture is quenched below the spinodal point in its phase diagram. Different length scales are obtained, as has been seen experimentally.
Chin J and Coveney P V, Physical Review E 66, 016303 (2002)

36 Instrumentation
– London University Search Instrument (LUSI)
– X-Ray Microtomography (XMT)

37 London University Search Instrument
LUSI is located at, and developed by, Queen Mary, University of London.
Aim: find ceramics (e.g. rare-earth metal oxides) with interesting or valuable properties (e.g. high-temperature superconductivity).
Motivation: theory cannot indicate how to construct a compound with a particular property. An established methodology in the pharmaceutical industry uses automated sample generation and testing. Let's apply the same idea in materials science, exploring properties that are difficult to predict: superconductivity, luminescence, dielectric response...
[Diagram] Furnace → XY table → Instruments → Printer

38 LUSI - schematic
[Schematic] A closed loop: the robot produces new materials; measured data is stored in a database; a neural network trained on the database generates predictions that guide the robot's next samples.

39 XMT
– X-ray microtomography in Dentistry at QM, or using the synchrotron X-ray source at the ESRF
– Produces large amounts of data: storage, provenance
– Visualisation
  – Data sets are large
  – If done in real time we can get experimental steering
[Image] Rendered image of a 1.6 mm length of a microtomographic data set of a human vertebral body, about 40 mm in diameter. Sample from Prof. Alan Boyde.
J.C. Elliott, G.R. Davis, P. Anderson, F.S.L. Wong, S.E.P. Dowker, C.E. Mercer. Anales de Química Int Ed 93, S77-S82, 1997.

40 XMT
– Simulation, visualization and data gathering coupled via RealityGrid
– Expensive synchrotron beam-time resources used optimally to obtain sufficient resolution for simulation
– Local testbed providing a grid-enablement model for the European synchrotron facility

41 Implementing steering
An example showing the basic steps required to make an application steerable

42 Implementing steering
Steps required to instrument a code for steering:
– Register supported commands (e.g. pause/resume, checkpoint)
  – steering_initialize()
– Register samples
  – register_io_types()
– Register steerable and monitored parameters
  – register_params()
– Inside the main loop
  – steering_control()
  – Reverse communication model: user code actions, in sequence, each command in the list returned
  – Support routines provided (e.g. emit_sample_slice)
  – When you write a checkpoint, register it
– When finished
  – steering_finalize()

43 Register supported commands

  INTEGER (KIND=REG_SP_KIND) :: status
  INTEGER (KIND=REG_SP_KIND) :: num_cmds
  INTEGER (KIND=REG_SP_KIND), &
       DIMENSION(REG_INITIAL_NUM_CMDS) :: commands
  ...
  num_cmds = 2
  commands(1) = REG_STR_STOP
  commands(2) = REG_STR_PAUSE

  CALL steering_initialize_f(num_cmds, commands, status)

44 Register IO types

  INTEGER (KIND=REG_SP_KIND) :: num_types
  CHARACTER(LEN=REG_MAX_STRING_LENGTH), &
       DIMENSION(REG_INITIAL_NUM_IOTYPES) :: io_labels
  INTEGER (KIND=REG_SP_KIND), &
       DIMENSION(REG_INITIAL_NUM_IOTYPES) :: iotype_handles
  INTEGER (KIND=REG_SP_KIND), &
       DIMENSION(REG_INITIAL_NUM_IOTYPES) :: io_dirn
  INTEGER (KIND=REG_SP_KIND) :: out_freq = 5
  ...
  num_types = 1
  io_labels(1) = "VTK_STRUCTURED_POINTS_OUTPUT"//CHAR(0)
  io_dirn(1) = REG_IO_OUT

  CALL register_iotypes_f(num_types, io_labels, io_dirn, &
                          out_freq, iotype_handles(1), status)

45 Register parameters

  INTEGER (KIND=REG_SP_KIND) :: num_params
  CHARACTER(LEN=REG_MAX_STRING_LENGTH) :: param_label
  INTEGER (KIND=REG_SP_KIND) :: param_type
  INTEGER (KIND=REG_SP_KIND) :: param_strbl
  INTEGER (KIND=REG_SP_KIND) :: dum_int
  ...
  dum_int = 5
  num_params = 1
  param_label = "test_integer"//CHAR(0)
  param_type = REG_INT
  param_strbl = reg_true

  CALL register_params_f(num_params, param_label, param_strbl, &
                         dum_int, param_type, status)

46 Example continued...

  ! Enter main 'simulation' loop
  DO WHILE (iloop < nloops)

     CALL steering_control_f(iloop, num_params_changed, &
                             num_recvd_cmds, recvd_cmds, status)

     IF (status .eq. REG_SUCCESS) THEN

        IF (num_params_changed > 0) THEN
           ! Tell other processes about changed parameters
        END IF

        IF (num_recvd_cmds > 0) THEN
           ! Respond to steering commands here
        END IF

     ELSE
        ...
     END IF

     ! Do some physics here...

  END DO

47 Visualization in RealityGrid

48 On-line visualisation
– On-line visualisation currently vtk-based
  – Open source
  – Simple GUI built with Tk/Tcl
  – Tk/Tcl mechanism used to control polling for new data, so the image is updated automatically
– vtk extended to use the steering library
  – AVS-format data supported
  – Allows use of XDR-format data for sample transfer between platforms
  – Allows use of globus_io for the actual sample transfer
– Volume-data focus at the minute, but this will change as more applications are made steerable
  – Volume render (parallel)
  – Isosurface
  – Hedgehog
  – Cut-plane

49 Visualization experiences
– Mostly VTK in production use
  – Encountered performance barriers in certain Modular Visualization Environments (MVEs)
– Positive experiences of SGI VizServer
  – Delivers pixels to remote displays, transparently
  – Reasonable interactivity, even over long distances
  – We plan to put GSI authentication into the VizServer PAM, try using the reservation API to achieve co-allocation, and trial the extended collaborative capabilities in the latest beta release
– Also experimenting with Chromium

50 Beyond the visualization pipeline
Traditional Modular Visualization Environments tend to be monolithic:
– Incorporating the simulation in an extended visualization pipeline makes steering possible, but usually implies lock-in to a single framework
– If remote execution is supported, it is rarely "Grid compatible"
  – But the UK e-Science project gViz is changing this for Iris Explorer
– Can't compose components from different MVEs
– No single visualization package meets all requirements for features and performance
– Users want to use the same simulation executable for batch and steered use

51 Highlights of UK e-Science workshop "Visualization and the Grid"
– Emerging vision of future visualization systems being created as a set of composable OGSA services
  – with support for co-allocation of resources
– Collaborative visualization raises security issues
  – the concept of a 'group' is fundamental
– A semantic visualization world
  – semantic searches, resource discovery/brokerage for visualization
  – start now with a visualization ontology?
– Visualization everywhere
  – Need standards for advanced interfaces supporting highly interactive, heterogeneous, collaborative visualizations on current and emerging technology
– Integration of visualization with Access Grid is important

52 Advance Reservation
Use cases for advance reservation and co-allocation

53 Scenarios
A typical RealityGrid scenario involves:
– a simulation running on a massively parallel system, coupled to
– a visualization running on a high-end graphics system.
The two sets of resources will often be located on remote systems owned and administered by different organisations. The administration teams within the two organisations, if aware of each other's existence at all, are unlikely to have established comprehensive Service Level Agreements.

54 Immediate Requirements for Advance Reservation
Need to co-allocate (or co-schedule):
– (a) multiple processors to run a parallel simulation code,
– (b) multiple graphics pipes and processors on the visualization system.
These resources may be required:
– immediately (either by a RealityGrid developer or by a scientist engaged in routine investigations),
– or at some time in the future (for a scheduled collaborative session).

55 What can be assumed?
– Access to resources on both systems is through a single sign-on mechanism such as GSI or the UNICORE security model
– Simulation system characteristics:
  – Massively parallel system in heavy demand
  – Allocation of resources is likely to be entrusted to a batch scheduling system
– Visualization system characteristics:
  – Resources include both graphics pipes and processors (CPU + memory)
  – Variable demand for graphics and CPU
  – There may or may not be a batch scheduling system in place
  – Whatever system, if any, for booking graphics pipes is unlikely to be integrated with whatever system may exist for booking processors

56 We'll take what we can get
The ability for a user to reserve processors and graphics pipes manually, without involving system administrators, would remove a significant barrier to the routine use of computational steering. The ability for an agent to do the same will be important for the resource broker that will be developed in the later stages of the project.

57 Network QoS
Projected network bandwidth requirements:
– 1 Gbps between the simulation and visualization systems, in order to achieve satisfactory interactivity
– 100 Mbps between the visualization system and remote displays
  – but reasonably good latency and jitter characteristics are desirable
We therefore want the ability to reserve network bandwidth with certain quality-of-service characteristics, using the same protocols as for reservation of processors.

58 More complex configurations
RealityGrid's "deep track" is enabling more complex configurations involving finer-grained componentisation:
– The application is composed of more than two communicating components,
– each of which must be deployed onto (possibly remote) computational resources at run-time.
Co-allocation mechanisms must therefore be robust and scalable. If we're paying for usage, and cancelling a reservation incurs a charge, then two-phase commit is highly desirable.
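The two-phase commit the slide asks for can be sketched as follows: reservations are first prepared (tentatively held) on every resource, and only committed, incurring charges, once every hold has succeeded; any failure rolls back all holds. The resource model and co_allocate function are hypothetical illustrations of the protocol shape, not any real reservation API.

```c
/* Sketch of two-phase commit for co-allocating a reservation across
 * several resources.  The resource model below is hypothetical. */

struct resource {
    int available;   /* can this resource grant the reservation?   */
    int held;        /* tentative hold placed (phase 1: prepare)   */
    int committed;   /* reservation confirmed (phase 2: commit)    */
};

/* Returns 1 if the whole set was committed, 0 if it was rolled back. */
int co_allocate(struct resource *res, int n)
{
    /* Phase 1: prepare, placing tentative (free-to-cancel) holds. */
    for (int i = 0; i < n; i++) {
        if (!res[i].available) {
            /* Roll back: release holds already placed, commit nothing,
             * so no cancellation charge is ever incurred. */
            for (int j = 0; j < i; j++)
                res[j].held = 0;
            return 0;
        }
        res[i].held = 1;
    }
    /* Phase 2: commit, now that every hold has succeeded. */
    for (int i = 0; i < n; i++)
        res[i].committed = 1;
    return 1;
}
```

The point of the two phases is exactly the charging concern raised above: a hold that was never committed can be released without paying the cancellation fee a confirmed reservation would incur.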

59 Performance Control
Performance control aims to optimise the collective performance of the components comprising an application, based on performance information collected at run time.
Initially, the set of resources will be assumed to be fixed during execution, and it is by redistributing components across this set of resources that the performance control system hopes to achieve performance improvements.
Ultimately, the aim is to adapt the application to utilise new resources that become available during execution. To achieve this, the ability to renegotiate an existing reservation may be required.

60 SVE @ Manchester Computing
Bringing Science and Supercomputers Together
http://www.man.ac.uk/sve
sve@man.ac.uk

