Distributed Shared Memory for Roaming Large Volumes
Laurent Castanié (Earth Decision / Paradigm – INRIA Lorraine, Project ALICE), Christophe Mion (INRIA Lorraine, Project ALICE), Xavier Cavin (INRIA Lorraine, Project ALICE), Bruno Lévy (INRIA Lorraine, Project ALICE)
Outline
- Introduction
  - Large volumes in the Oil and Gas EP domain
  - Previous work: single-workstation cache system
- COTS cluster solution: DHCS
  - Distributed volume rendering
  - Distributed data management
  - Real-time roaming in gigantic data sets with DHCS
- Conclusions
Introduction – Interpretation scales in Oil and Gas EP
- Reservoir scale (100-300 km²): volume of 300x400x400 voxels (~50 MB), typical ROI of 100x200x200 (~4 MB).
- Regional scale (10000-30000 km²): targeted volume of 4000x5000x5000 voxels (~100 GB), targeted ROI of 1000x1000x1000 (~1 GB).
- The targeted ROI is therefore ~250x larger than a typical reservoir-scale ROI.
Introduction – OOC visualization on a single workstation
Memory hierarchy relative to a 100 GB data volume on disk: the workstation RAM (8 GB) holds ~8% of the data, and the graphics card V-RAM (512 MB) holds ~0.5% of the data (~6% of the RAM).
Introduction – OOC visualization on a single workstation (previous work)
Probe-based roaming systems with LRU volume paging:
- Bhaniramka and Demange, IEEE VolVis 2002: OpenGL Volumizer
- Plate et al., VISSYM 2002: Octreemizer
- Castanié et al., IEEE Visualization 2005: VolumeExplorer (coupling OOC visualization and data processing)
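As a rough illustration of the LRU volume paging these systems rely on (the class and the loader callback below are invented placeholders, not the cited systems' actual APIs), a minimal brick cache could look like this:

```cpp
// Minimal sketch of an LRU brick cache for out-of-core volume roaming.
// BrickId, the load() callback and the capacity are hypothetical; this
// only illustrates the general idea, not the paper's implementation.
#include <cstdint>
#include <list>
#include <unordered_map>
#include <vector>

using BrickId = uint64_t;
using BrickData = std::vector<uint8_t>;

class LruBrickCache {
public:
    explicit LruBrickCache(size_t capacity) : capacity_(capacity) {}

    // Returns the brick, loading it (and evicting the least recently
    // used brick when the cache is full) if it is not resident.
    const BrickData& get(BrickId id, BrickData (*load)(BrickId)) {
        auto it = index_.find(id);
        if (it != index_.end()) {                 // hit: move to front
            lru_.splice(lru_.begin(), lru_, it->second);
            return it->second->second;
        }
        if (lru_.size() == capacity_) {           // miss on a full cache: evict
            index_.erase(lru_.back().first);
            lru_.pop_back();
        }
        lru_.emplace_front(id, load(id));         // load the missing brick
        index_[id] = lru_.begin();
        return lru_.front().second;
    }

private:
    size_t capacity_;
    std::list<std::pair<BrickId, BrickData>> lru_;     // front = most recent
    std::unordered_map<BrickId,
        std::list<std::pair<BrickId, BrickData>>::iterator> index_;
};
```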
Introduction – OOC visualization on a single workstation
This is an efficient solution up to 20-30 GB, however:
- the ROI size is limited to the amount of graphics memory available;
- performance decreases rapidly when the size of the data on disk grows beyond 30 GB.
=> How to scale the solution up to 100-200 GB? With a Distributed Hierarchical Cache System (DHCS) on a COTS cluster.
Distributed volume rendering – Sort-last parallel volume rendering
The master node drives the slave nodes through four stages: 1. segmentation of the volume, 2. distribution of the sub-volumes to the slaves, 3. rendering of each sub-volume on its slave, 4. composition of the partial images into the final frame.
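The composition stage (4) amounts to blending the slaves' partial RGBA images with the "over" operator in visibility order. A minimal self-contained sketch of that blend (the pixel layout and the hard-coded example values are assumptions for illustration, not DViz code):

```cpp
// Front-to-back "over" compositing of partial images, as used in the
// composition stage of sort-last volume rendering. Premultiplied alpha.
#include <cstdio>
#include <vector>

struct Rgba { float r, g, b, a; };

// Front-to-back "over": 'front' is closer to the viewer than 'back'.
Rgba over(const Rgba& front, const Rgba& back) {
    float t = 1.0f - front.a;
    return { front.r + t * back.r, front.g + t * back.g,
             front.b + t * back.b, front.a + t * back.a };
}

// Composite partial images already sorted front-to-back.
std::vector<Rgba> composite(const std::vector<std::vector<Rgba>>& partials) {
    std::vector<Rgba> out = partials.front();
    for (size_t i = 1; i < partials.size(); ++i)
        for (size_t p = 0; p < out.size(); ++p)
            out[p] = over(out[p], partials[i][p]);
    return out;
}

int main() {
    // Two one-pixel "images" from two slaves, front slave first.
    std::vector<std::vector<Rgba>> partials = {
        {{0.5f, 0.0f, 0.0f, 0.5f}},          // semi-transparent red, in front
        {{0.0f, 0.0f, 1.0f, 1.0f}} };        // opaque blue, behind
    Rgba px = composite(partials)[0];
    std::printf("composited pixel: %.2f %.2f %.2f %.2f\n", px.r, px.g, px.b, px.a);
}
```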
Distributed volume rendering – Pipelined binary-swap compositing
[Figure: nodes P0-P3 exchange image halves over log2(N) steps; the composited result is gathered on the MASTER node.]
- Ma et al., IEEE CG&A 1994: binary-swap compositing
- Cavin et al., IEEE Visualization 2005 and Eurographics PGV 2006: DViz pipelined implementation
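A small stand-alone simulation of the binary-swap schedule may help read the figure (the node count and image size are arbitrary; it only prints which image half each node sends at each step, without blending real pixels):

```cpp
// Sketch of the binary-swap compositing schedule (Ma et al. 1994):
// at step s, node i exchanges image halves with partner i XOR 2^s, so
// after log2(N) steps each node holds 1/N of the fully composited image,
// which is then gathered on the master node.
#include <cstdio>

int main() {
    const int numNodes = 4;                   // must be a power of two
    const int imageSize = 1024;               // pixels (1D for simplicity)

    for (int node = 0; node < numNodes; ++node) {
        int begin = 0, end = imageSize;       // region this node still owns
        for (int step = 0, bit = 1; bit < numNodes; ++step, bit <<= 1) {
            int partner = node ^ bit;
            int mid = (begin + end) / 2;
            // The lower-ranked node of each pair keeps the first half,
            // the higher-ranked one keeps the second half.
            if (node < partner) {
                std::printf("step %d: node %d sends [%d,%d) to node %d, keeps [%d,%d)\n",
                            step, node, mid, end, partner, begin, mid);
                end = mid;
            } else {
                std::printf("step %d: node %d sends [%d,%d) to node %d, keeps [%d,%d)\n",
                            step, node, begin, mid, partner, mid, end);
                begin = mid;
            }
        }
        std::printf("node %d finally owns pixels [%d,%d)\n", node, begin, end);
    }
}
```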
Distributed volume rendering – Pipelined binary-swap compositing
On 16 nodes, each equipped with a GeForce 6800 ULTRA (512 MB), the DViz master aggregates the cluster into one virtual graphics card of 8 GB; a ROI of several GBs is rendered at ~15-20 fps.
Distributed data management – Limited disk-to-memory bandwidth
On a single workstation the disk-to-RAM bandwidth (~50 MB/s) is very low compared to the RAM-to-V-RAM bandwidth (~1 GB/s). Could transfers through the Gigabit Ethernet network be faster than reading from the local disk?
Distributed data management – Disk vs. network bandwidth
Measured transfers reach ~50 MB/s from the local disk against ~220 MB/s through the network, i.e. roughly 4x faster through the network than from disk.
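For the ~1 GB ROI targeted in the introduction, this is roughly the difference between ~20 s of loading from the local disk at 50 MB/s and ~5 s of loading from the memory of remote nodes at 220 MB/s.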
Distributed data management – Our fully dynamic implementation (DHCS)
Each node contributes its 8 GB of main memory to a cluster-wide cache: a missing brick is fetched from the memory of a node that holds it through the network (~220 MB/s), and is read from disk (~50 MB/s) only when no node holds it. The memory state is fully dynamic and must be kept up to date on every node.
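A highly simplified sketch of the lookup order in such a distributed hierarchical cache (all types and members are invented placeholders, the distributed directory, network transfer and eviction are stubbed out; this is not the paper's actual implementation):

```cpp
// Hierarchical brick lookup in a DHCS-like cache: local V-RAM, then local
// RAM, then a remote node's RAM over the network, and only as a last
// resort the disk. Illustrative placeholders only.
#include <cstdint>
#include <optional>
#include <unordered_map>
#include <vector>

using BrickId = uint64_t;
using Brick = std::vector<uint8_t>;

class HierarchicalCache {
public:
    Brick fetch(BrickId id) {
        if (auto b = vram_.find(id); b != vram_.end()) return b->second;              // level 1: V-RAM
        if (auto b = ram_.find(id);  b != ram_.end())  return promote(id, b->second); // level 2: RAM
        if (auto owner = lookupRemoteOwner(id))                                       // level 3: remote RAM
            return promote(id, receiveFromNode(*owner, id));                          //   (~220 MB/s)
        return promote(id, readFromDisk(id));                                         // level 4: disk (~50 MB/s)
    }

private:
    Brick promote(BrickId id, Brick data) {        // cache at the faster levels
        ram_[id] = data;                           // (eviction policy omitted)
        vram_[id] = data;
        return data;
    }
    // Stubs standing in for the distributed directory, the network
    // transfer and the out-of-core read.
    std::optional<int> lookupRemoteOwner(BrickId) { return std::nullopt; }
    Brick receiveFromNode(int /*node*/, BrickId) { return {}; }
    Brick readFromDisk(BrickId) { return Brick(1 << 20, 0); }

    std::unordered_map<BrickId, Brick> vram_, ram_;
};
```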
Results – Real-time rendering and volume roaming
Test data set: 30 copies of the Visible Human data set, i.e. 5580x5400x3840 voxels (~107 GB), roamed with a full-resolution ROI of 1000x1000x1000 (~1 GB).
=> Real-time rendering and volume roaming at full resolution, at 12 fps on average, on a 16-node cluster.
Conclusions
- Achieved: volume visualization of ~100 GB data sets and volume roaming of a ROI of several GBs, through a cluster-based hierarchical cache system combining distributed volume rendering and distributed data management.
- Future work: compression techniques, better load balancing of the communications on the network, pre-fetching strategies to hide disk accesses.
- Other use cases: combination of multiple attributes of 100 GB each, real-time full-resolution volume slicing at 20-30 slices per second.
Acknowledgements
This work involved and has been supported by:
- Earth Decision (now part of Paradigm), http://www.earthdecision.com
- LORIA / INRIA – Project ALICE, http://alice.loria.fr
- DViz, http://www.dviz.fr
- Region Lorraine (CRVHP)
- GOCAD consortium, http://www.gocad.org
Backup slides
Distributed data management – System architecture [figure]
Distributed data management – System flow chart [figure]
Distributed data management – Memory to graphics-memory bandwidth
The optimal transfer rate is obtained with the GL_BGRA pixel format.
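For illustration, the fast upload path amounts to passing GL_BGRA as the external format when updating the GL_RGBA8 3D texture. A sketch, assuming an existing OpenGL (>= 1.2) context with a 3D texture already allocated (extension/loader setup is omitted and platform-dependent; the brick dimensions and data are examples, not the paper's code):

```cpp
// Brick upload into an existing GL_RGBA8 3D texture using the GL_BGRA
// external format, which the slide reports as giving the best transfer
// rate on the tested hardware.
#include <GL/gl.h>
#include <vector>

void uploadBrick(GLuint texture, int x, int y, int z,
                 int w, int h, int d, const std::vector<GLubyte>& bgraTexels) {
    glBindTexture(GL_TEXTURE_3D, texture);
    // The internal format stays GL_RGBA8 (chosen at glTexImage3D time);
    // only the external format of the client data is GL_BGRA, so the
    // driver can transfer the brick without swizzling on the CPU.
    glTexSubImage3D(GL_TEXTURE_3D, 0,          // target, mip level
                    x, y, z,                   // brick offset in the texture
                    w, h, d,                   // brick dimensions
                    GL_BGRA,                   // external pixel format
                    GL_UNSIGNED_BYTE,          // component type
                    bgraTexels.data());
}
```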
Distributed data management – GL_BGRA texture packing [figure]
Distributed data management – Frame rate against the brick dimensions
With GL_RGBA8 textures the system is CPU bound only up to brick dimensions of 32x32x16; for larger bricks it becomes GPU bound.
Distributed data management – Network bandwidth
With 2x Gigabit Ethernet and 512 KB pages the sustained network bandwidth reaches ~220 MB/s, compared to ~114 MB/s over a single Gigabit Ethernet link (1 Gb/s is ~125 MB/s theoretical).
Distributed data management – Classical static DSM implementation (DeMarle et al. 2004)
The data set is distributed at initialization as a static resident set in each node's 8 GB of main memory, and bricks are then exchanged through the network at ~220 MB/s. The approach is limited by the global amount of main memory available on the cluster.
Distributed data management – Binary all-to-all communication
Starting from each node knowing only its own buffer state, 1 buffer state is transferred at step 0, 2 at step 1, and 2^i at step i; after log2(N) steps each node knows the buffer state of the entire cluster.
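A stand-alone simulation of this exchange pattern (the node count is arbitrary and the "state" is reduced to a set of node ids; real DHCS nodes would exchange buffer-state descriptors over the network):

```cpp
// Binary all-to-all (recursive doubling) exchange: at step i each node
// swaps everything it has learned so far (2^i states) with the partner
// obtained by flipping bit i of its rank, so after log2(N) steps every
// node knows the buffer state of the whole cluster.
#include <cstdio>
#include <set>
#include <vector>

int main() {
    const int numNodes = 8;                                   // power of two
    std::vector<std::set<int>> known(numNodes);
    for (int n = 0; n < numNodes; ++n) known[n].insert(n);    // own state only

    for (int bit = 1, step = 0; bit < numNodes; bit <<= 1, ++step) {
        std::vector<std::set<int>> next = known;
        for (int n = 0; n < numNodes; ++n) {
            int partner = n ^ bit;                            // flip bit 'step'
            next[n].insert(known[partner].begin(), known[partner].end());
        }
        known = next;
        std::printf("after step %d each node knows %zu node states\n",
                    step, known[0].size());                   // 2^(step+1)
    }
    std::printf("done: every node knows all %zu states\n", known[0].size());
}
```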