
Slide 1: Interactive Rendering of Large Volume Data Sets
Written by: Stefan Guthe, Michael Wand, Julius Gonser, Wolfgang Straßer (University of Tübingen, DE)
IEEE Visualization 2002, Boston, MA
Presented by James R. McGirr, CS526 UIC

Slide 2: Overview of Paper
► An algorithm for rendering very large volume data sets at interactive frame rates on standard PC hardware.
► The algorithm takes scalar data sampled on a regular grid as input.
► The data is compressed into a hierarchical wavelet representation in a preprocessing step.
► During rendering, the data is decompressed on the fly and rendered using hardware texture mapping.

Slide 3: Motivation
► Many areas in medicine, computational physics, biology, etc. deal with large volumetric data sets that demand adequate visualization.
► Direct volume rendering: each point in space is assigned a density for the emission and absorption of light, and the volume renderer computes the light reaching the eye along viewing rays.
► This can be done efficiently using texture-mapping hardware: the volume is discretized into textured slices that are blended over each other using alpha blending (see the sketch below). This now runs in real time on standard off-the-shelf PCs.
► The main drawback is that this is only feasible for small data sets; a 256³ voxel data set is currently infeasible. What are we going to do?
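To make the slice-based approach concrete, here is a minimal fixed-function OpenGL sketch (not the authors' code; the function name, slice placement, and blending setup are assumptions) that draws axis-aligned slices through a bound 3D texture and composites them back to front with alpha blending.

```cpp
// Minimal sketch of slice-based volume rendering with a 3D texture and alpha
// blending; all names and constants here are illustrative assumptions.
#include <GL/gl.h>

// Draws 'numSlices' axis-aligned quads through a bound 3D texture, back to
// front, so alpha blending composites emission and absorption.
void drawVolumeSlices(GLuint volumeTex, int numSlices)
{
    glEnable(GL_TEXTURE_3D);
    glBindTexture(GL_TEXTURE_3D, volumeTex);

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);   // standard "over" operator

    for (int i = 0; i < numSlices; ++i) {
        float z = (i + 0.5f) / numSlices;                 // slice depth in [0,1]
        glBegin(GL_QUADS);
        glTexCoord3f(0.0f, 0.0f, z); glVertex3f(-1.0f, -1.0f, 2.0f * z - 1.0f);
        glTexCoord3f(1.0f, 0.0f, z); glVertex3f( 1.0f, -1.0f, 2.0f * z - 1.0f);
        glTexCoord3f(1.0f, 1.0f, z); glVertex3f( 1.0f,  1.0f, 2.0f * z - 1.0f);
        glTexCoord3f(0.0f, 1.0f, z); glVertex3f(-1.0f,  1.0f, 2.0f * z - 1.0f);
        glEnd();
    }

    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_3D);
}
```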

Slide 4: Idea
► Hierarchical wavelet representation (data compression)
► Projective classification
  - View-dependent priority schedule
► Caching

Slide 5: Overview of Wavelet Hierarchy
► The data volume is stored as a hierarchy of wavelet coefficients (in an octree).
► Only the levels of detail necessary for display will be decompressed and sent to the texture hardware.
► Typical compression of 30:1 without noticeable artifacts in the image; the Visible Human data set can be stored in 222 MB instead of 6.5 GB.
► During rendering, the local frequency spectrum can be analyzed and the appropriate rendering resolution determined.

Slide 6: Octree Construction Algorithm
► The data is divided into cubic blocks of (2k)³ voxels (k is typically 16).
► A wavelet filter is applied to each block, producing a lowpass-filtered block of k³ voxels and (2k)³ − k³ wavelet coefficients.
► A cube of 8 adjacent lowpass-filtered blocks is grouped together and the filtering algorithm is applied to this block.
► Continue recursively until only a single block remains (a C++ sketch follows below).
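A minimal C++ sketch of this bottom-up construction, assuming the blocks of each level are stored in Morton (Z-curve) order so that every run of 8 consecutive blocks forms a 2×2×2 octant; the Block type and the helpers mergeLowpass() and waveletFilter() are placeholders, not the authors' code.

```cpp
// Sketch of the bottom-up wavelet octree construction described on the slide.
#include <cstddef>
#include <vector>

struct Block {
    std::vector<float> lowpass;   // k^3 lowpass-filtered voxels
    std::vector<float> coeffs;    // (2k)^3 - k^3 wavelet coefficients
    Block* children[8] = {};      // finer-level blocks this node was built from
};

// Assumed helper: assembles eight adjacent k^3 lowpass blocks into one (2k)^3 block.
std::vector<float> mergeLowpass(Block* const children[8], int k);

// Assumed helper: applies the wavelet filter (e.g. a linearly interpolating
// spline wavelet) to a (2k)^3 block, yielding a node with k^3 lowpass voxels
// and the remaining (2k)^3 - k^3 coefficients.
Block* waveletFilter(const std::vector<float>& block2k, int k);

// Builds one coarser level from the level below; called repeatedly until a
// single root block remains. Assumes 'finer' is in Morton order.
std::vector<Block*> buildCoarserLevel(const std::vector<Block*>& finer, int k)
{
    std::vector<Block*> coarser;
    for (std::size_t i = 0; i + 8 <= finer.size(); i += 8) {
        Block* parent = waveletFilter(mergeLowpass(&finer[i], k), k);
        for (int c = 0; c < 8; ++c)
            parent->children[c] = finer[i + c];
        coarser.push_back(parent);
    }
    return coarser;
}
```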

Slide 7: Octree Construction (figure)
Figure: a (2k)³ block of voxels is wavelet-filtered into a k³ lowpass block plus (2k)³ − k³ wavelet coefficients.
Note: the filter they used was a linearly interpolating spline wavelet.

Slide 8: Final Octree (figure)
Figure: the octree of k³ lowpass blocks. Resolution increases by a factor of 2 with each level you go down; C denotes the wavelet coefficients needed to reconstruct the children of a node.

Slide 9: Comments on the Coefficients
► For k = 16, there are ~29,000 coefficients per node.
► Wavelet coefficients of low importance are discarded: all values below a threshold are mapped to zero (a small sketch follows below). Setting the threshold to zero leads to lossless compression (4:1 for typical data sets).
► The coefficients are encoded in a compact bit stream:
  - Run-length encoding combined with Huffman encoding.
  - Arithmetic encoding increases the compression ratio by ~15% compared to run-length/Huffman encoding.
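A small sketch of the thresholding step and the run-length stage, for illustration only; the paper additionally Huffman- or arithmetic-codes the resulting symbols, which is omitted here.

```cpp
// Thresholding of wavelet coefficients followed by run-length encoding of zero runs.
#include <cmath>
#include <cstdint>
#include <utility>
#include <vector>

// Zero out wavelet coefficients whose magnitude falls below the threshold.
// A threshold of 0 keeps every coefficient, i.e. lossless compression.
void thresholdCoefficients(std::vector<float>& coeffs, float threshold)
{
    for (float& c : coeffs)
        if (std::fabs(c) < threshold)
            c = 0.0f;
}

// Encode the coefficient stream as (zero-run-length, value) pairs.
std::vector<std::pair<std::uint32_t, float>> runLengthEncode(const std::vector<float>& coeffs)
{
    std::vector<std::pair<std::uint32_t, float>> out;
    std::uint32_t zeros = 0;
    for (float c : coeffs) {
        if (c == 0.0f) { ++zeros; continue; }
        out.emplace_back(zeros, c);
        zeros = 0;
    }
    if (zeros > 0)
        out.emplace_back(zeros, 0.0f);   // trailing run of zeros
    return out;
}
```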

Slide 10: Rendering
► We now have the data represented in a multi-resolution octree.
► Use projective classification to pick which nodes to render using hardware texture mapping.

Slide 11: Projective Classification
► Extract nodes from the tree that have the same resolution as the display resolution.
► Exclude all nodes outside of the view frustum.

Slide 12: Projective Classification Algorithm
► Start at the root and traverse the tree recursively.
► For each node, test whether it is located outside the view frustum. If so, ignore it and stop.
► Compare the projected voxel-grid spacing with the screen resolution. If it is equal or smaller, send the node to the renderer; otherwise subdivide the node and repeat (see the sketch below).
► This results in O(log n) rendering time.
► The problem is the very large constant hidden in the big-O notation.
► E.g., for a close-up of a volume with a depth of 2048 voxels, the algorithm obtains more than 230 million voxels after projective classification.
► This is 4× more than the texture memory of a typical graphics board (64 MB).
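A C++ sketch of the recursive traversal; the node layout, the frustum test, and the projected-spacing estimate are assumed placeholders rather than the paper's exact implementation.

```cpp
// Recursive projective classification: cull, accept, or refine each octree node.
#include <vector>

struct OctreeNode {
    float center[3];
    float halfSize;            // half the edge length of the node's cube
    float voxelSpacing;        // grid spacing of this node's k^3 brick
    OctreeNode* children[8];   // all null for leaf nodes
};

// Assumed helpers.
bool intersectsViewFrustum(const OctreeNode& node);    // view-frustum culling test
float projectedVoxelSpacing(const OctreeNode& node);   // voxel spacing projected to screen pixels

// Collects the nodes whose resolution matches the display resolution.
void classify(OctreeNode* node, float pixelSpacing, std::vector<OctreeNode*>& toRender)
{
    if (node == nullptr || !intersectsViewFrustum(*node))
        return;                                        // outside the frustum: ignore the subtree

    // If one projected voxel is at most one pixel, this level of detail suffices.
    if (projectedVoxelSpacing(*node) <= pixelSpacing || node->children[0] == nullptr) {
        toRender.push_back(node);
        return;
    }
    for (OctreeNode* child : node->children)           // otherwise subdivide and repeat
        classify(child, pixelSpacing, toRender);
}
```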

Slide 13: What do we do now?
► We need a refined classification criterion: a view-dependent priority schedule.
  - Nodes that are near the viewer have a higher priority.
  - The maximum number of voxels that can be processed in the rendering stage is known; stop once this is reached.
  - Use a priority queue, based on distance from the viewer and weighted by the amount of detail in the region, obtained from the coefficient data (a sketch follows below).
(Figure: the viewer, the viewing plane, and a slice s.)
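A sketch of the view-dependent schedule using a std::priority_queue; the priority formula (detail weighted by distance) and the budget handling are assumptions that only approximate the criterion described on the slide.

```cpp
// View-dependent priority schedule: nearer, more detailed nodes are selected
// first until a fixed voxel budget is exhausted.
#include <queue>
#include <utility>
#include <vector>

struct Candidate {
    int nodeId;        // index of an octree node selected by classification
    float distance;    // distance from the viewer to the node
    float detail;      // e.g. sum of wavelet coefficient magnitudes in the node
    long voxelCount;   // voxels this node contributes when rendered
};

struct ByPriority {
    bool operator()(const Candidate& a, const Candidate& b) const {
        // Nearer and more detailed nodes get higher priority (assumed weighting).
        return a.detail / (1.0f + a.distance) < b.detail / (1.0f + b.distance);
    }
};

// Pops candidates in priority order until the voxel budget would be exceeded.
std::vector<Candidate> schedule(std::vector<Candidate> candidates, long voxelBudget)
{
    std::priority_queue<Candidate, std::vector<Candidate>, ByPriority> queue(
        ByPriority{}, std::move(candidates));
    std::vector<Candidate> selected;
    long used = 0;
    while (!queue.empty() && used + queue.top().voxelCount <= voxelBudget) {
        selected.push_back(queue.top());
        used += queue.top().voxelCount;
        queue.pop();
    }
    return selected;
}
```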

Slide 14: Caching
► So far so good, but we would still not be able to perform an interactive walkthrough if we decompressed the wavelet representation from scratch for each frame.
► Store decompressed volume blocks from the octree; the coefficients need not be stored.
► The user defines a fixed amount of cache memory (a cache sketch follows below).
► 3D textures (OpenGL texture objects) must be created from the cache.
► Texture objects must be uploaded to texture memory; this is done automatically by the OpenGL driver.
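A sketch of a fixed-size LRU cache for decompressed blocks; the class and its eviction policy are assumptions (the slide only states that a user-defined amount of cache memory holds decompressed blocks), and the creation of OpenGL 3D texture objects from cached blocks is left out.

```cpp
// Fixed-size least-recently-used cache of decompressed voxel blocks.
#include <cstddef>
#include <cstdint>
#include <list>
#include <unordered_map>
#include <utility>
#include <vector>

class BlockCache {
public:
    explicit BlockCache(std::size_t maxBlocks) : maxBlocks_(maxBlocks) {}

    // Returns the cached block if present (marking it most recently used),
    // or nullptr so the caller can decompress the block and insert it.
    const std::vector<float>* find(std::uint64_t nodeId) {
        auto it = index_.find(nodeId);
        if (it == index_.end()) return nullptr;
        lru_.splice(lru_.begin(), lru_, it->second);    // move to the front
        return &it->second->data;
    }

    void insert(std::uint64_t nodeId, std::vector<float> voxels) {
        if (index_.count(nodeId)) return;               // already cached
        lru_.push_front(Entry{nodeId, std::move(voxels)});
        index_[nodeId] = lru_.begin();
        if (lru_.size() > maxBlocks_) {                 // evict the least recently used block
            index_.erase(lru_.back().id);
            lru_.pop_back();
        }
    }

private:
    struct Entry { std::uint64_t id; std::vector<float> data; };
    std::size_t maxBlocks_;
    std::list<Entry> lru_;
    std::unordered_map<std::uint64_t, std::list<Entry>::iterator> index_;
};
```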

Slide 15: Results
► The algorithm was implemented in C++ using OpenGL with nVidia extensions for rendering.
► 2 GHz Pentium 4 PC with 1 GB RAM.
► nVidia GeForce 4 Ti4600 graphics board with 128 MB local video memory.

Slide 16: Data Sets Examined
► CT scan of a Christmas tree: 512 × 512 × 999 voxels with 12 bits per voxel.
► Visible Human Male: 2048 × 1216 × 1877 voxels, RGB.
► Visible Human Woman: 2048 × 1216 × 1734 voxels, RGB.
► Renderings were made using gradient-based lighting and a classification function with several semi-transparent iso-surfaces.

Slide 17: Statistics
► Run-length Huffman encoding (the one used here): decompression speed of 50 MB/s.
► Arithmetic encoding: decompression speed of 4.5 MB/s; 10-15% higher compression than Huffman.
► The resolution of the output images is 256² pixels.
► The Christmas tree was compressed using lossless compression (3.4:1).
► The Visible Human data sets were compressed using lossy compression (man 30:1, woman 40:1).
► Preprocessing time: Christmas tree 1 hour, Visible Human 5 hours each. These times are dominated by hard-disk access (CPU utilization is only 6-7% during compression).

Slide 18: More Results
► The more blocks used, the higher the quality of the image:
  - 2048 blocks → 3-4 fps
  - 1024 blocks → 7 fps
  - 512 blocks → 10 fps
► Cache efficiency is very high: for the high-quality rendering, only 40-60 blocks have to be decompressed per frame and 20-30 textures have to be constructed, on average.
► If caching is deactivated, the average frame rate falls to 0.3 fps for all test scenes.

Slide 19: And Some More Results
► For the walkthrough animation that you will see in a second:
  - 6% of the time is spent on decompression
  - 5% on gradient calculations
  - 1% on uploading textures to the graphics board
  - 88% on rendering
► So the processor spends most of the time waiting for the graphics hardware.

Slide 20: Conclusion
► The algorithm presented in the paper uses a hierarchical wavelet representation to store very large data sets in main memory.
► The algorithm extracts the levels of detail necessary for the current viewpoint on the fly.
► By using intelligent caching, interactive walkthroughs of large data sets are feasible.

Slide 21: References
► Stefan Guthe, Michael Wand, Julius Gonser, Wolfgang Straßer: "Interactive Rendering of Large Volume Data Sets," IEEE Visualization 2002.
► Stefan Guthe, Wolfgang Straßer: "Real-time Decompression and Visualization of Animated Volume Data," IEEE 2001.
► Michael Meissner, Stefan Guthe, Wolfgang Straßer: "Interactive Lighting Models and Pre-Integration for Volume Rendering on PC Graphics Accelerators."

Slide 22: Questions?

