1 Dr. Will Schroeder, Kitware
ESCE 4960: Open Source Software Practice. Lecture 8: VTK Pipeline. September 24, 2007. Dr. Will Schroeder, Kitware

2 Visualization Pipeline Topics
Interpreters; Visualization Model; Pipeline Mechanics; Data Management; Start, End, & Progress Events; Surface Rendering; Volume Rendering

3 Interpreters VTK provides automatic wrapping for the following interpreted languages: Tcl, Java, and Python. Interpreters provide faster turn-around (no compilation) but suffer from slower execution. VTK can be accessed from C++, but sometimes it’s easier to test a particular part of the code from an interpreted language instead, especially because you can make modifications and see the effects without recompiling your program. In VTK, the interpreted languages we chose to support are Tcl, Java, and Python. As is noted, programs written in interpreted languages execute more slowly than their compiled counterparts, so we tend to use C++ for application development rather than one of the interpreted languages.

4 Tcl Interpreter To use VTK from Tcl, add the following line to the beginning of your script: package require vtk. Create an actor in Tcl: vtkActor actor. Invoke a method (SetPosition takes x, y, and z arguments): actor SetPosition 0 0 0. We’ll walk through how to use VTK from within the Tcl language. Note the similarities between creating a vtkActor and invoking a method on it from C++ and from Tcl. It is fairly straightforward to convert a C++ example to Tcl and vice versa.

5 Tcl Interpreter A special package provides a Tcl interpreter when the ‘u’ key is pressed in the render window: package require vtkinteraction iren AddObserver UserEvent {wm deiconify .vtkInteract} These 2 lines of Tcl code allow you to bring up a Tcl interpreter from a running VTK program. From the interpreter you can modify the parameters of your program and see the results immediately.

6 Tcl Interpreter vtkActor ListInstances: list all vtkActor objects
vtkActor ListMethods: list all vtkActor methods
anActor Print: print internal state of anActor
vtkCommand DeleteAllObjects: delete all VTK objects
vtkTkRenderWidget: embed a render window in Tk
vtkTkImageViewerWidget: embed an image window in Tk
There are a few special VTK commands that are useful from within Tcl.

7 Exercise 2 Run an example exercise2.tcl
Use ‘u’ (user-defined) key to bring up the interactor interface. Try some commands: vtkLODActor ListInstances; sphereActor Print; sphereActor ListMethods. Double-click on the exercise2.tcl file. After you bring up the interactor, type each of these commands and press Enter to see the results in the interactor interface.

8 The Visualization Pipeline
A sequence of algorithms that operate on data objects to generate geometry that can be rendered by the graphics engine or written to a file: Source → Data → Filter → Data → Mapper → Actor → graphics system. In VTK, this is the definition of the visualization pipeline. In this diagram, the items labeled “Source”, “Filter”, and “Mapper” are process objects. The items labeled “Data” are data objects. (We’ll discuss algorithms and data objects next.) “Actor” represents the geometry to be rendered. Data could also be passed to a writer to be saved to a file.

9 Visualization Model
Data objects: represent data, provide access to data, and compute information particular to the data (e.g., bounding box, derivatives). Algorithms: ingest, transform, and output data objects. In the previous slide, you were briefly introduced to data objects and algorithms. Data objects represent the data in a VTK program. They are the classes you use to access data and retrieve particular information about the data. Algorithms operate on those data objects. They are the sources (e.g., data readers), filters (e.g., visualization algorithms), and mappers from the previous slide. We will discuss these different types of algorithms in more detail later.

10 vtkDataObject / vtkDataSet
vtkDataObject represents a “blob” of data: it contains an instance of vtkFieldData (an array of arrays) and has no geometric/topological structure. It is the superclass of all VTK data objects. vtkDataSet has geometric/topological structure: it consists of geometry (points) and topology (cells) and has associated point- and cell-centered data arrays. Convert a data object to a data set with vtkDataObjectToDataSetFilter. The most general way to represent data in VTK is with a vtkDataObject. vtkDataObject does not have any associated geometry/topology – there are no associated points/cells. A data object has field data – an array of arrays – but these arrays are not associated with points/cells since a data object does not define points/cells. vtkDataSet is a subclass of vtkDataObject. Geometry (points) and topology (cells – how the points are connected to each other) are defined at this level. Point-centered and cell-centered attribute data is defined. The next slide illustrates the distinction between data objects and data sets.

11 vtkDataObject / vtkDataSet
Concept / Implementation:
vtkDataObject: field data (array of arrays)
vtkDataSet: geometry & topology (points & cells), plus point data and cell data (point- and cell-centered arrays)

12 Dataset Model A dataset is a data object with structure
Dataset structure consists of points (x-y-z coordinates) and cells (e.g., polygons, lines, voxels), which are defined by connectivity lists referring to point ids. Access is via integer id; representations may be implicit or explicit. As mentioned earlier, datasets contain both points (location in 3D) and cells (connectivity of groups of points), so a cell’s x-y-z position is determined by the points that it is composed of. Both points and cells are accessed by integer ids. The points and cells making up a data set can be specified either implicitly (e.g., vtkImageData or data defined by an implicit function) or explicitly (e.g., vtkUnstructuredGrid).
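The explicit representation described here can be sketched in a few lines of C++. This is a hypothetical miniature for illustration, not VTK's actual classes: points are addressed by integer id, a cell is just a connectivity list of point ids, and cell geometry (here, a centroid) is derived entirely from the referenced points.

```cpp
#include <array>
#include <vector>

// Hypothetical explicit dataset: points addressed by integer id,
// cells stored as connectivity lists of point ids.
struct SimpleDataSet {
  std::vector<std::array<double, 3>> points;  // point id -> (x, y, z)
  std::vector<std::vector<int>> cells;        // cell id -> point ids

  // A cell has no coordinates of its own; its geometry comes from
  // the points it references.
  std::array<double, 3> CellCentroid(int cellId) const {
    std::array<double, 3> c = {0.0, 0.0, 0.0};
    const std::vector<int>& conn = cells[cellId];
    for (int pid : conn)
      for (int k = 0; k < 3; ++k) c[k] += points[pid][k];
    for (int k = 0; k < 3; ++k) c[k] /= conn.size();
    return c;
  }
};
```

Because cells store only ids, moving a point automatically moves every cell that references it, which is the point of the points/cells split.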

13 vtkDataSet Subclasses
vtkImageData, vtkRectilinearGrid, vtkStructuredGrid, vtkPolyData, vtkUnstructuredGrid. Structured data – vtkImageData (axis-aligned, even spacing per dimension), vtkRectilinearGrid (axis-aligned, irregular spacing allowed), vtkStructuredGrid (not axis-aligned, irregular spacing allowed – uses hexahedra or quads). Unstructured data – vtkPolyData (2D cells), vtkUnstructuredGrid (vtkPolyData + 3D cells).

14 Data Set Attributes vtkDataSet also has point and cell attribute data:
Scalars; Vectors (3-vector); Tensors (3x3 symmetric matrix); Normals (unit vector); Texture Coordinates (1 to 3 components); Array of arrays (i.e., field data). In addition to geometry and topology, data sets also have attribute data for points and for cells. vtkPointData and vtkCellData are both subclasses of vtkFieldData – an array of arrays. The specific attribute data arrays (scalars, vectors, etc.) are just pointers to field data arrays. Additional point or cell attribute data arrays can also be stored. Historical note: In previous versions of VTK, scalars, vectors, etc., were not contained in field data, but were specially-treated arrays. vtkPointData and vtkCellData each contained an instance of vtkFieldData where additional arrays could be stored, but special methods had to be used to move an array from field data to a specific type of attribute data. Now, all that is required to do this is resetting a pointer.


16 Scalars (An Aside) Scalars are represented by a vtkDataArray
Scalars are typically single valued. Scalars can also represent color: I (intensity), IA (intensity-alpha: alpha is opacity), RGB (red-green-blue), or RGBA (RGB + alpha). Scalars can be used to generate colors: mapped through a lookup table, or, if unsigned char, used as a direct color specification. Scalars are one of the most common types of attribute data. They (and all other attribute data) are represented as vtkDataArrays. Scalars are usually used to determine the colors used in displaying the data set. They can either represent the colors directly (1 to 4 component scalars) or they can be used to generate colors (single-component scalars mapped through a lookup table). In directly representing colors, I and IA are grayscale, and RGB and RGBA are color. In generating colors, the color lookup table is associated with a range of scalar values. The scalars in the data correspond to particular colors in this range.
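The lookup-table path can be sketched as follows. The linear grayscale ramp and the clamping of out-of-range scalars are assumptions for illustration; this is not how vtkLookupTable is implemented, just the idea of mapping a scalar range to RGBA.

```cpp
#include <algorithm>
#include <array>

// Hypothetical linear lookup table: maps a scalar in [lo, hi] to an
// opaque grayscale RGBA color, the way single-component scalars are
// mapped through a lookup table. Unsigned-char scalars could instead
// be passed through unchanged as direct colors.
std::array<unsigned char, 4> MapThroughLUT(double s, double lo, double hi) {
  double t = (s - lo) / (hi - lo);
  t = std::clamp(t, 0.0, 1.0);  // out-of-range scalars clamp to the ends
  unsigned char v = static_cast<unsigned char>(t * 255.0 + 0.5);
  return {v, v, v, 255};        // grayscale, fully opaque
}
```

A real table would store a discretized array of colors (possibly non-gray) and index into it, but the range-normalize-then-index step is the same.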

17 Algorithms
Algorithms operate on data objects. A source has one or more outputs and no inputs; a filter has one or more inputs and one or more outputs; a mapper has one or more inputs and no outputs. We mentioned algorithms briefly earlier. They operate on the data objects we have just been discussing. There are three types of algorithms in VTK: sources, filters, and mappers. Sources produce data objects but have no inputs (e.g., file readers, cone source, etc.). Filters make changes to the data (the visualization algorithms in VTK). They take a data object as input and produce another data object as output. Mappers take data objects as input but do not produce output. They are the connection between the data objects and the rendering parts of VTK. They convert the data into rendering primitives.

18 Pipeline Execution Model (conceptual depiction)
Data flows downstream (via RequestData()): Source → Data → Filter → Data → Mapper → Render(). Update requests travel upstream (via Update()). The data objects and algorithms we’ve discussed can be linked together in VTK to form a visualization pipeline, resulting in a particular visualization of some data. When the pipeline is executed, the data is produced from a source and flows down the pipeline through any filters to a mapper and into the rendering subsystem. In VTK, pipeline execution is demand-driven. This means that the pipeline is not executed until Update is called – either by the rendering subsystem, by a writer, or called directly. Update traces back up the pipeline to the point where the data is up-to-date. Execute happens from that point down the pipeline to produce the new visualization.
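The demand-driven behavior can be sketched with a toy two-stage pipeline. The Stage struct and its time counters below are hypothetical stand-ins for VTK's modified-time machinery, not the real vtkAlgorithm classes: Update() walks upstream first, and the "RequestData" step runs only where something is out of date.

```cpp
// Minimal sketch of demand-driven execution: each stage remembers when
// it was last modified and when it last executed; Update() re-executes
// a stage only if the stage itself, or its input, changed since then.
struct Stage {
  Stage* input = nullptr;        // upstream stage; null for a source
  unsigned long modifiedTime = 1;
  unsigned long executeTime = 0;
  int executions = 0;            // counts "RequestData" calls

  void Modified() { modifiedTime = ++globalTime; }

  void Update() {
    if (input) input->Update();  // trace back up the pipeline first
    unsigned long inTime = input ? input->executeTime : 0;
    if (executeTime < modifiedTime || executeTime < inTime) {
      ++executions;              // the stage's "RequestData" would run here
      executeTime = ++globalTime;
    }
  }
  static unsigned long globalTime;
};
unsigned long Stage::globalTime = 1;
```

Calling Update() twice in a row executes each stage once; touching the source with Modified() makes the next Update() re-execute everything downstream, which is the behavior the slide describes.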

19 Creating Pipeline Topology
bFilter->SetInputConnection(aFilter->GetOutputPort());
bFilter->SetInputConnection(1, aFilter->GetOutputPort(2));
In order to produce a visualization, we must set up a pipeline. The SetInputConnection method (in filters and mappers) and the GetOutputPort method (in sources and filters) are used to connect the pieces of the pipeline. Inputs and outputs are both vtkDataSets (or a subclass of vtkDataSet). Several connections on one input port: use AddInputConnection() if allowed by the filter (e.g., vtkAppendFilter). Reusing an output port for several connections is OK.

20 Role of Type-Checking FillInputPortInformation() specifies the input dataset type; FillOutputPortInformation() specifies the output dataset type. Type-checking is performed at run-time.
int vtkPolyDataAlgorithm::FillInputPortInformation(
  int vtkNotUsed(port), vtkInformation *info)
{
  info->Set(vtkAlgorithm::INPUT_REQUIRED_DATA_TYPE(), "vtkPolyData");
  return 1;
}
Different algorithms operate on different dataset types, so in order to connect two algorithms, the output dataset type of one must match the input dataset type of the next one in the pipeline.

21 Example Pipeline Decimation, smoothing, normals Implemented in C++
vtkBYUReader → vtkDecimatePro → vtkSmoothPolyDataFilter → vtkPolyDataNormals → vtkPolyDataMapper. Note: data objects are not shown; they are implied from the output type of the filter.

22 Create Reader & Decimator
vtkBYUReader *byu = vtkBYUReader::New();
byu->SetGeometryFileName("../../vtkdata/fran_cut.g");
vtkDecimatePro *deci = vtkDecimatePro::New();
deci->SetInputConnection( byu->GetOutputPort() );
deci->SetTargetReduction( 0.9 );
deci->PreserveTopologyOn();
deci->SetMaximumError( );

23 Smoother & Graphics Objects
vtkSmoothPolyDataFilter *smooth = vtkSmoothPolyDataFilter::New();
smooth->SetInputConnection( deci->GetOutputPort() );
smooth->SetNumberOfIterations( 20 );
smooth->SetRelaxationFactor( 0.05 );
vtkPolyDataNormals *normals = vtkPolyDataNormals::New();
normals->SetInputConnection( smooth->GetOutputPort() );
vtkPolyDataMapper *cyberMapper = vtkPolyDataMapper::New();
cyberMapper->SetInputConnection( normals->GetOutputPort() );
vtkActor *cyberActor = vtkActor::New();
cyberActor->SetMapper( cyberMapper );
cyberActor->GetProperty()->SetColor( 1.0, 0.49, 0.25 );
cyberActor->GetProperty()->SetRepresentationToWireframe();

24 More Graphics Objects
vtkRenderer *ren1 = vtkRenderer::New();
vtkRenderWindow *renWin = vtkRenderWindow::New();
renWin->AddRenderer( ren1 );
vtkRenderWindowInteractor *iren = vtkRenderWindowInteractor::New();
iren->SetRenderWindow( renWin );
ren1->AddViewProp( cyberActor );
ren1->SetBackground( 1, 1, 1 );
renWin->SetSize( 500, 500 );
iren->Start();

25 Results
Before: 52,260 triangles. After decimation and smoothing: 7,477 triangles.

26 Exercise 2b Compile and run exercise2b.cxx
Understand what the example does... How many source objects are there? How many filter objects? How many mapper objects? How many data objects are there?

27 Exercise 2b (solution) # of Sources: 1 # of Filters: 5 # of Mappers: 2
# of Data Objects: 6 Sources – vtkSampleFunction Filters – vtkContourFilter, vtkDecimatePro, vtkSmoothPolyDataFilter, vtkPolyDataNormals, vtkOutlineFilter Mappers – 2 vtkPolyDataMappers (contMapper, outlineMapper) Data Objects – outputs of the source and filters

28 Exercise 2b - Notes vtkImplicitFunction
Any function of the form F(x,y,z) = 0; e.g., a sphere of radius R: F(x,y,z) = R^2 - (x^2 + y^2 + z^2) = 0. Examples: spheres, quadrics, cones, cylinders, planes, ellipsoids, etc. Implicit functions can be combined in boolean trees (union, difference, intersection). They are powerful tools for modeling, clipping, cutting, and extracting data. vtkQuadric (from this example) is a subclass of vtkImplicitFunction. It was used in the source vtkSampleFunction. Clipping – remove part of the data set. Cutting – reduce the dimensionality of the data set by 1.
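The sphere formula and the boolean-tree idea translate directly to code. Note one assumption: with the slide's sphere form R^2 - (x^2 + y^2 + z^2), points inside give positive values, so union becomes max and intersection becomes min of the operand functions (sign conventions vary between texts, and VTK's own convention differs).

```cpp
#include <algorithm>

// Implicit function F(x, y, z): here F > 0 inside, F = 0 on the
// surface, F < 0 outside, matching the sphere form on the slide.
double Sphere(double x, double y, double z, double R) {
  return R * R - (x * x + y * y + z * z);
}

// Boolean combinations reduce to min/max under this sign convention.
double Union(double f, double g)        { return std::max(f, g); }
double Intersection(double f, double g) { return std::min(f, g); }
```

Sampling such a function on a grid (as vtkSampleFunction does) and contouring the result at F = 0 recovers the surface.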

29 Volume Rendering Volume rendering is the process of generating a 2D image from 3D data. The line between volume rendering and geometric rendering is not always clear. Volume rendering may produce an image of an isosurface, or may employ geometric hardware for rendering.

30 Volume Data Structures
ImageData: 3D regular rectilinear grid
UnstructuredGrid: explicit list of 3D cells

31 3D Image Data Structure 3D Regular Rectilinear Grid vtkImageData:
Dimensions = (Dx, Dy, Dz); Spacing = (Sx, Sy, Sz); Origin = (Ox, Oy, Oz)
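These three triples fully determine the grid, which is why a regular grid never stores per-point coordinates. A small sketch (hypothetical struct, not vtkImageData itself) shows how a sample's world position is implicit:

```cpp
#include <array>

// Regular rectilinear grid described only by dimensions, spacing,
// and origin; the position of sample (i, j, k) is computed, not stored.
struct ImageInfo {
  std::array<int, 3> dims;        // (Dx, Dy, Dz)
  std::array<double, 3> spacing;  // (Sx, Sy, Sz)
  std::array<double, 3> origin;   // (Ox, Oy, Oz)

  std::array<double, 3> PointAt(int i, int j, int k) const {
    return {origin[0] + i * spacing[0],
            origin[1] + j * spacing[1],
            origin[2] + k * spacing[2]};
  }
  long NumberOfPoints() const {
    return static_cast<long>(dims[0]) * dims[1] * dims[2];
  }
};
```

This implicit geometry is what makes vtkImageData so compact compared with the explicit point lists of unstructured data.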

32 Volume Rendering Strategies
Image-Order Approach: Traverse the image pixel-by-pixel and sample the volume via ray casting. In an image-order volume rendering method, the image is traversed pixel-by-pixel, and rays are cast out from each pixel through the volume to determine the final pixel color value. Image-order volume rendering is typically referred to as ray casting. Image-order ray casting and object-order texture mapping are actually equivalent except for possible precision problems in the texture mapping method due to the limited hardware frame buffer size. The speed of an image-order volume rendering method is mostly driven by the size of the image, while object-order methods depend mostly on the size of the volume. Some volume rendering methods combine both object-order and image-order techniques and are known as hybrid methods.

33 Volume Rendering Strategies
Object-Order Approach: Traverse the volume and project to the image plane. Most volume rendering methods fall into two categories: object-order or image-order. In object-order volume rendering, the rectilinear grid is traversed, and the data is projected onto the image plane. One example of this is splatting, where the volume is traversed sample-by-sample, a spherical kernel is placed around the sample, and then the sample is “splatted” onto the image plane. Alternatively, the volume may be processed plane-by-plane by texture mapping the scalar values onto axis-aligned rectangular polygons, then rendering these polygons using conventional graphics hardware. The polygons are perpendicular to the axis most parallel to the viewing direction. During rotation, this axis will change, leading to some image artifacts. If 3D texture mapping hardware is available, this technique can be performed with view-plane-aligned polygons, eliminating these artifacts. Splatting works cell-by-cell; texture mapping works plane-by-plane.

34 Scalar Value Interpolation
Trilinear interpolation over the unit cell from (0,0,0) to (1,1,1):
v = (1-x)(1-y)(1-z) S(0,0,0) + x(1-y)(1-z) S(1,0,0) + (1-x)y(1-z) S(0,1,0) + xy(1-z) S(1,1,0) + (1-x)(1-y)z S(0,0,1) + x(1-y)z S(1,0,1) + (1-x)yz S(0,1,1) + xyz S(1,1,1)
Nearest neighbor: v = S(rnd(x), rnd(y), rnd(z))
In many rendering methods, such as ray casting and texture mapping, it is often necessary to know the scalar value at any location in three-dimensional space. Since scalar values exist only on the grid points of the dataset, we must employ an interpolation function that defines the scalar value between sample points. The simplest form of interpolation is known as nearest neighbor interpolation, where the value of the nearest sample point is returned for any location within the dataset. Smoother results are obtained at the cost of higher computational complexity with trilinear interpolation, which linearly interpolates the scalar value along each of the three main axes.

35 Scalar Value Interpolation
The importance of interpolation is seen in these images generated from a 50x50x50 sphere. On the left, nearest neighbor interpolation was used, and the underlying voxel structure of the dataset is clearly visible. On the right, trilinear interpolation is used, and the sphere appears smooth even in the close-up image. Trilinear interpolation requires more computing time than nearest neighbor interpolation, and this can be significant on workstations with poor floating point performance unless these calculations are converted to fixed-point arithmetic. Trilinear interpolation is a continuous function, but is not continuous in the first derivative. It would be beneficial to use tricubic interpolation to obtain first derivative continuity, but this is generally too computationally expensive for practical implementation in interactive environments.

36 Material Classification
Transfer functions are the key to volume rendering.

37 Material Classification
Scalar value can be classified into color and opacity (RGBA). Gradient magnitude can be classified into opacity. One of the hardest tasks in volume rendering is material segmentation. For example, if you have a CT dataset that contains bone and skin, you must define transfer functions that map color and opacity to these features based on properties of the dataset. Generally, three transfer functions define the appearance of the volume. Two of these transfer functions are functions of scalar value, mapping the scalar value into color (RGB) and opacity (alpha). Another useful transfer function maps the magnitude of the gradient value to an opacity. To determine the opacity at some sample location, both the scalar value and the gradient magnitude opacity values are multiplied together. Scalar value opacity transfer functions are useful for narrowing the range of scalar values that are rendered. Gradient magnitude opacity transfer functions can highlight areas of sharp change. Final opacity is obtained by multiplying scalar value opacity by gradient magnitude opacity.

38 Material Classification
The choice of transfer functions greatly affects the final image produced from a dataset. Consider the three top images. These images were all generated using a compositing technique on a simulated dataset of a high potential iron protein molecule. The transfer functions mapping scalar value to RGBA were varied, producing images that convey different information about the dataset. In the bottom row we see two images generated from CT data. Color transfer functions were used to map scalar values that most likely belong to regions of the bone to a white color, while those samples most likely to indicate skin were mapped to a light brown color. Opacity transfer functions were used to map skin values to a low opacity and bone values to a higher opacity. In addition, homogeneous areas were assigned a low opacity.

39 Implementation
The vtkRenderer contains a vtkPropCollection. This collection can hold all types of props including 2D props, 3D actors, and 3D volumes. Each vtkVolume has a vtkVolumeProperty (called the Property) and a vtkVolumeMapper (called the Mapper), which is similar to the structure of an actor. The vtkVolumeMapper has an Input which points to some vtkImageData to represent the 3D dataset. This data may be obtained by connecting the output of a vtkSLCReader to the Input of the vtkVolumeMapper. As opposed to the other objects colored orange in the diagram, the volume mapper is colored blue to illustrate that it is an abstract superclass. A helper class, vtkRayCaster, was created to encapsulate the ray casting methods. It is an automatically generated class that is not visible to the user and therefore does not appear in this architecture chart.

40 Implementation vtkVolume vtkVolumeProperty vtkVolumeMapper
vtkVolumeRayCastMapper
vtkFixedPointVolumeRayCastMapper
vtkVolumeTextureMapper2D
vtkVolumeTextureMapper3D
vtkVolumeProMapper (removed in 5.0, back in CVS)

41 Implementation (cont’d)
vtkUnstructuredGridVolumeMapper: vtkUnstructuredGridVolumeZsweepMapper, vtkUnstructuredGridVolumeRayCastMapper, vtkProjectedTetrahedraMapper. Mappers listed from slowest to fastest. Raycaster: software; more accurate than the projected tetrahedra mapper; faster than zsweep, but uses more memory, so best suited to small datasets. Projected tetrahedra mapper: uses GPU acceleration. Zsweep mapper: slower than the raycaster, but better suited to larger volumes.

42 Volume Rendering Issues
Quality – is it accurate? Speed – is it fast? Intermixing – what can’t I do? Features / Flexibility – what features does it have, and can I extend it? There are four major issues that I will discuss in relation to each of the volume rendering approaches covered in this talk. The first issue is quality – how accurate is the generated image? This will depend on many issues including data interpolation, accumulation method, precision of the calculations, and other approximations used by the rendering technique to improve speed. Another important issue is speed – how fast can an image be generated? Interactivity is important in many application areas, and often a less accurate method is preferred if that is the only way to ensure interactivity. In a complex application it is often desirable to intermix multiple volumes with various geometry – I will discuss the limitations imposed by each volume rendering technique on intermixing. Finally, does the method have all the features I need, and if not, is it flexible? Can I get into the code and add additional features?

43 Standard Features Transfer Functions – define color and opacity per scalar value; modulate opacity based on magnitude of the scalar gradient for edge detection. Shading – specular / diffuse shading from multiple light sources. Cropping – six axis-aligned cropping planes form 27 regions, with independent control of regions (limited with VolumePro). Cut Planes – arbitrary cut planes, limited by hardware (6 with OpenGL, 2 parallel with VolumePro). Some of the standard features supported (at least in part) across all of the volume rendering strategies are: transfer functions, shading, cropping, and cut planes. The transfer functions are used to map scalar value into color and opacity, and the magnitude of the scalar gradient into an opacity modulation value. Shading due to multiple light sources is supported, and in ray casting and texture mapping these light sources can be colored. Six cropping planes define 27 regions in the volume that can be turned on or off independently; there are only a few predefined options with the VolumePro, but these are the most useful ones, allowing you, for example, to view a subvolume or to chop a corner out of a volume. Arbitrary cut planes can also be applied to the volume, although this is limited to 6 for texture mapping, and only 2, which must be parallel to each other, for the VolumePro hardware. With two parallel cut planes, thick reformatting can be performed.

44 Intermixed Geometry
In the image on the left, the positive wave function values in the high potential iron protein dataset are volume rendered using a ray casting compositing scheme, while the negative wave function values have been extracted as a geometric isosurface using marching cubes. The image on the right shows CT data from the visible woman dataset. The skin surface in the knee has been extracted using marching cubes and rendered with graphics hardware. Ray casting is used to display the bone as a “fuzzy surface” using a composite volume rendering technique. Left: high potential iron protein. Right: CT scan of the visible woman’s knee.

45 Volume Ray Casting
Some additional classes are needed to support the volume ray casting architecture: the ray cast function, gradient shader, gradient estimator, and gradient encoder. The vtkVolumeRayCastFunction is the class that performs most of the work in ray casting since it is responsible for traversing the ray and computing a final pixel value. The vtkGradientEstimator is used to create the 3D array of normals, encoded into two-byte direction and one-byte magnitude values according to the scheme implemented in the gradient encoder. The ray cast mapper contains an automatically generated class, the vtkEncodedGradientShader, which is used to calculate illumination for each encoded normal.

46 Ray Cast Functions A Ray Function examines the scalar values encountered along a ray and produces a final pixel value according to the volume properties and the specific function. A ray cast function takes a ray and transforms it into "volume space" where the axes of the coordinate system are aligned with the axes of the dataset, and a scaling factor is applied to make the volume look like the spacing is 1 unit by 1 unit by 1 unit. This one transformation simplifies many later calculations such as interpolation. The ray cast function examines the scalar values that are encountered along the ray, and according to the color and opacity transfer functions, the shading information, the interpolation type, and the gradients, produces a final pixel value. Each specific implementation of a ray cast function defines how these values are combined (and in some cases, which values are ignored) in order to obtain that final value. We will cover three common ray cast functions in this section.

47 Maximum Intensity Function
A maximum intensity function is one that examines the values encountered along the ray and selects the maximum. One way to do this is to pick the maximum scalar value encountered. This value would then be turned into an RGBA pixel value by applying the color and opacity transfer functions. This ray function is an example of one that would ignore certain properties that do not make sense for the function. For example, shading is ignored by a maximum intensity function since the set of maximum intensities in the final image do not generally represent a surface structure in the volume. As an alternative to finding the maximum scalar value, this function could locate the maximum intensity encountered along the ray. The name of this class is vtkVolumeRayCastMIPFunction, where MIP stands for Maximum Intensity Projection.
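The core of a MIP function is a one-line reduction over the samples along a ray. A minimal sketch (not the vtkVolumeRayCastMIPFunction implementation; non-negative scalars are assumed for the empty-ray case):

```cpp
#include <algorithm>
#include <vector>

// Maximum intensity projection for one ray: keep the largest scalar
// encountered. The color/opacity transfer functions would be applied
// to this value afterwards; shading is ignored entirely.
double MaximumIntensity(const std::vector<double>& samplesAlongRay) {
  double maxVal = 0.0;  // assumes non-negative scalar data
  for (double s : samplesAlongRay) maxVal = std::max(maxVal, s);
  return maxVal;
}
```

Because only the single largest sample survives, the traversal order along the ray does not matter for MIP, unlike compositing on the next slide.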

48 Composite Function
Use alpha-blending along the ray to produce the final RGBA value for each pixel. Compositing is generally performed using the following recursive equation: I_i = c_i a_i + I_{i+1} (1 - a_i), where I_i is the intensity at location i along the ray, c_i is the color, and a_i is the opacity at that location. The ray is traversed until it exits the volume or until (1 - a_i) drops below some tolerance, indicating that full opacity has been obtained. The opacity value is actually an opacity per unit length and represents the amount of light reflected by the material at that location. The (1 - a_i) value represents the amount of light transmitted unaltered by the material at that location.
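The recursion I_i = c_i a_i + I_{i+1} (1 - a_i) can be evaluated back to front: start behind the volume with I = 0 and fold samples toward the eye. A grayscale sketch (a real implementation blends RGB channels and terminates early once opacity saturates):

```cpp
#include <vector>

struct RaySample {
  double color;    // c_i (single channel for brevity)
  double opacity;  // a_i
};

// Back-to-front evaluation of I_i = c_i*a_i + I_{i+1}*(1 - a_i):
// samples are ordered front (eye) to back, so iterate in reverse.
double Composite(const std::vector<RaySample>& ray) {
  double I = 0.0;  // nothing behind the volume contributes
  for (auto it = ray.rbegin(); it != ray.rend(); ++it)
    I = it->color * it->opacity + I * (1.0 - it->opacity);
  return I;
}
```

A fully opaque sample (a = 1) zeroes the (1 - a) term, so everything behind it is hidden, which is the early-termination condition the notes describe.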

49 Isosurface Function Stop ray traversal at the isosurface value; use a cubic equation solver if interpolation is trilinear. An isosurface image can be generated from a dataset using a compositing ray function with an appropriate transfer function mapping scalar value to opacity. Since compositing is performed using a sampling method, this will lead to samples taken not quite on the isosurface, which in turn will lead to artifacts. To improve this, a special ray cast function can be used which exactly locates the surface. With trilinear interpolation, this results in a cubic equation to solve for the intersection of the ray with the surface.
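A simplified version of exact surface location shows the idea. Assuming the scalar varies linearly between two successive ray samples (a simplification; true trilinear interpolation along the ray is cubic in the ray parameter, hence the cubic solver mentioned above), the crossing is found with one linear solve:

```cpp
// Linear-interpolation version of exact isosurface location: given
// scalar values s0 and s1 at two successive ray samples that bracket
// the isovalue, return the parametric position t in [0, 1] of the
// crossing. The caller must check for a sign change of (s - iso) first.
double IsoCrossing(double s0, double s1, double iso) {
  return (iso - s0) / (s1 - s0);
}
```

Evaluating the ray position at this t, rather than at the nearest fixed sample, removes the stair-step artifacts that plain sampled compositing produces near the surface.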

50 Sampling Distance
The sampling distance must be selected with care when using a ray function that performs sampling. Consider the above example in which a vase is rendered using compositing with three different step sizes (0.1, 1.0, and 2.0 units). The transfer functions are defined such that the color of a sample changes from black to white within a short distance (approximately one unit). If the sampling distance is set too high, as it is in the image on the right, artifacts will appear in the resulting image. On the other hand, if the sampling distance is set too low, then the image will require a long time to render. In some cases it is possible to adaptively adjust the step size during the ray casting by examining the gradient magnitude along the ray. In areas of rapid change the sampling distance should be small, while large step sizes can be used in homogeneous regions.

51 Speed / Accuracy Trade-Off
Multi-resolution ray casting: 1x1, 2x2, or 4x4 sampling. Combined approach: vtkLODProp3D can be used to hold mappers of various types. A volume can be represented at multiple levels-of-detail using a geometric isosurface, texture mapping, and ray casting. A decision between levels-of-detail is made based on allocated render time. Even on a multi-processor machine with a fairly good graphics card, the render rate of a large volumetric dataset may not be high enough for interactivity. Since the time to render with ray casting is proportional to the number of pixels in the image, reducing the number of pixels from which rays are cast will decrease the render time. For texture mapping, additional versions of the dataset can be generated at lower resolution. Rendering lower resolution data requires smaller textures and therefore performance is improved. Several techniques can be combined by using a vtkLODProp3D: a low resolution dataset with a texture mapping approach, a full resolution dataset with texture mapping, and a full resolution dataset with ray casting can all be added to the vtkLODProp3D. At render time, the vtkLODProp3D will choose between the different techniques based on the allocated render time for the volume.

