Presentation transcript:

1 Acquiring, Stitching and Blending Diffuse Appearance Attributes on 3D Models
C. Rocchini, P. Cignoni, C. Montani, R. Scopigno, Istituto Scienza e Tecnologia dell'Informazione
notes by Sharon Price

2 Not an Ideal World...
very accurate 3D models with high resolution cannot be automatically built from real objects
the high-resolution solutions that exist are currently difficult and slow
surface attributes are particularly difficult to obtain:
  not supported by many devices
  insufficient accuracy/resolution
  insufficient software
perhaps a better solution can be found through improved handling of surface attributes?

3 Hybrid Image-based Modeling
approach
  focused on surface attributes
  combine existing techniques to texture a standard triangle-based mesh
  (optionally) separate acquisition of shape and surface attributes
goals/constraints
  guarantee good accuracy and resolution through surface attribute acquisition
  require only cheap, standard equipment
  limited, if any, user interaction during processing
  limit data post-processing complexity

4 Pipeline
1) acquisition of surface attributes
2) un-shading of images
3) image registration
4) texture patching and stitching
   1) vertex-to-image binding
   2) patch growing
   3) patch boundary blending
   4) texture patches packing

5 Acquisition of Surface Attributes
define viewpoints
capture multiple images from each viewpoint
vary the lighting between images

6 Un-shading of Images
want illumination-invariant colors, not colors that depend on light direction
remove the main shading effects:
  direct shading
  cast shadows
  specular highlights
but not inter-object reflections

7 Un-shading of Images
discard shadowed and saturated pixels
assign a weight to each pixel

8 Un-shading of Images
if three lights remain, solve a linear system
if more than three, solve the over-constrained linear system by least squares
if fewer than three, interpolate the color and normal vectors of adjacent pixels
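The per-pixel solve on this slide can be sketched as a least-squares fit under a Lambertian model with known light directions. This is an assumed reconstruction, not the paper's exact code: `unshade_pixel` is a hypothetical helper that recovers albedo (the illumination-invariant value) and a normal (for the bump map) from the surviving intensity samples.

```python
import numpy as np

def unshade_pixel(intensities, light_dirs):
    """Recover albedo and surface normal for one pixel from >= 3
    intensity samples under known light directions (Lambertian model).

    intensities : (k,) observed values, shadowed/saturated samples removed
    light_dirs  : (k, 3) unit light direction per sample

    Solves the (possibly over-constrained) system L @ g = I by least
    squares, where g = albedo * normal.
    """
    L = np.asarray(light_dirs, dtype=float)
    I = np.asarray(intensities, dtype=float)
    g, *_ = np.linalg.lstsq(L, I, rcond=None)
    albedo = np.linalg.norm(g)
    normal = g / albedo if albedo > 0 else np.array([0.0, 0.0, 1.0])
    return albedo, normal
```

With exactly three surviving lights the least-squares solve reduces to an ordinary 3x3 linear system, matching the first case on the slide.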

9 Un-shading of Images
[comparison images: before un-shading / after un-shading]

10 Un-shading of Images
for each view:
  one un-shaded image
  one bump map
noise in the images can lead to aliasing
  prevented by applying a smoothing filter to the images

11 Image Registration
register the un-shaded images with the 3D model
the user must manually select at least six pixels, each selected in both model and image
  could be replaced with automatic registration if desired
local registration is performed later in the pipeline
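The six-correspondence minimum is consistent with the classic Direct Linear Transform: a 3x4 projection matrix has 11 degrees of freedom up to scale, and each 2D-3D pair contributes two equations. The transcript does not detail the paper's registration method, so the following is a standard DLT sketch, not the authors' implementation.

```python
import numpy as np

def estimate_projection(points3d, points2d):
    """Estimate a 3x4 camera projection matrix P from n >= 6
    correspondences (X, x) with x ~ P X, via the Direct Linear
    Transform: each pair yields two homogeneous equations, and P
    has 11 unknowns up to scale, hence six points suffice."""
    A = []
    for (X, Y, Z), (u, v) in zip(points3d, points2d):
        Xh = [X, Y, Z, 1.0]
        A.append([*Xh, 0, 0, 0, 0, *(-u * c for c in Xh)])
        A.append([0, 0, 0, 0, *Xh, *(-v * c for c in Xh)])
    # the smallest right singular vector minimises ||A p|| with ||p|| = 1
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

def project(P, point3d):
    """Apply P to a 3D point and dehomogenize to pixel coordinates."""
    x = P @ np.append(point3d, 1.0)
    return x[:2] / x[2]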

12 Texture Patching & Stitching
build an optimal patchwork of (partial) images to map onto the 3D model
  the entire object surface should be covered
  adjacent images should join smoothly

13 Vertex-to-Image Binding
determine a valid image set and a target image for each vertex v of the mesh
  the valid image set consists of all images in which v is visible and not a silhouette vertex
  the target image is the image in the valid set with the smallest angle between the surface normal and the view direction
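The selection rule above can be sketched per vertex: smallest angle between normal and view direction means largest cosine, i.e. largest dot product of unit vectors. `bind_vertex` is an illustrative helper under that assumption, with visibility and silhouette tests assumed precomputed.

```python
import numpy as np

def bind_vertex(normal, view_dirs, visible, silhouette):
    """Pick the valid image set and target image for one mesh vertex.

    normal     : (3,) unit surface normal at the vertex
    view_dirs  : (m, 3) unit direction from the vertex toward each camera
    visible    : (m,) bool, vertex visible in image i
    silhouette : (m,) bool, vertex lies on the silhouette in image i

    The valid set holds images where the vertex is visible and not on
    the silhouette; the target is the valid image whose view direction
    makes the smallest angle with the surface normal.
    """
    valid = [i for i in range(len(view_dirs))
             if visible[i] and not silhouette[i]]
    if not valid:
        return valid, None
    # smallest angle == largest dot product for unit vectors
    target = max(valid, key=lambda i: np.dot(normal, view_dirs[i]))
    return valid, target
```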

14 Vertex-to-Image Binding
A – silhouette vertices
B – non-silhouette vertices

15 Patch Growing
classify each triangular face of the mesh as internal or frontier:
  internal face if all three vertices have the same target image
  frontier face if the vertices have two or three different target images
reduce the number of frontier faces through a greedy iterative algorithm

16 Patch Boundary Blending
restrict texture blending to the frontier faces
create a frontier texture patch by assigning weights to sections of the target images
each point p in this area is assigned a color as a weighted combination of the corresponding target-image colors
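The slide's weighting formula did not survive in the transcript. As an assumption for illustration only, this sketch weights each vertex's target-image color by the point's barycentric coordinate, which makes the blend continuous across the face and equal to each vertex's own target image at the vertex itself; the paper's actual weights may differ.

```python
import numpy as np

def blend_frontier_color(p, tri, tex_colors):
    """Color a point p inside a frontier triangle as a weighted mix
    of the colors sampled from each vertex's target image.

    p          : (2,) point inside the triangle (e.g. texture space)
    tri        : (3, 2) triangle vertex positions
    tex_colors : (3, 3) RGB sampled at p from each vertex's target image
    """
    a, b, c = np.asarray(tri, float)

    def area(u, v, w):
        # signed area of triangle (u, v, w)
        return 0.5 * ((v[0] - u[0]) * (w[1] - u[1])
                      - (w[0] - u[0]) * (v[1] - u[1]))

    total = area(a, b, c)
    # barycentric weights of p with respect to the triangle
    w = np.array([area(p, b, c), area(a, p, c), area(a, b, p)]) / total
    return w @ np.asarray(tex_colors, float)
```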

17 Patch Boundary Blending
prevent discontinuities through accurate detection of the sections in each target image
insufficiently accurate image registration can cause aliasing:
  imperfect selection of pixels
  simplified camera model
  finite numeric precision of computers
reduce aliasing through local registration

18 Patch Boundary Blending
local registration removes small misalignments between texels
mesh-centered solution: each vertex v of a frontier face f is processed
  fix the texture coordinates of v to its location in the target image
  compute the optimal translation of the other texture coordinates of v on the other two images of f
  this creates a total of six new texture coordinates for the vertices of f

19 Texture Patches Packing
extract the minimal section of each image that is mapped onto the model
combine the sections into one texture using a cutting-stock algorithm
adjacent texels no longer necessarily correspond to adjacent surfaces, requiring a different texture reconstruction:
  duplicate/triplicate the corresponding face in the mesh
  assign a different texture to each copy
  blend the textures at rendering time
divide the texture data into multiple packed images to reduce texture size
  necessary when using very high resolution digital cameras
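Cutting-stock packing is NP-hard in general, so heuristics are used in practice. The paper's exact algorithm is not in the transcript; as a stand-in, this sketch packs rectangular texture sections with a simple greedy shelf heuristic: sort by height, fill rows left to right, start a new shelf when a rectangle no longer fits.

```python
def pack_shelves(rects, sheet_w):
    """Greedy shelf packing of rectangles into a sheet of width
    sheet_w (an illustrative stand-in for the cutting-stock step).

    rects : list of (w, h) texture-section sizes
    Returns per-rectangle (x, y) positions and the total sheet height.
    """
    order = sorted(range(len(rects)), key=lambda i: -rects[i][1])
    pos = [None] * len(rects)
    x = y = shelf_h = 0
    for i in order:
        w, h = rects[i]
        if x + w > sheet_w:          # current shelf is full: open a new one
            y += shelf_h
            x, shelf_h = 0, 0
        pos[i] = (x, y)
        x += w
        shelf_h = max(shelf_h, h)    # shelf is as tall as its tallest item
    return pos, y + shelf_h
```

Sorting by decreasing height keeps wasted space inside each shelf small; real packers also rotate sections and track per-shelf free gaps.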

20 Texture Patches Packing

21 Perspective Texture Distortion
the most orthogonal image is chosen for each face, limiting the problem
distortion is further reduced by the use of a triangular mesh
it could be removed using hardware/APIs that allow perspective texture coordinates (u,v,w)

22 Results
~25 cm tall statuette
  complex shape
  14 views required
  running time of ~62 sec

23 Results
~40 cm tall ceramic vase
  complex painted surface
  8 views required
  running time of ~89 sec

24 Results
results are considered good, though some aliasing is present
accuracy depends on:
  the 3D model
  initial image registration
  local registration
  texture mapping

25 Conclusions
advantages:
  low user interaction
  high quality results
the frontier area could be altered to create a smoother result
  the optimal width depends on the mesh

