A case study for Tele-Immersion communication applications: From 3D capturing to rendering
Dimitrios Alexiadis, Alexandros Doumanoglou, Dimitrios Zarpalas, Petros Daras
Information Technologies Institute, Centre for Research and Technology - Hellas

Introduction – Problem Definition

Tele-Immersion (TI) technology aims to enable geographically distributed users to communicate and interact inside a shared virtual world, as if they were physically present in it [1].
Key technology aspects: 1) capturing and reconstruction of 3D data; 2) 3D data compression and transmission; 3) free-viewpoint (FVP) rendering.
Substantial factors for the user's immersion: 1) the quality of the generated and presented 3D content; 2) the data exchange rates between remote sites; 3) a wide navigation range (full 360° 3D coverage).
3D data representations [2]: a) (multi-view) image-based modeling; b) (multi-view) video plus depth; c) 3D geometry-based representations.
Contribution: We propose and study in detail a TI pipeline based on a 3D geometry representation, as a reference for people working in the field. The reasons for selecting a 3D geometry-based representation are: a) straightforward realization of scene structuring and navigation functionalities (e.g. collision detection); b) a wide range of FVP coverage with relatively few cameras; c) lower computational and network load for multi-party systems; d) the potential to benefit from the capabilities of future, holography-like rendering systems.
Related work: [1] uses multiple stereo rigs and view synthesis at the receiver side; [3] uses multiple Kinects, but does not study compression/transmission; [4] also uses multiple Kinects, but its compression is tied to the specific reconstruction method of [5]; [6] is based on image representations.

Tele-Immersion Platform

An overview of the TI communication chain is given in Fig. 1: multiple calibrated RGB-Depth sensors capture the user, and the depth data are used to reconstruct her/his 3D shape in the form of a triangle mesh. The mesh resolution is adaptable (affecting the reconstruction detail and visual quality), and therefore the whole chain is scalable.
Data representation: time-varying a) geometry (vertex positions and normals); b) additional vertex attributes (see below); c) connectivity; d) textures.
Vertex attributes: [ID_1, ID_2, w_1, w_2], with w_1 + w_2 = 1. These are the texture IDs and the corresponding weights to be used for texture-blending rendering of each vertex. They are estimated at the capturing site and used at the receiver/rendering site.
Figure 1. An overview of the studied Tele-Immersion chain, from capturing to rendering: multiple RGB-D capturing, real-time 3D reconstruction of humans, fast 3D data compression and transmission, and free-viewpoint rendering at the receiver.

Real-time 3D reconstruction of humans

Capturing system: five Kinect (v1) sensors that provide 360° coverage of the whole human body.
Calibration: internal calibration using the method of [7]; external calibration with a chessboard-based, all-to-all, custom calibration approach.
1) Preprocessing: a) foreground segmentation; b) depth-map denoising with bilateral filtering; c) estimation and voxelization of the bounding box.
2) For each camera, extraction of a) the normal map and b) a depth-confidence map, based on the closeness of each pixel to the object's boundary.
3) Volumetric 3D reconstruction: the 3D surface is implicitly defined by a volume function V(X), which is zero at the surface of the object. More specifically, the volumetric fusion approach of [8], [9] is used, and the final triangle-mesh surface is extracted with Marching Cubes at V(X) = 0. A GPU implementation is used in the experiments. The weights w_k express the "quality" of the depth measurements: a weight decreases with a) the angle between the line of sight and the surface normal, and b) the closeness of the depth pixel to the object's boundary.
4) Texture mapping: the same kind of weights is used for texture mapping (vertex attributes); ID_1 and ID_2 correspond to the two cameras with the largest weights. (A minimal sketch of the weighting, fusion, and attribute-selection steps is given below.)
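The weighting and fusion logic of steps 2) to 4) can be illustrated with a minimal, CPU-only sketch. It assumes that per-camera truncated signed-distance volumes and the per-pixel ingredients of the weights (the cosine of the viewing angle and the foreground mask) are already available; the function names, the boundary-falloff value, and the use of scikit-image's Marching Cubes are illustrative choices, not the authors' GPU implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.measure import marching_cubes  # scikit-image

def depth_weight_map(cos_angle, fg_mask, boundary_falloff=10.0):
    """Per-pixel depth 'quality' weight for one camera.

    cos_angle: HxW cosine of the angle between line of sight and surface normal.
    fg_mask:   HxW boolean foreground (segmented user) mask.
    The weight decreases as the viewing angle grows and as the pixel
    approaches the object's silhouette boundary (the two criteria above).
    """
    # Distance (in pixels) of each foreground pixel to the silhouette boundary.
    dist_to_boundary = distance_transform_edt(fg_mask)
    boundary_conf = np.clip(dist_to_boundary / boundary_falloff, 0.0, 1.0)
    angle_conf = np.clip(cos_angle, 0.0, 1.0)
    return angle_conf * boundary_conf * fg_mask

def fuse_and_mesh(sdf_volumes, weight_volumes, trunc=1.0, eps=1e-6):
    """Weighted fusion of per-camera signed-distance volumes (in the spirit
    of [8], [9]) followed by Marching Cubes at the zero level set V(X) = 0."""
    w = np.stack(weight_volumes)           # K x Nx x Ny x Nz
    d = np.stack(sdf_volumes)              # K x Nx x Ny x Nz
    w_sum = w.sum(axis=0)
    # Unobserved voxels keep the (positive) truncation value so that
    # they do not generate spurious surface.
    V = np.where(w_sum > eps, (w * d).sum(axis=0) / (w_sum + eps), trunc)
    verts, faces, normals, _ = marching_cubes(V, level=0.0)
    return V, verts, faces, normals

def texture_blend_attributes(per_camera_vertex_weights):
    """Per-vertex attributes [ID_1, ID_2, w_1, w_2]: the two cameras with the
    largest weights, with the weights renormalized so that w_1 + w_2 = 1."""
    W = np.asarray(per_camera_vertex_weights)   # V x K
    order = np.argsort(-W, axis=1)              # cameras sorted by weight, descending
    id1, id2 = order[:, 0], order[:, 1]
    w_top = np.take_along_axis(W, order[:, :2], axis=1)
    w_top = w_top / np.maximum(w_top.sum(axis=1, keepdims=True), 1e-6)
    return id1, id2, w_top[:, 0], w_top[:, 1]
```

In the actual pipeline these steps run on the GPU to reach real-time rates; the sketch only mirrors the arithmetic of the weighted fusion and of the selection of the two strongest texture weights per vertex.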
Fast 3D data compression

Geometry compression: the geometry, together with the vertex attributes (ID_1, ID_2, w_1), is compressed using the OpenCTM mesh-compression scheme (http://openctm.sourceforge.net/). Only w_1 needs to be transmitted, since w_2 = 1 - w_1. The texture IDs are quantized with a precision equal to 1, and the weights w_1 are quantized with a large step (0.2), without affecting the visual quality.
Texture compression: the texture videos of the mesh are compressed separately using H.264 video coding, with the intra-frame period set to 10. (Hedged sketches of both steps are given below.)
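A small numpy illustration of the attribute quantization described above. The 0.2 step for w_1 and the integer texture IDs come from the poster; the byte packing and the recovery of w_2 = 1 - w_1 at the receiver are illustrative assumptions, and the per-vertex attribute handling inside OpenCTM itself is not shown.

```python
import numpy as np

W1_STEP = 0.2  # coarse quantization step reported for the blending weight w_1

def quantize_vertex_attributes(id1, id2, w1):
    """Quantize per-vertex texture attributes before mesh compression.
    Texture IDs are small integers (precision 1); w_1 is snapped to a 0.2 grid."""
    q_id1 = np.rint(id1).astype(np.uint8)
    q_id2 = np.rint(id2).astype(np.uint8)
    q_w1 = np.rint(np.clip(w1, 0.0, 1.0) / W1_STEP).astype(np.uint8)  # values 0..5
    return q_id1, q_id2, q_w1

def dequantize_vertex_attributes(q_id1, q_id2, q_w1):
    """Receiver side: restore w_1 and derive w_2 = 1 - w_1 for texture blending."""
    w1 = q_w1.astype(np.float32) * W1_STEP
    return q_id1.astype(int), q_id2.astype(int), w1, 1.0 - w1
```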
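For the texture streams, the poster specifies only H.264 coding with an intra-frame period of 10; in the experiments below, Q = 28 is reported as the quantization parameter of the base quality layer. As a hedged, offline illustration (not the encoder used by the authors), assuming each camera's texture video is available as an image sequence, an ffmpeg/libx264 call could look like the following; the file names and frame rate are placeholders.

```python
import subprocess

def encode_texture_stream(pattern="tex_cam0_%04d.png", out="tex_cam0.h264",
                          fps=30, intra_period=10, qp=28):
    """Encode one camera's texture image sequence with H.264 (libx264),
    using a GOP length of 10 frames (the intra-frame period from the poster)
    and a fixed quantizer, comparable to the base-layer Q used in the study."""
    cmd = [
        "ffmpeg", "-y",
        "-framerate", str(fps),
        "-i", pattern,
        "-c:v", "libx264",
        "-g", str(intra_period),   # intra-frame (keyframe) period
        "-qp", str(qp),            # constant quantization parameter
        "-pix_fmt", "yuv420p",
        out,
    ]
    subprocess.run(cmd, check=True)
```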
Experimental study – Discussion

We study the performance of the end-to-end TI chain with respect to its parameters, in terms of visual quality and timing. The corresponding metrics are a) PSNR, computed over a large number of rendered virtual views (a minimal sketch of this measurement is given at the end of this section), and b) the frame rate (fps) and the delay. Figure 2 shows an example frame under different reconstruction and compression settings; Figure 3 collects the visual-quality and frame-rate evaluation diagrams; Figure 4 gives time measurements for the whole capture-to-render chain.
Figure 3(a): quality of the reconstruction with respect to the resolution level. The volume resolution is 2^r x 2^r x 2^r voxels; r = 7 serves as the PSNR reference.
Figure 3(b): reconstruction rate with respect to the resolution level. Based on Fig. 3(a),(b), a good compromise is r = 6, which is used below.
Figure 3(c): impact of H.264 compression on the visual quality. Q refers to the quantization parameter of the base quality layer. Q = 28 (about 30 dB) is selected as a good compromise and used below.
Figure 3(d): impact of the OpenCTM absolute vertex precision d (quantization parameter) on the visual quality. A vertex precision of d = 8 mm, which corresponds to 4.8 bpvf (bits per vertex per frame) and a PSNR of about 22 dB, was chosen after subjective evaluation (see Fig. 2).
Figure 4: performance of the whole chain, in terms of the frame rate at the receiver and the overall delay between capturing and rendering, with respect to the line speed. Two different (lossless) CTM entropy-coding levels are considered.
Conclusions: a) For slow lines (<5 Mbps), transmission is the bottleneck; the frame rate and the delay are, respectively, proportional and inversely proportional to the line speed. b) For fast lines, compression is the bottleneck; it is preferable to use less efficient but faster compression. c) For slow lines (<5 Mbps), the overall experience is expected to be poor for the quality settings chosen above.
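A minimal numpy sketch of the PSNR-over-virtual-views measurement referred to above; the rendering function, the 8-bit image range (peak 255) and the set of viewpoints are assumptions, since the poster does not detail them.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """PSNR between two rendered views (images of equal size)."""
    ref = reference.astype(np.float64)
    tst = test.astype(np.float64)
    mse = np.mean((ref - tst) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def mean_view_psnr(render_fn, reference_mesh, test_mesh, viewpoints):
    """Average PSNR over many virtual views, in the spirit of the evaluation above.
    render_fn(mesh, viewpoint) is a placeholder for any off-screen renderer."""
    scores = [psnr(render_fn(reference_mesh, v), render_fn(test_mesh, v))
              for v in viewpoints]
    return float(np.mean(scores))
```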


References

[1] R. Vasudevan, G. Kurillo, E. Lobaton, T. Bernardin, O. Kreylos, R. Bajcsy, and K. Nahrstedt, "High quality visualization for geographically distributed 3D teleimmersive applications," IEEE Trans. on Multimedia, vol. 13, no. 3, June 2011.
[2] A. Smolic, "3D video and free viewpoint video - From capture to display," Pattern Recognition, vol. 44, no. 9, Sep. 2011.
[3] A. Maimone and H. Fuchs, "Encumbrance-free telepresence system with real-time 3D capture and display using commodity depth cameras," in 10th IEEE ISMAR, 2011.
[4] R. Mekuria, D. Alexiadis, P. Daras, and P. Cesar, "Real-time encoding of live reconstructed mesh sequences for 3D tele-immersion," in Intern. Conf. on Multimedia and Expo, 2013.
[5] D. S. Alexiadis, D. Zarpalas, and P. Daras, "Real-time, full 3D reconstruction of moving foreground objects from multiple consumer depth cameras," IEEE Trans. on Multimedia, vol. 15, no. 2, pp. 339-358, 2013.
[6] B. Dai and X. Yang, "A low-latency 3D teleconferencing system with image based approach," in 12th Int. Conf. on VR Continuum and Its Applications in Industry, 2013.
[7] D. Herrera, J. Kannala, and J. Heikkila, "Joint depth and color camera calibration with distortion correction," IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, 2012.
[8] D. Alexiadis, D. Zarpalas, and P. Daras, "Real-time, realistic full-body 3D reconstruction and texture mapping from multiple Kinects," in 11th IEEE IVMSP Workshop, Korea, 2013.
[9] B. Curless and M. Levoy, "A volumetric method for building complex models from range images," in Proc. SIGGRAPH, 1996.

Contact

Petros Daras and Dimitrios Alexiadis
Email: daras@iti.gr, dalexiad@iti.gr
Centre for Research and Technology - Hellas, Information Technologies Institute, Visual Computing Lab
Website: http://vcl.iti.gr/
Phone: +30 2310 464160 (ext. 277)
