Depth Enhancement Technique by Sensor Fusion: MRF-based Approach
Speaker: Min-Koo Kang, March 26, 2013

Outline
1. Review of filter-based method
 - Summary and limitations
2. Related work
 - MRF-based depth up-sampling framework
3. Introduction of a state-of-the-art method
 - High Quality Depth Map Upsampling for 3D-ToF Cameras (ICCV 2011)
4. Future work
 - Remaining problems
 - Strategy

Depth upsampling
 Definition
 - Conversion of a low-resolution depth map into a high-resolution one
 Approach
 - Most state-of-the-art methods are based on sensor fusion, i.e., they use an image sensor and a range sensor together
(Figures: depth map up-sampling by bicubic interpolation vs. by fusing the image and range sensors)
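For reference, the bicubic baseline mentioned in the caption can be reproduced with a single resize call. A minimal sketch, assuming OpenCV is available; the file name and the scale factor are hypothetical:

import cv2  # assumption: opencv-python is installed

# Hypothetical input: a low-resolution depth map stored as a 16-bit PNG.
depth_lr = cv2.imread("depth_lr.png", cv2.IMREAD_UNCHANGED)

# Single-sensor baseline: bicubic interpolation to 8x the original resolution.
# Sensor-fusion methods (JBU, MRF-based) aim to improve on this result.
scale = 8
depth_hr = cv2.resize(depth_lr,
                      (depth_lr.shape[1] * scale, depth_lr.shape[0] * scale),
                      interpolation=cv2.INTER_CUBIC)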

Joint bilateral upsampling (JBU)
 Representative formulation (Kopf et al., "Joint Bilateral Upsampling", SIGGRAPH 2007; reconstructed below)
 - N(P): the neighborhood of the target pixel P(i, j)
 - f_S(·): spatial weighting term, applied to the pixel position P
 - f_I(·): range weighting term, applied to the pixel value I(q)
 - f_S(·) and f_I(·) are Gaussian functions with standard deviations σ_S and σ_I, respectively
(Figures: upsampled depth map and rendered 3-D view)
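The formula itself did not survive transcription; in the slide's notation, the standard JBU formulation of Kopf et al. can be written as

D_{HR}(P) = \frac{1}{k_P} \sum_{q \in N(P)} D_{LR}(q_\downarrow)\, f_S(\lVert P - q \rVert)\, f_I(\lVert I(P) - I(q) \rVert),

where D_{LR} is the low-resolution depth map sampled at the corresponding low-resolution positions q_\downarrow, I is the high-resolution guidance image, and k_P is the sum of the weights. The NumPy sketch below illustrates this weighting; the function name, the grayscale guidance image, and the default parameters are assumptions, not values from the paper.

import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, scale, radius=2,
                             sigma_s=1.0, sigma_i=10.0):
    """Minimal JBU sketch: spatial Gaussian f_S on low-resolution pixel
    distance, range Gaussian f_I on guidance-image intensity difference."""
    h, w = guide_hr.shape                      # grayscale guidance image
    out = np.zeros((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            yl, xl = y / scale, x / scale      # position of P on the low-res grid
            num = den = 0.0
            for dy in range(-radius, radius + 1):          # neighborhood N(P)
                for dx in range(-radius, radius + 1):
                    qy = int(round(yl)) + dy
                    qx = int(round(xl)) + dx
                    if not (0 <= qy < depth_lr.shape[0] and 0 <= qx < depth_lr.shape[1]):
                        continue
                    # f_S: Gaussian on the spatial distance between P and q.
                    w_s = np.exp(-((qy - yl) ** 2 + (qx - xl) ** 2) / (2.0 * sigma_s ** 2))
                    # f_I: Gaussian on the guidance-image difference, with q
                    # mapped back to the high-resolution grid.
                    gy, gx = min(qy * scale, h - 1), min(qx * scale, w - 1)
                    diff = float(guide_hr[y, x]) - float(guide_hr[gy, gx])
                    w_i = np.exp(-(diff ** 2) / (2.0 * sigma_i ** 2))
                    num += w_s * w_i * float(depth_lr[qy, qx])
                    den += w_s * w_i
            out[y, x] = num / den if den > 0 else 0.0
    return out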

Is JBU good enough?
 Limitations of JBU
 - It rests on a heuristic assumption about the relationship between depth and intensity data
 - Sometimes a depth discontinuity has no corresponding edge in the 2-D image
 Remaining problems
 - Erroneous copying of 2-D texture into actually smooth regions of the depth map
 - An unwanted artifact known as edge blurring
(Figures: high-resolution guidance image (red = non-visible depth discontinuities), low-resolution depth map (red = zoomed area), JBU-enhanced depth map (zoomed))

Summary of the JBU-based approach
 Joint bilateral upsampling approach
 - Propagates properties from one modality to another
 - The credibility map largely determines system performance
 - Defining the blending function can be another critical factor
 - Many empirical parameters make practical, automated use of such fusion filters challenging
 - Another open question is a clear rule for when smoothing by filtering should be avoided and a simple binary decision taken instead

MRF-based depth up-sampling
 Diebel and Thrun, NIPS 2005
 - Use a multi-resolution MRF that ties together image and range data
 - Exploit the fact that discontinuities in range and coloring tend to co-align
 Pros and cons
 - Robust to changes in up-sampling scale thanks to global optimization
 - High computational complexity
(Figure: MRF framework with data term and smoothness term; see the schematic energy below)
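In slightly simplified form (the exact constants and robust variants are in the NIPS 2005 paper), the energy combining the two terms can be written as

E(D) = k \sum_{i} (D_i - z_i)^2 + \sum_{(i,j) \in \mathcal{N}} w_{ij}\, (D_i - D_j)^2, \qquad w_{ij} = \exp\!\big(-c\, \lVert I_i - I_j \rVert^2\big),

where z_i are the sparse range measurements, D_i the estimated high-resolution depths, I_i the co-registered image intensities, and k, c tuning constants. The image-dependent weight w_{ij} relaxes the smoothness penalty across color edges, which encodes the co-alignment assumption above.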

A novel MRF-based method: High Quality Depth Map Upsampling for 3D-ToF Cameras (ICCV 2011)

Problem Definition

System Setup and Preprocessing

Evaluation of the Weighting Terms

PSNR accuracy plot
 - The combined weighting term consistently produces the best results across different upsampling scales
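For reference, and assuming the standard definition, the PSNR in the plot is computed against the ground-truth depth map as

\mathrm{PSNR} = 10 \log_{10} \frac{D_{\max}^2}{\mathrm{MSE}}, \qquad \mathrm{MSE} = \frac{1}{N} \sum_{p} \big(D(p) - D_{\mathrm{gt}}(p)\big)^2,

where D_{\max} is the maximum representable depth value and the sum runs over all N pixels; a higher PSNR means the upsampled depth map D is closer to the ground truth D_{gt}.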

NLM regularization term
 Thin-structure protection (see the schematic term below)
 - Achieved by allowing pixels on the same nonlocal structure to reinforce each other within a larger neighborhood
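Schematically, a nonlocal-means regularizer of this kind has the form (a generic sketch; the paper's actual weight combines several image cues)

E_{\mathrm{NLM}}(D) = \sum_{p} \sum_{q \in A(p)} w_{pq}\, (D_p - D_q)^2, \qquad w_{pq} \propto \exp\!\Big(-\frac{\lVert \mathrm{patch}(p) - \mathrm{patch}(q) \rVert^2}{2 \sigma_n^2}\Big),

where A(p) is a search window much larger than the usual 4- or 8-neighborhood, so the weight w_{pq} is large only for pixels whose surrounding patches look alike; points along the same thin structure therefore support each other even when they are not adjacent.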

User Adjustments
 - An additional weighting term is defined to account for the user-supplied depth discontinuity information
 - After these additional depth samples are added, the algorithm generates a new depth map, using the new samples as hard constraints in Equation (4)

Experimental Results (Synthetic)

Experimental Results (Real-world)

Is this method good enough?
 Noise distribution in the depth map
 - A practical depth map contains a more complicated noise distribution than Gaussian noise
 Neighborhood extension to a higher dimension
 - Practical depth data is a sequence of successive depth maps
 - Spatial domain → spatial-temporal domain

Spatial-Temporal MRF-Based Depth Map Refinement
 Zhu et al., CVPR 2008
 - Combine a range sensor with a stereo sensor
 - Extend the MRF to the temporal domain to take temporal coherence into account
 Pros and cons
 - Improves accuracy by exploiting temporal coherence
 - Does not account for depth changes in time-varying (dynamic) scenes
(Figure: spatial-temporal MRF structure with data term and smoothness term; see the schematic energy below)
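Schematically, the temporal extension adds a third class of links to the MRF (a generic sketch of the structure described above, not the paper's exact potentials):

E(D^{1}, \dots, D^{T}) = \sum_{t} \Big[ \sum_{p} \phi_{d}(D^{t}_{p}) + \sum_{(p,q) \in \mathcal{N}} \phi_{s}(D^{t}_{p}, D^{t}_{q}) \Big] + \sum_{t} \sum_{p} \phi_{\tau}(D^{t}_{p}, D^{t-1}_{p}),

where \phi_{d} fuses the range-sensor and stereo measurements, \phi_{s} is the image-weighted spatial smoothness term, and \phi_{\tau} penalizes depth changes of the same pixel between consecutive frames. Because \phi_{\tau} assumes depth varies slowly over time, it struggles with genuinely time-varying (moving) content, which is the weakness noted above.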

Summary of the MRF-based approach
 MRF-based approach
 - Maintains sharp depth boundaries
 - Easily incorporates several weighting factors
 - Easily accommodates user adjustment
 Possible future improvements
 - Account for the noise distribution of practical depth data
 - Account for temporal smoothness by extending the neighborhood