HDR Image Construction from Multi-exposed Stereo LDR Images
Ning Sun, Hassan Mansour, Rabab Ward
Proceedings of the 2010 IEEE 17th International Conference on Image Processing, September 26-29, 2010, Hong Kong
Presenter: Andy

2 Intelligent Systems Lab. Algorithm description
Pipeline: Two LDR images with different exposures → Initial disparity map → Camera response function → Radiance maps of LDR images → Refined disparity map → HDR image.
Main concept:
1. Multi-exposed stereo images are captured using identical cameras placed adjacent to each other on a horizontal line.
2. Stereo matching is then used to find a disparity map that matches each pixel in one image to the corresponding pixel in the other image.
3. A subset of the matched pixels is used to estimate the camera response function, which in turn is used to generate a scene radiance map for each view with an expanded dynamic range.
4. The disparity map is refined by performing a second stereo matching stage on the radiance maps, and the final HDR image is constructed from the aligned radiance maps.

3 Intelligent Systems Lab. Imaging models
Imaging models are used to determine the scene radiance from the measured pixel data.
Gamma-correction model: relates the left and right image intensities to the scene radiance through a correction factor and the exposure ratio between the two images.
Polynomial camera response: relates the left and right image intensities to the scene radiance through a polynomial response function.
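The equations for these two models are not reproduced in the transcript. A plausible reconstruction in common notation (the symbols I_l, I_r, E, a, r, gamma and c_n are assumptions here, not necessarily the paper's) is:

```latex
% Gamma-correction model: left/right pixel intensities as functions of the
% scene radiance E, with correction factor a and exposure ratio r between
% the two images (assumed symbols).
\[
  I_l(p) = \bigl(a\,E(p)\bigr)^{\gamma}, \qquad
  I_r(p) = \bigl(a\,r\,E(p)\bigr)^{\gamma}
\]
% Polynomial camera response: the inverse response maps a pixel value back
% to (relative) scene radiance.
\[
  f^{-1}(I) = \sum_{n=0}^{N} c_n\,I^{\,n}, \qquad
  E(p) \propto f^{-1}\bigl(I_l(p)\bigr)
\]
```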

4 Intelligent Systems Lab. Computing the disparity map
The best disparity map is found by minimizing an energy over the set of feasible disparities. The energy is the sum of a dissimilarity term (pixel dissimilarity) and a smoothing term (disparity smoothness). This formulation is used for the initial disparity estimation.
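The energy itself does not survive in the transcript; a standard form of such a disparity energy, with assumed symbols D_p (dissimilarity term), V_{p,q} (smoothing term), L (set of feasible disparities) and N (pixel neighbourhood), would be:

```latex
% f assigns a disparity f_p from the feasible set L to every pixel p.
% D_p measures pixel dissimilarity; V_{p,q} penalises disparity changes
% between neighbouring pixels (p, q) in the neighbourhood system N.
\[
  f^{*} \;=\; \arg\min_{f:\,f_p \in L}
  \;\sum_{p} D_p(f_p)
  \;+\; \sum_{(p,q) \in N} V_{p,q}(f_p, f_q)
\]
```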

5 Intelligent Systems Lab. Pixel dissimilarity
The dissimilarity is computed over a search window centered on p, shifted by the candidate displacement, with a bilateral weight that combines spatial smoothing and intensity smoothing. I' denotes the intensity in log space (the logarithm of the pixel intensity).
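As a hedged illustration, a typical bilateral-weighted window dissimilarity of this kind (the symbols W_p, gamma_s and gamma_c are assumptions, not the paper's) looks like the following; note that in log space a global exposure difference becomes an additive offset, which is what makes this comparison tolerant of the exposure gap between the views:

```latex
% w weights each pixel q in the window W_p by its spatial distance to p
% (spatial smoothing) and by the similarity of the log intensities I'
% (intensity smoothing). d is the candidate displacement.
\[
  w(p,q) = \exp\!\Bigl(-\tfrac{\lVert p-q\rVert}{\gamma_s}
                       -\tfrac{\lvert I'(p)-I'(q)\rvert}{\gamma_c}\Bigr),
  \qquad I'(p) = \log I(p)
\]
\[
  D_p(d) = \frac{\sum_{q \in W_p} w(p,q)\,
                 \bigl\lvert I'_l(q) - I'_r(q - d) \bigr\rvert}
                {\sum_{q \in W_p} w(p,q)}
\]
```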

6 Intelligent Systems Lab. Pixel dissimilarity

7 Intelligent Systems Lab. Disparity smoothness
Initial disparity and camera response:
1. Minimize the energy using the graph cut algorithm.
2. Compute the polynomial coefficients of the camera response function.
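One common way to fit the polynomial coefficients from matched pixel pairs, in the spirit of Mitsunaga and Nayar's calibration (this exact formulation is an assumption, not taken from the paper; M denotes the set of reliable matches from the initial disparity map and r the exposure ratio), is a least-squares problem:

```latex
% Fit the inverse response f^{-1}(Z) = sum_n c_n Z^n so that matched pixels
% in the two differently exposed views map to consistent radiance values.
\[
  \min_{c_0,\dots,c_N}\;
  \sum_{p \in M}
  \Bigl( \sum_{n=0}^{N} c_n\, Z_l(p)^{\,n}
         \;-\; r \sum_{n=0}^{N} c_n\, Z_r(p - d_p)^{\,n} \Bigr)^{2}
\]
% A normalisation such as f^{-1}(Z_max) = 1 is needed to rule out the
% trivial all-zero solution.
```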

8 Intelligent Systems Lab. Error correction
Minimize the energy function one more time with a different dissimilarity function:
For valid pixels: convert the images to radiance space (the radiance values should be the same in both images).
For erroneous pixels: use the Hamming distance between pixels p and p + f_p after applying the Census transform.
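A plausible way to write this two-case dissimilarity (assumed notation: R_l, R_r are the recovered radiance maps, C the census bit string, H the Hamming distance, f_p the candidate disparity) is:

```latex
\[
  D_p(f_p) =
  \begin{cases}
    \bigl\lvert R_l(p) - R_r(p + f_p) \bigr\rvert,
      & p \text{ valid (well exposed in both views)},\\[4pt]
    H\!\bigl(C_l(p),\, C_r(p + f_p)\bigr),
      & p \text{ erroneous (e.g. saturated or mismatched)}.
  \end{cases}
\]
```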

9 Intelligent Systems Lab. Input LDR images

10 Intelligent Systems Lab. Disparity maps
Reference disparity map; initial disparity estimation; final map.

11 Intelligent Systems Lab. HDR images

12 Intelligent Systems Lab. Experimental results
Image name | Exposure Ratio | RMSE Error | Error pixels (%)
Statue     |                |            |
Dolls      |                |            |
Clothes    |                |            |
Baby       |                |            |

13 Intelligent Systems Lab. Conclusions
- A disparity map computation algorithm is proposed.
- The proposed method can compute disparity between differently exposed images.
- It can deal with saturated regions in the image.
- It can be used for capturing scenes with motion using different exposures.
Disadvantages:
- High computational cost
- Generated images are slightly blurred
- No rotation is considered

14 Intelligent Systems Lab. Ideal image formation system
From optics, the image irradiance I (image brightness at the sensor) is related to the scene radiance L by
\( I = L \cdot \frac{\pi}{4} \left(\frac{d}{h}\right)^{2} \cos^{4}\alpha \),
where d is the aperture, h the focal length, and \(\alpha\) the angle from the ray to the optical axis.
The camera exposure is \( X = I \cdot \Delta t \), where \(\Delta t\) is the shutter speed.
The radiometric (sensor) response maps exposure to the recorded gray level through the camera response function f: Response = Gray-level, \( Z = f(X) \).
The reverse camera response function recovers the irradiance from the pixel value: \( X = f^{-1}(Z) \).

15 Intelligent Systems Lab. Response function examples Response functions of a few popular cameras provided by their manufacturers

16 Intelligent Systems Lab. Graph-cut algorithm
1. Start with an arbitrary labeling f
2. Set success := 0
3. For each label α ∈ L
   3.1. Find f* = arg min E(f') among f' within one α-expansion of f
   3.2. If E(f*) < E(f), set f := f* and success := 1
4. If success = 1, go to 2
5. Return f
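A minimal Python sketch of this loop; the energy function and the expansion_move routine (which solves the single α-expansion step, typically with a min-cut as in Boykov, Veksler and Zabih) are caller-supplied hypothetical callables, not code from the paper:

```python
def alpha_expansion(labeling, labels, energy, expansion_move, max_sweeps=100):
    """Outer loop of the alpha-expansion algorithm on this slide.

    labeling        -- initial labeling f (e.g. an array of disparity labels)
    labels          -- the label set L
    energy          -- callable: energy(f) -> E(f)
    expansion_move  -- callable: expansion_move(f, alpha) -> best labeling
                       within one alpha-expansion of f (hypothetical helper,
                       typically implemented with a graph cut)
    """
    best, best_e = labeling, energy(labeling)
    for _ in range(max_sweeps):
        success = False                              # step 2
        for alpha in labels:                         # step 3
            candidate = expansion_move(best, alpha)  # step 3.1
            cand_e = energy(candidate)
            if cand_e < best_e:                      # step 3.2
                best, best_e = candidate, cand_e
                success = True
        if not success:                              # step 4: no label improved E
            break
    return best                                      # step 5
```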

17 Intelligent Systems Lab. Census transform
For each pixel in the window:
if (CurrentPixelIntensity < CentrePixelIntensity) bit = 0 else bit = 1
Examples: input image, 3x3 transform, 5x5 transform.
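A small self-contained NumPy sketch of the census transform and the Hamming-distance comparison referenced on slide 8; the function names, window parameter and uint64 packing are choices made here, while the bit convention (1 when the neighbour is at least as bright as the centre) follows the slide:

```python
import numpy as np

def census_transform(img, radius=1):
    """Census transform with a (2*radius+1)^2 window (3x3 for radius=1).
    Each pixel gets a bit string: bit = 0 if the neighbour is darker than
    the centre pixel, 1 otherwise (same convention as the slide)."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint64)
    padded = np.pad(img, radius, mode='edge')
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue  # the centre pixel itself carries no bit
            neighbour = padded[radius + dy : radius + dy + h,
                               radius + dx : radius + dx + w]
            out = (out << np.uint64(1)) | (neighbour >= img).astype(np.uint64)
    return out

def hamming_distance(c1, c2):
    """Number of differing bits between two census codes (elementwise)."""
    x = np.bitwise_xor(c1, c2)
    # popcount via unpackbits on the byte view of the uint64 codes
    return np.unpackbits(x.view(np.uint8).reshape(*x.shape, 8), axis=-1).sum(axis=-1)
```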