Determining the Camera Response Function Short Presentation Dominik Neumann Chair of Pattern Recognition (Computer Science 5) Friedrich-Alexander-University Erlangen-Nuremberg.


Determining the Camera Response Function Short Presentation Dominik Neumann Chair of Pattern Recognition (Computer Science 5) Friedrich-Alexander-University Erlangen-Nuremberg

Page 2 Dominik Neumann Determining the Camera Response Function – Short Presentation
The Camera Response
Scene Radiance → [Sensor] → Image Irradiance (RAW image) → [Camera Processor] → Image Intensity (pixel values)
The camera processor applies:
- White Balancing
- Gamma Correction
- "Beautification"
⇒ Camera Response Function
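The pipeline above can be sketched numerically. A minimal example, assuming a gamma-type curve as a stand-in for the camera-specific CRF (real curves, e.g. those in DoRF, are more complex):

```python
import numpy as np

# Hypothetical gamma-type CRF as a stand-in for the camera processor.
# intensity = irradiance ** (1 / 2.2) roughly mimics in-camera gamma
# correction; the true CRF is camera-specific.
def apply_crf(irradiance, gamma=2.2):
    """Map normalized image irradiance [0, 1] to pixel intensity [0, 1]."""
    return np.clip(irradiance, 0.0, 1.0) ** (1.0 / gamma)

# A linear irradiance ramp comes out non-linear after the camera pipeline.
irradiance = np.linspace(0.0, 1.0, 5)
intensity = apply_crf(irradiance)
```

The key point of the slide is exactly this mismatch: the pixel values are a non-linear function of the irradiance the sensor received.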

Page 3 Motivation
Various computer vision algorithms require image irradiance:
- Color Constancy
- Shape from Shading
- Inverse Rendering
- Creating accurate HDR images
But: the output of a typical camera is not linear.
My task: examination of methods for CRF estimation
- "Using Geometry Invariants for Camera Response Function Estimation" (Ng et al.)
- "Radiometric Calibration from a Single Image" (Lin et al.)
Color Constancy: images by Eva Eibenberger
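Once a CRF estimate is available, the non-linearity can be undone. A hedged sketch of that linearization step, assuming the CRF is given as a tabulated monotone curve (here an assumed gamma curve; a DoRF entry would be inverted the same way):

```python
import numpy as np

def linearize(intensity, crf_samples, irradiance_samples):
    """Invert a monotone, tabulated CRF by interpolation:
    pixel intensity -> (relative) image irradiance."""
    return np.interp(intensity, crf_samples, irradiance_samples)

E = np.linspace(0.0, 1.0, 256)           # irradiance grid
f = E ** (1.0 / 2.2)                     # assumed CRF (monotone)
pixels = np.array([0.0, 0.5, 0.9, 1.0])  # observed pixel intensities
radiance = linearize(pixels, f, E)       # recovered relative irradiance
```

This is the step that HDR assembly, shape from shading, and the other listed algorithms implicitly rely on.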

Page 4 Camera Response Functions
[Figure: sample CRFs (image intensity vs. irradiance) from DoRF, the Database of Response Functions compiled by Grossberg and Nayar; curves include various Agfa, Fuji, and Kodak film stocks and the Kodak KAI-0372 CCD.]

Page 5 CRF Estimation: Geometry Invariants
Physics-based approach
- Separation of camera nonlinearities from reflectance-induced nonlinearities
Strategy
- Find a set of "Locally Planar Irradiance Points" (LPIPs) → no more reflectance-induced nonlinearities
- Estimate the CRF from this data
Main steps of the algorithm:
1. Compute image derivatives
2. Detect LPIPs
3. Estimate the CRF
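The first two steps can be illustrated with a rough sketch. This is not the paper's actual invariant computation; the planarity criterion and threshold below are our own simplifications for illustration:

```python
import numpy as np

def detect_lpip_candidates(img, thresh=1e-3):
    """Flag pixels whose local intensity surface is approximately planar:
    second derivatives small, but gradient non-zero (simplified criterion,
    not the invariant test from Ng et al.)."""
    gy, gx = np.gradient(img)        # first derivatives (step 1)
    gyy, gyx = np.gradient(gy)       # second derivatives
    gxy, gxx = np.gradient(gx)
    curvature = np.abs(gxx) + np.abs(gyy) + np.abs(gxy)
    grad_mag = np.hypot(gx, gy)
    return (curvature < thresh) & (grad_mag > thresh)   # step 2

# On a perfect linear ramp, every pixel is locally planar.
x = np.linspace(0, 1, 32)
ramp = np.tile(x, (32, 1))
mask = detect_lpip_candidates(ramp)
```

In the real method, the intensity values at such points are what constrain the CRF in step 3, since any remaining non-linearity there must come from the camera rather than from scene reflectance.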

Page 6 CRF Estimation: Radiometric Calibration
- The method is edge-based and selects the best-fitting curve among the CRFs previously learned from DoRF.
Image © Lin et al.
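The model-selection idea behind this method can be sketched as follows. The candidate curves and observations below are synthetic stand-ins for DoRF entries and edge measurements; the real method derives its constraints from edge-pixel color distributions, not from known irradiance samples:

```python
import numpy as np

def best_crf(observed_E, observed_I, candidates):
    """Return the index of the candidate CRF (tabulated on [0, 1])
    that minimizes the squared fitting error to the observations."""
    errors = [np.sum((np.interp(observed_E,
                                np.linspace(0, 1, len(c)), c)
                      - observed_I) ** 2)
              for c in candidates]
    return int(np.argmin(errors))

E = np.linspace(0, 1, 64)
candidates = [E ** (1 / g) for g in (1.0, 1.8, 2.2, 2.8)]  # toy "DoRF"
obs_I = E ** (1 / 2.2)              # noise-free synthetic observations
choice = best_crf(E, obs_I, candidates)   # → 2 (the gamma-2.2 curve)
```

Restricting the answer to a learned family of curves is what makes single-image calibration tractable: the data alone would underconstrain an arbitrary monotone function.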

Page 7 CRF Estimation
What have I done so far?
- Radiometric Calibration almost done
Next steps
- Complete the code for "Radiometric Calibration"
- Start implementing the "Geometry Invariants" paper
- Evaluation
[Figure: sample patches detected by the algorithm (Radiometric Calibration)]

Page 8 Evaluation Methods
Synthetically generated images
- Take the direct output of the camera sensor (assumed to be linear)
- Manually apply a particular CRF and check whether this CRF is recovered afterwards
Ground truth evaluation
- Use a camera with a well-known CRF (extracted from DoRF)
- Check how far our computed CRFs are from the ground truth
Stability test
- Obtain a series of images taken by the same camera (with fixed camera settings)
- Analyze how much the individual results vary
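The first evaluation method can be sketched end to end. The "estimator" below is only a placeholder (a single-parameter gamma fit by grid search), standing in for either of the two real estimation methods:

```python
import numpy as np

def rmse(curve_a, curve_b):
    """Root-mean-square error between two sampled curves."""
    return float(np.sqrt(np.mean((curve_a - curve_b) ** 2)))

E = np.linspace(0, 1, 256)       # linear "sensor" output (synthetic)
true_crf = E ** (1 / 2.2)        # manually applied, known CRF

# Placeholder estimator: fit a single gamma exponent by grid search.
gammas = np.linspace(1.0, 4.0, 301)
fits = [rmse(E ** (1 / g), true_crf) for g in gammas]
estimated = E ** (1 / gammas[int(np.argmin(fits))])

error = rmse(estimated, true_crf)   # small iff the CRF was recovered
```

The ground-truth and stability tests fit the same pattern: replace `true_crf` with a DoRF curve, or compare `estimated` curves across repeated shots and report their spread.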

Page 9 Determining the Camera Response Function
Thank you for your attention!