Contrast optimization for structure-from-motion surveys
James O'Connor¹, Mike Smith¹, Mike R. James²
¹ School of the Natural and Built Environment, Kingston University; ² Lancaster Environment Centre, Lancaster University

1. Introduction

Motivation: Structure-from-motion (SfM) is a technique that takes multiple overlapping 2D photographs of a common scene as input and reconstructs that scene in 3D. Feature-based SfM pipelines take single-channel (greyscale) images as inputs. Within this contribution we review the results of varying this channel.

Research question: Can we improve the accuracy of SfM models by generating appropriate greyscale channels from RGB images?

2. Methods

a. SfM software takes greyscale images as inputs.
b. By altering this greyscale input, we aim to optimise contrast and improve outputs.
c. Experiment: take greyscale images made using different combinations of the red, green and blue colour bands; test other operators and compare accuracies.
d. Measure accuracy against a very high quality terrestrial laser scan.
e. Create greyscale image subsets by imposing the constraint that the red, green and blue band weights must sum to 1. Weights are incremented by 0.05 per set and all permutations are generated.
f. 231 image subsets are constructed, with 10 models generated for each set; error (cloud-to-truth) surfaces are then generated for the entire dataset (a minimal sketch of this enumeration is given after Section 3).
g. The data are augmented with further algorithms.
h. Two datasets are used: a 9-image block of a Welsh coastal cliff (Figure 2; Westoby et al. 2012) and a 21-image block of a lab-controlled scene (courtesy of Annette Eltner; Figure 3).

3. Description of other algorithms

PCA1: first principal component (PC) of each RGB image.
PCALab: first PC of each image after conversion to LAB colourspace.
16PCA1 / 16PCALab: 16-bit versions of the first two algorithms.
AVGLab3: taken from Verhoeven et al. (2015); seeks to maximise statistical information in a single image.
PCALabM: multi-image decolourization; images are concatenated before taking the first PC in order to ensure global consistency (Benedetti et al. 2012).
mLab3: AVGLab3, but with masking of unimportant regions before applying the workflow.
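To make Methods steps e and f concrete, the short sketch below enumerates every red/green/blue weight triple that sums to 1 in increments of 0.05 and applies one such triple as a weighted greyscale conversion. This is a minimal illustration, not the authors' code; the function and variable names are ours. Counting the triples reproduces the 231 subsets quoted in step f.

```python
import numpy as np

def weight_triples(step=0.05):
    """Enumerate (r, g, b) weights >= 0 that sum to 1.0 in increments of `step`."""
    n = round(1.0 / step)                      # 20 increments when step = 0.05
    return [(i * step, j * step, (n - i - j) * step)
            for i in range(n + 1)
            for j in range(n + 1 - i)]

def weighted_greyscale(rgb, weights):
    """Convert an HxWx3 RGB array to greyscale using one weight triple."""
    r, g, b = weights
    grey = r * rgb[..., 0] + g * rgb[..., 1] + b * rgb[..., 2]
    return np.clip(grey, 0, 255).astype(np.uint8)

triples = weight_triples()
print(len(triples))                            # 231, matching step f
```

Each of the 231 greyscale image sets is then run through the SfM pipeline and compared against the laser-scan reference, as in the workflow of Figure 1.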
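The PCA-based conversions listed in Section 3 share one core step: projecting the three colour channels of an image onto their first principal component. The sketch below is a minimal, illustrative version of that step for the plain PCA1 variant only; it is not the code used in the study (the Lab variants would first convert to CIELAB, and the 16-bit variants would rescale to 0–65535 rather than 0–255).

```python
import numpy as np

def pca1_greyscale(rgb):
    """Project an HxWx3 RGB image onto its first principal component (PCA1-style)."""
    h, w, _ = rgb.shape
    pixels = rgb.reshape(-1, 3).astype(np.float64)
    pixels -= pixels.mean(axis=0)                    # centre each channel
    cov = np.cov(pixels, rowvar=False)               # 3x3 channel covariance
    eigvals, eigvecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    first_pc = eigvecs[:, -1]                        # direction of maximum variance
    grey = pixels @ first_pc
    # Rescale to 8 bits; a 16-bit output (16PCA1) would use 65535 here instead
    grey = (grey - grey.min()) / (grey.max() - grey.min() + 1e-12) * 255.0
    return grey.reshape(h, w).astype(np.uint8)
```

Whether this final 8-bit quantisation discards useful contrast is precisely what the 16-bit variants in Section 3 (and the bit-depth item under Future work) are intended to probe.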
4. Results

Combinations of RGB band weights are mapped onto a triangle, showing distance to the reference laser scan (Figures 4 and 5). The median of the mean absolute cloud-to-truth distance over the 10 runs is presented. The other algorithms are listed in the colour bar and in Figure 6.

5. Discussion

Cloud-to-cloud errors were reduced by up to 6% (Constitution Hill) relative to untreated images using simple contrast-enhancement techniques. Multi-image decolourization did not perform as well as image-specific techniques on dataset (b). The range of differences appears to depend on the number of images within each block.

6. Future work

Bit depth will be investigated to see if further contrast enhancement can be achieved. The number and geometry of images will be studied further to see if accuracy can be increased. RAW image workflows are currently being investigated to maximise the value obtained from consumer-grade cameras.

Figure captions

Figure 1. Workflow applied (stages: JPGs from camera, ground control points, image processing, Agisoft PhotoScan, dense cloud, terrestrial laser scanner reference, cloud-to-truth distance, evaluation over 10 runs per set).
Figure 2. Constitution Hill overview.
Figure 3. Lab control overview.
Figure 4. Results from the Constitution Hill block (axes: red, green and blue coefficients).
Figure 5. Results from the lab control block (axes: red, green and blue coefficients).
Figure 6. Results of the other operators for the lab control block.

Acknowledgments

Annette Eltner and Matt Westoby for providing the datasets; Geert Verhoeven for sharing the code from his work. Contact: jp-oconnor.com

References

Benedetti L, Corsini M, Cignoni P, Callieri M and Scopigno R (2012) Color to gray conversions in the context of stereo matching algorithms. Machine Vision and Applications 23(2).
Verhoeven G, Karel W, Štuhec S, Doneus M, Trinks I and Pfeifer N (2015) Mind your grey tones – examining the influence of decolourization methods on interest point extraction and matching for architectural image-based modelling. ISPRS.
Westoby M, Brasington J, Glasser N, Hambrey M and Reynolds J (2012) 'Structure-from-Motion' photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 179.