Module 2: Linearity (TVI Vision, Kari Siren)


Module 2: Linearity — AGENDA (TVI Vision, Kari Siren)
- Linearity in general
- Theory: what does non-linearity mean when measuring "true" colour
- How to measure, compute and characterize linearity error
- Demanding case: linearity in colour grading

TVI Vision
TVI Vision cameras:
- 3-CCD line scan cameras, colour separation by beam splitter
- 3 × 1024, 3 × 2048 or 3 × 4096 pixels
- speed 5 kilolines/s to 35 kilolines/s
- up to 12-bit output per channel

Linearity
Relative output of an ideal camera and of a camera with 3.5% peak linearity error. The output [DN] should increase in direct proportion to the amount of light (photons), independent of the light level. Non-linearity is in fact a small variation of the overall system gain K [DN/e⁻]. Sometimes it is necessary to operate with an offset to avoid clipping the signal at zero; just take care of the offset in the calculations.
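The peak linearity error quoted above can be computed by fitting a straight line to the measured output-versus-exposure data and taking the worst relative deviation. A minimal sketch (the function name and the full-scale normalisation are assumptions for illustration, in the spirit of EMVA-1288-style characterisation):

```python
import numpy as np

def peak_linearity_error(exposure, output_dn):
    """Peak linearity error in percent of the full-scale output."""
    # Least-squares line through the data: output = gain * exposure + offset
    gain, offset = np.polyfit(exposure, output_dn, 1)
    fit = gain * exposure + offset
    # Worst deviation from the line, relative to the full-scale signal
    return 100.0 * np.max(np.abs(output_dn - fit)) / np.max(output_dn)

# A perfectly linear camera gives (up to floating-point noise) zero error:
exposure = np.linspace(0.0, 1.0, 11)
ideal = 4000.0 * exposure
print(round(peak_linearity_error(exposure, ideal), 6))  # -> 0.0
```

Datasheets differ on whether the deviation is normalised to full scale or to the local signal level; the sketch uses full scale, which is the more common convention.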

Theory of measuring colour
Colour is independent of brightness: colour is the proportion of R to G to B, so the brightness level should not affect the colour. Why not always measure at the same brightness level?
- illumination may vary: lamps age, the target may vary, shadows
- in a dirty factory running 24/7 without people, you need margin
- with the same line you have to measure both bright and dark targets
A 1% linearity error is already somewhat visible.

Theory of measuring colour
The x axis is brightness; the y axis shows the output of an 8-bit camera with 3.5% peak linearity error, the same linearity error on each channel. Colour is the proportion of R to G to B. The correct values at full brightness are R = 246, G = 143 and B = 90.

Theory of measuring colour
The proportion of R to G varies from 1.8 to 1.3, and the proportion of R to B from 3.0 to 1.8, with the worst effect at about 25% brightness. In this example the linearity error starts from zero in darkness; in many cases there is a drastic change near darkness.
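The drift of the channel ratios can be reproduced with a toy model: apply the same mildly non-linear response to each channel and watch R:G and R:B change with brightness even though the scene colour is constant. The sine-shaped deviation below is an assumption chosen only to give roughly a 3.5% peak error; it is not the measured curve from the slides:

```python
import numpy as np

def camera_response(signal, error=0.035):
    """Ideal 8-bit response plus a smooth deviation with ~3.5% peak error."""
    x = signal / 255.0                               # normalise to full scale
    return 255.0 * (x + error * np.sin(np.pi * x))   # zero error at 0 and at full scale

# True colour at full brightness: R=246, G=143, B=90 (from the slides)
for brightness in (1.0, 0.5, 0.25):
    r, g, b = (camera_response(brightness * c) for c in (246.0, 143.0, 90.0))
    print(f"brightness {brightness:>4}: R/G = {r / g:.2f}, R/B = {r / b:.2f}")
```

Even this gentle deviation shifts the ratios by several percent between full and quarter brightness, which is the effect the slide quantifies.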

Theory of measuring colour
To visualize the colour error, the measured R, G and B values are multiplied by the inverse of the brightness level. On the left you see the colour in darkness and on the right in full brightness. And with double the linearity error it looks like this…
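The normalisation step is simple: dividing the measured channel values by the known brightness level means a perfectly linear camera would give the same colour at every level, so any remaining drift is the linearity error made visible. A minimal sketch (function name assumed):

```python
def normalised_colour(measured_rgb, brightness):
    """Scale measured R, G, B back to their full-scale equivalents."""
    return tuple(v / brightness for v in measured_rgb)

# A linear camera at 25% brightness measures exactly a quarter of each channel,
# so normalisation recovers the true colour:
print(normalised_colour((61.5, 35.75, 22.5), 0.25))  # -> (246.0, 143.0, 90.0)
```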

The standard: how to measure, compute and characterize linearity error

The standard

"The linearity error of the illumination setup must be at least a factor of 2 smaller than the linearity error that shall be characterized by this setup." To characterize the linearity of a 12-bit camera you must therefore know the intensity level with 13-bit accuracy, and this is not easy.
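The 13-bit figure follows directly from the factor-of-2 rule. A back-of-the-envelope check, assuming the error floor of interest is one camera LSB:

```python
# To verify linearity at 12-bit resolution (1 part in 4096), the illumination
# must be known at least twice as accurately, i.e. to 1 part in 8192 = 13 bits.
camera_bits = 12
camera_lsb = 1 / 2**camera_bits   # smallest camera step, ~0.024% of full scale
required = camera_lsb / 2         # the setup must be a factor of 2 better

required_bits = 0
while 1 / 2**required_bits > required:
    required_bits += 1
print(required_bits)  # -> 13
```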

Demanding case: grading mink fur colour
- 3 × 12-bit camera
- "white" fur: levels about 3900 DN
- "black" fur: level can be about DN over offset
- black/dark brown furs are divided into 16 different groups
- the difference between groups can be 2–3 DN
- the difference between groups is 0.5‰, not %
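The per-mille figure follows from the numbers on the slide: a 2 DN group difference against a roughly 3900 DN "white" level is about half a per mille, far below the 1% error that is already visible.

```python
# Quick check of the grading tolerance, using the slide's own numbers.
white_level = 3900        # DN, "white" fur level
group_difference = 2      # DN between adjacent fur groups
print(f"{1000 * group_difference / white_level:.2f} per mille")  # -> 0.51 per mille
```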