Comparing Cameras Using EMVA 1288

Comparing Cameras Using EMVA 1288 Dr. Friedrich Dierks Head of Software Development Components © Basler AG, 2006, Version 1.2

Why Attend this Presentation? After attending this presentation you can compare the sensitivity of cameras with respect to temporal and spatial noise using EMVA 1288 data sheets. You will understand the role of Gain (doesn't matter), pixel size (doesn't matter), and bright light (the key). Beware: all formulas in this presentation will drop out of the sky. For details see the standard and the white papers.

Outline Some Basics Temporal Noise Spatial Noise

Gain is not Sensitivity Example: camera A yields an image twice as bright as camera B. Does that mean that camera A is twice as sensitive as camera B? No! Increase the Gain of camera B until the images have equal brightness (Gain = 2). Does that mean camera B is now as sensitive as camera A? No! Multiplying each pixel by 2 in software has the same effect. The Gain has no effect on the sensitivity of a camera*). *) At least with today's digital cameras

What is Sensitivity? Example: A: 10 ms exposure, B: 20 ms exposure. Camera A yields the same image quality as camera B, but needs half the amount of light to achieve it. Camera A is twice as sensitive as camera B! Sensitivity is the ability to deliver high image quality in low light.

Defining Image Quality Image Quality = Signal-to-Noise Ratio (SNR): SNR = (bright signal − dark signal) / noise. SNR does not depend on Gain: Gain increases the signal as well as the noise. SNR does not depend on Offset: Offset shifts the dark signal as well as the bright signal. There are different kinds of noise: total noise = temporal noise + spatial noise.
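As a minimal sketch of this definition (hypothetical grey values, not from the standard), the SNR of one pixel can be estimated from a bright and a dark measurement series:

```python
from statistics import mean, stdev

def measure_snr(bright, dark):
    """SNR = (bright signal - dark signal) / noise, with the noise taken
    as the sample standard deviation of the bright measurement."""
    signal = mean(bright) - mean(dark)
    noise = stdev(bright)
    return signal / noise

# Hypothetical grey values of one pixel over four frames:
bright = [109, 111, 110, 110]
dark = [10, 10, 10, 10]
print(round(measure_snr(bright, dark)))  # -> 122
```

Note that scaling every grey value by the same Gain factor leaves this ratio unchanged, which is exactly the point the slide makes.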

Different Kinds of Noise Total Noise: variation (= non-uniformity) between the grey values of pixels in a single frame. Spatial Noise: variation between the grey values of pixels if the temporal noise is averaged out. Temporal Noise: variation (= flicker) in the grey value of the pixels from frame to frame.

Outline Some Basics Temporal Noise Spatial Noise

Light is Noisy Np = number of photons collected in a single pixel during the exposure time. Np varies from measurement to measurement: light itself is noisy. The physics of light (Poisson statistics) yields SNRp = µp / √µp = √µp, with mean number of photons µp. Image quality ~ amount of light.
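The Poisson statistics behind this slide can be checked numerically. This is an illustrative sketch (a pure-Python Knuth sampler, not part of the standard): for Poisson-distributed photon counts the variance equals the mean, so SNRp = µp / √µp = √µp.

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's algorithm: draw one Poisson-distributed photon count."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        p *= rng.random()
        k += 1
    return k - 1

rng = random.Random(1288)
mu_p = 100  # mean number of photons per pixel and exposure
samples = [poisson_sample(mu_p, rng) for _ in range(20000)]

m = sum(samples) / len(samples)
var = sum((s - m) ** 2 for s in samples) / len(samples)
snr = m / math.sqrt(var)

print(m, var)                 # both close to 100 (variance = mean)
print(snr, math.sqrt(mu_p))   # both close to 10: SNRp = sqrt(mu_p)
```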

SNR Diagram Draw the SNR in a double-logarithmic diagram, taking the logarithm to base 2. SNRp yields a straight line with slope = ½. Real cameras live right below the light's SNR curve: no camera can yield a higher SNR than the light itself.

Axes of the SNR Diagram Common units for SNR: SNR = x : 1; SNRbit = log2 SNR = ln SNR / ln 2; SNRdB = 20 log10 SNR ≈ 6 SNRbit. Special SNR values: excellent*) SNR = 40:1 = 5…6 bit; acceptable*) SNR = 10:1 = 3…4 bit; threshold SNR = 1:1 = 0 bit. The x-axis is the number of photons collected in one pixel during the exposure time, given as logarithm to base 2. Example: µp = 1000 ~ 1024 = 2^10 → 10 on the scale; +1 → double exposure, −1 → half exposure. *) The definitions of "excellent" and "acceptable" SNR originate from ISO 12232.
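The unit conversions on this slide are easy to encode; a small helper sketch:

```python
import math

def snr_bits(snr):
    """SNRbit = log2(SNR) = ln(SNR) / ln(2)"""
    return math.log2(snr)

def snr_db(snr):
    """SNRdB = 20 * log10(SNR), which is ~6 * SNRbit (6.02 exactly)."""
    return 20 * math.log10(snr)

print(snr_bits(40))  # "excellent" 40:1 -> ~5.3 bit (in the 5...6 bit band)
print(snr_bits(10))  # "acceptable" 10:1 -> ~3.3 bit
print(snr_db(10))    # 10:1 -> 20 dB
```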

Quantum Efficiency (QE) Not every photon hitting a pixel creates a free electron. QE = (number of electrons collected) / (number of photons hitting the pixel) ≤ 100%. QE heavily depends on the wavelength; EMVA 1288 gives QE as a table or diagram (QE [%] vs. lambda [nm]). QE < 100% degrades the SNR of a camera. Typical max QE values: 25% (CMOS) … 60% (CCD).

Quantum Efficiency in the SNR Diagram The SNR of the electrons is SNRe = √(QE · µp): in the diagram the SNRp curve is shifted to the right by |log2 QE|. Examples: QE = 50% = 1/2 → shift by 1; QE = 25% = 1/4 → shift by 2. A high quantum efficiency yields a sensitive camera.

Saturation A camera saturates if the pixel saturates or if the analog-to-digital converter saturates. The useful signal range lies between saturation and the noise floor. At minimum Gain the ADC saturates shortly before the pixel*). The number of electrons at saturation is the Saturation Capacity; it depends on the Gain. Do not confuse saturation capacity with full well capacity (pixel only). *) Otherwise you get high fixed pattern noise at saturation.

Quantization Noise Rule of thumb: the dark noise must be larger than 0.5 DN. Corollary: with an N bit digital signal you can deliver no more*) than N+1 bit dynamic range. Example: an A102f camera with 11 bit dynamic range will deliver only 9 bit in Mono8 mode. Use Mono16! Have at least ±1.5 DN noise. *) You can if you use loss-less compression
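A sketch of the corollary, assuming the noise floor sits at the rule-of-thumb minimum of 0.5 DN: the dynamic range an N-bit output can carry is full scale divided by the noise floor, i.e. about N+1 bit.

```python
import math

def max_dynamic_range_bits(output_bits, dark_noise_dn=0.5):
    """Dynamic range = full scale / noise floor, expressed in bits."""
    full_scale = 2 ** output_bits - 1
    return math.log2(full_scale / dark_noise_dn)

# An 8-bit (Mono8) output carries at most ~9 bit of dynamic range,
# so a camera with 11 bit dynamic range needs Mono16:
print(round(max_dynamic_range_bits(8)))   # -> 9
print(round(max_dynamic_range_bits(16)))  # -> 17
```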

Saturation in the SNR Diagram At saturation capacity SNRe becomes maximum: SNRe.max = √µe.sat. The corresponding number of photons saturating the camera is µp.sat = µe.sat / QE. Typical saturation capacity values are 30…100 ke- ("kilo electrons"). A high saturation capacity yields a good maximum image quality.

Dark Noise EMVA 1288 model assumption: camera noise = photon noise + dark noise*), with dark noise = constant. Dark noise is measured as the standard deviation of the dark signal in electrons [e-]. The model approximates real-world cameras pretty well for reasonable exposure times and reasonable sensor temperatures. Typical dark noise values are 7…110 e-. *) Dark noise is not to be confused with dark current noise, which is only a fraction of the dark noise.

Dark Noise in the SNR Diagram SNR without photon noise: SNRd = QE · µp / σd. SNRd yields a straight line with slope = 1. The minimum detectable signal is found, by convention, at SNRd = 1*) where signal = noise, i.e. µp.min = σd / QE. A low dark noise yields a sensitive camera. *) In the double-logarithmic diagram SNR = 1 equals log(SNR) = 0

The Complete SNR Diagram Overlaying photon noise and dark noise yields SNR(µp) = QE · µp / √(σd² + QE · µp). The curve starts at µp.min and ends at µp.sat. An EMVA 1288 data sheet provides all parameters to draw the curve, e.g. in Excel: quantum efficiency QE [%] as a function of wavelength, dark noise σd [e-], and saturation capacity µe.sat [e-].
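The whole curve follows from the data sheet parameters. A Python sketch, using the A102f-style values quoted later in this presentation (QE = 56%, σd = 9 e-) plus an assumed saturation capacity of 30 ke- (a typical value, not from the data sheet):

```python
import math

def snr(mu_p, qe, sigma_d):
    """EMVA 1288 linear model: SNR(mu_p) = QE*mu_p / sqrt(sigma_d^2 + QE*mu_p)."""
    return qe * mu_p / math.sqrt(sigma_d ** 2 + qe * mu_p)

def mu_p_min(qe, sigma_d):
    """Minimum detectable signal (SNRd = 1 convention): sigma_d / QE."""
    return sigma_d / qe

def mu_p_sat(qe, mu_e_sat):
    """Number of photons saturating the camera: mu_e_sat / QE."""
    return mu_e_sat / qe

qe, sigma_d = 0.56, 9.0   # from the data sheet
mu_e_sat = 30_000         # assumed saturation capacity [e-]

print(round(mu_p_min(qe, sigma_d)))              # -> 16 photons
print(round(mu_p_sat(qe, mu_e_sat)))             # -> 53571 photons
print(snr(mu_p_sat(qe, mu_e_sat), qe, sigma_d))  # near sqrt(30000), i.e. ~173
```

The endpoints also give the dynamic range of the next slide: µp.sat / µp.min ≈ 3333:1, about 11.7 bit.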

Limits within one Image The brightest spot in the image is limited by µp.sat; the darkest spot is limited by µp.min. Dynamic Range = brightest / darkest spot = µp.sat / µp.min*). A high dynamic range is especially important for natural scenes. *) This equation holds true only for sensors with a linear response.

A Typical EMVA1288 Data Sheet Lots of Graphics

Where Does the Data Come From? Example: at Basler a fully automated camera test tool ensures quality in production. Every camera produced will be EMVA 1288 characterized (done for 1394 and GigE already). Customer benefits: guaranteed quality, full process control, and parameters can be given as typical value + range. Other manufacturers have similar measuring devices in production.

The Camera Comparer Select cameras A and B. Select the wavelength (white → 545 nm = green). Select the SNR you want → read the #photons ratio. Select the #photons you have → read the SNR ratio.

How Many Photons do I Have? The hard way to get #photons: measure the radiance R and compute µp. The easy way: use an EMVA 1288 characterized camera to measure #photons, with y: grey value in digital numbers [DN] (read from the viewer), QE: quantum efficiency for the given wavelength (white light is tricky…) (from the data sheet), and K: conversion gain for the operating point used for characterization, esp. the Gain (from the data sheet). Some ways to influence #photons: Exposure time: µp is proportional to Texp; typical values (@ 30 fps) are 30 µs … 33 ms → 1:1000 → 10 bit. Lens aperture: µp is proportional to (1/f#)²; typical f-stops are 16, 11, 8, 5.6, 4, 2.8, 2, 1.4 → 1:128 → 7 bit. Resolution: µp is proportional to 1 / number of pixels; 2 MPixel : VGA → 1:7 → 3 bit. Distance to scene: µp is proportional to 1 / (distance to scene)².
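The "easy way" can be sketched as follows (all numbers are hypothetical; K and QE would come from the camera's EMVA 1288 data sheet):

```python
def photons_from_grey_value(y_dn, y_dark_dn, K, qe):
    """mu_e = (y - y.dark) / K  [e-],  mu_p = mu_e / QE  [photons].
    K is the conversion gain in DN/e- at the characterized operating point."""
    mu_e = (y_dn - y_dark_dn) / K
    return mu_e / qe

# Hypothetical reading: 500 DN bright, 20 DN dark, K = 0.5 DN/e-, QE = 50%
print(photons_from_grey_value(500, 20, K=0.5, qe=0.5))  # -> 1920.0 photons

# Opening the lens from f/4 to f/2.8 roughly doubles mu_p: (4/2.8)^2
print((4 / 2.8) ** 2)  # ~2.04
```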

The Pixel Size Myth… A patch on the object's surface radiates light. The lens catches a certain amount of that light depending on the solid angle, and focuses it onto the corresponding pixel no matter how large the pixel is. For a fair comparison of cameras, keep the resolution constant (larger pixels require a larger focal length) and keep the aperture diameter d = f / f# constant (larger pixels have a larger relative aperture). Larger pixels DO NOT result in a more sensitive camera.

Example Start: pixel pitch a, focal length f, aperture diameter d, relative aperture f# = f / d, distance to object ao = const. Step 1: double the pixel pitch, a → 2a. This yields four times the amount of light because of the quarter number of pixels. Step 2: double the focal length, f → 2f, while keeping the relative aperture f# = const. This goes back to the original number of pixels and still yields four times the amount of light because of twice the aperture diameter. Step 3: double the relative aperture, f# → 2f#. This yields the same amount of light as at the start: original number of pixels, original aperture diameter d, although the pixel pitch is doubled (q.e.d.).

Don't Get Confused - Pixel Size Matters a Lot*) For example, smaller pixels yield fewer aberrations because of near-axis optics, yield smaller and cheaper optics, allow a larger number of pixels, and have fewer problems with micro lenses. Larger pixels yield sharper images because less resolving power of the lens is required, keep you out of the diffraction limit of the lens, have a better geometrical fill factor (area scan), and have a larger full well capacity. *) Although not with respect to sensitivity

Comparing Sensitivity without Graphics Rules of thumb: for low light (SNR ≈ 1) compare µp.min = σd / QE; for bright light (SNR >> 1) compare QE. Example: A102f (CCD): QE = 56%, σd = 9 e- → µp.min = 16 photons. A600f (CMOS): QE = 32%, σd = 113 e- → µp.min = 353 photons. For low light the A102f is 22 (= 353/16) times more sensitive than the A600f; for bright light the A102f is 1.8 (= 56/32) times more sensitive.
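The rules of thumb above can be reproduced directly from the quoted data sheet values:

```python
cameras = {
    "A102f (CCD)":  {"qe": 0.56, "sigma_d": 9.0},
    "A600f (CMOS)": {"qe": 0.32, "sigma_d": 113.0},
}

def mu_p_min(cam):
    """Minimum detectable signal in photons: sigma_d / QE."""
    return cam["sigma_d"] / cam["qe"]

# Low light: compare minimum detectable signals.
low_light_ratio = mu_p_min(cameras["A600f (CMOS)"]) / mu_p_min(cameras["A102f (CCD)"])
# Bright light: compare quantum efficiencies.
bright_light_ratio = cameras["A102f (CCD)"]["qe"] / cameras["A600f (CMOS)"]["qe"]

print(round(low_light_ratio))        # -> 22   (as on the slide)
print(round(bright_light_ratio, 2))  # -> 1.75 (i.e. ~1.8)
```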

Outline Some Basics Temporal Noise Spatial Noise

Spatial Noise Principal model of a single pixel: light → gain → + offset → grey value. The offset differs from pixel to pixel → add offset noise (DSNU). The gain differs from pixel to pixel → add gain noise (PRNU). Gain noise is proportional to the signal itself.

Spatial Noise in the SNR Diagram Offset noise adds to the dark noise. Gain noise is a new kind of behavior: a flat line in the SNR diagram. The resulting SNR formula is SNR(µp) = QE · µp / √(σd² + σo² + QE · µp + (sg · QE · µp)²), with offset noise σo (DSNU) and relative gain noise sg (PRNU).
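A sketch of the extended model, in my own notation (σo for the offset/DSNU noise in e-, sg for the relative gain/PRNU noise; these symbols are assumptions, not taken from the slide). The PRNU term grows with the signal, which produces the flat bright-light SNR limit of 1/sg:

```python
import math

def snr_with_spatial_noise(mu_p, qe, sigma_d, sigma_o, s_g):
    """SNR = QE*mu_p / sqrt(sigma_d^2 + sigma_o^2 + QE*mu_p + (s_g*QE*mu_p)^2)."""
    mu_e = qe * mu_p
    return mu_e / math.sqrt(sigma_d ** 2 + sigma_o ** 2 + mu_e + (s_g * mu_e) ** 2)

# For very bright light the PRNU term dominates and the SNR flattens at 1/s_g:
print(snr_with_spatial_noise(1e9, qe=0.5, sigma_d=10, sigma_o=20, s_g=0.01))  # ~100
```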

Spatial Noise Effects Example images: CCD vs. CMOS. Spatial noise is relevant especially for CMOS cameras.

Pixel Correction Spatial noise can be corrected inside a camera: each pixel gets its own offset to compensate for DSNU, and its own gain to compensate for PRNU. Most CMOS cameras have a pixel correction. Depending on the sensor, even more correction types are required (e.g. CCD without shading vs. CMOS with shading). The correction is valid at the operating point where the correction values have been taken.

Stripes EMI-based stripes: a high-frequency disturbing signal is added to the video signal, and the maxima of the disturbing signal are shifted between lines. This results in diagonal stripes which tend to move and pivot with temperature. Structure-based stripes: there are multiple signal paths in the sensor/camera with slightly different parameters (gain, offset). This results in fixed horizontal or vertical stripes. Example: even-odd mismatch.

The Spectrogram (3 different cameras) X-axis: horizontal distance between stripes in [pixel]. Y-axis: amplitude at the corresponding frequency in #photons. The ideal camera has white noise only → a flat spectrogram. The noise floor height indicates the minimum detectable signal; peaks indicate stripes in the image.

Conclusion With an EMVA 1288 data sheet you can compare the sensitivity of cameras with respect to temporal and spatial noise. Remember: Gain doesn't matter; pixel size doesn't matter; nothing beats having enough light. Get started: get the camera comparer and play around with the parameters. Get a camera with an EMVA 1288 data sheet and determine the #photons in your application.

Thank you for your attention! More info: www.basler-vc.com > Technologies > EMVA 1288 Contact me: friedrich.dierks@baslerweb.com