Camera Architecture and Microscopy Austin Blanco
How does a camera work?
► Light is converted to electrical charge.
► Charge is stored in potential wells; these are the "pixels" of the camera.
► The charge collected in each pixel is digitized.
► The digital data is transferred to the computer.
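The chain above (photons → stored charge → digitized counts) can be sketched in a few lines. All numbers below (quantum efficiency, full-well capacity, bit depth) are illustrative assumptions, not the specs of any particular sensor.

```python
FULL_WELL = 20000   # electrons one pixel's potential well can hold (assumed)
QE = 0.6            # quantum efficiency: fraction of photons converted (assumed)
ADC_BITS = 12       # digitizer resolution (assumed)

def digitize_pixel(photons):
    # Photons convert to electrons; the well saturates at FULL_WELL.
    electrons = min(int(photons * QE), FULL_WELL)
    # Scale the stored charge to the ADC's output range.
    return round(electrons / FULL_WELL * (2 ** ADC_BITS - 1))

# A very bright pixel clips at the ADC maximum (4095 for 12 bits):
print(digitize_pixel(100000))
```

A pixel receiving more photons than its well can store simply clips, which is why full-well capacity bounds the brightest signal a camera can record.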
[Diagram: photons emitted from a sample fluorophore, or transmitted light, striking the pixels of the CCD array]
Color Mosaic
► Enables fast acquisition
► Sacrifices intensity and spatial resolution
► Lowers sensitivity
► Only used on interline sensors
Pixel Size
► Larger pixels can hold more charge, providing greater dynamic range, higher sensitivity, and faster readout, at the cost of lower spatial resolution.
► Smaller pixels hold less charge, but offer higher spatial resolution.
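The dynamic-range claim follows from comparing the largest storable signal (full-well capacity) to the noise floor. A minimal calculation, using illustrative full-well and read-noise values rather than any real sensor's specs:

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    # Dynamic range is the ratio of the largest to the smallest
    # distinguishable signal, expressed in decibels.
    return 20 * math.log10(full_well_e / read_noise_e)

# A larger pixel with a deeper well has more dynamic range at the
# same read noise (values are assumptions for illustration):
large = dynamic_range_db(full_well_e=40000, read_noise_e=8)  # ~74 dB
small = dynamic_range_db(full_well_e=10000, read_noise_e=8)  # ~62 dB
```

Quadrupling the well depth buys about 12 dB of dynamic range, which is why large-pixel sensors suit high-dynamic-range work.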
Choosing the Best Camera
► Trade-off axes: Speed, Sensitivity, Resolution
► Sensor options: back-thinned, frame transfer, EMCCD, color mosaic, interline, monochrome
CMOS vs. CCD
► What is "CMOS"? Complementary Metal Oxide Semiconductor
► Designates an electrical conductor layout used for many applications: sensors, memory, transceivers, data converters
Positive Aspects
► Both the sensor and its control electronics are manufactured on-chip
  - Theoretically lower cost
  - Smaller complete package (no external boards required)
  - More powerful on-chip processing (e.g., color compositing)
► A/D conversion at each pixel
  - Superior control of gain for color applications
  - Far greater speed potential than CCDs (lower A/D noise per pixel at a given readout rate)
  - Sub-arrays are much faster thanks to the non-serial readout architecture
► Rugged design
  - Fewer solder contacts means less potential for something to break
Negative Aspects
► Shuttering methods
  - Nowhere for the stored charge to go!
  - A "rolling shutter" is commonly used to avoid the problem, but it distorts moving objects
  - Full-frame shuttering requires multiple transistors per pixel; it solves the rolling-shutter drawbacks but reduces fill factor
► QE
  - Below 10 µm pixel size, QE is lower for CMOS than CCD: more non-active-area parts reduce fill factor
  - Above 10 µm, CCD and CMOS are equivalent in QE
► Dynamic range
  - The multiple construction layers in the CMOS design give photons the opportunity to bounce to neighboring pixels, which softens the image and reduces dynamic range (a new noise source)
  - A/D variation between pixels increases the noise factor
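The rolling-shutter distortion mentioned above is easy to see in a toy model: each row is exposed slightly later than the one above it, so a moving edge lands at a different column in every row. The readout time and object speed below are arbitrary illustrative values.

```python
ROWS = 8
ROW_READOUT_TIME = 1.0   # time to read out one row (arbitrary units, assumed)
OBJECT_SPEED = 2.0       # columns the object moves per time unit (assumed)

def edge_column(row, start_col=0.0):
    # Row `row` is captured at t = row * ROW_READOUT_TIME; by then a
    # horizontally moving edge has drifted OBJECT_SPEED * t columns.
    return start_col + OBJECT_SPEED * row * ROW_READOUT_TIME

# A perfectly vertical edge comes out slanted across the frame:
skew = [edge_column(r) for r in range(ROWS)]
print(skew)
```

A global (full-frame) shutter exposes every row at the same instant, so `skew` would be all zeros, which is exactly the behavior the extra per-pixel transistors buy at the cost of fill factor.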
Integration Limits
► CMOS design changes
  - Require more work to integrate into camera bodies
  - Slow concept-to-market (~18 months)
► CCD design changes
  - "Drop in": new CCDs can be used in old designs
  - Faster turnaround (~8 months) means more profit now
Future of CMOS
► Potential for back-thinning
  - Could avoid the current problems with softening and lower QE
  - With back-thinning, CMOS could compete with and possibly beat CCDs for low-light applications
► Dedicated manufacturing facilities
  - CMOS was originally thought to be cheap to produce
  - Ultimately, dedicated foundries will be required for scientific-grade chips, driving cost up to or above that of CCD manufacture
Conclusions
► CCDs and CMOS sensors fit our needs in complementary ways:
  - CCDs for low-light quantitative applications
  - CMOS for high-resolution brightfield applications
  - CMOS for rugged environments
  - CCDs for speed applications
  - CMOS for small form factors
  - CCDs for high-dynamic-range brightfield
► CMOS sensors are a "disruptive technology"
  - Someday CMOS may eclipse and possibly replace CCDs, just as tape gave way to CD and then MP3
  - Reaching CCD-level performance will push CMOS prices up to or beyond current CCD pricing, implying a stable market for imaging applications in the future
Thank You!
► Austin Blanco
► Technical Instrument: training on imaging / software, custom programming for software, consultation on existing and new systems
► 510-708-2995