
# ECEN 4616/5616 Optoelectronic Design

Lecture #19: 2/26/14

Class website with past lectures, various files, and assignments: http://ecee.colorado.edu/ecen4616/Spring2014/ (The first assignment will be posted here on 1/22.) To view video recordings of past lectures, go to http://cuengineeringonline.colorado.edu and select “course login” from the upper right corner of the page.

## Polarization

For EM waves, polarization refers to the direction of the wave’s electric vector. In our treatment of plane waves as solutions to the EM wave equation, we have implicitly assumed linear polarization – that is, the electric vector oscillates in a single plane. In the plane-wave spectrum (PWS) analysis, we showed that a very complex field in an aperture can be described as a sum of plane waves.

[Figure: a linearly polarized light wave.]
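As a concrete sketch, a linearly polarized plane wave can be written as a fixed two-component direction vector (a Jones vector, in the standard convention) times a scalar oscillation. The wavelength and vacuum propagation are illustrative assumptions, not values from the lecture:

```python
import numpy as np

# Jones vector for light linearly polarized at angle 'theta' from the x-axis.
def linear_jones(theta):
    return np.array([np.cos(theta), np.sin(theta)])

# E-field of a linearly polarized plane wave: E(z, t) = E0 cos(k z - w t) e_hat.
def e_field(z, t, E0=1.0, wavelength=500e-9, theta=0.0):
    k = 2 * np.pi / wavelength          # wavenumber
    w = 2 * np.pi * 3e8 / wavelength    # angular frequency (vacuum assumed)
    return E0 * np.cos(k * z - w * t) * linear_jones(theta)

# The field oscillates in a single plane: the x and y components stay in the
# fixed ratio tan(theta) for all z and t.
```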

## Polarization

Polarizers are devices that pass certain linearly polarized waves and block (or reflect) the orthogonal polarization. An analogy is often made with waves on a rope passing (or not) through a picket fence. What constitutes the analog of a ‘picket fence’ for light waves? Anything which affects the wave’s electric field differently depending on the orientation of that field. One simple device is a grid of wires (spaced < λ apart): the electric field is nearly cancelled if it is parallel to the wires, but little affected if perpendicular.
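To a first approximation the wire grid acts as an ideal linear polarizer. A minimal sketch in Jones-matrix form (the cos² transmission result is Malus’s law, a standard fact not stated on the slide):

```python
import numpy as np

# Idealized linear polarizer with transmission axis at angle 'a' (Jones
# matrix): it projects the E-field onto the pass direction, the way a wire
# grid passes the component perpendicular to the wires and cancels the
# parallel one.
def polarizer(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c * c, c * s],
                     [c * s, s * s]])

E_in = np.array([1.0, 0.0])            # x-polarized light, intensity 1
E_out = polarizer(np.pi / 4) @ E_in    # polarizer axis at 45 degrees
I_out = np.dot(E_out, E_out)           # transmitted intensity = cos^2(45 deg) = 0.5
```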

## Polarization

Birefringent materials act like dielectrics, but have different indices of refraction for different directions of the wave’s electric field. These materials are usually crystals whose non-symmetrical structure allows a different amount of interaction between the valence electrons and the field in different directions. Birefringence can also be engineered into materials:

- Long-chain polymer plastics – the molecules are oriented by stretching the plastic sheet.
- Patterned surface structures (< λ) that act like a local wire grid on an otherwise reflecting surface.

## Polarization and Birefringence

Birefringence can also result in ‘anomalous refraction’, which is sometimes used to separate light into orthogonal polarization components. Calcite is a common mineral often used to illustrate this.

## Retardation

[Figure: progressive retardation of the phase of the wave whose E-field is aligned with the ‘slow’ axis.]

## Retarders

Our main concern with birefringent materials will be their use as retarders. When a linearly polarized wave enters a material with different indices of refraction in orthogonal directions, one polarization component travels faster than the other, and the two are no longer in phase when exiting the material.

[Figure: ‘fast’ axis (low n), ‘slow’ axis (high n), and the E-field plane of oscillation.]

We can use this to manipulate the polarization of a wave: retarding one component by a half wave, for example, causes the output polarization to be rotated by 90 degrees.
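In Jones-calculus terms (a standard formalism, not specific to this course), a half-wave plate with its fast axis at 45° to the input polarization performs exactly this 90° rotation:

```python
import numpy as np

# Jones matrix of a half-wave plate with fast axis at angle 'theta'
# (a common global phase factor is dropped). It retards the slow-axis
# component by half a wave relative to the fast-axis component.
def half_wave_plate(theta):
    c2, s2 = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c2, s2],
                     [s2, -c2]])

E_in = np.array([1.0, 0.0])                   # horizontal (x) polarization
E_rot = half_wave_plate(np.pi / 4) @ E_in     # fast axis at 45 degrees
# E_rot is [0, 1]: the plane of polarization has been rotated by 90 degrees.
```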

## Retarders

Two linearly polarized, co-axial plane waves, 90° out of phase, combine to form a circularly polarized wave. Retarders are categorized by the amount of retardation, in waves (at some specified wavelength, of course), that they produce: half-wave plate, quarter-wave plate, etc. The difference in indices and the thickness determine the amount of retardation. A quarter-wave plate (QWP), for example, will convert a linearly polarized input wave to a circularly polarized output wave.
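A quick numerical check of this statement with complex Jones vectors (standard convention: a factor of i on one component represents a quarter wave of retardation):

```python
import numpy as np

# Quarter-wave plate with fast axis along x: it retards the y component by a
# quarter wave (90 degrees of phase), i.e. multiplies it by exp(i*pi/2) = i.
QWP = np.array([[1, 0],
                [0, 1j]])

# Linear polarization at 45 degrees to the fast axis...
E_in = np.array([1.0, 1.0]) / np.sqrt(2)
E_circ = QWP @ E_in
# ...comes out circular: equal amplitudes, 90 degrees out of phase.
delta = np.angle(E_circ[1]) - np.angle(E_circ[0])   # pi/2
```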

## Application of Retarders to Optical Designs

A Polarizing Beam Splitter (PBS) is a thin-film based device which reflects one polarization of light and transmits the other. The system below allows light to first transmit through, then reflect from, the same PBS.

[Figure: PBS, QWP, and mirror, with the polarization along each leg of the path labeled V, H, or C (for Vertical, Horizontal, Circular).]

## Heads-up Displays (HUD)

The basic idea of a HUD is that a virtual image of a screen (computer or camera generated) is projected into the same space (from the user’s point of view) as the scene being viewed. The virtual projected image is seen as superimposed on the directly viewed scene.

[Figure: light from the scene passes through a partially reflecting mirror, which also reflects the projected image toward the eye.]

## Heads-up Displays (HUD): How does Google Glass© work?

So, how does this become this? [Figures omitted.] Polarization-folded optics is the key.

## Heads-up Displays (HUD): How does Google Glass© work?

The basic design was patented by IBM over 20 years ago (and hence is in the public domain). More recent patents are basically design patents – what is being patented is the packaging and/or use.

## Heads-up Displays (HUD): How does Google Glass© work?

In fact, the military has been using similar (but uglier) devices for decades.

## Heads-up Displays (HUD): How does Google Glass© work?

Polarization notation: P is in the plane of the figure; S is perpendicular to the plane of the figure. The double pass through the QWP converts the P-polarized light that passes through the PBS into S-polarized light, which the PBS then reflects.

[Figure: input from the scene (P-polarized) passes the PBS; the display and projection lenses send P-polarized light up through the PBS to the QWP and curved mirror (part of the optics); the light travels P-polarized going up and S-polarized coming down.]
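The double-pass argument can be checked with Jones matrices. This is only a sketch: the mirror is modelled as the identity (ignoring the reflection’s coordinate flip) and the PBS itself is not modelled:

```python
import numpy as np

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

# QWP with fast axis at angle 't': rotate into the plate frame, retard the
# slow component by a quarter wave (factor i), rotate back.
def qwp(t):
    return rot(t) @ np.array([[1, 0], [0, 1j]]) @ rot(-t)

# Transmitted P light passes the QWP (fast axis at 45 degrees), reflects off
# the mirror (identity here), and passes the QWP again.
P_in = np.array([1.0, 0.0])              # P-polarized (in-plane)
E_back = qwp(np.pi / 4) @ qwp(np.pi / 4) @ P_in
# Two passes through a QWP act as a half-wave plate: the light returns
# S-polarized, so the PBS now reflects it instead of transmitting it.
```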

Here is a recent patent on the device. Note that IBM, not Google, owns it.

The patent contains prescription data, which can be entered into Zemax. Patents (almost) never reveal the best version, so one can get a significant improvement by letting Zemax optimize the system. (Not a bad student project.)

# SuperResolution

## What is Superresolution?

1) Recovering information beyond the diffraction limit of the optics.

Analytic Continuation: Surprisingly, superresolution is theoretically possible for a normal optical system. The theoretical proof involves the concept of analytic continuation, described below.

The Taylor Series and Analytic Continuation: The Taylor series expansion of a function about zero (also called the Maclaurin expansion) is defined as:

f(x) = Σ_{n=0}^{∞} f^(n)(0) xⁿ / n!

Any function which is equal to its Taylor expansion inside a convergence disk, |x| < R, is called an analytic function on that disk. If a function is equal to its Taylor expansion for all x, it is called an entire analytic function. sin(x), cos(x), and e^{ix} are all entire analytic functions.
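A quick numerical illustration of an entire function equaling its Maclaurin series (sin, with the series truncated at a finite number of terms):

```python
import math

# Maclaurin (Taylor about 0) partial sum for sin(x):
# sin(x) = x - x^3/3! + x^5/5! - ...
def sin_maclaurin(x, n_terms=10):
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n_terms))

# sin is entire: the series converges for every x, so the partial sums
# match math.sin to high accuracy once enough terms are kept.
```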

## SuperResolution: Analytic Continuation

Why does the Taylor expansion work? The Taylor expansion of a polynomial is just the polynomial itself. For example, for f(x) = a + bx + cx², we have f(0) = a, f′(0) = b, and f″(0) = 2c, so the series reproduces the polynomial exactly. Saying that a function is analytic is therefore equivalent to saying it can be represented by a (possibly infinite) polynomial. Taylor’s theorem tells us that knowledge of such a function’s value and derivatives at a single point is equivalent to knowledge of the entire function. Note that the entire function can be determined by a sufficiently accurate knowledge of the function near the origin.

## Analytic Continuation and Superresolution

From the point of view of an optical system, an object is just a pattern of light. This pattern can be decomposed (via the Fourier Transform) into an expression involving only complex exponential functions. The imaging system relays a band-limited copy of the object’s light pattern to the detector, where it is captured as an image. The image does not contain all the spatial frequencies in the object, since the MTF reduces and/or cuts off the higher frequencies. We will assume, however, that the lower spatial frequencies are accurately recorded.

As Goodman shows (“Introduction to Fourier Optics”, p. 134), the Fourier transform of such an image (or of the object’s light pattern) is an entire analytic function. The reasoning goes like this: the Fourier Transform is a sum of sine functions; each sine function is an entire analytic function – i.e., it can be represented by a Taylor polynomial – hence the entire Transform can also be so represented, since a sum of polynomials is itself a polynomial. Hence, the Fourier Transform of an image is an entire analytic function.
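The band-limiting action of the imaging system can be sketched numerically. This is a 1-D stand-in: an ideal low-pass "MTF" applied in the Fourier domain, with an arbitrary cutoff value:

```python
import numpy as np

# Simulate the imaging system as an ideal low-pass filter on the object's
# spectrum: spatial frequencies above 'cutoff' are lost, lower ones kept.
def band_limit(obj, cutoff):
    F = np.fft.fft(obj)
    freqs = np.fft.fftfreq(obj.size)        # frequency in cycles per sample
    F[np.abs(freqs) > cutoff] = 0           # MTF cutoff (idealized)
    return np.real(np.fft.ifft(F)), F

rng = np.random.default_rng(0)
obj = rng.standard_normal(256)              # stand-in object light pattern
img, F_img = band_limit(obj, cutoff=0.1)

# The image spectrum matches the object spectrum below the cutoff and is
# zero above it; the fine detail of 'obj' is absent from 'img'.
```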

## Analytic Continuation and Superresolution

What this implies is that a sufficiently accurate knowledge of a portion of the image’s Fourier Transform (e.g., the lower spatial frequencies) can be used to restore the unknown higher spatial frequencies by extrapolation with a 2D Taylor polynomial (or a similar method) derived from the known data. This process is known as analytic continuation.

To recap:

1. The Fourier Transform of the Object’s light pattern is an entire analytic function.
2. The Fourier Transform of the Image is a copy of the Fourier Transform of the Object, but only for the lower spatial frequencies.
3. A sufficiently accurate knowledge of the Object’s transform at u = 0, v = 0 is enough to re-create the entire function.

There are hundreds of theoretical papers on this subject – and even some actual experimentation – covering many different methods of achieving resolution beyond the passband of the optics. Alas, the results are not currently spectacular: given the signal-to-noise ratios achievable with optical systems and detectors, the degree of successful extrapolation amounts to only a few percent increase in resolution.
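The SNR sensitivity can be seen in a toy 1-D version: fit a polynomial to a known analytic function on a small interval (standing in for the known low frequencies) and extrapolate outward. The function, interval, degree, and noise level are all illustrative choices:

```python
import numpy as np

# Fit a polynomial to cos(u) on a small 'known' interval and extrapolate.
u_known = np.linspace(-0.5, 0.5, 50)
coeffs = np.polyfit(u_known, np.cos(u_known), deg=8)
err_clean = abs(np.polyval(coeffs, 2.0) - np.cos(2.0))
# With noise-free data the extrapolation at u = 2 is quite accurate...

# ...but a tiny amount of noise (1 part in 10^4) on the known data wrecks it.
noisy = np.cos(u_known) + 1e-4 * np.random.default_rng(1).standard_normal(50)
coeffs_n = np.polyfit(u_known, noisy, deg=8)
err_noisy = abs(np.polyval(coeffs_n, 2.0) - np.cos(2.0))
# err_noisy is far larger than err_clean: this sensitivity to SNR is why
# practical extrapolation gains are only a few percent in resolution.
```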

## Analytic Continuation in the Digital World

Discrete analog of analytic continuation: In a system where all the data is discretely sampled, such as an image from a pixelated detector or its Discrete Fourier Transform, the derivatives are estimated by finite differences between the data samples.

Finite differences: Let F_n be a sequence of samples of the function F(x), with F_n = F(nh) for sample spacing h. Then, using central differences to approximate derivatives, we have:

F′(0) ≈ (F_1 − F_−1) / 2h
F″(0) ≈ (F_1 − 2F_0 + F_−1) / h²

and so on for higher orders.
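The central-difference estimates, applied to samples of sin(x) (an arbitrary test function; the sample spacing is illustrative):

```python
import numpy as np

# Samples F[n] = F(n*h) of a test function around x = 0.
h = 0.1
x = np.arange(-2, 3) * h          # samples at -2h, -h, 0, h, 2h
F = np.sin(x)                     # example function

d1 = (F[3] - F[1]) / (2 * h)              # F'(0)  ~ (F_1 - F_-1) / 2h
d2 = (F[3] - 2 * F[2] + F[1]) / h ** 2    # F''(0) ~ (F_1 - 2 F_0 + F_-1) / h^2
# d1 is close to cos(0) = 1, and d2 is close to -sin(0) = 0.
```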

## Analytic Continuation in the Digital World

As we can see, knowledge of the nth derivative at x = 0 requires us to know the function’s value at n+1 points, only one of which is at zero. A more compact method for determining the nth derivative would be to fit a polynomial of at least order n to the data. (Since you are trying to estimate the nth derivative, the polynomial must have a non-zero derivative of that order.) Such a polynomial still requires knowledge of at least n+1 points of the sequence.

Hence, in discretely sampled data, knowledge of the higher-order finite differences about a single point in a sequence is equivalent to knowledge of other points in the sequence distant from the evaluation point. This is a further limit on the possible range of analytic continuation, independent of the SNR.
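A sketch of the polynomial-fit approach using numpy's least-squares `polyfit` (the test function, spacing, and derivative order are arbitrary illustrative choices):

```python
import math
import numpy as np

# Estimate the nth derivative at x = 0 by fitting an order-n polynomial
# to at least n+1 samples around the origin.
def nth_derivative(F_samples, x_samples, n):
    coeffs = np.polyfit(x_samples, F_samples, deg=n)   # needs >= n+1 points
    return coeffs[0] * math.factorial(n)   # leading coefficient -> nth derivative

h = 0.1
x = np.arange(-2, 3) * h                  # 5 points: enough for n up to 4
d3 = nth_derivative(np.exp(x), x, 3)      # third derivative of e^x at 0 is 1
```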

## SuperResolution: Near-Field Optical Scanning

Another form of superresolution that doesn’t require a priori knowledge about the object is near-field scanning. This involves scanning a sub-wavelength aperture in close proximity to the object being imaged. Analysis and experiment show that some light will pass through an aperture that is much smaller than the wavelength.

[Figure: incident light passes through a sub-λ aperture onto the object; either the object or the aperture is scanned, and a detector records the transmitted light.]

Near-field scanning has a very limited depth of field; however, research has shown that a correctly designed array of sub-wavelength apertures can have a DOF several times larger than a single aperture, while maintaining sub-wavelength resolution.

Instead of using a small aperture, we can use a metal tip to provide a local excitation. If a sharp metal tip is placed in the focus of a laser beam, an effect called local field enhancement causes the electric field to become roughly 1000 times stronger. This enhancement is localized to the tip, which has a typical diameter of 10 nm. As the tip is scanned over the surface, an image can be formed with a resolution as fine as the tip. (From: http://www.optics.rochester.edu)

## Lucky Imaging

A third method of object-independent super-resolved imaging is called “Lucky Imaging”. This refers to the process whereby an optical system imaging through a turbulent medium occasionally records spatial frequencies of objects that would normally be beyond the system’s cut-off frequency (diffraction limit).

[Figure: object, random phase distortion, a normal ray path, and a “lucky” ray path.]

What the figure shows is the random distortion momentarily acting as an objective lens, with the camera then serving as the “eyepiece” of a much larger (and higher resolution) optical system. In this instance, the “lucky” image will appear in front of the normal focal plane. Other possibilities are:

- The phase distortion creates a magnified image of the object.
- The phase distortion creates a real image of the object, closer to the camera. (In this case, the image would appear behind the normal image plane.)

## Lucky Imaging (Astronomical)

The term ‘Lucky Imaging’ is often used in astronomical imaging for waiting for the moments when the ‘seeing’ through a turbulent atmosphere approaches or equals the diffraction limit of the telescope. Amateur astronomers have used the technique of taking thousands of short-exposure images and selecting the best-resolved ones to combine. The results have equaled the performance of the major observatories equipped with ‘adaptive optics’ (optical systems that can be reconfigured on the fly to correct changing aberrations from the atmosphere).

[Figure: images from a 16-inch amateur telescope, without image selection and with ‘lucky’ image selection.]
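A minimal sketch of the selection step. The sharpness metric and keep fraction are illustrative choices, and real pipelines also register (shift-align) the frames before combining, which is omitted here:

```python
import numpy as np

# Score each short exposure by a sharpness metric: mean squared gradient.
def sharpness(frame):
    gy, gx = np.gradient(frame.astype(float))
    return np.mean(gx ** 2 + gy ** 2)

# Keep only the sharpest fraction of frames and average them.
def select_lucky(frames, keep_fraction=0.05):
    scores = [sharpness(f) for f in frames]
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(scores)[-n_keep:]          # indices of sharpest frames
    return np.mean([frames[i] for i in best], axis=0)   # registration omitted
```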

## Lucky Imaging

Lucky Imaging is known, however, to occasionally produce images that exceed the diffraction limit of the telescope. The only way this can happen is if the turbulence has formed an additional optical element, changing the effective instrument to one with a higher NA and hence higher resolution. A reasonable argument can be made that such ‘superresolved’ lucky images are, in fact, more likely to show up either in front of or behind the main focus than actually in focus. This is because, for the turbulence to produce an in-focus super-resolved image, there have to be (at least) two effective optical elements formed by chance.

[Figure: random ‘turbulence lenses’ forming in the turbulence region.]

Both chance systems have increased apertures, and hence increased resolution, but it takes an extra element to return the focus to the original plane.

## Enhanced Lucky Imaging

Besides theory, actual experiments have shown that superresolved ‘lucky images’ are in fact more likely than not to be either in front of or behind the nominal focal plane. (1)

(1) “Observation of superresolution in nonisoplanatic imaging through turbulence”, M. I. Charnotskii, V. A. Myakinin, and V. U. Zavorotny, J. Opt. Soc. Am. A, Vol. 7, No. 8 (August 1990)

Hence, it seems evident that a Lucky Imaging system designed to use the Extended Depth of Field (EDOF) methods of pupil modification developed in Lecture 17 would produce significantly more lucky images than a standard optical system.

[Figure: a cubic phase plate (EDOF) ahead of the random phase region; a computer performs EDOF filtering and lucky-image selection to produce the superresolved image output.]

## SuperResolution Based on A Priori Information about the Object

For example, if it is known that the object is a point source, how accurately can the position of the PSF be determined on the detector plane? If the PSF falls on only one pixel, the answer is “one-pixel accuracy”. If the PSF is oversampled, however – say it covers an area of 10×10 pixels – then, since the form of the PSF is known (the Airy pattern), its central location can be determined to a small fraction of a pixel, given sufficient SNR.

This method can be extended to objects other than point sources, as long as they are known. For example, a sharp edge can be located to much less than a pixel’s width, as long as the image model is correct and the edge image extends over multiple pixels. There is a good deal of research on this subject. At CU there are experimental setups that have demonstrated the ability to locate single fluorescent molecules with sub-pixel precision in x, y, and z. (The z-location was done by engineering a pupil distortion which caused the PSF to become asymmetric and rotate with defocus.)
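A toy version of the point-source case: a Gaussian spot stands in for the Airy pattern, noise is omitted, and the intensity-weighted centroid is used as the simplest of the model-based estimators (the spot size and grid are illustrative):

```python
import numpy as np

# Sample an assumed Gaussian spot (stand-in for the Airy pattern) on a
# pixel grid, with its true center 'between' pixels.
def make_spot(cx, cy, size=10, sigma=1.5):
    y, x = np.mgrid[0:size, 0:size]
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))

# Recover the center from the intensity-weighted centroid of the samples.
def centroid(img):
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return (x * img).sum() / total, (y * img).sum() / total

spot = make_spot(4.3, 5.7)          # true center at a sub-pixel position
cx, cy = centroid(spot)
# cx, cy recover the true center to a small fraction of a pixel.
```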

## SuperResolved Fluorescence Excitation

In fluorescence imaging, the objects are single dye molecules which emit particular wavelengths of light when excited by shorter-wavelength light. A single molecule may emit up to 100K photons/sec when exposed to sufficient radiation in the excitation band; this is easily detectable by modern electro-optical systems.

There are a number of methods of exciting fluorophores in regions smaller than what can be addressed by a normal PSF. They are identified by an acronym salad of designations such as STED, PALM, STORM, etc. We will describe STED (Stimulated Emission Depletion) microscopy; those interested can look up the others.

The method uses fluorophores which can be excited at one wavelength and inhibited at another; both the exciting and inhibiting wavelengths must be distinct from the fluorescent wavelength, so that they may be separated before detection. Both the exciting and inhibiting wavelengths are projected onto the sample. The resolutions and intensities are arranged so that the exciting wavelength exceeds the effect of the inhibiting wavelength only in a very small central region of the two PSFs. This is the region where fluorescence will occur, and it can be made significantly smaller than any PSF achievable through the optics.

[Figure: the inhibiting-wavelength PSF surrounds the exciting-wavelength PSF, leaving a small central region of fluorescence.]

The image is built up by scanning this PSF combination over the object and recording the fluorescence return at each position.
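A crude 1-D model of the idea. The Gaussian excitation spot, the donut-shaped depletion profile, its strength, and the hard on/off threshold are all assumptions for illustration, not the actual STED photophysics:

```python
import numpy as np

# Position across the focal spot, in units of the excitation PSF width.
x = np.linspace(-2.0, 2.0, 2001)
excite = np.exp(-x ** 2)                   # assumed excitation PSF
inhibit = 5.0 * x ** 2 * np.exp(-x ** 2)   # assumed donut: zero on axis

# Crude threshold model: fluorescence survives only where excitation
# exceeds the inhibiting intensity.
fluor = np.where(excite > inhibit, excite, 0.0)

# excite > inhibit exactly where 5 x^2 < 1, i.e. |x| < 1/sqrt(5) ~ 0.45,
# so the fluorescing region is much narrower than the excitation PSF.
region = x[fluor > 0]
width = region.max() - region.min()
```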
