
1 ECEN 4616/5616 Optoelectronic Design Class website with past lectures, various files, and assignments: (The first assignment will be posted here by 1/27) To view video recordings of past lectures, go to: and select “course login” from the upper right corner of the page.

2 Image Resolution and the Wave Nature of Light (Re-considered)
Last lecture, we derived the "cutoff" spatial frequency of a lens as f_max = sin θ / λ = NA/λ, where θ is defined as the half angle of the ray cone from the lens to the image. The corresponding minimum resolvable spacing is x = λ/sin θ = λ/NA.
[Figure: ray cone of half angle θ converging from the lens to the image; x marks the minimum resolvable spacing.]

3 We related this to Ernst Abbe's observation (in 1873) that you couldn't image a diffraction grating with a microscope unless the objective could capture the first-order diffracted light from the grating. Abbe's experiments, however, did not conclude that the resolution limit was d = λ/NA, but rather d = λ/(2·NA), and hence the cutoff spatial frequency (S.F.) was actually observed to be 2·NA/λ; i.e., twice the resolution!
[Figure: grating of period d diffracting light of wavelength λ into orders at angle θ.]

4 2·NA/λ is also the cutoff S.F. reported by Zemax. Here is an MTF analysis window from an ideal system with N.A. = 0.25 and λ = 1 µm: my calculation of SF_max is 250 mm⁻¹, while Abbe's (and Zemax's) is 500 mm⁻¹. So where did we go wrong?
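As a sanity check, the two cutoff values quoted on this slide can be reproduced directly (a minimal sketch; the N.A. and wavelength are the ones stated above):

```python
# Cutoff spatial frequency: coherent on-axis illumination vs. Abbe's
# observed (obliquely illuminated) cutoff, for the slide's example system.
NA = 0.25             # numerical aperture of the ideal system
wavelength_mm = 1e-3  # 1 micron, expressed in mm

f_on_axis = NA / wavelength_mm      # our plane-wave calculation
f_abbe = 2 * NA / wavelength_mm     # Abbe's (and Zemax's) cutoff

print(f_on_axis)  # ~250 cycles/mm
print(f_abbe)     # ~500 cycles/mm
```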

5 Our analysis implicitly assumed on-axis, plane-wave illumination. Our calculated S.F. cutoff is, indeed, the correct cutoff frequency for this situation, but this is not the way microscopic objects are commonly illuminated.
[Figure: on-axis illumination of a grating; the lens aperture must capture the 0th and ±1 diffraction orders, spanning the angle θ.]

6 If the illumination is at an oblique angle (or, more likely, a cone of oblique illumination is used, as is common in microscopes), then the lens can capture the 0th and +1 diffraction orders in ½ the aperture required for capturing the +1 and −1 orders. Or, alternatively, the lens can capture the 0th and 1st orders from a grating with half the slit spacing, d/2. With the appropriate off-axis illumination, the lens can image a grating with only ½ the period previously calculated, so the minimum resolvable object size and the cutoff S.F. are d_min = λ/(2·NA) and SF_max = 2·NA/λ: Abbe's result.
[Figure: oblique illumination of a grating; the 0th and +1 orders fit within the lens aperture.]
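The aperture argument can be made quantitative with the grating equation, sin θ_m = sin θ_i + m·λ/d. A minimal sketch (the N.A. and wavelength values here are illustrative assumptions): on-axis illumination must fit the ±1 orders inside the aperture, while tilting the illumination to sin θ_i = −NA lets the +1 order of a grating with half the period just squeeze in:

```python
NA = 0.25     # lens numerical aperture (illustrative)
lam = 0.5e-3  # wavelength, 0.5 micron in mm (illustrative)

# On-axis illumination: the +1 and -1 orders both need lam/d <= NA
d_min_on_axis = lam / NA

# Oblique illumination at sin(theta_i) = -NA: the +1 order satisfies
# -NA + lam/d <= NA, i.e. d >= lam / (2*NA)
d_min_oblique = lam / (2 * NA)

print(d_min_on_axis, d_min_oblique)  # ~0.002 mm vs ~0.001 mm: half the period
```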

7 In practice, microscope illumination systems are auxiliary optical systems which provide a cone of illumination sufficient to fill the lens aperture. This ensures that, for arbitrary objects, all possible spatial frequencies in the object are captured by the microscope lens. This is the kind of illumination Abbe used in his experiments, and it ensures that all possible spatial frequencies can be captured by the 'single-sideband' method enabled by off-axis illumination; hence his experimental result that SF_max = 2·NA/λ.
[Figure: light source producing an illumination cone that fills the lens aperture from the object.]

8 Structured Illumination
Microchip circuits are made by depositing patterned materials on a semiconductor substrate. The patterns are generated by imaging pattern masks onto chips coated with photo-sensitive polymer, at the limits of achievable resolution. Certain spatial frequencies in microelectronic circuits are more important than others: for example, the S.F.s that define horizontal and vertical edges. The illumination system, therefore, is usually structured to emphasize these spatial frequencies.
[Figure: structured illumination of a mask, imaged by a high-N.A. system (~1 meter long) onto the microchip.]

9 Magnification (in the Paraxial Approximation)
Suppose we have identified object-image conjugates: an object of height y at distance l in a medium of index n, imaged to height y' at distance l' in a medium of index n'. The magnification is then defined as M = y'/y. If the angles are small, we can use the paraxial Snell's Law, n·θ = n'·θ', and the definitions θ = y/l, θ' = y'/l', to get M = y'/y = (n·l')/(n'·l). From the geometry, if n = n', then θ = θ' also, and we have M = l'/l.
[Figure: object and image conjugates; a ray through the lens vertex makes angles θ and θ' with the axis.]
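For a concrete instance of M = l'/l (same index on both sides), here is a thin-lens sketch using the Gaussian equation 1/l' − 1/l = 1/f. The sign convention (distances measured from the lens, object distances negative) and the specific numbers are illustrative assumptions, not from the slide:

```python
f = 100.0   # focal length, mm (illustrative)
l = -200.0  # object 200 mm to the left of the lens

# Gaussian thin-lens equation: 1/l' = 1/f + 1/l
l_prime = 1.0 / (1.0 / f + 1.0 / l)
M = l_prime / l  # magnification when n = n'

print(l_prime, M)  # ~200 mm, ~-1: an inverted, unit-magnification image
```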

10 Now consider a ray traced from the axis at the object to the image via the very edge of the lens. We will call this the "Marginal Ray": it is the largest-angle ray (from the axis) which can be traced through the system. If this ray meets the lens at height h, define the ray angles u = h/l and u' = h/l'. Taking the previous expression for the magnification, M = (n·l')/(n'·l), and substituting for l, l' using the definitions of u, u', we get M = (n·u)/(n'·u'). Combining with the original definition of M = y'/y gives n·u·y = n'·u'·y'. This quantity (which we will call H) is conserved before and after every surface in a system and is called the Optical Invariant.
[Figure: marginal ray from the axial object point, through the lens edge at height h, to the axial image point.]
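The invariance of H = n·u·y can be checked by tracing the marginal ray through a thin lens in air (paraxial refraction u' = u − h/f for n = n' = 1). All numbers here are illustrative assumptions:

```python
n = n_prime = 1.0
f, l, h = 100.0, -200.0, 10.0  # focal length, object distance, lens semi-aperture (mm)
y = 5.0                        # object height, mm

u = h / (-l)        # marginal ray angle before the lens
u_prime = u - h / f # paraxial refraction at the thin lens (n = n' = 1)

M = (n * u) / (n_prime * u_prime)  # magnification from the ray angles
y_prime = M * y

H_before = n * u * y
H_after = n_prime * u_prime * y_prime
print(H_before, H_after)  # both ~0.25: the optical invariant is conserved
```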

11 Spatial Frequencies and Angular Resolution
We've shown that the image "resolution limit" is d_min = λ/(2·NA). The corresponding resolution on the object can be calculated via the magnification: d_object = d_min/M. For a very distant object, such as with a telescope, it makes more sense to use an "angular resolution limit". Since NA ≈ D/(2·f) for a lens of diameter D and focal length f, the construction gives θ_min = d_min/f = λ/D.
[Figure: lens of diameter D and focal length f; the spot size d_min at the focus corresponds to the angle θ_min on the sky.]
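Plugging in numbers (illustrative assumptions: a 100 mm aperture at λ = 0.5 µm) shows why λ/D is the natural telescope figure of merit:

```python
import math

D = 100.0    # aperture diameter, mm (illustrative)
lam = 0.5e-3 # wavelength, 0.5 micron in mm (illustrative)

theta_min = lam / D  # angular resolution limit, radians
arcsec = theta_min * (180.0 / math.pi) * 3600.0

print(theta_min, arcsec)  # ~5e-06 rad, i.e. about 1 arcsecond
```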

12 Optical Spatial Frequency Cutoff vs. "Resolution"
It's somewhat misleading to equate the optical cutoff frequency f_max with the "resolution" of an optical system, since f_max is defined as the first S.F. whose amplitude goes to zero. Hence, f_max is not visible in the image, by definition. A more practical definition, known as the "Rayleigh Criterion" and based on both observation and theory, is that two point sources (e.g., stars) are "resolvable" if the angular separation between them is at least θ_R = 1.22·λ/D. The theory behind the Rayleigh resolution limit is that the first null of one PSF falls on the peak of the adjacent PSF. The constant 1.22 is derived from the Airy pattern produced by a circular aperture.
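The 1.22 itself comes from the first zero of the Bessel function J₁, which sets the Airy pattern's first null: the zero is at x ≈ 3.8317, and 3.8317/π ≈ 1.22. A self-contained numerical check, evaluating J₁ via its integral representation and bisecting for the zero:

```python
import math

def J1(x, n=2000):
    # J1(x) = (1/pi) * integral_0^pi cos(t - x*sin(t)) dt, by the trapezoid rule
    h = math.pi / n
    s = 0.5 * (math.cos(0.0) + math.cos(math.pi - x * math.sin(math.pi)))
    for k in range(1, n):
        t = k * h
        s += math.cos(t - x * math.sin(t))
    return s * h / math.pi

# The first positive zero of J1 is known to lie between 3 and 4.5; bisect for it.
a, b = 3.0, 4.5
for _ in range(60):
    m = 0.5 * (a + b)
    if J1(a) * J1(m) <= 0:
        b = m
    else:
        a = m

print(a / math.pi)  # ~1.2197, the "1.22" in the Rayleigh criterion
```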

13 The Optical Invariant and Information Throughput
We showed that a product of local quantities, H = n·u·y (the local index, marginal ray angle, and image height), is invariant throughout an optical system. Hence, once this product is determined anywhere in a system, it is valid throughout the system, including the input and output spaces. The optical invariant (plus the wavelength) allows one to calculate the maximum number of resolvable spots (linear) that the system can pass, and hence the information capacity of the system. Using the diffraction limit for the resolvable spot size, d_min ≈ λ/(2·n'·u'), and making the usual flurry of linear approximations, the number of resolvable spots on a line the height of the image is y'/d_min ≈ 2·H/λ, which is proportional to H, and the total # of resolvable spots (call them pixels) is roughly (2·H/λ)².
[Figure: object and image conjugates with heights y, y' and marginal ray angles u, u'.]
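A quick sketch of the pixel-count estimate (all numbers are illustrative assumptions): with H = n·u·y, the linear spot count is 2·H/λ and the total count is its square:

```python
n, u, y = 1.0, 0.25, 1.0  # index, marginal ray angle (~NA), image half-height (mm)
lam = 0.5e-3              # wavelength, mm (illustrative)

H = n * u * y             # optical invariant
N_linear = 2 * H / lam    # resolvable spots along the image height
N_total = N_linear ** 2   # total "pixels" the system can pass

print(N_linear, N_total)  # ~1000 spots/line, ~1e6 pixels
```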

14 Stops and Pupils: Marginal and Chief Rays
The system APERTURE STOP is the aperture that limits the maximum angle (or height, for an object at ∞) of the marginal ray from a field point. For a single lens, the stop is the edge of the lens. There is always some surface in an optical system which limits the maximum angle (or height) of the marginal ray; it is best if you decide what that stop is rather than letting it be determined by accident!
[Figure: marginal ray at angle u from an axial point, limited by the aperture stop at the lens edge.]

15 The Chief Ray is the ray from the edge of the field (or angular field) which passes through the center of the Aperture Stop. Zemax requires that one surface in the Lens Data Editor be designated the Stop Surface. If you don't specify which one, Zemax will pick one, not always very rationally.

16 Image positions are determined by the places where the Marginal Ray crosses the axis; image heights, by the height of the Chief Ray at those positions. There is one more important stop in an optical system, the Field Stop. This is the aperture which limits the field of view (FOV). The system will operate without a specific field stop, but it is usually better to cut off the FOV sharply rather than just letting it fade away.
[Figure: object and image of heights y, y' at distances l, l'; the chief ray makes angles θ, θ' with the axis.]

17 Entrance and Exit Pupils
The Entrance Pupil is the image of the stop as seen from object space. It is where the rays from the object appear to be aimed. The Exit Pupil is the image of the stop as seen from image space. It is where the rays to the image appear to be coming from.
[Figure: stop inside the system, with its image in object space (entrance pupil) and in image space (exit pupil).]

18 Pupils and Ray Tracing:
1. There is no point in tracing a ray that misses the stop, as it will not traverse the system and contribute to the image.
2. If the stop is behind the lens (or behind some lenses, in a compound system), then rays can be aimed into the entrance pupil, which is the image of the stop as seen from object space. Because of the properties of imaging, a ray aimed into the entrance pupil is guaranteed to make it through the stop.
3. Zemax determines pupil positions via Gaussian ray traces. For some systems this is in error, causing the analyses to be wrong. As a check, it is good to turn "ray aiming" on in the General dialog (Gen tab) to see if anything changes. (Ray aiming causes Zemax to trace multiple rays in a 'point and shoot' type iteration to find the ones that actually fill the pupil.)

19 Radiometry
Radiometry concerns the measurement of EM radiation (and light in particular). A closely related field, Photometry, is basically Radiometry scaled by the sensitivity of a particular detector. For example: the normal human eye is most sensitive at a wavelength of 555 nm (green). Its sensitivity at 680 nm (red) is only about 0.017 (taking the sensitivity at 555 nm as 1.0), i.e., about 1/60 as sensitive. This is why a green laser pointer appears so much brighter than a red one, even at the same power level. One can find the complete sensitivity curves, including those for each type of rod or cone in the eye, by looking up "Photopic response".
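The green-vs-red pointer comparison can be sketched with the standard photometric scaling, luminous flux = 683 lm/W × V(λ) × radiant flux. The V values below are the ones quoted on this slide (V(680 nm) ≈ 1/60); the 5 mW power is an illustrative assumption:

```python
V = {555: 1.0, 680: 1.0 / 60.0}  # photopic response, normalized to 555 nm
K_m = 683.0                      # luminous efficacy at 555 nm, lm/W

power_W = 0.005  # a 5 mW laser pointer (illustrative)
lumens_green = K_m * V[555] * power_W
lumens_red = K_m * V[680] * power_W

print(lumens_green / lumens_red)  # ~60: same power, ~60x the apparent brightness
```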

20 The subject of radiometry contains a fair number of defined technical terms, which you can find defined in the text:
 Flux (Φ): Just another term for power, J/s, but referring to propagating light.
 Intensity (I): Flux per unit solid angle; W/sr (sr = steradian)
 Irradiance (E): Flux per unit area, incident; W/m²
 Exitance (M): Flux per unit area, exiting; W/m²
 Radiance (L): Flux per unit projected area per unit solid angle; W/(m²·sr)
Where, in the above definitions, "Flux" refers to power carried by EM waves into, out of, or through a surface. If the above quantities are scaled by the response curve of a detector (like the eye), then they are called "Luminous Flux", "Luminous Intensity", …, "Luminance" (for Radiance), etc. These values depend on wavelength, whereas the strict power values don't.
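A small worked instance tying the definitions together, for an isotropic point source (the power and distance are illustrative assumptions): intensity is I = Φ/4π, and the irradiance on a surface at distance r, at normal incidence, is E = I/r²:

```python
import math

flux = 100.0  # W, total radiated power of an isotropic source (illustrative)

I = flux / (4 * math.pi)  # W/sr: flux spread over the full 4*pi steradians

r = 2.0       # m, distance to the illuminated surface (illustrative)
E = I / r**2  # W/m^2: irradiance at normal incidence (inverse-square law)

print(I, E)  # ~7.96 W/sr, ~1.99 W/m^2
```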

21 In Radiometry, we will be considering solid angles, defined as the area of a unit sphere intercepted by a (possibly irregular) cone whose vertex is at the center of the sphere. A useful approximate formula for the solid angle of a cone with small half angle θ is Ω ≈ π·θ² (the exact value for a right circular cone being Ω = 2π(1 − cos θ)).
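The quality of the small-angle approximation is easy to check by comparing it with the exact cone formula Ω = 2π(1 − cos θ) at a few half angles:

```python
import math

def omega_exact(theta):
    # Exact solid angle of a right circular cone of half angle theta (radians)
    return 2 * math.pi * (1 - math.cos(theta))

def omega_approx(theta):
    # Small-angle approximation: Omega ~ pi * theta^2
    return math.pi * theta**2

for theta in (0.01, 0.1, 0.5):
    ex, ap = omega_exact(theta), omega_approx(theta)
    print(theta, ex, ap, (ap - ex) / ex)
# at 0.1 rad the approximation is good to about 0.1%
```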

22 Lambertian Sources: (Why is the term "projected" in the definition of Radiance?)
Lambert's Law describes most diffuse, area sources of radiation. It says that the power emitted per unit (actual) surface area per steradian varies as the cosine of the angle between the radiation direction and the normal to the surface; per unit projected area, i.e. the radiance proper, the emission is constant with angle, which is why "projected" appears in the definition.
[Figure: emission from a flat surface falling off as cos θ from the surface normal.]

23 Does this make such a source become dim as the angle of incidence increases? Consider a telescope looking at a fraction of the surface of such a Lambertian source, and consider what happens when the angle of incidence changes. The telescope is receiving radiation over a solid angle Ω from each point on the source. For convenience, take the telescope field of view (FOV) to be a square, with extent D at the source. (This could, perhaps, be the FOV of a single pixel in a detector.) Let us assume the source has a radiance of R W/(m²·sr). In the case labeled 'A', where the emitting surface is normal to the direction to the telescope, the total emitting area is D² and the total received power is P = R·D²·Ω.
[Figure: telescope entrance pupil viewing a square FOV of extent D on the source; in case 'B' the surface is tilted by angle θ.]
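The slide's punchline can be checked numerically: at tilt θ the emitting area inside the fixed FOV grows as 1/cos θ, while Lambert's cosine factor reduces the emission per unit actual area by cos θ, so the received power R·D²·Ω is independent of θ. All quantities below are illustrative assumptions:

```python
import math

R = 10.0      # radiance, W/(m^2 sr) (illustrative)
D = 0.01      # edge of the square FOV footprint at the source, m (illustrative)
Omega = 1e-4  # solid angle subtended by the telescope pupil, sr (illustrative)

def received_power(theta):
    area = D**2 / math.cos(theta)              # surface area inside the FOV grows
    return R * area * math.cos(theta) * Omega  # ...but emission drops: factors cancel

print(received_power(0.0), received_power(0.3), received_power(0.6))
# the same power at every tilt: a Lambertian surface looks equally bright
```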


