What is the function of Image Processing?
In high-resolution imaging, in addition to the usual preprocessing functions (offset, dark, and flat corrections), the usefulness of image processing can be divided into two main functions: increasing the contrast of planetary details and reducing the noise.
Increasing the contrast of planetary details
Increasing the contrast of small details is the aim of many processing algorithms, which all act in the same way: they amplify the high frequencies in the image. This is why they are called high-pass filters, and probably the most famous of them is unsharp masking. This technique is well known but hard to use in astrophotography. In digital image processing, the general principle of unsharp masking is (see "What is an MTF curve?"):
a fuzzy image (blue curve) is made from the initial image (red curve) by applying a low-pass filter (Gaussian) whose strength is adjustable; the high frequencies are suppressed,
this fuzzy image is subtracted from the initial image; the result (green curve) contains only the small details (high frequencies), but its appearance is very strange and unaesthetic (unfortunately, this image also contains noise),
this high-frequency image is then amplified and added back to the initial image, boosting the contrast of the small details.
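A minimal sketch of this principle in Python (not from the original slides; assumes NumPy and SciPy, and the `sigma` and `amount` values are illustrative only):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=2.0, amount=1.5):
    """Unsharp masking: blur, subtract to isolate high frequencies,
    then add the amplified detail back to the original."""
    img = image.astype(float)
    blurred = gaussian_filter(img, sigma=sigma)  # low-pass: the "fuzzy image"
    detail = img - blurred                       # high frequencies (and noise)
    return img + amount * detail                 # sharpened result
```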
MTF Curve (figure)
What is Sampling? Sampling is choosing which points will represent a given image. Given an analog image, sampling is a mapping of the image from a continuum of points in space (and possibly time, if it is a moving image) to a discrete set. Given a digital image, sampling is a mapping from one discrete set of points to another (smaller) set.
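As a simple illustration (a hypothetical helper, not from the slides), subsampling a digital image maps its discrete grid to a smaller one:

```python
import numpy as np

def subsample(image, factor=4):
    """Keep every `factor`-th pixel in each direction, mapping one
    discrete set of points to a smaller discrete set."""
    return image[::factor, ::factor]

# Example: a 512x512 image becomes 128x128.
image = np.zeros((512, 512))
small = subsample(image, factor=4)  # shape (128, 128)
```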
Original Picture (figure)
Manroc Sampled (figure)
Linear Filtering: Low Pass Filters
Low pass filtering, otherwise known as "smoothing", is employed to remove high spatial frequency noise from a digital image. Noise is often introduced during the analog-to-digital conversion process, as a side effect of the physical conversion of patterns of light energy into electrical patterns.
There are several common approaches to removing this noise:
If several copies of an image have been obtained from the source (some static image), then it may be possible to sum the values for each pixel from each image and compute an average. This is not possible, however, if the image is from a moving source or there are other time or size restrictions.
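A minimal sketch of this frame-averaging approach (not from the slides; assumes NumPy and a list of equally sized frames of a static scene):

```python
import numpy as np

def average_frames(frames):
    """Average several copies of a static image pixel by pixel;
    uncorrelated noise shrinks as the number of frames grows."""
    stack = np.stack([f.astype(float) for f in frames])
    return stack.mean(axis=0)
```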
Intensity Histogram / Adjustment (figure)
Bone Marrow Image (figure)
If such averaging is not possible, or if it is insufficient, some form of low pass spatial filtering may be required. There are two main types:
reconstruction filtering, where an image is restored based on some knowledge of the type of degradation it has undergone; filters that do this are often called "optimal filters",
enhancement filtering, which attempts to improve the (subjectively measured) quality of an image for human or machine interpretability. Enhancement filters are generally heuristic and problem oriented.
Moving window operations
The form that low-pass filters usually take is as some sort of moving window operator. The operator usually affects one pixel of the image at a time, changing its value by some function of a "local" region of pixels ("covered" by the window). The operator "moves" over the image to affect all the pixels in the image.
Some common types are: neighborhood-averaging filters, median filters, and mode filters.
Neighborhood-averaging filters
These replace the value of each pixel by a weighted average of the pixels in some neighborhood around it, i.e. a weighted sum in which the weights are non-negative. If all the weights are equal, then this is a mean filter. These filters are "linear".
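A minimal sketch of a mean filter (not from the slides; assumes NumPy and SciPy, with a random array standing in for a real noisy input):

```python
import numpy as np
from scipy.ndimage import convolve

noisy_image = np.random.rand(64, 64)      # stand-in for a noisy input
kernel = np.ones((3, 3)) / 9.0            # equal non-negative weights -> mean filter
smoothed = convolve(noisy_image, kernel)  # weighted average over each 3x3 neighborhood
```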
Median filters
This replaces each pixel value by the median of its neighbors, i.e. the value such that 50% of the values in the neighborhood are above it and 50% are below. This can be difficult and costly to implement due to the need to sort the values. However, this method is generally very good at preserving edges.
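A sketch using SciPy's ready-made median filter (the input array is an illustrative stand-in):

```python
import numpy as np
from scipy.ndimage import median_filter

noisy_image = np.random.rand(64, 64)           # stand-in for a noisy input
denoised = median_filter(noisy_image, size=3)  # median of each 3x3 neighborhood
```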
Mode filters
Each pixel value is replaced by its most common neighbor. This is a particularly useful filter for classification procedures where each pixel corresponds to an object which must be placed into a class; in remote sensing, for example, each class could be some type of terrain, crop type, water, etc.
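SciPy has no built-in mode filter, but one can be sketched with `generic_filter` (the class-label array here is a hypothetical stand-in):

```python
import numpy as np
from scipy.ndimage import generic_filter

def window_mode(values):
    """Return the most common value in the window."""
    vals, counts = np.unique(values, return_counts=True)
    return vals[np.argmax(counts)]

labels = np.random.randint(0, 4, size=(64, 64))        # stand-in class map
cleaned = generic_filter(labels, window_mode, size=3)  # mode of each 3x3 neighborhood
```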
These are all space invariant in that the same operation is applied to each pixel location.
A non-space invariant filter can be obtained from the above filters by changing the type of filter, or the weightings used for the pixels, in different parts of the image.
Non-linear filters also exist which are not space invariant; these attempt to locate edges in the noisy image before applying smoothing (a difficult task at best), in order to reduce the blurring of edges due to smoothing.
High Pass Filter
A high pass filter is used in digital image processing to remove or suppress the low frequency components, resulting in a sharpened image. High pass filters are often used in conjunction with low pass filters. For example, the image may be smoothed using a low pass filter, and then a high pass filter can be applied to sharpen the image while preserving boundary detail.
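A minimal sketch of this smooth-then-sharpen combination (not from the slides; assumes NumPy and SciPy, with an illustrative identity-plus-Laplacian sharpening kernel):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, convolve

image = np.random.rand(64, 64)               # stand-in input
smoothed = gaussian_filter(image, sigma=1.0) # low pass: suppress noise first

# High-pass (identity + Laplacian) kernel: suppresses low frequencies
# relative to the center pixel, sharpening boundaries.
hp_kernel = np.array([[ 0, -1,  0],
                      [-1,  5, -1],
                      [ 0, -1,  0]], dtype=float)
sharpened = convolve(smoothed, hp_kernel)
```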
What Is An Edge?
An edge may be regarded as a boundary between two dissimilar regions in an image. These may be different surfaces of an object, or perhaps a boundary between light and shadow falling on a single surface. Like a shadow, an edge is not a physical entity. It is where the picture ends and the wall starts; it is where the vertical and the horizontal surfaces of an object meet; it is what happens between a bright window and the darkness of the night. Simply speaking, it has no width. If there were a sensor with an infinitely small footprint and a zero-width point spread function, an edge would be recorded between pixels within an image.
More about Edges
Edges have been loosely defined as pixel intensity discontinuities within an image. Two experimenters processing the same image for the same purpose may not see the same edge pixels, and two working on different applications may never agree. In a word, edge detection is usually a subjective task. The quality of edge detection is also limited by what is in the image. Sometimes a user knows there should be an edge somewhere in the image, but it is not shown in the result, so he adjusts the parameters of the program, trying to get the edge detected. However, if the edge he has in mind is not as obvious to the program as some other features he does not want detected, he will get that other "noise" before the desired edge is detected.
In principle an edge is easy to find since differences in pixel values between regions are relatively easy to calculate by considering gradients.
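As a sketch of this gradient computation (not from the slides; assumes NumPy and SciPy, and the threshold value is illustrative):

```python
import numpy as np
from scipy.ndimage import sobel

image = np.random.rand(64, 64)    # stand-in input
gx = sobel(image, axis=1)         # gradient across columns
gy = sobel(image, axis=0)         # gradient across rows
magnitude = np.hypot(gx, gy)      # gradient magnitude at each pixel
edge_points = magnitude > 0.5     # candidate edge pixels (illustrative threshold)
```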
Many edge extraction techniques can be broken up into two distinct phases:
1. Finding pixels in the image where edges are likely to occur by looking for discontinuities in gradients. Candidate points for edges in the image are usually referred to as edge points, edge pixels, or edgels.
2. Linking these edge points in some way to produce descriptions of edges in terms of lines, curves, etc.
Gradient based methods
An edge point can be regarded as a point in an image where a discontinuity (in gradient) occurs across some line. A discontinuity may be classified as one of three types.
Types of Edges
Gradient Discontinuity
-- where the gradient of the pixel values changes across a line. This type of discontinuity can be classed as roof edges, ramp edges, convex edges, or concave edges. These can be distinguished by noting the sign of the component of the gradient perpendicular to the edge on either side of the edge: ramp edges have the same signs in the gradient components on either side of the discontinuity, while roof edges have opposite signs in the gradient components.
A Jump or Step Discontinuity
-- where pixel values themselves change suddenly across some line.
A Bar Discontinuity -- where pixel values rapidly increase then decrease again (or vice versa) across some line.
For example, if the pixel values are depth values, jump discontinuities occur where one object occludes another (or another part of itself), while gradient discontinuities usually occur between adjacent faces of the same object.
If the pixel values are intensities, a bar discontinuity would represent cases like a thin black line on a white piece of paper, while step edges may separate different objects or may occur where a shadow falls across an object.
Disadvantages of the use of second order derivatives
Since first derivative operators exaggerate the effects of noise, second derivatives exaggerate noise twice as much. In addition, no directional information about the edge is given.
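A quick sketch comparing the two (not from the slides; assumes SciPy, with a random array standing in for a real image):

```python
import numpy as np
from scipy.ndimage import laplace, sobel

image = np.random.rand(64, 64)  # stand-in input
first = sobel(image, axis=0)    # first derivative: already amplifies noise
second = laplace(image)         # second derivative: amplifies noise further,
                                # and is isotropic, so it carries no edge direction
```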
Edge Linking
Edge detectors yield pixels in an image that lie on edges. The next step is to collect these pixels together into a set of edges, replacing the many points on edges with a few edges themselves.
Problems: small pieces of edges may be missing, small edge segments may appear to be present due to noise where there is no real edge, etc.
Local Edge Linkers -- where edge points are grouped to form edges by considering each point's relationship to any neighbouring edge points.
Global Edge Linkers -- where all edge points in the image plane are considered at the same time and sets of edge points are sought according to some similarity constraint, such as points which share the same edge equation.
Local Edge Linking Methods
Most edge detectors yield information about the magnitude of the gradient at an edge point and, more importantly, the direction of the edge in the locality of the point. This is obviously useful when deciding which edge points to link together, since edge points in a neighbourhood which have similar gradient directions are likely to lie on the same edge. Local edge linking methods usually start at some arbitrary edge point and consider points in a local neighborhood for similarity of edge direction.
If the points satisfy the similarity constraint, they are added to the current edge set.
The neighbourhoods based around the recently added edge points are then considered in turn and so on. If the points do not satisfy the constraint then we conclude we are at the end of the edge, and so the process stops. A new starting edge point is found which does not belong to any edge set found so far, and the process is repeated. The algorithm terminates when all edge points have been linked to one edge or at least have been considered for linking once. Thus the basic process used by local edge linkers is that of tracking a sequence of edge points. An advantage of such methods is that they can readily be used to find arbitrary curves.
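A sketch of such a tracker (the slides describe the process only in prose; the function, the boolean edge map, the direction array, and the angle threshold here are all illustrative assumptions):

```python
import numpy as np

def link_edges(edge_points, direction, max_angle=np.pi / 8):
    """Greedy local edge linking: start from an arbitrary unvisited edge
    point and grow an edge set through 8-connected neighbours whose
    gradient directions are similar. `edge_points` is a boolean map and
    `direction` holds the gradient direction (radians) at each pixel."""
    h, w = edge_points.shape
    visited = np.zeros_like(edge_points, dtype=bool)
    edges = []
    for y in range(h):
        for x in range(w):
            if not edge_points[y, x] or visited[y, x]:
                continue
            # Track a new edge starting at this point.
            current, stack = [], [(y, x)]
            visited[y, x] = True
            while stack:
                cy, cx = stack.pop()
                current.append((cy, cx))
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and edge_points[ny, nx]
                                and not visited[ny, nx]):
                            # Similarity constraint on edge direction.
                            diff = abs(direction[ny, nx] - direction[cy, cx])
                            diff = min(diff, 2 * np.pi - diff)
                            if diff < max_angle:
                                visited[ny, nx] = True
                                stack.append((ny, nx))
            edges.append(current)  # end of this edge; start a new one
    return edges
```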
Texture Analysis
In many machine vision and image processing algorithms, simplifying assumptions are made about the uniformity of intensities in local image regions. However, images of real objects often do not exhibit regions of uniform intensities. For example, the image of a wooden surface is not uniform but contains variations of intensities which form certain repeated patterns called visual texture. The patterns can be the result of physical surface properties, such as roughness or oriented strands which often have a tactile quality, or they could be the result of reflectance differences, such as the color on a surface.
Image texture, defined as a function of the spatial variation in pixel intensities (gray values), is useful in a variety of applications and has been a subject of intense study by many researchers. One immediate application of image texture is the recognition of image regions using texture properties; this is called texture classification. The goal of texture classification is to produce a classification map of the input image in which each uniformly textured region is identified with the texture class it belongs to.
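The slides do not name any particular texture features; as one simple illustration (a hypothetical helper, assuming NumPy and SciPy), the local gray-value variance can serve as a per-pixel texture property that a classifier could threshold or cluster:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(image, size=9):
    """Variance of gray values in a moving window: a crude per-pixel
    texture measure, computed as E[x^2] - (E[x])^2 over the window."""
    img = image.astype(float)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return mean_sq - mean * mean
```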
Texture Segmentation Texture boundaries can be found even if the texture surfaces cannot be classified. The goal of texture segmentation is to obtain the boundary map separating the differently textured regions in an image.
Texture Synthesis Texture synthesis is often used for image compression applications. It is also important in computer graphics where the goal is to render object surfaces which are as realistic looking as possible.
Shape From Texture
The shape from texture problem is one instance of a general class of vision problems known as "shape from X". The goal is to extract three-dimensional surface shape from variations in textural properties in the image. The texture features are distorted by the imaging process and the perspective projection, and these distortions provide information about surface orientation and shape.