
1 Remote Sensing and Image Processing: 6 Dr. Mathias (Mat) Disney UCL Geography Office: 301, 3rd Floor, Chandler House Tel: 7670 4290 Email: mdisney@geog.ucl.ac.uk www.geog.ucl.ac.uk/~mdisney

2 Image processing: information extraction - spatial filtering and classification Purpose –To extract useful information from EO data –Already seen image enhancement and spectral arithmetic –Today: spatial filtering and classification

3 Spatial filtering Spatial information –Things close together more alike than things further apart (spatial auto-correlation) –Many features of interest have spatial structure such as edges, shapes, patterns (roads, rivers, coastlines, irrigation patterns etc. etc.) Spatial filters divided into two broad categories –Feature detection e.g. edges –Image enhancement e.g. smoothing “speckly” data e.g. RADAR

4 Low/high frequency –Gradual change in DN = low frequency –Rapid change in DN = high frequency

5 How do we exploit this? Spatial filters highlight or suppress specific features based on spatial frequency –Related to texture – rapid changes of DN value = “rough”, slow changes (or none) = “smooth” [Figure: 5x5 DN grid with rows 49 43 48 49 51 (smooth-ish); 50 43 65 54 51 (rough-ish); 14 12 9 9 10 (darker, horizontal linear feature); 49 43 48 49 51 (smooth-ish); 225 210 199 188 189 (bright, horizontal linear feature)]

6 Convolution (spatial) filtering Construct a “kernel” window (3x3, 5x5, 7x7 etc.) to enhance or remove these spatial features Compute weighted average of pixels in moving window, assigning that average value to centre pixel –Choice of weights determines how filter affects image

7 Convolution (spatial) filtering Filter moves over all pixels in input, calculating value of central pixel each time [Figure: the 5x5 DN input image above, a 3x3 filter with all weights 1/9, and the as-yet-unknown output image]

8 Convolution (spatial) filtering For first pixel in output image –Output DN = 1/9*(49+43+48+50+43+65+14+12+9) = 37 –Then move filter one place to right and do same again, so output DN = 1/9*(43+48+49+43+65+54+12+9+9) = 36.9 –And again: DN = 1/9*(48+49+51+65+54+51+9+9+10) = 38.4 This is a mean filter Acts to “smooth” or blur image Output image (first row): 37 36.9 38.4
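The mean-filter calculation above can be sketched in code. This is a minimal illustrative sketch (not the practical's software), using the 5x5 DN grid from the earlier slide:

```python
# 3x3 mean (low-pass) filter sketch on the slide's 5x5 DN grid.
# Border pixels have no complete 3x3 neighbourhood and are dropped
# here for brevity; real packages typically pad the edges instead.

image = [
    [49, 43, 48, 49, 51],
    [50, 43, 65, 54, 51],
    [14, 12,  9,  9, 10],
    [49, 43, 48, 49, 51],
    [225, 210, 199, 188, 189],
]

def mean_filter(img):
    """Apply a 3x3 mean filter; output shrinks by one pixel per edge."""
    rows, cols = len(img), len(img[0])
    out = []
    for r in range(1, rows - 1):
        row = []
        for c in range(1, cols - 1):
            # gather the 3x3 window centred on (r, c)
            window = [img[r + dr][c + dc]
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
            row.append(sum(window) / 9)  # equal weights of 1/9
        out.append(row)
    return out

smoothed = mean_filter(image)
print([round(v, 1) for v in smoothed[0]])  # first output row: 37.0, 36.9, 38.4
```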

9 Convolution (spatial) filtering Mean filter known as low-pass filter i.e. allows low frequency information to pass through but smooths out higher frequency (rapidly changing DN values) –Used to remove high frequency “speckle” from data Opposite is high-pass filter –Used to enhance high frequency information such as lines and point features while getting rid of low frequency information [Figure: high-pass filter kernel]

10 Convolution (spatial) filtering Can also have directional filters –Used to enhance edge information in a given direction –Special case of high-pass filter [Figures: vertical and horizontal edge enhancement filter kernels]
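A sketch of how such directional kernels behave; the Prewitt-style weights below are illustrative assumptions, not necessarily the kernels shown in the lecture figure:

```python
# Directional (edge-enhancing) convolution sketch with Prewitt-style
# kernels; a vertical-edge kernel responds strongly to vertical edges
# and a horizontal-edge kernel ignores them (and vice versa).

def convolve(img, kernel):
    """3x3 convolution; border pixels dropped for brevity."""
    rows, cols = len(img), len(img[0])
    out = []
    for r in range(1, rows - 1):
        row = []
        for c in range(1, cols - 1):
            acc = 0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    acc += kernel[dr + 1][dc + 1] * img[r + dr][c + dc]
            row.append(acc)
        out.append(row)
    return out

vertical_edges = [[-1, 0, 1]] * 3                        # left-right gradient
horizontal_edges = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]  # up-down gradient

# Test image: dark left half, bright right half (i.e. a vertical edge).
img = [[10, 10, 10, 200, 200]] * 5

print(convolve(img, vertical_edges)[2])    # strong response at the edge
print(convolve(img, horizontal_edges)[2])  # zero: no horizontal edges here
```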

11 Practical Try out various filters of various sizes See what effect each has, and construct your own filters –High-pass filters used for edge detection Often used in machine vision applications (e.g. robotics and/or industrial applications) –Directional high-pass filters used to detect edges of specific orientation –Low-pass filters used to suppress high freq. information e.g. to remove “speckle”

12 Example: low-pass filter ERS 1 RADAR image, Norfolk, 18/4/97 Original (left) and low-pass “smoothed” (right)

13 Example: high-pass edge detection SPOT image, Norfolk, 18/4/97 Original (left) and directional high-pass filter (edge detection), right

14 Multispectral image classification Very widely used method of extracting thematic information Use multispectral information Separate different land cover classes based on spectral response i.e. separability in “feature space”

15 Feature space Use the different spectral responses of different materials to separate them E.g. plot red against NIR DN values….

16 Supervised classification Choose examples of homogeneous regions of given cover types (training regions) Training regions contain representative DN values - allow us to separate classes in feature space (we hope!) Go through all pixels in data set and put each one into the most appropriate class How do we choose which is most appropriate?

17 Supervised classification Figures from Lillesand, Kiefer and Chipman (2004)

18 Supervised classification Feature space clusters E.g. 2 channels of information Are all clusters separate?

19 Supervised classification How do we decide which cluster a given pixel is in? E.g. 1 Minimum distance to means classification Simple and quick BUT what about points 1 and 2?
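The minimum-distance-to-means rule can be sketched as follows; the class names and two-band (red, NIR) mean DN values are invented for illustration:

```python
# Minimum-distance-to-means classification sketch in a two-band
# (red, NIR) feature space; class means are made-up training values.
import math

class_means = {
    "water":      (20, 10),
    "vegetation": (30, 90),
    "soil":       (60, 50),
}

def classify(pixel):
    """Assign a (red, NIR) pixel to the class with the nearest mean."""
    return min(class_means,
               key=lambda c: math.dist(pixel, class_means[c]))

print(classify((28, 85)))  # nearest to the vegetation mean
```

Note the weakness the slide raises: a pixel can be nearest to one class mean while actually lying inside another class's spread, which is what the parallelepiped and maximum-likelihood methods try to address.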

20 Supervised classification E.g. 2 Parallelepiped classification We calculate minimum and maximum of each training class in each band (rectangles) BUT what about when a class is large and overlaps another?
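A sketch of the parallelepiped rule in the same two-band setup; the training DN values are invented for illustration:

```python
# Parallelepiped classification sketch: each class is the min/max box
# of its training DNs in each band; illustrative training values.

training = {
    "water":      [(18, 8), (22, 12), (20, 10)],
    "vegetation": [(28, 85), (32, 95), (30, 90)],
}

# Build the per-class min/max box over the two bands.
boxes = {}
for name, pixels in training.items():
    mins = [min(p[b] for p in pixels) for b in range(2)]
    maxs = [max(p[b] for p in pixels) for b in range(2)]
    boxes[name] = (mins, maxs)

def classify(pixel):
    """Return the first class whose box contains the pixel, else None."""
    for name, (mins, maxs) in boxes.items():
        if all(mins[b] <= pixel[b] <= maxs[b] for b in range(2)):
            return name
    return None  # unclassified: pixel falls outside every box

print(classify((19, 9)))   # inside the water box
print(classify((50, 50)))  # outside every box
```

With three channels the rectangles become cuboids, exactly as the slide notes; the overlap problem the slide raises corresponds to a pixel falling inside two boxes at once.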

21 Supervised classification E.g. 3 Parallelepipeds with stepped decision boundaries Gives more flexibility Examples are all 2D BUT we can extend any of these ideas into any number of dimensions given more spectral channels With 3 channels squares become cubes etc….

22 Supervised classification E.g. 4 Maximum likelihood Now we use probability rather than distance in feature space Calculate probability of membership of each class for each pixel Results in “probability contours”

23 Supervised classification Now we see pixel 1 is correctly assigned to corn class Much more sophisticated BUT is computationally expensive compared to distance methods
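A maximum-likelihood sketch, simplified by assuming each class is Gaussian with independent bands (a full implementation would use the class covariance matrix); the class statistics are illustrative, not real training data:

```python
# Maximum-likelihood classification sketch: each class modelled as a
# per-band Gaussian (independent bands for simplicity); made-up stats.
import math

classes = {
    "corn": {"mean": (40, 80), "sd": (5, 8)},
    "sand": {"mean": (90, 70), "sd": (10, 6)},
}

def log_likelihood(pixel, stats):
    """Sum of per-band Gaussian log-densities for one class."""
    ll = 0.0
    for x, m, s in zip(pixel, stats["mean"], stats["sd"]):
        ll += -math.log(s * math.sqrt(2 * math.pi)) - (x - m) ** 2 / (2 * s * s)
    return ll

def classify(pixel):
    """Assign the pixel to the class with the highest likelihood."""
    return max(classes, key=lambda c: log_likelihood(pixel, classes[c]))

print(classify((42, 75)))  # highest likelihood under the corn class
```

The extra cost relative to the distance methods is visible here: an exponential-family density per class per pixel, rather than a single distance.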

24 Unsupervised classification As would be expected – minimal user intervention We tell it how many classes we expect, then some iterative procedure determines where clusters are in feature space Assigns each pixel in turn to a cluster, then updates cluster statistics and repeats….. –advantage is automatic –disadvantage is we don’t know what classes represent

25 Unsupervised classification E.g. ISODATA (Iterative self-organising data analysis) algorithm –Start with (user-defined number) randomly located clusters –Assign each pixel to nearest cluster (mean spectral distance) –Re-calculate cluster means and standard deviations –If distance between two clusters lower than some threshold, merge them –If standard deviation in any one dimension greater than some threshold, split into two clusters –Delete clusters with small number of pixels –Re-assign pixels, re-calculate cluster statistics etc. until changes of clusters smaller than some fixed threshold
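The assign/re-calculate loop at the core of ISODATA can be sketched as below; the split, merge and delete steps are omitted for brevity, and the pixel data and starting means are made up:

```python
# Simplified sketch of the iterative cluster loop at the heart of
# ISODATA (split/merge/delete steps omitted); two bands, two clusters.
import math

pixels = [(10, 12), (11, 10), (12, 11), (80, 82), (79, 85), (81, 80)]
means = [(0, 0), (100, 100)]  # user-supplied starting cluster means

for _ in range(10):  # iterate until assignments stabilise
    clusters = [[] for _ in means]
    for p in pixels:  # assign each pixel to the nearest cluster mean
        nearest = min(range(len(means)), key=lambda i: math.dist(p, means[i]))
        clusters[nearest].append(p)
    # re-calculate each cluster mean from its member pixels
    means = [tuple(sum(band) / len(c) for band in zip(*c))
             for c in clusters if c]

print(means)  # means have moved onto the two pixel groups
```

In full ISODATA the loop would also compute per-cluster standard deviations and apply the split/merge/delete thresholds listed above before re-assigning.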

26 Example: 2 classes, 2 bands [Figure: four panels of DN Ch 1 vs DN Ch 2 feature space – (1) initial cluster means a and b, with pixels 1 and 2; assign pixel 1 to cluster a, 2 to b, etc.; (2) all pixels assigned to a or b – update stats; cluster means move towards pixels 1 and 2 respectively; (3) new positions of cluster means; (4) SD of cluster a too large? Split a into 2, recalculate. Repeat….]

27 Classification Accuracy How do we tell if classification is any good? –Classification error matrix (aka confusion matrix or contingency table) –Need “truth” data – sample pixels of known classes How many pixels of KNOWN class X are incorrectly classified as anything other than X (errors of omission)? –Divide correctly classified pixels in each class of truth data by COLUMN totals (Producer’s Accuracy) How many pixels are incorrectly classified as class X when they should be some other known class (errors of commission)? –Divide correctly classified pixels in each class by ROW totals (User’s Accuracy)
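Producer's and user's accuracy from an error matrix can be computed as follows; the counts are invented, and the rows-as-classified / columns-as-truth convention is an assumption (check which convention your package uses):

```python
# Classification error (confusion) matrix sketch with made-up counts.
# Convention assumed here: matrix[row][col] counts pixels classified
# as labels[row] whose truth class is labels[col].

labels = ["wheat", "peas", "urban"]
matrix = [
    [28, 2, 0],
    [1, 15, 3],
    [1, 3, 27],
]

n = len(labels)
col_totals = [sum(matrix[r][c] for r in range(n)) for c in range(n)]  # truth
row_totals = [sum(matrix[r]) for r in range(n)]                       # classified

# Producer's accuracy: correct / COLUMN total (measures errors of omission)
producers = {labels[i]: matrix[i][i] / col_totals[i] for i in range(n)}
# User's accuracy: correct / ROW total (measures errors of commission)
users = {labels[i]: matrix[i][i] / row_totals[i] for i in range(n)}
# Overall accuracy: diagonal sum / total pixels
overall = sum(matrix[i][i] for i in range(n)) / sum(row_totals)

print(producers["peas"], users["peas"], overall)
```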

28 Classification Accuracy [Figure: error matrix highlighting errors of omission and errors of commission for class U]

29 Post processing? Ideally, all classes would be homogeneous In reality we get wheat pixels misclassified as “peas” etc. (accuracy < 100%) Pass majority filter over classified image –E.g. 3 x 3 majority filter assigns output pixel value to be majority of nearest 8 pixels [Figure: majority filter example]
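A sketch of the 3x3 majority filter described above, taking the majority class of the 8 neighbouring pixels; the class map is invented:

```python
# 3x3 majority-filter sketch: each interior pixel is replaced by the
# most common class among its 8 neighbours; border pixels are left
# unchanged here for brevity.
from collections import Counter

classes = [
    ["wheat", "wheat", "wheat", "wheat"],
    ["wheat", "peas",  "wheat", "wheat"],
    ["wheat", "wheat", "wheat", "peas"],
    ["peas",  "peas",  "peas",  "peas"],
]

def majority_filter(img):
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]  # copy; borders keep original values
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            neighbours = [img[r + dr][c + dc]
                          for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                          if (dr, dc) != (0, 0)]  # nearest 8 pixels
            out[r][c] = Counter(neighbours).most_common(1)[0][0]
    return out

cleaned = majority_filter(classes)
print(cleaned[1][1])  # the isolated "peas" pixel becomes "wheat"
```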

30 Assessed practical Classify Churn Farm airborne image based on information given to you Try supervised methods first –Accuracy? Test using “truth” data provided –Anything << 80% is not much use…. –What do your training data classes look like in feature space? Then perhaps unsupervised –Accuracy? Warning: don’t include the white text in any of your training data – the pixel values (255 in all channels) will TOTALLY screw up your classes!

