Spatially Constrained Segmentation of Dermoscopy Images


1 Spatially Constrained Segmentation of Dermoscopy Images
Howard Zhou1, Mei Chen2, Le Zou2, Richard Gass2, Laura Ferris3, Laura Drogowski3, James M. Rehg1 Good afternoon, everyone. My name is Howard Zhou, and I am a graduate student in the School of Interactive Computing at Georgia Tech. The work I am going to present is joint work between Georgia Tech, Intel Research Pittsburgh, and the Department of Dermatology at the University of Pittsburgh. 1School of Interactive Computing, Georgia Tech 2Intel Research Pittsburgh 3Department of Dermatology, University of Pittsburgh

2 Skin cancer and melanoma
Skin cancer : most common of all cancers According to the latest statistics available from the National Cancer Institute, skin cancer is the most common of all cancers in the United States; more than 1 million cases are diagnosed in the US each year. Shown here are some examples of skin lesion images. The four images on the left are various forms of skin lesions, cancerous or non-cancerous. The two on the right are a specific form of skin cancer: melanoma. [ Image courtesy of “An Atlas of Surface Microscopy of Pigmented Skin Lesions: Dermoscopy” ]

3 Skin cancer and melanoma
Skin cancer : most common of all cancers Melanoma : leading cause of mortality (75%) Although it represents only 4 percent of all skin cancers in the US, melanoma is the leading cause of skin cancer mortality, accounting for more than 75 percent of all skin cancer deaths. [ Image courtesy of “An Atlas of Surface Microscopy of Pigmented Skin Lesions: Dermoscopy” ]

4 Skin cancer and melanoma
Skin cancer : most common of all cancers Melanoma : leading cause of mortality (75%) Early detection significantly reduces mortality The timeline shown here is the 10-year survival rate for melanoma. If caught at an early stage, as seen here, melanoma can often be cured with a simple excision, and the patient has a high chance of recovery. Hence, early detection of malignant melanoma significantly reduces mortality. [ Image courtesy of “An Atlas of Surface Microscopy of Pigmented Skin Lesions: Dermoscopy” ]

5 Dermoscopy view Clinical View
Dermoscopy is a noninvasive imaging technique that is well suited to this task; it has been shown to be effective for early detection of melanoma. The procedure uses an incident-light magnification system, i.e., a dermatoscope, to examine skin lesions. Oil is often applied at the skin–microscope interface, which allows the incident light to penetrate the top layer of the skin tissue and reveal pigmented structures beyond what is visible to the naked eye. [ Image courtesy of “An Atlas of Surface Microscopy of Pigmented Skin Lesions: Dermoscopy” ]

6 Dermoscopy Improves diagnostic accuracy by 30% in the hands of trained physicians May require as much as 5 years of experience to acquire the necessary training Motivation for computer-aided diagnosis (CAD) in this area Clinical view Dermoscopy view For dermatologists experienced with dermoscopy, it has been shown to improve diagnostic accuracy by as much as 30% over clinical examination. However, it may take as much as five years of experience to acquire the necessary training. This is the motivation for computer-aided diagnosis in this area. In recent years, there has been increasing interest in computer-aided diagnosis of pigmented skin lesions from dermoscopy images. In the future, with the development of new algorithms and techniques, these computer procedures may aid dermatologists and bring medical breakthroughs in the early detection of melanoma.

7 First step of analysis: Segmentation
Separating lesions from surrounding skin Resulting border Gives lesion size and border irregularity Crucial to the extraction of dermoscopic features for diagnosis Previous Work : PDE approach – Erkol et al. 2005, … Histogram thresholding – Hintz-Madsen et al. 2001, … Clustering – Schmid 1999, Melli et al. 2006… Statistical region merging – Celebi et al. 2007, … Computer-aided analysis often starts with segmenting the pigmented lesion from the surrounding skin. This step not only provides a basis for calculating important clinical features such as lesion size and border irregularity; the resulting border is also crucial for extracting discriminating dermoscopic features such as atypical pigment networks and radial streaming. Over the years, researchers have developed many automated segmentation algorithms, most with roots in image processing and computer vision: PDE approaches [2], B. Erkol, R.H. Moss, R.J. Stanley, W.V. Stoecker, and E. Hvatum, “Automatic lesion boundary detection in dermoscopy images using gradient vector flow snakes,” Skin Research and Technology, vol. 11, pp. 17–26, 2005; histogram thresholding [3], M. Hintz-Madsen, L. K. Hansen, J. Larsen, and K. Drzewiecki, “A probabilistic neural network framework for detection of malignant melanoma,” in Artificial Neural Networks in Cancer Diagnosis, Prognosis and Patient Management, pp. 141–; clustering [4], P. Schmid, “Segmentation of digitized dermatoscopic images by two-dimensional color clustering,” IEEE Trans. Medical Imaging, vol. 18, pp. 164–171, 1999, and R. Melli, C. Grana, and R. Cucchiara, “Comparison of color clustering algorithms for segmentation of dermatological images,” in SPIE Medical Imaging, 2006; and statistical region merging [5], M.E. Celebi, H.A. Kingravi, H. Iyatomi, J. Lee, Y.A. Aslandogan, W.V. Stoecker, R. Moss, J.M. Malters, and A.A. Marghoob, “Fast and accurate border detection in dermoscopy images using statistical region merging,” in SPIE Medical Imaging, 2007. Researchers have also explored different color spaces, e.g., RGB, HSI, CIELUV [4], and CIELAB, to improve segmentation performance. However, due to the large variability in lesion properties and skin conditions, along with the presence of artifacts such as hair and air bubbles, automated segmentation of pigmented lesions in dermoscopy images remains a challenge.

8 Domain specific constraints
Spatial constraints Four corners are skin (Melli et al. 2006, Celebi et al. 2007) Implicitly enforcing local neighborhood constraints on image Cartesian coordinates (Meanshift) Because we are working with dermoscopy images rather than general images, many algorithms exploit domain-specific constraints of pigmented skin lesions. Some algorithms enforce explicit spatial constraints to simplify the figure/ground label assignment, such as the work of Melli et al. [6] and Celebi et al. [5]. Other methods, such as the meanshift algorithm, implicitly enforce local neighborhood constraints on image Cartesian coordinates during pixel grouping.

9 Domain specific constraints
Spatial constraints Four corners are skin (Melli et al. 2006, Celebi et al. 2007) Implicitly enforcing local neighborhood constraints on image Cartesian coordinates (Meanshift) Meanshift (c = 32, s = 8) Some methods, such as the meanshift algorithm, implicitly enforce local neighborhood constraints on image Cartesian coordinates during pixel grouping.
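For reference, a mean-shift baseline like the one shown on these slides can be reproduced with OpenCV's pyramid mean-shift filter. This is only an illustrative sketch: mapping the slide's (c = 32, s = 8) onto OpenCV's color and spatial bandwidths, and the file name, are assumptions, not details given in the talk.

```python
import cv2

# Mean-shift filtering groups pixels jointly in (x, y) and color space,
# i.e., it implicitly enforces local neighborhood constraints in
# Cartesian image coordinates.
img = cv2.imread("dermoscopy.png")           # hypothetical file name
# spatial bandwidth 8 (s), color bandwidth 32 (c) -- assumed mapping
filtered = cv2.pyrMeanShiftFiltering(img, 8, 32)
cv2.imwrite("dermoscopy_meanshift.png", filtered)
```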

10 We explore … Spatial constraints arising from the growth pattern of pigmented skin lesions Meanshift (c = 32, s = 8) In our work, we explore spatial constraints that arise from the growth pattern of pigmented skin lesions.

11 We explore … Spatial constraints arising from the growth pattern of pigmented skin lesions – radiating pattern Meanshift (c = 32, s = 8) More specifically, the kind of radiating appearance exhibited on the skin surface.

12 Embedding constraints
Radiating pattern from lesion growth Embedding constraints as polar coords improves segmentation performance Meanshift (c = 32, s = 8) We show that by embedding such constraints as polar coordinates in the pixel-grouping stage, we can considerably improve segmentation performance. As seen in this side-by-side comparison of pixel grouping, embedding the radiating constraints makes the grouping less blocky compared to the meanshift algorithm. Polar (k = 6)

13 Embedding constraints
Radiating pattern from lesion growth Embedding constraints as polar coords improves segmentation performance The final lesion boundary improves accordingly: it is less prone to errors in the figure/ground assignment and shows less localization error at the boundary. Meanshift Polar Polar (k = 6)

14 Comparison to the Doctors
Radiating pattern from lesion growth Embedding constraints as polar coords improves segmentation performance White: Dr. Ferris Red : Dr. Zhang Blue : computer When compared with the lesion boundaries manually outlined by dermatologists, our method produces computer-generated boundaries that agree more closely with theirs. Meanshift Polar

15 Dermoscopy images Common radiating appearance
So what gives pigmented skin lesions such common constraints, this radiating appearance that we can take advantage of? We will take a look at the reason, which is “more than skin deep” (pun intended).

16 Growth pattern of pigmented skin lesions
lesions grow in both radial and vertical directions Skin absorbs and scatters light. Appearance of pigmented cells varies with depth Dark brown → tan → blue-gray Common radiating appearance pattern on skin surface Melanoma arises from pigmented skin cells (melanocytes) and is commonly identified with two growth phases, radial and vertical. Since skin absorbs and scatters light, the appearance of these pigmented cells varies with depth. As a result, they appear dark brown within the epidermis, tan at or near the dermoepidermal junction, and blue-gray in the dermis. This results in a common ringing effect (or radiating appearance pattern) seen on the skin surface. [ Image courtesy of “Dermoscopy: An Atlas of Surface Microscopy of Pigmented Skin Lesions” ]

17 Radiating growth pattern on skin surface
Difference in appearance: more significant along the radial direction than any other direction. Much like tree rings, although far less regular, this radiating pattern seen from the skin surface dictates that, in general, the difference in lesion appearance is more significant along the radial direction from the lesion center than along any other direction.

18 Radiating growth pattern on skin surface
Difference in appearance: more significant along the radial direction than any other direction. As shown here, notice that the skin patch in the cyan square bears more resemblance to the patch in the red square than to the yellow one, even though the yellow square is much closer to the cyan square in Euclidean distance. In the pixel-grouping stage of algorithms like meanshift, pixels are grouped not only according to their appearance, such as R, G, B in color space, but also with respect to their location measured in Euclidean distance. However, because of the radiating appearance exhibited in dermoscopy images, a direct embedding of Cartesian coordinates may not be optimal. For example, in Cartesian coordinates, the red square in Fig. 1(b) is farther away from the cyan square than the yellow one. Consequently, it is less likely to be grouped together with the cyan square, even though their underlying pigmented cells are probably in the same growth period. Given that dermatologists tend to place the lesion near the center of the image frame when acquiring dermoscopy images, we can better capture the radial growth pattern by replacing the x, y coordinates with a polar radius r measured from the image center.

19 Embedding spatial constraints Feature vectors
Each pixel → feature vector in R4 3D: R, G, B or L, a, b in the color space 1D: polar radius measured from the center of the image (normalized by w) original r {R, G, B} Hence, every pixel becomes a feature vector in R4, with three appearance components from the color space and one from the polar radius measured from the center of the image.
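As a minimal sketch of this embedding (assuming scikit-image for the RGB-to-L*a*b* conversion), the helper below builds the R4 feature vectors. The normalization constant w is a free parameter chosen to make the radius commensurate with the color values; the value used here is only a placeholder, not the one from the talk.

```python
import numpy as np
from skimage import img_as_float
from skimage.color import rgb2lab  # RGB -> CIELAB conversion

def polar_features(image_rgb, w=100.0):
    """Map an H x W x 3 RGB image to N x 4 feature vectors:
    (L*, a*, b*, w * normalized polar radius from the image center)."""
    lab = rgb2lab(img_as_float(image_rgb))       # H x W x 3, L* in [0, 100]
    h, width = lab.shape[:2]
    ys, xs = np.mgrid[0:h, 0:width]
    cy, cx = (h - 1) / 2.0, (width - 1) / 2.0
    r = np.hypot(ys - cy, xs - cx)
    r = r / r.max()                              # normalize radius to [0, 1]
    return np.concatenate([lab.reshape(-1, 3),
                           (w * r).reshape(-1, 1)], axis=1)
```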

20 Embedding spatial constraints Grouping features
Each pixel → feature vector in R4 Clustering pixels in the feature space Replace pixels with mean for compact representation original r {R, G, B} filtered We then group these feature vectors according to their distance in the feature space (in this case R4). After replacing each pixel with its respective cluster center, we obtain a filtered version of the dermoscopy image (on the right), a more compact representation.

21 Radiating pattern Dermoscopy vs. natural images
To demonstrate that the radiating pattern is common among dermoscopy images but not in general image datasets, we perform a small experiment. On the left, we have a total of 216 dermoscopy images; on the right, 300 natural images from the Berkeley segmentation dataset. We want to show that the radiating pattern is common among the dermoscopy images, while natural images do not exhibit this pattern. Derm dataset (216) BSD dataset (300)

22 Embedding spatial constraints Grouping features
Cartesian {Rc, Gc, Bc} Mean per-pixel residue: average per-pixel color difference of each pair original {Ro, Go, Bo} polar {Rp, Gp, Bp} We filter each image using the k-means algorithm, taking either the Cartesian (x, y) coordinates or the polar radius as additional components of the feature vector. The mean per-pixel residue, i.e., the average per-pixel color difference of each original–filtered pair, is then calculated.

23 Dermoscopy vs. natural images Polar vs. Cartesian
Mean per-pixel residue (k-means++, k = 30) Derm dataset (216) Residue (Cartesian) Residue (polar) BSD dataset (300) Residue (polar) Residue (Cartesian) We use the k-means++ algorithm with k = 30 for this clustering task. The graphs here compare the residues: the x axis shows the residue when the polar radius is used and the y axis shows the Cartesian case; if a test image lies above the diagonal line, its polar residue is smaller than its Cartesian residue, and vice versa. For dermoscopy images, the fact that almost all points are above the equal-residue line indicates that embedding polar coordinates as spatial constraints can reduce clustering error. The effect is much less significant for natural images, which are spread more evenly about the diagonal line.

24 Dermoscopy vs. natural images Polar vs. Cartesian
Mean per-pixel residue (k-means++, k = 30) Quantitatively, for the dermoscopy image dataset, the mean residue is reduced by 18% by switching from Cartesian to polar radius. In contrast, the difference is much less for the natural images in BSD.
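A sketch of this residue experiment, assuming the feature-building helper from the earlier snippet, scikit-learn's KMeans (k-means++ seeding is its default initializer), and illustrative array names; the exact weighting of the spatial component is not specified in the transcript.

```python
import numpy as np
from sklearn.cluster import KMeans

def mean_residue(features, colors, k=30, seed=0):
    """Cluster the feature vectors, replace each pixel's color with its
    cluster mean, and return the average per-pixel color difference."""
    labels = KMeans(n_clusters=k, init="k-means++", n_init=5,
                    random_state=seed).fit_predict(features)
    filtered = np.zeros_like(colors, dtype=float)
    for c in range(k):
        mask = labels == c
        if mask.any():
            filtered[mask] = colors[mask].mean(axis=0)
    return np.linalg.norm(colors - filtered, axis=1).mean()

# Hypothetical usage, comparing the two embeddings on one image:
#   feats_polar: N x 4 (L*, a*, b*, w*r); feats_xy: N x 5 (L*, a*, b*, x, y)
#   lab: N x 3 color values of the original pixels
# r_polar = mean_residue(feats_polar, lab)
# r_cart  = mean_residue(feats_xy, lab)
```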

25 Polar vs. Cartesian The regions appear more blocky in the Cartesian case If we take a look at the 30 clusters generated during this comparison, we can see that the regions appear less blocky (and more intuitive) when the radiating constraints are embedded as the polar radius. Polar (k = 30) Cartesian (k = 30)

26 Six super-regions 30 clusters → 6 super-clusters (K-means++)
This in turn yields a better final segmentation result. Polar (k = 6) Cartesian (k = 6)

27 Final segmentation Polar Cartesian

28 Polar vs. Meanshift The regions appear more blocky in the Meanshift case The result of our verification is consistent with the comparison against the meanshift algorithm, where spatial constraints are also enforced in Cartesian space. Polar (k = 6) Meanshift (c = 32, s = 8)

29 Final segmentation Polar Meanshift
The regions appear more blocky when the clustering is performed in Cartesian space. This unnatural appearance is not present when the polar radius is used. Polar Meanshift

30 Algorithm overview Given a dermoscopy image
With this observation, we explicitly enforce the radiating constraints by embedding the polar radius into the feature vectors. This gives us a novel unsupervised algorithm for the segmentation of pigmented skin lesions.

31 Algorithm overview Given a dermoscopy image original

32 Algorithm overview 1. First round clustering: K-means++ (k = 30)
original 30 clusters The first clustering stage serves two purposes. First, it removes small variations in appearance due to noise. Second, it groups pixels into homogeneous regions; the color and location values of each pixel are replaced with the average values of the region to which they belong. The regions resulting from this operation give us a more compact representation of the original image. We use the k-means++ algorithm [7], a variation of the k-means algorithm that improves both speed and accuracy by using a randomized seeding technique. The input to k-means++ is all the image pixels represented as points in R4, where the coordinates of each point encode the color and location of the corresponding pixel. We convert the pixels from RGB to L*a*b* values because the CIELAB space is more perceptually uniform. The fourth coordinate is the polar radius measured from the image center, encoding the location of each pixel. We normalize this coordinate with a constant w to make the polar radius commensurate with the L*a*b* color values. We chose the number of clusters k as 30 so that the clusters can represent the image compactly without incurring large residue errors [6]. This first round of clustering serves the same noise-reducing purpose as the median filtering used in some previous techniques, but reduces the boundary localization errors introduced by the smoothing process.

33 Algorithm overview 2. Second round: clusters (30) → super-regions (6)
original 30 clusters 6 Super-regions After the first stage of clustering, the mean values of the clusters are fed into the next round of k-means++ clustering to produce super-regions, as defined previously. We choose the number of clusters k to satisfy two requirements. First, it must account for intra-skin and intra-lesion variations. For example, the appearance of the lesion in Fig. 4(a) varies significantly across the lesion, and an attempt to produce a single lesion cluster is unlikely to succeed. Second, we want to avoid a value so large that it makes the subsequent combinatorial region merge (see next section) intractable. Based on our experiments and previous studies [6], we set k to 6, which produces satisfactory results. As shown in Figs. 4(d), 4(h), 4(f), and 4(j), the super-regions do correspond to meaningful regions such as skin, skin–lesion transition, and inner lesion.

34 Algorithm overview 3. Apply texture gradient filter (Martin et al., 2004) original 30 clusters 6 Super-regions Texture edge map

35 Algorithm overview 4. Find optimal boundary (color+texture) original
30 clusters 6 Super-regions Texture edge map Final segmentation

36 1. First round clustering
First round clustering: K-means++ (k = 30) Reduce noise Groups pixels into homogeneous regions – a more compact representation of the image Arthur and Vassilvitskii, 2007 R4 : {L*a*b* (3D), w * polar radius (1D)} original

37 1. First round clustering
First round clustering: K-means++ (k = 30) Reduce noise Groups pixels into homogeneous regions – a more compact representation of the image Arthur and Vassilvitskii, 2007 R4 : {L*a*b* (3D), w * polar radius (1D)} original 30 clusters

38 2. Second round clustering
K = 6 : clusters (30) → super-regions (6) Account for intra-skin and intra-lesion variations Avoid a large k Super-regions correspond to meaningful regions such as skin, skin-lesion transition, and inner lesion, etc. original 30 clusters

39 2. Second round clustering
K = 6 : clusters (30) → super-regions (6) Account for intra-skin and intra-lesion variations Avoid a large k Super-regions correspond to meaningful regions such as skin, skin-lesion transition, and inner lesion, etc. original 30 clusters 6 super-regions
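A sketch of the two clustering rounds, assuming the feature vectors from the earlier snippet and scikit-learn's KMeans (k-means++ is its default seeding). Feeding the raw first-round cluster centers into the second round is one reading of the notes; the exact features used for the second round are not spelled out on the slide.

```python
import numpy as np
from sklearn.cluster import KMeans

def two_stage_clustering(features, k1=30, k2=6, seed=0):
    """First round: group pixels into k1 clusters in the 4-D
    (L*, a*, b*, w*r) feature space.  Second round: cluster the k1
    cluster means into k2 super-regions."""
    km1 = KMeans(n_clusters=k1, init="k-means++", n_init=5,
                 random_state=seed).fit(features)
    pixel_labels = km1.labels_                    # cluster id per pixel
    km2 = KMeans(n_clusters=k2, init="k-means++", n_init=10,
                 random_state=seed).fit(km1.cluster_centers_)
    # map each of the k1 clusters to one of the k2 super-regions
    super_labels = km2.labels_[pixel_labels]      # super-region id per pixel
    return pixel_labels, super_labels
```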

40 3. Color-texture integration
Incorporating texture information can improve segmentation performance. Severely sun-damaged skin; texture variations at boundaries in addition to color variations After the super-regions are identified, we apply a region merge procedure to produce a plausible lesion segmentation. However, for many cases merging based on color cues alone is insufficient. For instance, on severely sun-damaged skin (Fig. 4(i)), texture variations are often more informative than color. Moreover, many lesions exhibit texture variations at boundaries in addition to color variations. For these cases, incorporating texture information can improve segmentation performance. original

41 3. Color-texture integration
Incorporating texture information can improve segmentation performance. Severely sun-damaged skin; texture variations at boundaries in addition to color variations Apply texture gradient filter (Martin et al., 2004) original

42 3. Color-texture integration
Incorporating texture information can improve segmentation performance. Severely sun-damaged skin; texture variations at boundaries in addition to color variations Apply texture gradient filter (Martin et al., 2004) Texture edge map: pseudo-likelihood original Texture edge map
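The texture gradient of Martin et al. compares texton histograms over oriented half-discs, which is beyond a slide-sized snippet. As a rough, clearly labelled stand-in, the sketch below derives a pseudo-likelihood-style texture edge map from the gradient of a local standard-deviation map; this is an illustrative proxy, not the filter used in the talk.

```python
import numpy as np
from scipy import ndimage

def texture_edge_proxy(gray, win=9, sigma=2.0):
    """Crude texture edge map: local standard deviation followed by a
    Gaussian gradient magnitude, scaled to [0, 1] as a pseudo-likelihood."""
    g = gray.astype(float)
    mean = ndimage.uniform_filter(g, size=win)
    sq_mean = ndimage.uniform_filter(g ** 2, size=win)
    local_std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    grad = ndimage.gaussian_gradient_magnitude(local_std, sigma=sigma)
    return grad / (grad.max() + 1e-8)
```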

43 4. Optimal boundary Optimal skin-lesion boundary 6 super-regions
Color: Earth Mover’s Distance (EMD) between every pair of super-regions 6 super-regions

44 4. Optimal boundary Optimal skin-lesion boundary 6 super-regions
Color: Earth Mover’s Distance (EMD) between every pair of super-regions Texture: Texture edge map 6 super-regions Texture edge map

45 4. Optimal boundary Optimal skin-lesion boundary 6 super-regions
Color: Earth Mover’s Distance (EMD) between every pair of super-regions Texture: Texture edge map Minimizing the integrated color-texture measure We uniformly sample the original values of the pixels within each super-region and compute an Earth Mover’s Distance (EMD) between every pair of super-regions. The skin–lesion boundary is the curve that separates the set of super-regions inside the lesion from the rest. The optimal boundary is found by minimizing the integrated color-texture measure, where Si and Sj (i, j ∈ {1, 2, …, 6}) are a pair of super-regions, Ci,j is the EMD between them, T is the normalized texture gradient on the lesion boundary, and λ is a constant that can be adjusted to put emphasis on either color or texture. 6 super-regions Texture edge map
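The transcript only describes the objective loosely (the formula itself is on the slide image), so the sketch below is one plausible reading rather than the paper's exact formulation: it enumerates candidate subsets of the six super-regions, scores each candidate lesion by a λ-weighted combination of cross-boundary color distance and the mean texture-edge response on its boundary, and keeps the best. A 1-D Wasserstein distance on luminance samples stands in for the multi-dimensional EMD, and no connectivity constraint is enforced; both are simplifications.

```python
import itertools
import numpy as np
from scipy import ndimage
from scipy.stats import wasserstein_distance

def lesion_boundary(super_labels, luminance, texture_edges, lam=0.5, k=6):
    """Pick the subset of super-regions whose boundary best separates
    color and texture; returns a boolean lesion mask."""
    # 1-D EMD between region luminance samples (proxy for the color term)
    samples = [luminance[super_labels == i].ravel() for i in range(k)]
    emd = np.zeros((k, k))
    for i, j in itertools.combinations(range(k), 2):
        emd[i, j] = emd[j, i] = wasserstein_distance(samples[i], samples[j])

    best_score, best_mask = -np.inf, None
    for r in range(1, k):
        for subset in itertools.combinations(range(k), r):
            inside = np.isin(super_labels, subset)
            # boundary pixels: inside pixels adjacent to the outside
            boundary = inside & ~ndimage.binary_erosion(inside)
            if not boundary.any():
                continue
            color = np.mean([emd[i, j] for i in subset
                             for j in range(k) if j not in subset])
            texture = texture_edges[boundary].mean()
            score = lam * color + (1 - lam) * texture
            if score > best_score:
                best_score, best_mask = score, inside
    return best_mask
```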

46 Validation and results
Our collaborating dermatologist, Dr. Ferris, manually outlined the lesions in 67 dermoscopy images The border error is given by error = Area(computer XOR ground-truth) / Area(ground-truth) × 100% computer : binary image obtained by filling the automatically detected border ground-truth : obtained by filling in the boundaries outlined by Dr. Ferris We had our collaborating dermatologist Dr. Ferris manually outline the lesions in 67 dermoscopy images and treat that as ground truth. We compare them to the automatically generated borders using the grading system in [5]. The border error is given by error = Area(computer XOR ground-truth) / Area(ground-truth) × 100%, where computer is the binary image obtained by filling the automatically detected border and ground-truth is obtained by filling in the boundaries outlined by our dermatologist.
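This error measure translates directly into a few lines of NumPy; the variable names below are hypothetical, and both inputs are assumed to be boolean masks of the filled borders.

```python
import numpy as np

def border_error(computer_mask, ground_truth_mask):
    """error = Area(computer XOR ground-truth) / Area(ground-truth) * 100%."""
    xor_area = np.logical_xor(computer_mask, ground_truth_mask).sum()
    return 100.0 * xor_area / ground_truth_mask.sum()

# usage (hypothetical masks): border_error(auto_mask, gt_mask)
```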

47 Typical segmentation result
Error = 12.96% White: Dr. Ferris Red : Dr. Zhang Blue : computer To account for inter-operator variation, we also asked Dr. Alex Zhang to manually outline boundaries on the same dataset.

48 Comparison Fig. 3 shows a performance comparison using this error measure. We can see that while both color-space conversion (from RGB to L*a*b*) and color-texture integration improved segmentation accuracy, the biggest boost came from embedding the spatial constraints during clustering. To account for inter-operator variation, we also asked Dr. Alex Zhang to manually outline boundaries on the same dataset.

49 Additional results White: Dr. Ferris Red : Dr. Zhang Blue : computer
Typical segmentation results are shown in Figs. 4(d)–4(k). The first image of each pair shows the super-regions while the second one shows the lesion. Both the computer-generated borders (in blue) and the dermatologist’s (in white) are overlaid on top of the original images for comparison. Error = 5.80%

50 Additional results White: Dr. Ferris Red : Dr. Zhang Blue : computer
Error = 13.61%

51 Additional results White: Dr. Ferris Red : Dr. Zhang Blue : computer
Error = 16.60%

52 Additional results White: Dr. Ferris Red : Dr. Zhang Blue : computer
Error = 34.09%

53 Limitation The assumption that lesions appear relatively near the image center may not hold The fairly low number of super-regions (6) may limit the algorithm’s performance on lesions with more colors

54 Conclusion The growth pattern of pigmented skin lesions can be used to improve lesion segmentation accuracy in dermoscopy images. We presented an unsupervised segmentation algorithm incorporating these spatial constraints. We demonstrated its efficacy by comparing the segmentation results to ground-truth segmentations determined by an expert.

55 Future work Extend to meanshift?
The exemplar-based inpainting procedure is much more time-consuming (around 150 seconds on an average 600x450 image with hair) compared to other inpainting methods (under 1 second), which makes our system less suitable for interactive applications. In the future, we will look into speeding up the patch-searching step. Currently, we only detect “darker” hair; however, many other hair colors are possible (white, blond), where curvilinearity is the only safe assumption for hair detection, and we plan to look into detecting these cases. Besides hair and ruler markings, air bubbles inevitably appear in many dermoscopy images. Our framework may be extended for air bubble removal.

56 Comparison to other methods
Fig. 3 shows a performance comparison using this error measure. We can see that while both colorspace conversion (from RGB to L*a*b*) and color-texture integration improved segmentation accuracy, the biggest boost came from embedding the spatial constraints during clustering.

57 Color and texture cue integration
Apply texture gradient filter (Martin et al., 2004) Pseudo-likelihood map - edge caused by texture variation is present at a certain location To measure the amount of texture variation across regions, we apply the texture gradient filter (TG) [8] to the original dermoscopy images. The resulting images are pseudo-likelihood maps, which encode how likely it is that an edge caused by texture variation is present at a given location.

