1 Automatic Disease Detection In Citrus Trees Using Machine Vision Rajesh Pydipati Research Assistant Agricultural Robotics & Mechatronics Group (ARMg) Agricultural & Biological Engineering

2 Introduction The citrus industry is an important constituent of Florida's overall agricultural economy. Florida is the world's leading producing region for grapefruit and second only to Brazil in orange production. The state produces over 80 percent of the United States' supply of citrus.

3 Research Justification Citrus diseases cause economic loss in citrus production through long-term tree damage and through fruit defects that reduce crop size, quality, and marketability. Early detection systems that might detect and possibly treat citrus for observed diseases or nutrient deficiencies could significantly reduce annual losses.

4 Objectives Collect an image data set of various common citrus diseases. Evaluate the Color Co-occurrence Method for disease detection in citrus trees. Develop various strategies and algorithms for classification of the citrus leaves based on the features obtained from the color co-occurrence method. Compare the classification accuracies from the algorithms.

5 Vision based classification

6 Sample Collection and Image Acquisition Leaf sample sets were collected from a typical Florida grapefruit grove for three common citrus diseases and from normal leaves. Specimens were separated by classification into plastic ziploc bags and stored in an environmental chamber maintained at 10 degrees centigrade. Forty digital RGB images were collected for each classification and stored to disk in uncompressed JPEG format. Alternating image selection was used to build the test and training data sets.
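The alternating image selection described above can be sketched as follows (a minimal Python sketch for illustration, not the authors' MATLAB pipeline; the filenames are hypothetical):

```python
# Sketch of alternating selection: 40 images per class split into
# 20 training and 20 test samples by alternating index.
def alternating_split(items):
    """Even-indexed items go to training, odd-indexed to test."""
    return items[0::2], items[1::2]

images = [f"greasy_spot_{i:02d}.jpg" for i in range(40)]  # hypothetical names
train, test = alternating_split(images)  # 20 training, 20 test images
```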

7 Leaf sample images Greasy spot diseased leaf; Melanose diseased leaf

8 Leaf sample images Scab diseased leaf; Normal leaf

9 Ambient vs Laboratory Conditions Initial tests were conducted in a laboratory to minimize uncertainty created by ambient lighting variation. An effort was made to select an artificial light source that would closely represent ambient light. Leaf samples were analyzed individually to identify variations between leaf fronts and backs.

11 Spectrum Comparison with NaturaLight Filter

12 Image Acquisition Specifications Four 16W Cool White Fluorescent bulbs (4500K) with NaturaLight filters and reflectors. JAI MV90 3-CCD color camera with 28-90 mm zoom lens. Coreco PC-RGB 24-bit color frame grabber with 480 by 640 pixels. MV Tools image capture software. Matlab Image Processing Toolbox. SAS statistical analysis package.

13 Image Acquisition System

14 Camera Calibration The camera was calibrated under the artificial light source using a calibration grey-card. An RGB digital image was taken of the grey-card, and each color channel was evaluated using histograms, mean, and standard deviation statistics. Red and green channel gains were adjusted until the grey-card images had similar means in R, G, and B of approximately 128, which is mid-range for a scale from 0 to 255. Standard deviations of the calibrated pixel values were approximately 3.0.
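The calibration check described above can be sketched in Python (the actual gain adjustment happened in the camera/frame-grabber hardware; the tolerance values here are assumptions for illustration):

```python
import numpy as np

# Compute per-channel mean and standard deviation of a grey-card image and
# test them against the stated targets (mean ~128, std ~3.0).
def greycard_stats(rgb):
    """rgb: H x W x 3 array of 8-bit pixel values."""
    flat = rgb.astype(np.float64).reshape(-1, 3)
    return flat.mean(axis=0), flat.std(axis=0)

def is_calibrated(means, stds, target=128.0, mean_tol=2.0, std_max=3.0):
    # mean_tol and std_max are assumed tolerances, not thesis values.
    return bool(np.all(np.abs(means - target) <= mean_tol)
                and np.all(stds <= std_max))
```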

15 Image acquisition and classification flow chart

16 Color co-occurrence method The Color Co-occurrence Method (CCM) uses HSI pixel maps to generate three unique Spatial Gray-level Dependence Matrices (SGDMs). Each sub-image was converted from RGB (red, green, blue) to HSI (hue, saturation, intensity) color format. The SGDM is a measure of the probability that a pixel at one particular gray level will occur at a distinct distance and orientation angle from another pixel, given that the second pixel has a particular gray level.
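An SGDM can be sketched as follows (a Python illustration, not the thesis' MATLAB code; the number of gray levels and the (row, col) offset are assumptions). It counts how often gray level i co-occurs with gray level j at the given offset, normalized into a probability matrix, and would be applied to the hue, saturation, and intensity planes in turn:

```python
import numpy as np

def sgdm(plane, levels=8, offset=(0, 1)):
    """plane: 2-D integer array with values in [0, levels)."""
    dr, dc = offset
    h, w = plane.shape
    m = np.zeros((levels, levels))
    for r in range(h):
        for c in range(w):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < h and 0 <= c2 < w:
                m[plane[r, c], plane[r2, c2]] += 1  # count the pair (i, j)
    total = m.sum()
    return m / total if total else m  # normalize to probabilities
```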

17 CCM Texture Statistics CCM texture statistics were generated from the SGDM of each HSI color feature. Each of the three matrices is evaluated with thirteen texture statistic measures, resulting in 39 texture features per image. The texture statistics were used to build four data models, each using a different combination of the HSI color co-occurrence texture features. The SAS STEPDISC procedure was used to reduce the data models through stepwise variable elimination.

18 Intensity Texture Features I1 - Uniformity; I2 - Mean; I3 - Variance; I4 - Correlation; I5 - Product Moment; I6 - Inverse Difference; I7 - Entropy; I8 - Sum Entropy; I9 - Difference Entropy; I10 - Information Correlation Measure #1; I11 - Information Correlation Measure #2; I12 - Contrast; I13 - Modus
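Three of the thirteen statistics listed above can be sketched from a normalized SGDM p (the formulas follow Haralick's standard definitions and are an assumption about the exact expressions used in the thesis):

```python
import numpy as np

def uniformity(p):   # I1: sum of squared probabilities (angular second moment)
    return float((p ** 2).sum())

def entropy(p):      # I7: -sum p log p over nonzero entries
    nz = p[p > 0]
    return float(-(nz * np.log(nz)).sum())

def contrast(p):     # I12: sum over (i, j) of (i - j)^2 * p(i, j)
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())
```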

19 Classification Models

20 Classifier based on Mahalanobis distance The Mahalanobis distance is a very useful way of determining the similarity of a set of values from an unknown sample to a set of values measured from a collection of known samples. The Mahalanobis distance method is very sensitive to inter-variable changes in the training data.

21 Mahalanobis distance contd. Mahalanobis distance is measured in terms of standard deviations from the mean of the training samples. The reported matching values give a statistical measure of how well the spectrum of the unknown sample matches (or does not match) the original training spectra.

22 Formula for calculating the squared Mahalanobis distance metric: d²(x) = (x − µ)ᵀ ∑⁻¹ (x − µ), where 'x' is the N-dimensional test feature vector (N is the number of features), 'µ' is the N-dimensional mean vector for a particular class of leaves, and '∑' is the N x N covariance matrix for that class of leaves.

23 Minimum distance principle The squared Mahalanobis distance was calculated from a test image to each class of leaves. The minimum distance was used as the criterion for classification decisions.
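The minimum-distance rule can be sketched as follows (Python, for illustration; the class means and covariances below are placeholders, not values from the thesis, where they would come from the 20 training samples per class):

```python
import numpy as np

def mahalanobis_sq(x, mu, cov):
    # Squared Mahalanobis distance: (x - mu)^T cov^-1 (x - mu)
    d = x - mu
    return float(d @ np.linalg.inv(cov) @ d)

def classify(x, class_params):
    """class_params: {label: (mean_vector, covariance_matrix)}.
    Returns the label with the minimum squared distance."""
    return min(class_params,
               key=lambda label: mahalanobis_sq(x, *class_params[label]))
```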

24 Neural networks "A neural network is a system composed of many simple processing elements operating in parallel whose function is determined by network structure, connection strengths, and the processing performed at computing elements or nodes." (According to the DARPA Neural Network Study, 1988, AFCEA International Press, p. 60)

25 Contd. "A neural network is a massively parallel distributed processor that has a natural propensity for storing experiential knowledge and making it available for use. It resembles the brain in two respects: 1. Knowledge is acquired by the network through a learning process. 2. Inter-neuron connection strengths known as synaptic weights are used to store the knowledge." [According to Haykin, S. (1994), Neural Networks: A Comprehensive Foundation, NY: Macmillan, p. 2]

26 A Basic Neuron

27 Multilayer Feed-forward Neural Network

28 Back propagation In the MFNN shown earlier, the input layer of the BP network is generally fully connected to all nodes in the following hidden layer. Input is generally normalized to values between -1 and 1. Each node in the hidden layer acts as a summing node for all inputs and applies an activation function.

29 MFNN with Back propagation A hidden layer neuron first sums all the connection inputs and then sends this result to the activation function for output generation. The outputs are propagated through all the layers until the final output is obtained.

30 Mathematical equations The governing equations are: u = ∑ᵢ wᵢxᵢ and y = f(u + θ), where x1, x2, ... are the input signals; w1, w2, ... the synaptic weights; u is the activation potential of the neuron; θ is the threshold; y is the output signal of the neuron; and f(·) is the activation function.
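The two governing equations can be sketched directly (a Python illustration; the choice of tanh as f(·) is an assumption here, made to match the 'tansig' activation mentioned later):

```python
import numpy as np

def neuron(x, w, theta):
    u = float(np.dot(w, x))           # activation potential: u = sum_i w_i x_i
    return float(np.tanh(u + theta))  # output signal: y = f(u + theta)
```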

31 Back propagation The Back propagation (BP) algorithm is the most important algorithm for the supervised training of multilayer feed-forward ANNs. The BP algorithm was originally developed using gradient descent to train multilayer neural networks to perform desired tasks.

32 Back propagation algorithm The BP training process begins by selecting a set of training input vectors along with corresponding target output vectors. The outputs of the intermediate stages are forward propagated until the output layer nodes are activated. Actual outputs are compared with target outputs using an error criterion.

33 Back propagation The connection weights are updated using the gradient descent approach by back propagating the change in the network weights from the output layer to the input layer. The net change to the network is accomplished at the end of one training cycle.

34 BP network architecture used in the research Network architecture: an input layer; 2 hidden layers with 10 processing elements each; an output layer consisting of 4 output neurons; 'tansig' activation function used at all layers.
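The stated architecture and training scheme can be sketched in Python (a hedged illustration, not the thesis' MATLAB/toolbox code: input -> 10 -> 10 -> 4 with tanh, i.e. 'tansig', at every layer, trained by plain gradient-descent backpropagation on squared error; the weight scale, learning rate, and iteration count are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def init(sizes):
    # One (W, b) pair per layer transition.
    return [(rng.normal(0, 0.5, (m, n)), np.zeros(m))
            for n, m in zip(sizes, sizes[1:])]

def forward(params, x):
    acts = [x]
    for W, b in params:
        acts.append(np.tanh(W @ acts[-1] + b))
    return acts  # activations of every layer, input included

def backprop(params, x, t, lr=0.1):
    acts = forward(params, x)
    # Output-layer error term; (1 - y^2) is the tanh derivative.
    delta = (acts[-1] - t) * (1 - acts[-1] ** 2)
    for i in range(len(params) - 1, -1, -1):
        W, b = params[i]
        grad_W = np.outer(delta, acts[i])
        grad_b = delta
        if i > 0:  # propagate the error term back one layer
            delta = (W.T @ delta) * (1 - acts[i] ** 2)
        params[i] = (W - lr * grad_W, b - lr * grad_b)

# Toy usage: fit a single input/target pair.
params = init([2, 10, 10, 4])
x = np.array([1.0, -1.0])
t = np.array([0.5, -0.5, 0.5, -0.5])
for _ in range(200):
    backprop(params, x, t)
```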

35 Radial basis function networks A radial basis function network is a neural network designed by viewing the learning problem as a curve-fitting (approximation) problem in a high-dimensional space. Learning is equivalent to finding a multidimensional function that provides a best fit to the training data.

36 An RBF network

37 RBF contd. The RBF front layer is the input layer, where the input vector is applied to the network. The hidden layer consists of radial basis function neurons, which perform a fixed non-linear transformation mapping the input space into a new space. The output layer serves as a linear combiner for the new space.

38 RBF network used in the research Network architecture: 80 radial basis functions in the hidden layer; 2 outputs in the output layer.
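The RBF structure described above can be sketched as follows (a hedged Python illustration, not the thesis implementation: Gaussian hidden units, here centred on the training points, and a linear output layer solved by least squares; the centre and width choices are assumptions):

```python
import numpy as np

def rbf_design(X, centers, sigma=1.0):
    # Matrix of hidden-unit responses: exp(-||x - c||^2 / (2 sigma^2)).
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

def rbf_fit(X, T, sigma=1.0):
    # Use every training point as a centre; fit linear output weights.
    Phi = rbf_design(X, X, sigma)
    W, *_ = np.linalg.lstsq(Phi, T, rcond=None)
    return X, W

def rbf_predict(Xq, centers, W, sigma=1.0):
    return rbf_design(Xq, centers, sigma) @ W
```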

39 Data Preparation 40 images each of the four classes of leaves were taken. The images were divided into training and test data sets sequentially for all the classes. Feature extraction was performed for all the images using the CCM method.

40 Data Preparation Finally the data was divided into two text files: 1) training texture feature data (with all 39 texture features) and 2) test texture feature data (with all 39 texture features). The files had 80 rows each, representing 20 samples from each of the four classes of leaves as discussed earlier. Each row had 39 columns representing the 39 texture features extracted for a particular sample image.

41 Data preparation Each row also carried a unique number (1, 2, 3 or 4) representing the class to which that row of data belonged. These basic files were used to select the appropriate input for the various data models based on SAS analysis.
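Reading the feature files described above can be sketched as follows (Python, for illustration; the label is assumed to occupy the last column, and the filename is hypothetical):

```python
import numpy as np

def split_features(data):
    """data: (n, 40) array -> (features (n, 39), integer labels (n,))."""
    return data[:, :39], data[:, 39].astype(int)

def load_features(path):
    # Each row: 39 texture features followed by a class label in {1, 2, 3, 4}.
    return split_features(np.loadtxt(path))

# Usage (hypothetical file): X_train, y_train = load_features("train.txt")
```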

42 Experimental methods The training data was used to train the various classifiers discussed in the earlier slides. Once training was complete, the test data was used to measure classification accuracies. Results for the various classifiers are given in the following slides.

43 Results

44 Results

45 Results

46 Results

47 Comparison of various classifiers for Model 1B (classification accuracy, %)

Classifier    Greasy spot  Melanose  Normal  Scab  Overall
SAS           100          100       90      95    96.3
Mahalanobis   100          100       100     95    98.75
NNBP          100          90        95      95    95
RBF           100          100       85      60    86.25

48 Summary It is concluded that model 1B, consisting of features from hue and saturation, is the best model for the task of citrus leaf classification. Elimination of intensity from the texture feature calculation is the major advantage: it nullifies the effect of lighting variation in an outdoor environment.

49 Conclusion The research was a feasibility analysis of whether the techniques investigated here can be implemented in future real-time applications. The results are a positive step in that direction; nevertheless, a real-time system will involve some modifications and tradeoffs to make it practical for outdoor applications.

50 Thank You May I answer any questions?

