Presentation transcript: "Let's see just one example…."

1 Let's see just one example….
We have seen that we can do machine learning on data that is in the nice "flat file" format: rows are objects, columns are features. Taking a real problem and "massaging" it into this format is domain dependent, but often the most fun part of machine learning. Let's see just one example….
Insect ID | Abdomen Length | Antennae Length | Insect Class
1 | 2.7 | 5.5 | Grasshopper
2 | 8.0 | 9.1 | Katydid
3 | 0.9 | 4.7 |
4 | 1.1 | 3.1 |
5 | 5.4 | 8.5 |
6 | 2.9 | 1.9 |
7 | 6.1 | 6.6 |
8 | 0.5 | 1.0 |
9 | 8.3 | |
10 | 8.1 | | Katydids
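As a quick illustration of this "flat file" convention, here is a minimal sketch of how such a table might be loaded; the file name insects.csv and the pandas dependency are assumptions for illustration, not part of the slides:

```python
import pandas as pd

# Hypothetical file whose columns mirror the table above.
df = pd.read_csv("insects.csv")

X = df[["Abdomen Length", "Antennae Length"]]   # rows = insects, columns = features
y = df["Insect Class"]                           # the class label for each row

print(X.shape)              # (number of insects, number of features)
print(y.value_counts())     # how many Grasshoppers and Katydids we have
```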

2 Western Pipistrelle (Parastrellus hesperus). Photo by Michael Durham.

3 A spectrogram of a bat call.
Western pipistrelle calls

4 Characteristic frequency
We can easily measure two features of bat calls: their characteristic frequency and their call duration.
Bat ID | Characteristic frequency | Call duration (ms) | Bat Species
1 | 49.7 | 5.5 | Western pipistrelle

5

6

7

8

9 Classification We have seen 2 classification techniques:
Simple linear classifier, Nearest neighbor. Let us see two more techniques: Decision tree, Naïve Bayes. There are other techniques (Neural Networks, Support Vector Machines, …) that we will not consider.

10 I have a box of apples.
If Pr(X = good) = p, then Pr(X = bad) = 1 − p, and the entropy of X is given by the binary entropy function H(X) = −p log2(p) − (1 − p) log2(1 − p). The binary entropy function attains its maximum value of 1 when p = 0.5, and is 0 when the box is all good or all bad.
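A minimal sketch of the binary entropy function just described (plain Python; nothing here is specific to the apples example):

```python
import math

def binary_entropy(p):
    """H(X) for a two-outcome variable with Pr(X = good) = p."""
    if p in (0.0, 1.0):              # by convention, 0 * log(0) = 0
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(binary_entropy(0.5))   # 1.0  -> maximum uncertainty
print(binary_entropy(1.0))   # 0.0  -> all good (or all bad): no uncertainty
```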

11 Decision Tree Classifier
Ross Quinlan. [Scatter plot of the insects: Antenna Length vs. Abdomen Length.]
Abdomen Length > 7.1? If yes: Katydid. If no: Antenna Length > 6.0? If yes: Katydid. If no: Grasshopper.

12 Antennae shorter than body?
Decision trees predate computers. If yes: Grasshopper. If no: 3 Tarsi? If no: Cricket. If yes: Foretiba has ears? If yes: Katydids. If no: Camel Cricket.

13 Decision Tree Classification
A flow-chart-like tree structure:
- An internal node denotes a test on an attribute
- A branch represents an outcome of the test
- Leaf nodes represent class labels or class distributions
Decision tree generation consists of two phases:
- Tree construction: at the start, all the training examples are at the root; examples are then partitioned recursively based on selected attributes
- Tree pruning: identify and remove branches that reflect noise or outliers
Use of a decision tree: to classify an unknown sample, test the attribute values of the sample against the decision tree.

14 How do we construct the decision tree?
Basic algorithm (a greedy algorithm):
- The tree is constructed in a top-down, recursive, divide-and-conquer manner
- At the start, all the training examples are at the root
- Attributes are categorical (if continuous-valued, they can be discretized in advance)
- Examples are partitioned recursively based on selected attributes
- Test attributes are selected on the basis of a heuristic or statistical measure (e.g., information gain)
Conditions for stopping partitioning:
- All samples for a given node belong to the same class
- There are no remaining attributes for further partitioning – majority voting is employed for classifying the leaf
- There are no samples left
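A compact sketch of this greedy, top-down, divide-and-conquer procedure for categorical attributes (the function and variable names are illustrative, not from the slides):

```python
import math
from collections import Counter

def entropy(labels):
    counts = Counter(labels)
    return -sum(c / len(labels) * math.log2(c / len(labels)) for c in counts.values())

def information_gain(examples, attribute):
    """Entropy of the labels minus the weighted entropy of each partition."""
    labels = [y for _, y in examples]
    values = {x[attribute] for x, _ in examples}
    remainder = 0.0
    for v in values:
        subset = [(x, y) for x, y in examples if x[attribute] == v]
        remainder += len(subset) / len(examples) * entropy([y for _, y in subset])
    return entropy(labels) - remainder

def build_tree(examples, attributes):
    labels = [y for _, y in examples]
    if len(set(labels)) == 1:                        # all samples in one class
        return labels[0]
    if not attributes:                               # nothing left to split on: majority vote
        return Counter(labels).most_common(1)[0][0]
    best = max(attributes, key=lambda a: information_gain(examples, a))
    rest = [a for a in attributes if a != best]
    return {best: {v: build_tree([(x, y) for x, y in examples if x[best] == v], rest)
                   for v in {x[best] for x, _ in examples}}}

# examples is a list of (feature-dict, label) pairs, e.g.
# build_tree([({"Fluting": "yes"}, "Clovis"), ({"Fluting": "no"}, "Avonlea")], ["Fluting"])
```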

15 Information Gain as a Splitting Criterion
Select the attribute with the highest information gain (information gain is the expected reduction in entropy). Assume there are two classes, P and N. Let the set of examples S contain p elements of class P and n elements of class N. The amount of information needed to decide if an arbitrary example in S belongs to P or N is defined as I(p, n) = −(p/(p+n)) log2(p/(p+n)) − (n/(p+n)) log2(n/(p+n)). (0 log 0 is defined as 0.)

16 Information Gain in Decision Tree Induction
Assume that using attribute A, the current set S will be partitioned into child sets S1, S2, …, Sv. The expected information after branching on A is E(A) = Σi ((pi + ni)/(p + n)) · I(pi, ni), so the encoding information that would be gained by branching on A is Gain(A) = I(p, n) − E(A). Note: entropy is at its minimum (zero) if the collection of objects is completely homogeneous (all one class).

17 The training data: ten people described by Hair Length, Weight, Age, and Class (sex).
Person | Hair Length | Weight | Age | Class
Homer | 0” | 250 | 36 | M
Marge | 10” | 150 | 34 | F
Bart | 2” | 90 | 10 | M
Lisa | 6” | 78 | 8 | F
Maggie | 4” | 20 | 1 | F
Abe | 1” | 170 | 70 | M
Selma | 8” | 160 | 41 | F
Otto | | 180 | 38 | M
Krusty | | 200 | 45 | M
Comic | 8” | 290 | 38 | ?

18 Let us try splitting on Hair length
Entropy(4F,5M) = -(4/9)log2(4/9) - (5/9)log2(5/9) = 0.9911
Hair Length <= 5?
yes: Entropy(1F,3M) = -(1/4)log2(1/4) - (3/4)log2(3/4) = 0.8113
no: Entropy(3F,2M) = -(3/5)log2(3/5) - (2/5)log2(2/5) = 0.9710
Gain(Hair Length <= 5) = 0.9911 – (4/9 × 0.8113 + 5/9 × 0.9710) = 0.0911

19 Let us try splitting on Weight
Entropy(4F,5M) = -(4/9)log2(4/9) - (5/9)log2(5/9) = 0.9911
Weight <= 160?
yes: Entropy(4F,1M) = -(4/5)log2(4/5) - (1/5)log2(1/5) = 0.7219
no: Entropy(0F,4M) = -(0/4)log2(0/4) - (4/4)log2(4/4) = 0
Gain(Weight <= 160) = 0.9911 – (5/9 × 0.7219 + 4/9 × 0) = 0.5900

20 Let us try splitting on Age
Entropy(4F,5M) = -(4/9)log2(4/9) - (5/9)log2(5/9) = 0.9911
Age <= 40?
yes: Entropy(3F,3M) = -(3/6)log2(3/6) - (3/6)log2(3/6) = 1
no: Entropy(1F,2M) = -(1/3)log2(1/3) - (2/3)log2(2/3) = 0.9183
Gain(Age <= 40) = 0.9911 – (6/9 × 1 + 3/9 × 0.9183) = 0.0183
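The three gains above can be verified with a few lines of Python, working directly from the class counts shown on these slides (a sketch; nothing beyond those counts is assumed):

```python
import math

def entropy(counts):
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def gain(parent, children):
    n = sum(parent)
    return entropy(parent) - sum(sum(ch) / n * entropy(ch) for ch in children)

parent = (4, 5)                                   # 4 Female, 5 Male at the root
print(round(gain(parent, [(1, 3), (3, 2)]), 4))   # Hair Length <= 5  -> 0.0911
print(round(gain(parent, [(4, 1), (0, 4)]), 4))   # Weight <= 160     -> 0.59
print(round(gain(parent, [(3, 3), (1, 2)]), 4))   # Age <= 40         -> 0.0183
```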

21 This time we find that we can split on Hair length, and we are done!
Of the 3 features we had, Weight was best. But while people who weigh over 160 are perfectly classified (as males), the under-160 people are not perfectly classified… So we simply recurse on the Weight <= 160 branch! This time we find that we can split on Hair Length (Hair Length <= 2?), and we are done!

22 We don't need to keep the data around, just the test conditions.
Weight <= 160? If no: Male. If yes: Hair Length <= 2? If yes: Male. If no: Female. How would these people be classified?

23 It is trivial to convert Decision Trees to rules…
Weight <= 160? If no: Male. If yes: Hair Length <= 2? If yes: Male. If no: Female.
Rules to Classify Males/Females:
If Weight greater than 160, classify as Male;
else if Hair Length less than or equal to 2, classify as Male;
else classify as Female.
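Written as code, those rules are just nested conditionals; a minimal sketch (the function name classify is illustrative):

```python
def classify(weight, hair_length):
    """The rules read off the decision tree above."""
    if weight > 160:
        return "Male"
    elif hair_length <= 2:
        return "Male"
    else:
        return "Female"

print(classify(weight=290, hair_length=8))   # the "Comic" row -> Male
print(classify(weight=150, hair_length=10))  # the "Marge" row -> Female
```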

24 Once we have learned the decision tree, we don’t even need a computer!
This decision tree is attached to a medical machine, and is designed to help nurses make decisions about what type of doctor to call. Decision tree for a typical shared-care setting applying the system for the diagnosis of prostatic obstructions.

25 PSA = serum prostate-specific antigen levels PSAD = PSA density
TRUS = transrectal ultrasound  Garzotto M et al. JCO 2005;23:

26 The worked examples we have seen were performed on small datasets
The worked examples we have seen were performed on small datasets. However, with small datasets there is a great danger of overfitting the data… When you have few datapoints, there are many possible splitting rules that perfectly classify the data but will not generalize to future datasets. For example, the rule "Wears green?" (Yes: Female, No: Male) perfectly classifies the data; so does "Mother's name is Jacqueline?", and so does "Has blue shoes"…

27 Avoid Overfitting in Classification
The generated tree may overfit the training data:
- Too many branches, some of which may reflect anomalies due to noise or outliers
- The result is poor accuracy for unseen samples
Two approaches to avoid overfitting:
- Prepruning: halt tree construction early—do not split a node if this would result in the goodness measure falling below a threshold. It is difficult to choose an appropriate threshold.
- Postpruning: remove branches from a "fully grown" tree to get a sequence of progressively pruned trees. Use a set of data different from the training data to decide which is the "best pruned tree".

28 Which of the "Pigeon Problems" can be solved by a Decision Tree?
[The three "Pigeon Problem" scatter plots, annotated: Deep Bushy Tree; Useless; ?] The Decision Tree has a hard time with correlated attributes.

29 Advantages/Disadvantages of Decision Trees
Advantages: easy to understand (Doctors love them!); easy to generate rules.
Disadvantages: may suffer from overfitting; classifies by rectangular partitioning (so does not handle correlated features very well); can be quite large – pruning is necessary; does not handle streaming data easily.

30

31 How would we go about building a classifier for projectile points?

32 The point shown is coded 21225212 (length = 3.10, width = 1.45, length/width ratio = 2.13).
I. Location of maximum blade width: 1. Proximal quarter; 2. Secondmost proximal quarter; 3. Secondmost distal quarter; 4. Distal quarter
II. Base shape: 1. Arc-shaped; 2. Normal curve; 3. Triangular; 4. Folsomoid
III. Basal indentation ratio: 1. No basal indentation; 2. 0·90–0·99 (shallow); 3. 0·80–0·89 (deep)
IV. Constriction ratio: 1. 1·00; 2. 0·90–0·99; 3. 0·80–0·89; 4. 0·70–0·79; 5. 0·60–0·69; 6. 0·50–0·59
V. Outer tang angle: 1. 93–115; 2. 88–92; 3. 81–87; 4. 66–88; 5. 51–65; 6. <50
VI. Tang-tip shape: 1. Pointed; 2. Round; 3. Blunt
VII. Fluting: 1. Absent; 2. Present
VIII. Length/width ratio: 1. 1·00–1·99; 2. 2·00–2·99; 3. 3·00–3·99; 4. 4·00–4·99; 5. 5·00–5·99; 6. >6·00

33 Using these coded attributes, we can build a decision tree for the point 21225212.
[Decision tree with tests such as "Fluting? = TRUE?", "Base Shape = 4", and "Length/width ratio = 2", and leaves labeled Late Archaic and Mississippian; the coding scheme from the previous slide is repeated alongside.]

34 We could also use the Nearest Neighbor Algorithm: the unknown point, coded 21225212, is compared to labeled examples such as 21265122 (Late Archaic) and 14114214 (Transitional Paleo).
[Nearest neighbors shown with labels: Late Archaic, Transitional Paleo, Transitional Paleo, Late Archaic, Woodland.]

35 Arrowhead Decision Tree
It might be better to use the shape directly in the decision tree… Lexiang Ye and Eamonn Keogh (2009), Time Series Shapelets: A New Primitive for Data Mining, SIGKDD 2009. [Figure: a training-data subset of Avonlea, Clovis and mixed points; a shapelet dictionary with Clovis and Avonlea shapelets; and the resulting arrowhead decision tree.] The shapelet decision tree classifier achieves an accuracy of 80.0%, while the accuracy of the rotation-invariant one-nearest-neighbor classifier is 68.0%.

36 Naïve Bayes Classifier
Thomas Bayes We will start off with a visual intuition, before looking at the math…

37 Remember this example? Let's get lots more data…
[Scatter plot: Antenna Length vs. Abdomen Length, with Grasshoppers and Katydids marked.]

38 With a lot of data, we can build a histogram
With a lot of data, we can build a histogram. Let us just build one for "Antenna Length" for now…
[Histograms of Antenna Length for Katydids and Grasshoppers.]

39 We can leave the histograms as they are, or we can summarize them with two normal distributions.
Let us use two normal distributions for ease of visualization in the following slides…

40 p(cj | d) = probability of class cj, given that we have observed d
We want to classify an insect we have found. Its antennae are 3 units long. How can we classify it? We can just ask ourselves: given the distributions of antennae lengths we have seen, is it more probable that our insect is a Grasshopper or a Katydid? There is a formal way to discuss the most probable classification… p(cj | d) = probability of class cj, given that we have observed d. Antennae length is 3.

41 p(cj | d) = probability of class cj, given that we have observed d
Antennae length is 3 (10 grasshoppers vs. 2 katydids at that length).
P(Grasshopper | 3) = 10 / (10 + 2) = 0.833
P(Katydid | 3) = 2 / (10 + 2) = 0.166

42 p(cj | d) = probability of class cj, given that we have observed d
Antennae length is 7 (3 grasshoppers vs. 9 katydids at that length).
P(Grasshopper | 7) = 3 / (3 + 9) = 0.250
P(Katydid | 7) = 9 / (3 + 9) = 0.750

43 p(cj | d) = probability of class cj, given that we have observed d
Antennae length is 5 (6 grasshoppers vs. 6 katydids at that length).
P(Grasshopper | 5) = 6 / (6 + 6) = 0.500
P(Katydid | 5) = 6 / (6 + 6) = 0.500

44 Bayes Classifiers That was a visual intuition for a simple case of the Bayes classifier, also called: Idiot Bayes Naïve Bayes Simple Bayes We are about to see some of the mathematical formalisms, and more examples, but keep in mind the basic idea. Find out the probability of the previously unseen instance belonging to each class, then simply pick the most probable class.

45 Bayes Classifiers Bayesian classifiers use Bayes theorem, which says
p(cj | d) = p(d | cj) p(cj) / p(d)
p(cj | d) = probability of instance d being in class cj. This is what we are trying to compute.
p(d | cj) = probability of generating instance d given class cj. We can imagine that being in class cj causes you to have feature d with some probability.
p(cj) = probability of occurrence of class cj. This is just how frequent the class cj is in our database.
p(d) = probability of instance d occurring. This can actually be ignored, since it is the same for all classes.

46 Assume that we have two classes: c1 = male and c2 = female.
We have a person whose sex we do not know, say "drew" or d. Classifying drew as male or female is equivalent to asking: is it more probable that drew is male or female, i.e., which is greater, p(male | drew) or p(female | drew)? (Note: "Drew" can be a male or female name — Drew Barrymore, Drew Carey.)
p(male | drew) = p(drew | male) p(male) / p(drew)
What is the probability of being called "drew" given that you are a male? What is the probability of being a male? What is the probability of being named "drew"? (Actually irrelevant, since it is the same for all classes.)

47 p(cj | d) = p(d | cj) p(cj) / p(d)
This is Officer Drew (who arrested me in 1997). Is Officer Drew a Male or Female? Luckily, we have a small database with names and sex. We can use it to apply Bayes rule…
Name | Sex
Drew | Male
Claudia | Female
Drew | Female
Drew | Female
Alberto | Male
Karin | Female
Nina | Female
Sergio | Male

48 p(cj | d) = p(d | cj) p(cj) / p(d)
Officer Drew (using the Name/Sex table above):
p(male | drew) = (1/3 × 3/8) / (3/8) = 0.125 / (3/8)
p(female | drew) = (2/5 × 5/8) / (3/8) = 0.250 / (3/8)
Officer Drew is more likely to be a Female.
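The same arithmetic in a few lines of Python (a sketch; the counts are read directly from the Name/Sex table above):

```python
# 3 of the 8 people are male and 5 are female;
# 1 of the 3 males and 2 of the 5 females are named Drew.
p_male, p_female = 3/8, 5/8
p_drew_given_male, p_drew_given_female = 1/3, 2/5

score_male   = p_drew_given_male   * p_male     # 0.125
score_female = p_drew_given_female * p_female   # 0.250

# p(drew) = 3/8 is the same for both classes, so it cannot change the ranking.
print("Female" if score_female > score_male else "Male")   # Female
```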

49 Officer Drew IS a female!
p(male | drew) = (1/3 × 3/8) / (3/8) = 0.125 / (3/8)
p(female | drew) = (2/5 × 5/8) / (3/8) = 0.250 / (3/8)

50 p(cj | d) = p(d | cj) p(cj) / p(d)
So far we have only considered Bayes classification when we have one attribute (the "antennae length", or the "name"). But we may have many features. How do we use all the features?
Name | Over 170cm | Eye | Hair length | Sex
Drew | No | Blue | Short | Male
Claudia | Yes | Brown | Long | Female
Alberto | … | … | … | Male
Karin | … | … | … | Female
Nina | … | … | … | Female
Sergio | … | … | … | Male

51 p(d|cj) = p(d1|cj) * p(d2|cj) * ….* p(dn|cj)
To simplify the task, naïve Bayesian classifiers assume attributes have independent distributions, and thereby estimate p(d|cj) = p(d1|cj) * p(d2|cj) * ….* p(dn|cj) The probability of class cj generating instance d, equals…. The probability of class cj generating the observed value for feature 1, multiplied by.. The probability of class cj generating the observed value for feature 2, multiplied by..

52 p(d|cj) = p(d1|cj) * p(d2|cj) * ….* p(dn|cj)
To simplify the task, naïve Bayesian classifiers assume attributes have independent distributions, and thereby estimate p(d|cj) = p(d1|cj) * p(d2|cj) * ….* p(dn|cj) p(officer drew|cj) = p(over_170cm = yes|cj) * p(eye =blue|cj) * …. Officer Drew is blue-eyed, over 170cm tall, and has long hair p(officer drew| Female) = 2/5 * 3/5 * …. p(officer drew| Male) = 2/3 * 2/3 * ….

53 cj … p(d1|cj) p(d2|cj) p(dn|cj)
The Naive Bayes classifier is often represented as this type of graph: the class node cj at the top, with arrows pointing to the feature nodes p(d1|cj), p(d2|cj), …, p(dn|cj). Note the direction of the arrows, which state that each class causes certain features, with a certain probability.

54 cj … Naïve Bayes is fast and space efficient
We can look up all the probabilities with a single scan of the database and store them in a (small) table…
Sex | p(Over 190cm = Yes) | p(Over 190cm = No)
Male | 0.15 | 0.85
Female | 0.01 | 0.99
Sex | p(Long Hair = Yes) | p(Long Hair = No)
Male | 0.05 | 0.95
Female | 0.70 | 0.30
(plus a table of the class priors for Male and Female)
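A minimal sketch of that idea: one scan of the data builds the count tables, and classification is just a product of looked-up probabilities. The tiny dataset and its values are invented for illustration, not the numbers from the slide (a real implementation would also smooth zero counts):

```python
from collections import Counter, defaultdict

# Invented toy data: (features, class) pairs.
data = [
    ({"over_190cm": "yes", "long_hair": "no"},  "Male"),
    ({"over_190cm": "no",  "long_hair": "no"},  "Male"),
    ({"over_190cm": "no",  "long_hair": "yes"}, "Female"),
    ({"over_190cm": "no",  "long_hair": "yes"}, "Female"),
    ({"over_190cm": "no",  "long_hair": "no"},  "Female"),
]

# Single scan: count classes and (feature, value) occurrences per class.
class_counts = Counter()
feature_counts = defaultdict(Counter)           # (feature, class) -> Counter of values
for features, cls in data:
    class_counts[cls] += 1
    for feature, value in features.items():
        feature_counts[(feature, cls)][value] += 1

def classify(features):
    scores = {}
    for cls, n_cls in class_counts.items():
        score = n_cls / len(data)                                   # prior p(c)
        for feature, value in features.items():
            score *= feature_counts[(feature, cls)][value] / n_cls  # p(d_i | c)
        scores[cls] = score
    return max(scores, key=scores.get)

print(classify({"over_190cm": "no", "long_hair": "yes"}))   # Female
```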

55 Naïve Bayes is NOT sensitive to irrelevant features...
Suppose we are trying to classify a person's sex based on several features, including eye color. (Of course, eye color is completely irrelevant to a person's gender.)
p(Jessica | cj) = p(eye = brown | cj) * p(wears_dress = yes | cj) * ….
p(Jessica | Female) = 9,000/10,000 * 9,975/10,000 * ….
p(Jessica | Male) = 9,001/10,000 * 2/10,000 * ….
Almost the same! However, this assumes that we have good enough estimates of the probabilities, so the more data the better.

56 An obvious point: I have used a simple two-class problem, with two possible values for each feature, in my previous examples. However, we can have an arbitrary number of classes, or feature values.
Animal | p(Mass > 10kg = Yes) | p(Mass > 10kg = No)
Cat | 0.15 | 0.85
Dog | 0.91 | 0.09
Pig | 0.99 | 0.01
Animal | p(Color = Black) | p(Color = White) | p(Color = Brown)
Cat | 0.33 | 0.23 | 0.44
Dog | 0.97 | 0.03 | 0.90
Pig | 0.04 | 0.01 | 0.95
(plus a table of the class priors for Cat, Dog and Pig)

57 Naïve Bayesian Classifier
Problem! Naïve Bayes assumes independence of features…
Sex | p(Over 6 foot = Yes) | p(Over 6 foot = No)
Male | 0.15 | 0.85
Female | 0.01 | 0.99
Sex | p(Over 200 pounds = Yes) | p(Over 200 pounds = No)
Male | 0.11 | 0.80
Female | 0.05 | 0.95

58 Naïve Bayesian Classifier
Solution: consider the relationships between attributes…
Sex | p(Over 6 foot = Yes) | p(Over 6 foot = No)
Male | 0.15 | 0.85
Female | 0.01 | 0.99
Sex | Over 200 pounds
Male | Yes and Over 6 foot: 0.11; No and Over 6 foot: 0.59; Yes and NOT Over 6 foot: 0.05; No and NOT Over 6 foot: 0.35
Female | 0.01 …

59 Naïve Bayesian Classifier
Solution: consider the relationships between attributes… But how do we find the set of connecting arcs?

60 The Naïve Bayesian Classifier has a piecewise quadratic decision boundary
Grasshoppers Katydids Ants Adapted from slide by Ricardo Gutierrez-Osuna

61 Which of the “Pigeon Problems” can be solved by a decision tree?
[The three "Pigeon Problem" scatter plots, repeated.]

62 Advantages/Disadvantages of Naïve Bayes
Advantages: fast to train (single scan); fast to classify; not sensitive to irrelevant features; handles real and discrete data; handles streaming data well.
Disadvantages: assumes independence of features.

63 Summary: We have seen the four most common algorithms used for classification. We have seen that there is no one "best" algorithm. We have seen that issues like normalizing, cleaning, and converting the data can make a huge difference. We have only scratched the surface! How do we learn with no class labels? (clustering) How do we learn with expensive class labels? (active learning) How do we spot outliers? (anomaly detection) How do we…
Popular science book: The Master Algorithm by Pedro Domingos. Textbook: Data Mining: The Textbook by Charu C. Aggarwal.

64

65 Malaria Malaria afflicts about 4% of all humans, killing one million of them each year.

66

67 Malaria Deaths (2003)

68 There are interventions to mitigate the problem
A recent meta-review of randomized controlled trials of Insecticide Treated Nets (ITNs) found that ITNs can reduce malaria-related deaths in children by one fifth and episodes of malaria by half. Mosquito nets work!

69 How do we know where to do the interventions, given that we have finite resources?

70 One second of audio from our sensor.
The Common Eastern Bumble Bee (Bombus impatiens) takes about one tenth of a second to pass the laser. [Waveform annotations: background noise; bee begins to cross the laser; bee has passed through the laser.]

71

72 Bee begins to cross laser
One second of audio from the laser sensor. Only Bombus impatiens (Common Eastern Bumble Bee) is in the insectary. [Top: the waveform, annotated with background noise and the bee beginning to cross the laser. Bottom: the Single-Sided Amplitude Spectrum of Y(t), |Y(f)| vs. Frequency (Hz), showing a peak at 197 Hz, its harmonics, and 60 Hz interference.]
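The single-sided amplitude spectrum above can be reproduced with a standard FFT; a sketch assuming a one-second recording sampled at 8000 Hz (the sampling rate and the synthetic stand-in signal are assumptions, not the real sensor data):

```python
import numpy as np

fs = 8000                                  # assumed sampling rate (Hz)
t = np.arange(fs) / fs                     # one second of samples
# Stand-in for the sensor audio: a 197 Hz "wingbeat" tone plus noise.
signal = np.sin(2 * np.pi * 197 * t) + 0.1 * np.random.randn(fs)

amplitude = np.abs(np.fft.rfft(signal)) / len(signal)   # single-sided amplitude (up to a constant)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

peak_hz = freqs[np.argmax(amplitude[1:]) + 1]           # skip the DC bin
print(f"Dominant frequency: {peak_hz:.0f} Hz")          # ~197 Hz
```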

73 [Single-sided amplitude spectra, |Y(f)| vs. Frequency (Hz, 100–1000), for three recordings.]

74 [Single-sided amplitude spectra, |Y(f)| vs. Frequency (Hz, 100–1000), for three more recordings.]

75 [Histogram of wing-beat frequencies, 100–700 Hz.]

76 Anopheles stephensi is a primary mosquito vector of malaria.
The yellow fever mosquito (Aedes aegypti) can spread dengue fever, chikungunya, and yellow fever viruses. [Wing-beat frequency distributions, 100–700 Hz.]

77 Anopheles stephensi: Female mean =475, Std = 30
Anopheles stephensi: Female mean = 475, Std = 30. Aedes aegypti: Female mean = 567, Std = 43. The two distributions cross at about 517 Hz. If I see an insect with a wingbeat frequency of 500, what is it?
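With each species summarized by a normal distribution, the 500 Hz insect can be scored directly; a minimal sketch using the means and standard deviations above (equal priors are assumed):

```python
from math import exp, pi, sqrt

def normal_pdf(x, mean, std):
    return exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * sqrt(2 * pi))

# Female wing-beat frequency models from the slide.
p_anopheles = normal_pdf(500, mean=475, std=30)
p_aedes     = normal_pdf(500, mean=567, std=43)

# 500 Hz lies below the ~517 Hz crossover, so Anopheles wins.
print("Anopheles stephensi" if p_anopheles > p_aedes else "Aedes aegypti")
```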

78 Can we get more features?
What is the error rate? With a decision threshold at 517 Hz, 8.02% of the area under the red curve and 12.2% of the area under the pink curve fall on the wrong side. Can we get more features?

79 Aedes aegypti (yellow fever mosquito)
Circadian Features: Aedes aegypti (yellow fever mosquito). [Activity over the day, midnight to midnight, with dawn and dusk marked.]

80 Suppose I observe an insect with a wingbeat frequency of 420Hz
Suppose I observe an insect with a wingbeat frequency of 420 Hz. What is it?

81 Suppose I observe an insect with a wingbeat frequency of 420 Hz at 11:00 am. What is it? [Wing-beat frequency and time-of-day distributions shown.]

82 Suppose I observe an insect with a wingbeat frequency of 420 Hz at 11:00 am. What is it? [Wing-beat frequency and time-of-day histograms for Culex, Anopheles, and Aedes.]
P(Culex | [420Hz, 11:00am]) = (6 / (6 + 6 + 0)) × (2 / (2 + 4 + 3)) = 0.111
P(Anopheles | [420Hz, 11:00am]) = (6 / (6 + 6 + 0)) × (4 / (2 + 4 + 3)) = 0.222
P(Aedes | [420Hz, 11:00am]) = (0 / (6 + 6 + 0)) × (3 / (2 + 4 + 3)) = 0.000
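The same product of evidence in code; a sketch using only the counts quoted above (6, 6, 0 insects near 420 Hz and 2, 4, 3 insects active around 11:00 am for Culex, Anopheles and Aedes respectively):

```python
freq_counts = {"Culex": 6, "Anopheles": 6, "Aedes": 0}   # insects near 420 Hz
time_counts = {"Culex": 2, "Anopheles": 4, "Aedes": 3}   # insects active around 11:00 am

freq_total = sum(freq_counts.values())   # 12
time_total = sum(time_counts.values())   # 9

for species in freq_counts:
    score = (freq_counts[species] / freq_total) * (time_counts[species] / time_total)
    print(species, round(score, 3))      # Culex 0.111, Anopheles 0.222, Aedes 0.0
```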

83 Blue Sky Ideas Once you have a classifier working, you begin to see new uses for them… Let us see some examples..

84 Capturing or killing individually targeted insects
Most efforts to capture or kill insects are "shotgun": many non-targeted insects (including beneficial ones) are killed or captured. In some cases the ratios are 1,000 to 1 (i.e., 1,000 non-targeted insects are affected for each one that was targeted). We believe our sensors allow an ultra-precise approach, with a ratio approaching 1 to 1. This has obvious implications for SIT/metagenomics.

85 Zoom-in (after removing the wing)
Kill: It seems obvious you could kill a mosquito with a powerful enough laser and with enough time. But we need to do it fast, with as little power as possible. We have gotten this down to 1/20th of a second, and just 1 watt (and falling). The mosquitoes may survive the laser strike, but they cannot fly away (as was the case in the photo shown at right). We are building a SIT "Hotel California" for female mosquitoes (you can check out any time you like, but you can never leave). Culex tarsalis. Collaboration with UCR mechanical engineers Amir Rose and Dr. Guillermo Aguilar. Zoom-in (after removing the wing).

Capture: We envision building robotic traps that can be left in the field and programmed with different sampling missions. Such traps could be placed and retrieved by drones. Capturing live insects is important if you want to do metagenomics. Some examples of sampling missions…
- Capture examples of gravid{Aedes aegypti}
- Capture insects marked{ Cripple(left-C|right-S) }
- Capture examples of insects that are NOT Anopheles AND have a wingbeat frequency > (to exclude bees, etc.)
- Capture examples of any insects with a wingbeat frequency > 500, encountered between 4:00am and 4:10am
- Capture examples of fed{Anopheles gambiae} OR fed{Anopheles quadriannulatus} OR fed{Anopheles melas}

Capture: About 10% of the insects captured by Venus fly traps are flying insects. We believe that we can build inexpensive mechanical traps that can capture sex/species-targeted insects, using the same kinds of sampling missions as on the previous slide.

88 Keogh vs. State of California = {0,1,1,0,0,0,1,0}
Classification Problem: Fourth Amendment Cases before the Supreme Court II The Supreme Court’s search and seizure decisions, 1962–1984 terms. Keogh vs. State of California = {0,1,1,0,0,0,1,0} U = Unreasonable R = Reasonable

89 Decision Tree for Supreme Court Justice Sandra Day O'Connor
We can also learn decision trees for individual Supreme Court Members. Using similar decision trees for the other eight justices, these models correctly predicted the majority opinion in 75 percent of the cases, substantially outperforming the experts' 59 percent. Decision Tree for Supreme Court Justice Sandra Day O'Connor

