CS 478 – Tools for Machine Learning and Data Mining The Need for and Role of Bias.

1 CS 478 – Tools for Machine Learning and Data Mining The Need for and Role of Bias

2 Learning
Rote: until you discover the rule/concept(s), the very BEST you can ever expect to do is:
– Remember what you observed
– Guess on everything else
Inductive: what you do when you GENERALIZE from your observations and make (accurate) predictions
Claim: “All [most of] the laws of nature were discovered by inductive reasoning”

3 The Big Question All you have is what you have OBSERVED Your generalization should at least be consistent with those observations But beyond that… – How do you know that your generalization is any good? – How do you choose among various candidate generalizations?

4 The Answer Is BIAS Any basis for choosing one decision over another, other than strict consistency with past observations

5 Why Bias? If you have no bias you cannot go beyond mere memorization – Mitchell’s proof using UGL and VS The power of a generalization system follows directly from its biases Progress towards understanding learning mechanisms depends upon understanding the sources of, and justification for, various biases

6 Concept Learning
Given:
– A language of observations/instances
– A language of concepts/generalizations
– A matching predicate
– A set of observations
Find generalizations that:
1. Are consistent with the observations, and
2. Classify instances beyond those observed

7 Claim The absence of bias makes it impossible to solve part 2 of the Concept Learning problem, i.e., learning is limited to rote learning

8 Unbiased Generalization Language
Generalization ≡ the set of instances it matches
An Unbiased Generalization Language (UGL), relative to a given language of instances, allows describing every possible subset of instances
UGL = the power set of the given instance language

9 Unbiased Generalization Procedure
Uses the Unbiased Generalization Language
Computes the Version Space (VS) relative to the UGL
VS ≡ the set of all expressible generalizations consistent with the training instances
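The slides give no code, but the UGL and its version space can be made concrete with a small Python sketch; the instance names and the example observations below are invented for illustration:

```python
from itertools import chain, combinations

def power_set(instances):
    """All subsets of the instance space: the unbiased generalization language."""
    s = list(instances)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def version_space(instances, positives, negatives):
    """All UGL generalizations consistent with the observations:
    each must contain every positive and exclude every negative."""
    return [g for g in power_set(instances)
            if set(positives) <= g and not (set(negatives) & g)]

instances = {"a", "b", "c", "d"}
vs = version_space(instances, positives={"a"}, negatives={"b"})
# Every subset containing "a" and excluding "b" survives:
# 2^(4-2) = 4 generalizations out of the 2^4 = 16 in the UGL.
print(len(vs))  # 4
```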

10 Version Space (I) Let: – S be the set of maximally specific generalizations consistent with the training data – G be the set of maximally general generalizations consistent with the training data

11 Version Space (II) Intuitively – S keeps generalizing to accommodate new positive instances – G keeps specializing to avoid new negative instances The key issue is that they only do that to the smallest extent necessary to maintain consistency with the training data, that is, G remains as general as possible and S remains as specific as possible.

12 Version Space (III) The sets S and G precisely delimit the version space (i.e., the set of all plausible versions of the emerging concept). A generalization g is in the version space represented by S and G if and only if: – g is more specific than or equal to some member of G, and – g is more general than or equal to some member of S
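The membership test above can be sketched in Python, modeling generalizations as sets ordered by inclusion; the example S, G, and instances are hypothetical:

```python
def in_version_space(g, S, G):
    """g is in the version space iff g is more specific than or equal to
    some member of G, and more general than or equal to some member of S
    (here, 'more specific' is set inclusion)."""
    return any(g <= g_max for g_max in G) and any(s_min <= g for s_min in S)

# Instance space {a, b, c, d}; one observed positive "a", one negative "b".
S = [frozenset({"a"})]             # most specific consistent generalization
G = [frozenset({"a", "c", "d"})]   # most general: everything but the negative
print(in_version_space(frozenset({"a", "c"}), S, G))  # True
print(in_version_space(frozenset({"c"}), S, G))       # False: misses the positive
```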

13 Version Space (IV)
Initialize G to the most general concept in the space
Initialize S to the first positive training instance
For each new positive training instance p:
– Delete all members of G that do not cover p
– For each s in S, if s does not cover p, replace s with its most specific generalizations that cover p
– Remove from S any element more general than some other element of S
– Remove from S any element not more specific than some element of G
For each new negative training instance n:
– Delete all members of S that cover n
– For each g in G, if g covers n, replace g with its most general specializations that do not cover n
– Remove from G any element more specific than some other element of G
– Remove from G any element not more general than some element of S
If G = S and both are singletons, a single concept consistent with the training data has been found
If G and S become empty, there is no concept consistent with the training data
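The procedure above can be sketched as runnable Python for the conjunctive {0, 1, *} language used later in the deck. This is a simplified sketch, not the deck's own code: S is kept as a single hypothesis (which suffices for a conjunctive language) and the first training example is assumed positive:

```python
def covers(h, x):
    """Conjunctive hypothesis h over {'0','1','*'} covers x (h more general or equal)."""
    return all(hf == '*' or hf == xf for hf, xf in zip(h, x))

def min_generalize(s, x):
    """Most specific generalization of s that also covers positive instance x."""
    return tuple(sf if sf == xf else '*' for sf, xf in zip(s, x))

def min_specializations(g, x, domain=('0', '1')):
    """Most general specializations of g that exclude negative instance x."""
    return [g[:i] + (v,) + g[i + 1:]
            for i, gf in enumerate(g) if gf == '*'
            for v in domain if v != x[i]]

def candidate_elimination(examples):
    """examples: list of (instance, is_positive) pairs.
    Assumes the first example is positive, so S starts from it."""
    n = len(examples[0][0])
    S = None
    G = [('*',) * n]                  # most general concept in the space
    for x, positive in examples:
        if positive:
            S = x if S is None else min_generalize(S, x)
            G = [g for g in G if covers(g, x)]   # delete members of G not covering x
        else:
            G = [spec
                 for g in G
                 for spec in (min_specializations(g, x) if covers(g, x) else [g])
                 if covers(spec, S)]             # keep only specs more general than S
            # remove any member of G more specific than some other member of G
            G = [g for g in G if not any(h != g and covers(h, g) for h in G)]
    return S, G

S_final, G_final = candidate_elimination([
    (('1', '1', '0'), True),
    (('0', '1', '0'), False),
    (('1', '0', '0'), True),
])
print("S =", S_final)   # ('1', '*', '0')
print("G =", G_final)   # [('1', '*', '*')]
```

Here S and G have not yet converged: any hypothesis between ('1', '*', '0') and ('1', '*', '*') remains a candidate until further training data arrives.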

14 Lemma 1 Any new instance, NI, is classified as positive if and only if NI is identical to some observed positive instance

15 Proof of Lemma 1
(⇐) If NI is identical to some observed positive instance, then NI is classified as positive
– Follows directly from the definition of VS
(⇒) If NI is classified as positive, then NI is identical to some observed positive instance
– Let g = {p : p is an observed positive instance}
– UGL ⇒ g ∈ VS
– NI matches all of VS ⇒ NI matches g

16 Lemma 2 Any new instance, NI, is classified as negative if and only if NI is identical to some observed negative instance

17 Proof of Lemma 2
(⇐) If NI is identical to some observed negative instance, then NI is classified as negative
– Follows directly from the definition of VS
(⇒) If NI is classified as negative, then NI is identical to some observed negative instance
– Let g = {p : p is an observed positive instance} and G’ = {all subsets of unobserved instances}
– UGL ⇒ VS = {g ∪ g’ : g’ ∈ G’}
– If NI were unobserved, then g ∪ {NI} ∈ VS, so NI would match some member of VS
– NI matches none of VS ⇒ NI was observed (and not as a positive, hence as a negative)

18 Lemma 3 If NI is any instance which was not observed, then NI matches exactly one half of VS, and so cannot be classified

19 Proof of Lemma 3
If NI was not observed, then NI matches exactly one half of VS, and so cannot be classified
– Let g = {p : p is an observed positive instance}
– Let G’ = {all subsets of unobserved instances}
– UGL ⇒ VS = {g ∪ g’ : g’ ∈ G’}
– NI was not observed ⇒ NI matches exactly ½ of G’ (the members of G’ containing NI pair off one-to-one with those that do not) ⇒ NI matches exactly ½ of VS
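Lemmas 1–3 can be checked by brute force on a tiny instance space; a hypothetical four-instance example in Python:

```python
from itertools import chain, combinations

instances = ["a", "b", "c", "d"]
# The UGL: every subset of the instance space.
subsets = [frozenset(c) for c in chain.from_iterable(
    combinations(instances, r) for r in range(len(instances) + 1))]

positives, negatives = {"a"}, {"b"}
# VS: all UGL generalizations consistent with the observations.
vs = [g for g in subsets if positives <= g and not (negatives & g)]

match_counts = {ni: sum(ni in g for g in vs) for ni in instances}
for ni in instances:
    print(ni, match_counts[ni], "of", len(vs))
# a 4 of 4  (observed positive: matches all of VS -> classified positive)
# b 0 of 4  (observed negative: matches none -> classified negative)
# c 2 of 4  (unobserved: exactly half -> cannot be classified)
# d 2 of 4
```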

20 Theorem An unbiased generalization procedure can never make the inductive leap necessary to classify instances beyond those it has observed

21 Proof of the Theorem The result follows immediately from Lemmas 1, 2 and 3 Practical consequence: If a learning system is to be useful, it must have some form of bias

22 Sources of Bias in Learning
The representation language cannot express all possible classes of observations
The generalization procedure is biased:
– Domain knowledge (e.g., double bonds rarely break)
– Intended use (e.g., ICU – relative cost)
– Shared assumptions (e.g., crown, bridge – dentistry)
– Simplicity and generality (e.g., white men can’t jump)
– Analogy (e.g., heat vs. water flow, thin ice)
– Commonsense (e.g., social interactions, pain, etc.)

23 Representation Language (Fall 2004, CS 478 – Machine Learning)
Decrease the number of expressible generalizations ⇒ increase the ability to make the inductive leap
Example: restrict generalizations to conjunctive constraints on features in a Boolean domain

24 Proof of Concept (I)
Let N = the number of features
There are 2^(2^N) subsets of instances
Let GL = {0, 1, *}^N: such conjunctions can only denote subsets of size 2^p for 0 ≤ p ≤ N (where p is the number of features set to *)

25 Proof of Concept (II)
For each p, there are only C(N, N−p) · 2^(N−p) expressible subsets:
– Fix N−p features (there are C(N, N−p) ways of choosing which)
– Set values for the selected features (there are 2^(N−p) possible settings)

26 Proof of Concept (III)
Excluding the empty set, the ratio of expressible to total generalizations is given by:
[Σ p=0..N C(N, N−p) · 2^(N−p)] / (2^(2^N) − 1) = 3^N / (2^(2^N) − 1)

27 Proof of Concept (IV)
For example, if N = 5 then only about 1 in 10^7 subsets may be represented
Strong bias
Double-edged sword: the representation could be too sparse
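The counting argument of slides 24–26 can be verified numerically; a small Python check (the helper name `expressible` is mine, not the deck's):

```python
from math import comb

def expressible(N):
    """Subsets denotable by conjunctions over {0,1,*}^N:
    sum over p of C(N, N-p) * 2^(N-p), which equals 3^N."""
    return sum(comb(N, N - p) * 2 ** (N - p) for p in range(N + 1))

N = 5
total = 2 ** (2 ** N) - 1           # all nonempty subsets of the 2^N instances
print(expressible(N))               # 243 == 3**5
print(expressible(N) / total)       # ~5.7e-08, i.e. roughly 1 in 10^7
```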

28 Generalization Procedure
– Domain knowledge (e.g., double bonds rarely break)
– Intended use (e.g., ICU – relative cost)
– Shared assumptions (e.g., crown, bridge – dentistry)
– Simplicity and generality (e.g., white men can’t jump)
– Analogy (e.g., heat vs. water flow, thin ice)
– Commonsense (e.g., social interactions, pain, etc.)

29 Conclusion
Absence of bias = rote learning
Efforts should focus on the combined use of prior knowledge and observations in guiding the learning process
Make biases and their use as explicit as observations and their use

