Cognitive Modelling Assignment Suzanne Cotter 09134328 March 2010
Initial Analysis My first step was to work out how I myself would classify each of the test items, and to compare that with the participants' responses:

Dim 1  Dim 2  Dim 3 | Me | Participants
  A      B      Y   | AB | A
  C      Y      B   | C  | C
  Y      A      C   | AC | AC
  X      B      C   | B  | C
  X      X      B   | B  | B

I found that, naturally, I looked at the probability of a symptom on a given dimension pointing to a specific disease. A on dimension 1 was always disease A, Z on dimension 1 was always B, and C on dimension 1 was always C, which made dimension 1 very powerful to me. Similarly, C on dimension 3 was almost always C, and B on dimension 3 was almost always B; the only times these were overruled were when something more powerful occurred on dimension 1 (such as A or C). This reinforced my theory that dimension 1 was very powerful.
I found dimension 2 very confusing and didn't feel it helped much, although it did lead me to classify ABY as AB, because B on dimension 2 was always B and A on dimension 1 was always A; the participants, however, did not classify it this way. The reason dimension 2 was so confusing is that most of its symptoms were split across the three diseases: X, Y and A, for example, were each symptoms of every disease at some stage or another.
Exemplar vs Prototype NOTE: to assess the models, I used the CORREL() function in Excel; the higher the correlation with the participants' responses, the better the model. The first model I tried was the exemplar one, counting AB as a category by itself; the results were very encouraging, with a correlation of 0.79. I then tried to improve matters, as I didn't believe people considered AB a separate category (I'll explain why later). The next model gave me a correlation of 0.86.
Exemplar vs Prototype cont'd Next I tried the prototype model and found that it did not classify things as well as the exemplar model, with correlations of 0.75 and 0.74 depending on whether or not AB was classed as a separate disease. For the exemplar model, I used attention parameters (S values) of 0.1 on dimension 1, 0.8 on dimension 2 and 0.5 on dimension 3. This reproduced the effect of making dimension 1 very powerful, dimension 2 very weak, and dimension 3 somewhere in the middle. However, I wanted to see if I could model how people classified each test case myself, so I came up with a new model based on a mixture of probability and intuition!
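A minimal sketch of the exemplar model described above, assuming the standard GCM-style mismatch rule: a test item's similarity to a stored exemplar is the product, over dimensions, of 1 for a match and that dimension's S value for a mismatch. A low S value (as on dimension 1) punishes mismatches hardest, which is what makes that dimension powerful.

```python
# Mismatch similarities (the "S values"): a LOW value makes a
# dimension more influential, since mismatches on it are punished harder.
S = [0.1, 0.8, 0.5]  # dimensions 1, 2, 3

# Training items from the presentation (symptoms -> disease category).
TRAINING = [
    ("AXC", "A"), ("AYY", "A"), ("AAX", "A"), ("YAY", "A"),
    ("XAB", "AB"), ("ABX", "AB"),
    ("ZBB", "B"), ("XBB", "B"), ("YXB", "B"), ("ZYB", "B"),
    ("CAY", "C"), ("CXB", "C"), ("CYC", "C"), ("CAC", "C"),
    ("CXC", "C"), ("XYC", "C"),
]

def similarity(item, exemplar):
    """Product over dimensions: 1 for a match, S[d] for a mismatch."""
    sim = 1.0
    for d, (a, b) in enumerate(zip(item, exemplar)):
        if a != b:
            sim *= S[d]
    return sim

def category_scores(item):
    """Summed similarity of the test item to each category's exemplars."""
    scores = {}
    for exemplar, cat in TRAINING:
        scores[cat] = scores.get(cat, 0.0) + similarity(item, exemplar)
    return scores

for test in ["ABY", "CYB", "YAC", "XBC", "XXB"]:
    print(test, category_scores(test))
```

With these S values, ABY scores highest for A and CYB for C, reproducing the dominance of dimension 1; this is a sketch of the approach, not necessarily the exact spreadsheet implementation used in the assignment.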
My theory of how people classify With a list of data this large and complicated, people try to remember the main points of the list they have, and they remember best whatever is "easy". For example, in the current training data, whenever there is an A on dimension 1 the classification is always A (or A&B), so whenever people see A on dimension 1 in a test item they are more inclined to classify it as disease A.

Dim 1  Dim 2  Dim 3 | Category
  A      X      C   | A
  A      Y      Y   | A
  A      A      X   | A
  Y      A      Y   | A
  X      A      B   | A and B
  A      B      X   | A and B
  Z      B      B   | B
  X      B      B   | B
  Y      X      B   | B
  Z      Y      B   | B
  C      A      Y   | C
  C      X      B   | C
  C      Y      C   | C
  C      A      C   | C
  C      X      C   | C
  X      Y      C   | C
My model assigns each test item a weight toward each disease from each dimension separately (– marks cells that are unreadable in the source).

Weights from dimension 1:
Item |  B  |  C  |  A  | IF A or B | IF A or B or C
ABY  |  0  |  0  | 0.9 |    0      |  0
CYB  |  0  |  1  |  0  |    0      |  0
YAC  |  0  |  0  | 0.1 |   0.2     |  0
XBC  |  0  |  0  | 0.1 |    0      |  0
XXB  |  0  |  0  | 0.1 |    0      |  0

Weights from dimension 2:
Item |  B  |  C  | IF C |  A
ABY  | 0.6 |  0  |  0   |  0
CYB  | 0.1 |  0  | 0.2  |  0
YAC  | 0.1 |  0  |  0   | 0.3
XBC  | 0.6 | 0.2 |  0   |  0
XXB  | 0.1 |  0  |  0   |  0

Weights from dimension 3:
Item |  A  |  C  |    B     |  A  |  C
ABY  | 0.1 |  –  |   0.2    | 0.1 |  –
CYB  |  –  |  –  |    –     |  0  |  0
YAC  |  –  | 0.8 |   0.1    |  0  |  0
XBC  |  –  | 0.8 |   0.1    |  0  |  0
XXB  |  –  |  –  | 0.833333 |  0  |  0

Notes:
- If C is on dimension 1, the item is always classified as disease C; this is the easiest rule to remember.
- Each table shows the chances of a set of symptoms being classified as each disease (e.g. B).
- C on dimension 1 is always C, so it is assigned no weight for disease A.
- Weight is given to both disease A and disease B only if the symptom matches.
- If dimension 1 is Y, people assign it a lower priority because it is too evenly split between diseases A and B.
(The same weight tables as on the previous slide, with further notes:)
- A on dimension 2 gets a lower probability; people don't consider this dimension as important.
- Slightly higher chance of C than of A or B, so weight is given to C only if the symptom matches.
(The same weight tables as on the previous slide, with further notes:)
- 0.1 instead of 0.2: I don't think people would even consider there to be a 20% chance.
- 5 out of 6 occurrences were B.
- Slightly higher chance of Y being disease A than disease B, so higher odds; not as high as 2/3, though, as on dimension 3.
Raw scores and scaled scores for each test item:

Item |  A  |  B    |  C  | AB (mine) | AC (mine) | BC (mine)
ABY  | 1.2 | 0.7   | 0.2 |   0.95    |   0.7     |  0.45
CYB  | 0.1 | 0.4   | 1.3 |   0.25    |   0.7     |  0.85
YAC  | 0.7 | 0.4   | 0.8 |   0.55    |   0.85    |  0.6
XBC  | 0.3 | 0.8   | 0.9 |   0.55    |   0.6     |  0.95
XXB  | 0.3 | 1.033 | 0.4 |   0.667   |   0.45    |  0.717

Scaled:
Item |   A   |   B   |   C   |  AB   |  AC   |  BC
ABY  |  6    | -0.67 | -7.33 |  2.67 | -0.67 | -4
CYB  | -8.67 | -4.67 |  7.33 | -6.67 | -0.67 |  1.33
YAC  | -0.67 | -4.67 |  0.67 | -2.67 |  1.33 | -2
XBC  | -6    |  0.67 |  2    | -2.67 | -2    |  2.67
XXB  | -6    |  3.78 | -4.67 | -1.11 | -4    | -0.44

Each raw score is equal to the sum of the probabilities from every time a symptom appeared under a disease category. For example, on dimension 1 ABY was given a 0.9 chance of being disease A, PLUS on dimension 3 it was given a 0.1 chance of being classified as A because Y wasn't X, the most powerful symptom on dimension 3, PLUS a 0.2 chance of being classified as A because Y is classified as A more often than C on dimension 3. It received no weighting from dimension 2 because at no stage did disease A ever stand out on dimension 2.
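Incidentally, the scaled values are consistent with a simple linear rescaling of the raw scores, scaled = raw × 40/3 − 10 (so a raw score of 0.75 maps to 0). This transform is inferred from the two tables, not stated in the presentation; a quick check against the CYB row:

```python
# CYB's raw scores from the table above.
raw = {"A": 0.1, "B": 0.4, "C": 1.3, "AB": 0.25, "AC": 0.7, "BC": 0.85}

def scale(x):
    # Hypothesised transform inferred from comparing the two tables.
    return x * 40 / 3 - 10

scaled = {k: round(scale(v), 2) for k, v in raw.items()}
print(scaled)
# {'A': -8.67, 'B': -4.67, 'C': 7.33, 'AB': -6.67, 'AC': -0.67, 'BC': 1.33}
```

The same transform reproduces every row of the scaled table, which is why the two tables rank the test items identically.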
Analysis of Model The correlation works out at 0.8436, which is pretty good overall. For conjugating, I compared the Product and Average functions with my own function (explained later). Overall, my own function correlated very slightly lower than the Average function, which achieved a correlation of 0.855. However, I chose my own function because I believe that when people think a set of symptoms is about equally likely to be disease A or disease B, that is when they conjugate and classify the symptoms as AB. With the Average function, if two diseases have the same probability, then A, B and AB are all given the same chance, whereas I believe people are more likely to conjugate in that situation.
Analysis cont'd Even when the odds say a symptom on dimension 2 gives an 80% chance of a certain disease, I believe people don't rate it that highly, because overall dimension 2 is spread very evenly across the three diseases; so an 80% chance becomes 60% or lower. A symptom on dimension 1, by contrast, is often considered a higher probability than it actually is, because people remember the first symptom in a group best (known as the primacy effect in psychology). The same might apply to dimension 3, because people remember the last thing in a list better (known as the recency effect). Dimension 3 loses some of its power, though, because nothing there is 100% diagnostic of one disease: C was classified as disease A once out of five occurrences, and B, which was disease B four times, was also classified as AB and as C.
Analysis cont'd Another factor that reduces dimension 3's power is dimension 1's ability to overrule it. With symptom C, for example, the one case classified as disease A occurred because symptom A, which was always classified as disease A, appeared on dimension 1. For the same reason, one instance of symptom B was classified as disease C: symptom C appeared on dimension 1. ABY and CYB were very easy to model, as A on dimension 1 was disease A (or AB) 100% of the time, and C on dimension 1 was disease C 100% of the time.
Analysis cont'd YAC was very difficult to model. Intuitively I understand why people classified it as AC: the C on dimension 3 was balanced out by the YA on the first two dimensions, which closely resembles YAY, a training item classified as A (perhaps YA was a very easy pattern to remember). I nonetheless found it very difficult to capture this conjugation in the model. XBC was also difficult, because I felt the disease should have been B: B on dimension 2 was always disease B (or AB), and there was a training example XBB classed as disease B. In people's minds, however, the C on dimension 3 was more powerful in this instance. XXB was straightforward to model, since X on dimensions 1 and 2 was never a clear indicator of any disease, whereas B on dimension 3 was disease B 83% of the time.
Graph of Actual disease classification against Model disease classification
Issues - AB The classification of certain training examples into both categories A and B presented a problem when trying to classify. Instead of simplifying things, it made them more complicated, especially when it came to modelling.

For the conjugations AC and BC, the usual methods applied: multiplying the probability of the disease being A by the probability of it being C, for example, or taking the average of the probabilities of it being B and of it being C. However, because two training examples were already classed as both A&B, it wasn't as simple as just multiplying probabilities.

From the classification results, it seemed that people were inclined to ignore the fact that two sets of symptoms had been classed as A&B. This is demonstrated by test item ABY. ABY should have been classed as AB, because ABX had already been classed as AB and 100% of cases with B on dimension 2 were classed as B or AB. Yet the majority classed it as A, followed by AB. I feel this is because of the strength of symptom A on dimension 1. To replicate this in the exemplar model, I minimised the probability of values being classed as AB and then used the normal conjugation methods to classify items as AB.
Issues - Conjugation In conjunctive classification, the model gives each item a score as a member of each conjunction, or ANDed pair, of categories: what are the odds of someone classifying a certain set of symptoms as disease AC as opposed to A or C? The suggested functions are product, sum, min, normalised sum and average. In my models, average usually worked out best overall. However, because of its nature, it will never rank a conjugated classification first. For example, if A = 0.8 and B = 0.7, then AB = (0.8 + 0.7)/2 = 0.75. So even though the probabilities of A and B are very similar, making this a good candidate for classification as AB, the model will always choose A over AB. To overcome this, I added a check: if the difference between two probabilities is less than a certain amount (0.1 in this case), the two diseases are conjugated. I believe that's the way people think: if two things have very similar likelihoods of occurring, people won't choose one or the other, they will conjugate.
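The thresholded conjugation rule above can be sketched as follows (a minimal illustration; the threshold of 0.1 and the example scores are from the text, the function name is mine):

```python
def classify(scores, threshold=0.1):
    """scores: evidence for each single disease, e.g. {"A": 0.8, "B": 0.75, "C": 0.2}.
    If the two strongest single diseases have scores within `threshold`
    of each other, conjugate them; otherwise pick the single best."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    best, second = ranked[0], ranked[1]
    if abs(scores[best] - scores[second]) < threshold:
        return "".join(sorted(best + second))  # e.g. "AB"
    return best

print(classify({"A": 0.8, "B": 0.75, "C": 0.2}))  # -> AB (near-tie, so conjugate)
print(classify({"A": 0.8, "B": 0.4, "C": 0.2}))   # -> A  (clear winner)
```

Note that with a strict "less than" check, the text's example of A = 0.8 and B = 0.7 sits exactly on the 0.1 boundary and would not conjugate; whether the boundary case should conjugate is a design choice.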
Correlation Function Aside I found the correlation function useful because it checks how similar the trend of the model is to the trend of the participants' average response, though it is not very accurate at judging how close the individual values are to each other. For example, with actual values 2, 5, 3, 4, 4, Model 1 values 4, 6, 6, 6, 6 and Model 2 values 3, 6, 6, 4, 4, the correlation function gives 0.784 for the actual values against Model 1 and 0.523 against Model 2, even though Model 2 looks closer to the actual values than Model 1 does. This is because correlation favours the general trend of Model 1's values, which matches the trend of the actual values.
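The numbers in this example can be checked with a hand-rolled Pearson correlation, the same quantity Excel's CORREL() computes:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

actual = [2, 5, 3, 4, 4]
model1 = [4, 6, 6, 6, 6]
model2 = [3, 6, 6, 4, 4]

print(round(pearson(actual, model1), 3))  # 0.784 - shares the trend
print(round(pearson(actual, model2), 3))  # 0.523 - closer values, lower correlation
```

This confirms the point of the slide: correlation rewards matching the shape of the response profile, not closeness of the absolute values.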