
1 Previous version of this project: The system had a predefined set of expert agents. The blackboard was updated incrementally and heuristically. The evidence to be posted was chosen with a goal in mind. As agents were activated based on the posted evidence, they were clustered to see if an ontology could be determined.

2 Current version of the project: The system has only one predefined agent. The blackboard is a snapshot of data. The ontology is learned over time.

3 One predefined agent: One of the goals of this project is to get a system to learn over time. Expert agents are a form of representational knowledge. Can we then automatically build agents through learning knowledge?

4 One predefined agent: The system starts with one agent. As evidence is posted to the blackboard, the agent evaluates the information. If the information on the blackboard is similar to, but does not exactly match, the agent’s profile, what can we do to make use of the extra information? My system outputs its leading hypothesis, but then requests feedback.

5 Example: The blackboard currently holds this information (knowledge source, feature, confidence):
Motion, Pouncing, 0.4648
Motion, Running, 0.9465
Vision, Stripes, 0.4678
Vision, Claws, 0.4354
Vision, Teeth, 0.9785
Vision, Has Screws, 0.7897
Sound, Clanking, 0.8346
Vision, Reflects Light, 0.7645
Motion, Staggered Motion, 0.9788
Maybe the system has an agent that can identify this as a tiger, but because we are given extra information, perhaps we can use it to learn something more, so the system requests some feedback.
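For concreteness, this snapshot could be represented as a simple list of (knowledge source, feature, confidence) triples. The sketch below is only illustrative, not the project's actual code.

```python
# Illustrative representation of the blackboard snapshot above:
# each entry is a (knowledge source, feature, confidence) triple.
blackboard = [
    ("Motion", "Pouncing", 0.4648),
    ("Motion", "Running", 0.9465),
    ("Vision", "Stripes", 0.4678),
    ("Vision", "Claws", 0.4354),
    ("Vision", "Teeth", 0.9785),
    ("Vision", "Has Screws", 0.7897),
    ("Sound", "Clanking", 0.8346),
    ("Vision", "Reflects Light", 0.7645),
    ("Motion", "Staggered Motion", 0.9788),
]
```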

6 Feedback: This is a form of reinforcement learning. If the system receives positive reinforcement, then we can augment the profile of our leading hypothesis and increase our knowledge. If the system is supplied with a corrected hypothesis, we can create a new agent with the extra information and relate it to our existing hypothesis.
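A rough sketch of the two feedback cases, assuming an agent is just a name plus a feature profile; the names and structure here are my own assumptions, not the system's code.

```python
# Hypothetical sketch of the two feedback cases described above.
class Agent:
    def __init__(self, name, profile, related_to=None):
        self.name = name
        self.profile = profile        # {feature: confidence}
        self.related_to = related_to  # relation used to build the ontology

def handle_feedback(leading, extra_evidence, positive, corrected_label, agents):
    if positive:
        # Positive reinforcement: augment the leading hypothesis's profile
        # with the extra information seen on the blackboard.
        leading.profile.update(extra_evidence)
        return leading
    # Corrected hypothesis: create a new agent carrying the extra information
    # and relate it to the existing hypothesis; that relation is the ontology.
    new_agent = Agent(corrected_label, dict(extra_evidence), related_to=leading)
    agents.append(new_agent)
    return new_agent
```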

7 This increases our knowledge in two ways: We are learning the profile of a new, as-yet-unseen object; that is, we are creating new agents. We are also learning that there is a correlation between this new object and something we already know; that is, we are creating the ontology.

8 Feedback has to come from somewhere: Currently, I, as the human, am providing feedback to the system. What if this system were interacting with other agents? Could other agents provide feedback somehow?

9 The blackboard: Previously, the blackboard was updated incrementally and heuristically. Does this necessarily give us a full and objective view of the situation? In my system, the blackboard currently represents a snapshot of data taken from the Knowledge Sources. Each Knowledge Source is assumed to be interacting with its environment continuously.

10 The issue of similarity: One of the challenges for me was finding a way to evaluate the confidence of a hypothesis. My approach was to use cosine similarity. As evidence is posted to the blackboard, it can be viewed as the continuous update of three feature vectors, one for each of the knowledge sources.
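Continuing the slide-5 sketch, the posted evidence can be grouped by knowledge source to obtain the three feature vectors; this is an assumed representation, not necessarily the system's actual one.

```python
from collections import defaultdict

# Group blackboard evidence by knowledge source, giving one feature vector
# per source (Motion, Vision, Sound), each mapping feature -> confidence.
def build_feature_vectors(evidence):
    vectors = defaultdict(dict)
    for source, feature, confidence in evidence:
        vectors[source][feature] = confidence
    return vectors

# With the slide-5 snapshot, vectors["Vision"] would be:
# {"Stripes": 0.4678, "Claws": 0.4354, "Teeth": 0.9785,
#  "Has Screws": 0.7897, "Reflects Light": 0.7645}
```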

11 The issue of similarity: Each agent has its own feature vectors that are representative of the object it identifies.

12 The issue of similarity: To assess the similarity between the contents of the blackboard and the agent’s profile, we can use the cosine similarity formula:
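The formula itself appeared as an image in the original slide; the standard cosine similarity between a blackboard vector A and an agent vector B is:

cos(θ) = (A · B) / (‖A‖ ‖B‖) = Σᵢ AᵢBᵢ / ( √(Σᵢ Aᵢ²) · √(Σᵢ Bᵢ²) )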

13 The issue of similarity: Using the vector for “Vision” as an example:
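The worked example was an image in the original slide. The sketch below redoes the computation with the Vision evidence from slide 5; the Tiger Agent's Vision profile here (1.0 for Stripes, Claws, and Teeth) is a guess, since the actual profile values are not shown.

```python
import math

# Cosine similarity between two sparse feature vectors (dicts).
def cosine_similarity(a, b):
    features = set(a) | set(b)
    dot = sum(a.get(f, 0.0) * b.get(f, 0.0) for f in features)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

# Vision evidence from slide 5.
blackboard_vision = {"Stripes": 0.4678, "Claws": 0.4354, "Teeth": 0.9785,
                     "Has Screws": 0.7897, "Reflects Light": 0.7645}
# Assumed Tiger Agent Vision profile (not taken from the slides).
tiger_vision = {"Stripes": 1.0, "Claws": 1.0, "Teeth": 1.0}

print(cosine_similarity(blackboard_vision, tiger_vision))  # roughly 0.68 with these guesses
```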

14 The issue of similarity: There is a slight flaw in using cosine similarity. We want to say with the highest confidence that the Tiger Agent has “seen” a tiger, since it requires only three features to come to that conclusion. Take the following two vectors, for example:
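The two example vectors appeared as an image in the original slide. Working backwards from the 0.7746 figure on the next two slides, they seem to have been binary vectors along these lines (a reconstruction; the exact feature names are an assumption):

Blackboard vector: Stripes = 1, Claws = 1, Teeth = 1, Metal Screws = 1, Antennae = 1
Tiger Agent vector: Stripes = 1, Claws = 1, Teeth = 1, Metal Screws = 0, Antennae = 0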

15 The issue of similarity: The cosine similarity, however, is only 0.7746. This is due to the normalization of the vector. To rectify this, I calculate two different similarities: a “true similarity” and an “adjusted similarity”. The “adjusted similarity” is the cosine similarity of the blackboard’s vector with the Tiger Agent’s vector, but with the non-relevant attributes in the blackboard’s vector (“Metal Screws” and “Antennae”) removed.

16 The issue of similarity: This way, the “adjusted similarity” of the two vectors is 1.0, whereas the “true similarity” of the two vectors is 0.7746. This, to me, is a bit of a hack, but it was a way in which I could definitively say that the object in question is a tiger, while acknowledging that it may be something else too (a robot?).
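A short sketch that reproduces both numbers under the reconstruction above (an illustration under those assumptions, not the project's code):

```python
import math

def cosine_similarity(a, b):
    features = set(a) | set(b)
    dot = sum(a.get(f, 0.0) * b.get(f, 0.0) for f in features)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

# Reconstructed binary vectors from slide 14 (feature names assumed).
blackboard = {"Stripes": 1, "Claws": 1, "Teeth": 1, "Metal Screws": 1, "Antennae": 1}
tiger_agent = {"Stripes": 1, "Claws": 1, "Teeth": 1}

# "True similarity": cosine similarity over everything on the blackboard.
true_sim = cosine_similarity(blackboard, tiger_agent)       # 3/sqrt(15) ≈ 0.7746

# "Adjusted similarity": drop the attributes the Tiger Agent does not
# care about ("Metal Screws", "Antennae") before comparing.
adjusted = {f: v for f, v in blackboard.items() if f in tiger_agent}
adjusted_sim = cosine_similarity(adjusted, tiger_agent)     # 1.0

print(true_sim, adjusted_sim)
```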

