Consciousness as awareness


Consciousness as awareness
Levels of consciousness can be compiled and sometimes decompiled from one to another
Compiling conscious into subconscious thought by reasoning in advance
Logic and neural networks
The meaning of life
An ALP agent model that combines reflective and intuitive thinking

Dual Process Theory of Human Thinking
Intuitive thinking, which is “tacit”, opaque and automatic, extends perceptual processing to subconscious levels of thought.
Reflective thinking, which is self-aware and controlled, can be used to improve conscious thought and communication.
Reflective thinking can migrate to the intuitive level, e.g. learning to use a keyboard, play a musical instrument or drive a car.
Intuitive knowledge can sometimes be made conscious and explicit, e.g. constructing a formal grammar for a natural language, coaching sports or developing an expert system.

Consciousness in computational logic
An agent is conscious when it is aware of what it is doing and why it is doing it.
Computationally, when an agent is conscious, its behaviour is controlled by a high-level program, which manipulates symbols that have meaningful interpretations in the environment. When the agent is not conscious, its behaviour is controlled by a lower-level program or physical device, whose structure is ultimately determined by the agent’s interactions with the environment.
Logically, when an agent is conscious, its behaviour is generated proactively by goals and beliefs. When the agent is not conscious, its behaviour is determined reactively by condition-action rules (or input-output associations). These rules and associations can be represented at different levels in turn, including both a logical, symbolic level and the lower, physical level of the agent’s body itself.

Consciousness on the London underground
Goal: If there is an emergency then I get help.
Beliefs:
A person gets help if the person alerts the driver.
A person alerts the driver if the person presses the alarm signal button.
There is an emergency if there is a fire.
There is an emergency if one person attacks another.
There is an emergency if someone becomes seriously ill.
There is an emergency if there is an accident.
There is a fire if there are flames.
There is a fire if there is smoke.

Compiling by reasoning in advance
Use unfolding to reduce the conclusion of the top-level goal:
Goal: If there is an emergency then I get help.
Beliefs:
A person gets help if the person alerts the driver.
A person alerts the driver if the person presses the alarm signal button.
New goal: If there is an emergency then I press the alarm signal button.
Unfolding replaces a predicate by its definition, doing backward reasoning in advance of the need to solve goals.

Compiling by reasoning in advance
Use unfolding to reduce the conditions of the new goal (doing forward reasoning in advance):
Goal: If there is an emergency then I press the alarm signal button.
Beliefs:
There is a fire if there are flames.
There is a fire if there is smoke.
There is an emergency if there is a fire.
There is an emergency if one person attacks another.
There is an emergency if someone becomes seriously ill.
There is an emergency if there is an accident.
New input-output associations (reactive condition-action rules):
If there are flames then I press the alarm signal button.
If there is smoke then I press the alarm signal button.
If one person attacks another then I press the alarm signal button.
If someone becomes seriously ill then I press the alarm signal button.
If there is an accident then I press the alarm signal button.
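The two unfolding steps above can be sketched programmatically. This is a minimal illustration, not the ALP machinery itself: beliefs are represented as (conclusion, condition) string pairs, and the general “a person …” beliefs are simplified to first person.

```python
# A minimal sketch of compiling by unfolding. Each belief is a
# ("conclusion", "condition") pair, read as "conclusion if condition";
# the general "a person ..." beliefs are simplified to first person.
beliefs = [
    ("I get help", "I alert the driver"),
    ("I alert the driver", "I press the alarm signal button"),
    ("there is an emergency", "there is a fire"),
    ("there is an emergency", "one person attacks another"),
    ("there is an emergency", "someone becomes seriously ill"),
    ("there is an emergency", "there is an accident"),
    ("there is a fire", "there are flames"),
    ("there is a fire", "there is smoke"),
]

def unfold_conclusion(goal):
    """Backward reasoning in advance: repeatedly replace the goal's
    conclusion by the condition of a belief defining it. (Assumes each
    conclusion here has a single defining belief.)"""
    cond, concl = goal
    changed = True
    while changed:
        changed = False
        for head, body in beliefs:
            if concl == head:
                concl, changed = body, True
                break
    return (cond, concl)

def unfold_conditions(goal):
    """Forward reasoning in advance: replace the goal's condition by
    every condition that implies it, recursively, producing one
    reactive rule per base condition."""
    cond, concl = goal
    defining = [body for head, body in beliefs if head == cond]
    if not defining:
        return [(cond, concl)]
    rules = []
    for body in defining:
        rules += unfold_conditions((body, concl))
    return rules

goal = ("there is an emergency", "I get help")
compiled = unfold_conditions(unfold_conclusion(goal))
for cond, act in compiled:
    print(f"If {cond} then {act}.")
```

Running the sketch prints the five reactive condition-action rules listed on the slide, from “If there are flames …” to “If there is an accident …”.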

Higher-level compared with lower-level representation
The higher-level representation is aware that the goal of pressing the alarm signal button is to get help. The lower-level representation is not aware of the goal. With the lower-level representation, if something goes wrong, for example if the button doesn’t work or the driver doesn’t get help, then the passenger might not realise there is a problem. Also, if the environment changes, and there are better ways of dealing with emergencies, then it would be harder to modify the lower-level representation to adapt to the change.

In Computing
Lower-level representations are more efficient. Higher-level representations are more flexible, easier to develop, and easier to change. Typically, the higher-level representation is developed first, and then transformed or compiled into a lower-level representation. Low-level programs can sometimes be decompiled into equivalent higher-level programs. The higher-level representation can then be modified and recompiled into a new, improved, lower-level form. Legacy systems, developed directly in low-level languages, may not have enough structure to be decompiled. But even then it may be possible to approximate them with higher-level programs.

Feed-forward neural networks can be decompiled into logic programming form (from Computational Intelligence, Poole, Mackworth and Goebel, 1998). [Figure: a feed-forward network with inputs known, new, short and home, a layer of hidden units, and one output, reads.]

reads with strength W if
arguably reads with strength W1 and
arguably doesn’t read with strength W2 and
W = f( W1 – 2.1W2)

arguably reads with strength W1 if
known with strength W4 and
new with strength W5 and
short with strength W6 and
home with strength W7 and
W1 = f(– W W W6 – .389W7)

arguably doesn’t read with strength W2 if
known with strength W4 and
new with strength W5 and
short with strength W6 and
home with strength W7 and
W2 = f( W W W W7)

In English A person will read a paper if there is strong reason to read the paper and there is no sufficiently strong reason not to read the paper. There is a reason to read the paper if the author is known to the person, the topic is new, the paper is short and the person is at home. There is a reason not to read the paper if the author is not known to the person, the topic is old, the paper is long and the person is not at home.
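Numerically, each decompiled rule is just a weighted sum passed through the activation function f. The sketch below evaluates such rules; most numeric weights were lost from the slide, so all coefficients here are illustrative placeholders except the surviving −2.1 and −.389.

```python
import math

def f(x):
    """Sigmoid activation function of a standard feed-forward unit."""
    return 1.0 / (1.0 + math.exp(-x))

def reads(known, new, short, home):
    """Strength W with which the person reads the paper.
    The weights are made-up placeholders, not the book's values,
    apart from -2.1 and -0.389, which survive on the slide."""
    # Hidden unit: "arguably reads with strength W1".
    w1 = f(-1.0 * known + 2.0 * new + 1.5 * short - 0.389 * home)
    # Hidden unit: "arguably doesn't read with strength W2".
    w2 = f(1.0 * known - 1.5 * new + 0.5 * short + 1.0 * home)
    # Output: "reads with strength W".
    return f(w1 - 2.1 * w2)

strength = reads(known=1.0, new=1.0, short=1.0, home=1.0)
```

With these placeholder weights, a new topic raises W1 and lowers W2, so it strictly increases the output strength, matching the “reason to read” reading of the rules in English.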

The meaning of life (for a wood louse)
A wood louse’s life is without meaning:
If it’s clear ahead, then I move forward.
If there’s an obstacle ahead, then I turn right.
If I am tired, then I stop.

But a wood louse’s life may have hidden meaning
Top-level goals: The louse stays alive for as long as possible and the louse has as many children as possible.
Beliefs:
The louse stays alive for as long as possible, if whenever it is hungry then it looks for food and when there is food ahead it eats it, and whenever it is tired then it rests, and whenever it is threatened with attack then it defends itself.
The louse has as many children as possible, if whenever it desires a mate then it looks for a mate and when there is a mate ahead it tries to make babies.
The louse looks for an object, if whenever it is clear ahead then it moves forward, and whenever there is an obstacle ahead and it isn’t the object then it turns right, and when the object is ahead then it stops.
The louse defends itself if it makes a pre-emptive attack.
Food is an object.
A mate is an object.

Deriving an input-output specification by reasoning in advance
Unfolding the louse’s top-level goals generates subgoals:
whenever the louse is hungry then it looks for food and when there is food ahead it eats it, and
whenever the louse is tired then it rests, and
whenever the louse is threatened with attack then it defends itself, and
whenever the louse desires a mate then it looks for a mate and when there is a mate ahead it tries to make babies.
The first subgoal can be written as two subgoals in the simpler form:
If the louse is hungry then it looks for food, and
If the louse is hungry and there is food ahead then it eats it.

Sub-goals in simplified form
If the louse is hungry then it looks for food, and
If the louse is hungry and there is food ahead then it eats it, and
If the louse is tired then it rests, and
If the louse is threatened with attack then it defends itself, and
If the louse desires a mate then it looks for a mate, and
If the louse desires a mate and there is a mate ahead then it tries to make babies.

An input-output specification in reactive, condition-action rule form (which requires conflict resolution)
If the louse is hungry and it is clear ahead then the louse moves forward.
If the louse is hungry and there is an obstacle ahead and it isn’t food then the louse turns right.
If the louse is hungry and there is food ahead then the louse stops and the louse eats the food.
If the louse is tired then the louse rests.
If the louse is threatened with attack then the louse makes a pre-emptive attack.
If the louse desires a mate and it is clear ahead then the louse moves forward.
If the louse desires a mate and there is an obstacle ahead and it isn’t a mate then the louse turns right.
If the louse desires a mate and there is a mate ahead then the louse stops and the louse tries to make babies.

An input-output specification with conflict resolution compiled into the rules
If the louse is threatened with attack then the louse makes a pre-emptive attack.
If the louse is hungry and the louse is not threatened with attack and it is clear ahead then the louse moves forward.
If the louse is hungry and the louse is not threatened with attack and there is an obstacle ahead and it isn’t food then the louse turns right.
If the louse is hungry and the louse is not threatened with attack and there is food ahead then the louse stops and the louse eats the food.
If the louse is tired and the louse is not threatened with attack and the louse is not hungry then the louse rests.
If the louse desires a mate and it is clear ahead and the louse is not threatened with attack and the louse is not hungry and the louse is not tired then the louse moves forward.
If the louse desires a mate and there is an obstacle ahead and it isn’t a mate and the louse is not threatened with attack and the louse is not hungry and the louse is not tired then the louse turns right.
If the louse desires a mate and there is a mate ahead and the louse is not threatened with attack and the louse is not hungry and the louse is not tired then the louse stops and the louse tries to make babies.
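Instead of compiling the negative conditions into every rule explicitly, conflict resolution can be implemented by trying the rules in priority order and firing the first one whose conditions hold. A small sketch, with condition and action names invented for illustration:

```python
# Conflict resolution by rule ordering: rules are tried top-down and
# the first applicable rule fires, so a later rule need not repeat
# the negations of all higher-priority conditions. The condition and
# action names are illustrative, not a fixed vocabulary.
RULES = [
    ({"threatened"}, "make a pre-emptive attack"),
    ({"hungry", "food ahead"}, "stop and eat"),
    ({"hungry", "clear ahead"}, "move forward"),
    ({"hungry", "obstacle ahead"}, "turn right"),
    ({"tired"}, "rest"),
    ({"desires mate", "mate ahead"}, "stop and try to make babies"),
    ({"desires mate", "clear ahead"}, "move forward"),
    ({"desires mate", "obstacle ahead"}, "turn right"),
]

def louse_action(state):
    """Return the action of the first rule whose conditions all hold
    in the current state (a set of percepts and internal conditions)."""
    for conditions, action in RULES:
        if conditions <= state:
            return action
    return "do nothing"
```

Because "threatened" comes first, louse_action({"threatened", "hungry", "clear ahead"}) returns "make a pre-emptive attack", matching the attack-over-hunger-over-tiredness-over-mating priorities compiled into the rules above.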

An ALP agent that combines conscious and subconscious thinking and that is aware of the meaning of its life
[Figure: the ALP agent cycle. The agent observes the world; forward reasoning from observations and maintenance goals derives achievement goals; backward reasoning derives candidate plans, whose consequences are judged by their probabilities and utilities in order to decide how to act. Lower-level input-output associations connect observations to actions directly.]

Epilog
Modeling Physical Skill Discovery and Diagnosis by Abduction
Ikuo Kobayashi 1) and Koichi Furukawa 2)
1) SFC Research Institute, Keio University
2) Graduate School of Media and Governance, Keio University
(Received: August 25, 2007)
Abstract: We investigate an Abductive Logic Programming (ALP) framework to find appropriate hypotheses to explain both professional and amateur skill performance, and to distinguish and diagnose faulty amateur performance. In our approach, we provide two kinds of rules: motion integrity constraints and performance rules. Motion integrity constraints are essential to formulate skillful performance, as they prevent the generation of hypotheses that contradict the constraints. Performance rules formulate the problem of achieving difficult physical tasks in terms of preferred body movements as well as preferred muscle usage and preferred posture. We also formulate the development of skills in terms of default logic, by considering basic skills as defaults and advanced skills as exceptions. In this case, we introduce preferences in integrity constraints: either hard integrity constraints, which must always be satisfied, or soft integrity constraints, which can be ignored if necessary. Finally, we apply this framework to realize skill diagnosis.