The Logic of Intelligence
Pei Wang
Department of Computer and Information Sciences, Temple University

Artificial General Intelligence
- Mainstream AI treats "Intelligence" as a collection of problem-specific and domain-specific parts
- Artificial General Intelligence (AGI) takes "Intelligence" as a general-purpose capability that should be treated as a whole
- AGI research still includes different research objectives and strategies

Artificial Intelligence and Logic
- "Intelligence" can be understood as "rationality" and "validity" --- "do the right thing"
- In general, "logic" is the study of valid reasoning, or the regularity in thinking
- Therefore, an AI system may be built according to a logic, by converting various thinking processes into reasoning processes

Reasoning System
A reasoning system typically consists of the following major components:
- a formal language
- a semantic theory
- a set of inference rules
- a memory structure
- a control mechanism
The first three are usually called a "logic"

Traditional Theories
- Language and inference rules: first-order predicate calculus
- Semantics: model theory
- Memory: relational or object-oriented data structures and databases
- Inference control: theory of computation (algorithm, computability, and computational complexity)

Problems of Traditional Theories
- Uncertainty: fuzzy concepts, changing meanings and truth values, plausible results, conflicting evidence, nondeterministic inference processes, …
- Semantic justification of non-deductive inference: induction, abduction, analogy, …
- Counter-intuitive results: sorites paradox, implication paradox, confirmation paradox, Wason's selection task, …
- Computability and complexity: termination problem, combinatorial explosion, …

Proposed Solutions
non-monotonic logic, paraconsistent logic, relevance logic, probabilistic logic, fuzzy logic, inductive logic, temporal logic, modal logic, situation calculus, possible-world theory, mental logic, mental models, case-based reasoning, Bayesian networks, neural networks, genetic algorithms, heuristic algorithms, learning algorithms, anytime algorithms, …

Common Root of the Problems
- The traditional theories were developed in the study of the foundations of mathematics, while the problems appear outside mathematics
- The logic of mathematics may be different from the logic of cognition
- In mathematical reasoning, the knowledge and resources are assumed to be sufficient (with respect to the tasks)

Different Types of Systems
- "Pure-axiomatic system": the system's knowledge and resources are assumed to be sufficient
- "Semi-axiomatic system": certain aspects (but not all) of the knowledge and resources are assumed to be sufficient
- "Non-axiomatic system": the knowledge and resources of the system are assumed to be generally insufficient

NARS (Non-Axiomatic Reasoning System)
- NARS uses a formal logic (language, semantics, inference rules) and is implemented in a computer system
- NARS is fully based on the assumption of insufficient knowledge and resources, in the sense of being a finite, real-time, open, and adaptive system
- NARS differs from traditional theories in all major components

Inheritance-Based Representation
S → P : there is an inheritance relation from term S to term P
- S is a specialization of P
- P is a generalization of S
Inheritance is reflexive and transitive
Example: bird → animal

Extension and Intension
For a given term T:
- its extension T^E = {x | x → T}
- its intension T^I = {x | T → x}
Theorem: (S → P) ⟺ (S^E ⊆ P^E) ⟺ (P^I ⊆ S^I)
Therefore, "Inheritance" means "inheritance of extension/intension"
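To make the definitions concrete, here is a minimal Python sketch (illustrative only, not part of NARS, which the later slides note is implemented in Java) that derives each term's extension and intension from a small set of binary inheritance statements and checks the theorem on a toy example; the helper names and the example statements are invented.

```python
# Toy sketch: derive extension and intension from binary inheritance statements
# and check that S -> P coincides with S^E being a subset of P^E and
# P^I being a subset of S^I. Not an actual NARS component.

def closure(statements):
    """Reflexive-transitive closure of the inheritance relation."""
    terms = {t for pair in statements for t in pair}
    rel = set(statements) | {(t, t) for t in terms}
    changed = True
    while changed:
        changed = False
        for (a, b) in list(rel):
            for (c, d) in list(rel):
                if b == c and (a, d) not in rel:
                    rel.add((a, d))
                    changed = True
    return rel

def extension(term, rel):
    return {x for (x, y) in rel if y == term}   # {x | x -> term}

def intension(term, rel):
    return {y for (x, y) in rel if x == term}   # {y | term -> y}

rel = closure([("robin", "bird"), ("bird", "animal")])
S, P = "robin", "animal"
print((S, P) in rel)                           # True
print(extension(S, rel) <= extension(P, rel))  # True
print(intension(P, rel) <= intension(S, rel))  # True
```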

Evidence
- Positive evidence of S → P : {x | x ∈ (S^E ∩ P^E) ∪ (P^I ∩ S^I)}
- Negative evidence of S → P : {x | x ∈ (S^E − P^E) ∪ (P^I − S^I)}
Amount of evidence:
- positive: w+ = |S^E ∩ P^E| + |P^I ∩ S^I|
- negative: w− = |S^E − P^E| + |P^I − S^I|
- total: w = w+ + w− = |S^E| + |P^I|
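As an illustration, the following Python snippet counts positive and negative evidence from extension and intension sets; the example sets are hypothetical and not taken from the slides.

```python
# Sketch: count evidence for S -> P from given extension/intension sets.
# The example sets below are hypothetical.

def evidence(S_ext, S_int, P_ext, P_int):
    w_plus = len(S_ext & P_ext) + len(P_int & S_int)    # positive evidence
    w_minus = len(S_ext - P_ext) + len(P_int - S_int)   # negative evidence
    return w_plus, w_minus, w_plus + w_minus

bird_ext = {"swan", "gull", "robin", "crow"}
bird_int = {"bird", "feathered_creature"}
swimmer_ext = {"swan", "gull", "fish"}
swimmer_int = {"swimmer"}

# Evidence for "bird -> swimmer"
print(evidence(bird_ext, bird_int, swimmer_ext, swimmer_int))   # (2, 3, 5)
```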

Truth Value
In NARS, the truth value of a statement is a pair of numbers that measures the evidential support for the statement:
S → P [f, c]
- f: frequency, f = w+ / w
- c: confidence, c = w / (w + 1)
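A two-line sketch of these definitions, continuing the hypothetical evidence counts from the previous snippet.

```python
# Sketch: turn evidence counts into a NARS-style truth value.

def truth_value(w_plus, w_minus):
    w = w_plus + w_minus
    return w_plus / w, w / (w + 1)   # (frequency, confidence)

print(truth_value(2, 3))   # (0.4, 0.8333...), i.e. [0.40, 0.83]
```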

Experience-Grounded Semantics
- The truth value of a statement is defined according to certain "idealized experience", consisting of a set of binary inheritance statements
- The meaning of a term is defined by its extension and intension, according to certain "idealized experience"
- So meaning and truth value change according to the system's experience

Syllogistic Inference Rules
- A typical syllogistic inference rule takes a pair of premises with a common term, and produces a conclusion
- The truth value of the conclusion is calculated by a truth-value function
- Different combinations of premises trigger different rules (with different truth-value functions)

To Design a Truth-value Function
1. Treat all involved variables as Boolean (binary) variables
2. For each value combination in the premises, decide the values in the conclusion
3. Build Boolean functions among the variables
4. Extend the functions to real numbers:
   not(x) = 1 - x
   and(x, y) = x * y
   or(x, y) = 1 - (1 - x) * (1 - y)
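Step 4 can be written directly as code; this is a small sketch of the extended Boolean operators (plain Python, chosen here only for illustration).

```python
# Sketch: Boolean operators extended from {0, 1} to the real interval [0, 1].

def NOT(x):
    return 1 - x

def AND(*xs):
    result = 1.0
    for x in xs:
        result *= x
    return result

def OR(*xs):
    result = 1.0
    for x in xs:
        result *= (1 - x)
    return 1 - result

print(NOT(0.9), AND(0.9, 0.9), OR(0.5, 0.5))   # ~0.1, ~0.81, 0.75
```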

Deduction
Example: bird → animal [1.00, 0.90], robin → bird [1.00, 0.90] ⊢ robin → animal [1.00, 0.81]
Rule: M → P [f1, c1], S → M [f2, c2] ⊢ S → P [f, c]
  f = f1 * f2
  c = c1 * c2 * f1 * f2
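An illustrative sketch of the deduction truth-value function above, checked against the bird/robin example (not the NARS implementation).

```python
# Sketch: deduction truth function, f = f1*f2, c = c1*c2*f1*f2.

def deduction(f1, c1, f2, c2):
    return f1 * f2, c1 * c2 * f1 * f2

# bird -> animal [1.00, 0.90] and robin -> bird [1.00, 0.90]
print(deduction(1.00, 0.90, 1.00, 0.90))   # ~(1.0, 0.81): robin -> animal [1.00, 0.81]
```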

Induction
Example: swan → bird [1.00, 0.90], swan → swimmer [1.00, 0.90] ⊢ bird → swimmer [1.00, 0.45]
Rule: M → P [f1, c1], M → S [f2, c2] ⊢ S → P [f, c]
  f = f1
  c = f2 * c1 * c2 / (f2 * c1 * c2 + 1)
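The induction truth function above as an illustrative sketch, checked against the swan example.

```python
# Sketch: induction truth function, f = f1, c = w/(w+1) with w = f2*c1*c2.

def induction(f1, c1, f2, c2):
    w = f2 * c1 * c2
    return f1, w / (w + 1)

# Both premises have truth value [1.00, 0.90]
print(induction(1.00, 0.90, 1.00, 0.90))   # ~(1.0, 0.45): bird -> swimmer [1.00, 0.45]
```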

Abduction
Example: seabird → swimmer [1.00, 0.90], gull → swimmer [1.00, 0.90] ⊢ gull → seabird [1.00, 0.45]
Rule: P → M [f1, c1], S → M [f2, c2] ⊢ S → P [f, c]
  f = f2
  c = f1 * c1 * c2 / (f1 * c1 * c2 + 1)
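The abduction truth function above as an illustrative sketch, checked against the gull/seabird example.

```python
# Sketch: abduction truth function, f = f2, c = w/(w+1) with w = f1*c1*c2.

def abduction(f1, c1, f2, c2):
    w = f1 * c1 * c2
    return f2, w / (w + 1)

# seabird -> swimmer [1.00, 0.90] (P -> M) and gull -> swimmer [1.00, 0.90] (S -> M)
print(abduction(1.00, 0.90, 1.00, 0.90))   # ~(1.0, 0.45): gull -> seabird [1.00, 0.45]
```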

Revision
Example: bird → swimmer [1.00, 0.62], bird → swimmer [0.00, 0.45] ⊢ bird → swimmer [0.67, 0.71]
Rule: S → P [f1, c1], S → P [f2, c2] ⊢ S → P [f, c]
  f = (f1 * c1 * (1 - c2) + f2 * c2 * (1 - c1)) / (c1 * (1 - c2) + c2 * (1 - c1))
  c = (c1 * (1 - c2) + c2 * (1 - c1)) / (c1 * (1 - c2) + c2 * (1 - c1) + (1 - c1) * (1 - c2))
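An illustrative sketch of the revision truth function above, which merges two truth values of the same statement, checked against the slide's example.

```python
# Sketch: revision truth function, merging two truth values of the same statement.

def revision(f1, c1, f2, c2):
    k1 = c1 * (1 - c2)
    k2 = c2 * (1 - c1)
    f = (f1 * k1 + f2 * k2) / (k1 + k2)
    c = (k1 + k2) / (k1 + k2 + (1 - c1) * (1 - c2))
    return f, c

# bird -> swimmer [1.00, 0.62] merged with bird -> swimmer [0.00, 0.45]
print(revision(1.00, 0.62, 0.00, 0.45))   # ~(0.67, 0.71)
```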

Other Inference Rules
- Analogy: M → P [f1, c1], S ↔ M [f2, c2] ⊢ S → P [f, c]
- Union: P → M [f1, c1], S → M [f2, c2] ⊢ (S ∪ P) → M [f, c]
- Implication: B ⇒ C [f1, c1], A ⇒ B [f2, c2] ⊢ A ⇒ C [f, c]

Other Relations and Inheritance
An arbitrary statement R(a, b, c) can be rewritten as inheritance relations with compound terms:
- (*, a, b, c) → R : "The relation among a, b, c is a kind of R."
- a → (/, R, _, b, c) : "a is such an x that satisfies R(x, b, c)."
- b → (/, R, a, _, c) : "b is such an x that satisfies R(a, x, c)."
- c → (/, R, a, b, _) : "c is such an x that satisfies R(a, b, x)."
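A small sketch that mechanically produces the four rewritings for an arbitrary relation; the function name and the ASCII arrow "-->" are chosen here purely for illustration.

```python
# Sketch: rewrite R(a, b, c) into inheritance statements with product ("*")
# and image ("/") compound terms, following the pattern on this slide.

def as_inheritance(relation, args):
    statements = ["(*, " + ", ".join(args) + ") --> " + relation]
    for i, arg in enumerate(args):
        slots = list(args)
        slots[i] = "_"
        statements.append(arg + " --> (/, " + relation + ", " + ", ".join(slots) + ")")
    return statements

for s in as_inheritance("R", ["a", "b", "c"]):
    print(s)
# (*, a, b, c) --> R
# a --> (/, R, _, b, c)
# b --> (/, R, a, _, c)
# c --> (/, R, a, b, _)
```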

Memory as a Belief Network
[Figure: a network of beliefs linking terms such as bird, robin, swan, gull, crow, swimmer, and feathered_creature, with truth values like [1.00, 0.90] and [0.00, 0.90] attached to the links]
The knowledge of the system is a network of beliefs among terms.
A term with all of its beliefs is a concept (e.g., C_bird).

Inference Tasks
NARS accepts several types of inference tasks:
- Knowledge to be absorbed
- Questions to be answered
- Goals to be achieved
A task is stored in the corresponding concepts
To process a task means letting it interact with the available beliefs in the concept
This process usually generates new tasks, beliefs, and concepts, recursively

Inference Process
NARS runs by repeating the following cycle:
1. Choose a concept within the memory
2. Choose a task within the concept
3. Choose a belief within the concept
4. Use inference rules to produce new tasks
5. Return the used items to memory
6. Add the new tasks into the memory, and provide an answer if available

Control Strategy
NARS maintains priority distributions among data items, uses them to make choices, and adjusts them after each step.
Factors that influence an item's priority:
- quality of the item
- usefulness of the item in history
- relevance of the item to the current context
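The following toy sketch ties the last two slides together: one pass of the working cycle with priority-weighted selection. It is a heavily simplified illustration under assumed data structures (a concept as a dict holding prioritized tasks and beliefs), not the actual NARS control mechanism, and the priority handling is made up.

```python
# Toy sketch of one working cycle with priority-weighted choices.
# Data structures and priority handling are simplified assumptions.
import random

def weighted_choice(items):
    """Pick a (priority, item) pair with probability proportional to priority."""
    total = sum(p for p, _ in items)
    r = random.uniform(0, total)
    for p, item in items:
        r -= p
        if r <= 0:
            return p, item
    return items[-1]

def work_cycle(memory, infer):
    _, concept = weighted_choice(memory)                   # 1. choose a concept
    _, task = weighted_choice(concept["tasks"])            # 2. choose a task in it
    _, belief = weighted_choice(concept["beliefs"])        # 3. choose a belief in it
    new_tasks = infer(task, belief)                        # 4. apply inference rules
    concept["tasks"].extend((0.5, t) for t in new_tasks)   # 5./6. return items, add new tasks
    return new_tasks

# Usage with a trivial inference function:
# memory = [(0.8, {"tasks": [(0.9, "robin --> animal?")],
#                  "beliefs": [(0.9, "robin --> bird [1.00, 0.90]")]})]
# work_cycle(memory, infer=lambda task, belief: [])
```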

Architecture and Working Cycle

Design and Implementation
- The conceptual design of NARS has been described in a series of publications
- Most parts of the design have been implemented in several prototypes, and the current version is open source in Java
- Working examples exist as proof of concept, but they only cover single-step inference or short inference processes
- The project is on-going, though it has produced novel and interesting results

Unified Solutions
- The truth value uniformly represents various kinds of uncertainty
- The truth value depends on both positive and negative evidence
- The non-deductive inference rules are justified according to the semantics
- The meaning of a term is determined by its experienced relations with other terms
- With syllogistic rules, the premises and conclusions must be semantically related
- The inference processes in NARS do not follow predetermined algorithms

Conclusions
- It is possible to build a reasoning system that adapts to its environment, and works with insufficient knowledge and resources
- Such a system provides a unified solution to many problems in A(G)I
- There is a logic of intelligence, though it is fundamentally different from the logic of mathematics