1 7 Principles of Synthetic Intelligence
Joscha Bach, University of Osnabrück, Cognitive Science
March 2008

2 What is Artificial General Intelligence up to? "Perception, and what depends on it, is inexplicable in a mechanical way, that is, using figures and motions. Suppose there were a machine, so arranged as to bring forth thoughts, experiences and perceptions; it would then certainly be possible to imagine it proportionally enlarged, in such a way as to allow entering it, as into a mill. This presupposed, one will not find anything upon its examination besides individual parts pushing each other, and never anything by which a perception could be explained." (Gottfried Wilhelm Leibniz, 1714)

4 AI Scepticism: G. W. Leibniz "Perception, and what depends on it, is inexplicable in a mechanical way, that is, using figures and motions."

5 AI Scepticism: Roger Penrose "The quality of understanding and feeling possessed by human beings is not something that can be simulated computationally."

6 AI Scepticism: John R. Searle "Syntax by itself is neither constitutive of nor sufficient for semantics." Computers only do syntax, so they can never understand anything.

7 AI Scepticism: Joseph Weizenbaum "Human experience is not transferable. (…) Computers cannot be creative."

8 AI Scepticism: General Consensus… Computers cannot, because they should not. The "Winter of AI" is far from over.

9 AI is not only trapped by cultural opposition. AI suffers from:
- paradigmatic fog
- methodologism
- lack of unified architectures
- too much ungrounded, symbolic modeling
- too much non-intelligent, robotic programming
- lack of integration of motivation and representation
- lack of conviction

10 Lessons for Synthesizing Intelligence
1. Build whole, functionalist architectures

11 Lessons for Synthesizing Intelligence
1. Build whole, functionalist architectures
(Figure: infrared imaging of a combustion engine)

14 Lessons for Synthesizing Intelligence
1. Build whole, functionalist architectures
Requirement: dissection of the system into parts and the relationships between them

15 #1: Build functionalist architectures
Requirement: dissection of the system into parts and the relationships between them

16 Lessons for Synthesizing Intelligence
1. Build whole, functionalist architectures
2. Let the question define the method

17 Lessons for Synthesizing Intelligence
1. Build whole, functionalist architectures
2. Let the question define the method – not vice versa! AI's specialized sub-disciplines will not be re-integrated into a whole.

18 Lessons for Synthesizing Intelligence
1. Build whole, functionalist architectures
2. Let the question define the method
3. Aim for the Big Picture, not narrow solutions

19 Conceptual Analysis: H-CogAff (Sloman 2001)

20 Lessons for Synthesizing Intelligence
1. Build whole, functionalist architectures
2. Let the question define the method
3. Aim for the Big Picture, not narrow solutions
4. Build grounded systems

21 Lessons for Synthesizing Intelligence
1. Build whole, functionalist architectures
2. Let the question define the method
3. Aim for the Big Picture, not narrow solutions
4. Build grounded systems – but do not get entangled in the "Symbol Grounding Problem". The meaning of a concept is equivalent to an adequate encoding over environmental patterns (a toy sketch of this idea follows below).
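
As a toy illustration of the grounding claim above (the meaning of a concept treated as an adequate encoding over environmental patterns), here is a minimal Python sketch. It is a hypothetical prototype-based encoder, not MicroPsi's actual representation; all names and numbers are made up for illustration.

```python
# Minimal sketch: a "concept" as an encoding over environmental patterns.
# Hypothetical illustration only; MicroPsi's actual representation is a
# hierarchical spreading-activation network, not this prototype model.

from math import sqrt

class PrototypeConcept:
    """Represents a concept as the running mean of observed sensor patterns."""

    def __init__(self, dimensions):
        self.prototype = [0.0] * dimensions
        self.count = 0

    def observe(self, pattern):
        """Fold a new environmental pattern into the encoding."""
        self.count += 1
        self.prototype = [p + (x - p) / self.count
                          for p, x in zip(self.prototype, pattern)]

    def match(self, pattern):
        """Return a similarity score in (0, 1]: how well the pattern fits the concept."""
        distance = sqrt(sum((p - x) ** 2 for p, x in zip(self.prototype, pattern)))
        return 1.0 / (1.0 + distance)

# Usage: ground a toy "red ball" concept in (hue, size) sensor readings.
red_ball = PrototypeConcept(dimensions=2)
for reading in [(0.95, 0.30), (0.90, 0.28), (0.97, 0.33)]:
    red_ball.observe(reading)
print(red_ball.match((0.93, 0.31)))   # high score: pattern fits the encoding
print(red_ball.match((0.10, 0.90)))   # low score: pattern does not fit
```

In MicroPsi itself the encoding is carried by hierarchical spreading-activation networks rather than flat prototype vectors (see the sketch after slide 30).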

22 Modal vs. amodal representation (Barsalou 99)

23 Lessons for Synthesizing Intelligence
1. Build whole, functionalist architectures
2. Let the question define the method
3. Aim for the Big Picture, not narrow solutions
4. Build grounded systems
5. Do not wait for robots to provide embodiment

24 Lessons for Synthesizing Intelligence
1. Build whole, functionalist architectures
2. Let the question define the method
3. Aim for the Big Picture, not narrow solutions
4. Build grounded systems
5. Do not wait for robots to provide embodiment – robotic embodiment is costly, but not necessarily more "real" than virtual embodiment.

26 Lessons for Synthesizing Intelligence
1. Build whole, functionalist architectures
2. Let the question define the method
3. Aim for the Big Picture, not narrow solutions
4. Build grounded systems
5. Do not wait for robots to provide embodiment
6. Build autonomous systems

27 Lessons for Synthesizing Intelligence
1. Build whole, functionalist architectures
2. Let the question define the method
3. Aim for the Big Picture, not narrow solutions
4. Build grounded systems
5. Do not wait for robots to provide embodiment
6. Build autonomous systems
Intelligence is an answer to the problem of serving polythematic goals, by unspecified means, in an open environment. → Integrate motivation and emotion into the model.

28 Lessons for Synthesizing Intelligence
1. Build whole, functionalist architectures
2. Let the question define the method
3. Aim for the Big Picture, not narrow solutions
4. Build grounded systems
5. Do not wait for robots to provide embodiment
6. Build autonomous systems
7. Intelligence is not going to simply "emerge"

29 Lessons for Synthesizing Intelligence
1. Build whole, functionalist architectures
2. Let the question define the method
3. Aim for the Big Picture, not narrow solutions
4. Build grounded systems
5. Do not wait for robots to provide embodiment
6. Build autonomous systems
7. Intelligence is not going to simply "emerge": sociality, personhood, experience, consciousness, emotion and motivation will have to be conceptually decomposed and their components and functional mechanisms realized.

30 Taking the Lessons: MicroPsi
- Integrated architecture, based on a theory originating in psychology
- Unified neuro-symbolic representation (hierarchical spreading activation networks; a minimal sketch follows below)
- Functional modeling of emotion:
  – emotion as cognitive configuration
  – emotional moderators
- Functional modeling of motivation:
  – modeling autonomous behavior
  – cognitive and physiological drives
  – integrating motivational relevance with perception/memory
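
The "unified neuro-symbolic representation" above refers to networks of nodes and weighted links over which activation spreads. The following is a minimal, self-contained sketch of such a spreading-activation node net; the class and function names (Node, spread) are hypothetical and the update rule is simplified, so this is not the MicroPsi node-net API.

```python
# Minimal sketch of a spreading-activation node net, the kind of unified
# neuro-symbolic substrate the slide refers to. Names and the update rule
# are illustrative assumptions, not the actual MicroPsi implementation.

class Node:
    def __init__(self, name):
        self.name = name
        self.activation = 0.0
        self.links = []            # list of (target Node, weight)

    def link_to(self, target, weight):
        self.links.append((target, weight))

def spread(nodes, decay=0.8, steps=1):
    """Propagate activation along weighted links for a number of steps."""
    for _ in range(steps):
        incoming = {node: 0.0 for node in nodes}
        for node in nodes:
            for target, weight in node.links:
                incoming[target] += node.activation * weight
        for node in nodes:
            # New activation: decayed old value plus weighted input, clamped to [0, 1].
            node.activation = max(0.0, min(1.0, node.activation * decay + incoming[node]))

# Usage: a tiny hierarchy where a percept activates the schema it is part of.
wheel, car = Node("wheel"), Node("car")
wheel.link_to(car, weight=0.5)     # "wheel" is evidence for "car"
wheel.activation = 1.0
spread([wheel, car], steps=2)
print(car.activation)              # "car" has picked up activation from "wheel"
```

The same substrate can carry both symbol-like schemas (nodes standing for concepts and plans) and sub-symbolic weighting, which is what "neuro-symbolic" means on the slide.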

31 Implementation: MicroPsi (Bach 03, 04, 05, 06)

32 Implementation: MicroPsi (Bach 03, 04, 05, 06). Low-level perception.

33 Implementation: MicroPsi (Bach 03, 04, 05, 06). Low-level perception. Control and simulation.

34 Implementation: MicroPsi (Bach 03, 04, 05, 06). Low-level perception. Control and simulation. Multi-agent interaction.

35 Implementation: MicroPsi (Bach 03, 04, 05, 06). Low-level perception. Control and simulation. Multi-agent interaction. Robot control.

36 Foundation of MicroPsi: the PSI theory (Dörner 99, 02). How can the different aspects of cognition be realized?

37 PSI theory (Dörner 99, 02)

41 Motivation in PSI/MicroPsi

42 Integrated representation

43 Goal of MicroPsi: a broad model of cognition
- Aim at a perceptual symbol system approach
- Integrate goal-setting
- Use the motivational and emotional system as an integral part of the model, addressing:
  – mental representation
  – physiological, physical and social demands and affordances
  – modulation/moderation of cognition

44 Lessons for Synthesizing Intelligence
1. Build whole, functionalist architectures
2. Let the question define the method
3. Aim for the Big Picture, not narrow solutions
4. Build grounded systems
5. Do not wait for robots to provide embodiment
6. Build autonomous systems
7. Intelligence is not going to simply "emerge"
Website: www.cognitive-agents.org (publications, agent download, information for developers)

45 … and this is where it starts. Thank you! Website: www.cognitive-agents.org (publications, agent download, information for developers)

46 Many thanks to…
- the Institute for Cognitive Science at the University of Osnabrück and the AI department at Humboldt University of Berlin for making this work possible
- Ronnie Vuine, David Salz, Matthias Füssel, Daniel Küstner, Colin Bauer, Julia Böttcher, Markus Dietzsch, Caryn Hein, Priska Herger, Stan James, Mario Negrello, Svetlana Polushkina, Stefan Schneider, Frank Schumann, Nora Toussaint, Cliodhna Quigley, Hagen Zahn, Henning Zahn and Yufan Zhao for their contributions

47 Motivation in PSI/MicroPsi

48 Modulation in PSI/MicroPsi

49 Motivation in PSI/MicroPsi
Urges/drives:
– Finite set of primary, pre-defined urges (drives)
– All goals of the system are associated with the satisfaction of an urge, including abstract problem solving, aesthetics, social relationships and altruistic behavior
– Urges reflect demands
– Categories:
  - physiological urges (food, water, integrity)
  - social urges (affiliation, internal legitimacy)
  - cognitive urges (reduction of uncertainty, and competence)
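
A minimal sketch of how an urge system like the one just described could be wired up: each urge tracks a demand, the urge's strength is assumed (hypothetically) to be the deviation of the demand from its target value, and the strongest urge dominates motive selection. Names and numbers are illustrative, not the actual Psi/MicroPsi implementation.

```python
# Minimal sketch of urges reflecting demands, as described on the slide.
# Hypothetical names and values; not the actual Psi/MicroPsi code.

class Urge:
    def __init__(self, name, category, target=1.0):
        self.name = name
        self.category = category       # "physiological", "social" or "cognitive"
        self.target = target           # desired demand level
        self.value = target            # current demand level

    def strength(self):
        """Urge strength = deviation of the demand from its target value."""
        return max(0.0, self.target - self.value)

urges = [
    Urge("food", "physiological"),
    Urge("water", "physiological"),
    Urge("integrity", "physiological"),
    Urge("affiliation", "social"),
    Urge("internal legitimacy", "social"),
    Urge("uncertainty reduction", "cognitive"),
    Urge("competence", "cognitive"),
]

# Simulate events that deplete some demands.
urges[0].value = 0.4        # getting hungry
urges[5].value = 0.7        # surprised by an unexpected event

# Motive selection: the currently strongest urge dominates goal selection.
dominant = max(urges, key=lambda u: u.strength())
print(dominant.name, round(dominant.strength(), 2))   # -> food 0.6
```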

50 Emotion in PSI/MicroPsi
Lower emotional level (affects):
– Not an independent sub-system, but an aspect of cognition
– Emotions are an emergent property of the modulation of perception, behavior and cognitive processing
– The phenomenal qualities of emotion are due to
  - the effect of modulatory settings on perception and cognitive functioning
  - the experience of accompanying physical sensations
(Higher-level) emotions:
– Directed affects
– The objects of affects are given by the motivational system
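
A minimal sketch of "emotion as cognitive configuration": a small set of modulator parameters (here, hypothetically, arousal, resolution level and selection threshold) is derived from the motivational situation, and an emotional label is only read off that configuration afterwards; the system itself just runs with the modulated settings. Parameter names, formulas and labels are illustrative assumptions, not the Psi/MicroPsi definitions.

```python
# Minimal sketch of the "emotion as cognitive configuration" idea:
# emotion is not a separate module but a setting of modulators that
# changes how perception and action selection work. Values and labels
# below are illustrative assumptions, not the Psi/MicroPsi definitions.

from dataclasses import dataclass

@dataclass
class Modulators:
    arousal: float = 0.3              # readiness to act, speed of processing
    resolution_level: float = 0.7     # how carefully percepts/plans are elaborated
    selection_threshold: float = 0.5  # how hard it is for other motives to take over

def configure(urgency, uncertainty):
    """Derive a modulator configuration from the current motivational situation."""
    return Modulators(
        arousal=min(1.0, 0.3 + urgency),
        resolution_level=max(0.1, 0.9 - urgency),     # urgent -> quicker, shallower cognition
        selection_threshold=min(1.0, 0.4 + urgency - 0.2 * uncertainty),
    )

def read_off_emotion(m):
    """An observer's label for the configuration; the system itself just runs with it."""
    if m.arousal > 0.8 and m.resolution_level < 0.4:
        return "something like panic/anger (fast, shallow, hard to interrupt)"
    if m.arousal < 0.4 and m.resolution_level > 0.6:
        return "something like calm concentration"
    return "mixed / neutral configuration"

state = configure(urgency=0.7, uncertainty=0.2)
print(state)
print(read_off_emotion(state))
```

The point of the sketch is the direction of explanation: the modulator settings come first, and the emotional vocabulary is a description of their joint effect on perception and action, which is how the slide frames affects.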

