Martin Takac Department of Computer Science University of Otago, New Zealand.

1 Martin Takáč, Department of Computer Science, University of Otago, New Zealand

2 Takáč, M.: Construction of Meanings in Living and Artificial Agents. Dissertation thesis, Comenius University, Bratislava, 2007. Supervisor: Ľubica Beňušková

3 Motivation: What is it good for? Application aspect: pre-defined ontologies are not sufficient in dynamic and open environments. It is better to endow agents with learning abilities and let them discover what is relevant and useful for them => a developmental approach to intelligent systems design.

4 Motivation: What is it good for? Philosophy of AI: Can machines understand? (Turing Test, Searle's Chinese Room, Harnad's Symbol Grounding Problem.) Cognitive science: a better understanding of our own cognition.

5 Can machines understand? Can animals understand? Can human infants understand? It depends on the definition of "understanding". Our approach: conceive understanding in such a way that the answer is yes, and see what we can get out of it.

6 Understanding. We say that an agent understands its environment if it picks up relevant environmental features and utilizes them for its goals/survival. Situated making of meaning of one's experience. Semiotics: Umwelt (von Uexküll); the sign (Peirce), with its triad of representamen (form), object (referent) and interpretant (meaning). Understanding is a gradual phenomenon in the living realm, ranging from very primitive innate forms to complex learned human linguistic cognition.

7 Key features of meaning: sensorimotor coupling with the environment; incremental and continuous construction of meaning in interactions with an open and dynamic environment; collective coordination of individually constructed meanings. [Takáč, M.: Construction of Meanings in Living and Artificial Agents. In: Trajkovski, G., Collins, S. G. (eds.): Agent-Based Societies: Social and Cultural Interactions, IGI Global, Hershey, PA, 2009.]

8 Goal. Propose a semantic representation that: could be incrementally and continuously (re)constructed from experience/interactions (sensorimotor coupling); would enable the agent to understand its world: causality (prediction of consequences of actions), planning, inference of intentions/internal states of other agents. Implement it computationally and measure the results.

9 Roadmap: semantics of distinguishing criteria; models of autonomous construction of meanings – by sensorimotor exploration, by social instruction (labelling), from episodes.

12 Semantics of distinguishing criteria. A distinguishing criterion is a basic semantic unit and an abstraction of the ability to distinguish, react differentially, understand (Šefránek, 2002). Neuro-biological motivation: locally tuned detectors (Balkenius, 1999). Geometric representation: conceptual spaces (Gärdenfors, 2000).

13 Conceptual spaces. Similarity is inversely proportional to distance. Concepts are represented by prototypes: learning – a prototype is computed as the centroid of the instances; categorization – finding the closest prototype. A concept is a (convex) region in the space. The metric is common for the whole space => symmetrical similarity.
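The prototype scheme above can be sketched in a few lines. This is a minimal illustration, not the thesis implementation: the attribute dimensions, concept names and data points are invented.

```python
# Sketch of prototype learning and categorization in a conceptual space.
# Learning: a prototype is the centroid of a concept's instances.
# Categorization: find the prototype closest to a new point.

def centroid(instances):
    """Component-wise mean of a concept's instances."""
    n = len(instances)
    dims = len(instances[0])
    return tuple(sum(p[d] for p in instances) / n for d in range(dims))

def euclidean(a, b):
    """Common metric for the whole space => symmetrical similarity."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def categorize(point, prototypes):
    """Similarity inversely proportional to distance: closest prototype wins."""
    return min(prototypes, key=lambda name: euclidean(point, prototypes[name]))

# Hypothetical 2-D space (e.g. redness, size) with two learned concepts.
prototypes = {
    "small_red": centroid([(0.9, 0.1), (1.0, 0.2), (0.8, 0.15)]),
    "big_blue":  centroid([(0.1, 0.9), (0.2, 1.0), (0.15, 0.8)]),
}

print(categorize((0.85, 0.1), prototypes))
```

A new point near the "small_red" instances is categorized as "small_red", because that prototype is the nearest one under the shared metric.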

14 Semantics of distinguishing criteria. A distinguishing criterion r: is incrementally constructed from the incoming sequence of examples of the concept: r ← {x1, …, xN} (learnability); identifies (distinguishes) instances of the concept: r(x) ∈ [0,1] (identification); auto-associatively completes the input: r(x) → p (auto-associativity).

15 Distinguishing criteria. Each criterion uses its own metric, with parameters reflecting the statistical properties of its input sample set. All learning starts from scratch, and is online and incremental!
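A criterion with its own, statistically tuned metric can be sketched as follows. This is a hedged stand-in, not the dissertation's model: the class name, the use of Welford's online mean/variance update, and the Gaussian-style membership score are all illustrative choices.

```python
# Sketch of a distinguishing criterion: learned online from scratch,
# one example at a time, with a per-dimension metric scaled by the
# variance of the examples seen so far (Welford's online algorithm).
import math

class Criterion:
    def __init__(self, dims):
        self.n = 0
        self.mean = [0.0] * dims
        self.m2 = [1e-6] * dims   # running sum of squared deviations

    def learn(self, x):
        """Incremental construction: r <- {x1, ..., xN}, example by example."""
        self.n += 1
        for d, v in enumerate(x):
            delta = v - self.mean[d]
            self.mean[d] += delta / self.n
            self.m2[d] += delta * (v - self.mean[d])

    def identify(self, x):
        """Identification: r(x) in [0, 1], variance-scaled distance to the mean."""
        var = [m / max(self.n - 1, 1) + 1e-6 for m in self.m2]
        d2 = sum((v - m) ** 2 / s for v, m, s in zip(x, self.mean, var))
        return math.exp(-0.5 * d2)

    def complete(self):
        """Auto-associativity: r(x) -> p, return the learned prototype."""
        return tuple(self.mean)

r = Criterion(2)
for x in [(1.0, 5.0), (1.2, 5.1), (0.9, 4.9)]:
    r.learn(x)
print(r.identify((1.0, 5.0)) > r.identify((3.0, 9.0)))
```

An input near the prototype scores higher than a distant one, and the score stays in [0, 1] as the identification property requires.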

16 Spectral decomposition of the covariance matrix
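For a two-dimensional attribute space, the spectral decomposition named on this slide can be computed in closed form. A minimal sketch, with invented data: the eigenvalues and the dominant eigenvector of the sample covariance matrix recover the principal direction of an elongated point cloud.

```python
# Spectral decomposition of a 2x2 covariance matrix in closed form.
import math

def covariance(points):
    """Sample covariance entries (sxx, sxy, syy) of 2-D points."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    return sxx, sxy, syy

def eigen2x2(a, b, c):
    """Eigenvalues and dominant unit eigenvector of [[a, b], [b, c]]."""
    half_trace = (a + c) / 2
    root = math.sqrt(((a - c) / 2) ** 2 + b * b)
    l1, l2 = half_trace + root, half_trace - root
    v1 = (b, l1 - a) if abs(b) > 1e-12 else (1.0, 0.0)
    norm = math.hypot(*v1)
    return (l1, l2), (v1[0] / norm, v1[1] / norm)

# Elongated cloud along the diagonal y ~ x: the dominant eigenvector
# should point along that diagonal.
pts = [(i, i + 0.1 * ((-1) ** i)) for i in range(10)]
(l1, l2), v1 = eigen2x2(*covariance(pts))
print((l1, l2), v1)
```

The eigenvalue product equals the determinant of the covariance matrix, a quick sanity check on the decomposition.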

17 Receptive fields (figure: receptive fields of the detector nodes over attribute dimensions a1 and a2)

18 Types of distinguishing criteria: properties ("big", "blue", "triangle"), relations ("left_of"), object categories ("house"), changes from t to t+1 ("grew"), and episodes ("a bulldozer pushed the house from the left", "the house fell down").

19 Roadmap: semantics of distinguishing criteria; models of autonomous construction of meanings – by sensorimotor exploration, by social instruction (labelling), from episodes.

21 Mechanisms of meaning construction. We know how to construct a criterion from its sample set: r ← {x1, …, xN}. The practical problem is to delineate the sample set (which criterion should be fed with the current stimuli?): unsupervised (clustering) – environmental relevance; by pragmatic feedback – ecological relevance; by naming (labelling) – social relevance.

23 Meaning creation by sensorimotor exploration. Environment: a virtual child, surrounded by objects: fruits, toys, furniture. In discrete time steps, the child performs random actions on randomly chosen objects: trying to lift them or put them down (with various parameters – force, arm angle). Actions performed on objects cause changes of their attribute values. Simple physics is simulated. Learning: the sensations of the child come in the form of perceptual frames (sets of attribute-value pairs) of objects, actions and changes [xa, xo, xc]. The child creates and updates criteria of objects Co, actions Ca and changes Cc, and their associations V ⊆ Ca × Co × Cc (all sets initially empty). Objects and actions are grouped into categories by the change: if an action leads to the same change on several objects, they all fall into the same category, and vice versa.
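The exploration loop above can be caricatured in a few lines. This is a toy, not the simulated child: the object names, the weight attribute, the force values and the success rule are all invented; only the overall scheme (random action on a random object, observe the change, record the association) follows the slide.

```python
# Toy sensorimotor exploration: record (action, object, change) associations
# and let object categories emerge from the changes actions cause.
import random

random.seed(0)

# Invented world: objects with a weight; "lift" raises an object only
# when the applied force exceeds its weight.
objects = {"apple": 2, "chair": 15, "ball": 1}

def act(obj, force):
    """Simulated physics: return the observed change frame."""
    return {"raised": objects[obj] < force}

associations = set()          # grown from experience, initially empty
for _ in range(200):
    obj = random.choice(list(objects))
    force = random.choice([5, 20])
    change = act(obj, force)
    associations.add((("lift", force), obj, change["raised"]))

# An implicit category discovered from experience, matching the
# affordance "objects too heavy to be lifted" (with the smaller force).
too_heavy = {o for ((a, f), o, raised) in associations
             if f == 5 and not raised}
print(sorted(too_heavy))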

24 Architecture. The agent is coupled to the world through: perception of object frames, e.g. { vertices: 3, posX: 20, posY: 7, R: 0, G: 0, B: 255 }; proprioception and observed changes; an action repertoire, e.g. lift( {force: 10, angle: 45} ). Internally: a causal module (objects, actions, consequences), a scheduler, and a motivation system (needs, goals).

25 Meaning creation by sensorimotor exploration – results. Causal relations – the agent becomes able to predict the consequences of its own actions. Affordances: "objects too heavy to be lifted", "objects that cannot be put down (because they are already on the ground)". Growing sensitivity proved helpful.

26 Roadmap: semantics of distinguishing criteria; models of autonomous construction of meanings – by sensorimotor exploration, by social instruction (labelling), from episodes.

27 The agent's architecture. Child: perception turns environment frames, e.g. { vertices: 3, posX: 20, posY: 7, R: 0, G: 0, B: 255 }, into percepts; learning and categorization build concepts ("big", "blue") linked to language; pragmatics (actions, causality, goals, planning) selects actions on the environment.

28 Cross-situational learning. No-true-homonymy assumption: different words have different senses, even if they share a referent (in that case, they denote different aspects of the referent). No-true-synonymy assumption: all referents of a word across multiple situations are considered instances of the same concept. => The more contexts of use, the better the chance that essential properties stay invariant while unimportant ones vary.
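The invariance argument can be made concrete with a toy intersection learner. This is a deliberately simplified stand-in for the model (the attribute names and situations are invented): a word's meaning is approximated by the attribute-value pairs shared by every referent the word was used for.

```python
# Toy cross-situational learning: the meaning of a word is whatever
# stays invariant across all of its observed referents.

def learn_meaning(situations):
    """Intersect the feature sets of all referents of a word."""
    meaning = dict(situations[0])
    for referent in situations[1:]:
        meaning = {k: v for k, v in meaning.items()
                   if referent.get(k) == v}
    return meaning

# "blue" heard in three situations, each with a different referent:
uses_of_blue = [
    {"colour": "blue", "shape": "triangle", "size": "big"},
    {"colour": "blue", "shape": "square",   "size": "small"},
    {"colour": "blue", "shape": "circle",   "size": "big"},
]
print(learn_meaning(uses_of_blue))   # -> {'colour': 'blue'}
```

Shape and size vary across the situations and drop out; only the essential property survives, exactly as the slide's argument predicts.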

29 Construction of meaning by labeling (figure: scenes with objects labelled "left_of", "triangle", "blue", "big")

30 Iterated learning: the learner acquires meanings from a teacher and then becomes the teacher of the next generation.

34 Construction of meaning by labeling – results. We measured: similarity of description between teacher and learner; ability to locate the referent(s) of a name. Good meaning similarity between two subsequent generations. Meaning shifts and drift over many generations. Replicator dynamics: more relevant and more general meanings survive; structural meanings are more stable. [Takáč, M.: Autonomous Construction of Ecologically and Socially Relevant Semantics. Cognitive Systems Research 9 (4), October 2008, pp. 293–311.]

35 Roadmap: semantics of distinguishing criteria; models of autonomous construction of meanings – by sensorimotor exploration, by social instruction (labelling), from episodes.

37 Episodic representation – learned from observed/performed actions. Example experiment: a 5 x 5 lattice; 4 agents (posX, posY, dir, energy); 10 objects (posX, posY, nutrition); actions: move(steps), turn(angle), eat(howMuch).

38 Frame representation of episodes. Role structure: [ACT, SUBJ, OBJ, ΔSUBJ, ΔOBJ]. Example: [ ACT = { eat: 1; howMuch: 6 }, SUBJ = { dir: 2; @energy: 10; posX: 4; posY: 3 }, OBJ = { nutrition: 129; posX: 3; posY: 3 }, ΔSUBJ = { dir: 0; @energy: +6; posX: 0; posY: 0 }, ΔOBJ = { nutrition: -6; posX: 0; posY: 0 } ]
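The role structure translates directly into nested dictionaries, and partial-frame recall can be caricatured with a simple overlap score. The attribute names follow the slide; the stored episodes and the nearest-match rule are a toy stand-in for the model's actual recall mechanism.

```python
# Frame-structured episodes and recall from a partial episode.
ROLES = ("ACT", "SUBJ", "OBJ", "dSUBJ", "dOBJ")   # dSUBJ/dOBJ stand for the deltas

episodes = [
    {"ACT": {"eat": 1, "howMuch": 6},
     "SUBJ": {"posX": 4, "posY": 3},
     "OBJ": {"nutrition": 129, "posX": 3, "posY": 3},
     "dSUBJ": {"@energy": 6},
     "dOBJ": {"nutrition": -6}},
    {"ACT": {"move": 1, "steps": 2},
     "SUBJ": {"posX": 4, "posY": 3},
     "OBJ": {},
     "dSUBJ": {"posX": 2},
     "dOBJ": {}},
]

def overlap(a, b):
    """Count attribute-value pairs shared by two frames."""
    return sum(1 for role in ROLES
               for k, v in a.get(role, {}).items()
               if b.get(role, {}).get(k) == v)

def recall(partial):
    """Complete a partial episode with the best-matching stored one."""
    return max(episodes, key=lambda e: overlap(partial, e))

# Planning-style query: which episode achieves the desired change?
best = recall({"dSUBJ": {"@energy": 6}})
print("eat" in best["ACT"])   # -> True
```

Filling in only a desired ΔSUBJ retrieves the eating episode, which is exactly the planning use of partial recall described two slides below.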

39 Episodic representation can be incomplete (partial): missing roles; missing attributes – because they are internal (private), due to noise/stochasticity, or due to the developmental stage. Incompleteness can be used for predictions.

40 Recall from a partial episode [ACT, SUBJ, OBJ, ΔSUBJ, ΔOBJ]: from SUBJ – the subject's abilities (what can I do?); from OBJ – the object's affordances (what can be done with it?); from ACT – verb islands (how and upon what to perform the action?); from a desired ΔSUBJ/ΔOBJ – action selection/planning (how to achieve a desired change?).

41 Requirements: an open set of possible attributes; stochastic occurrence of attributes; learning from observed/performed actions – incremental, permanent, with performance while learning and learning from performance; fast learning – reasonable performance after seeing one or a few examples.

42 Architecture: a primary layer feeding an episodic layer over the role structure [ACT, SUBJ, OBJ, ΔSUBJ, ΔOBJ].

43 Primary layer: transforms the continuous real domain of an attribute into a vector of real activities ∈ [0,1]; covers the real domain with a set of nodes (1-dim detectors), each reacting to a neighbourhood of some real value; neurobiological motivation – primary sensory cortices (localist coding); qualitatively important landmarks; approximates the distribution of attribute values with the least possible error.
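The encoding step can be sketched with Gaussian-tuned detectors. A minimal illustration only: the landmark positions, the shared tuning width and the Gaussian shape are invented, not taken from the model (which adapts its landmarks to the value distribution).

```python
# Localist coding of a continuous attribute: one detector node per
# landmark, each responding to a neighbourhood of its landmark value.
import math

def encode(value, landmarks, width=1.0):
    """Return one activity in [0, 1] per detector node."""
    return [math.exp(-((value - c) / width) ** 2) for c in landmarks]

landmarks = [0.0, 2.5, 5.0, 7.5, 10.0]
activities = encode(6.0, landmarks)

# Every activity lies in [0, 1]; the node whose landmark is nearest
# to the input (here 5.0, at index 2) responds most strongly.
print(activities.index(max(activities)))   # -> 2
```

The result is a real vector of bounded activities, as the slide requires; learning the landmark positions themselves (to minimize approximation error) is the part this sketch omits.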

44 Episodic layer: consists of nodes {e1, e2, …, ek} – episodic "memories". Nodes can be added, refined, merged and forgotten. A node ei maintains N, A, and for each attribute i ∈ A: pi, σ²i, fi; it reacts to a frame.

45 Episode-based learning – results. Agents were able to acquire causal relations (we measured predictive ability). Autoassociative recall – potential for simple inferences: the subject's abilities, the object's affordances, prediction, planning. Inherently episodic organization of knowledge (implicit categories of objects, properties, relations and actions). Prediction of unobservable properties ("empathy" or ToM). [Takáč, M., 2008. Developing Episodic Semantics. In: Proceedings of AKRR-08 (Adaptive Knowledge Representation and Reasoning).]

46 Mirroring effect, "empathy", inference of internal states. A0 sensed (A3 → O3): [ACT = {eat: 1; howMuch: 4 }, SUBJ = {dir: 1; posX: 2; posY: 0 }, OBJ = {nutrition: 1792; posX: 3; posY: 0 }, ΔSUBJ = {dir: 0; posX: 0; posY: 0 }, ΔOBJ = {nutrition: -4; posX: 0; posY: 0 } ]. A0 recalled: [ACT = {eat: 1 (100%); howMuch: 2 (50%) }, SUBJ = {dir: 0 (50%); @energy: 40 (46%); posX: 1 (100%); posY: 0 (100%) }, OBJ = {nutrition: 1795 (98%); posX: 3 (100%); posY: 0 (100%) }, ΔSUBJ = {dir: 0 (100%); @energy: 2.5 (45%); posX: 0 (100%); posY: 0 (100%) }, ΔOBJ = {nutrition: -4 (99%); posX: 0 (100%); posY: 0 (100%) } ]. Pragmatic success = 0.83.

47 Adding communication (future work). For successful inter-agent communication, meanings should be mutually coordinated and associated with signals in a collectively coherent way. A speech act as a type of action; collective dynamics; a pragmatic and contextual language representation connected to particular states of the speaker (SUBJ) and the hearer (OBJ), possibly leading to changes of their states (ΔSUBJ, ΔOBJ); prediction/production of different utterances depending on the personal style and affective state of the speaker, or inference of the speaker's internal state from its utterance in some context.

48 Conclusion – what we have done. A non-anthropocentric conceptual apparatus for the study of meanings in different kinds of agents (virtual, embodied, alive, human...). A computational representation of meanings amenable to autonomous construction, supported by implemented models. An interesting hybrid computational architecture that features: openness in terms of possible attributes and categories and their gradual change (no catastrophic forgetting); online learning – from scratch, incremental, fast and permanent; a dynamic organization amenable to analysis of internal structures.

49 Conclusion – what we haven't done. Cognitive modeling: fit to particular empirical/developmental data. Neuroscience: fit to particular brain structures. Real-scale models/applications: complex environments, many agents, noise tolerance. Full-blown semantics: abstract meanings, cultural scenarios and much more... and we haven't even got to language yet...

50 Thank you for your attention!

