Com1005: Machines and Intelligence Amanda Sharkey.

1 Com1005: Machines and Intelligence Amanda Sharkey

2 Three approaches to AI
–Symbolic AI, Traditional AI, or GOFAI (Good Old Fashioned AI)
–Connectionism, or Neural Computing
–New AI (nouvelle AI), Adaptive Behaviour, or Embodied AI

3 Changing emphasis …
–GOFAI: emphasis on human intelligence and cognitive reasoning
–Emphasis on representing, and reasoning with, knowledge
–Turn of the millennium: interest in a wider range of biological intelligence
–E.g. the self-organisation of ant colonies

4 New emphasis on interaction between
–Brain
–Body
–World

5 Rodney Brooks, MIT Artificial Intelligence Lab
Brooks, R.A. (1990) Elephants don’t play chess. In Pattie Maes (Ed.) Designing Autonomous Agents. Cambridge, Mass.: MIT Press.
Brooks, R.A. (1991) Intelligence without reason. In Proceedings of the 12th International Joint Conference on Artificial Intelligence. Morgan Kaufmann.
Brooks, R.A. (1991) Intelligence without representation. Artificial Intelligence, 47.

6 Brooks, R.A. (1990) Elephants don’t play chess. In Pattie Maes (Ed.) Designing Autonomous Agents. Cambridge, Mass.: MIT Press.
–Elephants don’t play chess – but they are still intelligent

7 Nouvelle AI
–Based on the physical grounding hypothesis: an intelligent system needs to have its representations grounded in the physical world
–Needs sensors and actuators connected to the world – not typed input and output
–The world is its own best model – it contains every detail; “the trick is to sense it appropriately and often enough”

8 Robots operate in the world, using “highly reactive architectures, with no reasoning systems, no manipulable representations, no symbols, and totally decentralised computation” (Brooks, 1991).

9 “I wish to build completely autonomous mobile agents that co-exist in the world with humans, and are seen by those humans as intelligent beings in their own right. I will call such agents Creatures” (Brooks, 1991).

10 A Creature must cope appropriately and in a timely fashion with changes in its dynamic environment
–A Creature should be robust with respect to its environment: minor changes in the properties of the world should have no effect
–A Creature should be able to maintain multiple goals and … adapt to its surroundings and capitalise on fortuitous circumstances
–A Creature should do something in the world; it should have some purpose in being

11 Set of principles (Brooks, 1991)
–The goal is to study complete, integrated, intelligent autonomous agents
–The agents should be embodied as mobile robots situated in the unmodified worlds found around the laboratory (embodiment)
–Robots should operate under different environmental conditions, e.g. different lighting conditions (situatedness)
–Robots should operate on timescales commensurate with the timescales used by humans (situatedness)

12 Traditional AI
–Concentrates on aspects of the problem that can be solved symbolically
–Assumes perception and recognition have already occurred
–E.g. knowledge of a chair: (CAN (SIT-ON PERSON CHAIR)), (CAN (STAND-ON PERSON CHAIR))
–The representation could be used to solve the problem of a hungry person in a room with bananas just out of reach
–But how will the chair be recognised?
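The kind of symbolic knowledge base the slide describes can be sketched as follows. This is an illustration in Python (classic GOFAI systems used Lisp), and the `facts` set and `can` query function are hypothetical names, not taken from any actual system:

```python
# A minimal sketch of GOFAI-style symbolic knowledge about a chair.
# Facts are stored as tuples mirroring the slide's Lisp-like notation.
facts = {
    ("CAN", "SIT-ON", "PERSON", "CHAIR"),
    ("CAN", "STAND-ON", "PERSON", "CHAIR"),
}

def can(actor, action, obj):
    """Query the knowledge base: can `actor` perform `action` on `obj`?"""
    return ("CAN", action, actor, obj) in facts

# A planner could chain such facts: the hungry person stands on the
# chair to reach the bananas. But nothing here says how a chair is
# *recognised* from raw perception -- which is the slide's point.
print(can("PERSON", "STAND-ON", "CHAIR"))  # True
```

The query works only once something has already asserted the symbol CHAIR; the gap between the sensed world and that assertion is exactly what Brooks criticises.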

13 Early robots developed by Brooks et al. at MIT
Allen
–Can approach a goal while avoiding obstacles – without a plan or map of the environment
–Distance sensors, and 3 layers of control
–Layer 1: avoid static and dynamic objects – repulsed through distance sensors
–Layer 2: randomly wander about
–Layer 3: head towards distant places

14 Subsumption Architecture
–Tight connection of perception to action
–Simple modules connected in layers
–Starting from the lowest level, e.g. obstacle avoidance
–Next level, e.g. goal finding
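A layered controller in the spirit of Allen's three layers can be sketched as below. This is a simplification under stated assumptions: real subsumption wires layers as asynchronous finite-state machines with suppression and inhibition links, whereas this sketch approximates that with a sequential priority loop; the sensor keys, thresholds, and action names are all hypothetical:

```python
# A simplified sketch of subsumption-style layered control,
# loosely modelled on Allen's three layers.
import random

def avoid(sensors):
    """Layer 1: turn away if any obstacle is too close."""
    if min(sensors["distances"]) < 0.3:
        return "turn-away"
    return None

def wander(sensors):
    """Layer 2: occasionally pick a random new heading."""
    if random.random() < 0.1:
        return "random-turn"
    return None

def seek_goal(sensors):
    """Layer 3: head towards a distant target if one is sensed."""
    if sensors.get("goal_bearing") is not None:
        return "steer-to-goal"
    return None

# Ordered by priority: the avoidance reflex overrides the layers above it,
# so goal-seeking only acts when nothing more urgent fires.
LAYERS = [avoid, wander, seek_goal]

def control_step(sensors):
    """One tick: the first layer with an opinion wins."""
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action
    return "cruise"
```

Each layer reads the sensors directly; no layer builds or consults a world model, which is what lets the robot react on the timescale of its environment.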

15 Herbert
–Coke-can-collecting robot
–Laser-based table-like-object finder – drives the robot to a table
–Laser-based coke-can-like-object finder – finds the coke can
–If the robot is stationary, arm control reaches out for the coke can
–Hand has a grasp reflex triggered when something breaks an infrared beam between the fingers
–Arm locates the soda can, hand is positioned near the can, hand grasps the can
–No planning, and no communication between modules – a reactive approach
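Herbert's modules communicate only through the world: each one watches for its own triggering condition and acts. A sketch of that chain as independent condition-action rules, with illustrative sensor names and actions (not the actual module names from Brooks and Connell's robot):

```python
# A sketch of Herbert's reactive pipeline. No module passes messages to
# another; each rule just reads the sensed world and acts, so the
# behaviours sequence themselves through their effects on the world.
def herbert_step(world):
    """Return the action of the most specific reflex that fires this tick."""
    if world.get("beam_broken"):       # infrared beam between the fingers
        return "grasp"
    if world.get("can_in_reach") and world.get("stationary"):
        return "extend-arm"
    if world.get("can_visible"):       # coke-can-like object found
        return "approach-can"
    if world.get("table_visible"):     # table-like object found
        return "drive-to-table"
    return "wander"
```

Approaching the can eventually makes it reachable, reaching eventually breaks the beam, and the grasp reflex fires: the overall can-collecting behaviour emerges without any module knowing the plan.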

16 No planning
–No representation of the environment
–Herbert could respond quickly to changed circumstances, e.g. a new obstacle, or an object approaching on a collision course
–Place a coke can in front of Herbert and it will pick it up – no expectations about where coke cans will be found

17 Key topics of Embodied AI
–Embodiment
–Situatedness
–Intelligence
–Emergence

18 Embodiment
–According to Brooks (1991), embodiment is critical:
1. Only an embodied agent is validated as one that can deal with the real world.
2. Only through a physical grounding can any internal symbolic system be given meaning.

19 Situatedness
–A situated automaton has sensors connected to the environment, and outputs connected to effectors
–Traditional AI works in a symbolic, abstracted domain, with no real connection to the external world – it deals with a model domain
–A situated agent gets information from its sensors, and responds in a timely fashion
–No intervening human
–“The world is its own best model”

20 Can have embodiment without situatedness
–E.g. a remote-controlled car
–Or a robot that carries out a predefined plan of action

21 Intelligence
–Brooks: “intelligence is determined by the dynamics of interaction with the world”
–Our reasoning and language abilities are comparatively recent developments
–Simple behaviours, perception, and mobility took much longer to evolve
–Look at simple animals
–Look at the dynamics of interaction of a robot with its environment

22 Emergence
–“Intelligence is in the eye of the observer” (Brooks)
–Intelligence emerges from the interaction of the components of a system
–Behaviour-based approach: intelligence emerges from the interaction of simple modules, e.g. obstacle-avoidance, goal-finding, and wall-following modules
–Individual components are simple, but the resulting combined behaviour appears intelligent

23 Main ideas of Brooks’ behaviour-based robotics
–No central model of the world
–No central locus of control
–No separation into perceptual system, central system, and actuation system
–Layers, or behaviours, run in parallel
–Behavioural competence is built up by adding behavioural modules

24 Criticisms of the nouvelle AI approach?

25 Will the approach scale up?
–Can subsumption-like approaches scale up to arbitrarily complex systems?
Need for representations?
–Can get apparently sophisticated behaviour from simple reactions to the environment
–But soon begin to need things like a map of the environment (e.g. to bring a coke can back to a bin)

26 Does embodied AI solve all problems?
–Symbol grounding?
–Need for human intervention and programming?

27 Does embodiment solve the symbol grounding problem?
–Searle (using the Chinese Room) argued that computers just manipulate symbols, without any real understanding of what those symbols refer to
–The symbol grounding problem: how to relate symbols to the real world?

28 Rodney Brooks: the claim that only through a physical grounding can any internal symbolic system be given meaning
–Emphasis on connecting robots to the world, using sensors
–E.g. a robot says “pig” in response to a real pig detected in the world

29 Does connecting a robot to the world with sensors and effectors solve the symbol-grounding problem?
–No – it is still dealing with binary input from the world
–Not really seeing …

30 Human involvement?
–The human researcher decides which modules to add, and what the environment and task should be
–See the Lovelace objection to the Turing test: “The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform” (Lady Lovelace’s memoir, 1842)

