
A New Artificial Intelligence 7 Kevin Warwick. Embodiment & Questions.


1 A New Artificial Intelligence 7 Kevin Warwick

2 Embodiment & Questions

3 Issues of Modern AI: We will look here at some of the important questions facing AI today. We will open up some of the directions being taken. We will attempt to move away from the restrictions imposed by Classical AI.

4 Brains: A brain has different neuronal structures, each with a specialised role (sensory, motor, interneurons). Neurons communicate through BINARY (not analogue) codes. We know something about the physical and chemical aspects of the brain. We know almost nothing about how memories are encoded or faces are recognised.
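To make the all-or-nothing point concrete, here is a minimal sketch (my illustration, not from the lecture) of a leaky integrate-and-fire neuron in Python: the input drive is analogue, but what the neuron emits is a binary spike train. All parameter values are arbitrary, chosen only so that the example actually fires.

```python
# Hypothetical illustration: a leaky integrate-and-fire neuron driven by an
# analogue current nonetheless emits an all-or-nothing (binary) spike train.
import numpy as np

def lif_spikes(current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Return a 0/1 spike train produced by an analogue input current."""
    v = v_rest
    spikes = np.zeros(len(current), dtype=int)
    for t, i_in in enumerate(current):
        v += dt * (-(v - v_rest) + i_in) / tau   # leaky integration of the input
        if v >= v_thresh:                        # threshold crossing: fire
            spikes[t] = 1
            v = v_reset                          # reset after the spike
    return spikes

rng = np.random.default_rng(0)
analogue_drive = 1.2 + 0.3 * rng.standard_normal(200)   # noisy analogue input
print(lif_spikes(analogue_drive))                        # output is purely 0s and 1s
```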

5 Innate Knowledge: Can learning occur on a blank slate? Must there be some prior bias? Are memories inherited? Meaningful convergence of ANNs depends on the number of neurons + topology + learning. Is this also true of a brain? Are there hard-wired cognitive biases?
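The convergence point can be shown with a toy experiment. The sketch below (my own, in plain numpy, not taken from the slides) trains a tiny multilayer perceptron on XOR: one hidden unit can never solve the task, two hidden units only sometimes converge, and a wider layer nearly always does, so success depends on topology and on the random initialisation, a kind of built-in prior bias.

```python
# Illustrative only: whether a small MLP learns XOR depends on hidden width
# (topology) and on the random starting weights (its "innate" bias).
import numpy as np

def train_xor(hidden, seed, epochs=5000, lr=1.0):
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0.], [1.], [1.], [0.]])
    W1 = rng.normal(0, 1, (2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1 + b1)                      # forward pass
        out = sig(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)       # backprop of squared error
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(0)
    return float(np.mean((out > 0.5) == (y > 0.5)))   # training accuracy

for hidden in (1, 2, 8):
    runs = [train_xor(hidden, seed) for seed in range(5)]
    print(f"hidden={hidden}: solved XOR in {sum(r == 1.0 for r in runs)}/5 runs")
```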

6 Genetics/emergence: Darwinian (natural) selection, which shapes individual behaviours, AND/OR Lamarckian evolution, in which offspring inherit acquired characteristics (e.g. the giraffe), LEAD TO the strengthening of particular circuits in the brain and the weakening of others.
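As a loose computational analogue (my own sketch, not anything from the lecture), the toy genetic algorithm below evolves bit-strings towards a target and compares plain Darwinian selection with a Lamarckian variant in which improvements acquired during an individual's lifetime are written back into the genome it passes on.

```python
# Hypothetical toy comparison of Darwinian vs Lamarckian evolution.
import random

TARGET = [1] * 20
fitness = lambda g: sum(a == b for a, b in zip(g, TARGET))

def lifetime_learning(genome, steps=3):
    """'Acquired characteristics': a short hill-climb during the lifetime."""
    g = genome[:]
    for _ in range(steps):
        i = random.randrange(len(g))
        trial = g[:]; trial[i] ^= 1
        if fitness(trial) > fitness(g):
            g = trial
    return g

def evolve(lamarckian, pop_size=30, generations=40, mut=0.05):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        learned = [lifetime_learning(g) for g in pop]
        # Selection acts on the learned (phenotypic) fitness in both cases.
        parents = sorted(zip(learned, pop), key=lambda p: fitness(p[0]),
                         reverse=True)[:pop_size // 2]
        # Lamarck: the acquired improvements are inherited; Darwin: they are not.
        breeding = [l if lamarckian else g for l, g in parents]
        pop = [[b ^ (random.random() < mut) for b in random.choice(breeding)]
               for _ in range(pop_size)]
    return max(fitness(g) for g in pop)

random.seed(0)
print("Darwinian best genome fitness: ", evolve(lamarckian=False))
print("Lamarckian best genome fitness:", evolve(lamarckian=True))
```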

7 Plato – unsupervised learning? "How can you enquire, Socrates, into that which you do not already know? What will you put forth as the subject of the enquiry? And if you find out what you want, how will you ever know that this is what you did not know?" i.e. how can we know we are someplace when we do not know where we are going?
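The slide's subtitle hints at one partial reply from machine learning: unsupervised methods do find structure without being told in advance what they are looking for. The sketch below (my connection, not on the slide) runs a tiny k-means clustering over unlabelled points and recovers the two groups they were drawn from.

```python
# Illustration only: k-means discovers cluster structure with no labels given.
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]   # start at random points
    for _ in range(iters):
        # Assign each point to its nearest centre; no labels are ever supplied.
        labels = np.argmin(((X[:, None] - centres[None]) ** 2).sum(-1), axis=1)
        centres = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centres

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
labels, centres = kmeans(X, k=2)
print("discovered centres:\n", centres)   # close to (0, 0) and (3, 3)
```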

8 Questions: Perceptions depend on distributed neural codes – how are these combined? What we perceive is highly dependent on how our brain attempts to interpret a situation or scene – how? How does an individual acquire language? How does a brain index temporally related information?

9 Agents + Emergence: Idea (Minsky): the mind is organised into sets of specialised functional units. Modular theories are a good fit for agents. Emergent globally intelligent behaviour arises from the cooperation of large numbers of agents. This is supported by fMRI scans.
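A minimal sketch of the emergence idea (my example; Minsky's agents are far richer): each agent below follows only a local rule, yet the population as a whole settles into a structured global pattern that no individual agent encodes.

```python
# Hypothetical demo: local rules only, global organisation emerges.
import numpy as np

rng = np.random.default_rng(0)
positions = rng.uniform(0, 100, size=50)             # 50 agents on a line

for _ in range(300):
    new = positions.copy()
    for i, x in enumerate(positions):
        # Local rule: drift toward the average of the three nearest neighbours.
        nearest = positions[np.argsort(np.abs(positions - x))[1:4]]   # skip self
        new[i] = x + 0.2 * (nearest.mean() - x)
    positions = new

clumps = np.unique(np.round(positions, 1))
print(f"50 agents self-organised into {len(clumps)} tight clump(s):", clumps)
```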

10 Piaget: We assimilate external phenomena according to our present understanding, and we accommodate our understanding to the demands of the phenomena.

11 Kant: Schemata are a priori structures used to organise experience of the external world. Observation is not passive and neutral but active and interpretive.

12 Perception: Perceived information never fits precisely into our schemata. Perception depends on I/O devices, in humans and robots alike. With different I/O the real world will be perceived differently. Each entity has a different concept of reality. There is NO absolute reality! (Berkeley)
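The I/O point is easy to demonstrate. In the sketch below (my illustration, with made-up sensors), two bodies with different sensing resolutions sample the same signal and end up with different internal pictures of the same world; neither picture is the signal itself.

```python
# Hypothetical sensors: the same world, two different perceived 'realities'.
import numpy as np

t = np.linspace(0, 1, 12)
world = np.sin(2 * np.pi * t)                      # the 'real' signal

one_bit_view = (world > 0).astype(int)             # sensor A: light/dark only
eight_level_view = np.round((world + 1) * 3.5).astype(int)   # sensor B: 8 levels

print("sensor A perceives:", one_bit_view)
print("sensor B perceives:", eight_level_view)
```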

13 Embodiment in cognition: Classical AI holds that the instantiation of a physical symbol system is irrelevant to its performance; the structure is what is important (the brain in a vat). New AI holds that intelligent action requires a physical embodiment that allows the entity to be integrated into the world. Present-day robot I/O is limited and requires more complexity in interfacing.

14 Culture: Classical AI treats the individual mind as the sole source of intelligence. But knowledge is a social construct; an understanding of the social context of knowledge and behaviour is also important (memes!).

15 Interpretations - Communication: Symbols are used in context; a domain has different interpretations depending on the goals. Sign interpretation relies on a coding system. The meaning of a symbol is understood in the context of its role as an interpreter.
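A trivial sketch of context-dependent interpretation (my example): the same symbol means different things under different coding systems, so its meaning lives with the interpreter and its goals rather than in the symbol alone.

```python
# The string "101" under three hypothetical coding systems.
SYMBOL = "101"

interpretations = {
    "binary number":  int(SYMBOL, 2),      # 5
    "decimal number": int(SYMBOL, 10),     # 101
    "room label":     f"Room {SYMBOL}",    # a name, not a quantity
}

for context, meaning in interpretations.items():
    print(f"as a {context:14s} -> {meaning}")
```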

16 Falsifiable Computation: Any number of confirming experiments is not sufficient to confirm a theory. Scientific theories must be falsifiable: there must exist circumstances under which a model is a poor approximant. Many computational models are not falsifiable – they are universal machines! We need computation that is falsifiable.
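As a rough illustration of what falsifiable computation could look like (my sketch; the model, numbers and tolerance are all invented), the theory below ships with a pre-stated prediction and a concrete test that can refute it, and the test does refute it in a world that deviates systematically from the theory.

```python
# Hypothetical falsification test for a simple computational model.
import numpy as np

def linear_model(x):
    """The theory under test: y = 2x + 1."""
    return 2.0 * x + 1.0

def falsification_test(x, y, tolerance=0.5):
    """Refuted if any observation misses the prediction by more than the
    stated tolerance -- a circumstance under which the model must fail."""
    errors = np.abs(y - linear_model(x))
    return bool(np.any(errors > tolerance)), float(errors.max())

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 30)
y_a = 2.0 * x + 1.0 + rng.normal(0, 0.1, 30)   # world A roughly obeys the theory
y_b = 2.0 * x + 1.0 + 0.3 * x ** 2             # world B deviates systematically

for name, y in (("world A", y_a), ("world B", y_b)):
    refuted, worst = falsification_test(x, y)
    print(f"{name}: refuted={refuted}, worst error={worst:.2f}")
```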

17 Let’s Move On: Classical AI (Hobbes/Locke/Aristotle) holds that intelligent processes conform to universal laws and are understandable and modellable. The converse view (Winograd/Penrose/Weizenbaum) is that important aspects of intelligence cannot be modelled. A model or simulation is not the real thing. The only ‘exact’ simulation of a human brain would be that specific human brain and no other, and even then it would need to be in its place and time.

18 Differences: Just because something is different does not make it worse. A simulation of a human brain could be more or less intelligent/conscious/self-aware/understanding. Models and simulations are used to explore, explain and predict; if a model is proven to be accurate for this, then that’s just fine.

19 Comments on Intelligence: As long as we understand the basics of what intelligence is, that is sufficient. We should not get bogged down trying to copy exactly the functioning of the human brain, interesting though that might be. More interesting is to create entities that are intelligent in their own right.

20 Next: Growing Brains – Biological AI

21 Contact Information: Web site: www.kevinwarwick.com. Email: k.warwick@reading.ac.uk. Tel: (44)-1189-318210. Fax: (44)-1189-318220. Professor Kevin Warwick, Department of Cybernetics, University of Reading, Whiteknights, Reading, RG6 6AY, UK.

