Consciousness, Cognition, and Internal Models Owen Holland
Let's consider the problems of an autonomous embodied agent (an animal or robot) in a complex, occasionally novel, dynamic, and hostile world, in which it has to achieve some task (or mission).
How could the agent achieve its task (or mission)?
- By being preprogrammed for every possible contingency? No.
- By having learned, for every contingency, the consequences of every possible action for the achievement of the mission? No.
- By having learned enough to predict the consequences of tried and untried actions, to evaluate those consequences for their likely contribution to the mission, and to select a relatively good course of action? Maybe…
But how could it predict?
- For actions it has tried before in these circumstances, it could simply remember what happened last time.
- If things are only slightly different, it could simply generalise from what it has learned.
- Otherwise, it could run some kind of simulation of its potential actions in the world, enabling it to predict their effects – even if they involve novel situations or actions.
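As a minimal sketch of this third option – simulating candidate actions and picking the one with the best predicted outcome – here is a toy Python example. The functions `predict` and `evaluate` are hypothetical stand-ins for a learned forward model and a mission-value function, not part of the actual system:

```python
def predict(state, action):
    """Hypothetical forward model: returns the predicted next state.
    Stand-in dynamics: the state moves halfway toward the action's target."""
    return state + 0.5 * (action - state)

def evaluate(state, goal):
    """Hypothetical mission-value function: higher is better (closer to goal)."""
    return -abs(goal - state)

def choose_action(state, goal, candidates):
    """Simulate each candidate action and select the best-scoring outcome."""
    return max(candidates, key=lambda a: evaluate(predict(state, a), goal))

# Usage: agent at position 0.0, goal at 10.0, three candidate actions.
best = choose_action(0.0, 10.0, [2.0, 8.0, -3.0])
print(best)  # 8.0 — its predicted outcome (4.0) lies closest to the goal
```

The point of the sketch is only the shape of the loop: prediction and evaluation are separate, so untried actions can be scored without ever being executed.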
What exactly has to be simulated? Whatever affects the mission. In an embodied agent, the agent can only affect the world through the actions of its body in and on the world, and the world can only affect the mission by affecting the agent's body. So it needs to simulate those aspects of its BODY that affect the world in ways that affect the mission, along with those aspects of the WORLD that affect the body in ways that affect the mission.
What is needed for simulation? The body is always present and available, and changes slowly, if at all. When it moves, it is usually because it has been commanded to move. The world is different: it is complex, occasionally novel, dynamic, and hostile. It is only locally available, and may contain objects of known and unknown kinds in known and unknown places. How should all this be modelled? As a single model containing both body and world? Or as a separate model of the body coupled to and interacting with a separate model of the world?
When an action is required, the Internal Agent Model (IAM) and the world model interact appropriately, the outcomes are evaluated, and information is passed to the agent.
In between times, the IAM and the world model are constantly updated. This ensures that the planning system is always up to date.
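One way to picture the separate-but-coupled arrangement is a toy sketch in which a body model (the IAM) and a world model are each updated from sensors between actions, and then run forward together to rehearse an action before it is taken. All class names, variables, and dynamics here are illustrative assumptions, not the actual architecture:

```python
class BodyModel:
    """Internal Agent Model (IAM): tracks the body's current state."""
    def __init__(self):
        self.pose = 0.0
    def update(self, proprioception):
        self.pose = proprioception           # refresh from body sensors
    def act(self, command):
        return command                        # force the body would exert

class WorldModel:
    """Separate model of the local world, coupled to the body model."""
    def __init__(self):
        self.obstacle = 5.0
    def update(self, exteroception):
        self.obstacle = exteroception         # refresh from external sensors
    def step(self, pose, force):
        new_pose = pose + force
        collided = new_pose >= self.obstacle
        return new_pose, collided

def rehearse(body, world, command):
    """Run the coupled models forward once and report the predicted outcome."""
    force = body.act(command)
    return world.step(body.pose, force)

body, world = BodyModel(), WorldModel()
body.update(0.0); world.update(5.0)           # 'in between times': keep both current
print(rehearse(body, world, 3.0))             # (3.0, False) — predicted safe move
print(rehearse(body, world, 6.0))             # (6.0, True)  — predicted collision
```

The two models never share state directly; they only exchange poses and forces, which is what makes it meaningful to call them separate but interacting.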
If the models are really good…
…the IAM will think it's this – the agent…
…but it's actually this – a model of the agent.
It will think it's interacting with the real world…
…but it's actually interacting with a model of the real world.
"The phenomenal self is a virtual agent perceiving virtual objects in a virtual world... I think that 'virtual reality' is the best technological metaphor which is currently available as a source for generating new theoretical intuitions... heuristically the most interesting concept may be that of 'full immersion'." – Thomas Metzinger, 2000
We're trying to build a robot that has an internal model of itself and an internal model of the world, and that uses them to predict the outcomes of novel or untried actions. And maybe the IAM will be conscious…
Copying the body To make sure any internal agent model developed is like our own, we should copy ourselves as best we can – our bodies, as well as our brains. So that means using paired elastic actuators (muscles) acting on a body consisting of rigid elements (bones) joined by freely moving joints, and linked by passive elastic elements (tendons)… and you only have to start building robots like that – we call them anthropomimetic robots – to realise how different they are from normal robots. Here's CRONOS:
Bones We make bones (often copied from Gray's Anatomy Online) by hand moulding using Polymorph (Friendly Plastic in the US), an engineering plastic:
- melts at 62°C, hardens at around 30°C
- a true thermoplastic – can be softened and remoulded locally
- good tensile strength
- can be stiffened with inserts
- low friction – bearing surfaces can be moulded in
Muscles and tendons Muscles are powered by electric screwdriver motors and gearboxes delivering up to 3 Nm from 6 V 7 Ah NiCad packs. We fit them with 10 mm spindles, onto which we wind 250 kg breaking-strain Dyneema (or Spectra) kiteline. We attach the kiteline to one or more strands of marine-grade shock cord – sleeved natural rubber, 5 mm or 10 mm thick – and attach the shock cord to the Polymorph. Sensors: multi-turn potentiometers for motor control and length measurement; motor currents are monitored to estimate torque; tension sensors are being evaluated.
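As a rough worked example of getting torque – and hence cable tension – from motor current: the standard DC-motor relation is torque = Kt × current, and dividing by the spindle's winding radius gives the tension in the line. The torque constant below is an assumed illustrative value, not a measured one, and gearbox ratio and friction are ignored:

```python
# Estimating cable tension from monitored motor current (assumed constants):
#   torque  = Kt * I        (DC-motor relation)
#   tension = torque / r    (force on the kiteline, from the spindle radius)
KT = 0.05          # N·m/A — illustrative torque constant, not a measured value
R_SPINDLE = 0.005  # m — a 10 mm spindle winds the line at ~5 mm radius

def cable_tension(current_amps):
    torque = KT * current_amps
    return torque / R_SPINDLE   # newtons of tension in the Dyneema line

print(cable_tension(10.0))  # 100.0 N at 10 A, under these assumed constants
```

In practice the gearbox multiplies the motor torque (and loses some to friction), so a calibrated constant for the whole drivetrain would replace `KT` here.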
Sensory inputs If it's to be like us, then vision will be its most important sense. So it has a visual system that mimics our own – but it only has one eye (don't ask!)
Observations With these anthropomimetic robots, every movement and every external force is reflected through the whole structure, and will deform it unless active compensation is applied. Some of this compensation can be reactive, but much of it will have to be predictive (internal models again) to enable actions to be carried out from a reasonably stable platform. This goes far beyond merely maintaining the balance of a passively rigid structure.
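A sketch of how reactive and predictive compensation might combine at a single joint: a feedforward term cancels the disturbance the internal model predicts the rest of the body will impose, and a PD feedback term mops up the residual. The gains and the disturbance signal are illustrative assumptions, not values from the real controller:

```python
def compensation_torque(q, q_dot, q_ref, predicted_disturbance,
                        kp=20.0, kd=2.0):
    """Combine predictive (feedforward) and reactive (feedback) compensation.

    predicted_disturbance: torque the internal model expects the rest of
    the body to impose on this joint (a hypothetical model output).
    """
    feedforward = -predicted_disturbance          # cancel what the model predicts
    feedback = kp * (q_ref - q) - kd * q_dot      # PD term handles the residual
    return feedforward + feedback

# A disturbance of +1.5 N·m predicted while the joint sits at its reference:
print(compensation_torque(q=0.0, q_dot=0.0, q_ref=0.0,
                          predicted_disturbance=1.5))  # -1.5
```

With feedback alone the joint would have to be deflected before any correcting torque appeared; the predictive term is what lets the structure stay stable in anticipation of a whole-body movement.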
EVERY MOVEMENT IS A WHOLE BODY MOVEMENT What we have now is a serious control problem – and that's good, because it's essentially the same problem the brain has, and the brain has solved it.
And now for the rest of the architecture… We now need to provide the robot with an internal model of itself and an internal model of the world, and their interactions must have predictive value. For many reasons, it would be good if the control of the robot's internal model was similar enough to the control of the real robot to enable the same control programmes to be used for both. And it would be nice if the simulated interactions could run at least in real time. Where should we look for inspiration?
SIMNOS We have used the Ageia PhysX libraries to build a model of CRONOS. We think we've got it right – it is uncontrollable in exactly the same way as CRONOS – and motor programs that work (or fail) on SIMNOS should work (or fail) on CRONOS. And we can populate the world with whatever objects, particles, cloth, and fluids we want, and the robot's interactions with them will be correct – i.e. will have predictive value. See what you think:
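The idea of one control program driving both the simulated and the real robot can be sketched with a shared interface: the controller only sees the interface, so the same loop runs on either platform. The backend below uses toy dynamics and assumed method names, not the actual SIMNOS or CRONOS APIs:

```python
class RobotInterface:
    """Common interface so one control program drives either platform."""
    def set_motor(self, joint, voltage): ...
    def read_angle(self, joint): ...

class SimnosBackend(RobotInterface):
    """Stand-in for the physics-engine model (hypothetical API)."""
    def __init__(self):
        self.angles = {}
    def set_motor(self, joint, voltage):
        # toy dynamics: the joint angle integrates the motor command
        self.angles[joint] = self.angles.get(joint, 0.0) + 0.1 * voltage
    def read_angle(self, joint):
        return self.angles.get(joint, 0.0)

def controller(robot, joint, target, steps=50):
    """The same proportional control loop, runnable against either backend."""
    for _ in range(steps):
        error = target - robot.read_angle(joint)
        robot.set_motor(joint, 2.0 * error)
    return robot.read_angle(joint)

sim = SimnosBackend()
print(round(controller(sim, "elbow", 1.0), 3))  # converges to 1.0
```

A real-robot backend implementing the same two methods could be swapped in without touching `controller`, which is the property the slide is asking for: motor programs that work (or fail) on one platform should work (or fail) on the other.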
Where are we now?
Is all this pain really necessary? If consciousness is about having virtual models of real things, could it also be found in virtual models of virtual things? Couldn't we dispense with the drudgery of building real robots and just simulate the whole shebang? I don't see why not. But who would believe it?
Thanks to Rob Knight Richard Newcombe David Gamez Hugo Gravato Marques Magdalena Kogutowska Tom Troscianko Iain Gilchrist Ben Vincent