EECS 690 April 18
Child-machines In his landmark paper, Alan Turing defined ‘machine’ rather rigidly so as to prevent male/female “engineering teams” from winning the contest by bringing about a child in the usual way. He did, however, think that the most effective approach to a thinking machine would be to build a machine with the capacity to learn, and then to teach it much as we teach our own children. That is the approach discussed in this section of the text.
How much (or little) do you build in? If human children are the model (a starting point open to dispute), then there is some debate over how much or how little innate knowledge human children come equipped with. –On one side are empiricists, largely inspired by Locke, who view the child-machine as a ‘tabula rasa’ (blank slate). –On the other side are Chomskyans and most cognitive psychologists who do infant research (the text overstates the balance of the debate in this area), who see a rather large amount of innate knowledge as a prerequisite for what we call learning.
Embodiment Embodiment has quickly become a nearly universal element of AI research. At heart it represents a simple design concern: rather than painstakingly creating a representation of the world in a digital environment, let the world represent itself and build something that interacts with it, with or without storing any robust representation of the world. The first major research effort in embodiment was MIT’s Cog (a project inspired by, and carried out in collaboration with, Daniel Dennett).
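The embodied design concern can be illustrated with a minimal sketch of a reactive control loop: the agent maps the current sensor reading directly to an action instead of consulting a stored world model. All names and thresholds here are hypothetical illustrations, not drawn from the Cog project.

```python
def reactive_step(distance_to_obstacle: float) -> str:
    """Choose an action from the current sensor reading alone --
    'let the world represent itself' rather than querying a map."""
    if distance_to_obstacle < 0.5:   # too close: turn away
        return "turn"
    return "forward"

# The loop never builds or updates an internal map of the room;
# it simply re-reads the sensor on every cycle.
readings = [2.0, 1.1, 0.4, 0.9]                 # simulated sensor stream
actions = [reactive_step(r) for r in readings]  # → forward, forward, turn, forward
```

The design choice is that all state lives in the world itself; the controller carries no memory between cycles.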
How to reinforce (teach) morality? It seems that many parents teach morally acceptable behavior by reward and punishment. It is a technical challenge to figure out how to build these aspects into a machine in a suitable way. Additionally, it may be inadequate: sociopaths tend to view morality solely in terms of benefits and costs to themselves, which indicates that there is more to behaving morally than reward and punishment alone. Typical persons also seem to have desires to get along harmoniously and to benefit groups, sometimes at a cost to the individual.
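The reward-and-punishment mechanism can be sketched as a tiny reinforcement-learning loop. The actions and reward values below are hypothetical stand-ins for "praised" versus "punished" behavior; this is only an illustration of the technical mechanism, not a proposal for machine ethics.

```python
import random

ACTIONS = ["share", "grab"]
REWARD = {"share": 1.0, "grab": -1.0}  # praise vs. punishment

q = {a: 0.0 for a in ACTIONS}  # learned value of each action
alpha = 0.5                    # learning rate

random.seed(0)
for _ in range(100):
    a = random.choice(ACTIONS)          # explore by trying actions
    q[a] += alpha * (REWARD[a] - q[a])  # nudge estimate toward observed reward

best = max(q, key=q.get)  # the rewarded action comes to dominate
```

Note what the slide's worry amounts to here: the learner ends up valuing "share" only because of the payoff it received, exactly the cost-benefit stance attributed to the sociopath.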
Pain and moral subjects It is generally considered wrong to inflict pain without some greater purpose. It is in this sense that lower animals are seen as moral subjects, that is, in the sense that it is wrong to inflict pain on them without a compelling reason. What communicates this pain to us as humans is the animals’ pain behaviors, which are intelligible to us as such. Consider a (ro)bot that exhibited intelligible pain behaviors when being damaged or misused or put into what it could recognize as situations likely to lead to damage. Would we feel as if we had any duty not to inflict such pain?
Doubts about this approach being purely “bottom-up” One of the virtues of this approach is that ethical systems achieved in this manner are aware of and sensitive to the specific cultural norms of their “upbringing”. This virtue also suggests a downside: what criteria are to be used in the machine’s ethical training? Does this indicate that we have to have some top-down approach in mind before even starting a bottom-up approach?
Modules, a Middle-up approach It may be that even a learning machine can be broken up into smaller discrete tasks (e.g. distinguishing humans from inanimate human-shaped objects, navigating hallways without damaging things, etc.)
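The middle-up idea of decomposing learning into discrete tasks can be sketched as separately built competencies composed by a simple dispatcher. The module names and percept fields below are hypothetical illustrations of the two example tasks on the slide.

```python
def detect_human(percept: dict) -> bool:
    # stand-in for a module that distinguishes humans from
    # inanimate human-shaped objects
    return percept.get("is_warm", False) and percept.get("moves", False)

def navigate(percept: dict) -> str:
    # stand-in for a hallway-navigation module that avoids damage
    return "slow_down" if percept.get("obstacle", False) else "proceed"

def dispatch(percept: dict) -> str:
    # compose the modules, giving human detection priority
    if detect_human(percept):
        return "yield_to_human"
    return navigate(percept)

result = dispatch({"is_warm": True, "moves": True, "obstacle": True})
```

Each module can be trained or engineered independently, which is what makes this "middle-up" rather than a single end-to-end learner.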
Results It seems that this kind of overt imitation of human moral behavior raises a few questions. Could these systems be any more consistently moral than we are? If not, is that acceptable? Since human morality is dynamic, is it acceptable that these systems be more moral at some times than at others? Is it acceptable for them to change their minds about morals over time?