Ethical issues in human-robot interaction Blay Whitby Centre for Research in Cognitive Science Department of Informatics University of Sussex
Structure Technology Ethics The Glover Principle Some Ethical Concerns Existing Problem Technologies A Call for Action
The Humanity Principle Technology should be designed, built, and made available only in so far as it benefits humanity. What about animals? The cui bono problem? What about changes caused to social structures - in particular the strengthening of existing power relations?
The Glover Principle What Sort of People Should There Be? (1984) - A simple moratorium on human cloning is merely a delaying tactic. There is no ethically consistent way of resisting technological advances (unless the technology is all bad). Therefore constant and complete ethical scrutiny is required.
Technology Ethics Therefore ethical input is essential at the design and implementation phases of many contemporary technologies. Human-like interfaces, exploitation of human affective responses, and robots and other systems with intimate roles all fall into this category.
Example Technologies Caring systems for the elderly (Fogg and Tseng 1999) Tutoring systems that attempt to employ or modify the affective state of the learner (du Boulay et al 1999) Cyber-therapies (ELIZA has not gone away) Kismet (Breazeal and Scassellati 2002) The general enthusiasm for more human-like interfaces.
Some Ethical Concerns Vulnerable users Deliberate manipulation of human emotions Interfaces, programs, and robots that pretend or explicitly claim to be more human-like (or perhaps animal-like) than they really are
Ethical Concerns in Practice How vulnerable are we? The uncanny valley Frude's dystopia Human replacement caring systems Autism and robots
Frude's Dystopia Frude, N. (1983) predicted that computer companions will be preferred to human society. Frude saw such developments as inevitable, and as amounting to more or less the collapse of human society.
Frude's Dystopia Frude's predictions haven't come about yet, but there's plenty of work that should worry us: a major predicted application in the immediate future is caring systems. MITI Japan - recent major initiative for helpers and companions for the elderly.
Autism and Robots The Aurora Project, University of Hertfordshire: the use of robots to improve communication skills in autistic children (Woods et al 2004, 2005). An important ethical question is: what should we do with this information?
Existing Codes BCS - no guidance ACM - no guidance Whitby 1988, CSS 1989 (at least mentioned)
Claims not made The uncanny valley will save us all from any real harm. Frude's Dystopia is inevitable (or even dystopian). Human-like interfaces are, in general, morally repugnant. Human-like interfaces should not be built.
A call to action The design, implementation, testing, and use of all types of human-like interfaces should now be subject to ethical scrutiny. The concept of "vulnerable users" should enter our ethical vocabulary. Interdisciplinary scientific research aimed at resolving some of the relevant empirical questions should now be encouraged.