
1 Digital Soul: Intelligent Machines and Human Values
Thomas M. Georges
COMP 3851, 2009, Matthew Cudmore

2 Overview

3 [Artificial] Distinctions
Artificial: artificial intelligence, weak AI, virtual reality, machine intelligence
Real: real intelligence, strong AI, reality, human intelligence ("carbon chauvinism")

4 What Makes Computers So Smart?
Computers' original job was to do arithmetic
Turing point (1940s): universal computers
Divide and conquer: 1s and 0s
– Limitations?
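The "divide and conquer: 1s and 0s" point can be made concrete with a small sketch (my own illustration, not from the slides): any data a computer handles, even text, reduces to strings of binary digits.

```python
# Reduce the text "AI" to the bits a computer actually stores.
# Each ASCII character becomes one 8-bit byte.
text = "AI"
bits = " ".join(format(byte, "08b") for byte in text.encode("ascii"))
print(bits)  # 01000001 01001001
```

The same reduction applies to images, sound, and programs themselves, which is what makes a universal computer universal.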

5 Smarter Than Us?
How could we create something smarter than us?
Brain power
– Blue Brain project
– ~100 billion neurons, ~100 trillion synapses
Computing power
– Moore's law
– Memory capacity, speed, exactitude
Expert systems
Simple learning machines
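As a back-of-the-envelope sketch of the Moore's-law argument (my own arithmetic, not from the slides; the two-year doubling period and the 100-million-transistor starting point are assumptions, and transistors are not synapses), one can count how many doublings separate a hypothetical chip from the brain's roughly 100 trillion synapses:

```python
# Count doublings from an assumed 100 million transistors to ~100 trillion
# synapses, at an assumed Moore's-law rate of one doubling every 2 years.
start = 100_000_000            # assumed starting transistor count
target = 100_000_000_000_000   # ~100 trillion synapses
doublings = 0
count = start
while count < target:
    count *= 2
    doublings += 1
print(doublings, doublings * 2)  # 20 doublings, i.e. about 40 years
```

The point of the slide is not the exact number but that exponential growth makes brain-scale hardware a question of decades, not centuries.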

6 Machines Who Think
"Can machines think?"
– Practically uninteresting
– Turing test; Chinese room
Not If, but When
– Moore's law
– Mere power isn't enough
"i dont want no robot thinking like me."
A machine could never…?

7 Let the Android Do It
Robots today have specific functions
Goal-seeking robots with values (persistent cognitive biases)
Leave more decisions, and more mistakes, to the androids
Arthur C. Clarke: "The future isn't what it used to be."

8 What Is Intelligence?
The gold standard: IQ
Common sense
Memory, learning, selective attention
Pattern recognition (CAPTCHA)
Understanding
Creativity, imagination
Strategies, goals
Self-awareness

9 What Is Consciousness?
Not just degree, but also nature of consciousness
Self-monitoring, self-maintaining, self-improving (knowledge of right and wrong)
Short-term memory of thought
Long-term memory of self
Attention, high-level awareness
Self-understanding
Paradox of free will

10 Can Computers Have Emotions?
Dualistic thinking: head and heart
Emotions as knob settings that reorganize priorities
Mood-sensing computers
– Personal assistants, etc.

11 Can Your PC Become Neurotic?
Dysfunctional response to conflicting instructions
HAL in 2001
– "Never distort information"
– "Do not disclose the real purpose of the mission to the crew"
– Murdered the crew
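HAL's bind can be shown with a toy model (my own illustration, not from the book): treat each directive as a constraint that any action must satisfy, and observe that the two directives together rule out every option.

```python
# Each directive is a predicate over possible actions. HAL's two orders
# are jointly unsatisfiable: no action passes both checks.
directives = {
    "never_distort":   lambda action: action != "lie_to_crew",
    "conceal_mission": lambda action: action != "tell_crew_truth",
}
actions = ["tell_crew_truth", "lie_to_crew"]
permissible = [a for a in actions
               if all(rule(a) for rule in directives.values())]
print(permissible)  # [] -- no permissible action remains
```

A system with no permissible action must either freeze or invent a third option; HAL's "solution" was to remove the crew the constraints referred to.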

12 The Moral Mind
Moral creatures act out of self-interest
Different cultures, different morals
Moral inertia: only at the precipice do we evolve
New moral codes based on reason
– A science of human values

13 Moral Problems with Intelligent Artifacts
Engineering & ethics
Four levels of moral/ethical problems:
1. Old problems in a new light
2. How we see ourselves
3. How to treat sentient machines
4. How should sentient machines behave?
Crime and punishment

14 The Moral Machine
Isaac Asimov's Three Laws of Robotics:
1. A robot may not injure a human being, or through inaction allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except when such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Prime directives that must not be violated
Is HAL to blame?
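The Laws' strict priority ordering can be sketched as lexicographic comparison (my own toy model, not Asimov's or the book's): score each candidate action by which laws it violates, with lower-numbered laws dominating, and pick the least-bad action.

```python
# Score each action as a tuple (Law1, Law2, Law3 violations). Python
# compares tuples lexicographically, so min() enforces the priority
# ordering: Law 1 outranks Law 2, which outranks Law 3.
actions = {
    "ignore_order_and_hide":  (0, 1, 0),  # disobeys a human (Law 2)
    "obey_order_into_danger": (0, 0, 1),  # sacrifices itself (Law 3)
    "harm_human_to_survive":  (1, 0, 0),  # injures a human (Law 1)
}
best = min(actions, key=lambda a: actions[a])
print(best)  # obey_order_into_danger: Law 2 outranks Law 3
```

Note what this toy model cannot do: decide whether HAL's programmers or HAL itself bears the blame when the scoring is wrong, which is the slide's real question.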

15 Will Machines Take Over?
Machines already do much of our work
Humans will not understand the details of the machines that run the world
Machines might develop their own goals
Out of control on Wall Street
A painless, even pleasurable, transition

16 Why Not Just Pull the Plug?
We're addicted!
We cannot stop research
– Scientists strongly oppose taboos and restrictions on what they may and may not look into
– Bans would drive development underground
Self-preservation
Diversification
Cybercide: murder?

17 Cultures in Collision
The Other is dangerous
– History has taught us that conquest can mean enslavement or extinction
Scientists versus humanists

18 Beyond Human Dignity
Dignity, if machines meet or surpass us
– Our concepts of soul and free will
– Pride in humanity and its achievements
– Who could take credit?
– We are still somehow responsible, even if not free
– Demystify human nature: would we despair?
– What if we all believed there were no free will?
– We don't know what's possible: keep searching!

19 Extinction or Immortality?
Homo cyberneticus
Virtual reality: mind uploading
Genetic engineering
Mechanical bodies
Fermi's paradox
Peaceful coexistence
Utopian hope

20 The Enemy Within
"Our willingness to let others think for us"
– Humans who act like machines
– "Just following orders!"
– "Well, that's what the computer says!"
Groupthink and conformity
– Minimize conflict and reach consensus
– Diffusion of responsibility
Waiting for the messiah
– The challenge now is to think for ourselves
Critical thinking, a lost art

21 Electronic Democracy
Teledemocracy
– Too much information, not enough attention
– Impractical today, and would exclude many people
Intelligent delegates
Supernegotiators
No more secrets: dynamic open information
– Whistle-blowers anonymous
The Napster effect: free information
– Information may cease to be considered property

22 Rethinking the Covenant between Science and Society
Risky fields (Bill Joy: GNR)
– Genetic engineering
– Nanotechnology
– Robotics & artificial intelligence
Knowledge is good; knowledge is dangerous
Science for sale: capitalism
Socially aware science
Slow down!

23 What about God?
We resist changing our core values
Altruism without religious inspiration?
Gods of the future
– The force behind the universe
– Namaste: "I bow to the divine in you"
– Gaia: Earth as a single organism
– Superintelligence


