1 Super-Intelligent Machines
Jared Schmidt
"I mean, being a robot's great; but we don't have emotions and sometimes that makes me very sad."

2 In This Presentation…
– Super-intelligent machines
– Beginnings of A.I.
– Technological singularity
– Where we stand today
– Notable predictions about future A.I.
– Moral / ethical implications of developing intelligent machines

3 Deep Blue
– Deep Blue (1997, IBM): the first chess-playing machine to beat the reigning world chess champion
  – Successor to Deep Thought (1989)
  – 259th-fastest supercomputer at the time
  – Evaluated 200 million positions per second
– Brute-force search algorithm (see the sketch below) – less than intelligent
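To give a flavor of what "brute force" means here, the sketch below is a plain minimax search with alpha-beta pruning: it enumerates positions to a fixed depth and scores the leaves with a static evaluation. This is only an illustration, not Deep Blue's actual implementation (which paired custom chess hardware with a hand-tuned evaluation function); legal_moves, apply_move, and evaluate are hypothetical placeholders for a real engine's move generator and evaluator.

```python
# Minimal sketch of brute-force game-tree search (minimax with alpha-beta pruning).
# legal_moves(), apply_move(), and evaluate() are hypothetical placeholders for a
# real chess engine's move generator and evaluation function.

def minimax(position, depth, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)          # static score of the leaf position

    if maximizing:
        best = float("-inf")
        for move in moves:
            score = minimax(apply_move(position, move), depth - 1, alpha, beta, False)
            best = max(best, score)
            alpha = max(alpha, best)
            if alpha >= beta:              # prune: this branch cannot change the result
                break
        return best
    else:
        best = float("inf")
        for move in moves:
            score = minimax(apply_move(position, move), depth - 1, alpha, beta, True)
            best = min(best, score)
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best
```

Deep Blue's strength came from running this kind of exhaustive search at about 200 million positions per second, not from anything resembling human understanding of chess, which is why the slide calls it "less than intelligent."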

4 But Computers with Brains?
– Some prominent figures and computer scientists believe it will be possible.
– The human brain processes an estimated 15–25 petaflops of data
– IBM's Sequoia = 16.3 petaflops (TOP500)
  – 1.57 million cores
  – 1.6 PB memory
(A rough comparison of these figures follows below.)
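Taking the slide's numbers at face value, a quick back-of-the-envelope calculation (sketched below in Python; the figures are those quoted above, and the 15–25 petaflop brain estimate is itself very rough) shows Sequoia's peak throughput landing inside the quoted range for the brain, at roughly 10 gigaflops per core across its 1.57 million cores.

```python
# Back-of-the-envelope comparison using the figures quoted on the slide.
sequoia_flops = 16.3e15               # IBM Sequoia: 16.3 petaflops (TOP500)
cores = 1.57e6                        # 1.57 million cores
brain_low, brain_high = 15e15, 25e15  # rough 15-25 petaflop estimate for the brain

print(f"Per-core throughput: ~{sequoia_flops / cores / 1e9:.1f} gigaflops")
print(f"Sequoia vs. low brain estimate:  {sequoia_flops / brain_low:.2f}x")
print(f"Sequoia vs. high brain estimate: {sequoia_flops / brain_high:.2f}x")
```

Raw flops are of course not the same thing as intelligence; the comparison only suggests that hardware on the scale of the brain's estimated throughput already exists.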

6 Technological Singularity
– The hypothetical future emergence of greater-than-human super-intelligence through technological means
– An intellectual event horizon
– Singularity Summit 12 (6th annual, Oct 13)

7 Hindrances to Super-Intelligent Machines
– Limited understanding of how the human brain works
– Advancements in technology and other fields could overcome such barriers

8 Advancements in Nanotechnology
– Carbon nanotubes showing promise for many medical applications
  – Kanzius Machine
– Nanobots in the R&D stage
  – Potential medical breakthroughs
  – Interaction with biological systems
– Possible future use to reverse-engineer the brain

9 Raymond Kurzweil
– American inventor and futurist
– Predicts machines will be more intelligent than humans in the near future
– The Singularity Is Near (2005)
  – 2020s: Nanobot use in the medical field
  – 2029: First computer passes the Turing test
  – 2045: The Singularity

10 Bill Joy
– Cofounder of Sun Microsystems
– "Why the Future Doesn't Need Us" (2000)
  – Possible dangers of rapidly advancing technologies: genetics, nanotechnology, and robotics
– Argues machines will eventually become smarter than us, creating a dystopia

11 Existential Risk
– The first super-intelligent machine would be programmed by error-prone humans.
– "We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question." – Nick Bostrom, "Existential Risks"

12 Robots Are People Too?
– If machines think and act like a human, should they be considered human?
– Is it moral to give machines human values and emotions?
– If a super-intelligent robot committed a crime, is the programmer to blame?

13 Morally Sound A.I.
– Artificial intelligence projects that help us live better or more efficiently are morally sound
– Overcoming disease, genetic disorders, and famine with technology

14 In Conclusion
– Less-than-super-intelligent A.I. to date
– The theory of technological singularity
– Advancements in nanoscience could advance neuroscience
– Risks of developing super-intelligent machines
– Moral / ethical issues
– Morally sound A.I.

15 Sources
– Bostrom, Nick. "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards." Journal of Evolution and Technology 9 (2002). www.nickbostrom.com.
– Georges, Thomas M. Digital Soul: Intelligent Machines and Human Values. Boulder, CO: Westview Press, 2003. Print.
– Joy, Bill. "Why the Future Doesn't Need Us." Wired 8.04 (2000). www.wired.com/wired/archive/8.04/joy
– Kurzweil, Raymond. The Singularity Is Near. New York: Viking, 2005. Print.

