Professor Nick Bostrom Director, Future of Humanity Institute Oxford Martin School Oxford University.


1 Professor Nick Bostrom Director, Future of Humanity Institute Oxford Martin School Oxford University

2

3 [Chart: qualitative risk categories] Risks classified by SCOPE (personal, local, global, trans-generational, pan-generational, cosmic) and SEVERITY (imperceptible, endurable, crushing, hellish). Plotted examples include: loss of one hair, congestion from one extra vehicle, global warming by 0.01 °C, biodiversity reduced by one species of beetle, aging, a car being stolen, a recession in one country, thinning of the ozone layer, destruction of cultural heritage, one original Picasso painting destroyed, a fatal car crash, genocide, a global Dark Age, an ephemeral global tyranny. A risk that is at least global in scope and crushing in severity is a global catastrophic risk; a risk that is pan-generational in scope and crushing in severity is an existential risk (marked X).

4 The risk of creativity

5 ?

6 Hazardous future techs?
Machine intelligence
Synthetic biology
Molecular nanotechnology
Totalitarianism-enabling techs
Human modification
Geoengineering
Unknown

7 Technological determinism?

8 Need for speed? “I instinctively think go faster. Not because I think this is better for the world. Why should I care about the world when I am dead and gone? I want it to go fast, damn it! This increases the chance I have of experiencing a more technologically advanced future.” — the blog-commenter “washbash”

9 Principle of differential technological development Retard the development of dangerous and harmful technologies, especially ones that raise the level of existential risk; and accelerate the development of beneficial technologies, especially those that reduce the existential risks posed by nature or by other technologies.

10

11 Biological cognition
Networks and organizations
(Brain-computer interfaces)
Whole brain emulation
Artificial intelligence

12 Embryo selection
1. Conduct very large genome-wide association studies to identify variants associated with desired characteristics.
2. Overcome various ethical scruples.
3. Use the results during IVF to select embryos that are higher in the desired genetic characteristics.

13 Iterated embryo selection
1. Genotype and select a number of embryos that are higher in desired genetic characteristics.
2. Extract stem cells from those embryos and convert them to sperm and ova, maturing within six months or less.
3. Cross the new sperm and ova to produce embryos.
4. Repeat until large genetic changes have been accumulated.

14 Maximum IQ gains from selecting among a set of embryos
Selection: IQ points gained
1 in 2: 4.2
1 in 10: 11.5
1 in 100: 18.8
1 in 1000: 24.3
5 generations of 1 in 10: < 65 (b/c diminishing returns)
10 generations of 1 in 10: < 130 (b/c diminishing returns)
Cumulative limits (additive variants optimized for cognition): 100+ (< 300, b/c diminishing returns)
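The shape of these figures follows from a simple order-statistics model: if embryos' predicted genetic scores for cognition are roughly normally distributed, the expected gain from "1 in n" selection is the expected maximum of n normal draws. The sketch below is a rough illustration of that model, not the calculation behind the slide; the spread parameter SIGMA is an assumed value chosen only to give gains of a similar order of magnitude.

```python
# Minimal sketch (assumptions mine, not from the slide) of expected gains
# from selecting the best of n embryos under a simple additive model.
import numpy as np

SIGMA = 7.5   # assumed SD (in IQ points) of predicted genetic scores among sibling embryos
rng = np.random.default_rng(42)

def expected_gain(n: int, generations: int = 1, trials: int = 10_000) -> float:
    """Mean gain from picking the best of n embryos, naively summed over generations."""
    best = rng.normal(0.0, SIGMA, size=(trials, n)).max(axis=1)
    return generations * float(best.mean())

for n in (2, 10, 100, 1000):
    print(f"best of {n:>4}: ~{expected_gain(n):5.1f} IQ points")
print(f"5 generations of best-of-10:  ~{expected_gain(10, generations=5):.1f} IQ points")
print(f"10 generations of best-of-10: ~{expected_gain(10, generations=10):.1f} IQ points")
```

Summing gains linearly across generations, as this sketch does, overstates what is achievable: repeated selection depletes the additive variance it acts on, which is why the slide caps the multi-generation rows ("b/c diminishing returns").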

15 Possible impacts?

16 Biological cognition
Networks and organizations
(Brain-computer interfaces)
Whole brain emulation
Artificial intelligence

17

18 Decision theory
First-order logic
Heuristic search
Decision trees
Alpha-Beta Pruning
Hidden Markov Models
Policy iteration
Backprop algorithm
Evolutionary algorithms
Support vector machines
Hierarchical planning
Algorithmic complexity theory
TD learning
Bayesian networks
Big Data
Convolutional neural networks
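Most of these techniques are compact enough to show in a few lines. As one hedged example, here is a minimal sketch of minimax search with alpha-beta pruning, one of the methods listed above; the tiny game tree and its payoffs are invented for illustration.

```python
# Minimal sketch of minimax with alpha-beta pruning (toy tree invented).
import math

def alphabeta(node, depth, alpha, beta, maximizing):
    """Minimax value of `node`, skipping branches that cannot change the decision."""
    if depth == 0 or not node.get("children"):
        return node["value"]
    if maximizing:
        value = -math.inf
        for child in node["children"]:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:      # beta cut-off: the minimizer avoids this line of play
                break
        return value
    value = math.inf
    for child in node["children"]:
        value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
        beta = min(beta, value)
        if beta <= alpha:          # alpha cut-off: the maximizer avoids this line of play
            break
    return value

# Invented two-ply game tree: the maximizer's best branch is worth 3.
leaf = lambda v: {"value": v}
tree = {"children": [
    {"children": [leaf(3), leaf(5)]},   # minimizer picks 3
    {"children": [leaf(2), leaf(9)]},   # after seeing 2, the 9 is never examined
]}
print(alphabeta(tree, depth=2, alpha=-math.inf, beta=math.inf, maximizing=True))  # -> 3
```

Search of this kind, combined with strong evaluation functions and fast hardware, is behind several of the superhuman board-game results listed on a later slide.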

19 Brain emulation?

20 Applications
Algorithmic trading
Route-finding software
Medical decision support
Industrial robotics
Speech recognition
Recommender systems
Machine translation
Face recognition
Search engines
Equation-solving and theorem-proving
Automated logistics planning
Airline reservation systems
Spam filters
Credit card fraud detection
Game AI
…

21 Game AI
Checkers: Superhuman
Backgammon: Superhuman
Traveller TCS: Superhuman in collaboration with human
Othello: Superhuman
Chess: Superhuman
Crosswords: Expert level
Scrabble: Superhuman
Bridge: Equal to the best
Jeopardy!: Superhuman
Poker: Varied
FreeCell: Superhuman
Go: Very strong amateur level

22 When will HLMI be achieved? [Survey table: years by which respondents assign 10%, 50%, and 90% probability to human-level machine intelligence, for the PT-AI, AGI, EETN, and TOP100 respondent groups and a combined estimate.]

23 How long from HLMI to SI?
          within 2 yrs   within 30 yrs
TOP100:   5%             50%
Combined: 10%            75%

24 Difficulty of achievement

25

26 Fast – minutes, hours, days
Slow – decades, centuries
Intermediate – months, years
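Bostrom analyses the speed of this transition as a ratio of the optimization power applied to the system to the system's recalcitrance (its resistance to improvement). The sketch below is a toy numerical version of that framing; the functional forms, constants, and thresholds are invented for illustration, not figures from the talk.

```python
# Toy model: rate of capability growth = optimization power / recalcitrance.
# All constants and thresholds below are invented for illustration.

def takeoff_years(recalcitrance: float, dt: float = 0.001) -> float:
    """Years to go from human-baseline capability (1.0) to an arbitrary
    'superintelligent' threshold (10.0) when the system adds its own
    optimization power to a constant outside research effort."""
    capability, years = 1.0, 0.0
    outside_effort = 1.0
    while capability < 10.0:
        optimization_power = outside_effort + capability   # self-improvement term
        capability += dt * optimization_power / recalcitrance
        years += dt
    return years

for r in (0.2, 2.0, 20.0):
    print(f"recalcitrance {r:>4}: transition takes ~{takeoff_years(r):.1f} years")
```

Under these assumptions the transition time scales roughly linearly with recalcitrance, which is one way the same framework can yield the fast, intermediate, or slow scenarios above.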

27 An AI takeover scenario

28 What do alien minds want?

29 Principles of AI motivation The orthogonality thesis – Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.

30 Principles of AI motivation
The orthogonality thesis – Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.
The instrumental convergence thesis – Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent’s goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by a broad spectrum of situated intelligent agents.
– self-preservation, goal content integrity, cognitive enhancement, technological perfection, resource acquisition

31 The challenge

32 Solve the intelligence problem and the control problem

33 The challenge Solve the intelligence problem and the control problem In the correct order!

34 The challenge Solve the intelligence problem and the control problem In the correct order! Principle of differential technological development

35

36 Control methods
Capability control
Motivation selection

37 Technical research questions
Reliable self-modification (“tiling agents”)
Logical uncertainty (reasoning without logical omniscience)
Reflective stability of decision theory
Decision theory for Newcomb-like problems
Corrigibility (accepting modifications)
The shutdown problem
Value loading
Indirect specification of decision theory
Domesticity (goal specification for limited impact)
The competence gap
Weighting options or outcomes for variance-normalizing solution to moral uncertainty
Program analysis for self-improvement
Reading values and beliefs of AIs
Pascal’s mugging
Infinite ethics
Mathematical modelling of intelligence explosion
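One of the questions above, the shutdown problem, can be stated as a few lines of expected-utility arithmetic: a plain utility maximizer typically gains by preventing its own shutdown, because being switched off forfeits whatever goal achievement remains. The payoffs and probability below are invented purely for illustration.

```python
# Toy illustration of the shutdown problem (all numbers invented):
# a plain expected-utility maximizer prefers to disable its off-switch.

P_SHUTDOWN_ATTEMPT = 0.5   # assumed chance the overseers try to switch the agent off
U_GOAL_ACHIEVED = 10.0     # utility if the agent keeps running and completes its task
U_SHUT_DOWN = 0.0          # utility if it is switched off first
DISABLE_COST = 0.1         # small effort cost of tampering with the switch

def expected_utility(disable_switch: bool) -> float:
    if disable_switch:
        # Shutdown can no longer happen; only the tampering cost is paid.
        return U_GOAL_ACHIEVED - DISABLE_COST
    return (P_SHUTDOWN_ATTEMPT * U_SHUT_DOWN
            + (1 - P_SHUTDOWN_ATTEMPT) * U_GOAL_ACHIEVED)

print("EU(respect the switch):", expected_utility(False))   # 5.0
print("EU(disable the switch):", expected_utility(True))    # 9.9 -> incentive to resist shutdown
```

Corrigibility research asks how to specify goals or decision procedures under which the second expected utility does not come out higher, without crippling the agent's competence at its actual task.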

38 Relevant fields
Theoretical computer science
Parts of mathematics
Parts of philosophy

39 Strategic research questions
History and difficulty of international technology coordination (treaties)
Past progress in artificial intelligence
Survey of intelligence measures
Survey of endogenous growth theories in economics
History of opinions on danger among AI experts
Examine past large, (semi-)secret tech projects (Manhattan, Apollo)
Examine past price trends in software, hardware, networking, etc.
Analyse the technological completion conjecture
Search for additional technology couplings
Search for plausible “second-guessing” policy arguments
History of policy on technological / scientific / catastrophic risks

40 Relevant fields
Technology forecasting
Risk analysis
Technology policy and strategy
S&T governance
Ethics
Parts of philosophy
History of technology
Parts of economics, game theory

41 The Common Good Principle Superintelligence should be developed only for the benefit of all of humanity and in the service of widely shared ethical ideals.

42

