Professor Nick Bostrom Director, Future of Humanity Institute Oxford Martin School Oxford University
[Chart: qualitative categories of risk]
Axes: SCOPE (personal, local, global, trans-generational, pan-generational; "(cosmic)") by SEVERITY (imperceptible, endurable, crushing; "(hellish)").
Example entries: loss of one hair (personal, imperceptible); car is stolen (personal, endurable); fatal car crash (personal, crushing); congestion from one extra vehicle (local, imperceptible); recession in one country (local, endurable); genocide (local, crushing); global warming by 0.01 °C (global, imperceptible); thinning of ozone layer (global, endurable); ephemeral global tyranny (global, crushing); one original Picasso painting destroyed (trans-generational, imperceptible); destruction of cultural heritage (trans-generational, endurable); global dark age (trans-generational, crushing); biodiversity reduced by one species of beetle (pan-generational, imperceptible); aging (pan-generational, crushing).
"Global catastrophic risk" covers at least global scope and at least endurable severity; existential risk (marked X) is pan-generational in scope and crushing in severity.
Need for speed? “I instinctively think go faster. Not because I think this is better for the world. Why should I care about the world when I am dead and gone? I want it to go fast, damn it! This increases the chance I have of experiencing a more technologically advanced future.” — the blog-commenter “washbash”
Principle of differential technological development Retard the development of dangerous and harmful technologies, especially ones that raise the level of existential risk; and accelerate the development of beneficial technologies, especially those that reduce the existential risks posed by nature or by other technologies.
Embryo selection
1. Conduct very large genome-wide association studies to identify genetic variants associated with the desired characteristics.
2. Overcome various ethical scruples.
3. Use the results to select among embryos during IVF.
Iterated embryo selection
1. Genotype and select a number of embryos that are higher in desired genetic characteristics.
2. Extract stem cells from those embryos and convert them to sperm and ova, maturing within six months or less.
3. Cross the new sperm and ova to produce embryos.
4. Repeat until large genetic changes have been accumulated.
Maximum IQ gains from selecting among a set of embryos
Selection: IQ points gained
1 in 2: 4.2
1 in 10: 11.5
1 in 100: 18.8
1 in 1000: 24.3
5 generations of 1 in 10: < 65 (b/c diminishing returns)
10 generations of 1 in 10: < 130 (b/c diminishing returns)
Cumulative limits (additive variants optimized for cognition): 100+ (< 300, b/c diminishing returns)
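The per-round gains in the table follow from the order statistics of the normal distribution: the expected value of the best of n independent normal draws. This is a minimal Monte Carlo sketch, not from the talk; the standard deviation of 7.5 IQ points for embryos' predicted genetic scores is an assumed parameter, and `expected_max_gain` is a hypothetical helper name.

```python
import random

def expected_max_gain(n_embryos, sd_points=7.5, trials=5_000):
    """Monte Carlo estimate of the expected IQ gain from picking the
    embryo with the highest predicted genetic score out of n_embryos,
    modelling sibling genetic scores as i.i.d. normal draws with
    standard deviation sd_points (an assumed value)."""
    total = 0.0
    for _ in range(trials):
        total += max(random.gauss(0.0, sd_points) for _ in range(n_embryos))
    return total / trials

if __name__ == "__main__":
    for n in (2, 10, 100, 1000):
        print(f"1 in {n}: ~{expected_max_gain(n):.1f} IQ points")
```

Under these assumptions the estimates land near the table's per-round figures (roughly 4, 11, 19, and 24 points). The diminishing returns in the multi-generation rows arise because repeated selection depletes the additive variance, which this single-round model does not capture.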
Decision theory, first-order logic, heuristic search, decision trees, alpha-beta pruning, hidden Markov models, policy iteration, the backprop algorithm, evolutionary algorithms, support vector machines, hierarchical planning, algorithmic complexity theory, TD learning, Bayesian networks, big data, convolutional neural networks
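Of the techniques listed above, alpha-beta pruning is easy to show concretely. A minimal sketch, illustrative only: the game tree and its leaf values are invented for the example, with internal nodes given as lists of children and leaves as static evaluations.

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Return the minimax value of `node`, pruning branches that
    cannot affect the final decision."""
    if isinstance(node, (int, float)):          # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                   # beta cutoff: opponent will avoid this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if beta <= alpha:                   # alpha cutoff
                break
        return value

# Max player to move at the root; minimax value of this toy tree is 5.
tree = [[[3, 5], [6, 9]], [[1, 2], [0, -1]]]
print(alphabeta(tree, True))  # → 5
```

Pruning never changes the value returned at the root; it only skips subtrees that a rational opponent would never allow to be reached.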
Applications
Algorithmic trading, route-finding software, medical decision support, industrial robotics, speech recognition, recommender systems, machine translation, face recognition, search engines, equation-solving and theorem-proving, automated logistics planning, airline reservation systems, spam filters, credit card fraud detection, game AI, …
Game AI
Checkers: Superhuman
Backgammon: Superhuman
Traveller TCS: Superhuman in collaboration with human
Othello: Superhuman
Chess: Superhuman
Crosswords: Expert level
Scrabble: Superhuman
Bridge: Equal to the best
Jeopardy!: Superhuman
Poker: Varied
FreeCell: Superhuman
Go: Very strong amateur level
When will HLMI be achieved?
[Table: median survey estimates of the year by which human-level machine intelligence will be achieved, at 10%, 50%, and 90% confidence, for the PT-AI, AGI, EETN, and TOP100 respondent groups and for the groups combined. Combined: 10% by 2022, 50% by 2040, 90% by 2075.]
How long from HLMI to SI?
[Survey estimates of the probability of superintelligence within 2 years and within 30 years of HLMI.]
TOP100: within 2 yrs, 5%; within 30 yrs, 50%
Combined: within 2 yrs, 10%; within 30 yrs, 75%
Difficulty of achievement
Fast – minutes, hours, days
Intermediate – months, years
Slow – decades, centuries
An AI takeover scenario
What do alien minds want?
Principles of AI motivation
The orthogonality thesis – Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.
The instrumental convergence thesis – Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent’s goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by a broad spectrum of situated intelligent agents. Examples: self-preservation, goal-content integrity, cognitive enhancement, technological perfection, resource acquisition.
The challenge
Solve the intelligence problem and the control problem, in the correct order!
Principle of differential technological development
Control methods
Capability control
Motivation selection
Technical research questions
Reliable self-modification (“tiling agents”)
Logical uncertainty (reasoning without logical omniscience)
Reflective stability of decision theory
Decision theory for Newcomb-like problems
Corrigibility (accepting modifications)
The shutdown problem
Value loading
Indirect specification of decision theory
Domesticity (goal specification for limited impact)
The competence gap
Weighting options or outcomes for a variance-normalizing solution to moral uncertainty
Program analysis for self-improvement
Reading values and beliefs of AIs
Pascal’s mugging
Infinite ethics
Mathematical modelling of intelligence explosion
Relevant fields: theoretical computer science, parts of mathematics, parts of philosophy
Strategic research questions
History and difficulty of international technology coordination (treaties)
Past progress in artificial intelligence
Survey of intelligence measures
Survey of endogenous growth theories in economics
History of opinions on danger among AI experts
Examine past large, (semi-)secret tech projects (Manhattan, Apollo)
Examine past price trends in software, hardware, networking, etc.
Analyse the technological completion conjecture
Search for additional technology couplings
Search for plausible “second-guessing” policy arguments
History of policy on technological / scientific / catastrophic risks
Relevant fields: technology forecasting, risk analysis, technology policy and strategy, S&T governance, ethics, parts of philosophy, history of technology, parts of economics and game theory
The Common Good Principle Superintelligence should be developed only for the benefit of all of humanity and in the service of widely shared ethical ideals.