CS 416 Artificial Intelligence, Lecture 21: Making Complex Decisions (Chapter 17)



Game Theory
Multiagent games with simultaneous moves. First, we study games with one move:
– bankruptcy proceedings
– auctions
– economics
– war gaming

Definition of a game
– the players
– the actions
– the payoff matrix: provides the utility to each player for each combination of actions
Example: Two-finger Morra

Game theory strategies
Strategy == policy (as in policy iteration). What do you do?
– pure strategy: you do the same thing all the time
– mixed strategy: you rely on some randomized policy to select an action
Strategy profile: the assignment of strategies to players

Game theoretic solutions
What's a solution to a game? All players select a "rational" strategy. Note that we're not analyzing one particular game, but the outcomes that accumulate over a series of played games.

Prisoner's Dilemma
Alice and Bob are caught red-handed at the scene of a crime.
– both are interrogated separately by the police
– if both testify against the other, the penalty is 5 years for each
– if both refuse to testify, the penalty is 1 year for each
– if one testifies and the other refuses, the one who testifies goes free (0 years) and the one who refuses gets 10 years
What do you do to act selfishly?

Prisoner's dilemma payoff matrix (years in prison as negative utility, listed as (Alice, Bob)):

                    Bob: testify    Bob: refuse
  Alice: testify     (-5, -5)        (0, -10)
  Alice: refuse      (-10, 0)        (-1, -1)

Prisoner's dilemma strategy
Alice's strategy:
– if Bob testifies, her best option is to testify (-5 rather than -10)
– if Bob refuses, her best option is to testify (0 rather than -1)
Testifying is a dominant strategy for Alice.

Prisoner's dilemma strategy
Bob's strategy:
– if Alice testifies, his best option is to testify (-5 rather than -10)
– if Alice refuses, his best option is to testify (0 rather than -1)
Testifying is a dominant strategy for Bob.
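The dominance argument on these two slides can be checked mechanically. Below is a small sketch (my addition, not part of the lecture; the encoding of the payoff matrix is mine) that verifies testifying strongly dominates refusing for Alice; by symmetry the same holds for Bob.

```python
# Payoffs implied by the slides: payoff[(alice, bob)] = (years_alice, years_bob),
# expressed as negative utilities. "T" = testify, "R" = refuse.
payoff = {
    ("T", "T"): (-5, -5),
    ("T", "R"): (0, -10),
    ("R", "T"): (-10, 0),
    ("R", "R"): (-1, -1),
}

def strongly_dominates(s, s_prime, opponent_actions):
    """True if Alice's strategy s beats s_prime for every opponent action."""
    return all(payoff[(s, b)][0] > payoff[(s_prime, b)][0]
               for b in opponent_actions)

print(strongly_dominates("T", "R", ["T", "R"]))  # True: testifying dominates
```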

Rationality
Both players seem to have clear strategies: both testify, so the game outcome would be (-5, -5).

Dominance of strategies
Comparing strategies:
– strategy s strongly dominates s' if the outcome of s is always better than the outcome of s', no matter what the other player does (testifying strongly dominates refusing for both Bob and Alice)
– strategy s weakly dominates s' if the outcome of s is better than the outcome of s' for at least one action of the opponent and no worse for the others

Pareto Optimality
Pareto optimality comes from economics. An outcome is Pareto optimal if:
– textbook: there is no alternative outcome that all players would prefer
– I prefer: it is the best that could be accomplished without disadvantaging at least one group
Is the testify outcome (-5, -5) Pareto optimal?

Is (-5, -5) Pareto optimal?
Is there an outcome that improves some player's outcome without disadvantaging any other? How about (-1, -1) from (refuse, refuse)?

Dominant strategy equilibrium
(-5, -5) represents a dominant strategy equilibrium: neither player has an incentive to deviate from the dominant strategy.
– if Alice assumes Bob executes the same strategy as he does now, she will only lose more by switching; likewise for Bob
Imagine this as a local optimum in outcome space:
– each dimension of outcome space is one player's choice
– any movement away from the dominant strategy equilibrium in this space results in a worse outcome for the player who moves

Thus the dilemma…
Now we see the problem: outcome (-5, -5) is Pareto dominated by outcome (-1, -1).
– achieving the Pareto optimal outcome requires diverging from the local optimum at the strategy equilibrium
– tough situation: the Pareto optimal outcome would be nice, but it is unlikely because each player risks losing more

Nash Equilibrium
John Nash studied game theory in the 1950s and proved that every finite game has at least one equilibrium, possibly in mixed strategies.
– if there is a set of strategies with the property that no player can benefit by changing her strategy while the other players keep theirs unchanged, then that set of strategies and the corresponding payoffs constitute a Nash equilibrium
Every dominant strategy equilibrium is a Nash equilibrium.
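The equilibrium condition above can be tested by brute force for small games. The following sketch (my addition; the function name and payoff encoding are mine) enumerates every pure-strategy profile and keeps those where neither player can gain by unilaterally deviating:

```python
from itertools import product

def pure_nash(payoffs, actions_a, actions_b):
    """Brute-force the pure-strategy Nash equilibria of a two-player game.
    payoffs[(a, b)] = (utility_to_A, utility_to_B)."""
    eq = []
    for a, b in product(actions_a, actions_b):
        ua, ub = payoffs[(a, b)]
        a_is_best = all(payoffs[(a2, b)][0] <= ua for a2 in actions_a)
        b_is_best = all(payoffs[(a, b2)][1] <= ub for b2 in actions_b)
        if a_is_best and b_is_best:
            eq.append((a, b))
    return eq

# Prisoner's Dilemma: the only pure equilibrium is (testify, testify)
pd = {("T", "T"): (-5, -5), ("T", "R"): (0, -10),
      ("R", "T"): (-10, 0), ("R", "R"): (-1, -1)}
print(pure_nash(pd, ["T", "R"], ["T", "R"]))  # [('T', 'T')]
```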

Another game
– Acme, a hardware manufacturer, chooses between CD and DVD format for its next game platform
– Best, a software manufacturer, chooses between CD and DVD format for its next title

No dominant strategy
Verify that there is no dominant strategy in this game.

Yet two Nash equilibria exist
– outcome 1: (DVD, DVD) with payoff (9, 9)
– outcome 2: (CD, CD) with payoff (5, 5)
If either player unilaterally changes strategy, that player will be worse off.

We still have a problem
Two Nash equilibria, but which is selected? If the players fail to select the same strategy, both will lose.
– they could "agree" to select the Pareto optimal solution, which seems reasonable
– they could coordinate

Zero-sum games
Intro:
– the payoffs in each cell of the payoff matrix sum to 0
– the Nash equilibrium in such cases may be a mixed strategy

Zero-sum games
The payoffs in each cell sum to zero. Two-finger Morra:
– two players, Odd and Even
– action: each player simultaneously displays one or two fingers
– evaluation: let f = the total number of fingers; if f is odd, Even gives f dollars to Odd; if f is even, Odd gives f dollars to Even

Optimal strategy
von Neumann (1928) developed the optimal mixed strategy for two-player, zero-sum games.
– we need only keep track of what one player wins, because we then know what the other player loses
– let's pick the Even player and assume this player wishes to maximize
– maximin technique (note we studied minimax in Ch. 6): make the game a turn-taking game and analyze it

Maximin
Change the rules of Morra for analysis: force Even to reveal its strategy first.
– apply the maximin algorithm
– Odd has an advantage, so the outcome of this game is Even's worst case; Even might do better in the real game
The worst Even can do is to lose $3 in this modified game.

Maximin
Now force Odd to reveal its strategy first and apply the minimax algorithm.
– if Odd selects 1, Odd's loss will be $2 (Even responds with one finger)
– if Odd selects 2, Odd's loss will be $4 (Even responds with two fingers)
The worst Odd can do is to lose $2 in this modified game.

Combining the two games
Even's combined utility: Even's winnings will be somewhere between
– the best case (MAX) in the game modified to Even's disadvantage (Even reveals first)
– the worst case (MIN) in the game modified to Even's advantage (Odd reveals first)
EvenFirst_Utility ≤ Even's_Utility ≤ OddFirst_Utility
-3 ≤ Even's_Utility ≤ 2

Considering mixed strategies
Mixed strategy:
– select one finger with probability p
– select two fingers with probability 1 – p
If one player reveals its strategy first, the second player will always use a pure strategy:
– expected utility of a mixed strategy: U1 = p·u_one + (1 – p)·u_two
– expected utility of a pure strategy: U2 = max(u_one, u_two)
– U2 is always at least as large as U1 when your opponent reveals its action early

Modeling as a game tree
Because the second player will always use a fixed (pure) strategy, we can model this as a game tree, still pretending Even goes first. (Note: there is a typo in the book's figure.)

What is the outcome of this game?
Player Odd has a choice and will always pick the option that minimizes utility to Even.
– represent Odd's two choices as functions of p
– Odd picks whichever line is lowest (the dark part of the figure)
– Even maximizes utility by choosing p to be where the lines cross:
  5p – 3 = 4 – 7p, so p = 7/12, and Even's expected utility = -1/12
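The intersection above can be computed exactly. A short sketch (my addition, not from the lecture) using exact fractions and the Morra payoffs to Even:

```python
from fractions import Fraction

# Even's payoff matrix for two-finger Morra: rows = Even shows 1 or 2 fingers,
# columns = Odd shows 1 or 2 fingers.
#   (1,1): total 2, even -> Even wins 2     (1,2): total 3, odd -> Even loses 3
#   (2,1): total 3, odd -> Even loses 3     (2,2): total 4, even -> Even wins 4
a, b = Fraction(2), Fraction(-3)   # Even shows one finger
c, d = Fraction(-3), Fraction(4)   # Even shows two fingers

# Even plays "one" with probability p; equalize the two expected payoffs
# against Odd's two pure responses:  a*p + c*(1-p) = b*p + d*(1-p)
p = (d - c) / (a - b - c + d)
value = a * p + c * (1 - p)
print(p, value)   # 7/12 -1/12
```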

Pretend Odd must go first
Even's outcome is decided by a pure strategy (dependent on q).
– Even will always pick the maximum of its two choices
– Odd will minimize the maximum of the two choices, so Odd chooses the intersection point:
  5q – 3 = 4 – 7q, so q = 7/12, and Even's expected utility = -1/12

Final results
Both players use the same mixed strategy:
– p(one finger) = 7/12
– p(two fingers) = 5/12
The outcome of the game is -1/12 to Even.

Generalization
Two players with n action choices:
– the mixed strategy is not as simple as (p, 1 – p); it is (p_1, p_2, …, p_{n-1}, 1 – (p_1 + p_2 + … + p_{n-1}))
Solving for the optimal p vector requires finding the optimal point in an (n – 1)-dimensional space:
– lines become hyperplanes
– some hyperplanes will be clearly worse for all p
– find the intersection among the remaining hyperplanes
– linear programming can solve this problem
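Linear programming is the standard route. Purely as an illustration (my addition, not the lecture's method), the optimal mixture of a zero-sum game can also be approximated by fictitious play, where each player repeatedly best-responds to the opponent's empirical mixture:

```python
def fictitious_play(A, rounds=20000):
    """A[i][j] = payoff to the row player. Returns the empirical mixed
    strategies of the row and column players after repeated best responses."""
    n, m = len(A), len(A[0])
    row_counts = [1] + [0] * (n - 1)   # seed each player with one arbitrary play
    col_counts = [1] + [0] * (m - 1)
    for _ in range(rounds):
        # each player best-responds to the opponent's empirical frequencies
        br_row = max(range(n), key=lambda i: sum(A[i][j] * col_counts[j] for j in range(m)))
        br_col = min(range(m), key=lambda j: sum(A[i][j] * row_counts[i] for i in range(n)))
        row_counts[br_row] += 1
        col_counts[br_col] += 1
    total = rounds + 1
    return [c / total for c in row_counts], [c / total for c in col_counts]

morra = [[2, -3], [-3, 4]]          # payoffs to Even
p, q = fictitious_play(morra)
print(p[0], q[0])                   # both approach 7/12 ≈ 0.583
```

For zero-sum games, fictitious play is known to converge to the optimal mixtures, though far more slowly than solving the linear program directly.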

Repeated games
Imagine the same game played multiple times:
– payoffs accumulate for each player
– the optimal strategy is a function of the game history: one must select the optimal action for each possible game history
Strategies:
– perpetual punishment: cross me once and I'll take us both down forever
– tit for tat: cross me once and I'll cross you on the subsequent move
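A toy simulation (my addition; the strategy encodings are mine) of tit for tat in the repeated Prisoner's Dilemma, using the payoffs from the earlier slides:

```python
# T = testify (defect), R = refuse (cooperate); values are (player A, player B)
PAYOFF = {("T", "T"): (-5, -5), ("T", "R"): (0, -10),
          ("R", "T"): (-10, 0), ("R", "R"): (-1, -1)}

def tit_for_tat(opponent_history):
    return "R" if not opponent_history else opponent_history[-1]

def always_testify(opponent_history):
    return "T"

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)   # each sees the other's past
        ua, ub = PAYOFF[(a, b)]
        score_a += ua; score_b += ub
        hist_a.append(a); hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))     # (-10, -10): sustained cooperation
print(play(tit_for_tat, always_testify))  # (-55, -45): exploited once, then mutual defection
```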

The design of games
Let's invert the strategy selection process to design fair/effective games. Tragedy of the commons:
– individual farmers bring their livestock to the town commons to graze
– the commons is destroyed and all experience negative utility
– all behaved rationally; refraining would not have saved the commons, as someone else's livestock would have eaten it
– externalities are a way to place a value on changes in global utility; power utilities pay for the utility they deprive neighboring communities of (yet another Nobel prize in economics for this: Coase, once a professor at UVa)

Auctions
English auction: the auctioneer incrementally raises the bid price until one bidder remains.
– the bidder gets the item at the highest price offered by another bidder plus the increment (perhaps the highest bidder would have spent more?)
– the strategy is simple: keep bidding until the price exceeds your utility (private value) for the item
– the strategies of the other bidders are irrelevant

Auctions
Sealed-bid auction: place your bid in an envelope, and the highest bid wins.
– say your value for the item is v, and you believe the highest competing bid is b; then bid min(v, b + ε)
– the player who values the good most may not win it, and players must contemplate the other players' values

Auctions
Vickrey auction (a sealed-bid auction):
– the winner pays the price of the second-highest bid
– the dominant strategy is to bid exactly what the item is worth to you
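A small check (my addition; the utility function is a simplified model that ignores ties) that truthful bidding in a Vickrey auction never does worse than any shaded or inflated bid:

```python
def vickrey_utility(my_bid, my_value, other_bids):
    """Utility in a second-price auction: the winner pays the second-highest
    bid; losing yields zero. Ties are awarded to the other bidders here."""
    top_other = max(other_bids)
    if my_bid > top_other:
        return my_value - top_other   # win, pay the second price
    return 0                          # lose, pay nothing

value, others = 10, [7, 4]
truthful = vickrey_utility(value, value, others)
# No integer bid from 0 to 20 beats bidding the true value:
assert all(vickrey_utility(b, value, others) <= truthful for b in range(21))
print(truthful)  # 3: win at the second price of 7
```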

Auctions
These auction algorithms can find their way into computer-controlled systems:
– networking (routers, Ethernet)
– thermostat control in offices (Xerox PARC)

Neural Networks
Read Section 20.5. Small program and homework assignment.

Model of Neurons
– multiple inputs/dendrites (~10,000!)
– the cell body/soma performs the computation
– single output/axon
– the computation is typically modeled as linear: a change of Δ in the input corresponds to a change of kΔ in the output (not kΔ² or sin Δ, …)

Early History of Neural Nets
Eons ago: neurons are invented
– 1868: J. C. Maxwell studies feedback mechanisms
– 1943: McCulloch-Pitts neurons
– 1949: Hebb indicates a biological mechanism
– 1962: Rosenblatt's perceptron
– 1969: Minsky and Papert decompose perceptrons

McCulloch-Pitts Neurons
– one or two inputs to the neuron
– inputs are multiplied by weights
– if the sum of the products exceeds a threshold, the neuron fires
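The rule above is easy to write down directly. A sketch (my addition) of a McCulloch-Pitts unit wired up as AND and OR gates over binary inputs:

```python
def mp_neuron(inputs, weights, threshold):
    """Fire (output 1) if the weighted sum of the inputs meets the threshold."""
    return int(sum(x * w for x, w in zip(inputs, weights)) >= threshold)

# With unit weights, the threshold alone selects the logic function:
AND = lambda x1, x2: mp_neuron([x1, x2], [1, 1], 2)
OR  = lambda x1, x2: mp_neuron([x1, x2], [1, 1], 1)

print([AND(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]
print([OR(a, b)  for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 1]
```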

What can we model with these? (Note: there is an error in the book here.)

Perceptrons
– each input is binary and has a weight associated with it
– the inner product of the inputs and the weights is calculated
– if this sum exceeds a threshold, the perceptron fires

Neuron thresholds (activation functions)
- It is desirable to have a differentiable activation function for automatic weight adjustment
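The contrast can be made concrete: the hard threshold (step) is not differentiable at the threshold, while the sigmoid is smooth everywhere, which is what enables gradient-based weight adjustment. A sketch using the standard definitions (not specific to this lecture):

```python
import math

def step(s, threshold=0.0):
    # Hard threshold: not differentiable at s == threshold
    return 1 if s > threshold else 0

def sigmoid(s):
    # Smooth, differentiable everywhere
    return 1.0 / (1.0 + math.exp(-s))

def sigmoid_derivative(s):
    # d/ds sigmoid(s) = sigmoid(s) * (1 - sigmoid(s))
    y = sigmoid(s)
    return y * (1.0 - y)

print(sigmoid(0.0))             # 0.5
print(sigmoid_derivative(0.0))  # 0.25
```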

Hebbian Modification
“When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased”
– from Hebb’s 1949 The Organization of Behavior, p. 62

Error Correction
- Weights are updated only for non-zero inputs
- For positive inputs:
  – If the perceptron should have fired but did not, the weight is increased
  – If the perceptron fired but should not have, the weight is decreased
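The rule above can be sketched as a single update function, assuming a fixed step size epsilon as in the worked example that follows (the function name is illustrative):

```python
def update_weights(weights, inputs, output, target, epsilon):
    """Perceptron error-correction rule: adjust weights on active inputs only.

    If the perceptron should have fired but did not (target=1, output=0),
    the active weights are increased; if it fired but should not have
    (target=0, output=1), they are decreased.
    """
    new_weights = list(weights)
    for i, x in enumerate(inputs):
        if x != 0:  # only non-zero (active) inputs are updated
            new_weights[i] += epsilon * (target - output) * x
    return new_weights

w = update_weights([0.2, 0.2], [1, 0], output=1, target=0, epsilon=0.05)
print(w)  # first weight decremented by 0.05; second (inactive) unchanged
```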

Perceptron Example
- Example modified from “The Essence of Artificial Intelligence” by Alison Cawsey
- Initialize all weights to 0.2
- Let epsilon = 0.05 and threshold = 0.5

Perceptron Example
- The first output is 1, since the weighted sum exceeds 0.5
- It should be 0, so weights on active connections are decremented by 0.05

Perceptron Example
- The next output is 0, since the weighted sum is at most 0.5
- It should be 1, so weights on active connections are incremented by 0.05
- The new weights work for Alison, Jeff, and Gail

Perceptron Example
- The output for Simon is 1, since the weighted sum exceeds 0.5
- It should be 0, so weights on active connections are decremented by 0.05
- Are we finished?

Perceptron Example
- After processing all the examples again, we get weights that work for all examples
- What do these weights mean?
- In general, how often should we reprocess?
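The full training procedure implied by the example can be sketched as a loop that repeatedly sweeps over the training set until an entire pass produces no weight changes. The dataset below is hypothetical, standing in for the Alison/Jeff/Gail/Simon table from the lecture:

```python
def fires(inputs, weights, threshold):
    return 1 if sum(x * w for x, w in zip(inputs, weights)) > threshold else 0

def train(examples, n_inputs, epsilon=0.05, threshold=0.5, max_epochs=100):
    weights = [0.2] * n_inputs  # initialize all weights to 0.2
    for _ in range(max_epochs):
        changed = False
        for inputs, target in examples:
            output = fires(inputs, weights, threshold)
            if output != target:
                for i, x in enumerate(inputs):
                    if x != 0:  # error-correction rule on active inputs
                        weights[i] += epsilon * (target - output)
                changed = True
        if not changed:  # a full pass with no errors: converged
            break
    return weights

# Hypothetical linearly separable data: fire exactly when input 0 is active
data = [([1, 0], 1), ([1, 1], 1), ([0, 1], 0), ([0, 0], 0)]
w = train(data, n_inputs=2)
print(all(fires(x, w, 0.5) == t for x, t in data))  # True
```

Note that convergence can take several passes over the data, which is one answer to the "how often should we reprocess?" question: until a full pass makes no changes (for linearly separable data, this is guaranteed to happen).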

Perceptrons are linear classifiers
- Consider a two-input neuron
- The two weights are “tuned” to fit the data
- The neuron fires when w1*x1 + w2*x2 exceeds the threshold
  – The decision boundary w1*x1 + w2*x2 = theta is the equation of a line (compare y = mx + b)

Linearly separable
- These single-layer perceptron networks can classify only linearly separable systems
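This can be seen directly in code: the decision rule w1*x1 + w2*x2 > theta asks which side of a line the input lies on. The weights below (an illustrative choice, not from the lecture) realize AND, which is linearly separable:

```python
def side_of_line(x1, x2, w1, w2, theta):
    # Fires iff the point (x1, x2) lies above the line w1*x1 + w2*x2 = theta
    return 1 if w1 * x1 + w2 * x2 > theta else 0

w1, w2, theta = 1.0, 1.0, 1.5  # the line x1 + x2 = 1.5 separates AND
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, side_of_line(x1, x2, w1, w2, theta))
# Only (1, 1) lies above the line, matching x1 AND x2
```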

For homework
- Consider a system like XOR:

  x1  x2  x1 XOR x2
   0   0      0
   0   1      1
   1   0      1
   1   1      0

Class Exercise
- Find w1, w2, and theta such that Theta(x1*w1 + x2*w2) = x1 XOR x2
- Or, prove that it can’t be done

2nd Class Exercise
- Let x3 = ~x1 and x4 = ~x2
- Find w1, w2, w3, w4, and theta such that Theta(x1*w1 + x2*w2 + x3*w3 + x4*w4) = x1 XOR x2
- Or, prove that it can’t be done

3rd Class Exercise
- Find w1, w2, and f() such that f(x1*w1 + x2*w2) = x1 XOR x2
- Or, prove that it can’t be done
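As a sanity check on these exercises (a sketch, not the official solutions): a brute-force search over a grid of weight/threshold settings finds no hard-threshold unit computing XOR, consistent with the impossibility argument for the first exercise; but with a non-monotonic f, a single unit suffices, answering the third.

```python
import itertools

def threshold_unit(x1, x2, w1, w2, theta):
    return 1 if x1 * w1 + x2 * w2 > theta else 0

xor_table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Exercise 1: search a grid of (w1, w2, theta) values from -2.0 to 2.0
grid = [i / 4 for i in range(-8, 9)]
found = any(
    all(threshold_unit(x1, x2, w1, w2, t) == y
        for (x1, x2), y in xor_table.items())
    for w1, w2, t in itertools.product(grid, repeat=3)
)
print(found)  # False: no tested setting works

# Exercise 3: take w1 = w2 = 1, so the weighted sum x1 + x2 is 1 exactly
# when the inputs differ; a non-monotonic f picks out that case.
def f(s):
    return 1 if s == 1 else 0

print(all(f(x1 + x2) == y for (x1, x2), y in xor_table.items()))  # True
```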