1 CS B551: Elements of Artificial Intelligence Instructor: Kris Hauser

1 CS B551: Elements of Artificial Intelligence Instructor: Kris Hauser http://cs.indiana.edu/~hauserk

2 Recap  http://cs.indiana.edu/classes/b551  Brief history and philosophy of AI  What is intelligence? Can a machine act/think intelligently? Turing machine, Chinese room

3 Agenda  Agent Frameworks  Problem Solving and the Heuristic Search Hypothesis

4 Agent Frameworks

5 Definition of Agent  Anything that: Perceives its environment Acts upon its environment  A.k.a. controller, robot

6 Definition of “Environment”  The real world, or a virtual world  Rules of math/formal logic  Rules of a game  …  Specific to the problem domain

7 Environment and Agent  [Diagram: the agent receives percepts from the environment through sensors and acts on it through actuators]

8 Environment and Agent  [Diagram: sensors and actuators connect agent and environment; the agent runs a Sense – Plan – Act cycle]
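
The Sense – Plan – Act cycle can be sketched as a minimal control loop. This is only an illustration: the thermostat-style environment and the agent's rule are invented placeholders, not part of the slides.

```python
# Minimal Sense - Plan - Act loop. The Environment class and the
# agent's cooling rule are illustrative placeholders.
class Environment:
    def __init__(self):
        self.temperature = 30

    def sense(self):                # produce a percept
        return self.temperature

    def act(self, action):          # apply the agent's action
        if action == "cool":
            self.temperature -= 5

def agent_step(env):
    percept = env.sense()                          # Sense
    action = "cool" if percept > 25 else "idle"    # Plan
    env.act(action)                                # Act
    return action

env = Environment()
actions = [agent_step(env) for _ in range(3)]
print(actions)  # ['cool', 'idle', 'idle']
```

The loop never reasons about the future; it maps the current percept straight to an action, which is exactly the reflex structure discussed on the following slides.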

9 “Good” Behavior  Performance measure (aka reward, merit, cost, loss, error)  Part of the problem domain

10 Exercise  Formulate the problem domains for: Tic-tac-toe A web server An insect A student in B551 A doctor diagnosing a patient IU’s basketball team The U.S.A.  What is/are the: Environment Percepts Actions Performance measure  How might a “good-behaving” agent process information?

11 Types of agents  Simple reflex (aka reactive, rule-based)  Model-based  Goal-based  Utility-based (aka decision-theoretic, game-theoretic)  Learning (aka adaptive)

12 Simple Reflex  [Diagram: percept → interpreter → state → rules → action]

13 Simpl(est) Reflex  [Diagram: in an observable environment, the interpreter reads the state directly; rules map state to action]

14 Simpl(est) Reflex  [Diagram: the interpreter is dropped; rules map the observable environment state directly to an action]

15 Rule-based Reflex Agent  [Two-room vacuum world with rooms A and B]  Rules: if DIRTY = TRUE then SUCK, else if LOCATION = A then RIGHT, else if LOCATION = B then LEFT
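
The rule table on this slide translates directly into code; the function below is a straightforward transcription for the two-room vacuum world:

```python
# The reflex rules from the slide: suck if dirty, otherwise shuttle
# between rooms A and B.
def reflex_vacuum_agent(location, dirty):
    if dirty:
        return "SUCK"
    elif location == "A":
        return "RIGHT"
    elif location == "B":
        return "LEFT"

print(reflex_vacuum_agent("A", True))    # SUCK
print(reflex_vacuum_agent("A", False))   # RIGHT
print(reflex_vacuum_agent("B", False))   # LEFT
```

Note that the agent keeps no memory at all: its action depends only on the current percept (location, dirty).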

16 Model-Based Reflex  [Diagram: percept → interpreter → state → rules → action; the chosen action also feeds back into the interpreter]

17 Model-Based Reflex  [Diagram: the interpreter is replaced by a model that updates the state from the percept and the previous action]

18 Model-Based Reflex  [Diagram: same structure; updating the state via the model is called state estimation]

19 Model-Based Agent  [Two-room vacuum world with rooms A and B]  Rules: if LOCATION = A then (if HAS-SEEN(B) = FALSE then RIGHT, else if HOW-DIRTY(A) > HOW-DIRTY(B) then SUCK, else RIGHT) …  State: LOCATION, HOW-DIRTY(A), HOW-DIRTY(B), HAS-SEEN(A), HAS-SEEN(B)  Model: HOW-DIRTY(LOCATION) = X, HAS-SEEN(LOCATION) = TRUE
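
The rules, state, and model from this slide can be sketched as a small class. The percept format (location plus a numeric dirt level) and the behavior in room B are assumptions; the slide only spells out the LOCATION = A case.

```python
# Sketch of the model-based vacuum agent from the slide. The percept
# format (location, dirt_level) and the room-B rule are assumptions.
class ModelBasedAgent:
    def __init__(self):
        self.state = {"how_dirty": {}, "has_seen": set()}

    def update_state(self, location, dirt_level):
        # Model: HOW-DIRTY(LOCATION) = X, HAS-SEEN(LOCATION) = TRUE
        self.state["how_dirty"][location] = dirt_level
        self.state["has_seen"].add(location)

    def choose_action(self, location):
        # Rules from the slide (only the LOCATION = A case is given)
        if location == "A":
            if "B" not in self.state["has_seen"]:
                return "RIGHT"
            elif (self.state["how_dirty"].get("A", 0)
                  > self.state["how_dirty"].get("B", 0)):
                return "SUCK"
            else:
                return "RIGHT"
        return "LEFT"  # assumed symmetric behavior for room B

agent = ModelBasedAgent()
agent.update_state("A", 3)
print(agent.choose_action("A"))  # RIGHT: B has not been seen yet
agent.update_state("B", 1)
print(agent.choose_action("A"))  # SUCK: A is dirtier than B
```

Unlike the simple reflex agent, the internal state lets this agent act on facts (how dirty room B was) that are not in the current percept.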

20 Model-Based Reflex Agents  Controllers in cars, airplanes, factories  Robot obstacle avoidance, visual servoing

21 Goal-Based, Utility-Based  [Diagram: percept → model → state → rules → action, as in the model-based reflex agent]

22 Goal- or Utility-Based  [Diagram: the rules are replaced by a general decision mechanism]

23 Goal- or Utility-Based  [Diagram: inside the decision mechanism, an action generator proposes actions, the model simulates the resulting states, and a performance tester selects the best action]

24 Goal-Based Agent

25 Big Open Questions: Goal-Based Agent = Reflex Agent?  [Diagram: the decision mechanism can itself be viewed as a reflex agent whose rules take mental actions on mental states, using a mental model of a “mental environment”, in parallel with the physical environment]

26 Big Open Questions: Goal-Based Agent = Reflex Agent?  [Diagram repeated from the previous slide]

27 With Learning  [Diagram: a learning component updates the state, the model, and the decision-mechanism specifications from percepts and actions]

28 Big Open Questions: Learning Agents  The modeling, learning, and decision mechanisms of artificial agents are tailored for specific tasks  Are there general mechanisms for learning?  If not, what are the limitations of the human brain?

29 Types of Environments  Observable / non-observable  Deterministic / nondeterministic  Episodic / non-episodic  Single-agent / multi-agent

30 Observable Environments  [Diagram: percept → model → state → decision mechanism → action]

31 Observable Environments  [Diagram: the percept is the state itself, which the model passes through unchanged]

32 Observable Environments  [Diagram: the model is dropped; the observed state feeds the decision mechanism directly]

33 Nondeterministic Environments  [Diagram: percept → model → state → decision mechanism → action]

34 Nondeterministic Environments  [Diagram: the model now maintains a set of possible states rather than a single state]

35 Agents in the bigger picture  Binds disparate fields (Econ, Cog Sci, OR, Control theory)  Framework for technical components of AI Components are useful and rich topics themselves Rest of class primarily studies components  Casting problems in the framework sometimes brings insights  [Diagram: the agent at the center of AI components: search, knowledge representation, planning, reasoning, learning, robotics, perception, natural language, expert systems, constraint satisfaction]

36 Problem Solving and the Heuristic Search Hypothesis

37 Example: 8-Puzzle  [Figure: an initial state and the goal state 1-2-3 / 4-5-6 / 7-8-blank]  State: any arrangement of 8 numbered tiles and an empty tile on a 3x3 board

38 Successor Function: 8-Puzzle  [Figure: one state and the successors obtained by sliding a tile into the empty square]  SUCC(state) = a subset of states  The successor function is knowledge about the 8-puzzle game, but it does not tell us which outcome to use, nor to which state of the board to apply it.
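
A successor function for the 8-puzzle is easy to write down explicitly. Here a state is a tuple of 9 entries in row-major order (a common encoding, not prescribed by the slides), with 0 standing for the empty tile:

```python
# Successor function for the 8-puzzle. A state is a tuple of 9
# entries in row-major order; 0 marks the empty tile.
def successors(state):
    result = []
    i = state.index(0)                  # position of the empty tile
    r, c = divmod(i, 3)
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = 3 * nr + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]     # slide the neighbor into the gap
            result.append(tuple(s))
    return result

goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
print(len(successors(goal)))  # 2: a corner gap has two neighbors
```

A corner gap yields 2 successors, an edge gap 3, and a center gap 4, which matches the branching shown on the slide.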

39 Across history, puzzles and games requiring the exploration of alternatives have been considered a challenge for human intelligence:  Chess originated in Persia and India about 4000 years ago  Checkers appears in 3600-year-old Egyptian paintings  Go originated in China over 3000 years ago  So it’s not surprising that AI uses games to design and test algorithms

40 Exploring Alternatives  Problems that seem to require intelligence require exploring multiple alternatives  Search: a systematic way of exploring alternatives

41 Defining a Search Problem  State space S  Successor function: x ∈ S → SUCC(x) ∈ 2^S  Initial state s0 ∈ S  Goal test: x ∈ S → GOAL?(x) = T or F  Arc cost
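
These four ingredients can be bundled into one object. The concrete instantiation below (reach 10 from 0 using +1 or +3 steps) is an invented toy example, not from the slides:

```python
# The four ingredients of a search problem, packaged together.
# The concrete +1/+3 counting problem is an invented example.
class SearchProblem:
    def __init__(self, initial, successors, is_goal, cost):
        self.initial = initial        # s0
        self.successors = successors  # SUCC: x -> subset of S
        self.is_goal = is_goal        # GOAL?: x -> True/False
        self.cost = cost              # arc cost

problem = SearchProblem(
    initial=0,
    successors=lambda x: [x + 1, x + 3],
    is_goal=lambda x: x == 10,
    cost=lambda x, y: 1,
)
print(problem.is_goal(10), problem.successors(0))  # True [1, 3]
```

Everything a search algorithm needs is in this object; the algorithm itself can stay entirely problem-independent.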

42 Problem Solving Agent Algorithm  1. I ← sense/read initial state  2. GOAL? ← select/read goal test  3. SUCC ← select/read successor function  4. solution ← search(I, GOAL?, SUCC)  5. perform(solution)
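
One way to realize step 4, search(I, GOAL?, SUCC), is breadth-first search over the state graph; the slide does not fix a search strategy, so BFS here is an assumption, and the +2/+3 counting domain is an invented toy example:

```python
from collections import deque

# search(I, GOAL?, SUCC) realized as breadth-first search.
# Returns a path of states from the initial state to a goal.
def search(initial, is_goal, succ):
    frontier = deque([[initial]])
    visited = {initial}
    while frontier:
        path = frontier.popleft()
        if is_goal(path[-1]):
            return path
        for s in succ(path[-1]):
            if s not in visited:
                visited.add(s)
                frontier.append(path + [s])
    return None  # no solution exists

# Toy domain: reach 7 from 0 using +2 or +3 steps.
path = search(0, lambda x: x == 7, lambda x: [x + 2, x + 3])
print(path)  # [0, 2, 4, 7]
```

Because BFS expands states in order of path length, the returned solution uses the fewest arcs; later lectures cover strategies that weigh arc costs and heuristics.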

43 State Graph  Each state is represented by a distinct node  An arc (or edge) connects a node s to a node s’ if s’ ∈ SUCC(s)  The state graph may contain more than one connected component

44 Solution to the Search Problem  A solution is a path connecting the initial node to a goal node (any one)  The cost of a path is the sum of the arc costs along this path  An optimal solution is a solution path of minimum cost  There might be no solution!  [Figure: a path from initial node I to goal node G in the state graph]

45 Pathless Problems  Sometimes the path doesn’t matter  A solution is any goal node  Arcs represent potential state transformations  E.g. 8-queens, Simplex for LPs, map coloring

46 8-Queens Problem  State representation 1: any non-conflicting placement of 0-8 queens  State representation 2: any placement of 8 queens

47 Intractability  It may not be feasible to construct the state graph completely  n-puzzle: (n+1)! states  k-queens: k^k states
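
Evaluating these counts even for the small instances on the earlier slides shows why enumerating the full state graph is usually out of the question:

```python
import math

# State counts from the slide, for the instances used in this lecture.
n = 8                         # 8-puzzle
print(math.factorial(n + 1))  # (n+1)! = 362880 states

k = 8                         # 8-queens (one queen placed per column)
print(k ** k)                 # k^k = 16777216 states
```

Already at n = 15 (the 15-puzzle) the count is 16! which is about 2 * 10^13, so algorithms must explore the graph incrementally rather than build it.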

48 Heuristic Search Hypothesis (Newell and Simon, 1976)  Intelligent systems must use heuristic search to find solutions efficiently  Heuristic: knowledge that is not presented immediately by the problem specification  “The solutions to problems are represented as symbol structures. A physical symbol system exercises its intelligence in problem solving by search - that is, by generating and progressively modifying symbol structures until it produces a solution structure.”

49 Example  I’ve thought of a number between 1 and 100. Guess it.
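
If each guess earns higher/lower feedback (an assumption; the slide only states the range), knowledge of the ordering lets bisection find the number in at most seven guesses instead of up to a hundred:

```python
# Guessing a number in 1..100 with higher/lower feedback, by
# repeatedly halving the interval. The feedback channel is an
# assumption not stated on the slide.
def guess(secret, lo=1, hi=100):
    count = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        count += 1
        if mid == secret:
            return count
        elif mid < secret:
            lo = mid + 1
        else:
            hi = mid - 1

print(max(guess(s) for s in range(1, 101)))  # 7 guesses in the worst case
```

This is the point of the hypothesis: extra knowledge (here, the ordering of numbers) is what turns blind enumeration into efficient search. The password example on the next slide offers no such structure, which is why it stays hard.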

50 Example  I’ve picked a password between 3 and 8 alphanumeric characters that I’ll never forget. Guess it.

51 Discussion  Debated whether all intelligence is modifying symbol structures… e.g., Elephants don’t play chess, Brooks ’91  But for those tasks that do require modifying symbol structures, hypothesis seems true Perhaps circular logic?

52 Topics of Next 3-4 Classes  Blind Search: little or no knowledge about how to search  Heuristic Search: how to use heuristic knowledge  Local Search: with knowledge about goal distribution

53 Recap  Agent: a sense-plan-act framework for studying intelligent behavior  “Intelligence” lies in sophisticated components

54 Recap  General problem solving framework: state space, successor function, goal test => state graph  Search is a methodical way of exploring alternatives

55 Homework  Register!  Readings: R&N Ch. 3.1-3.3

56 What is a State?  A compact representation of elements of the world relevant to the problem at hand Sometimes very clear (logic, games) Sometimes not (brains, robotics, econ)  History is a general-purpose state representation: [p1, a1, p2, a2, …]  State should capture how history affects the future

