Artificial Intelligence

Presentation on theme: "Artificial Intelligence"— Presentation transcript:

1 Artificial Intelligence
Intelligent Agents (Chapter 2)

2 Outline of this Chapter
- What is an agent?
- Rational agents
- PEAS (Performance measure, Environment, Actuators, Sensors)
- Structure of intelligent agents
- Types of agent program
- Types of environments

3 What is an agent?
An agent is anything that can be viewed as:
- perceiving its environment through sensors, and
- acting upon that environment through actuators.
Human agent: sensors are eyes, ears, and other organs; actuators are legs, mouth, and other body parts.
Robotic agent: sensors are cameras and infrared range finders; actuators are various motors.

4 Rational Agents
A rational agent is one that does the right thing, based on what it can perceive and the actions it can perform. The right action is the one that causes the agent to be most successful.
Problem: how and when do we evaluate the agent's success? It depends on four things, starting with:
1. Performance measure: defines the degree of success; the criteria that determine how successful an agent is. Example: for a vacuum cleaner, the amount of dirt cleaned up, the amount of electricity consumed, the amount of noise generated, etc.
When should we evaluate the agent's success? E.g., measure performance over the long run.
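A minimal sketch of such a performance measure for the vacuum-cleaner example. The function name, the weights, and the per-step tuples are all illustrative assumptions, not from the slides; the point is that success is scored over the long run by summing per-step scores.

```python
# Hypothetical performance measure for a vacuum agent (weights are assumptions).
def performance_score(dirt_cleaned, electricity_used, noise_made):
    """Higher is better: reward dirt cleaned, penalize electricity and noise."""
    return 10 * dirt_cleaned - 1 * electricity_used - 0.5 * noise_made

# Evaluate over the long run by summing per-step scores.
steps = [(2, 1, 1), (0, 1, 1), (3, 1, 2)]  # (dirt, electricity, noise) per step
total = sum(performance_score(d, e, n) for d, e, n in steps)
print(total)  # 45.0
```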

5 Rational Agents
[Diagram: the agent program receives percepts from the environment through sensors and issues actions through actuators.]

6 Rational Agents (cont.)
A rational agent is distinct from an omniscient one (which knows the actual outcome of its actions and acts accordingly): percepts may not supply all relevant information.
Rationality also depends on:
2. The percept sequence: everything that the agent has perceived so far.
3. What the agent knows about the environment.
4. The actions that the agent can perform.
The agent maps any given percept sequence to an action.
Ideal rational agent: one that always takes the action that is expected to maximize its performance measure, given the percept sequence it has seen so far and whatever built-in knowledge it has.

7 Mapping: percept sequences → actions
Mappings describe agents: make a table of the action the agent takes in response to each possible percept sequence.
Do we need to create an explicit table with an entry for every possible percept sequence? No: the mapping can often be computed instead.
Example: the square root function on a calculator.
- Percept sequence: a sequence of keystrokes.
- Action: display a number on the screen.
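The calculator example can be sketched in a few lines of Python. The function name and keystroke encoding are illustrative assumptions; the point is that the percept-sequence-to-action mapping is computed by a short program rather than tabulated.

```python
import math

def sqrt_agent(keystrokes):
    """Maps a percept sequence (keystrokes) to an action (the string to display).
    Illustrative sketch: a real calculator handles many more key sequences."""
    if keystrokes and keystrokes[-1] == 'sqrt':
        value = float(''.join(keystrokes[:-1]))
        return str(math.sqrt(value))
    return ''.join(keystrokes)  # otherwise just echo the digits typed so far

print(sqrt_agent(['9', 'sqrt']))  # 3.0
print(sqrt_agent(['4', '2']))     # 42
```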

8 Autonomy
An agent's behaviour can be based on both its own experience and the built-in knowledge used in constructing the agent for the environment in which it operates.
An agent is autonomous if its behaviour is determined by its own experience, rather than by knowledge of the environment built in by the designer.
An AI agent should have some initial knowledge and the ability to learn. An autonomous intelligent agent should be able to operate successfully in a wide variety of environments, given sufficient time to adapt.

9 PEAS
The design of an agent program depends on PEAS: Performance measure, Environment, Actuators, Sensors. We must first specify the setting for intelligent agent design.
Consider, e.g., the task of designing an automated taxi driver: what are its performance measure, environment, actuators, and sensors?

10 PEAS Example: the task of designing an automated taxi driver
- Performance measure: safe, fast, legal, comfortable trip; maximize profits.
- Environment: roads, other traffic, pedestrians, customers.
- Actuators: steering wheel, accelerator, brake, signal, horn.
- Sensors: cameras, sonar, speedometer, odometer, engine sensors, keyboard.

11 Structure of Intelligent Agents
Agents: humans, robots, softbots, etc. The role of AI is to design the agent program.
Agent program: a function that implements the agent mapping from percepts to actions. This program runs on some sort of computing device:
Architecture: the computing device (a computer or special hardware) that makes the percepts from the sensors available to the program, runs the program, and feeds the program's action choices to the effectors as they are generated.
The relationship among the above can be summed up as below:
agent = architecture + program

12 Agent Types
How do we build a program to implement the mapping from percepts to actions? Five basic types will be considered:
- Table-driven agents: use a percept-sequence/action table in memory to find the next action; implemented by a (large) lookup table.
- Simple reflex agents: respond immediately to percepts.
- Model-based reflex agents: maintain internal state to track aspects of the world that are not evident in the current percept.
- Goal-based agents: act so that they will achieve their goals.
- Utility-based agents: base their decisions on classical utility theory in order to act rationally.

13 Table-driven agents
The agent operates by keeping its entire percept sequence in memory and using it to index into a table of actions (which contains the appropriate action for every possible sequence).
This proposal is doomed to failure: the table needed for something as simple as an agent that can only play chess would be astronomically large (on the order of 35^100 entries).
The agent has no autonomy at all: the calculation of best actions is entirely built in, so if the environment changes, the agent would be lost.
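The table-driven scheme can be sketched as follows. The class name and the tiny vacuum-world table are illustrative assumptions; note how the table must have an entry per whole percept *sequence*, which is why it explodes combinatorially.

```python
# Minimal table-driven agent (a sketch; names are illustrative).
class TableDrivenAgent:
    def __init__(self, table):
        self.table = table   # maps percept-sequence tuples to actions
        self.percepts = []   # the entire percept history, kept in memory

    def act(self, percept):
        self.percepts.append(percept)
        # Index into the table with the whole sequence seen so far.
        return self.table.get(tuple(self.percepts), 'NoOp')

# Two-step vacuum-world fragment: even this needs one entry per sequence.
table = {
    (('A', 'Dirty'),): 'Suck',
    (('A', 'Clean'),): 'Right',
    (('A', 'Clean'), ('B', 'Dirty')): 'Suck',
}
agent = TableDrivenAgent(table)
print(agent.act(('A', 'Clean')))  # Right
print(agent.act(('B', 'Dirty')))  # Suck
```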

14 Simple reflex agents in schematic form
[Diagram: sensors answer "What is the world like now?"; condition-action rules answer "What action should I do now?"; actuators carry out the action. The condition-action rules allow the agent to make the connection from percept to action.]

15 Simple reflex agents
Simple reflex agents can often summarise portions of the lookup table by noting commonly occurring input/output associations, which can be written as condition-action rules:
if {set of percepts} then {set of actions}
In humans, condition-action rules are both learned responses and innate reflexes (e.g., blinking):
if it is raining then put up umbrella
Correct decisions must be made solely on the basis of the current percept.
Examples: Is driving purely a matter of reflex? What happens when making a lane change?

16 Simple reflex agents (cont.)
function SIMPLE-REFLEX-AGENT(percept) returns action
  static: rules, a set of condition-action rules
  state <- INTERPRET-INPUT(percept)
  rule <- RULE-MATCH(state, rules)
  action <- RULE-ACTION[rule]
  return action
Find the rule whose condition matches the current situation, and do the action associated with that rule.
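A direct Python rendering of this pseudocode, a sketch only: the rule representation (condition predicate plus action) and the trivial INTERPRET-INPUT are my own assumptions.

```python
def make_simple_reflex_agent(rules, interpret_input):
    """Build an agent closing over a set of condition-action rules.
    rules: list of (condition_predicate, action) pairs."""
    def agent(percept):
        state = interpret_input(percept)   # INTERPRET-INPUT
        for condition, action in rules:    # RULE-MATCH
            if condition(state):
                return action              # RULE-ACTION[rule]
        return 'NoOp'
    return agent

# The slide's umbrella rule: if it is raining then put up umbrella.
rules = [(lambda s: s == 'raining', 'put up umbrella')]
agent = make_simple_reflex_agent(rules, lambda percept: percept)
print(agent('raining'))  # put up umbrella
print(agent('sunny'))    # NoOp
```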

17 Simple Reflex Agent
[Diagram: sensors deliver percepts from the environment; the current state is matched against if-then rules to select an action, which the actuators carry out.]

18 Simple Reflex Agent
function REFLEX-VACUUM-AGENT([location, status]) returns action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
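The vacuum-world reflex agent above is short enough to run directly; this is a straightforward Python transcription of the slide's pseudocode.

```python
def reflex_vacuum_agent(percept):
    """The slide's vacuum-world reflex agent.
    percept is a (location, status) pair."""
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    if location == 'A':
        return 'Right'
    return 'Left'

print(reflex_vacuum_agent(('A', 'Dirty')))  # Suck
print(reflex_vacuum_agent(('A', 'Clean')))  # Right
print(reflex_vacuum_agent(('B', 'Clean')))  # Left
```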

19 Model-based reflex agents (reflex agents with state)
The current percept is combined with the old internal state to generate an updated description of the current state.

20 Model-Based Agent
[Diagram: sensors feed percepts into an internal model that combines previous perceptions, how the world changes, and the impact of actions into the current state; if-then rules then select an action for the actuators.]

21 Model-based reflex agents (cont.)
function REFLEX-AGENT-WITH-STATE(percept) returns action
  static: state, a description of the current world state
          rules, a set of condition-action rules
          action, the most recent action, initially none
  state <- UPDATE-STATE(state, action, percept)
  rule <- RULE-MATCH(state, rules)
  action <- RULE-ACTION[rule]
  return action
The agent keeps track of the current state of the world using an internal model, then chooses an action in the same way as the reflex agent.
UPDATE-STATE is responsible for creating the new internal state description: it interprets the new percept in the light of existing knowledge about the state, and uses information about how the world evolves to keep track of the unseen parts of the world.
Is it enough to decide what to do by knowing only the current state of the environment?
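A Python sketch of this pseudocode. The state representation (a dict remembering which squares are known clean) and the specific rules are illustrative assumptions; the structure mirrors UPDATE-STATE followed by RULE-MATCH.

```python
def make_model_based_agent(rules, update_state, initial_state):
    """Model-based reflex agent: keeps internal state across calls."""
    state = initial_state
    action = None
    def agent(percept):
        nonlocal state, action
        state = update_state(state, action, percept)  # UPDATE-STATE
        for condition, act in rules:                  # RULE-MATCH
            if condition(state):
                action = act
                return action
        action = 'NoOp'
        return action
    return agent

# Toy model: remember which vacuum-world squares are known to be clean,
# and stop once both are.
def update_state(state, action, percept):
    location, status = percept
    new = dict(state)
    new['pos'] = location
    new[location] = status
    return new

rules = [
    (lambda s: s.get(s['pos']) == 'Dirty', 'Suck'),
    (lambda s: s.get('A') == 'Clean' and s.get('B') == 'Clean', 'NoOp'),
    (lambda s: s['pos'] == 'A', 'Right'),
    (lambda s: s['pos'] == 'B', 'Left'),
]
agent = make_model_based_agent(rules, update_state, {'pos': None})
print(agent(('A', 'Dirty')))  # Suck
print(agent(('A', 'Clean')))  # Right
print(agent(('B', 'Clean')))  # NoOp: both squares now known clean
```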

22 Goal-based agents
Goal-based agents are agents which, in addition to state information, have goal information that describes desirable situations. Agents of this kind take future events into consideration.
The right decision depends on where the agent is trying to get to: the agent needs goal information as well as the current state description. E.g., the decision to change lanes depends on a goal to go somewhere.
Such an agent is flexible: the knowledge that supports its decisions is represented explicitly and can be modified without having to rewrite a large number of condition-action rules (unlike a reflex agent).

23 An agent with explicit goals
[Diagram: the agent combines the current state, how the world evolves, and what its actions do to predict "what it will be like if I do action A", then compares the prediction with its goals to choose what action to do now.]
The agent keeps track of the world state as well as a set of goals it is trying to achieve, and chooses an action that will lead to the achievement of its goals.

24 Goal-Based Agent
[Diagram: sensors feed the current state model (previous perceptions, world changes, impact of actions); the agent predicts the state if it does action X and selects the action that achieves its goal.]

25 Goal-based agents (cont.)
Given knowledge of how the world evolves and how its own actions will affect its state, an agent can determine the consequences of all possible actions, then compare each of these against its goal to determine which action achieves the goal, and hence which action to take.
If a long sequence of actions is required to reach the goal, then Search (Russell, Chapters 3-5) and Planning (Chapters 11-13) are the subfields of AI that must be called into action.
Are goals alone enough to generate high-quality behaviour? Many action sequences can result in the same goal being achieved, but some are quicker, safer, more reliable, or cheaper than others.
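The "determine consequences, compare against the goal" idea is exactly what search does. Here is a minimal sketch: breadth-first search over a toy one-dimensional world. The world, the `result` model, and all names are illustrative assumptions, not the book's algorithms.

```python
from collections import deque

def plan(start, goal, actions, result):
    """Breadth-first search for a shortest action sequence reaching the goal.
    result(state, action) is the agent's model of how actions change state."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for a in actions:
            nxt = result(state, a)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [a]))
    return None  # goal unreachable

# Toy world: positions 0..4 on a line; Right/Left move one step.
result = lambda s, a: min(4, max(0, s + (1 if a == 'Right' else -1)))
print(plan(0, 3, ['Right', 'Left'], result))  # ['Right', 'Right', 'Right']
```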

26 Utility-based agents
Any utility-based agent can be described as possessing a utility function. Utility is a function that maps a state (or sequence of states) onto a real number, which describes the associated degree of happiness.
The agent uses the utility to choose between alternative sequences of actions/states that lead to a given goal being achieved.
A utility function allows rational decisions in two kinds of cases:
- When there are conflicting goals, only some of which can be achieved (e.g., speed and safety).
- When there are several goals that the agent can aim for, none of which can be achieved with certainty.
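A tiny sketch of the conflicting-goals case (speed vs. safety) for the taxi example. The routes, features, and weights are all invented for illustration; the point is only that a utility function turns a trade-off into a number the agent can maximize.

```python
def utility(route):
    """Map a route (a state description) to a real number.
    Happiness rises with speed and falls with risk; weights are assumptions."""
    return 2.0 * route['speed'] - 3.0 * route['risk']

routes = [
    {'name': 'highway', 'speed': 5, 'risk': 2},      # fast but riskier
    {'name': 'back roads', 'speed': 3, 'risk': 0},   # slower but safe
]
best = max(routes, key=utility)
print(best['name'])  # back roads
```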

27 Utility-based agents
[Diagram: as the goal-based agent, but the prediction "what it will be like if I do action A" is evaluated by a utility function ("how happy I will be in such a state") to decide what action to do now.]

28 Utility-Based Agent
[Diagram: sensors feed the current state model; the agent predicts the state if it does action X, scores its happiness in that state with the utility function, and selects the best action.]

29 Learning agents
An agent whose behavior improves over time based on its experience.

30 Learning Agent
[Diagram: a learning agent adds to the performance element a critic (which compares behaviour against a performance standard and provides feedback), a learning element (which makes changes to the agent's knowledge and sets learning goals), and a problem generator (which proposes exploratory actions).]

31 Machine Learning
Percepts should not only be used for generating an agent's immediate actions, but also for improving its ability to act in the future, i.e., for learning.
Learning can correspond to anything from trivial memorisation to the creation of complete scientific theories. It can be classified into three increasingly difficult classes:
- Supervised learning: learning with a teacher (e.g., the system is told what outputs it should produce for each of a set of inputs).
- Reinforcement learning: learning with limited feedback (e.g., the system must produce outputs for a set of inputs and is only told whether they are good or bad).
- Unsupervised learning: learning with no help (e.g., the system is given a set of inputs and is expected to make some kind of sense of them on its own).
Machine learning systems can be set up to do all three types of learning.

32 Types of Environments
We have talked about intelligent agents, but said little about the environments with which they interact. How is an agent coupled to an environment? Actions are done by the agent on the environment, which in turn provides percepts to the agent.
- Fully observable vs. partially observable: the agent's sensors give it access to the complete state of the environment at each point in time. Such environments are convenient, since the agent is freed from the task of keeping track of changes in the environment.
- Deterministic vs. stochastic: the next state of the environment is completely determined by the current state and the action selected by the agent. E.g., taxi driving is stochastic: one can never predict the behaviour of traffic exactly.

33 Types of Environments (cont.)
- Episodic vs. nonepisodic: in an episodic environment, subsequent episodes do not depend on what actions occurred in previous episodes. Such environments do not require the agent to plan ahead. E.g., an agent that has to spot defective parts on an assembly line bases each decision on the current part, regardless of previous decisions.
- Static vs. dynamic: the environment is unchanged while the agent is deliberating. Static environments are easy to deal with: the agent does not need to keep looking at the world while it is deciding on an action.
- Discrete vs. continuous: there is a limited number of distinct, clearly defined percepts and actions. E.g., chess is discrete: there is a fixed number of possible moves on each turn.
- Single-agent vs. multiagent: an agent operating by itself in an environment.
Different environment types require somewhat different agent programs to deal with them effectively.

34 Examples of Environments

Property        Solitaire  Chess (clock)  Internet shopping      Taxi driver
Observable      Yes        Yes            No                     No
Deterministic   Yes        Yes            Partly                 No
Episodic        No         No             No                     No
Static          Yes        Semi           Semi                   No
Discrete        Yes        Yes            Yes                    No
Single-agent    Yes        No             Yes (except auctions)  No

The real world is partially observable, nondeterministic, nonepisodic (sequential), dynamic, continuous, and multi-agent.

35 Components of an AI Agent
The components that need to be built into an AI agent:
- A means to infer properties of the world from its percepts.
- Information about the way the world evolves.
- Information about what will happen as a result of its possible actions.
- Utility information indicating the desirability of possible world states and the actions that lead to them.
- Goals that describe the classes of states whose achievement maximises the agent's utility.
- A mapping from the above forms of knowledge to its actions.
- An active learning system that will improve the agent's ability to perform well.

36 Conclusion
An agent perceives and acts in an environment; it has an architecture and is implemented by a program.
An ideal agent always chooses the action that maximizes its expected performance, given the percept sequence received so far.
An autonomous agent uses its own experience rather than knowledge of the environment built in by the designer.
An agent program maps from a percept to an action and updates its internal state.

37 Conclusion (cont.)
Reflex agents respond immediately to percepts. Goal-based agents act in order to achieve their goal(s). Utility-based agents maximize their own utility function.
Representing knowledge is important for successful agent design.
Some environments are more difficult for agents than others. The most challenging environments are partially observable, nondeterministic, nonepisodic, dynamic, and continuous.

