
1 Software Agent: Overview

2 Outline
Overview of agents
– Definition
– Characteristics
– Comparison
– Environment
– Related research areas
– Applications
Design of agents
– Formalization
– Task modeling
– Environmental type review
– Agent type
Summary

3 [Diagram: interaction evolving from face-to-face, to telephone, to the Internet, to agents; turning information into knowledge]

4 Attributes of Intelligent Behavior
– Think and reason
– Use reason to solve problems
– Learn or understand from experience
– Acquire and apply knowledge
– Exhibit creativity and imagination
– Deal with complex or perplexing situations
– Respond quickly and successfully to new situations
– Recognize the relative importance of elements in a situation
– Handle ambiguous, incomplete, or erroneous information

5 AI Applications in Software Agents
[Diagram: inference, decision, modeling, interaction, learning, and planning as the AI capabilities applied in software agents]

6 What Is an Agent?
"An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors." [Russell and Norvig]
"Autonomous agents are computational systems that inhabit some complex dynamic environment, sense and act autonomously in this environment, and by doing so realize a set of goals or tasks for which they are designed." [Maes, 1995]
"An autonomous agent is a system situated within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future." [Franklin and Graesser, 1996]

7 What Is an Agent?
"A hardware or (more usually) software-based computer system that enjoys the following properties: autonomy, social ability, reactivity, pro-activeness." [Wooldridge and Jennings, 1995]
"Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment; action to affect conditions in the environment; and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions." [Hayes-Roth, 1995]
"Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy, and in so doing, employ some knowledge or representation of the user's goals or desires." [IBM]

8 Software Agent
A formal definition of "agent" [Wooldridge, 2002]:
– An agent is a computer system that is situated in some environment, and that is capable of autonomous action in this environment in order to meet its design objectives
Key attributes:
– Autonomy: capable of acting independently, exercising control over its internal state and behavior
– Situatedness: it lives in some environment (a place to live), can perceive that environment (perception capability), and can modify it (effector capability)
– Persistence: it functions continuously as long as it is alive
[Diagram: agent and environment coupled by sensor input and action output]
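
A minimal runnable sketch of this coupling, where the Counter environment, the numeric state, and the objective of reaching 10 are all invented for illustration (nothing here is from Wooldridge's text):

    class Counter:                      # toy environment
        def __init__(self):  self.value = 0
        def percept(self):   return self.value   # what the agent can sense
        def apply(self, a):  self.value += a     # the agent's effect on it

    def agent(percept):
        """Autonomous choice: push the state toward a design objective of 10."""
        return 1 if percept < 10 else 0

    env = Counter()
    for _ in range(12):                 # persistence: a continuous sense-act loop
        env.apply(agent(env.percept()))
    print(env.value)                    # 10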

9 Characteristics of Software Agents [Nwana, 1996]
[Diagram: three overlapping properties: cooperation (proactive), autonomy, and adaptation (learning). Their pairwise intersections define collaborative agents, interface agents, and collaborative learning agents; the intersection of all three defines smart agents]

10 Adaptation
Agents adapt to their environment and users, and learn from experience:
– via machine learning, knowledge discovery, data mining, etc.
– via exchange of metadata, brokering, and facilitation
– interface agents acquire and use user models
– situated in and aware of their environment

11 Cooperation
Agents use standard languages and protocols to cooperate and collaborate to achieve common goals:
– cooperate with human agents and other software agents
– supported by agent communication languages and protocols
– consistent with human conventions and intuition
– toward team formation and agent ensembles

12 Autonomy
Agents act autonomously to pursue their agenda:
– proactive and reactive
– goal-directed behavior
– appropriately persistent
– multi-threaded behavior
– encourage viewing from an "intentional stance"

13 Intelligent Agents
An intelligent agent is one that is capable of flexible autonomous action in order to meet its design objectives (agent + flexibility).
Properties of flexibility:
– Reactivity: the agent perceives its environment and responds in a timely fashion to changes that occur in it, in order to satisfy its design objectives
– Pro-activeness: the agent exhibits goal-directed behavior by taking the initiative, in order to satisfy its design objectives
– Social ability: the agent interacts with other agents (and possibly humans), in order to satisfy its design objectives
Open research issue:
– Building a purely reactive agent is easy, and a purely proactive one is not hard either
– But designing an agent that balances the two remains open, because most environments are dynamic rather than fixed, and the agent must take into account the possibility of failure rather than blindly executing a program

14 Reactivity (Intelligent Agents)
If a program's environment is guaranteed to be fixed, the program need never worry about its own success or failure; it just executes blindly. A compiler is an example of a fixed environment.
The real world is not like that: things change, and information is incomplete. Many (most?) interesting environments are dynamic.
Software is hard to build for dynamic domains: the program must take into account the possibility of failure and ask itself whether it is worth executing.
A reactive system is one that maintains an ongoing interaction with its environment and responds to changes that occur in it, in time for the response to be useful.

15 Proactiveness (Intelligent Agents)
Reacting to an environment is easy (e.g., stimulus → response rules), but we generally want agents to do things for us; hence goal-directed behavior.
Pro-activeness means generating and attempting to achieve goals, not being driven solely by events, taking the initiative, and recognizing opportunities.

16 Balancing Reactive and Goal-Oriented Behavior (Intelligent Agents)
We want our agents to be reactive, responding to changing conditions in an appropriate (timely) fashion, and at the same time to work systematically towards long-term goals.
These two considerations can be at odds with one another. Designing an agent that balances the two remains an open research problem.

17 Social Ability (Intelligent Agents)
The real world is a multi-agent environment: we cannot go around attempting to achieve goals without taking others into account, and some goals can only be achieved with the cooperation of others. The same holds for many computer environments; witness the Internet.
Social ability in agents is the ability to interact with other agents (and possibly humans) via some kind of agent communication language, and perhaps to cooperate with others.

18 Other Properties (Intelligent Agents)
– Mobility: the ability of an agent to move around an electronic network
– Veracity (honesty): an agent will not knowingly communicate false information
– Benevolence (kindness): agents do not have conflicting goals, so every agent will always try to do what is asked of it
– Rationality: an agent will act in order to achieve its goals, and will not act in such a way as to prevent its goals being achieved, at least insofar as its beliefs permit
– Learning/adaptation: agents improve performance over time

19 Summary of Agents' Features
[Table: summary of agent features, from Brenner et al., 1998; not reproduced in this transcript]

20 Gilbert Taxonomy
An intelligent agent occupies a point in a three-dimensional space:
– Agency: asynchrony, representation of the user, data interactivity, application interactivity, service interactivity
– Mobility: static, mobile scripts, mobile objects
– Intelligence: preferences, reasoning, planning, learning
Fixed-function agents and expert systems sit at the low ends of these axes; an intelligent agent combines high agency, mobility, and intelligence.

21 Franklin and Graesser Taxonomy [Franklin and Graesser, 1996]
"An autonomous agent is a system situated within and a part of an environment that senses that environment and acts on it over time, in pursuit of its own agenda and so as to effect what it senses in the future."
[Diagram: a tree rooted at autonomous agents, branching into biological agents, robotic agents, and computational agents; computational agents divide into software agents and artificial life agents; software agents divide into task-specific agents, entertainment agents, and viruses]

22 Caglayan and Harrison Taxonomy [Caglayan and Harrison, 1997]
An agent is characterized by three components:
– Task-level skills, e.g., information retrieval, information filtering, electronic commerce, coaching
– Knowledge: a priori knowledge (developer-specified, user-specified, or system-specified) and learning (case-based learning, decision trees, neural networks, evolutionary algorithms)
– Communication skills: with the user (interface, speech, social) and with other agents (an inter-agent communication language)

23 Agents vs. Objects
Are agents just objects by another name? An object:
– encapsulates some state
– communicates via message passing
– has methods, corresponding to operations that may be performed on this state
Main differences:
– Agents are autonomous: agents embody a stronger notion of autonomy than objects; in particular, they decide for themselves whether or not to perform an action on request from another agent
– Agents are smart: they are capable of flexible (reactive, pro-active, social) behavior, about which the standard object model has nothing to say
– Agents are active: a multi-agent system is inherently multi-threaded, in that each agent is assumed to have at least one thread of active control

24 Agents vs. Expert Systems (1)
Aren't agents just expert systems by another name? Expert systems are typically disembodied "expertise" about some (abstract) domain of discourse (e.g., blood diseases).
Example: MYCIN knows about blood diseases in humans
– It has a wealth of knowledge about blood diseases, in the form of rules
– A doctor can obtain expert advice about blood diseases by giving MYCIN facts, answering questions, and posing queries
Main differences:
– Agents are situated in an environment: MYCIN is not aware of the world; the only information it obtains is by asking the user questions
– Agents act: MYCIN does not operate on patients
Some real-time (typically process-control) expert systems are agents.

25 Agents vs. Expert Systems (2) [Maes, 1997]
                       Software Agents       Expert Systems
    Level of users     naive                 expert
    Tasks              common tasks          high-level tasks
    Personalized       different actions     same actions
    Active/autonomous  act on their own      act passively
    Adaptive           learn and change      remain fixed

26 Intelligent Agents vs. AI
Aren't agents just the AI project? Isn't building an agent what AI is all about?
– AI aims to build systems that can (ultimately) understand natural language, recognize and understand scenes, use common sense, think creatively, etc., all of which are very hard
– So, don't we need to solve all of AI to build an agent?
When building an agent, we simply want a system that can choose the right action to perform, typically in a limited domain. We do not have to solve all the problems of AI to build a useful agent: a little intelligence goes a long way!
Oren Etzioni, on the commercial experience of NETBOT, Inc.: "We made our agents dumber and dumber and dumber… until finally they made money."

27 Environments: Accessible vs. Inaccessible
An accessible environment is one in which the agent can obtain complete, accurate, up-to-date information about the environment's state.
Most moderately complex environments (including, for example, the everyday physical world and the Internet) are inaccessible. The more accessible an environment is, the simpler it is to build agents to operate in it.

28 Environments: Deterministic vs. Non-deterministic
A deterministic environment is one in which any action has a single guaranteed effect: there is no uncertainty about the state that will result from performing an action.
The physical world can, to all intents and purposes, be regarded as non-deterministic. Non-deterministic environments present greater problems for the agent designer.

29 Environments: Episodic vs. Non-episodic
In an episodic environment, the performance of an agent depends on a number of discrete episodes, with no link between its performance in different episodes.
Episodic environments are simpler from the agent developer's perspective: the agent can decide what action to perform based only on the current episode, and need not reason about the interactions between this episode and future ones.

30 Environments: Static vs. Dynamic
A static environment is one that can be assumed to remain unchanged except by the performance of actions by the agent.
A dynamic environment is one that has other processes operating on it, and which hence changes in ways beyond the agent's control. Other processes can interfere with the agent's actions (as in concurrent systems theory). The physical world is a highly dynamic environment.

31 Environments: Discrete vs. Continuous
An environment is discrete if there is a fixed, finite number of actions and percepts in it. Russell and Norvig give chess as an example of a discrete environment, and taxi driving as an example of a continuous one.
Continuous environments have a certain level of mismatch with computer systems, whereas discrete environments could in principle be handled by a kind of "lookup table".

32 Related Research Areas
– Distributed systems
– Database and knowledge-base technology
– Biological analogies
– Machine learning
– Cognitive science
– AI and computational linguistics

33 Database and Knowledge-base Technology (Related Area)
Intelligent agents need to be able to represent and reason about a number of things, including:
– metadata about documents and collections of documents
– linguistic knowledge (e.g., thesauri, proper-name recognition)
– domain-specific data, information, and knowledge
– models of other agents (human or artificial): their capabilities, performance, beliefs, desires, intentions, plans, etc.
– tasks, task structures, plans, etc.

34 Distributed Computing (Related Area)
Concurrency:
– analyzing and specifying protocols, e.g., deadlock and livelock prevention, fairness
– achieving and preserving consistency
Performance evaluation:
– visualization
– debugging
Exploiting the advantages of parallelism:
– multi-threaded implementation

35 Cognitive Science (Related Area)
The BDI agent model:
– Beliefs: the agent's information about itself and its environment
– Desires: the goal states it wants to reach
– Intentions: the actions and plans adopted to achieve those goals
– In short: B + D => I, and I => A (beliefs and desires yield intentions; intentions yield actions)
Advantages of the BDI model:
– designing static agent models
– designing the communication between agents
– inferring the internal state of an agent
– predicting other agents' actions
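
A toy sketch of the B + D => I, I => A flow; the belief/desire/plan data structures and their contents are assumed purely for illustration, not part of any standard BDI implementation:

    beliefs = {"door_open": False, "have_key": True}
    desires = ["enter_room", "fly_to_moon"]

    # B + D => I: adopt only the desires the current beliefs make achievable
    achievable = {"enter_room": lambda b: b["have_key"]}
    intentions = [d for d in desires
                  if d in achievable and achievable[d](beliefs)]

    # I => A: map each adopted intention to its next concrete plan step
    plans = {"enter_room": ["unlock_door", "open_door", "walk_in"]}
    actions = [plans[i][0] for i in intentions]

    print(intentions)  # ['enter_room']
    print(actions)     # ['unlock_door']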

36 Computational Linguistics (Related Area)
We draw on research in (computational) linguistics for an underlying communication model. Speech act theory is a high-level framework developed by philosophers and linguists to account for human communication:
– Speakers do more than just utter sentences that have a logical (truth-theoretic) meaning, such as "The cat is on the mat"
– Speakers perform speech acts: requests, suggestions, promises, warnings, threats, criticisms, praise, etc., as in "I hereby promise to buy you lunch"
– Every utterance is a speech act
[Diagram: a conversation state machine with states start, asked, and told, connected by performatives such as ask-one, tell, deny, untell, and sorry]

37 Econometric Models (Related Area)
Economics studies how to model, predict, and control the aggregate behavior of large collections of independent agents. Conceptual tools include:
– game theory
– general-equilibrium market mechanisms
– protocols for voting and auctions
One objective is to design artificial markets and protocols that produce the desired behavior in our agents. Michigan's Digital Library project uses a market-based approach to control its agents.

38 Biological Analogies (Related Area)
One way that agents could adapt is via an evolutionary process: individual agents use their own strategies for behavior, with "death" as the punishment for poor performance and "reproduction" as the reward for good performance. Artificial life techniques include:
– genetic programming
– natural selection
– sexual reproduction

39 Machine Learning (Related Area)
Techniques developed for machine learning are being applied to software agents to allow them to adapt to users and other agents. Popular techniques:
– reasoning with uncertainty
– decision tree induction
– neural networks
– reinforcement learning
– memory-based reasoning
– genetic algorithms
[Image: Rev. Thomas Bayes and his theorem]

40 Applications of Intelligent Agents (1)
– E-mail agents: Beyond Mail, Lotus Notes, Maxims
– Scheduling agents: ContactFinder
– Desktop agents: Office 2000 Help, Open Sesame
– Web-browsing assistants: WebWatcher, Letizia
– Information filtering agents: Amalthaea, Jester, InfoFinders, Remembrance Agent, PHOAKS, SiteSeer

41 Applications of Intelligent Agents (2)
– News-service agents: NewsHound, GroupLens, Firefly, Fab, ReferralWeb, NewT
– Comparison shopping agents: mySimon, BargainFinder, Bazaar, ShopBot, Fido
– Brokering agents: PersonalLogic, Barnes, Kasbah, Jango, Yenta
– Auction agents: AuctionBot, AuctionWeb
– Negotiation agents: DataDetector, T@T

43 Formalization of Agents
– Agents: standard agents, purely reactive agents, agents with state
– Environments
– History
– Perception

44 Agents and Environments (Formalization)
The environment is characterized by a set of states S = {s1, s2, …}. The effectoric capability of the agent is characterized by a set of actions A = {a1, a2, …}.
[Diagram: the agent receives sensor input from the environment and produces action output that feeds back into it]

45 Standard Agents (Formalization)
A standard agent decides what action to perform on the basis of its history (its experiences). It can be viewed as a function
    action: S* → A
where S* is the set of finite sequences of elements of S.
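
A sketch of action: S* → A as a plain function over a state history; the string states and the explore/exploit rule are invented for illustration:

    def action(history):
        """Decide based on the whole sequence of visited states, not just the last."""
        # e.g. act differently once some state has been revisited
        if len(set(history)) < len(history):
            return "explore"
        return "exploit"

    print(action(["s1", "s2"]))        # exploit
    print(action(["s1", "s2", "s1"]))  # explore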

46 Environments (Formalization)
Environments can be modeled as a function
    env: S × A → P(S)
where P(S) is the powerset of S. This function takes the current state of the environment s ∈ S and an action a ∈ A (performed by the agent), and maps them to the set of possible resulting environment states env(s, a).
– Deterministic environment: all the sets in the range of env are singletons
– Non-deterministic environment: otherwise
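
A sketch of env: S × A → P(S) as a dictionary from (state, action) pairs to sets of successor states, with toy states and actions assumed for illustration:

    env = {
        ("cold", "heat"): {"warm"},           # deterministic: a singleton set
        ("warm", "wait"): {"warm", "cold"},   # non-deterministic: two outcomes
    }

    def deterministic(env):
        """True iff every set in the range of env is a singleton."""
        return all(len(successors) == 1 for successors in env.values())

    print(deterministic(env))  # False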

47 History (Formalization)
A history represents the interaction between an agent and its environment. It is a sequence
    h: s0 --a0--> s1 --a1--> s2 --a2--> … --a(u-1)--> su --au--> …
where s0 is the initial state of the environment, au is the u-th action that the agent chose to perform, and su is the u-th environment state.

48 Purely Reactive Agents (Formalization)
A purely reactive agent decides what to do without reference to its history (no references to the past). It can be represented by a function
    action: S → A
Example: a thermostat, with environment states "temperature OK" and "too cold":
    action(s) = heater off   if s = temperature OK
                heater on    otherwise
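
The slide's thermostat written directly as a purely reactive agent (action: S → A), with the two states encoded as strings:

    def thermostat(state):
        """A purely reactive agent: output depends only on the current state."""
        return "heater off" if state == "temperature OK" else "heater on"

    print(thermostat("temperature OK"))  # heater off
    print(thermostat("too cold"))        # heater on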

49 Perception (Formalization)
Splitting the agent into see and action functions, perception is the result of the function
    see: S → P
where P is a (non-empty) set of percepts (perceptual inputs). The action function then becomes
    action: P* → A
which maps sequences of percepts to actions.
[Diagram: within the agent, see receives input from the environment and action outputs to it]

50 Perception Ability (1) (Formalization)
Let E be the set of different perceived states. Perceptual ability ranges from non-existent, |E| = 1 (every state looks the same), to omniscient, |E| = |S| (every state is distinguished).
Two different states s1 ∈ S and s2 ∈ S (with s1 ≠ s2) are indistinguishable if see(s1) = see(s2).

51 Perception Ability (2) (Formalization)
Example: let x = "the room temperature is OK" and y = "there is no war at this moment". Then
    S = {(x, y), (x, ¬y), (¬x, y), (¬x, ¬y)} = {s1, s2, s3, s4}
but for the thermostat:
    see(s) = p1   if s = s1 or s = s2
             p2   if s = s3 or s = s4
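
The same four-state example as code: the thermostat's see function collapses S into two percepts, so s1/s2 and s3/s4 are indistinguishable to it (state and percept names follow the slide):

    see = {"s1": "p1", "s2": "p1", "s3": "p2", "s4": "p2"}

    def indistinguishable(a, b):
        """Two states are indistinguishable if they map to the same percept."""
        return see[a] == see[b]

    print(indistinguishable("s1", "s2"))  # True  (temperature OK either way)
    print(indistinguishable("s2", "s3"))  # False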

52 Agents with State (1) (Formalization)
The agent is now decomposed into see, next, and action functions.
[Diagram: within the agent, see feeds percepts into an internal state, next updates that state, and action selects the output to the environment]

53 Agents with State (2) (Formalization)
The perception function is the same as before, see: S → P. The action-selection function is now
    action: I → A
where I is the set of all internal states of the agent. An additional state-update function is introduced:
    next: I × P → I
Behavior:
– The agent starts in some initial internal state i0
– It then observes its environment state s
– Its internal state is updated to next(i0, see(s))
– The action selected by the agent becomes action(next(i0, see(s))), and it is performed
– The agent repeats the cycle, observing the environment again
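
A runnable sketch of this control loop, with see: S → P, next: I × P → I, and action: I → A; the temperature environment and the cooling rule are invented placeholders just to make the loop execute:

    def see(s):            return "hot" if s > 25 else "ok"   # S -> P
    def next_state(i, p):  return i + [p]                     # I x P -> I
    def action(i):         return "cool" if i[-1] == "hot" else "idle"

    def run(s, steps=3):
        i = []                                  # initial internal state i0
        for _ in range(steps):
            i = next_state(i, see(s))           # observe, then update state
            a = action(i)                       # select action from state
            s = s - 5 if a == "cool" else s     # the action affects the environment
        return i

    print(run(30))  # ['hot', 'ok', 'ok']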

54 Task Modeling in Intelligent Agents
Before we design an intelligent agent, we must specify its "task environment" in terms of PEAS:
– Performance measure
– Environment
– Actuators
– Sensors

55 Example: Agent = Taxi Driver (Task Modeling)
– Performance measure: safe, fast, legal, comfortable trip; maximize profits
– Environment: roads, other traffic, pedestrians, customers
– Actuators: steering wheel, accelerator, brake, signal, horn
– Sensors: cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard

56 Example: Agent = Medical Diagnosis System (Task Modeling)
– Performance measure: healthy patient, minimized costs, avoided lawsuits
– Environment: patient, hospital, staff
– Actuators: screen display (questions, tests, diagnoses, treatments, referrals)
– Sensors: keyboard (entry of symptoms, findings, patient's answers)

57 Example: Agent = Part-Picking Robot (Task Modeling)
– Performance measure: percentage of parts in correct bins
– Environment: conveyor belt with parts, bins
– Actuators: jointed arm and hand
– Sensors: camera, joint-angle sensors

58 Environmental Type Review (1)
– Fully observable (vs. partially observable): the agent's sensors give it access to the complete state of the environment at each point in time
– Deterministic (vs. stochastic): the next state of the environment is completely determined by the current state and the action executed by the agent (if the environment is deterministic except for the actions of other agents, it is strategic)
– Episodic (vs. sequential): the agent's experience is divided into atomic episodes, and decisions do not depend on previous decisions/actions

59 Environmental Type Review (2)
– Static (vs. dynamic): the environment is unchanged while the agent is deliberating (it is semi-dynamic if the environment itself does not change with the passage of time but the agent's performance score does)
– Discrete (vs. continuous): a limited number of distinct, clearly defined percepts and actions; how do we represent, abstract, or model the world?
– Single-agent (vs. multi-agent): an agent operating by itself in an environment

60 Environmental Type Review (3)
    Task environment        Observable  Determ./stoch.  Episodic/seq.  Static/dyn.  Discrete/cont.  Agents
    Crossword puzzle        fully       determ.         sequential     static       discrete        single
    Chess with clock        fully       strategic       sequential     semi         discrete        multi
    Poker                   partial     stochastic      sequential     static       discrete        multi
    Backgammon              fully       stochastic      sequential     static       discrete        multi
    Taxi driving            partial     stochastic      sequential     dynamic      continuous      multi
    Medical diagnosis       partial     stochastic      sequential     dynamic      continuous      single
    Image analysis          fully       determ.         episodic       semi         continuous      single
    Part-picking robot      partial     stochastic      episodic       dynamic      continuous      single
    Refinery controller     partial     stochastic      sequential     dynamic      continuous      single
    Interactive Eng. tutor  partial     stochastic      sequential     dynamic      discrete        multi

61 Agent Types
Goal of AI: given a PEAS task environment, construct the agent function f, and design an agent program that implements f on a particular architecture.
Types:
– Table-driven agents
– Simple reflex agents
– Model-based reflex agents
– Goal-based agents
– Utility-based agents
– Learning agents

62 Table-Driven Agents (Agent Type)
A table-driven agent keeps the current state of its decision process as a table lookup over the entire percept history.
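
A sketch of the idea, with a toy vacuum-world table assumed for illustration; keying the table by the whole history is exactly why this approach explodes in size:

    table = {
        ("A dirty",): "suck",
        ("A dirty", "A clean"): "right",
        # ... one entry would be needed for every possible percept history
    }

    def table_driven_agent(history, table):
        """Look up the entire percept history; fall back to a no-op if absent."""
        return table.get(tuple(history), "no-op")

    print(table_driven_agent(["A dirty"], table))  # suck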

63 Simple Reflex Agents (Agent Type)
Example: the vacuum-cleaner world. The agent has NO MEMORY and fails if the environment is partially observable.
    function REFLEX_VACUUM_AGENT(percept) returns an action
        (location, status) = UPDATE_STATE(percept)
        if status = DIRTY then return SUCK
        else if location = A then return RIGHT
        else if location = B then return LEFT
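
The same pseudocode translated to Python, assuming the percept arrives as a (location, status) pair of strings:

    def reflex_vacuum_agent(percept):
        """Acts on the current percept only: no memory of past percepts."""
        location, status = percept
        if status == "dirty":
            return "suck"
        elif location == "A":
            return "right"
        elif location == "B":
            return "left"

    print(reflex_vacuum_agent(("A", "dirty")))  # suck
    print(reflex_vacuum_agent(("A", "clean")))  # right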

64 Example: Simple Reflex Agents (Agent Type)
    Percept          Action
    At A, A dirty    Vacuum
    At A, A clean    Move right
    At B, B dirty    Vacuum
    At B, B clean    Move left

65 Simple Reflex Agents (Agent Type)
Simple reflex agents act only on the basis of the current percept. The agent function is based on condition-action rules: condition → action.
Limited functionality: they work well only when the environment is fully observable and the condition-action rules have anticipated all necessary actions.

66 Model-Based Reflex Agents (Agent Type)
The agent models the world by: modeling how the world changes, modeling how its actions change the world, and maintaining a description of the current world state. Even so, without a clear goal it is sometimes unclear what to do.

67 Model-Based Reflex Agents (Agent Type)
These agents have information about how the world behaves (a model of the world), so they can work out facts about the parts of the world they have not seen, and can therefore handle partially observable environments. The model of the world allows them to:
– use information about how the world evolves to keep track of the parts of the world they cannot see (e.g., if the agent has seen an object in a place and has since not seen any agent moving towards that object, then the object is still at that place)
– know the effects of their own actions on the world (e.g., if the agent has moved northwards for 5 minutes, then it is 5 minutes north of where it was)
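
A sketch of this behavior, with an invented two-room cleaning scenario: the agent remembers what it has seen, so it can act on a room that is not in the current percept (the "still at that place" rule from the slide):

    class ModelBasedAgent:
        def __init__(self):
            self.model = {}      # believed status of each location

        def act(self, percept):
            # update the world model: unseen locations keep their old status
            self.model.update(percept)
            dirty = [loc for loc, status in self.model.items()
                     if status == "dirty"]
            return "clean " + dirty[0] if dirty else "idle"

    agent = ModelBasedAgent()
    print(agent.act({"A": "dirty"}))   # clean A
    print(agent.act({"B": "clean"}))   # clean A  (remembers A is still dirty)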

68 Goal-Based Agents (Agent Type)
Goals provide a reason to prefer one action over another. To use them we need to predict the future, which means we need to plan and search.

69 Goal-Based Agents (Agent Type)
The current state of the world is not always enough to decide what to do. For example, at a junction a car can go left, right, or straight; it needs knowledge of its destination to decide which of these to choose.
Goal-based agents combine a world model (as in model-based agents) with goals:
– goals are situations that are desirable
– goals give the agent a way to choose among multiple possibilities, selecting the one that reaches a goal state
Differences from reflex agents:
– goals are explicit
– the future is taken into account
– reasoning about the future is necessary: planning and search
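
A sketch of the planning-and-search idea using the slide's junction example; the state graph and action names are assumed, and breadth-first search stands in for whatever planner a real agent would use:

    from collections import deque

    def plan(start, goal, successors):
        """Breadth-first search returning a list of actions reaching the goal."""
        frontier = deque([(start, [])])
        seen = {start}
        while frontier:
            state, path = frontier.popleft()
            if state == goal:
                return path
            for action, nxt in successors.get(state, []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [action]))
        return None  # no path to the goal

    graph = {"junction": [("left", "mall"), ("right", "home")]}
    print(plan("junction", "home", graph))  # ['right']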

70 Utility-Based Agents (Agent Type)
Some solutions to goal states are better than others; which one is best is given by a utility function, which also tells us which combination of goals is preferred.

71 Utility-Based Agents (Agent Type)
What if there are multiple alternative ways of achieving the same goal?
– Goals provide only a coarse distinction between "happy" and "unhappy" states; utility-based agents have finer degrees of comparison between states
– A utility-based agent combines a world model with goals and utility functions
Utility functions map states to a measure of the utility of the states, often real numbers. They are used to:
– select between conflicting goals
– select between alternative ways of achieving a goal
– deal with cases of multiple goals, none of which can be achieved with certainty, weighing up likelihood of success against importance of each goal
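
A sketch of utility-based action selection: when several actions all reach acceptable states, pick the one whose predicted outcome state has the highest utility. The outcome states and the utility numbers below are assumed values, not from the slides:

    outcomes = {"route1": "fast", "route2": "scenic", "route3": "slow"}
    utility = {"fast": 0.9, "scenic": 0.7, "slow": 0.2}

    # choose the action whose predicted outcome maximizes utility
    best = max(outcomes, key=lambda a: utility[outcomes[a]])
    print(best)  # route1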

72 Learning Agents (Agent Type)
How does an agent improve over time? By monitoring its performance and suggesting better modeling, new action rules, etc.

73 Summary
An agent is a computer system that is situated in some environment, and that is capable of autonomous action in this environment in order to meet its design objectives.
– Key characteristics: autonomy, social ability, reactivity, pro-activeness, situatedness, persistence, cooperation, adaptation
– Other characteristics: mobility, veracity, benevolence, rationality
Next lecture's topic: agent architecture
– BDI (Belief-Desire-Intention) architecture
– Reactive architecture
– Deliberative architecture
– Hybrid architecture

