Ift6802 - H03 Electronic Commerce: Agents. Presentation adapted from the notes of Adina Florea, course at Worcester Polytechnic.

1 Ift6802 - H03 Electronic Commerce: Agents. Presentation adapted from the notes of Adina Florea, course at Worcester Polytechnic

2 Lecture outline
Motivation for agents
Definitions of agents: agent characteristics, taxonomy
Agents and objects
Multi-agent systems
Agent intelligence
Examples

3 Context of agent technology
Need for robust and flexible software components
Opportunism: taking advantage of available resources
Planning for the exploitation of the unpredictable
Autonomy, intelligence, evolution

4 Motivations for agents
Large-scale, complex, distributed systems: understand, build, and manage them
Open and heterogeneous systems: build components independently
Distribution of resources
Distribution of expertise
Need for personalization and customization
Interoperability with legacy systems

5 Questions: programs or agents?
A thermostat with a sensor for detecting room temperature
An electronic calendar (arranges meetings)
A mail system with messages sorted by date, or with messages sorted by order of importance and spam deleted
A bidding program for electronic auctions

6 Agent? The term agent is used frequently nowadays in:
Sociology, Biology, Cognitive Psychology, Social Psychology, and Computer Science (AI)
Why agents? What are they in Computer Science?
Do they bring us anything new in modelling and constructing our applications?
There is much discussion of what (software) agents are and of how they differ from programs in general

7 What is an agent (in computer science)?
There is no universally accepted definition of the term agent, and there is a good deal of ongoing debate and controversy on this subject
The situation is somewhat comparable with the one encountered when defining artificial intelligence
Confluence of several research communities:
Distributed computing
AI
Sociology and economics
Artificial life / simulation

8 More than 30 years ago, computer scientists set themselves to create artefacts with the capacities of an intelligent person (artificial intelligence). Now we face the challenge of emulating or simulating the way humans act in their environment, interact with one another, cooperatively solve problems or act on behalf of others, solve more and more complex problems by distributing tasks, or enhance their problem-solving performance through competition.

9 Are agents intelligent?
Not necessarily; we need not enter a debate about what intelligence is
An agent is most often defined by its characteristics, many of which may be considered a manifestation of some aspect of intelligent behaviour

10 Agent definitions
“Most often, when people use the term ‘agent’ they refer to an entity that functions continuously and autonomously in an environment in which other processes take place and other agents exist.” (Shoham, 1993)
“An agent is an entity that senses its environment and acts upon it.” (Russell, 1997)

11 Agent & Environment
[Figure: the agent receives sensor input from the environment and produces action output upon it]

12 “Autonomous agents are computational systems that inhabit some complex dynamic environment, sense and act autonomously in this environment, and by doing so realise a set of goals or tasks for which they are designed.” (Maes, 1995)
“Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment; action to affect conditions in the environment; and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions.” (Hayes-Roth, 1995)
“Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program, with some degree of independence or autonomy, and in so doing, employ some knowledge or representation of the user’s goals or desires.” (the IBM Agent)

13 Levels of agents
Loop: perceive the environment and wait for something to do; decide what to do; take the action
Reactive controllers
Software daemons
Examples: thermostat, in/out spooler, servers
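The perceive-decide-act loop above can be made concrete with the thermostat example from the slide. A minimal sketch in Python; the 20.0 setpoint, the hysteresis band, and the `sense`/`act` callbacks are illustrative assumptions, not part of the slide:

```python
def thermostat_step(sense, act, setpoint=20.0, band=0.5):
    """One pass of the reactive loop: perceive the environment,
    decide what to do, take the action (or keep waiting)."""
    temperature = sense()                 # perceive the environment
    if temperature < setpoint - band:    # decide what to do...
        act("heat on")                   # ...and take the action
    elif temperature > setpoint + band:
        act("heat off")
    # inside the dead band: nothing to do, wait for the next perception

# Usage: drive one step with a stubbed sensor and an action log.
log = []
thermostat_step(sense=lambda: 18.0, act=log.append)
```

A daemon or server fits the same skeleton: `sense` blocks on a request queue and `act` dispatches the response.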

14 Identified characteristics
Two main streams of definitions:
Define an agent in isolation
Define an agent in the context of a society of agents (social dimension, MAS)
Two types of definitions:
Does not necessarily incorporate intelligence
Must incorporate some kind of AI behaviour (intelligent agents)

15 “An interesting agent is a hardware or software-based computer system that enjoys the following properties:
autonomy: agents operate without the direct intervention of humans or others, and control their actions and state;
reactivity: agents perceive their environment and respond in a timely fashion to changes that occur in it;
complexity: manipulating the environment is not trivial;
pro-activeness: agents do not simply act in response to their environment, they are able to exhibit goal-directed behaviour by taking initiative;
social ability: agents interact with other agents (and possibly humans) via some kind of agent language.” (Wooldridge and Jennings, 1995)

16 Agent characteristics
Function continuously / persistent software
Sense the environment and act upon it / reactivity
Act on behalf of a user or another program
Autonomous
Purposeful action / pro-activity
Goal-directed behaviour vs reactive behaviour?
Mobility? Intelligence?
Cognitive: goals, rationality; reasoning, decision making
Learning/adaptation
Interaction with other agents: the social dimension
Other bases for intelligence?

17 Agent Environment
Environment properties:
Accessible vs inaccessible
Deterministic vs nondeterministic
Episodic vs non-episodic
Static vs dynamic
Open vs closed
Contains other agents or not

18 MAS - many agents in the same environment
Interactions among agents are high-level interactions
Interactions serve coordination, communication, and organization
Coordination:
collectively motivated / common interest
self-interested: own goals / indifferent
own goals / competition / competing for the same resources
own goals / competition / contradictory goals
own goals / coalitions

19 Communication:
communication protocol
communication language
negotiation to reach agreement
ontology
Organizational structures:
centralized vs decentralized
hierarchies / markets
The "cognitive agent" approach
Are these MAS? Electronic calendars; an air-traffic control system

20 The wise men problem
A king, wishing to know which of his three wise men is the wisest, paints a white spot on each of their foreheads, tells them at least one spot is white, and asks each to determine the color of his spot. After a while the smartest announces that his spot is white, reasoning as follows: “Suppose my spot were black. The second wisest of us would then see a black and a white, and would reason that if his spot were black, the dumbest would see two black spots and would conclude that his spot is white on the basis of the king's assurance. He would have announced it by now, so my spot must be white.”
In formalizing the puzzle, we do not wish to formalize reasoning about how fast other people reason. Therefore, we imagine either that the king asks the wise men in sequence whether they know the colors of their spots, or that he asks synchronously, “Do you know the color of your spot?”, getting a chorus of noes. He asks again with the same result, but on the third asking, they answer that their spots are white. Needless to say, we are also not formalizing any notion of relative wisdom.
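The synchronous protocol can be checked mechanically by possible-world elimination: each sage considers the worlds consistent with the king's statement and with what he sees, and every chorus of noes eliminates the worlds in which someone would already have known. A sketch; the function names and encoding are ours, not part of any standard formalization:

```python
from itertools import product

def wise_men(actual):
    """Return the round (1-based) on which the first announcement is made
    in the synchronous variant. actual: tuple of booleans, True = white
    spot; the king guarantees at least one spot is white."""
    n = len(actual)
    # Worlds consistent with the king's announcement: at least one white.
    worlds = [w for w in product([False, True], repeat=n) if any(w)]

    def consistent(i, seen):
        # Worlds sage i considers possible, given everyone else's spots.
        return [w for w in worlds
                if all(w[j] == seen[j] for j in range(n) if j != i)]

    for rnd in range(1, n + 1):
        # A sage knows his color when all his candidate worlds agree on it.
        knows = [len({w[i] for w in consistent(i, actual)}) == 1
                 for i in range(n)]
        if any(knows):
            return rnd
        # Everyone answered "no": drop worlds where someone would have known.
        worlds = [w for w in worlds
                  if not any(len({v[i] for v in consistent(i, w)}) == 1
                             for i in range(n))]
    return None
```

With all three spots white the announcement comes on the third asking, exactly as in the text; with two white spots it would come on the second, and with one on the first.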

21 Agents vs Objects
Agents are the result of a continuous progression in the evolution of abstraction concepts in computing.

22 Agents vs Objects
Autonomy: stronger for agents; agents have sole control over their actions, and an agent may refuse a request or ask for compensation
Flexibility: agents are reactive, like objects, but also pro-active
Agents are usually persistent
Agents have their own thread of control
MAS vs Objects
Coordination: for objects, as defined by the designer, with no contradictory goals
Communication: higher-level communication than object messages
Organization: no explicit organizational structures for objects
No prescribed rational/intelligent behaviour for objects

23 Ferber: An agent is a physical or virtual entity
1. which is capable of acting in an environment;
2. which can communicate directly with other agents;
3. which is driven by a set of tendencies (in the form of individual objectives or of a satisfaction/survival function which it tries to optimize);
4. which possesses resources of its own;
5. which is capable of perceiving its environment (but to a limited extent);
6. which has only a partial representation of its environment (and perhaps none at all);
7. which possesses skills and can offer services;
8. which may be able to reproduce itself;
9. whose behaviour tends towards satisfying its objectives, taking account of the resources and skills available to it and depending on its perception, its representation, and the communications it receives.

24 Ferber: A multi-agent system comprises
1. An environment E, that is, a space which generally has volume.
2. A set of objects, O. These objects are situated, that is to say, it is possible at a given moment to associate any object with a position in E.
3. An assembly of agents, A, which are specific objects (a subset of O) representing the active entities in the system.
4. An assembly of relations, R, which link objects (and therefore agents) to one another.
5. An assembly of operations, Op, making it possible for the agents of A to perceive, produce, transform, and manipulate objects in O.
6. Operators with the task of representing the application of these operations and the reaction of the world to this attempt at modification, which we shall call the laws of the universe.
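Ferber's ⟨E, O, A, R, Op⟩ definition transcribes almost literally into a data structure. A minimal sketch; the class and field names are ours, chosen to mirror the six points above:

```python
from dataclasses import dataclass, field

@dataclass
class MultiAgentSystem:
    environment: object                             # E: a space, generally with volume
    positions: dict = field(default_factory=dict)   # O: object -> position in E (situated)
    agents: set = field(default_factory=set)        # A: the active subset of O
    relations: set = field(default_factory=set)     # R: pairs of linked objects
    operations: dict = field(default_factory=dict)  # Op: perceive, produce, transform, ...

    def is_situated(self, obj):
        """Point 2: an object is situated if it can be given a position in E."""
        return obj in self.positions

    def is_well_formed(self):
        """Point 3: the agents must be a subset of the objects."""
        return self.agents <= set(self.positions)

# Usage: a toy grid world with one agent and one passive object.
mas = MultiAgentSystem(environment="5x5 grid",
                       positions={"robot": (0, 0), "obstacle": (2, 3)},
                       agents={"robot"})
```

The "laws of the universe" (point 6) would live in `operations` as functions mapping an attempted action to the world's actual reaction.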

25 There are two important special cases of this general definition.
Purely situated agents: an example would be a robot. In this case E, the environment, is Euclidean 3-space; A are the robots; and O contains not only the other robots but physical objects such as obstacles. These are situated agents.
Purely communicating agents: if A = O and E is empty, then the agents are all interlinked in a communication network and communicate by sending messages. We have a purely communicating MAS.

26 How do agents acquire intelligence?
Cognitive agents
The model is human intelligence and the human perspective of the world; an intelligent agent is characterised using symbolic representations and mentalistic notions:
knowledge - John knows humans are mortal
beliefs - John took his umbrella because he believed it was going to rain
desires, goals - John wants to possess a PhD
intentions - John intends to work hard in order to have a PhD
choices - John decided to apply for a PhD
commitments - John will not stop working until getting his PhD
obligations - John has to work to make a living
(Shoham, 1993)

27 Premises
Such a mentalistic or intentional view of agents - a kind of "folk psychology" - is not just another invention of computer scientists, but a useful paradigm for describing complex distributed systems.
The complexity of such a system, or the fact that we cannot know or predict the internal structure of all its components, seems to imply that we must rely on an animistic, intentional explanation of system functioning and behaviour.
Is this the only way agents can acquire intelligence?

28 Comparison with AI: an alternate approach to realizing intelligence is the sub-symbolic level of neural networks. There is likewise an alternate model of intelligence in agent systems.
Reactive agents
Simple processing units that perceive and react to changes in their environment
They do not have a symbolic representation of the world and do not use complex symbolic reasoning
The advocates of reactive agent systems claim that intelligence is not a property of the active entity but is distributed in the system, and stems from the interaction between the many entities of the distributed structure and the environment.

29 The problem of prey and predators
Problem definition: coordinate the actions of the predators so that they can surround the prey animals as fast as possible, a surrounded prey animal being considered dead. The prey and the predators (both are agents) move over a space represented in the form of a grid. The objective is for the predators to capture the prey animals by surrounding them, as in the figure above. The following hypotheses are laid down:
1. The dimension of the environment is finite.
2. The predators and the prey move at fixed speeds, and generally at the same speed.
3. The prey move in a random manner, making Brownian movements.
4. The predators can use corners and edges to block a prey animal's path.
5. The predators have a limited perception of the world that surrounds them: they can see the prey only if it is in one of the squares within their field of perception.
Cognitive approach
Detection of prey animals
Setting up the hunting team; allocation of roles
Reorganisation of teams
Necessity for dialogue/communication and for coordination
Predator agents have goals; they appoint a leader that organizes the distribution of work and coordinates actions
Reactive approach
The prey emit a signal whose intensity decreases with distance, which plays the role of an attractor for the predators
Hunters emit a signal which acts as a repellent for the other hunters, so that they do not find themselves in the same place
Each hunter is attracted by the prey and (weakly) repelled by the other hunters
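The reactive approach amounts to hill-climbing on a potential field: the prey's signal attracts, the other hunters' signals weakly repel, and each hunter simply moves to the neighbouring cell with the highest value. A minimal sketch; the Chebyshev distance, the 1/(1+d) decay law, and the 0.3 repulsion weight are illustrative choices, not from the slide:

```python
def chebyshev(a, b):
    """Grid distance where a diagonal step counts as one move."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def potential(cell, prey, other_hunters, repulsion=0.3):
    """Prey signal decays with distance (attractor); other hunters'
    signals act as a weak repellent."""
    attract = 1.0 / (1 + chebyshev(cell, prey))
    repel = sum(1.0 / (1 + chebyshev(cell, h)) for h in other_hunters)
    return attract - repulsion * repel

def step(hunter, prey, other_hunters, size=10):
    """Purely reactive move: go to the neighbouring cell (or stay put)
    with the highest potential. No goals, no leader, no dialogue."""
    x, y = hunter
    candidates = [(x + dx, y + dy)
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                  if 0 <= x + dx < size and 0 <= y + dy < size]
    return max(candidates, key=lambda c: potential(c, prey, other_hunters))
```

Surrounding behaviour emerges from the interplay of the two signals rather than from any explicit plan, which is exactly the reactive-agents claim of slide 28.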

30 The wise men problem; the problem of the Prisoner's Dilemma
The wise men problem (slide 20): a king, wishing to know which of his three wise men is the wisest, paints a white spot on each of their foreheads, tells them at least one spot is white, and asks each to determine the color of his spot. After a while the smartest announces that his spot is white.
The problem of the Prisoner's Dilemma
The two players in the game can each choose between two moves, "cooperate" or "defect". The idea is that each player gains when both cooperate, but if only one of them cooperates, the other one, who defects, gains more. If both defect, both lose (or gain very little), but not as much as the "cheated" cooperator whose cooperation is not returned. The whole game situation and its different outcomes can be summarized by table 1, where hypothetical "points" are given as an example of how the differences in result might be quantified. The table shows the outcome for player A depending on the combination of A's action and B's action; a symmetric scheme applies to the outcomes for B.

Player A \ Player B   Cooperate           Defect
Cooperate             Fairly good (+5)    Bad (-10)
Defect                Good (+10)          Mediocre (0)
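The payoff table encodes directly as a dictionary, which also makes the dilemma checkable: defecting dominates cooperating for each player, yet mutual defection is worse for both than mutual cooperation. A sketch using the slide's hypothetical points:

```python
# Payoff to player A for each (A's move, B's move) pair,
# in the hypothetical points from the table.
PAYOFF_A = {
    ("cooperate", "cooperate"):  +5,   # fairly good
    ("cooperate", "defect"):    -10,   # bad: the cheated cooperator
    ("defect",    "cooperate"): +10,   # good: defecting against a cooperator
    ("defect",    "defect"):      0,   # mediocre
}

def outcome(a_move, b_move):
    """Payoffs (A, B). By symmetry, B's payoff is A's with moves swapped."""
    return PAYOFF_A[(a_move, b_move)], PAYOFF_A[(b_move, a_move)]

# The dilemma: whatever B does, A scores strictly higher by defecting...
defect_dominates = all(PAYOFF_A[("defect", b)] > PAYOFF_A[("cooperate", b)]
                       for b in ("cooperate", "defect"))
# ...yet mutual defection (0, 0) is worse for both than mutual cooperation (+5, +5).
```

This is why self-interested agents in a MAS may need coordination mechanisms (negotiation, reputation, repeated interaction) to reach the mutually better outcome.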

31 Emotional agents
Is intelligence only optimal action towards a goal? Only rational behaviour?
Emotional agents: a computable science of emotions; virtual actors
Listen through speech-recognition software to people
Respond, in real time, with morphing faces, music, text, and speech
Emotions:
Appraisal of a situation as an event: joy, distress
Presumed value of a situation as an effect affecting another: happy-for, gloating, resentment, jealousy, envy, sorry-for
Appraisal of a situation as a prospective event: hope, fear
Appraisal of a situation as confirming or disconfirming an expectation: satisfaction, relief, fears-confirmed, disappointment
Manifest temperament; control of emotions

32 Areas of R&D in MAS
Agent architectures
Knowledge representation: of the world, of itself, of the other agents
Distributed search
Coordination
Planning: task sharing, result sharing, distributed planning
Communication: languages, protocols
Decision making: negotiation, markets, coalition formation
Organizational theories
Learning
Implementation:
Agent programming: paradigms, languages
Agent platforms
Middleware, mobility, security

33 Agents and MAS Applications
Industrial applications: real-time monitoring and management of manufacturing and production processes, telecommunication networks, transportation systems, electricity distribution systems, etc.
Business process management, decision support
E-commerce, e-markets
Information retrieval and filtering
PDAs (personal digital assistants)
Human-computer interaction
Entertainment
CAI, Web-based learning
CSCW

34 Some examples
PDAs
Used as an interface with the user: manage the interactions with other assistant agents (meeting organisation, ...) and with other types of agent (information search, service, contract negotiation, ...)
Server agents
"Terminal services": bound to applications such as databases, thematic servers, etc.; achieve a "terminal" service encapsulated in agents
"Intermediate services": provide value-added services by bringing together terminal services. In a travel-agency scenario, this corresponds to the integration of services such as "hotel reservation", "flight reservation", ...

35 Some examples
Marketing agents
"Commercial" agents: used to inform potential users about their services, offers, etc. (brokers, intermediate services, and assistants)
"Buyer" agents: negotiate the prices of services
"Broker" agents: move about the net and look for information of interest to the user

36 Intelligent interfaces (server)
Example: the agent Artimis (CNET)
[Diagram: the user interacts in natural language via linguistic analysers and speech acts; learning takes place by observation and trace analysis; a dialog box connects the application to specialised agents through communication protocols; an interaction "clone" agent models the user's habits]

37 Information agents: push technology
[Diagram: an Internet thematic chain; a sender publishes on themes 1 through k; a thematic sort-selection agent matches publications against user subscriptions]

38 Are these examples of agents? If yes, are they intelligent?
Thermostat
Electronic calendar
Presenting a list of messages sorted by date
Presenting a list of messages sorted by order of importance
Criteria:
acts on behalf of a user or another program
autonomous
senses the environment and acts upon it / reactivity
purposeful action / pro-activity
functions continuously / persistent software
goals, rationality
reasoning, decision making
learning/adaptation
social dimension

