1
Agent Technology for e-Commerce
Chapter 2: Software Agents Maria Fasli Maria Fasli, University of Essex
2
What is an agent? A piece of software (and/or hardware) that acts on behalf of the user Unfortunately there is no unique and universally accepted definition of what constitutes an agent Different characteristics are important for different domains of application
3
Defining agents “An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors” (Russell and Norvig 2003) “Agents are active, persistent (software) components that perceive, reason, act and communicate” (Huhns and Singh 1997) “an entity that functions continuously in an environment in which other processes take place and other agents exist” (Shoham 1997) “Autonomous agents are computational systems that inhabit some complex environment, sense and act autonomously in this environment, and by doing so realize a set of goals or tasks that they are designed for” (Maes 1995)
4
Characteristics of agents
Although there is no agreement regarding the definitive list of characteristics for agents, among the most important seem to be: Autonomy Proactiveness Reactiveness Social ability
5
Autonomy Difficult to pin down exactly: how self-ruled the agent really is An autonomous agent is one that can interact with its environment without the direct intervention of other agents and has control over its own actions and internal states The less predictable an agent is the more autonomous it appears to be to an external observer Absolute autonomy (complete unpredictability) may not be desirable; travel agent may exceed the allocated budget Restrictions on autonomy via social norms
6
Proactiveness Proactive (goal-directed) behaviour: an agent actively seeks to satisfy its goals and further its objectives Simplest form is writing a procedure or method which involves: Preconditions that need to be satisfied for the procedure/method to be executed Postconditions which are the effects of the correct execution of the procedure If the preconditions are met and the procedure executes correctly, then the postconditions will be true
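As a minimal illustration (the procedure and its parameters are hypothetical, not from the book), a goal-directed step written as a procedure with explicit precondition and postcondition checks:

```python
def book_flight(budget, price):
    """Hypothetical procedure used by a travel agent: book a flight that fits the budget."""
    # Precondition: must hold for the procedure to be executed at all
    assert price <= budget, "precondition violated: flight exceeds the allocated budget"
    remaining = budget - price
    # Postcondition: the effect of correct execution
    assert remaining >= 0, "postcondition violated: budget overspent"
    return remaining
```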
7
Reactiveness Goal-directed behaviour as epitomised via the execution of procedures makes two limiting assumptions: (i) while the procedure executes the preconditions remain valid; (ii) the goal, and the conditions for pursuing such a goal, remain valid at least until the procedure terminates Not realistic in dynamic, complex and uncertain environments Agents must not blindly attempt to achieve their goals, but should perceive their environment and any changes that affect their goals and respond accordingly Building an agent that achieves a balance between proactive and reactive behaviour is difficult
8
Social ability Agents are rarely isolated entities; they usually live and act in an environment with other agents, human or software Social ability means being able to operate in a multi-agent environment and coordinate, cooperate, negotiate and even compete with others This social dimension of the notion of agency must address many difficult situations which are not yet fully understood, even in the context of human behaviour
9
Agents as intentional systems
Trying to understand and analyse the behaviour of complex agents in a natural, intuitive and efficient way is a nontrivial task Methods that abstract away from the mechanistic and design details of a system may be more convenient Intentional stance: ascribing human mental attitudes to a system (anthropomorphism), e.g. beliefs, desires, knowledge, wishes
10
How useful/legitimate is this approach? McCarthy explains:
To ascribe certain ‘beliefs’, ‘knowledge’, ‘free will’, ‘intentions’, ‘consciousness’, ‘abilities’ or ‘wants’ to a machine or computer program is legitimate when such an ascription expresses the same information about the machine that it expresses about a person. It is useful when the ascription helps us understand the structure of the machine, its past or future behaviour, or how to repair or improve it.
11
The intentional stance provides us with a powerful abstraction tool: the behaviour of systems whose structure is unknown can be explained Computer systems or programs are treated as rational agents. An agent possesses knowledge or beliefs about the world it inhabits, it has desires and intentions and it is capable of performing a set of actions An agent uses practical reasoning and based on its information about the world and its chosen desires and intentions will select an action which will lead it to the achievement of one of its goals
12
Making decisions We require software agents to which complex tasks and goals can be delegated Agents should be smart so that they can make decisions and take actions to successfully complete tasks and goals Endowing the agent with the capability to make good decisions is a nontrivial issue (Figure: the agent receives sensory information from the environment and performs actions upon it)
13
A simple view of an agent
Environment states S = {s1, s2, …}
Perception: see: S → P
An agent has an internal state (IS) which is updated by percepts: next: IS × P → IS
An agent can choose an action from a set A = {a1, a2, …}: action: IS → A
The effects of an agent’s actions are captured via the function do: A × S → S
14
The control loop of such an agent would look as follows:
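The control-loop figure is not reproduced in this extract; a minimal sketch of such a loop using the see/next/action/do functions above (here update and choose stand in for next and action to avoid clashing with Python built-ins, and all function bodies are placeholders supplied by the caller):

```python
def control_loop(env_state, internal_state, see, update, choose, do):
    """Abstract agent loop built from the functions on this slide:
    see: S -> P, update (next): IS x P -> IS, choose (action): IS -> A, do: A x S -> S."""
    while True:
        percept = see(env_state)                          # perceive the environment
        internal_state = update(internal_state, percept)  # revise the internal state
        a = choose(internal_state)                        # select an action
        env_state = do(a, env_state)                      # the action changes the world
```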
15
Characteristics of the environment
The nature of the environment has a direct impact on the design of an agent and its decision-making process It can be characterized as: Fully or partially observable Deterministic or stochastic Static or dynamic Episodic or sequential Discrete or continuous Single-agent or multi-agent
16
Fully vs partially observable environments
Observability describes access to information about the world In a fully observable environment an agent has complete access to its state and can observe any changes as they occur in it Most realistic environments are only partially observable Partial observability can be attributed to noise in the agent’s sensors or perceptual aliasing Partial observability is important in multi-agent systems as it also affects what the agent knows about the other agents The more information an agent has about its world, the easier it is to choose and perform the best action
17
Deterministic vs stochastic environments
Deterministic environments The next state is completely determined by the current state and the actions performed by the agent The outcome of an agent’s actions is uniquely defined, no need to stop and reconsider Most environments are stochastic There is a random element that decides how the world changes Limited sphere of influence: the effects of an agent’s actions are not known in advance An agent’s actions may even fail Stochasticity complicates agent design
18
Static vs dynamic environments
Static environments The world only changes by the performance of actions by the agent itself If an agent perceives the world at time t0 and the agent performs no action until t1, the world will not change Dynamic environments The world constantly changes Even when an agent is executing an action a with a precondition p which holds true before the execution, p may not be true at some point during the execution The outcome of an agent’s action cannot be guaranteed as other agents and the environment itself may interfere
19
It is more difficult to build agents for dynamic environments
Issue: the agent needs to do information gathering often enough in order to have up-to-date information about the world; this depends on the rate of change of the environment An agent also needs to take into account the other agents and synchronize and coordinate its actions with theirs in order to avoid interference and conflicts
20
Episodic vs sequential environments
In an episodic environment A cycle of perception and then action is considered to be a separate episode The agent’s performance depends only on the current episode The agent need not worry about the effects of its actions on subsequent episodes and need not think ahead In sequential environments each decision made affects the next one How an environment is characterized depends on the level of abstraction
21
Discrete vs continuous environments
In a discrete environment There is a fixed, finite number of actions and percepts In principle, one can enumerate all possible states and the best action to perform in each of these – not practical though In continuous environments The number of states may be infinite
22
Single-agent vs multi-agent environments
In a single-agent environment there is one agent operating whereas in multi-agent environments there are many agents that interact with each other But, at times objects or entities that we would not normally consider as agents may have to be modelled as such Nature may be modelled as an agent Usually any entity/object that affects or influences the behaviour of the agent under consideration needs to be regarded as an agent
23
Open environments The most complex environments are those that are partially observable, stochastic, dynamic, sequential, continuous and multi-agent These are known as open environments
24
Performance measure Objective: develop agents that perform well in their environments A performance measure indicates how successful an agent is Two aspects: how and when
25
How is performance assessed:
Different performance measures will be suitable for different types of agents and environments: contrast a trading agent with a vacuum-cleaning agent Objective performance measures are defined by us as external observers of a system When is performance assessed: this is also important; assessment can be continual, periodic or one-shot
26
Goal states One possible way to measure how well an agent is doing is to check that it has achieved its goal There may be a number of different action sequences that will enable an agent to satisfy its goal A good performance measure should allow the comparison of different world states or sequences of states
27
Preferences and utilities
Agents need to be able to express preferences over different goal states Each state s can be associated with a utility u(s) for each agent The utility is a real number which indicates the desirability of the state for the agent For two states s and s’, agent i:
prefers s to s’ if and only if u(s) > u(s’)
is indifferent between the two states if and only if u(s) = u(s’)
The agent’s objective is to bring about states of the environment that maximise its utility
28
Maximum Expected Utility
In a stochastic environment the performance of an action may bring about any one of a number of different outcomes The expected utility of an action a is: EU(a) = Σs’ P(s’|a) u(s’) The agent then chooses to perform the action a* which has the maximum expected utility (MEU): a* = argmaxa EU(a)
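The expected-utility calculation above can be illustrated with a minimal sketch; the actions, outcome probabilities and utility values below are hypothetical placeholders, not examples from the book:

```python
def expected_utility(action, outcomes, u):
    """EU(a) = sum over outcomes s' of P(s'|a) * u(s')."""
    return sum(p * u(s) for s, p in outcomes[action].items())

def meu_action(actions, outcomes, u):
    """Choose the action a* with the maximum expected utility."""
    return max(actions, key=lambda a: expected_utility(a, outcomes, u))

# Hypothetical numbers: two bidding actions with uncertain outcomes
outcomes = {"bid_high": {"win": 0.7, "lose": 0.3},
            "bid_low":  {"win": 0.2, "lose": 0.8}}
utility = {"win": 10, "lose": -2}.get
print(meu_action(["bid_high", "bid_low"], outcomes, utility))  # -> bid_high
```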
29
A complete specification of the utility function allows rational decisions when:
there are conflicting goals, only some of which can be accomplished; the utility function indicates the appropriate tradeoff
there are several goals that the agent can endeavour to achieve, but none of which can be achieved with certainty; the utility provides a way in which the likelihood of success can be evaluated against the importance of the goals
30
Rationality What is rational at any given time depends on: the performance measure that determines the degree of success everything that the agent has perceived so far what the agent expects to perceive and happen in the future what the agent knows about the environment the actions that the agent can perform Definition: An ideal rational agent performs actions that are expected to maximize its performance measure
31
Bounded rationality Making a decision requires computational power and memory, and computation takes time Agents are resource-bounded and this has an impact on their decision-making process: optimal decision making may not be possible Ideal rationality may be difficult to achieve Under bounded rationality: restrictions on the types of options may be imposed; the time/computation for option consideration may be limited; the search space may be pruned; the option selected will be strategically inferior to the optimal one
32
Rational decision making and optimal policies
An agent is situated in an environment where there may be other agents present Time can be measured in discrete time points T={1,2,…} The environment is in a state st at time t and the set of world states is indicated by S The agent perceives its environment through a percept pt The agent affects its environment by performing an action at A policy is a complete mapping from states to actions Given a state, a policy tells an agent what action to perform An agent may take into account information about the past and the future
33
Taking into account the past
Information about the past: percept and action pairs (pt,at) An agent’s policy: π((p1,a1),(p2,a2),…,pt) = at This can be problematic: (i) the history may be too large, and (ii) computing an optimal policy would be nontrivial from a computational complexity point of view
34
Markov environments In some environments, the state of the world at time t provides a complete description of the history before t, hence pt = st All necessary information to decide on an optimal action is in pt Such a world state is said to be Markov or have the Markov property An agent’s policy is: π(pt) = at or π(st) = at Such an agent that can ignore the past is called a reactive agent and its policy reactive or memoryless
35
Taking into account the future
In a discrete world the agent performs an action a at each time point t, and the world changes as a result of this at t+1 A transition model T(s,a,s’) describes how the world state s changes as a result of an action a being performed In a deterministic environment the transition model maps (s,a) to a single resulting state s’; a number of approaches exist to plan ahead In a stochastic environment the transition model is T(s,a,s’) = P(s’|s,a) Graph search is not applicable as there is uncertainty about the transitions between states
36
MDPs and POMDPs The problem of calculating an optimal policy in a fully observable, stochastic environment with a transition model that satisfies the Markov property, is called a Markov decision problem (MDP) In a partially observable environment, pt provides limited information and the agent cannot determine in which state the world really is The problem of calculating an optimal policy in a partially observable environment is called a partially observable Markov decision problem (POMDP) Methods for solving MDPs are not directly applicable to POMDPs
37
Optimal policies in MDPs
Given an MDP, one can calculate a policy from the transition model (probabilities) and the utility function Assume a stochastic, single-agent world with transition model P(s’|s,a). The agent should choose an optimal action a*: a* = argmaxa Σs’ P(s’|s,a) u(s’) The optimal policy is π*(s) = argmaxa Σs’ P(s’|s,a) u(s’) But to be able to calculate the policy we need to know the utilities of all states
38
Example The agent can move North, South, East and West
Bumping into a wall leaves the position unchanged Fully observable and stochastic: every move in the intended direction succeeds with p=0.8, but with probability p=0.2 the agent moves at right angles to the intended direction Only the utilities of the terminal states are known (Figure: a grid world with a Start square and two terminal states of utility +1 and −1)
39
The utility function needs to be based on a state sequence u([s0,…,sn]) instead of a single state
To use the MEU rule, the utility function needs to be separable: u([s0,…,sn]) = R(s0) + u([s1,…,sn]), where R(s0) is called the reward function
40
The immediate reward for each of the non-terminal states is −0.04
The utility u(s) of a state is defined as u(s) = R(s) + maxa Σs’ P(s’|s,a) u(s’) The rewards of the terminal states are propagated out through all the other states This is known as the Bellman equation and forms the basis of dynamic programming Calculating the utilities in dynamic programming is an n-step decision problem
41
Two other methods of calculating optimal policies in MDPs are:
Value iteration: starting with a transition model and a reward function, calculate a utility for each state, and then use this to create a policy Policy iteration: start with some policy, and then repeatedly calculate the utility function for that policy, and then use it to calculate a new policy, and so on. A policy usually converges long before the utility function does.
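A minimal value-iteration sketch, assuming the transition model is supplied as nested dictionaries T[s][a] = {s': P(s'|s,a)} and the rewards as a dictionary R; the function names and data layout are illustrative, not the book's:

```python
def value_iteration(states, actions, T, R, gamma=1.0, terminals=(), eps=1e-4):
    """Repeatedly apply the Bellman update u(s) = R(s) + gamma * max_a sum_s' P(s'|s,a) u(s')."""
    u = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if s in terminals:
                new = R[s]                      # terminal utilities are just their rewards
            else:
                new = R[s] + gamma * max(
                    sum(p * u[s2] for s2, p in T[s][a].items()) for a in actions)
            delta = max(delta, abs(new - u[s]))
            u[s] = new
        if delta < eps:                         # utilities have (approximately) converged
            return u

def policy_from_utilities(states, actions, T, u, terminals=()):
    """pi*(s) = argmax_a sum_s' P(s'|s,a) u(s')."""
    return {s: max(actions, key=lambda a: sum(p * u[s2] for s2, p in T[s][a].items()))
            for s in states if s not in terminals}
```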
42
The utilities have been calculated using the value iteration method
(Figure: panels (a) and (b) showing the grid world annotated with the computed utilities of the states, e.g. 0.93, 0.87, 0.82, 0.78, 0.76, 0.72 and 0.49, together with the terminal states +1 and −1)
43
Planning A key ability for intelligent behaviour as it increases an agent’s flexibility enabling it to construct sequences of actions to achieve its goals Planning is deciding what to do; closely related to scheduling which is deciding when to do an action The agent needs to figure out the list of steps (subtasks) that need to be carried out in order to achieve its goal Different approaches to planning
44
Approaches to planning
Logic-based approaches: such as Situation Calculus Case-based approaches: previously generated plans are stored in a library and can be reused to solve similar problems in the future Operator-based approaches: such as STRIPS
45
Planning as search: Situation space search: find a path from the initial state to the goal state Progression: forward search Regression: backward search from the goal state Plan space search: search through the space of possible plans Total order planning (totally-ordered sequence of actions) Partial order planning or least-commitment planning (partially-ordered set of actions)
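As a sketch of situation-space search by progression, a plain breadth-first forward search from the initial state to a goal state; the initial state, goal test and successor function are hypothetical placeholders supplied by the caller:

```python
from collections import deque

def forward_search(initial_state, is_goal, successors):
    """Breadth-first progression planning: search forward from the initial state
    until a state satisfying the goal test is found; return the action sequence."""
    frontier = deque([(initial_state, [])])
    visited = {initial_state}
    while frontier:
        state, plan = frontier.popleft()
        if is_goal(state):
            return plan
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, plan + [action]))
    return None  # no plan found
```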
46
Learning Learning: the process of acquiring knowledge, skills, or attitudes through experience, imitation, or teaching, which then causes changes in behaviour Important in the open, dynamic and constantly evolving environments that electronic marketplaces are
47
Why is learning important
If we succeed in endowing agents with the ability to learn, this may also offer important insights into how humans learn Impractical or even impossible to specify agent systems correctly and completely at the time of design and implementation It may not be practical to build all intelligence into the system in advance There may be hidden relationships and correlations among huge amounts of data which may not be easy to discern Produced systems may not work as well as desired or expected in the environments in which they are used
48
Knowledge about certain tasks may simply be too large to be explicitly encoded by humans
Agents that operate in dynamic environments have to be able to cope with the constant changes Any initial knowledge and skills encoded at the development stage may with time become useless or out-of-date as the environment changes The nature of the task may change The environment may change in such a way so that the agent’s goals need to be changed as well
49
Learning methods Learning takes place as a result of the interaction between an agent and the environment and from observing one’s own decision-making Isolated (centralized) learning Learning with others (decentralized or interactive learning) Through knowledge acquisition (rote, memorization) From instruction and through advice taking From examples and practice By analogy Through problem-solving By discovery – the most difficult type of learning to implement
50
To improve the learning process feedback may be provided:
Supervised learning: the feedback specifies the desired activity and the objective is to match this desired action as closely as possible Reinforcement learning: the feedback only specifies the utility of the learner’s activity and the goal is to maximise this utility Unsupervised learning: no explicit feedback is provided and the objective is to learn useful and desired skills and activities through a process of trial and error
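For the reinforcement-learning case, where the only feedback is a utility (reward) signal, a minimal tabular Q-learning sketch; the environment interface (reset/step) and the parameter values are assumptions for illustration, not part of the slides:

```python
import random
from collections import defaultdict

def q_learning_episode(env, q, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
    """One episode of tabular Q-learning: the learner only receives rewards as feedback."""
    state = env.reset()                          # hypothetical environment interface
    done = False
    while not done:
        # epsilon-greedy trial-and-error exploration
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda x: q[(state, x)])
        next_state, reward, done = env.step(a)   # reward is the utility of the activity
        best_next = max(q[(next_state, x)] for x in actions)
        q[(state, a)] += alpha * (reward + gamma * best_next - q[(state, a)])
        state = next_state
    return q

q = defaultdict(float)  # Q-values default to 0.0 before learning starts
```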
51
Agent architectures An agent architecture describes how an agent can be built Logic-based Reactive Belief-Desire-Intention Hybrid
52
Logic-based architecture
The ‘traditional’ symbolic artificial intelligence approach The agent possesses a symbolic representation of its environment (logical formulas) and rules on how it should behave and what actions it can take The behaviour of the system is generated by syntactic manipulation of the symbolic representations (logical deduction) Agent execution as theorem proving: If there is a theory that explains how the agent behaves, how goals are generated and how the agent can take action to satisfy them, then this specification can be directly executed to produce behaviour
53
The environment is described by sentences of a logical language L; the set of possible knowledge bases is KB = ℘(L)
At every moment in time t an agent’s internal state is a knowledge base KBt ∈ KB
Environment states S = {s1, s2, …}
Perception: see: S → P
The agent’s internal state is updated by percepts: next: KB × P → KB
An agent can choose an action from a set A = {a1, a2, …}: action: KB → A
The effects of an agent’s actions are captured via the function do: A × S → S
54
The agent’s decision-making process is modelled through the rules of inference
KB ⊢ φ : φ can be proven from KB using the rules of inference The agent programmer has to encode the inference rules in a way that enables the agent to decide what to do
55
The control loop for a logic-based agent takes the following form
56
If Do(a) cannot be proven, find an action that is consistent with KB
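The control-loop figure itself is not reproduced in this extract; a minimal sketch of the idea, where prove, consistent and update are hypothetical placeholders for theorem proving, consistency checking and knowledge-base update:

```python
def logic_agent_loop(kb, percepts, actions, prove, consistent, update):
    """For each percept: update the KB, then try to prove Do(a) for some action a;
    if no Do(a) can be proven, fall back to an action that is consistent with the KB."""
    for percept in percepts:
        kb = update(kb, percept)                         # next: KB x P -> KB
        chosen = None
        for a in actions:
            if prove(kb, f"Do({a})"):                    # try to derive Do(a) from KB
                chosen = a
                break
        if chosen is None:                               # fallback: consistency, not proof
            chosen = next((a for a in actions if consistent(kb, f"Do({a})")), None)
        yield chosen
```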
57
The Maze World The agent’s objective is to discover the gold, pick it up and then get it to the exit at (2,2) (Figure: a 3×3 grid of squares with coordinates (0,0) to (2,2))
58
Starting position (0,0) facing East
The state of the world is described by the following predicates:
In(x,y) the agent is in the square with coordinates (x,y)
Gold(x,y) there is gold in square (x,y)
Facing(d) the agent faces d ∈ {North, South, East, West}
Perception: The agent can perceive the world by detecting whether or not there is gold in a square, gold or null respectively It can also perceive its position on the grid and its direction Possible actions A = {pick-up, forward, turn} When the agent turns, it turns 90 degrees clockwise
59
The rules of inference determine the agent’s behaviour
Rule for picking up the gold when detected: In(x,y) ∧ Gold(x,y) → Do(pick-up)
Rules to enable the agent to move around:
In(0,0) ∧ Facing(East) ∧ ¬Gold(0,0) → Do(forward)
In(0,1) ∧ Facing(East) ∧ ¬Gold(0,1) → Do(forward)
In(0,2) ∧ Facing(East) ∧ ¬Gold(0,2) → Do(turn)
In(0,2) ∧ Facing(South) ∧ ¬Gold(0,0) → Do(forward)
…
60
Advantages of logic-based approach
If there is a theory which describes the agent’s behaviour, all we have to do is execute this specification Elegant, intuitive, clear semantics
61
Disadvantages of logic-based approach
Two issues in building agents with traditional AI approach Transduction problem: how can the world be translated into a meaningful symbolic model at the right abstraction level and in time for that model to be useful (images, speech etc.) Representation problem: how to represent information in a symbolic form suitable for the agents to reason with and in time for the results of the reasoning to be useful (knowledge representation, reasoning and planning)
62
Other issues How to transform percepts into declarative statements that describe the environment precisely enough Writing down all the rules that would allow agents to operate in complex environments is unrealistic Assumes calculative rationality: the world does not change in a significant way while the agent is deliberating – not realistic Computational complexity of theorem proving is a problem. Propositional logic is decidable, but first-order logic is only semi-decidable: if a formula is not a theorem, the theorem prover may fail to terminate Representing temporal information and changes is difficult
63
Reactive architecture
The use of an internal representation and decision-making based on it is rejected ‘Smart’ behaviour is linked directly to the environment that the agent inhabits and can be generated by responding to changes The representation of the world is built into the agent’s sensory and effectory capabilities; perceptual input is mapped to actions
65
Brooks’ position The first to reject the idea of a symbolic model was Brooks ‘Real’ intelligence is situated in the world and not in disembodied systems Intelligence is an emergent property Intelligent behaviour can be generated without an explicit internal representation and without explicit reasoning, but by the interaction of simple behaviours
66
Subsumption Architecture
Brooks proposed the Subsumption Architecture Task Accomplishing Behaviours (TABs): a TAB can be a finite state machine or a rule of the form situation →action Each behaviour achieves a task and can be considered as an individual action function which takes sensory input and maps it to an action to be performed Many behaviours can ‘fire’ simultaneously Behaviours are arranged in layers: the lower the layer, the higher the priority Lower levels are usually associated with tasks/functions that are critical to the agent’s survival (obstacle avoidance etc.)
67
The Luc Steels scenario
A mission to a distant planet to collect samples of rocks and minerals (Figure: a spaceship serving as the base and a vehicle that collects the samples)
68
Each behaviour may encompass more than one situation-action rule, for example:
if detect crumb → pick up 1 crumb and travel down the gradient
if carrying samples and not at the base → drop 2 crumbs and travel up the gradient
The behaviours are arranged in a subsumption hierarchy (Figure: between sensing and acting, the behaviours are layered from obstacle avoidance and trail detection through exploration and return movement to random movement)
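A minimal sketch of subsumption-style action selection for a scenario like this one: behaviours are (condition, action) rules ordered by priority, and the first one whose condition fires determines the action. The behaviour names and conditions below are illustrative, not Steels' exact design:

```python
def subsumption_select(percept, layers):
    """layers: (condition, action) rules ordered from highest priority
    (lowest layer, e.g. obstacle avoidance) to lowest priority.
    The first behaviour whose condition fires determines the action."""
    for condition, action in layers:
        if condition(percept):
            return action
    return None

# Hypothetical layering for the sample-collecting vehicle
layers = [
    (lambda p: p.get("obstacle"),                                 "avoid_obstacle"),
    (lambda p: p.get("carrying_sample") and p.get("at_base"),     "drop_sample"),
    (lambda p: p.get("carrying_sample"),                          "travel_up_gradient"),
    (lambda p: p.get("sample_detected"),                          "pick_up_sample"),
    (lambda p: True,                                              "move_randomly"),
]
print(subsumption_select({"obstacle": False, "sample_detected": True}, layers))
# -> pick_up_sample
```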
69
Advantages of the reactive approach
Simple and elegant An agent’s behaviour is computationally tractable Very robust against failure The power lies in numbers: complex tasks can be accomplished by a group of simple reactive agents Complex behaviours emerge from the interaction of simple ones
70
Disadvantages of the reactive approach
With no model of the environment, the agents need sufficient information about their current state in order to determine an action Short-sighted and with no planning capabilities Learning is difficult to achieve Emergence or emergent behaviour is not yet fully understood and it is even more difficult to engineer Hence, difficult to build task-specific agents
71
Belief-Desire-Intention architecture
Based on Bratman’s (1987) philosophical theory of practical reasoning Theoretical reasoning: the processes of deriving knowledge or reaching conclusions using one’s beliefs and knowledge Practical reasoning: deciding what to do Deliberation: deciding what to achieve Means-end reasoning: deciding how to achieve it Beliefs: an agent’s information about the world Desires: an agent’s motivation or possible options Intentions: an agent’s commitments
72
The role of intentions Intentions are key in practical reasoning:
They describe states of affairs that the agent has committed to bringing about and as a result they are action-inducing They resist reconsideration, are volitional and reasoning-centered, and one intention leads to further intentions being generated Forming intentions is critical to an agent’s success
73
Intentions play three characteristic roles (Bratman 1987):
An agent needs to determine ways to achieve its intentions An agent’s intentions lead it to adopt or refrain from adopting further intentions An agent is interested in succeeding in its intentions and thus it keeps track of its attempts to do so
74
BDI components
75
A set of current beliefs (Beliefs)
A belief revision function brf: ℘(Beliefs) × P → ℘(Beliefs)
A set of desires (Desires)
An option generation function options: ℘(Beliefs) × ℘(Intentions) → ℘(Desires)
A set of current intentions (Intentions)
A filter function filter: ℘(Beliefs) × ℘(Desires) × ℘(Intentions) → ℘(Intentions)
An action selection function asf: ℘(Intentions) → A
76
Control loop for a BDI agent would take the form:
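The loop figure is not reproduced in this extract; a minimal sketch built from the functions on the previous slide (belief revision, option generation, filtering and action selection), with all bodies supplied by the caller as placeholders:

```python
def bdi_loop(beliefs, intentions, percepts, brf, options, filter_fn, execute):
    """Deliberation cycle: revise beliefs, generate desires (options),
    filter them into intentions, then act on the chosen intentions."""
    for percept in percepts:
        beliefs = brf(beliefs, percept)                        # belief revision
        desires = options(beliefs, intentions)                 # option generation
        intentions = filter_fn(beliefs, desires, intentions)   # deliberation
        execute(intentions)                                    # act on the intentions
```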
77
Advantages of the BDI approach
Clear and intuitive Clear functional decomposition of the agent’s subsystems The formal properties can be studied (BDI logics)
78
Disadvantages of the BDI approach
Although the decomposition of the subsystems is clear, how to efficiently implement their functionality is not Agents need to achieve a balance between commitment and reconsideration This depends on the rate of the environment change Bold and cautious agents If the rate of the environment change is low, then bold agents do better If the rate of the environment change is high, then cautious agents do better
79
Hybrid architectures Combine reactive and deliberative components and form a hierarchy of interacting layers Each layer reasons at a different level of abstraction Two types of layering: Horizontal layering Vertical layering
80
Horizontal layering Each layer can act as an independent agent
For n different behaviours n layers are implemented The layers compete with each other in order to take control of the agent; a mediator function can be introduced (Figure: layers 1 to n arranged horizontally, each receiving the sensory input and proposing action output)
81
Problems The layers’ competition for the agent’s control can cause incoherence Consistency can be achieved by introducing a function which mediates between the layers The mediator function is exponentially complex: if there are n layers each capable of suggesting m possible actions, there are m^n possible interactions The mediator function or a central control system can introduce a bottleneck into the agent’s decision making
82
TouringMachines (Figure: a horizontally layered architecture with a modelling layer, a planning layer and a reactive layer; sensory input and action output pass through a control framework implemented via control rules)
83
Reactive layer Acts as a reactive agent and responds to changes as they occur Implemented through situation-action rules There is no model of the environment in this layer Planning layer Achieves the agent’s pro-active behaviour via plans based on a library of plan skeletons or schemas
84
Modelling layer Endows the agent with reflective and predictive capabilities Entities are modelled as having a configuration, beliefs, desires and intentions Generates goals to resolve conflicts which are then propagated to the planning layer Control subsystem Decides which of the layers has control over the agent It is implemented via control rules which can either suppress sensor information between the sensors and the control layers, or else censor action outputs from the control layers
85
Vertical layering One-pass and two-pass control (Figure: layers 1 to n stacked vertically; in one-pass control sensory input enters at the lowest layer and action output is produced by the topmost layer, while in two-pass control information flows up through the layers and control then flows back down, so sensory input and action output are both handled by the lowest layer)
86
Advantages Low complexity: if there are n layers there are n−1 interfaces between them; if each layer is capable of suggesting m possible actions then there are at most m^2(n−1) interactions No central control, no bottleneck in the agent’s decision making Problems Less flexible Not fault tolerant
87
InteRRaP (Figure: a vertically layered architecture with a cooperation planning layer, a local planning layer and a behaviour-based layer on top of a world interface (WIF) handling sensory information and action output; each layer has a control unit (CU) and a corresponding knowledge base (KB) partition, the social model, mental model and world model respectively)
88
Each layer consists of two subprocesses
Situation recognition and goal activation process (SG) Planning, scheduling and execution process (PS) Two main types of interactions take place between the layers: Activation requests (bottom up) which are issued when a lower layer passes control to a higher layer. The request is issued by the PS of layer i to the SG of layer i+1 Commitment postings (top down) are sent from layer i to i-1 in order to achieve its goals. These are communicated between the PSs of the two layers
89
Agents in perspective Two views: A new software engineering paradigm
What is the fuss all about? – nothing new
90
Agents vs objects In the object-oriented approach
objects have identity, state and behaviour and communicate via messages A message is a procedure call which is executed in the context of the receiving object In the agent-oriented approach Agents have identity, state (knowledge, beliefs, desires, intentions) and behaviour (plans to achieve goals, actions, reactions to events and even roles) So are objects agents?
91
Agents exhibit autonomy, they have control over their state, execution and ultimately behaviour
Agents exhibit goal-directed, reactive and social behaviour They are persistent, self-aware and able to learn and adapt Agents have their own thread of control in a multi-agent system Control in multi-agent systems is also distributed Objects have none of these characteristics
92
Agents vs expert systems
Expert systems which started appearing in the 1970s were heralded as the first intelligent pieces of software Are they agents? Although intelligent, they have no control over their actions They only provide domain-specific information and usually do not adapt No true social ability: limited interaction with humans or other (expert) systems They are decoupled from their environment
93
Agents, Web Services and the Semantic Web
Semantic Web vision: enrich web pages with meaning so that information can be accessed by programs To facilitate automated processing on the Web and enable applications to communicate and work with each other, we need programmatic interfaces, namely web services A web service is a collection of functions that are packaged as a single entity and published on the network to be used by others Web services can be combined to offer complex functionality Users delegate their needs to agents who can then deploy web services
94
(Figure: users delegate tasks to agents; agents make use of the Semantic Web, ontologies and web services to access and work with documents, emails, databases, applications and web sites)
95
Both web services and agents encapsulate functionality, but:
Web services are static, unable to adapt Services are passive until they are invoked – once invoked the service will execute given the correct parameters Communication between services is at a low syntactic level; services can exchange data but they do not understand them; they do not ‘understand’ ontologies to resolve incompatibilities Composed functionality of web services can only be achieved when orchestrated by a third party, e.g. an agent; no social ability to coordinate with other web services or entities
96
Methodologies and languages
The reluctance to accept agents and multi-agent systems as a distinct software engineering paradigm is mainly because: The field is relatively new Lack of standard definitions and well-established and accepted methodologies for designing and developing such systems
97
Methodologies Establishing a systematic methodology for the analysis and design of agent-based applications is imperative Current methodologies can be divided into Agent-oriented methodologies (Gaia, Prometheus, Tropos, ROADMAP, MaSE) Methodologies that directly extend or adapt object-oriented methodologies (AAII, AUML) Methodologies that adapt knowledge engineering models (MAS-CommonKADS, DESIRE) or other techniques (Z language)
98
Programming languages and environments
Currently, no languages support agent-oriented programming at a high level of abstraction One of the first attempts to create a pure agent-oriented language based on a mentalistic view of agents was Shoham’s (1993) agent-oriented programming (AOP) proposal and AGENT0 Logic-based approaches to agent programming: METATEM, AgentSpeak(L), recently Jason Toolkits and environments to support the agent developer mostly based on Java: ZEUS, JADE, JACK Intelligent Agents, JATLite Other frameworks: OAA, FIPA-OS, Grasshopper, ABLE, AgentBuilder
99
From legacy systems to agents
Agentification methods: (a) transducer; (b) wrapper; (c) rewriting the system (Figure: a legacy system connected to the multi-agent world through a transducer, through a wrapper, or by being rewritten as an agent)