
1 What is Intelligence Vali Derhami Yazd University, Computer Department
HomePage:

2 What is an agent
There is no universally accepted definition of the term "agent".
Agent: a computer system that is situated in some environment, and that is capable of autonomous action in this environment in order to meet its design objectives.
The key problem facing an agent is deciding which of its actions it should perform in order to best satisfy its design objectives.

3 What is an intelligent agent
An intelligent agent is a computer system capable of flexible autonomous action in order to meet its design objectives. Flexibility means:
Reactivity
Pro-activeness
Social ability
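
Below is a minimal illustrative sketch, not from the slides, of "autonomous action to meet design objectives" as a perceive-decide-act loop; the Environment, ReflexAgent, and run names are invented for the example.

```python
# A toy agent situated in a toy environment: perceive, decide, act.
# All names here are illustrative assumptions, not the lecture's code.

class Environment:
    def __init__(self):
        self.state = {"dirty": True}

    def apply(self, action):
        if action == "clean":
            self.state["dirty"] = False

class ReflexAgent:
    def decide(self, percept):
        # The key problem: choose the action that best meets the design objectives.
        return "clean" if percept["dirty"] else "idle"

def run(agent, env, steps=2):
    for _ in range(steps):
        percept = env.state             # perceive the environment
        action = agent.decide(percept)  # decide which action to perform
        env.apply(action)               # act, changing the environment

env = Environment()
run(ReflexAgent(), env)
print(env.state)                        # {'dirty': False}
```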

4 Reactivity
Reactivity: intelligent agents are able to perceive their environment and respond in a timely fashion to changes that occur in it, in order to satisfy their design objectives.
If a program's environment is guaranteed to be fixed, the program need never worry about its own success or failure; the program just executes blindly.
Example of a fixed environment: a compiler.
The real world is not like that: things change, and information is incomplete. Many (most) interesting environments are dynamic.
Software is hard to build for dynamic domains: the program must take into account the possibility of failure, and ask itself whether it is worth executing!
A reactive system is one that maintains an ongoing interaction with its environment, and responds to changes that occur in it (in time for the response to be useful).

5 Pro-activeness
Pro-activeness: intelligent agents are able to exhibit goal-directed behavior by taking the initiative in order to satisfy their design objectives.
We generally want agents to do things for us; hence goal-directed behavior.
Two assumptions in traditional systems:
The environment does not change while a procedure is executing.
The goal, that is, the reason for executing the procedure, remains valid at least until the procedure terminates.
When there is uncertainty in the environment, these assumptions are not reasonable.
Pro-activeness = generating and attempting to achieve goals; not being driven solely by events; taking the initiative (recognizing opportunities).
The key is a balance between goal-directed and reactive behavior (see the sketch below). We want agents that attempt to achieve their goals systematically, perhaps by making use of complex, procedure-like patterns of action. But we do not want our agents to continue blindly executing these procedures either when it is clear that the procedure will not work, or when the goal is for some reason no longer valid. In such circumstances, we want the agent to react to the new situation, in time for the reaction to be of some use. However, we do not want the agent to be continually reacting, and hence never focusing on a goal long enough to actually achieve it.
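
To make the balance concrete, here is a hedged sketch, with invented names and percept keys, of a decision step that stays focused on a plan but re-checks goal validity and reacts to unexpected change rather than executing blindly.

```python
# One decision step balancing goal-directed and reactive behavior.
# All names and percept keys are illustrative assumptions.

def step(agent_state, percept):
    if not percept["goal_still_valid"]:
        return "adopt-new-goal"           # pro-active: generate a new goal
    if percept["unexpected_change"]:
        return "react"                    # reactive: respond in time to be useful
    return agent_state["plan"].pop(0)     # otherwise stay focused on the plan

state = {"plan": ["move", "grasp", "lift"]}
print(step(state, {"goal_still_valid": True,  "unexpected_change": False}))  # move
print(step(state, {"goal_still_valid": True,  "unexpected_change": True}))   # react
print(step(state, {"goal_still_valid": False, "unexpected_change": False}))  # adopt-new-goal
```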

6 Social ability
Social ability: intelligent agents are capable of interacting with other agents (and possibly humans) via some kind of agent-communication language, and perhaps cooperating with others in order to satisfy their design objectives.
The real world is a multi-agent environment: we cannot go around attempting to achieve goals without taking others into account. Some goals can only be achieved with the cooperation of others. The same holds for many computer environments: witness the Internet.
Hence, an agent must negotiate and cooperate with others, and may be required to understand and reason about the goals of other agents and to act accordingly.
Agents can also serve as a tool for understanding human societies.
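
As a toy illustration of interaction via an agent-communication language, the sketch below passes simplified structured messages between two hypothetical agents; the performative field loosely echoes languages such as KQML and FIPA ACL, and all field names and contents are assumptions.

```python
# Toy message passing between agents. Message structure is invented.

def make_message(sender, receiver, performative, content):
    return {"sender": sender, "receiver": receiver,
            "performative": performative, "content": content}

inbox = [
    make_message("agent-A", "agent-B", "request", "move(box1, room2)"),
    make_message("agent-B", "agent-A", "agree",   "move(box1, room2)"),
]

for msg in inbox:
    # Each agent decides for itself whether to honor a request (autonomy).
    print(msg["sender"], "->", msg["receiver"], ":", msg["performative"])
```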

7 Other Properties
Other properties sometimes discussed in the context of agency:
Mobility: the ability of an agent to move around an electronic network.
Veracity: an agent will not knowingly communicate false information.
Rationality: an agent will act in order to achieve its goals, and will not act in such a way as to prevent its goals being achieved, at least insofar as its beliefs permit.
Learning/adaptation: agents improve their performance over time.

8 Another view of intelligent behavior
Behavior is intelligent if it produces a better answer at the lowest cost; of course, the available resources and their limitations must also be taken into account.
Emotional behavior: a child is drowning and the father jumps into the water. Another example is a moth's encounter with a bat: a free-fall dive when the bat comes within a range of about 30 meters.
Intellectual (deliberative) behavior: a child is drowning and a boat is fetched.
Note: pay attention to the process behind intelligent behavior. Compare three-dimensional object recognition in humans and pigeons: when the object is rotated, human recognition time grows with the amount of rotation, while the pigeon's eye has higher capabilities.
Differences in the level of intelligence among animals bear no logical relationship to their brain volume; intelligence in animals has no direct relationship to their raw processing power.

9 Agents and Objects
Objects are defined as computational entities that encapsulate some state, are able to perform actions (methods) on this state, and communicate by message passing. An object:
encapsulates some state
communicates via message passing
has methods, corresponding to operations that may be performed on this state

10 Main differences: Agents and Objects
Agents are autonomous: agents embody a stronger notion of autonomy than objects; in particular, they decide for themselves whether or not to perform an action on request from another agent.
Agents are smart: capable of flexible (reactive, pro-active, social) behavior, whereas the standard object model has nothing to say about such types of behavior. An object does not exhibit this kind of control over its own behavior.
Agents are active: in the standard object model, there is a single thread of control in the system. A multi-agent system is inherently multi-threaded, in that each agent is assumed to have at least one thread of active control (see the sketch below).
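
The "active" point can be illustrated with a small sketch, assuming each agent runs its own control loop on its own thread; the agent names and loop body are invented.

```python
# Each agent owns at least one thread of control, unlike the
# single-threaded standard object model. Illustrative only.

import threading
import time

def agent_loop(name, steps=3):
    for i in range(steps):
        time.sleep(0.01)               # stand-in for perceive/decide/act
        print(f"{name}: step {i}")

threads = [threading.Thread(target=agent_loop, args=(n,))
           for n in ("agent-1", "agent-2")]
for t in threads:
    t.start()
for t in threads:
    t.join()                           # each agent ran under its own thread of control
```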

11 Objects do it for free… agents do it because they want to
agents do it for money

12 Concrete Architectures for Intelligent Agents
Four classes of agents:
Logic-based agents: decision making is realized through logical deduction.
Reactive agents: decision making is implemented as some form of direct mapping from situation to action.
Belief-desire-intention agents: decision making depends upon the manipulation of data structures representing the beliefs, desires, and intentions of the agent.
Layered architectures: decision making is realized via various software layers, each of which more or less explicitly reasons about the environment at a different level of abstraction.

13 Logic-based agents
The traditional approach to building artificially intelligent systems (known as symbolic AI) suggests that intelligent behavior can be generated in a system by giving that system a symbolic representation of its environment and its desired behavior, and syntactically manipulating this representation. Example: classical first-order predicate logic.
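
As a toy illustration of this idea, the sketch below encodes an agent's "knowledge" as symbolic facts plus Horn-style rules and selects actions by naive forward-chaining deduction; the facts, rules, and the do(...) convention are all invented for the example.

```python
# Toy logic-based action selection via forward chaining.
# Facts are strings; each rule is (set-of-premises, conclusion).

facts = {"obstacle_ahead"}
rules = [({"obstacle_ahead"}, "should_turn"),        # obstacle_ahead -> should_turn
         ({"should_turn"}, "do(change_direction)")]  # should_turn -> do(change_direction)

changed = True
while changed:                     # derive everything derivable
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

actions = [f for f in facts if f.startswith("do(")]
print(actions)                     # ['do(change_direction)']
```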

14 Reactive Architectures
There are many unsolved (some would say insoluble) problems associated with symbolic AI. In the mid-to-late 1980s, researchers began to investigate alternatives to the symbolic AI paradigm. Certain themes:
The rejection of symbolic representations, and of decision making based on syntactic manipulation of such representations;
The idea that intelligent, rational behavior is innately linked to the environment an agent occupies: intelligent behavior is not disembodied, but is a product of the interaction the agent maintains with its environment;
The idea that intelligent behavior emerges from the interaction of various simpler behaviors.

15 Reactive Architectures (Cont.)
Such approaches are referred to as:
Behavioral: a common theme is developing and combining individual behaviors;
Situated: a common theme is that of agents actually situated in some environment, rather than being disembodied from it;
Reactive: such systems are often perceived as simply reacting to an environment, without reasoning about it.
Subsumption architecture: the best-known reactive agent architecture, developed by Rodney Brooks.

16 Brooks – behavior languages
Brooks has put forward three theses:
Intelligent behavior can be generated without explicit representations of the kind that symbolic AI proposes.
Intelligent behavior can be generated without explicit abstract reasoning of the kind that symbolic AI proposes.
Intelligence is an emergent property of certain complex systems.

17 Brooks – behavior languages
To illustrate his ideas, Brooks built several robots based on his subsumption architecture.
A subsumption architecture is a hierarchy of task-accomplishing behaviors.
Each behavior is a rather simple rule-like structure.
Each behavior 'competes' with others to exercise control over the agent.
Lower layers represent more primitive kinds of behavior (such as avoiding obstacles) and have precedence over layers further up the hierarchy.
Layers interact through suppression and inhibition:
Inhibition acts on a module's output: it cuts the output off for a specified time, so that no signal passes.
Suppression cuts off the corresponding signal and substitutes another signal for the specified time.
The resulting systems are, in terms of the amount of computation they do, extremely simple.
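
A minimal sketch of the resulting decision loop, assuming invented behavior names and a dictionary-style percept: lower layers are tried first, and a layer that fires effectively suppresses everything above it.

```python
# Toy subsumption hierarchy: layers ordered from lowest (strongest) upward.

def avoid_obstacles(percept):        # level 0: most primitive, highest precedence
    if percept.get("obstacle"):
        return "change_direction"
    return None

def wander(percept):                 # level 1: fires only if level 0 is silent
    return "move_randomly"

layers = [avoid_obstacles, wander]

def subsumption_decide(percept):
    for behavior in layers:          # the first firing layer suppresses the rest
        action = behavior(percept)
        if action is not None:
            return action

print(subsumption_decide({"obstacle": True}))   # change_direction
print(subsumption_decide({}))                   # move_randomly
```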

18 A Traditional Decomposition of a Mobile Robot Control System into Functional Modules
From Brooks, “A Robust Layered Control System for a Mobile Robot”, 1985

19 Example and Exercises
Example: find behaviors for a mobile robot in an environment with obstacles: moving forward, random walk.
Ex. 1: design a subsumption architecture for "cup keeping and reputation".
Download and read Brooks's paper: "A Robust Layered Control System for a Mobile Robot", 1985.
Provide a report in slide form (PowerPoint) on "Belief-Desire-Intention Architectures" (book: Multiagent Systems by Gerhard Weiss, page ). Pages 8-9 of the lecture.

20 A Decomposition of a Mobile Robot Control System Based on Task Achieving Behaviors
From Brooks, “A Robust Layered Control System for a Mobile Robot”, 1985

21 Layered Control in the Subsumption Architecture
From Brooks, “A Robust Layered Control System for a Mobile Robot”, 1985

22 Example of a Module – Avoid
From Brooks, “A Robust Layered Control System for a Mobile Robot”, 1985

23 Schematic of a Module
From Brooks, “A Robust Layered Control System for a Mobile Robot”, 1985

24 Levels 0, 1, and 2 Control Systems
From Brooks, “A Robust Layered Control System for a Mobile Robot”, 1985

25 Steels’ Mars Explorer
Steels’ Mars explorer system, using the subsumption architecture, achieves near-optimal cooperative performance in a simulated ‘rock gathering on Mars’ domain: the objective is to explore a distant planet and, in particular, to collect samples of a precious rock. The locations of the samples are not known in advance, but it is known that they tend to be clustered.

26 Steels’ Mars Explorer Rules
For individual (non-cooperative) agents, the lowest-level behavior (and hence the behavior with the highest priority) is obstacle avoidance:
(1) if detect an obstacle then change direction
Any samples carried by agents are dropped back at the mother-ship:
(2) if carrying samples and at the base then drop samples
Agents carrying samples will return to the mother-ship:
(3) if carrying samples and not at the base then travel up gradient

27 Steels’ Mars Explorer Rules
Agents will collect samples they find: if detect a sample then pick sample up (4) An agent with “nothing better to do” will explore randomly: if true then move randomly (5)
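
The five rules above map naturally onto a priority-ordered decision function. Here is a minimal sketch, with invented percept keys, where rule (1) has the highest priority and rule (5) fires only when nothing else does.

```python
# Steels' Mars explorer rules (1)-(5) as a priority-ordered decision function.
# The percept encoding is an illustrative assumption.

def mars_explorer_decide(p):
    if p["detect_obstacle"]:                          # (1) avoid obstacles
        return "change_direction"
    if p["carrying_samples"] and p["at_base"]:        # (2) drop samples at base
        return "drop_samples"
    if p["carrying_samples"] and not p["at_base"]:    # (3) head back to the ship
        return "travel_up_gradient"
    if p["detect_sample"]:                            # (4) collect samples
        return "pick_up_sample"
    return "move_randomly"                            # (5) nothing better to do

percept = {"detect_obstacle": False, "carrying_samples": True,
           "at_base": False, "detect_sample": False}
print(mars_explorer_decide(percept))                  # travel_up_gradient
```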

28 Advantages of Reactive Agents
Simplicity
Economy
Computational tractability
Robustness against failure
Elegance

29 Limitations of Reactive Agents
Agents without environment models must have sufficient information available from their local environment.
If decisions are based only on the local environment, how can the agent take non-local information into account? (It has a “short-term” view.)
It is difficult to make reactive agents that learn.
Since behavior emerges from component interactions plus the environment, it is hard to see how to engineer specific agents (no principled methodology exists).
It is hard to engineer agents with large numbers of behaviors (the dynamics of the interactions become too complex to understand).

30 Hybrid Architectures
Many researchers have argued that neither a completely deliberative nor a completely reactive approach is suitable for building agents. They have suggested using hybrid systems, which attempt to marry the classical and alternative approaches.
An obvious approach is to build an agent out of two (or more) subsystems:
a deliberative one, containing a symbolic world model, which develops plans and makes decisions in the way proposed by symbolic AI;
a reactive one, which is capable of reacting to events without complex reasoning.

31 Hybrid Architectures
Often, the reactive component is given some kind of precedence over the deliberative one.
This kind of structuring leads naturally to the idea of a layered architecture, of which TOURINGMACHINES and INTERRAP are examples.
In such an architecture, an agent’s control subsystems are arranged into a hierarchy, with higher layers dealing with information at increasing levels of abstraction.

32 Hybrid Architectures
A key problem in such architectures is what kind of control framework to embed the agent’s subsystems in, to manage the interactions between the various layers.
Horizontal layering: the layers are each directly connected to the sensory input and action output. In effect, each layer itself acts like an agent, producing suggestions as to what action to perform.
Vertical layering: sensory input and action output are each dealt with by at most one layer.

33 Hybrid Architectures: Comparing the Layerings
Suppose there are n layers, each suggesting m possible actions.
Horizontal layering: the layers’ suggestions can interact in up to m^n ways, so a central control system is needed to mediate between them. This introduces a bottleneck into the agent’s decision making.
Vertical layering: control passes sequentially through the layers, giving only m^2(n-1) interactions between adjacent layers, but the architecture is not fault tolerant to layer failure: if any one layer fails, the whole control loop breaks.
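
As a worked example of these counts: with n = 3 layers each suggesting m = 3 possible actions, horizontal layering must mediate up to 3^3 = 27 combinations of suggestions, while vertical layering involves only 3^2 * (3 - 1) = 18 interactions between adjacent layers.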

34 Ferguson – TOURINGMACHINES
The TOURINGMACHINES architecture consists of perception and action subsystems, which interface directly with the agent’s environment, and three control layers, embedded in a control framework, which mediates between the layers

35 Ferguson – TOURINGMACHINES

36 Ferguson – TOURINGMACHINES
The reactive layer is implemented as a set of situation-action rules, a la the subsumption architecture. Example:
rule-1: kerb-avoidance
    if is-in-front(Kerb, Observer) and
       speed(Observer) > 0 and
       separation(Kerb, Observer) < KerbThreshHold
    then change-orientation(KerbAvoidanceAngle)
The planning layer constructs plans and selects actions to execute in order to achieve the agent’s goals.
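
Rendered as a hedged Python sketch, the same situation-action rule might look as follows; the threshold value and the dictionary layout of the percepts are invented for illustration.

```python
# The kerb-avoidance rule above as a Python situation-action rule.

KERB_THRESHOLD = 2.0   # metres; illustrative value, not from the slides

def rule_kerb_avoidance(observer, kerb):
    # if is-in-front(Kerb, Observer) and speed(Observer) > 0
    #    and separation(Kerb, Observer) < KerbThreshHold
    if kerb["in_front"] and observer["speed"] > 0 \
            and kerb["separation"] < KERB_THRESHOLD:
        return ("change-orientation", "KerbAvoidanceAngle")
    return None

print(rule_kerb_avoidance({"speed": 1.2},
                          {"in_front": True, "separation": 0.8}))
```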

37 Ferguson – TOURINGMACHINES
The modeling layer contains symbolic representations of the ‘cognitive state’ of other entities in the agent’s environment.
The three layers communicate with each other and are embedded in a control framework, which uses control rules. Example:
censor-rule-1:
    if entity(obstacle-6) in perception-buffer
    then remove-sensory-record(layer-R, entity(obstacle-6))
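
A sketch of how such a censor rule might filter what one layer sees, with the data structures invented for illustration: the control framework inspects the perception buffer and removes a sensory record before it reaches the reactive layer.

```python
# Toy censor rule: hide a specific percept from one layer's input.

def censor_rule_1(perception_buffer, layer_inputs):
    if "entity(obstacle-6)" in perception_buffer:
        # remove-sensory-record(layer-R, entity(obstacle-6))
        layer_inputs["layer-R"].discard("entity(obstacle-6)")

buffer = {"entity(obstacle-6)", "entity(kerb-1)"}
inputs = {"layer-R": set(buffer)}
censor_rule_1(buffer, inputs)
print(inputs["layer-R"])   # obstacle-6 is hidden from the reactive layer
```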

