Introduction to Artificial Intelligence (AI) (GATE-561)


Introduction to Artificial Intelligence (AI) (GATE-561) Dr. Çağatay ÜNDEĞER Instructor, Middle East Technical University Game Technologies & Bilkent University Computer Engineering; General Manager, SimBT Inc. e-mail: cagatay@undeger.com Game Technologies Program – Middle East Technical University – Fall 2009

Outline Description of AI History of AI Behavior and Agent Types Properties of Environments Problem Solving Well-Known Search Strategies

Artificial Intelligence in Games Artificial intelligence (AI) in games deals with creating synthetic characters (agents, animats, bots) with realistic behaviors. These are autonomous creatures with an artificial body, situated in a virtual world. The job of AI developers is to give them unique skills and capabilities so that they can interact with their environment intelligently and realistically. * Animat stands for an artificial animal, robot, or virtual simulation. * Bot stands for a computer program that runs automatically, or a robot.

Description of AI Definitions are divided into 2 broad categories: Success in terms of human performance, The ideal concept of intelligence, which is called rationality. A system is said to be rational: If it does the right thing, If it is capable of planning and executing the right task at the right time.

Description of AI Systems that think like humans. Bellman, 1978: The automation of activities that we associate with human thinking, activities such as decision-making, problem solving, learning, ... Haugeland, 1985: The exciting new effort to make computers think... machines with minds, in the full and literal sense.

Description of AI Systems that act like humans. Kurzweil, 1990: The art of creating machines that perform functions that require intelligence when performed by people. Rich and Knight, 1991: The study of how to make computers do things at which, at the moment, people are better. Artificial Intelligence and Soft Computing, 2000: The simulation of human intelligence on a machine, so as to make the machine efficient at identifying and using the right piece of knowledge at a given step of solving a problem.

Description of AI Systems that think rationally. Charniak and McDermott, 1985: The study of mental faculties through the use of computational models. Winston, 1992: The study of computations that make it possible to perceive, reason and act. The basis is formal logic; however, 100% certain knowledge is impossible to find!

Description of AI Systems that act rationally. Schalkoff, 1990: A field of study that seeks to explain and emulate intelligent behavior in terms of computational processes. Luger and Stubblefield, 1993: The branch of computer science that is concerned with the automation of intelligent behavior. Widely accepted today. Deals with making correct inferences, but acting rationally need not be theoretically perfect all the time: sometimes there is no provably correct thing to do, yet you still need to do something under uncertainty.

History of AI The Classical Period The Romantic Period The Modern Period

The Classical Period Dates back to the 1950s. Involves intelligent search problems in game playing and theorem proving. The state space approach came up in this period.

The Romantic Period From the mid 1960s to the mid 1970s. Interested in making machines "understand", by which is usually meant the understanding of natural language. Knowledge representation techniques such as semantic nets, frames and so on were developed.

The Modern Period From the mid 1970s to today. Focused on real-world problems. Devoted to solving more complex problems of practical interest. Expert systems came up. Theoretical research was performed on heuristic search, uncertainty modeling, and so on.

Intelligent Agents An agent is anything that can be viewed as perceiving its environment through sensors and acting on that environment through effectors. We will build agents that act successfully on their environment.

Agent Program In each step (cycle / iteration / frame): Gets the percepts from the environment, [ Builds a percept sequence in memory ], Maps this percept sequence to actions. How we do the mapping from percepts to actions determines the success of our agent program.
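
The cycle above can be sketched in Python. The table-driven mapping, the toy percepts, and the action names below are illustrative assumptions, not part of the slides:

```python
# Sketch of the agent program cycle: sense, extend the percept sequence in
# memory, then map the whole sequence to an action via a lookup table.

def make_table_agent(table, default_action):
    """Build an agent program whose percept-to-action mapping is a table."""
    percept_sequence = []  # built up in memory, one percept per step

    def agent_program(percept):
        percept_sequence.append(percept)
        # Map the full percept history to an action; fall back if unseen.
        return table.get(tuple(percept_sequence), default_action)

    return agent_program

# A tiny table for a two-step toy world.
table = {
    ("dirty",): "suck",
    ("dirty", "clean"): "move-right",
}
agent = make_table_agent(table, default_action="no-op")
print(agent("dirty"))   # suck
print(agent("clean"))   # move-right
```

How this mapping is realized (table vs. algorithm) is exactly what determines the success of the agent program; tables grow intractably with the length of the percept sequence, which is why real agent programs replace them with algorithms.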

Percepts to Actions The mapping from percept sequences (p1, p2, p3, ...) to actions (a1, a2, a3, ...) can be realized in many ways; for instance, as a 35^100-entry table for playing chess. The mapping can be: just an ordinary table, or a complex algorithm: compile-time code, a run-time script with fixed commands, or a run-time script with extendable commands that call compile-time code.

Behavior Types Reactive behaviors: A simple decision is taken immediately, depending on the developing situation (lying down when someone is shooting at you, ...). Deliberative behaviors: A complex decision is taken over a longer time by reasoning deeply (evaluation, planning, conflict resolution, ...). Learning behaviors: Learning how to act by observation and trial and error.

Agent Types Reactive agents: Simple reflex agents Reflex agents with internal state Deliberative agents: Goal–based agents Utility-based agents

Simple Reflex Agents In each step: Gets a set of percepts (current state): learns "what the world I sense now is like". Performs rule matching on the current state and takes a decision without memorizing the past: answers "what action should I do now" using condition-action rules. Performs the selected action(s): changes the environment.

Simple Reflex Agents Rules of a traffic simulation may look like: If the distance between me and the car in front of me is large then Accelerate. If the distance is normal then Keep state. If the distance is small then Decelerate. If the distance is large and the car in front of me brakes then ... If the distance is normal and the car in front of me brakes then Brake. ...
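
As a sketch, condition-action rules like the traffic rules above can be coded as a list of (condition, action) pairs scanned in order; the percept encoding and the exact rule set here are simplified assumptions:

```python
# Simple reflex agent for a traffic simulation: the first matching rule
# fires. More specific rules (car in front braking) come before general ones.

RULES = [
    (lambda p: p["distance"] == "normal" and p["front_brakes"], "brake"),
    (lambda p: p["distance"] == "large", "accelerate"),
    (lambda p: p["distance"] == "normal", "keep-state"),
    (lambda p: p["distance"] == "small", "decelerate"),
]

def simple_reflex_agent(percept):
    for condition, action in RULES:
        if condition(percept):
            return action
    return "no-op"  # no rule matched

print(simple_reflex_agent({"distance": "large", "front_brakes": False}))   # accelerate
print(simple_reflex_agent({"distance": "normal", "front_brakes": True}))   # brake
```

Note the agent decides from the current percept alone: it has no memory of past percepts.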

Reflex Agents with Int. State In each step: Gets a set of percepts (current state): learns "what the world I sense now is like". Integrates the past state into the current state to get an enriched current state (internal state): keeps track of the world and estimates "what the world is like now". Performs rule matching on the current state and takes a decision: answers "what action should I do now" using condition-action rules. Performs the selected action(s): changes the environment.

Reflex Agents with Int. State If there are enemies then: Select the nearest enemy, Look at the selected enemy, Point the gun at the selected enemy, If the gun is pointing at the enemy, fire. (Figure: sensing region around you, with current, recent and past detections.)
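
A sketch of the targeting behavior above as a reflex agent with internal state; the class name, the coordinate scheme, and the action strings are illustrative assumptions:

```python
import math

# Reflex agent with internal state: it remembers where enemies were last
# seen, so it can keep engaging a target that left the sensing region.

class TargetingAgent:
    def __init__(self, position):
        self.position = position
        self.known_enemies = {}  # internal state: enemy id -> last seen position

    def step(self, visible_enemies):
        # Integrate the new percept into the internal state.
        self.known_enemies.update(visible_enemies)
        if not self.known_enemies:
            return "idle"
        # Select the nearest known enemy, aim and fire.
        nearest = min(self.known_enemies.values(),
                      key=lambda pos: math.dist(self.position, pos))
        return f"fire-at {nearest}"

agent = TargetingAgent(position=(0, 0))
print(agent.step({"e1": (3, 4), "e2": (10, 0)}))   # fire-at (3, 4)
print(agent.step({}))                              # fire-at (3, 4) -- remembered
```

The second call shows the point of internal state: the percept is empty, but the agent still acts on its memory of past detections.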

Goal-based Agents Knowing the current state of the environment is not always enough to decide correctly. You may also be given a goal to achieve. It may not be easy, or even possible, to reach that goal by just reactive decisions.

Goal-based Agents For instance, you will not be able to get to a specified address in a city by just performing simple reactive behaviors, since you may enter a dead-lock situation. Planning towards a desired state (goal) may be inevitable. (Figure: an agent looping forever without reaching its goal.)

Goal-based Agents In each step: Gets a set of percepts (current state): learns "what the world I sense now is like". Integrates the past state into the current state to get an enriched current state (internal state): keeps track of the world and estimates "what the world is like now". Examines actions and their possible outcomes: answers "what the world will be like in the future if I do A, B, C, ...". Selects the action which may achieve the goal: answers "which action/state will bring me closer to my goal". Performs the selected action(s): changes the environment.

Goal-based Agents Plan a path towards the target (goal) location along a safe route. (Figure: safe path vs. shortest path to the goal, with current, recent and past detections.)

Utility-based Agents Goals alone are not enough to generate high-quality behavior. There are many action sequences that will take you to the goal state, but some will give better utility than others: safer, shorter, more reliable, cheaper, ... A general performance measure for computing utility is required to decide among the alternatives.

Utility-based Agents In each step: Gets a set of percepts (current state): learns "what the world I sense now is like". Integrates the past state into the current state to get an enriched current state (internal state): keeps track of the world and estimates "what the world is like now". Examines actions and their possible outcomes: answers "what the world will be like in the future if I do A, B, C, ...". Selects the action which may achieve the goal and maximize the utility: answers "which action/state will lead me to my goal with higher utility". Performs the selected action(s): changes the environment.
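
The selection step can be sketched as follows; the outcome model and the utility weights are arbitrary assumptions made for the demo, not values from the slides:

```python
# Utility-based selection: among actions predicted to reach the goal, pick
# the one whose predicted outcome maximizes a utility function.

def utility(outcome):
    # Trade safety off against path length (weights chosen for illustration).
    return 10 * outcome["safety"] - outcome["length"]

def choose_action(actions, predict):
    reaches = [a for a in actions if predict(a)["reaches_goal"]]
    return max(reaches, key=lambda a: utility(predict(a)))

outcomes = {
    "shortest-path": {"reaches_goal": True,  "safety": 0.2, "length": 10},
    "safe-path":     {"reaches_goal": True,  "safety": 0.9, "length": 14},
    "wander":        {"reaches_goal": False, "safety": 1.0, "length": 0},
}
print(choose_action(list(outcomes), lambda a: outcomes[a]))   # safe-path
```

A goal-based agent would be indifferent between the two goal-reaching paths; the utility function is what breaks the tie in favor of the safer one.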

Utility-based Agents Plan a path towards the target (goal) location along a safe, but also short route. (Figure: shorter safe path vs. longer safe path vs. shortest path to the goal, with current, recent and past detections.)

Properties of Environments Accessible: Sensors give the complete state of the environment (e.g. chess). Inaccessible: Sensors give only a partial state of the environment.

Properties of Environments Deterministic: The next state is determined by just the current state and the actions of the agents. Non-deterministic: There is uncertainty; the next state is not determined by the current state and the actions of the agents alone.

Properties of Environments Episodic: The environment is divided into episodes; the next episode starts when the current one is completed. Non-episodic: The environment is defined as a single large set.

Properties of Environments Static: The environment cannot change while the agent is deliberating and acting. Dynamic: The environment can change while the agent is deliberating and acting.

Properties of Environments Discrete: There are a limited number of distinct, clearly defined percepts and actions (e.g. chess). Continuous: There are an infinite number of percepts and/or actions (velocity, position, ...).

Problem Solving AI algorithms deal with "states" and sets of states called "state spaces". A state is one of the feasible steps towards a solution for a problem and its problem solver. So a solution to a problem is a sequential and/or parallel collection of problem states.

Problem Solving Strategies The problem solver applies an operator to a state to get the next state, performing a search; this is called the "state space approach". An operator determines the next state and performs the transition (expands the next state). This continues until the goal state is expanded. (Figure: feasible vs. infeasible next states.)

4-Puzzle Problem A 4-cell board: 3 cells filled, 1 cell blank. Initial state: a random arrangement of the cells and the blank. Final state: a fixed arrangement of the cells and the blank. (Figure: initial and final (goal) board configurations.)

4-Puzzle Problem 4 operations (actions): Blank-up, Blank-down, Blank-left, Blank-right. (Figure: initial and final (goal) board configurations.)
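
The four blank-moving operators define the state space of the 2x2 board, which is small enough to search exhaustively. A minimal breadth-first sketch, where the start and goal boards are illustrative choices (a state is a tuple of 4 cells read row by row, with 0 marking the blank):

```python
from collections import deque

# Breadth-first state-space search for the 4-puzzle (2x2 board).

MOVES = {"blank-up": -2, "blank-down": 2, "blank-left": -1, "blank-right": 1}

def successors(state):
    b = state.index(0)  # position of the blank
    for name, delta in MOVES.items():
        t = b + delta
        if t < 0 or t > 3:
            continue
        if delta in (-1, 1) and b // 2 != t // 2:
            continue  # horizontal moves must stay within one row
        s = list(state)
        s[b], s[t] = s[t], s[b]
        yield name, tuple(s)

def bfs(start, goal):
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path  # shortest operator sequence
        for move, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [move]))
    return None  # goal not reachable from start

print(bfs((1, 3, 0, 2), (0, 1, 2, 3)))   # ['blank-right', 'blank-up', 'blank-left']
```

Note that only half of the 24 possible 2x2 arrangements are reachable from any given start, so `bfs` can legitimately return `None`.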

4-Puzzle Problem (Figure: the state space for the 4-puzzle, showing the board configurations reachable from the initial state.)

Common Search Strategies Generate and Test Heuristic Search Hill-Climbing

Generate and Test From a known starting state (root), generation of the state space continues, expanding the reasoning space, until a goal node (terminal state) is reached. In case there are multiple paths leading to the goal state, the path having the smallest cost from the root is preferred.

Heuristic Search From a known starting state (root), generation of the state space continues, expanding the reasoning space, until a goal node (terminal state) is reached. One or more heuristic functions are used to determine the better candidate states among the legal states; heuristics give a measure of fitness. The difficult task: heuristics are selected intuitively.
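
One way to sketch this is greedy best-first search, which always expands the frontier state the heuristic scores best. The toy problem (reach 7 on a number line with steps +1 and +3) and the heuristic h(n) = |goal - n| are illustrative assumptions:

```python
import heapq

# Greedy best-first search: a priority queue ordered by the heuristic value.

def heuristic_search(start, goal, step_ops, h):
    frontier = [(h(start), start, [])]   # (fitness, state, path so far)
    seen = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)  # best candidate first
        if state == goal:
            return path
        for op in step_ops:
            nxt = state + op
            if nxt <= goal and nxt not in seen:   # keep only legal states
                seen.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt, path + [op]))
    return None

path = heuristic_search(0, 7, step_ops=[1, 3], h=lambda n: abs(7 - n))
print(path)   # [3, 3, 1]
```

The heuristic steers the search straight toward the goal; with a bad heuristic the same loop can wander, which is why choosing heuristics well is the difficult part.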

Hill-Climbing From a known or random starting state: Measure/estimate the total cost from the current state to the goal state. If there exists a neighbor state having lower cost, move to that state. Repeat until the goal node (terminal state) is reached. If trapped at a hillock (local extremum), select a random starting state and restart.
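
The loop above can be sketched as follows. The cost function (with one local and one global minimum) and the fixed restart points are illustrative assumptions; a real agent would restart from random states:

```python
# Hill-climbing: repeatedly move to the lowest-cost neighbor; stop when no
# neighbor improves (a local extremum). Restarting escapes local extrema.

def hill_climb(cost, neighbors, start):
    state = start
    while True:
        downhill = [n for n in neighbors(state) if cost(n) < cost(state)]
        if not downhill:
            return state          # trapped at a local (or global) minimum
        state = min(downhill, key=cost)

def hill_climb_restarts(cost, neighbors, starts):
    # Climb from each restart point and keep the best local optimum found.
    return min((hill_climb(cost, neighbors, s) for s in starts), key=cost)

# Local minimum near x = -5 (cost 4), global minimum at x = 5 (cost 0).
cost = lambda x: min((x + 5) ** 2 + 4, (x - 5) ** 2)
steps = lambda x: [x - 1, x + 1]

print(hill_climb(cost, steps, start=-9))             # -5 (stuck at a local minimum)
print(hill_climb_restarts(cost, steps, [-9, 2, 8]))  # 5  (a restart escapes it)
```

The single climb from -9 gets trapped at the local minimum; the restarts let the search find the global minimum at 5.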