Wumpus World Code
There is extra information stored to facilitate modification.


main.pro

    % Write a string and a list, followed by a newline. Fairly crude IO.
    format(S,L) :- write(S), write(L), nl.

    % GAME begins here.
    play :-
        initialize_general,                        % clear the database
        format("the game is begun.",[]),
        description_total,
        retractall(is_situation(_,_,_,_,_)),       % clear the database
        time(T), agent_location(L), agent_orientation(O),
        assert(is_situation(T,L,O,[],i_know_nothing)),
        write("I'm conquering the World Ah!Ah!..."), nl,
        step.
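As a sketch of how a run would start, assuming the eight source files named on these slides sit in the working directory of a standard Prolog system (note that the format/2 above shadows a standard built-in of the same name, which some systems will object to):

    ?- ['main.pro','decl.pro','perc.pro','disp.pro',
       'tell.pro','exec.pro','ask.pro','more.pro'].
    ?- play.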

    % One move for a healthy agent who is in the cave.
    step :-
        agent_healthy,
        agent_in_cave,
        is_nb_visited,                     % count the number of rooms visited
        agent_location(L),
        retractall(is_visited(L)),
        assert(is_visited(L)),
        description,                       % display a short summary of my state
        make_percept_sentence(Percept),    % I perceive...
        format("I feel ",[Percept]),
        tell_KB(Percept),                  % I learn... (infer)
        ask_KB(Action),                    % ask the knowledge base what to do...
        format("I'm doing : ",[Action]),
        apply(Action),                     % ...AND do it
        short_goal(SG),                    % look up the goal of my current action
        time(T),                           % time update
        New_T is T+1,
        retractall(time(_)),
        assert(time(New_T)),
        agent_orientation(O),
        assert(is_situation(New_T,L,O,Percept,SG)),
        % We keep in memory, for checking:
        % time, agent_location, agent_orientation, perception, short_goal.
        step, !.

    % Final move if dead or out of the cave.
    % NOTE: we get here if the first version of step cannot be satisfied.
    step :-
        format("the game is finished.~n",[]),   % either dead or out of the cave
        agent_score(S),
        time(T),
        New_S is S - T,
        retractall(agent_score(_)),             % note how we update global variables
        assert(agent_score(New_S)),
        description_total,                      % prints a summary
        the_end(MARK),
        display(MARK).

decl.pro

    initialize_land(map0) :-
        retractall(land_extent(_)),
        retractall(wumpus_location(_)),
        retractall(wumpus_healthy),
        retractall(gold_location(_)),
        retractall(pit_location(_)),
        assert(land_extent(5)),
        assert(wumpus_location([3,2])),
        assert(wumpus_healthy),
        assert(gold_location([2,3])),
        assert(pit_location([3,3])),
        assert(pit_location([4,4])),
        assert(pit_location([3,1])).
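Because a map is just a bundle of facts selected by name, adding a new one is only a matter of writing another clause following the same pattern. A hypothetical example (the locations here are made up):

    % Hypothetical second map, same pattern as map0.
    initialize_land(map1) :-
        retractall(land_extent(_)),
        retractall(wumpus_location(_)),
        retractall(wumpus_healthy),
        retractall(gold_location(_)),
        retractall(pit_location(_)),
        assert(land_extent(4)),
        assert(wumpus_location([1,3])),
        assert(wumpus_healthy),
        assert(gold_location([4,2])),
        assert(pit_location([3,3])).

initialize_general would then call initialize_land(map1) instead of initialize_land(map0).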

Initialize the agent:

    initialize_agent(agent0) :-
        retractall(agent_location(_)),
        …
        assert(agent_location([1,1])),
        assert(agent_orientation(0)),
        assert(agent_healthy),
        assert(agent_arrows(1)),
        assert(agent_goal(find_out)),
        assert(agent_score(0)),
        assert(agent_in_cave).

Other initialization:

    initialize_general :-
        initialize_land(map0),        % NOTE: which map you wish
        initialize_agent(agent0),
        retractall(time(_)),
        assert(time(0)),
        retractall(nb_visited(_)),
        assert(nb_visited(0)),
        retractall(score_agent_dead(_)),
        assert(score_agent_dead(10000)),
        retractall(score_climb_with_gold(_)),
        assert(score_climb_with_gold(1000)),
        retractall(score_grab(_)),
        assert(score_grab(0)),
        retractall(score_wumpus_dead(_)),
        assert(score_wumpus_dead(0)),
        retractall(is_situation(_,_,_,_,_)),
        retractall(short_goal(_)).

perc.pro

    make_percept_sentence([Stench,Breeze,Glitter,Bump,Scream]) :-
        stenchy(Stench),
        breezy(Breeze),
        glittering(Glitter),
        bumped(Bump),
        heardscream(Scream).

    stenchy(yes) :-
        wumpus_location(L1),
        agent_location(L2),
        adjacent(L1,L2), !.
    stenchy(no).

The system knows the true wumpus location: if it is adjacent to where you are, the first clause succeeds with yes; otherwise it fails and the second clause answers no. This is a check against the world state, not the agent's KB.
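adjacent/2 is not shown on these slides. A minimal sketch, assuming the 4-connected grid implied by location_toward/3 (shown later), would be:

    % Assumed definition: two rooms are adjacent if one is reachable
    % from the other by a single step in some orientation.
    adjacent(L1,L2) :- location_toward(L1,_,L2).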

Similar check for glittering:

    glittering(yes) :-
        agent_location(L),     % looking up agent_location
        gold_location(L), !.
    glittering(no).

disp.pro (display): displays state info.

    agent_healthy_state(perfect_health) :-
        agent_healthy,
        agent_courage, !.
    agent_healthy_state(a_little_tired_but_alive) :-
        agent_healthy, !.
    agent_healthy_state(dead).

The display code performs a series of checks like these to describe the state.

    the_end('=) Pfftt too easy') :-
        no(agent_in_cave),
        agent_hold,
        no(is_dead), !.

Called from the final step clause by: the_end(MARK), display(MARK). display/1 does the IO, and can convert terms from infix to prefix notation.
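The "infix to prefix" remark refers to the behavior of display/1 in Edinburgh-style Prologs, which writes a term while ignoring operator declarations. For example:

    ?- display(1+2*3).
    +(1,*(2,3))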

    % A location is estimated thanks to... good, medium, risky, deadly.
    good(L) :-                  % a wall can be a good room !!!
        is_wumpus(no,L),
        is_pit(no,L),
        no(is_visited(L)).
    medium(L) :-                % obviously, if is_visited(L) then
        is_visited(L).          % is_wumpus(no,L) and is_pit(no,L)
    risky(L) :-
        no(deadly(L)).
    deadly(L) :-
        is_wumpus(yes,L),
        is_pit(yes,L),
        no(is_visited(L)).

From more.pro. I am not sure whether a wall is meant to count as a good room, or whether it just happens.
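For example, after the first tell_KB from [1,1] with no stench and no breeze, is_wumpus(no,[2,1]) will have been asserted for each neighbor, and the same for is_pit (assuming add_pit_KB mirrors add_wumpus_KB, which is not shown). Since [2,1] has not been visited yet:

    ?- good([2,1]).
    true.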

agent_courage seems to reflect time spent searching:

    agent_courage :-            % we could compute nb_visited / max_room_to_visit
        time(T),                % time
        nb_visited(N),          % number of visited rooms
        land_extent(LE),        % size of the land
        E is LE * LE,           % maximum number of rooms to visit
        NPLUSE is E * 2,
        less_equal(T,NPLUSE).
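less_equal/2 is another helper that is not shown on these slides; presumably it is just a thin wrapper around the arithmetic comparison:

    % Assumed definition.
    less_equal(X,Y) :- X =< Y.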

Computes the next location from the current location and orientation:

    location_toward([X,Y],0,[New_X,Y]) :- New_X is X+1.
    location_toward([X,Y],90,[X,New_Y]) :- New_Y is Y+1.
    location_toward([X,Y],180,[New_X,Y]) :- New_X is X-1.
    location_toward([X,Y],270,[X,New_Y]) :- New_Y is Y-1.
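For example:

    ?- location_toward([2,3],90,L).
    L = [2,4].

    ?- location_toward([2,3],270,L).
    L = [2,2].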

Note the use of !, fail to prohibit trying other choices:

    no(P) :- P, !, fail.
    no(P).
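This is the standard negation-as-failure idiom: no(P) succeeds exactly when the goal P fails. With map0 loaded, for instance:

    ?- no(pit_location([1,1])).   % no pit fact for [1,1], so this succeeds
    true.

    ?- no(pit_location([3,3])).   % there is a pit at [3,3], so this fails
    false.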

tell.pro: updates the KB.

    tell_KB([Stench,Breeze,Glitter,yes,Scream]) :-
        add_wall_KB(yes), !.
    tell_KB([Stench,Breeze,Glitter,Bump,Scream]) :-
        % agent_location(L),    % updating only if unknown could be great,
        % no(is_visited(L)),    % but the wumpus dying changes the percept
        add_wumpus_KB(Stench),
        add_pit_KB(Breeze),
        add_gold_KB(Glitter),
        add_scream_KB(Scream).

This allows the option of having walls other than at the borders.
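A percept is a five-element yes/no list in the order stench, breeze, glitter, bump, scream. For example, standing on the gold with no danger nearby, the agent would tell:

    ?- tell_KB([no,no,yes,no,no]).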

There was no stench:

    add_wumpus_KB(no) :-
        agent_location(L1),
        assume_wumpus(no,L1),        % I'm not in a wumpus place
        location_toward(L1,0,L2),    % I'm sure there is no wumpus in
        assume_wumpus(no,L2),        % each adjacent room. >=P
        location_toward(L1,90,L3),
        assume_wumpus(no,L3),
        location_toward(L1,180,L4),
        assume_wumpus(no,L4),
        location_toward(L1,270,L5),
        assume_wumpus(no,L5), !.

There was a stench:

    add_wumpus_KB(yes) :-
        agent_location(L1),          % I don't know if I'm in a wumpus place,
        location_toward(L1,0,L2),    % and it's possible there is a wumpus in
        assume_wumpus(yes,L2),       % each adjacent room. <=|
        location_toward(L1,90,L3),
        assume_wumpus(yes,L3),
        location_toward(L1,180,L4),
        assume_wumpus(yes,L4),
        location_toward(L1,270,L5),
        assume_wumpus(yes,L5).

But you came from one direction, so how does a known "no wumpus" compare with a possible wumpus? See assume_wumpus below.

    % Don't allow a "possible" wumpus to override a "no wumpus".
    assume_wumpus(yes,L) :-          % before, I knew there was no wumpus,
        is_wumpus(no,L),             % so there can't be one now... =)
        !.
    assume_wumpus(yes,L) :-
        wall(L),                     % a wumpus can't be in a wall
        retractall(is_wumpus(_,L)),
        assert(is_wumpus(no,L)), !.
    assume_wumpus(yes,L) :-
        wumpus_healthy,              % so...
        retractall(is_wumpus(_,L)),
        assert(is_wumpus(yes,L)), !.
    assume_wumpus(yes,L) :-
        retractall(is_wumpus(_,L)),
        assert(is_wumpus(no,L)).     % because the wumpus is dead >=]

We think there is a wumpus. We treat it as true until proven otherwise.
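The assume_wumpus(no,L) clauses are not shown here; presumably they record is_wumpus(no,L) unconditionally. Given that assumption, the first clause above makes a "no" sticky, so a hypothetical session would behave like:

    ?- assume_wumpus(no,[2,2]),
       assume_wumpus(yes,[2,2]),   % first clause fires, the 'no' is kept
       is_wumpus(S,[2,2]).
    S = no.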

exec.pro: carry out the result of an action.

    apply(grab) :-
        agent_score(S),                  % get my current score
        score_grab(SG),                  % get the value of grabbing
        New_S is S + SG,
        retractall(agent_score(S)),      % reset the score
        assert(agent_score(New_S)),
        retractall(gold_location(_)),    % no more gold at this place
        retractall(is_gold(_)),          % the gold is with me!
        assert(agent_hold),              % money, money :P
        retractall(agent_goal(_)),
        assert(agent_goal(go_out)),      % now I want to go home
        format("Give me the money >=}...",[]), !.

If the shot misses the wumpus, update the facts...

    % Wumpus is missed.
    % There are several shoot options;
    % this one occurs after we know we didn't hit.
    apply(shoot) :-
        format("Ouchh, I fail Grrrr >=}...",[]),
        retractall(agent_arrows(_)),     % I can infer some information!
        assert(agent_arrows(0)),
        agent_location([X,Y]),           % I can assume that the wumpus...
        location_ahead([X,NY]),
        is_wumpus(yes,[X,WY]),
        retractall(is_wumpus(yes,[X,WY])),
        assert(is_wumpus(no,[X,WY])),    % ...is not in the supposed room
        !.

ask.pro: ask the KB for advice.

    ask_KB(Action) :-
        make_action_query(Strategy,Action).

    make_action_query(Strategy,Action) :- act(strategy_reflex,Action), !.
    make_action_query(Strategy,Action) :- act(strategy_find_out,Action), !.
    make_action_query(Strategy,Action) :- act(strategy_go_out,Action), !.

    act(strategy_reflex,die) :-
        agent_healthy,
        wumpus_healthy,
        agent_location(L),
        wumpus_location(L),
        is_short_goal(die_wumpus), !.
    act(strategy_reflex,die) :-
        agent_healthy,
        agent_location(L),
        pit_location(L),
        is_short_goal(die_pit), !.

Notice the order of the strategies: first reflex, then find the gold (find_out), then get out (go_out).

    act(strategy_reflex,shoot) :-       % I shoot the wumpus only if I think
        agent_arrows(1),                % that we are in the same X,
        agent_location([X,Y]),          % i.e. I assume the wumpus and me
        location_ahead([X,NY]),         % are in the same column,
        is_wumpus(yes,[X,WY]),
        dist(NY,WY,R1),                 % and if I don't have my back to him,
        dist(Y,WY,R2),                  % i.e. I'm facing the right way
        less_equal(R1,R2),              % to shoot him... HE!HE!
        is_short_goal(shoot_forward_in_the_same_X), !.
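dist/3 is not shown on these slides either; from its use here it is presumably the absolute difference of two coordinates:

    % Assumed definition.
    dist(X,Y,D) :- D is abs(X - Y).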

    act(strategy_find_out,forward) :-
        agent_goal(find_out),
        agent_courage,
        good(_),                   % I'm interested in a good room somewhere,
        location_ahead(L),
        good(L),                   % e.g. the room in front of me
        no(is_wall(L)),
        is_short_goal(find_out_forward_good_good), !.

    act(strategy_find_out,turnleft) :-
        agent_goal(find_out),
        agent_courage,
        good(_),                   % I'm interested...
        agent_orientation(O),
        Planned_O is (O+90) mod 360,
        agent_location(L),
        location_toward(L,Planned_O,Planned_L),
        good(Planned_L),           % ...directly by my left side
        no(is_wall(Planned_L)),
        is_short_goal(find_out_turnleft_good_good), !.

find_out means "find out where the gold is". Thus you proceed only if finding the gold is your goal and you have enough time (agent_courage) to do it. The good(_) call checks whether any good rooms are known at all.

more.pro shows what is_short_goal does:

    % Here you set a short goal.
    % The short_goal is stored so planning steps
    % know what the next goal is.
    % It doesn't appear to be used at this point.
    is_short_goal(X) :-
        retractall(short_goal(_)),
        assert(short_goal(X)).
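is_short_goal/1 always succeeds and leaves exactly one short_goal/1 fact in the database:

    ?- is_short_goal(find_out_forward_good_good),
       short_goal(SG).
    SG = find_out_forward_good_good.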