Omar Khaled Enayet – 4th Year FCIS – Computer Science Department – August 2009. Concerning Planning, Learning, Adaptation, and Opponent Modeling.



 Introduction  Real-Time Strategy Games.  Why is AI Development slow in RTS Games.  AI Areas needing more research in RTS Games.  Latest Research  Introduction.  Research Papers and Theses. ▪ Introduction ▪ The Papers : Intro ▪ Case-Based Planning. ▪ Reinforcement Learning. ▪ Genetic Algorithms. ▪ Hybrid Approaches. ▪ Opponent Modeling Approaches. ▪ Misc. Approaches.

 Real-time strategy (RTS) games can be viewed as simplified military simulations: several players struggle over resources scattered across a terrain by setting up an economy, building armies, and guiding them into battle in real time.  The current AI performance in commercial RTS games is poor by human standards.  RTS games are characterized by enormous state spaces, large decision spaces, and asynchronous interactions.  They also require reasoning at several levels of granularity, from production and economic management (usually expressed as resource management and technological development) down to the tactical skills necessary for combat.

 RTS game worlds feature many objects, imperfect information, micro actions, and fast-paced action. By contrast, world-class AI players mostly exist for slow-paced, turn-based, perfect-information games, in which the majority of moves have global consequences and human planning abilities can therefore be outsmarted by mere enumeration.  Market-dictated AI resource limitations. Up to now, popular RTS games have been released solely by game companies, who naturally are interested in maximizing their profit. Because graphics drives game sales and companies strive for large market penetration, only about 15% of CPU time and memory is currently allocated to AI tasks. On the positive side, as graphics hardware gets faster and memory cheaper, this percentage is likely to increase – provided game designers stop making RTS game worlds ever more realistic.  Lack of AI competition. In classic two-player games, tough competition among programmers has driven AI research to unmatched heights. Currently, however, there is no such competition among real-time AI researchers in games other than computer soccer. The considerable manpower needed to design and implement RTS games, and the reluctance of game companies to incorporate AI APIs in their products, are big obstacles to AI competition in RTS games.

 Adversarial real-time planning. In fine-grained, realistic simulations, agents cannot afford to think in terms of micro actions such as “move one step North”. Instead, abstractions of the world state have to be found that allow AI programs to conduct forward searches in a manageable abstract space and to translate the solutions found back into action sequences in the original state space. Because the environment is also dynamic, hostile, and smart, adversarial real-time planning approaches need to be investigated.  Decision making under uncertainty. Initially, players are not aware of their enemies' base locations and intentions. It is necessary to gather intelligence by sending out scouts and to draw conclusions in order to adapt. If no data about opponent locations and actions is available yet, plausible hypotheses have to be formed and acted upon.  Opponent modeling and learning. One of the biggest shortcomings of current (RTS) game AI systems is their inability to learn quickly. Human players need only a couple of games to spot opponents' weaknesses and exploit them in future games. New, efficient machine learning techniques have to be developed to tackle these important problems.

 Spatial and temporal reasoning. Static and dynamic terrain analysis, as well as understanding the temporal relations of actions, is of utmost importance in RTS games – and yet current game AI programs largely ignore these issues and fall victim to simple common-sense reasoning.  Resource management. Players start the game by gathering local resources to build up defenses and attack forces, to upgrade weaponry, and to climb the technology tree. At any given time, players have to balance the resources they spend in each category. For instance, a player who invests too many resources in upgrades will become prone to attacks because of an insufficient number of units. Proper resource management is therefore a vital part of any successful strategy.

 Collaboration. In RTS games, groups of players can join forces and share intelligence. How to coordinate actions effectively through communication among the parties is a challenging research problem. For instance, in mixed human/AI teams, the AI player often behaves awkwardly because it does not monitor the human's actions, cannot infer the human's intentions, and fails to synchronize attacks.  Pathfinding. Finding high-quality paths quickly in 2D terrains is of great importance in RTS games. In the past, only a small fraction of CPU time could be devoted to AI tasks, of which finding shortest paths was the most time-consuming. Hardware graphics accelerators now allow programs to spend more time on AI tasks. Still, the presence of hundreds of moving objects and the push for more realistic simulations in RTS games make it necessary to improve and generalize pathfinding algorithms. Keeping unit formations, and taking terrain properties, minimal turn radii, inertia, enemy influence, and fuel consumption into account, greatly complicates the once-simple problem of finding shortest paths.
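The baseline pathfinding problem mentioned above can be sketched as a shortest-path search on a 2D grid. This is a minimal illustration with a hypothetical map encoding ('#' for impassable terrain), not code from any of the surveyed systems; on an unweighted grid, breadth-first search suffices, while production engines typically layer A* with heuristics, formations, and influence maps on top.

```python
from collections import deque

def shortest_path_length(grid, start, goal):
    """Return the number of steps from start to goal, or -1 if unreachable.

    grid is a list of strings; '#' marks impassable terrain.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, 0)])  # FIFO queue of ((row, col), distance)
    seen = {start}
    while frontier:
        (r, c), dist = frontier.popleft()
        if (r, c) == goal:
            return dist
        # Expand the four axis-aligned neighbors.
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), dist + 1))
    return -1  # goal walled off
```

Because BFS expands positions in order of distance, the first time the goal is dequeued its distance is minimal; the complications listed above (turn radii, inertia, enemy influence) turn this into a weighted, dynamic search problem.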

 Current implementations of RTS games rely extensively on finite-state machines (FSMs), which makes them highly predictable.  Adaptation is achieved through learning, planning, or a mixture of both.  Planning is beginning to appear in commercial games such as Demigod and the latest Total War game.  Learning has had limited success so far.  Developers are experimenting with replacing the ordinary decision-making systems (FSMs, FuSMs, scripting, decision trees, and Markov systems) with learning techniques.
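To see why FSM-driven AI is so predictable, consider a minimal sketch of the kind of state machine commonly used for RTS units. The states and event names here are hypothetical, chosen only for illustration:

```python
class UnitFSM:
    """A unit AI that switches between fixed, scripted states."""

    def __init__(self):
        self.state = "idle"
        # Transition table: (current state, event) -> next state.
        self.transitions = {
            ("idle", "enemy_spotted"): "attack",
            ("idle", "resource_spotted"): "gather",
            ("gather", "enemy_spotted"): "attack",
            ("attack", "low_health"): "retreat",
            ("retreat", "healed"): "idle",
        }

    def handle(self, event):
        # Deterministic lookup: the same event in the same state always
        # produces the same behavior; unknown events are ignored.
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state
```

Because every response is a fixed table lookup, a human opponent who observes a few games can anticipate the AI's reaction to any situation, which is exactly the predictability problem that learning techniques try to address.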

 More than 30 papers and theses address planning and learning in RTS games.  The 3 major approaches to AI research in RTS games concerning learning and planning are Case-Based Planning, Reinforcement Learning (in its different forms), and Genetic Algorithms. Some papers use a hybrid of these techniques; others use other planning formalisms such as PDDL, opponent-modeling techniques, or other miscellaneous techniques.  3 papers encourage research in this field.  9 papers use a Case-Based Planning approach; 1 uses a hybrid CBR/GA approach (2008); 1 uses a hybrid CBR/RL approach (2007).  10 papers use Reinforcement Learning in its different forms (Monte-Carlo, Dynamic Scripting, and TD-Learning); 1 uses TD-Learning with GA; 1 uses Dynamic Scripting with GA.  3 papers use Genetic Algorithms.  3 papers apply opponent-modeling techniques.

 RTS Games and Real-Time AI Research – 2003  RTS Games: A New AI Research Challenge – 2003  Call for AI Research in RTS Games – 2004

 Case-based planning is the reuse of past successful plans to solve new planning problems.  It is an application of Case-Based Reasoning (CBR) to planning.
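The retrieve-and-adapt cycle described above can be sketched in a few lines. This is a toy illustration, not any surveyed system's implementation: the situation features (own unit count, own resources), the stored plans, and the adaptation rule are all hypothetical.

```python
def similarity(a, b):
    """Negated squared distance between two situation feature vectors."""
    return -sum((x - y) ** 2 for x, y in zip(a, b))

def retrieve(case_base, situation):
    """Return the (situation, plan) case closest to the current situation."""
    return max(case_base, key=lambda case: similarity(case[0], situation))

def adapt(plan, situation):
    """Trivial adaptation step: fill the army size from our unit count."""
    units = situation[0]
    return [step.replace("N", str(units)) for step in plan]

# Case base: (features = [own units, own resources], successful past plan).
case_base = [
    ([10, 200], ["build barracks", "train N soldiers", "attack"]),
    ([2, 50], ["gather resources", "build defenses"]),
]
```

Real case-based planners replace each piece with something richer (structured case representations, plan-step dependencies, revision and retention phases), but the retrieve, adapt, reuse skeleton is the same.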

 The David Aha Research Thread:  On the Role of Explanation for Hierarchical Case-Based Planning in RTS Games – after 2004  Learning to Win: Case-Based Plan Selection in an RTS Game  Defeating Novel Opponents in a Real-Time Strategy Game – 2005  The Santiago Ontanon Research Thread:  Case-Based Planning and Execution for RTS Games – 2007  Learning from Human Demonstrations for Real-Time Case-Based Planning – 2008  On-Line Case-Based Plan Adaptation for RTS Games  Situation Assessment for Plan Retrieval in RTS Games – 2009  Other Papers:  Case-Based Plan Recognition for RTS Games – after 2003  Mining Replays of RTS Games to Learn Player Strategies – 2007

 Reinforcement learning is a sub-area of machine learning concerned with how an agent ought to take actions in an environment so as to maximize some notion of long-term reward.  It differs from supervised learning in that correct input/output pairs are never presented, nor sub-optimal actions explicitly corrected.  Further, there is a focus on on-line performance, which involves finding a balance between exploration and exploitation.

 Dynamic Scripting:  Goal-Directed Hierarchical Dynamic Scripting for RTS Games – 2006  Automatically Acquiring Domain Knowledge for Adaptive Game AI Using Evolutionary Learning – 2008  Monte-Carlo Planning:  UCT (Monte-Carlo) for Tactical Assault Battles in Real-Time Strategy Games – 2009  Monte Carlo Planning in RTS Games – after 2004  Temporal-Difference Learning:  Learning Unit Values in Wargus Using Temporal Differences – 2005  Establishing an Evaluation Function for RTS Games – after 2005  Dynamic Scripting vs. Monte-Carlo Planning:  Adaptive Reinforcement Learning Agents in RTS Games – 2008  Hierarchical Reinforcement Learning:  Hierarchical Reinforcement Learning in Computer Games – after 2006  Hierarchical Reinforcement Learning with Deictic Representation in a Computer Game – after 2006

 Genetic algorithms are a particular class of evolutionary algorithms (EAs) that use techniques inspired by evolutionary biology, such as inheritance, mutation, selection, and crossover.
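The four mechanisms named above can be sketched on the classic OneMax toy problem (maximize the number of 1-bits in a genome). This is a generic GA skeleton with illustrative parameters, not the algorithm of any surveyed paper:

```python
import random

def fitness(genome):
    return sum(genome)  # OneMax: count the 1-bits

def evolve(length=20, pop_size=30, generations=60, mutation_rate=0.02):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Selection: tournament of two, the fitter individual wins.
            a, b = random.sample(population, 2)
            return a if fitness(a) >= fitness(b) else b

        offspring = []
        while len(offspring) < pop_size:
            mother, father = select(), select()
            # Crossover/inheritance: the child takes a prefix from one
            # parent and a suffix from the other.
            cut = random.randrange(1, length)
            child = mother[:cut] + father[cut:]
            # Mutation: occasionally flip a bit.
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in child]
            offspring.append(child)
        population = offspring
    return max(population, key=fitness)
```

In the RTS papers below, the genome encodes something far richer than bits (script rules, influence-map parameters, network weights) and fitness is measured by playing games, but the evolutionary loop is the same.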

 Human-Like Behavior in RTS Games – 2003  Co-Evolving Real-Time Strategy Game Playing Influence Map Trees with Genetic Algorithms  Co-Evolution in Hierarchical AI for Strategy Games – after 2004

 Genetic Algorithms + Dynamic Scripting:  Improving Adaptive Game AI with Evolutionary Learning – 2004  Automatically Acquiring Domain Knowledge for Adaptive Game AI Using Evolutionary Learning – 2005  Genetic Algorithms + TD-Learning:  Neural Networks in RTS AI – 2001  Genetic Algorithms + Case-Based Planning:  Stochastic Plan Optimization in Real-Time Strategy Games – 2008  Case-Based Reasoning + Reinforcement Learning:  Transfer Learning in Real-Time Strategy Games Using Hybrid CBR-RL

 Hierarchical Opponent Models for Real-Time Strategy Games – 2007  Opponent Modeling in Real-Time Strategy Games – after 2007  Design of Autonomous Systems – Learning Adaptive Playing of an RTS Game

 Supervised Learning:  Player Adaptive Cooperative Artificial Intelligence for RTS Games – 2007  PDDL:  A First Look at Build-Order Optimization in RTS Games – after 2006  Finite-State Machines:  SORTS – A Human-Level Approach to Real-Time Strategy AI – 2007  Others:  Real-Time Challenge Balance in an RTS Game Using rtNEAT – 2008  AI Techniques in RTS Games – September 2006

 RTS Games and Real-Time AI Research – Michael Buro & Timothy M. Furtak  Call for AI Research in RTS Games – Michael Buro – 2004  AIGameDev Forums  GameDev.Net Forums  Wikipedia  Others