
1 CS 188: Artificial Intelligence Fall 2006
Lecture 13: Advanced Reinforcement Learning 10/12/2006 Dan Klein – UC Berkeley

2 Midterms
Exams are graded, will be in glookup today, and will be returned and gone over in section Monday
My impressions: a long and fairly hard exam, but the class generally did fine
You should expect the final to be equally hard, but it should feel much less long
We added 25 points and took the exam out of 100

3 Midcourse Reviews
You liked:
Projects (19)
Lectures (19)
Visual / demo presentations (14)
Newsgroup (4)
Pacman (3)
You wanted:
Debugging help / coding advice / see staff code (8)
More time between projects (6)
More problem sets (4)
A webcast or podcast (4)
More coding or more projects (4)
Slides / reading earlier (3)
Class later in the day (3)
Lecture to be less fast / dense / technical / confusing (3)

4 Midcourse Reviews II
Difficulty / workload is:
Hard (15)
Medium (9)
Easy (7)
Dan's office hours:
Thursday (7)
Not Thursday (9)

5 Midcourse Reviews
I propose:
I'll hang out after class on Tuesdays; we can walk to my office if there are more questions
I'll add Thursday office hours for a few weeks, and keep them if attended
We'll spend more section time on projects
I'll link to last term's slides so you can get a preview
I'll keep coding demos for you
There will be a (slight) shift from projects to written questions
Rough "midterm grades" soon, to let you know where you stand (will incorporate at least the first two projects)
Other:
I've asked about webcasting / podcasting, but it seems very unlikely this term
There are limits to how early I can get new slides up, since I revise extensively from last term
Can't really change: programming language or time of day

6 Midcourse Reviews - Anonymous

7 Today How advanced reinforcement learning works for large problems
Some previews of fundamental ideas we'll see throughout the rest of the term
Next class we'll start on probabilistic reasoning and reasoning about beliefs

8 Recap: Q-Learning
Learn Q*(s,a) values from samples
Receive a sample (s, a, s', r)
On one hand: old estimate of return: Q(s, a)
But now we have a new estimate for this sample: sample = r + γ max_{a'} Q(s', a')
Nudge the old estimate towards the new sample: Q(s, a) ← Q(s, a) + α [sample − Q(s, a)]
Equivalently, average samples over time: Q(s, a) ← (1 − α) Q(s, a) + α · sample
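As a concrete illustration of this update, here is a minimal tabular sketch in Python; the dictionary-based Q-table and the particular values of the learning rate alpha and discount gamma are assumptions for the example, not something given in the lecture.

from collections import defaultdict

alpha = 0.1              # learning rate (illustrative value)
gamma = 0.9              # discount factor (illustrative value)
Q = defaultdict(float)   # Q[(state, action)] -> current estimate of return

def q_update(s, a, s_prime, r, legal_actions):
    # New one-sample estimate of the return from (s, a); terminal states have no actions
    sample = r + gamma * max((Q[(s_prime, a2)] for a2 in legal_actions), default=0.0)
    # Nudge the old estimate toward the sample, i.e. a running average with rate alpha
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * sample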

9 Q-Learning Q-learning produces tables of q-values:

10 Q-Learning
In realistic situations, we cannot possibly learn about every single state!
Too many states to visit them all in training
Too many states to even hold the q-tables in memory
Instead, we want to generalize:
Learn about some small number of training states from experience
Generalize that experience to new, similar states
This is a fundamental idea in machine learning, and we'll see it over and over again

11 Example: Pacman Let’s say we discover through experience that this state is bad: In naïve q-learning, we know nothing about this state or its q-states: Or even this one!

12 Feature-Based Representations
Solution: describe a state using a vector of features
Features are functions from states to real numbers (often 0/1) that capture important properties of the state
Example features:
Distance to closest ghost
Distance to closest dot
Number of ghosts
1 / (dist to dot)^2
Is Pacman in a tunnel? (0/1)
… etc.
Can also describe a q-state (s, a) with features (e.g. action moves closer to food)
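To make the idea concrete, here is a small sketch of a Pacman-style feature extractor in Python; the positions passed in and the feature names are hypothetical choices for illustration, not actual project code.

def manhattan(p, q):
    # Simple stand-in for a real maze distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def extract_features(pacman_pos, ghost_positions, food_positions):
    # Map a state (or a q-state, if pacman_pos is the position after the action) to named features
    ghost_dists = [manhattan(pacman_pos, g) for g in ghost_positions]
    dot_dists = [manhattan(pacman_pos, d) for d in food_positions]
    closest_dot = min(dot_dists) if dot_dists else 0
    return {
        "bias": 1.0,
        "dist-closest-ghost": float(min(ghost_dists)) if ghost_dists else 0.0,
        "dist-closest-dot": float(closest_dot),
        "num-ghosts": float(len(ghost_positions)),
        "inv-dot-dist-sq": 1.0 / closest_dot ** 2 if closest_dot else 0.0,
    }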

13 Linear Feature Functions
Using a feature representation, we can write a q-function (or value function) for any state using a few weights:
V(s) = w_1 f_1(s) + w_2 f_2(s) + … + w_n f_n(s)
Q(s, a) = w_1 f_1(s, a) + w_2 f_2(s, a) + … + w_n f_n(s, a)
Advantage: our experience is summed up in a few powerful numbers
Disadvantage: states may share features but be very different in value!
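A minimal sketch of such a linear q-function in Python, reusing the feature dictionaries from the extractor sketch above; the weight values are made up for illustration.

weights = {"bias": 1.0, "dist-closest-ghost": 0.5,
           "dist-closest-dot": -0.2, "num-ghosts": -1.0}

def linear_q_value(weights, features):
    # Q(s,a) = w_1 f_1(s,a) + ... + w_n f_n(s,a); features with no weight count as 0
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

print(linear_q_value(weights, {"bias": 1.0, "dist-closest-ghost": 3.0, "dist-closest-dot": 2.0}))
# 1.0*1.0 + 0.5*3.0 + (-0.2)*2.0 = 2.1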

14 Function Approximation
Q-learning with linear q-functions:
difference = [r + γ max_{a'} Q(s', a')] − Q(s, a)
Q(s, a) ← Q(s, a) + α · difference (exact q-values)
w_i ← w_i + α · difference · f_i(s, a) (linear q-functions)
Intuitive interpretation:
Adjust weights of active features
E.g. if something unexpectedly bad happens, disprefer all states with that state's features
Formal justification: online least squares (much later)
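The weight update itself is short; here is a hedged sketch in the same style as the snippets above (alpha, gamma, and the feature/weight dictionaries are the illustrative names used earlier, not the official project API).

def linear_q_value(weights, features):
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

def approx_q_update(weights, features_sa, r, next_action_features, alpha=0.1, gamma=0.9):
    # next_action_features: one feature dict per legal action in the next state s'
    max_next = max((linear_q_value(weights, f) for f in next_action_features), default=0.0)
    difference = (r + gamma * max_next) - linear_q_value(weights, features_sa)
    # Every active feature's weight moves in the direction of the error, so an
    # unexpectedly bad outcome lowers the value of all states sharing these features
    for name, value in features_sa.items():
        weights[name] = weights.get(name, 0.0) + alpha * difference * value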

15 Example: Q-Pacman

16 Hierarchical Learning

17 Hierarchical RL
Stratagus: example of a large RL task, from Bhaskara Marthi's thesis (w/ Stuart Russell)
Stratagus is hard for reinforcement learning algorithms:
Huge number of states
> 10^30 actions at each point
Time horizon ≈ 10^4 steps
Stratagus is hard for human programmers:
Typically takes several person-months for game companies to write a computer opponent
Still, no match for experienced human players
Programming involves much trial and error
Hierarchical RL:
Humans supply high-level prior knowledge using a partial program
Learning algorithm fills in the details

18 Partial “Alisp” Program
(defun top ()
  (loop
    (choose
      (gather-wood)
      (gather-gold))))

(defun gather-wood ()
  (with-choice (dest *forest-list*)
    (nav dest)
    (action 'get-wood)
    (nav *base-loc*)
    (action 'dropoff)))

(defun gather-gold ()
  (with-choice (dest *goldmine-list*)
    (nav dest)
    (action 'get-gold)
    (nav *base-loc*)
    (action 'dropoff)))

(defun nav (dest)
  (until (= (pos (get-state)) dest)
    (with-choice (move '(N S E W NOOP))
      (action move))))

Alisp is an extension of Lisp that adds a choose operator; the choice points above are what the learning algorithm fills in. In a complete hand-written program, the choice inside nav would have to be replaced by a path-planning algorithm.

19 Hierarchical RL
They then define a hierarchical Q-function which learns a linear feature-based mini-Q-function at each choice point
Very good at balancing resources and directing rewards to the right region
Still not very good at the strategic elements of these kinds of games (i.e. the Markov game aspect)
[DEMO]

20 Policy Search

21 Policy Search
Problem: often the feature-based policies that work well aren't the ones that approximate V / Q best
E.g. your value functions from 1.3 were probably horrible estimates of future rewards, but they still produced good decisions
We'll see this distinction between modeling and prediction again later in the course
Solution: learn the policy that maximizes rewards rather than the value that predicts rewards
This is the idea behind policy search, such as what controlled the upside-down helicopter

22 Policy Search
Simplest policy search:
Start with an initial linear value function or q-function
Nudge each feature weight up and down and see if your policy is better than before
Problems:
How do we tell the policy got better? Need to run many sample episodes!
If there are a lot of features, this can be impractical
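As a rough illustration, here is a hedged sketch of this naive hill-climbing search in Python; run_episode (a simulator that plays one episode with the given weights and returns its total reward), the number of evaluation episodes, and the step size are all assumptions for the example.

def evaluate_policy(weights, run_episode, n_episodes=50):
    # Estimate average return by running many sample episodes with these weights
    return sum(run_episode(weights) for _ in range(n_episodes)) / n_episodes

def hill_climb(weights, run_episode, step=0.1, iterations=20):
    best_score = evaluate_policy(weights, run_episode)
    for _ in range(iterations):
        for name in list(weights):
            for delta in (step, -step):
                candidate = dict(weights)
                candidate[name] += delta
                score = evaluate_policy(candidate, run_episode)
                if score > best_score:   # keep the nudge only if the policy actually improved
                    weights, best_score = candidate, score
    return weights

Every candidate nudge costs n_episodes full episodes, which is exactly the impracticality the slide points out.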

23 Policy Search* Advanced policy search:
Write a stochastic (soft) policy:
Turns out you can efficiently approximate the derivative of the returns with respect to the parameters w (details in the book, but you don't have to know them)
Take uphill steps, recalculate derivatives, etc.
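One common way to write such a soft policy (an assumption here, not necessarily the exact form used in lecture) is a softmax over the linear q-values; a minimal sketch:

import math

def soft_policy(weights, action_features):
    # action_features: {action: feature dict for (s, a)}
    scores = {a: sum(weights.get(n, 0.0) * v for n, v in f.items())
              for a, f in action_features.items()}
    m = max(scores.values())
    exps = {a: math.exp(s - m) for a, s in scores.items()}   # subtract the max for numerical stability
    total = sum(exps.values())
    return {a: e / total for a, e in exps.items()}           # probability of taking each action

Because these probabilities vary smoothly with the weights w, the expected return is differentiable in w, which is what makes the uphill (gradient) steps described on this slide possible.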

24 Take a Deep Breath… We’re done with search and planning!
Next, we'll look at how to reason with probabilities:
Diagnosis
Tracking objects
Speech recognition
Robot mapping
… lots more!
Last part of course: machine learning

25 Digression / Preview

26 Linear regression
Given examples (x_i, y_i) for i = 1, …, n
Predict y_new given a new point x_new
[Figure: scatter plot of noisy temperature data, generated with scatter(1:20,10+(1:20)+2*randn(1,20),'k','filled')]

27 Linear regression
Prediction
[Figure: the same temperature scatter plot, now with a fitted line and its predictions at new points labeled "Prediction"]

28 Ordinary Least Squares (OLS)
[Figure: a data point from the scatter plot with its observation, the line's prediction, and the error or "residual" between them labeled]
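OLS picks the weights that minimize the sum of squared residuals, Σ_i (y_i − ŷ_i)². Here is a minimal closed-form fit for the one-feature case in Python; the data is illustrative, loosely mirroring the MATLAB scatter command above.

import random

xs = list(range(1, 21))
ys = [10 + x + random.gauss(0, 2) for x in xs]   # noisy, roughly linear "temperature" data

def ols_fit(xs, ys):
    # Closed-form simple linear regression: minimize sum_i (y_i - (w0 + w1*x_i))^2
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    w1 = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
          / sum((x - mean_x) ** 2 for x in xs))
    w0 = mean_y - w1 * mean_x
    return w0, w1

w0, w1 = ols_fit(xs, ys)
print("prediction at x = 25:", w0 + w1 * 25)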

29 Overfitting
Degree-15 polynomial
[Figure: a degree-15 polynomial fit that follows the training points closely but swings far outside the data range between them]
[DEMO]
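For readers who want to reproduce the effect, a small sketch using numpy's polynomial fitting; the data, the random seed, and the comparison with a degree-1 fit are assumptions for illustration (the lecture's own demo code is not in the transcript).

import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)
xs = np.arange(1, 21, dtype=float)
ys = 10 + xs + 2 * rng.standard_normal(xs.size)   # noisy, roughly linear data as before

grid = np.linspace(1, 20, 200)                    # dense grid to see behavior between data points
for degree in (1, 15):
    fit = Polynomial.fit(xs, ys, degree)          # least-squares polynomial fit of this degree
    values = fit(grid)
    print(f"degree {degree:2d}: fitted curve ranges over [{values.min():.1f}, {values.max():.1f}]")

The degree-15 curve tracks the training points closely but can swing well outside the range of the data between them, which is the overfitting the figure illustrates.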

