Value Iteration Theorem: value iteration converges to the optimal values; the policy extracted from the values may converge faster (it can become optimal before the values themselves stop changing). Each backup combines three components of the return: the immediate reward, the discount factor, and the value of the successor state.
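For reference, the standard form of the value iteration backup, written with the T(s,a,s') and R(s,a,s') notation used later in these notes:

V_{k+1}(s) = max_a Σ_{s'} T(s,a,s') · [ R(s,a,s') + γ · V_k(s') ]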
Value Iteration Advantages compared with Expectimax:
Given MDP: state space {S1, S2}, actions {1, 2}, transitions 80% / 20%, reward: state 1 → 1, state 2 → 0.
The expectimax tree for this MDP keeps re-expanding the same state subtrees (S1 and S2 repeat at every level), while value iteration computes each state's value once per sweep instead of re-expanding repeated subtrees. A small value iteration sketch on this MDP follows below.
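A minimal Python sketch of value iteration on the two-state example above. The discount factor gamma = 0.9 and the exact 80/20 transition pattern for every (state, action) pair are assumptions for illustration, not given in the notes.

GAMMA = 0.9                       # discount factor (assumed)
STATES = ["S1", "S2"]
ACTIONS = [1, 2]

# T[(s, a)] -> list of (next_state, probability); R[s] -> reward for being in s
T = {
    ("S1", 1): [("S1", 0.8), ("S2", 0.2)],
    ("S1", 2): [("S2", 0.8), ("S1", 0.2)],
    ("S2", 1): [("S2", 0.8), ("S1", 0.2)],
    ("S2", 2): [("S1", 0.8), ("S2", 0.2)],
}
R = {"S1": 1.0, "S2": 0.0}

V = {s: 0.0 for s in STATES}
for _ in range(100):              # sweep until the values stop changing
    V_new = {
        s: max(
            sum(p * (R[s] + GAMMA * V[s2]) for s2, p in T[(s, a)])
            for a in ACTIONS
        )
        for s in STATES
    }
    if max(abs(V_new[s] - V[s]) for s in STATES) < 1e-6:
        V = V_new
        break
    V = V_new
print(V)   # each state's value is computed once per sweep, no repeated subtrees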
Q-learning compared with Value Iteration:
same: both assume an MDP and seek an optimal policy
different: T(s,a,s') and R(s,a,s') are unknown, so the MDP is solved in a different way (learning from a model vs. learning without one)
Reinforcement learning setting: policy, experience (samples), reward
model-based vs. model-free; passive learning vs. active learning
Q-learning: sample-based Q-value iteration
Process: observe a sample transition s → a → s' with reward r
Update the Q-value toward the sample:
Q(s,a) ← (1 - α) · Q(s,a) + α · [ r + γ · max_{a'} Q(s',a') ]
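A minimal sketch of this sample-based update in Python; the learning rate alpha = 0.1 and discount gamma = 0.9 are assumed values, not fixed in the notes.

from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9
ACTIONS = [1, 2]
Q = defaultdict(float)            # Q[(s, a)], initialized to 0

def q_update(s, a, s_next, r):
    # Blend the old estimate with the new sample r + gamma * max_a' Q(s', a')
    sample = r + GAMMA * max(Q[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] = (1 - ALPHA) * Q[(s, a)] + ALPHA * sample

# usage: one observed transition s -> a -> s' with reward r
q_update("S1", 1, "S2", 1.0)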
Q-learning converges to the optimal policy, provided enough samples are taken and the learning rate is eventually made small enough.
Ways to explore: epsilon-greedy action selection: choose between acting randomly (with probability epsilon) and acting according to the best current Q-value.
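A short sketch of epsilon-greedy action selection, compatible with the Q table above; epsilon = 0.1 is an assumed value.

import random

def epsilon_greedy(Q, s, actions, epsilon=0.1):
    # With probability epsilon act randomly (explore);
    # otherwise pick the action with the best current Q-value (exploit).
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])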