11 Reinforcement Learning

    This chapter is dedicated to the important issue of reinforcement learning. Markov decision processes (MDPs) are introduced as an extension of hidden Markov processes in which the actions are chosen by the agent. Secondary rewards are important here too, in the sense that they provide the feedback used to rate past actions as good or bad. The problem is that it is frequently difficult to assign a reward to each individual action; in some cases, the reward is obtained only after the completion of the whole process. Hence the need for bookkeeping mechanisms that allow final successes or failures to be propagated back to the actions that led to them. The difficult problem of handling such delayed rewards is the subject of reinforcement learning, represented in Ballard's book by Q-learning.
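
    For concreteness, the standard tabular Q-learning update (written here in conventional notation, which need not match Ballard's) shows how a delayed reward is propagated back through the value estimates of the state-action pairs that led to it:

        Q(x, u) \leftarrow Q(x, u) + \alpha [ r + \gamma \max_{u'} Q(x', u') - Q(x, u) ]

    where x' is the state reached after taking action u in state x, r is the (possibly zero) immediate reward, \alpha is a learning rate, and \gamma is a discount factor.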
 
    Ballard begins his presentation of reinforcement learning with some basic notation and two motivating situations: maze following and pole balancing. This is followed by a brief section introducing the concepts of value and policy improvement. It is felt that the novice reader will have substantial difficulty in following this section, because of some confusion and haste in the presentation of the concepts. For instance, the state value functions in Figure 11.3 are not referred to in the main text, which is restricted to discussing the value of policies. In addition, Q functions are defined as the expected return of starting in state x, taking a specific action u, and then following policy f thereafter. It should be observed that this latter policy is assumed optimal for the rest of the transitions, in order to allow the dynamic programming principle to be extended to the problem. It should also be observed that the way the reward estimate is defined can strongly influence the determination of the optimal policy. Unfortunately, the earlier examples (maze and pole) are not taken up again in the book, leaving the reader wondering how they could be approached more formally in terms of reinforcement learning.
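
    In symbols, and keeping the letters x, u and f quoted above (the reward r and discount factor \gamma below are in conventional notation and are not necessarily the book's), this definition of the Q function can be written as

        Q^f(x, u) = E[ \sum_{t \ge 0} \gamma^t r_t \mid x_0 = x, \, u_0 = u, \, u_t = f(x_t) \ \text{for} \ t \ge 1 ],

    so that the value of the policy is recovered as V^f(x) = Q^f(x, f(x)), and a policy improvement step replaces f(x) by \arg\max_u Q^f(x, u).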
 
    After a section discussing how Q-functions can be obtained, Ballard proceeds to the interesting concept of temporal-difference learning, which is illustrated informally in terms of the classical application to backgammon (its basic update rule is sketched below). The next section introduces the approach of learning with a teacher, covering the situations of learning with an external critic and learning by watching. The important issue of partially observable MDPs is covered in terms of two strategies (avoiding bad states and learning state information from temporal sequences) and illustrated in Section 11.7. The chapter concludes with a summary outlining the most important issues covered.
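
    For reference, the simplest temporal-difference rule, TD(0), adjusts the value estimate of the current state towards the immediate reward plus the discounted estimate of the next state (again in conventional notation rather than necessarily the book's):

        V(x_t) \leftarrow V(x_t) + \alpha [ r_{t+1} + \gamma V(x_{t+1}) - V(x_t) ]

    In the backgammon application mentioned above, the bracketed term is driven mostly by the difference between successive position evaluations, since the reward arrives only at the end of the game.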