Source Code

In past semesters, many students doing this project struggled to get the basic algorithms to work, leaving little time for the real fun. So we provide a Java implementation for solving Markov Decision Processes (MDPs). To demonstrate how to use the Java package, we also include an implementation of the adaptive dynamic programming algorithm. The code serves several purposes. First, you can use it as a base for your own learning method. Second, if the package as a whole doesn't fit your needs, you can use parts of it, e.g., the parser for the MDP description file and the MDP class itself. Third, if you want to start from scratch all by yourself, you can use the code as a reference and compare your results against those produced by ours. Lastly, a concrete Java implementation of the various algorithms in the textbook helps you understand the material; trying our implementation to re-create the figures and tables in the textbook is a good way to learn.

Markov Decision Process

Many learning algorithms we cover in this class are built upon Markov Decision Processes. An MDP consists of a grid of states, a transition model, and a reward function. To describe an MDP, use a text file like the following:

[Size]
rows = 3
cols = 4

[Transition model]
"(*,*), FORWARD" = 0.8
"(*,*), LEFT" = 0.1
"(*,*), RIGHT" = 0.1
#"(1,1),UP,(2,1)" = 0.8
#"(1,1),UP,(1,1)" = 0.1
#"(1,1),UP,(1,2)" = 0.1

[Rewards]
(*,*) = -0.04
(3,4) = 1
(2,4) = -1

[Holes]
1 = (2,2)

[Discount factor]
gamma = 1

[Terminal states]
1 = (3,4)
2 = (2,4)

The above text describes the 3x4 world in the textbook (Fig. 17.1, page 614). The file is in Windows INI format, which is easier to read and edit than an XML file. Each section, whose name is enclosed in brackets, contains a list of key=value pairs. A line starting with "#" is a comment and is ignored. The section names are case sensitive. If you have only an APPLE ][, on which you can only type upper-case letters, take a look at MDPFileParser.java and make the necessary changes. The coordinates of each cell are row number and column number, in that order. Both row and column numbers start from 1. This is different from the textbook, which uses an X-Y notation. Some details of each section are listed below.

Note how the clever "agent-centric" coordinates shorten the description: the file above treats whichever direction the agent is (implicitly) commanded to move as the "forward" direction. Say the agent is at (2,2) and tries to go to (3,2); the commanded direction is (3-2, 2-2) = (1,0), i.e., one step in the direction of increasing row (UP in the textbook's figure). The FORWARD, LEFT, and RIGHT entries are relative to this direction. So in this description the absolute direction comes from the command, and the relative (FORWARD, LEFT, RIGHT) directions are derived from the commanded and current locations.
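
To make the mechanics concrete, here is a small, self-contained sketch (the class and variable names are ours, not part of the mdp package) that turns a commanded move into the three absolute outcomes and their probabilities under the 0.8/0.1/0.1 model above:

// Sketch only: the agent-centric transition model described above.
// Coordinates are (row, col); rows are assumed to increase "upward"
// as in the textbook figure.
public class AgentCentricDemo {

  public static void main(String[] args) {
    int[] cur = {2, 2};   // the agent is at (2,2) ...
    int[] cmd = {3, 2};   // ... and is commanded to move to (3,2)

    // The commanded direction defines "forward".
    int dr = cmd[0] - cur[0];
    int dc = cmd[1] - cur[1];

    // LEFT and RIGHT are 90-degree rotations of the forward direction.
    int[][] moves = { {dr, dc}, {dc, -dr}, {-dc, dr} };
    double[] prob = { 0.8, 0.1, 0.1 };
    String[] name = { "FORWARD", "LEFT", "RIGHT" };

    for (int i = 0; i < moves.length; i++) {
      int r = cur[0] + moves[i][0];
      int c = cur[1] + moves[i][1];
      System.out.println(name[i] + " -> (" + r + "," + c + ") with probability " + prob[i]);
    }
  }
}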

  1. The Size section is mandatory. rows and cols specify the size of a 2D environment.
  2. You can use a wild card (*) for rewards. For example, (*,*) = -0.04 means that every cell has a reward of -0.04. The following two entries, however, overwrite the rewards for two specific cells. The order is crucial: if you put the wild-card entry last, all the rewards set before it will be overridden, which is hardly what you want. Incidentally, you can write something like (*,2) = -0.1 to say that the second column has a reward of -0.1.
  3. The transition model section is more complicated. A transition function is a 3-D function, i.e., T(s, a, s') giving the probability of reaching state s' from state s by taking action a. So a full representation of a transition function would consist of entries like
    (1,1),UP,(5,6)=0.0000008
    for every s, a, and s'. That's 11x4x11 entries to write even for our miniature 3x4 world; quite a typing exercise. Fortunately, for all the examples in Chapter 17, each cell has the same transition function, so we can use an agent-centric representation, hence the three entries you saw in the file above.

    There is nothing that prevents you from having a more sophisticated transition model; in fact, you may need one in our Quake world. In that case you have to fall back to the full model. Just to show how to write entries of the full model, the file above contains three commented-out lines depicting the transition function at cell (1,1).

    The two notations can't be used at the same time. You need to put quotes around a key if it contains white space.

  4. Since we stick to the INI format, we have to write every entry in "key=value" form, even when it's not necessary. That's what happened to the Holes and Terminal states sections: the key "1" in the Holes section, and the keys "1" and "2" in the Terminal states section, are there only for the integrity of the INI format and are ignored by the MDP file parser. The so-called holes are unreachable states.
BTW, if you want to use an INI file for other purposes, you will find IniParser.java and IniSection.java useful; a minimal home-grown reader for this format is sketched right after this list.
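
The format is also easy to parse by hand. The following is a minimal, self-contained reader for the section / key=value / "#"-comment layout described above; it is a sketch of our own, not the provided IniParser.java:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal INI reader sketch; NOT the provided IniParser.java.
public class TinyIniReader {

  public static Map<String, Map<String, String>> read(String filename) throws IOException {
    Map<String, Map<String, String>> sections = new LinkedHashMap<>();
    Map<String, String> current = new LinkedHashMap<>();
    try (BufferedReader in = new BufferedReader(new FileReader(filename))) {
      String line;
      while ((line = in.readLine()) != null) {
        line = line.trim();
        if (line.isEmpty() || line.startsWith("#"))        // blank lines and comments
          continue;
        if (line.startsWith("[") && line.endsWith("]")) {   // [Section name]
          current = new LinkedHashMap<>();
          sections.put(line.substring(1, line.length() - 1), current);
        } else {                                            // key = value
          int eq = line.indexOf('=');
          if (eq > 0)
            current.put(line.substring(0, eq).trim(), line.substring(eq + 1).trim());
        }
      }
    }
    return sections;
  }

  public static void main(String[] args) throws IOException {
    System.out.println(read("textbook.txt").get("Rewards"));
  }
}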

The class MarkovDecisionProcess, along with a few smaller helper classes, defines a basic MDP. The code is written with a 2D world in mind, though it would not be difficult to factor out an interface describing a more general MDP, which could then be specialized into a 2D or 3D MDP.
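
Such an interface might look something like the sketch below (the names are ours and purely illustrative; the package does not currently define this type):

import java.util.List;

// Hypothetical sketch of a structure-agnostic MDP interface; the names
// are ours and this type is not part of the mdp package.
public interface GeneralMDP<S, A> {
  List<S> states();                      // all states, laid out in 2D, 3D or otherwise
  List<A> actions(S s);                  // actions available in state s
  double transition(S s, A a, S next);   // T(s, a, s')
  double reward(S s);                    // R(s)
  boolean isTerminal(S s);
  double discount();                     // gamma
}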

Solving an MDP using value iteration and policy iteration

To bootstrap your more advanced learning algorithms, we implement the value iteration and policy iteration algorithms from Chapter 17. The implementation is independent of the underlying structure of the MDP, 2D or 3D. This flexibility is achieved by the abstract (not in the Java sense) interface provided by the MDP classes. Try feeding the above MDP description file to the code and see how Figures 17.2 and 17.3 were created. Take a look at ValueIteration.java and PolicyIteration.java; they'll help you understand the algorithms. Remember that the utilities are a global property: you should not, in general, expect to obtain the utility of a cell right next to a terminal cell just by adding its reward to the utility of the terminal cell. That's what makes solving an MDP complicated.
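
If you want to see the heart of value iteration stripped of the class machinery, here is a sketch of the Bellman update over plain arrays (T, R, gamma, and epsilon are hypothetical inputs for illustration; the real ValueIteration.java works through the MDP classes instead):

// Sketch of value iteration over plain arrays.
// T[s][a][s2] = T(s, a, s'), R[s] = reward, gamma = discount factor.
// Terminal states are assumed to have all-zero outgoing transition rows.
public class ValueIterationSketch {

  public static double[] solve(double[][][] T, double[] R, double gamma, double epsilon) {
    int nStates = R.length;
    int nActions = T[0].length;
    double[] U = new double[nStates];
    double delta;
    do {
      delta = 0.0;
      double[] Unew = new double[nStates];
      for (int s = 0; s < nStates; s++) {
        double best = Double.NEGATIVE_INFINITY;
        for (int a = 0; a < nActions; a++) {
          double expected = 0.0;
          for (int s2 = 0; s2 < nStates; s2++)
            expected += T[s][a][s2] * U[s2];
          best = Math.max(best, expected);
        }
        Unew[s] = R[s] + gamma * best;                     // Bellman update
        delta = Math.max(delta, Math.abs(Unew[s] - U[s]));
      }
      U = Unew;
    } while (delta > epsilon); // Fig. 17.4 stops at delta < epsilon*(1-gamma)/gamma when gamma < 1
    return U;
  }
}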

Pay attention to the argument in the text that shows why value iteration is guaranteed to converge. As a good exercise to check that you are comfortable with our implementation, try to replicate Figure 17.5.

Policy iteration is essential for some reinforcement learning methods in Chapter 21. The two components of policy iteration are policy evaluation and policy improvement. The key to the policy evaluation algorithm is assembling the NxN linear system, where N is the number of cells in the environment. For an nxn world the system has O(n^4) entries, not a small number at all. However, the system is very sparse; the non-zero elements cluster near the main diagonal. There are efficient methods to store and solve sparse linear systems, but they are outside the scope of this class. The algorithm in Figure 17.7 is slightly wrong: the initial policy cannot be just any random policy. It has to be a random proper policy (one that is guaranteed to eventually reach a terminal state); otherwise, if the discount factor is 1, the system matrix could be singular. Try the following simple 2x2 example
LEFT  TERMINATE
LEFT  DOWN
with the same transition model as the 3x4 world.
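
To make the linear-algebra step concrete, here is a sketch of policy evaluation posed as the linear system (I - gamma*T_pi) U = R and handed to JAMA. It is array-based with hypothetical inputs; the real PolicyIteration.java works through the MDP classes instead.

import Jama.Matrix;

// Sketch: policy evaluation as the linear system (I - gamma * T_pi) U = R.
// Tpi[s][s2] = T(s, pi(s), s'), R[s] = reward -- hypothetical inputs for illustration.
public class PolicyEvaluationSketch {

  public static double[] evaluate(double[][] Tpi, double[] R, double gamma) {
    int n = R.length;
    double[][] A = new double[n][n];
    double[][] b = new double[n][1];
    for (int s = 0; s < n; s++) {
      b[s][0] = R[s];
      for (int s2 = 0; s2 < n; s2++)
        A[s][s2] = (s == s2 ? 1.0 : 0.0) - gamma * Tpi[s][s2];
    }
    // JAMA solves the dense system; a sparse solver would exploit the band structure.
    // With gamma = 1 and an improper policy this matrix can be singular (see above).
    Matrix U = new Matrix(A).solve(new Matrix(b));
    return U.getRowPackedCopy();
  }
}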

Reinforcement learning

Being able to solve a MDP gives us a foundation for reinforcement learning algorithms, which this project is mainly about. To show you how to use the MDP package, we implement the adaptive dynamic programming algorithm. The main difference between a real learning situation and solving a MDP is that we don't know the environment apriori. It's observable of course, at least partly. Or else there's nothing to learn. This implies, from the point of view of doing this project, that you, the God, know the environment and will provide information to your learning algorithm (the brain of your quagent so to speak). The code in ADPAgent implements the passive adaptive dynamic programming algorithm. Again, as an exercise, try to replicate Figure 21.3 using the code provided.
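
At its core, passive ADP is bookkeeping: count the (state, action) pairs the policy tries and the outcomes you observe, estimate T(s, a, s') as the ratio of the two, and re-solve the resulting MDP. Below is a stripped-down sketch of that bookkeeping; the class and field names are ours, not those of ADPAgent.java.

import java.util.HashMap;
import java.util.Map;

// Sketch of the bookkeeping inside passive ADP: count (state, action) visits
// and (state, action, nextState) outcomes, and estimate T(s, a, s') as a ratio.
public class AdpCounts {

  private final Map<String, Integer> saCounts  = new HashMap<>();
  private final Map<String, Integer> sasCounts = new HashMap<>();

  public void record(String s, String a, String s2) {
    saCounts.merge(s + "|" + a, 1, Integer::sum);
    sasCounts.merge(s + "|" + a + "|" + s2, 1, Integer::sum);
  }

  // Maximum-likelihood estimate of T(s, a, s').
  public double estimate(String s, String a, String s2) {
    int nsa = saCounts.getOrDefault(s + "|" + a, 0);
    if (nsa == 0) return 0.0;
    return sasCounts.getOrDefault(s + "|" + a + "|" + s2, 0) / (double) nsa;
  }
}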

Usage and downloads

All the source code can be found in the source directory, or you can just download the jar file. Two simple examples of using the package, ValueIterationTest.java and PolicyIterationTest.java, are shown below.

import cs.decision.*;
import java.io.FileNotFoundException;

public class ValueIterationTest {

  public static void main(String args[]) {
    try {
      MDPFileParser parser = new MDPFileParser("textbook.txt");
      MarkovDecisionProcess mdp = parser.parse();

      ValueIteration vi = new ValueIteration(mdp);
      vi.setError(1e-4);
      vi.solve();
      mdp.dumpHTML(false);

    } catch (FileNotFoundException e){
        System.out.println(e);
    }
  }
}

import cs.decision.*;
import java.io.FileNotFoundException;

public class PolicyIterationTest {

  public static void main(String[] args) {
    try {
      MDPFileParser parser = new MDPFileParser("textbook.txt");
      MarkovDecisionProcess mdp = parser.parse();
      
      PolicyIteration pi = new PolicyIteration(mdp);
      
      pi.solve();
      mdp.dumpHTML(false);
      
    } catch (FileNotFoundException e){
      System.out.println(e);
    }
  }
}

Both programs produce the following output:
0.8115582191780821:RIGHT 0.8678082191780823:RIGHT 0.9178082191780822:RIGHT 1.0
0.7615582191780821:UP N 0.6602739726027398:UP -1.0
0.7053082191780822:UP 0.6553082191780824:LEFT 0.6114155251141554:LEFT 0.38792491121258266:LEFT
Those are the utility values under the optimal policy.

Don't forget to put mdp.jar on your CLASSPATH when compiling and running the programs. We use JAMA to solve linear systems; the JAMA jar file is in the source directory too.

The following example shows how to set up a simulator to test your learning algorithm. ADPSimulator.java employs two MDPs: one that we know, and one that the agent will learn.

import java.io.FileNotFoundException;
import cs.decision.*;
import cs.learning.*;

/**
 * Simulator for testing an ADP agent.
 */
public class ADPSimulator {

  MarkovDecisionProcess modelMDP;
  MarkovDecisionProcess learnedMDP;
  ADPAgent agent;
  
  public static void main(String[] args) throws FileNotFoundException {
    ADPSimulator simulator;
    
    if(args.length >= 1)
      simulator = new ADPSimulator(args[0]);
    else
      simulator = new ADPSimulator("textbook.txt");
    
    simulator.demo();
  }
  
  public ADPSimulator(String filename) throws FileNotFoundException{
    
    MDPFileParser parse = new MDPFileParser(filename);
    modelMDP = parse.parse();
    
    // Create the MDP to be learned
    learnedMDP = modelMDP.copyLayout();
    
    agent = new ADPAgent(learnedMDP);

  }
  
  public void demo() {
    // Generate a policy for the learned MDP
    //learnedMDP.generateProperPolicy();

    // This is the optimal policy.
    learnedMDP.setAction(1,1, 0);
    learnedMDP.setAction(2,1, 0);
    learnedMDP.setAction(3,1, 1);
    learnedMDP.setAction(1,2, 3);
    learnedMDP.setAction(3,2, 1);
    learnedMDP.setAction(1,3, 3);
    learnedMDP.setAction(2,3, 0);
    learnedMDP.setAction(3,3, 1);
    learnedMDP.setAction(1,4, 3);
    
    //Mark every state new
    for(State s=learnedMDP.getStartState(); s!=null; s=learnedMDP.getNextState())
      s.setVisited(false);
    
    learnedMDP.dumpHTML(true);
    run(100);
    learnedMDP.dumpTransitionModel();
  }
  
  public void run(int numTrials) {
    Percept percept = new Percept(null, 0.0);

    State s = learnedMDP.getStartState();
    State modelState;
    for(int trials=0; trials < numTrials; trials++) {

      learnedMDP.dumpHTML(false);
      
      percept.state = s;
      modelState = modelMDP.getCoincideState(s);
      percept.reward = modelMDP.getReward(modelState);
      
      // The agent decides what to do next
      Action a = agent.go(percept);
      
      if(a == null) {
        s = learnedMDP.getRandomReachableState();
      }else {
        // Given the state and the action, the simulator
        // determines the next state using the transition model
        modelState = modelMDP.transit(modelState, a);
        s = learnedMDP.getCoincideState(modelState);
      }
    }
  }
}

A word about development environments

Java, being a mainstream programming language, has attracted major software companies to pour money and human effort into it. As a consequence, there are a lot of good integrated development environments (IDEs) out there. Of course, there is nothing wrong with being a real programmer and using the good old BSD vi plus a shell, or (a little more civilized) emacs. Those who would like a more modern IDE will certainly enjoy Eclipse. The main strength of Eclipse, besides being totally free, is its built-in support for unit testing (JUnit) and refactoring, the two ingredients of so-called agile or extreme programming. It also integrates version control seamlessly. It is much more usable than the MS Visual Studio and VSS duo in nearly every respect, except of course at generating standards-noncompliant Java code.