Learning and Quagents


"That'll learn 'em!"

Reading

The most directly relevant reading for this assignment is Sections 17.1 -- 17.5 and all of Chapter 21. You may well want to use references besides the text. Dana Ballard's book Natural Computation (used in 240) could be useful, and there are many, many tutorials, books, and articles on reinforcement learning.

Code, Algorithms, Demos...

Perhaps the pseudocode in the text is a bit opaque. In any event, thanks to Bo Hu, this year we have a suite of working programs, hints, and tools to help you with this assignment. Here They Are!

Assignment

This is a team assignment. Teams of two or three are OK. Make sure there is a chance for each team member to shine! This assignment seems to break up nicely into fairly independent parts.

Gridworld

Do textbook exercise 17.6. Note that the MDP description language in the Helpful Material mentioned above does part 17.6a for you (or you may want to try the AIMA code referred to in the exercise).

You may want to try value iteration to compute utilities, as a precursor to 17.6.
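
To make that concrete, here is a minimal value-iteration sketch in Python for the textbook's 4 x 3 gridworld. The layout, rewards, discount, and noise model below are assumptions for illustration; swap in whatever your MDP description language actually specifies.

    # Minimal value iteration for a 4 x 3 gridworld (a sketch; layout,
    # rewards, discount, and noise below are assumptions to replace).

    GAMMA = 0.9          # discount factor (assumed)
    STEP_REWARD = -0.04  # living reward for nonterminal states (assumed)
    NOISE = 0.1          # probability of slipping to each side (assumed)

    WALLS = {(2, 2)}
    TERMINALS = {(4, 3): +1.0, (4, 2): -1.0}
    STATES = [(x, y) for x in range(1, 5) for y in range(1, 4) if (x, y) not in WALLS]
    ACTIONS = {'N': (0, 1), 'S': (0, -1), 'E': (1, 0), 'W': (-1, 0)}
    LEFT = {'N': 'W', 'W': 'S', 'S': 'E', 'E': 'N'}
    RIGHT = {v: k for k, v in LEFT.items()}

    def move(state, action):
        """Deterministic result of an action; bumping a wall or edge means staying put."""
        dx, dy = ACTIONS[action]
        nxt = (state[0] + dx, state[1] + dy)
        return nxt if nxt in STATES else state

    def transitions(state, action):
        """(probability, next_state) pairs for the noisy version of an action."""
        return [(1 - 2 * NOISE, move(state, action)),
                (NOISE, move(state, LEFT[action])),
                (NOISE, move(state, RIGHT[action]))]

    def value_iteration(tol=1e-6):
        U = {s: 0.0 for s in STATES}
        while True:
            delta, U_new = 0.0, {}
            for s in STATES:
                if s in TERMINALS:
                    U_new[s] = TERMINALS[s]
                else:
                    best = max(sum(p * U[s2] for p, s2 in transitions(s, a))
                               for a in ACTIONS)
                    U_new[s] = STEP_REWARD + GAMMA * best
                delta = max(delta, abs(U_new[s] - U[s]))
            U = U_new
            if delta < tol:
                return U

    for s, u in sorted(value_iteration().items()):
        print(s, round(u, 3))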

Quagents and Reinforcement Learning

Use the Quagent simulator to do the Quakeworld equivalent of textbook exercise 21.6 or 21.7, only in a world of "real" Quagents.

The goal, then, is for your team to implement and experiment with various passive and active reinforcement learning strategies and report on the results.
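
As one possible starting point for the passive side, here is a bare-bones TD(0) utility learner in Python. The environment hook (something that feeds it (state, reward, next-state) transitions while a fixed policy runs) is hypothetical; connect it to your own gridworld or Quagent controller.

    # Sketch of a passive TD(0) utility learner.  The agent follows a
    # fixed policy and nudges its utility estimates toward observed
    # one-step returns.  The learning-rate schedule is one common
    # textbook choice, not a requirement.

    from collections import defaultdict

    class PassiveTDAgent:
        def __init__(self, gamma=0.9):
            self.gamma = gamma
            self.U = defaultdict(float)      # learned utilities
            self.visits = defaultdict(int)   # visit counts drive the learning rate

        def alpha(self, s):
            """Decaying learning rate; 60/(59+n) is a common choice."""
            return 60.0 / (59.0 + self.visits[s])

        def observe(self, s, r, s_next):
            """TD(0) update: move U[s] toward r + gamma * U[s_next].
            If s_next is terminal, pin U[s_next] to its reward first."""
            self.visits[s] += 1
            self.U[s] += self.alpha(s) * (r + self.gamma * self.U[s_next] - self.U[s])

    # Hypothetical usage: run_trial(policy) would yield transitions from
    # whatever simulator or Quagent controller you build.
    # agent = PassiveTDAgent()
    # for s, r, s_next in run_trial(policy):
    #     agent.observe(s, r, s_next)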

Swap Meet?

The existence of an MDP description language opens the possibility of trading worlds with other teams, so that your learning algorithms can be tested on more than one world instance. We're encouraging such exchanges but not requiring them. If you swap worlds, be sure to describe that and what happened in your writeups.

Hints

We don't want to give much direction, since we want to encourage creativity (within the constraints of mastering, implementing, and experimenting with classic reinforcement learning techniques, of course). Ideally, this is a chance to see what can be done in a realistic environment with these popular learning methods. But beware! There is a very good reason that these problems look so simple (3 x 4 gridworlds, few actions): increasing complexity demands many more trials and much more computation.

Thus one very natural thing to do is to incorporate more sophisticated perceptions into one's state. Why run into a wall or blunder into kryptonite when you can sense them from afar? I think the one-word answer is "combinatorics": every bit of state doubles the size of the state space to be learned, and soon you're wedged. (A 3 x 4 grid has 12 states; add just five binary sensor readings and you are up to 12 x 2^5 = 384 states to learn about.)

But another nice idea is to use extra information for "shaping", that is, coaching and providing meta-advice to the learning algorithm. I think this is a fruitful field for further work. Some "coach" (internal or external) sees the global picture and predicts rewards and penalties.
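
One well-known (and not required) way to formalize such coaching is potential-based shaping: the coach adds gamma*phi(s') - phi(s) to the environment's reward, which steers learning without changing which policy is optimal. A sketch, where the potential function (negative Manhattan distance to an assumed goal cell) is purely illustrative:

    # Sketch of potential-based reward shaping.  The goal location and
    # the distance-based potential are assumptions standing in for
    # whatever global knowledge your "coach" actually has.

    GOAL = (4, 3)   # assumed goal cell
    GAMMA = 0.9

    def potential(state):
        """Higher potential = closer to the goal (the coach's global view)."""
        return -(abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1]))

    def shaped_reward(r, s, s_next):
        """Environment reward plus the shaping bonus gamma*phi(s') - phi(s)."""
        return r + GAMMA * potential(s_next) - potential(s)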

Although the gridworld simulator is certainly a model of the Quake world, that doesn't mean the learning methods must be model-based. If you want to do TD or Q-learning (learning what to do without worrying about why), that would be a very welcome solution.
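
If you go that route, here is a minimal tabular Q-learning sketch. The learning rate, discount, exploration rate, and the choice of states and actions are all placeholders; the simple actions suggested below ("turnby 90", "walk 20", "pickup") would be one reasonable action set.

    # Minimal tabular Q-learning sketch with epsilon-greedy exploration.
    # States and actions are whatever hashable objects you choose.

    import random
    from collections import defaultdict

    class QLearner:
        def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
            self.actions = list(actions)
            self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
            self.Q = defaultdict(float)   # keyed by (state, action)

        def choose(self, s):
            """Epsilon-greedy action selection."""
            if random.random() < self.epsilon:
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.Q[(s, a)])

        def update(self, s, a, r, s_next, terminal=False):
            """Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
            best_next = 0.0 if terminal else max(self.Q[(s_next, a2)] for a2 in self.actions)
            self.Q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.Q[(s, a)])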

How can reinforcement learning, with its typically thousands of trials for reasonably-sized problems, be made useful in the "real world"? An extremely common trick, used in lots of real-life computer-learning applications, is to use a simulator. Yes, yes, Quagents ARE a simulator, but here I mean a simple, fast, pared-down version of the Quake world. In fact one could use gridworld again, especially if you restricted your bot to very simple actions like "turnby 90", "walk 20", and "pickup".

Use the controller to keep the bot in the small grid (i.e., don't issue commands that take it out of the grid). The controller can know what state the bot is in. Create a very simple environment with a few obstacles (or dangers) and rewards. Train in the gridworld simulation, transfer the policy to the "real" bot, and see what happens. Only then start expanding the repertoire of possible actions and environmental states. So one idea might be to find the shortest path to the goal around a box. Another might be to learn the best way to approach and pick up gold that is close to kryptonite. Your quagent controller can compute the rewards and punishments by suitably weighting energy and wealth, as in the sketch below; you should expect different policies as the importance of these factors varies.
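
For that reward signal, here is a minimal sketch of the kind of energy/wealth weighting meant above. The weights and the way you read energy and wealth from your controller are assumptions to tune, not prescriptions.

    # Sketch of a per-step reward that weights changes in energy and
    # wealth.  The weights are assumptions; how you obtain the energy
    # and wealth values from the Quagent protocol is up to your controller.

    ENERGY_WEIGHT = 0.01   # assumed value of one unit of energy
    WEALTH_WEIGHT = 1.0    # assumed value of one unit of wealth

    def reward(prev_energy, prev_wealth, energy, wealth):
        """Scalar reward for one step: weighted change in energy plus wealth."""
        return (ENERGY_WEIGHT * (energy - prev_energy)
                + WEALTH_WEIGHT * (wealth - prev_wealth))

Rerunning the learning with different weights, and comparing the resulting policies, is exactly the kind of experiment worth quantifying in your write-up.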

Suppose you actually did 21.6 just as written. You'd have a simulator in which each action takes nanoseconds. Use it to learn the right policies, and THEN take those policies to the real Quake world, which has an environment more or less like the one you did all your millions of learning trials on. Even if it's different, it could still be close enough that your off-line policy is a good starting place. Thus part of the fun is allowing real effects into your Quagent situation that the gridworld does not describe. One unavoidable one that comes to mind immediately is that the bots do not go exactly the commanded WALKBY distance. If you proceed this way, then one cool thing to look at is how much the "real" (Quake) world can deviate from the "simulated" one: different dangers, random events, different distances, different obstacles, etc.
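
One cheap way to study that deviation before going near Quake at all is to perturb the simulator itself and re-test the learned policy. A sketch, where the Gaussian error on the commanded WALKBY distance is a guess rather than a measured model, and run_episode is a hypothetical hook into your own simulator:

    # Sketch: test a policy learned in the clean gridworld against a
    # noisier version where WALKBY-style moves undershoot or overshoot.
    # The noise model is an assumption; the real deviation is something
    # to measure in the Quake world itself.

    import random

    def noisy_walk(commanded, sigma=0.15):
        """Distance actually covered for a commanded WALKBY distance."""
        return max(0.0, random.gauss(commanded, sigma * commanded))

    def evaluate(policy, run_episode, trials=100):
        """Average return of a fixed policy over many noisy episodes.
        run_episode(policy) is a hypothetical hook into your simulator."""
        return sum(run_episode(policy) for _ in range(trials)) / trials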

It turns out that it may not be obvious, or even easy, to translate even the simulated Quake world into the simulated gridworld. So if you run into conceptual issues, don't be surprised -- they are part of the fun. For instance, a basic question is how to translate the reward and penalty values from the Quake world to the gridworld simulator: how big is the reward at the exit, and how big is the negative reward for kryptonite? A second question is granularity. E.g., kryptonite has a certain range, possibly spanning several grid cells in the simulator. Should one treat it as a lump occupying a single cell, or as a patch of neighboring cells, all with negative rewards? (One concrete possibility is sketched below.) We would love to see how far this learning paradigm can be pushed in whatever technical direction you choose. Suppose, for instance, you use the bot's sensors to try to determine your state, or suppose you give the quagent some memory. There is lots of work on that by PhD student Andrew McCallum (check out the CS department's /u/ftp/pub/papers/robotics/ directory, papers of the form "95.mccallum" and "96.mccallum"). And of course there is a huge volume of literature, with a fair amount of it aimed at learning to control real hardware.
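
On the granularity question, one concrete (but by no means the only) answer is to spread the kryptonite penalty over every grid cell whose center falls within its range. A sketch, with made-up cell size, range, and penalty values:

    # Sketch of one way to discretize a kryptonite's area of effect onto
    # the gridworld.  Cell size, range, and penalty are assumptions.

    CELL = 50.0                # assumed gridworld cell size in Quake units
    KRYPTONITE_RANGE = 120.0   # assumed radius of effect
    KRYPTONITE_PENALTY = -0.5  # assumed per-cell negative reward

    def kryptonite_rewards(kx, ky, width, height):
        """Map each affected (col, row) cell to a negative reward."""
        rewards = {}
        for col in range(width):
            for row in range(height):
                cx, cy = (col + 0.5) * CELL, (row + 0.5) * CELL
                if (cx - kx) ** 2 + (cy - ky) ** 2 <= KRYPTONITE_RANGE ** 2:
                    rewards[(col, row)] = KRYPTONITE_PENALTY
        return rewards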

What to Hand In

Send CB a (strictly private and confidential) review of the performance of each of your teammates if there is anything special that I should know. That's brown@cs.rochester.edu.

Make sure we can tell who did what in the project. Provide a thorough and thoughtful write-up of your work, formally describing the environment and the actions available to your quagent and quantifying its learning behavior under the different conditions (like different learning algorithms, learning rates, etc.) you set up. Don't forget the references section.

Aim to make your project so well-conceived, well-done, and well-reported that your report will take its place in the 242 Hall of Fame. I am hoping for ingenious, compelling experiments and conclusions. Graphics are nice, especially for plotting learning rates, comparative performance, etc. Maps and images of the environment your quagent inhabits could be fun.
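
If you want a quick start on those plots, here is a small matplotlib sketch for learning curves; the reward sequences are placeholders for whatever your experiments actually log.

    # Sketch of a learning-curve plot for the write-up: total reward per
    # trial for one or more learners.

    import matplotlib.pyplot as plt

    def plot_learning_curves(curves, filename="learning_curves.pdf"):
        """curves: dict mapping a label (e.g. "Q-learning") to a list of per-trial rewards."""
        for label, rewards in curves.items():
            plt.plot(range(1, len(rewards) + 1), rewards, label=label)
        plt.xlabel("Trial")
        plt.ylabel("Total reward per trial")
        plt.legend()
        plt.savefig(filename)

    # Hypothetical usage, once td_rewards and q_rewards have been logged:
    # plot_learning_curves({"TD(0)": td_rewards, "Q-learning": q_rewards})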

Create a PDF document of your writeup and upload it to BB.

Write in good scientific style, in the form of a computer science technical report. You may find this set of advice on writing and homework useful. There are some example writeups of student projects on the Main Assignment Page.

---
242 Home Page

This page is maintained by Nature Lover

Last update: 1.9.04.