Your assignment is to implement IBM Model 1,
as described in the paper *The Mathematics of Statistical
Machine Translation*, training its parameters
using Expectation Maximization on
a parallel French-English corpus and evaluating
the results on held-out test data in terms of
model perplexity. In particular, your implementation
should include (a sketch of the core loop follows the list):

- Insertion from the dummy NULL token
- Calculation of perplexity on training and test data
- Output of parameters in human-readable format
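
As a rough sketch of the core loop (not a reference implementation), the following Python assumes the bitext is a list of `(french_tokens, english_tokens)` pairs; the function names and the uniform initialization are my choices, the NULL token is prepended to each English sentence as in the paper, and perplexity is computed as 2 raised to the negative average log2 probability per French word under the Model 1 likelihood.

```python
import math
from collections import defaultdict

def train_model1(bitext, iterations=10, floor=1e-7):
    """EM training of the Model 1 lexical table t(f|e).

    bitext: list of (french_tokens, english_tokens) pairs.
    A NULL token is prepended to every English sentence so that
    French words with no real English counterpart can align to it.
    """
    # Uniform initialization is safe: Model 1's likelihood has a
    # single maximum, so EM converges regardless of the start point.
    french_vocab = {f for fs, _ in bitext for f in fs}
    uniform = 1.0 / len(french_vocab)
    t = defaultdict(lambda: uniform)

    for _ in range(iterations):
        count = defaultdict(float)  # expected counts c(f, e)
        total = defaultdict(float)  # expected counts c(e)

        # E-step: fractional alignment counts for each French word.
        for fs, es in bitext:
            es = ["NULL"] + es
            for f in fs:
                z = sum(t[(f, e)] for e in es)  # normalizer over English positions
                for e in es:
                    p = t[(f, e)] / z
                    count[(f, e)] += p
                    total[e] += p

        # M-step: re-estimate t(f|e), flooring to avoid zeros.
        for (f, e), c in count.items():
            t[(f, e)] = max(c / total[e], floor)

    return dict(t)

def perplexity(bitext, t, floor=1e-7):
    """Per-word perplexity: 2 ** (-log2 P(corpus) / French word count).

    P(f|e) is the Model 1 likelihood, prod_j 1/(l+1) * sum_i t(f_j|e_i),
    with the constant length term epsilon dropped for simplicity.
    Unseen pairs back off to the floor, which matters on test data.
    """
    log2_prob, n_words = 0.0, 0
    for fs, es in bitext:
        es = ["NULL"] + es
        for f in fs:
            p = sum(t.get((f, e), floor) for e in es) / len(es)
            log2_prob += math.log2(max(p, floor))
            n_words += 1
    return 2.0 ** (-log2_prob / n_words)
```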

Training data can be found here: /p/mt/corpora/hansard. This directory contains parallel French-English text from the Canadian Parliament. Both sides (French and English) have been run through a tokenizer (to, for example, split off punctuation from words).

One of the goals of this assignment is to get familiar with some of the issues involved in working with large corpora, in particular smoothing and pruning. It is a good idea to floor all probabilities at a low value, say 1e-07, to avoid numerical problems as well as dead ends in the EM training. Similarly, you may need to "prune" low-valued parameters to keep memory usage and file sizes manageable (see the sketch below).
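
For illustration only (the helper name and the threshold are invented here, and the right cutoff depends on your memory budget), pruning can be as simple as filtering the table after each M-step:

```python
def prune(t, threshold=1e-4):
    """Drop low-valued parameters from the translation table t(f|e).

    Shrinks both the in-memory table and any file it is written to;
    the 1e-4 threshold is a guess, tune it to your memory budget.
    """
    return {pair: p for pair, p in t.items() if p >= threshold}
```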

You may use any programming language, but please start from scratch! Please turn in:

- your source code
- a graph of perplexity over ten training iterations on training and test data
- some discussion of the translation table learned: examples of good and bad lexical pairs, what sort of problems might be "fooling" the algorithm, or whatever strikes your eye!