| layout | img | img_url | caption | title | active_tab |
|---|---|---|---|---|---|
| ling506_frame | rosetta | | Rosetta stone (credit: calotype46) | Homework 2 \| Decoding | homework |
Decoding is the problem of taking an input sentence in a foreign language:
honorables sénateurs , que se est - il passé ici , mardi dernier ?
and finding the best translation into a target language, according to a model:
honourable senators , what happened here on Tuesday ?
Your task is to find the most probable translation, given the foreign input, the translation likelihood model, and the English language model. We assume the traditional noisy channel decomposition:

$$ \mathbf{e}^* = \arg\max_{\mathbf{e}} p(\mathbf{e} \mid \mathbf{f}) = \arg\max_{\mathbf{e}} p(\mathbf{f} \mid \mathbf{e}) \, p(\mathbf{e}) $$
We also assume that the distribution over all segmentations and all alignments is uniform. This means that there is no distortion model or segmentation model.
ssh into cl.linguistics.illinois.edu
Once you've logged in, run this command:
`git clone https://github.com/2016-Spring-UIUC-LING506/YOUR_GITHUB_USERNAME_GOES_HERE-decoding.git`
In your github repository, you will find a Python program called `decode`, which is an implementation of a simple stack decoder.
The decoder translates monotonically — that is, without reordering the
English phrases — and by default it also uses very strict pruning limits,
which you can vary on the command line.
The provided decoder solves the search problem, but makes two assumptions. The first is that the most probable translation corresponds to the single most probable segmentation and alignment of the input, rather than to a sum over all of them. This approximation means that you can use the dynamic programming Viterbi algorithm to find the best translation.
The second assumption is that there is no reordering of phrases during translation; source words can therefore be reordered only when the reordering is part of a memorized phrase pair. If word-for-word translations are all that are available for some sentence, the translations will always be in the source language order.
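To make these assumptions concrete, here is a minimal sketch of monotone stack decoding in that style. The `hypothesis` record, the `tm` dictionary (mapping tuples of source words to lists of `(english, logprob)` options), the `lm` object with `begin()`, `score()`, and `end()` methods, and the `stack_size` and `max_phrase_len` parameters are all illustrative assumptions, not necessarily the interfaces used in the starter code; the sketch also assumes every source word has at least one translation.

```python
from collections import namedtuple

# A hypothesis records its total log score, the language model state needed to
# extend it, a back-pointer, and the English phrase added in the last step.
hypothesis = namedtuple("hypothesis", "logprob, lm_state, predecessor, phrase")

def decode_monotone(f, tm, lm, stack_size=10, max_phrase_len=4):
    """Translate the tuple of source words `f` strictly left to right."""
    start = lm.begin()
    stacks = [{} for _ in range(len(f) + 1)]         # stacks[i]: hypotheses covering f[:i]
    stacks[0][start] = hypothesis(0.0, start, None, None)
    for i, stack in enumerate(stacks[:-1]):
        # Histogram pruning: expand only the stack_size most probable hypotheses.
        for h in sorted(stack.values(), key=lambda x: -x.logprob)[:stack_size]:
            for j in range(i + 1, min(i + max_phrase_len, len(f)) + 1):
                for english, tm_logprob in tm.get(f[i:j], []):
                    logprob, lm_state = h.logprob + tm_logprob, h.lm_state
                    for word in english.split():
                        lm_state, word_logprob = lm.score(lm_state, word)
                        logprob += word_logprob
                    if j == len(f):
                        logprob += lm.end(lm_state)   # end-of-sentence probability
                    # Recombination: keep only the best hypothesis per LM state.
                    if lm_state not in stacks[j] or stacks[j][lm_state].logprob < logprob:
                        stacks[j][lm_state] = hypothesis(logprob, lm_state, h, english)
    best = max(stacks[-1].values(), key=lambda x: x.logprob)
    def backtrace(h):
        return [] if h.predecessor is None else backtrace(h.predecessor) + [h.phrase]
    return " ".join(backtrace(best))
```

The two ideas worth noticing are that hypotheses covering the same source prefix and ending in the same language-model state are recombined, and that histogram pruning keeps only a fixed number of hypotheses per stack.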
In the starter kit there is also the `data` directory, which contains a translation model, a language model, and a set of input sentences to translate. Run the decoder using this command:
`decode > output`
This loads the models and decodes the input sentences, storing the result in output. You can see the translations simply by looking at the file. To calculate their true model score, run the command:
`grade < output`
This command computes the probability of the output sentences according to the model. It works by summing over all possible ways that the model could have generated the English from the French. In general this is intractable, but because the phrase dictionary is fixed and sparse, the specific instances here can be computed in a few minutes. It is still easier to do this exactly than it is to find the optimal translation. In fact, if you look at the grade script you may get some hints about how to do the assignment!
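To make that description concrete, here is one way such a sum over segmentations and alignments can be organized as a memoized recursion over French positions and a bitmask of covered English words. This sketches only the translation-model part of the score; the function name and the phrase-table representation (a dict keyed by pairs of word tuples) are invented for illustration and are not taken from the actual grade script.

```python
import math
from functools import lru_cache

def alignment_sum_logprob(f, e, tm, max_phrase_len=4):
    """Log of the sum, over all segmentations and (possibly reordered) phrase
    alignments, of the product of phrase translation probabilities.
    `f` and `e` are tuples of words; `tm` maps (french_phrase, english_phrase)
    pairs of word tuples to log probabilities (an illustrative assumption)."""
    n, m = len(f), len(e)
    full = (1 << m) - 1                       # bitmask: every English word covered

    @lru_cache(maxsize=None)
    def prob(i, covered):
        # Probability of generating f[i:] using exactly the English words
        # whose bits are still unset in `covered`.
        if i == n:
            return 1.0 if covered == full else 0.0
        total = 0.0
        for j in range(i + 1, min(i + max_phrase_len, n) + 1):
            for k in range(m):
                for l in range(k + 1, m + 1):
                    span = ((1 << l) - 1) ^ ((1 << k) - 1)
                    if covered & span:
                        continue              # those English words are already used
                    logp = tm.get((f[i:j], e[k:l]))
                    if logp is not None:
                        total += math.exp(logp) * prob(j, covered | span)
        return total

    p = prob(0, 0)
    return math.log(p) if p > 0.0 else float("-inf")
```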
Improving the search algorithm in the decoder — for instance by enabling it to search over permutations of English phrases — should permit you to find more probable translations of the input French sentences than the ones found by the baseline system.
The grade program will tell you the probability of your output.
Your task for this assignment is to find the English sentence with the highest possible probability. Formally, this means your goal is to solve the problem:

$$ \mathbf{e}^* = \arg\max_{\mathbf{e}} p(\mathbf{e} \mid \mathbf{f}) $$

where

$$ p(\mathbf{e} \mid \mathbf{f}) \propto p(\mathbf{e}) \, p(\mathbf{f} \mid \mathbf{e}) $$

and

$$ p(\mathbf{f} \mid \mathbf{e}) = \sum_{\mathbf{a}} p(\mathbf{f}, \mathbf{a} \mid \mathbf{e}). $$
We will make the simplifying assumption that segmentation and reordering probabilities are uniform across all sentences, and hence constant. This results in a model whose probabilities do not sum to one, but from a practical perspective it slightly simplifies the implementation without substantially harming empirical accuracy. This means that you need only consider the product of the phrase translation probabilities:
$$ p(\mathbf{f} \mid \mathbf{e}) = \prod_{\langle i,i',j,j' \rangle \in \mathbf{a}} p \left( \mathbf{f}_{\langle i,i' \rangle} \mid \mathbf{e}_{\langle j,j' \rangle} \right) $$
where $$ \langle i,i' \rangle $$ and $$ \langle j,j' \rangle $$ index phrases in $$ \mathbf{f} $$ and $$ \mathbf{e} $$, respectively.
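As a small worked example, the translation-model score of one particular alignment is just the product of its phrase probabilities, which is most conveniently accumulated as a sum of log probabilities. The phrase pairs and numbers below are made up purely for illustration:

```python
import math

# Toy phrase-table entries and one candidate alignment (made-up probabilities).
phrase_logprob = {
    ("honorables sénateurs", "honourable senators"): math.log(0.4),
    ("mardi dernier", "on Tuesday"): math.log(0.3),
}

alignment = [
    ("honorables sénateurs", "honourable senators"),
    ("mardi dernier", "on Tuesday"),
]

# The sum of per-phrase log probabilities, i.e. the log of the product 0.4 * 0.3 = 0.12.
tm_logprob = sum(phrase_logprob[pair] for pair in alignment)
print(tm_logprob, math.exp(tm_logprob))
```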
Unfortunately, even with all of these simplifications, finding the most
probable English sentence is completely intractable! To compute it
exactly, for each English sentence you would need to compute
$$ p(\mathbf{f} \mid \mathbf{e}) $$
as a sum over all possible alignments with the French sentence:

$$ p(\mathbf{f} \mid \mathbf{e}) = \sum_{\mathbf{a}} \prod_{\langle i,i',j,j' \rangle \in \mathbf{a}} p \left( \mathbf{f}_{\langle i,i' \rangle} \mid \mathbf{e}_{\langle j,j' \rangle} \right). $$
Since this involves multiplying together many small probabilities, it is
helpful to work in logspace to avoid numerical underflow. We instead solve for
$$ \log p(\mathbf{f},\mathbf{a} \mid \mathbf{e}) + \log p(\mathbf{e}) = \log p(e_1 \mid START) + \sum_{j=2}^{J} \log p(e_j \mid e_{j-1}e_{j-2}) + \log p(END \mid e_J) + \sum_{\langle i,i',j,j' \rangle \in \mathbf{a}} \log p \left( \mathbf{f}_{\langle i,i' \rangle} \mid \mathbf{e}_{\langle j,j' \rangle} \right). $$
The default decoder already works with log probabilities, so it is
not necessary for you to perform any additional conversion; you can simply work
with the sum of the scores that the model provides for you. Note that
since probabilities are always less than or equal to one, their values in
logspace will always be less than or equal to zero (you may
notice that `grade`
sometimes reports positive translation model
scores; this is because the sum it computes does not include
the large negative constant associated
with the log probabilities of segmentation and reordering). Hence your
translations will always have negative scores, and you will be looking for
the one with the smallest absolute value. In other words, we have
transformed the problem of finding the most probable translation into a
problem of finding the shortest path through a large graph of possible
outputs.
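As a tiny illustration of why the scores are kept in logspace (the probabilities here are made up):

```python
import math

# Multiplying many small probabilities underflows to zero, but summing their
# logarithms stays perfectly representable.
probs = [0.01] * 500

product = 1.0
for p in probs:
    product *= p
print(product)        # prints 0.0: the true value (1e-1000) underflows

log_score = sum(math.log(p) for p in probs)
print(log_score)      # about -2302.6: a negative score, and closer to zero is better
```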
Under the phrase-based model we've given you, the goal is to find a phrase segmentation, translation of each resulting phrase, and permutation of those phrases such that the product of the phrase translation probabilities and the language model score of the resulting sentence is as high as possible. Arbitrary permutations of the English phrases are allowed, provided that the phrase translations are one-to-one and exactly cover the input sentence. Even with all of the simplifications we have made, this problem is still NP-complete, so we recommend that you solve it using an approximate method like stack decoding, discussed in Chapter 6 of the textbook; a sketch of the bookkeeping this involves appears after the lists below. In fact, the baseline score you must beat was achieved using stack decoding with reordering. You can trade efficiency for search effectiveness by implementing histogram pruning or threshold pruning, or by playing around with reordering limits as described in the textbook. Or, you might consider implementing other approaches to solving the search problem:
- Implement a greedy decoder.
- Reduce the problem to a traveling salesman problem (TSP) and decode using an off-the-shelf TSP solver.
- Reformulate the problem as a shortest-path problem through a finite-state lattice and decode using finite-state transducers.
- Decode using Lagrangian relaxation (often finds exact solutions!)
Several techniques used for the IBM Models (which have search problems very similar to those of phrase-based models) could also be adapted:
- Implement A* Search.
- Implement a bidirectional decoder.
- Decode using an integer linear programming (ILP) solver.
Also consider marginalizing over the different alignments rather than maximizing over them; that is, searching for the translation $$ \mathbf{e} $$ that maximizes $$ p(\mathbf{e}) \sum_{\mathbf{a}} p(\mathbf{f},\mathbf{a} \mid \mathbf{e}) $$.
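If you do pursue stack decoding with reordering, the main new bookkeeping beyond the monotone sketch above is a record of which source words each hypothesis has already covered. The sketch below uses a bitmask for that purpose and groups hypotheses by how many words they cover; as before, the `tm` and `lm` interfaces and the pruning parameters are illustrative assumptions rather than the starter code's actual API, and this is a starting point rather than a complete solution (it has no reordering limit or future-cost estimate, for example).

```python
from collections import namedtuple

# Hypotheses now also carry a coverage bitmask recording which source words
# have already been translated, so phrases can be picked in any order.
hyp = namedtuple("hyp", "logprob, lm_state, coverage, predecessor, phrase")

def decode_with_reordering(f, tm, lm, stack_size=100, max_phrase_len=4):
    n = len(f)
    full = (1 << n) - 1                              # all source words covered
    stacks = [{} for _ in range(n + 1)]              # stacks[k]: k words covered
    start = lm.begin()
    stacks[0][(start, 0)] = hyp(0.0, start, 0, None, None)
    for k in range(n):
        # Histogram pruning, as in the monotone sketch.
        for h in sorted(stacks[k].values(), key=lambda x: -x.logprob)[:stack_size]:
            for i in range(n):
                for j in range(i + 1, min(i + max_phrase_len, n) + 1):
                    span = ((1 << j) - 1) ^ ((1 << i) - 1)    # bits for f[i:j]
                    if h.coverage & span:
                        continue                               # overlaps an earlier choice
                    for english, tm_logprob in tm.get(f[i:j], []):
                        logprob, lm_state = h.logprob + tm_logprob, h.lm_state
                        for word in english.split():
                            lm_state, word_logprob = lm.score(lm_state, word)
                            logprob += word_logprob
                        coverage = h.coverage | span
                        if coverage == full:
                            logprob += lm.end(lm_state)
                        key = (lm_state, coverage)             # recombination key
                        target = stacks[k + (j - i)]
                        if key not in target or target[key].logprob < logprob:
                            target[key] = hyp(logprob, lm_state, coverage, h, english)
    best = max(stacks[n].values(), key=lambda x: x.logprob)
    def backtrace(h):
        return [] if h.predecessor is None else backtrace(h.predecessor) + [h.phrase]
    return " ".join(backtrace(best))
```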
But the sky's the limit! There are many, many ways to try to solve the decoding problem, and you can try anything you want as long as you follow the ground rules:
- For this assignment you may work independently, or in groups.
  - Groups must be publicly announced in class.
  - Every member of the group must reply to the Instructor's announcement of this HW on Piazza, listing all group members in the reply.
  - Everyone in the group will receive the same grade on the assignment.
  - Claims regarding who did or did not do what work will be completely ignored.
- You must turn in your code and writeup:
  - All of your work must be regularly checked in to github. In order to receive full credit, you must have regular checkins to your repository, and each checkin must have a reasonable commit message.
  - All of your work, including your code and LaTeX writeup, must be checked in and pushed to your github repository.
  - Your writeup must be a clear, mathematical description of your algorithm and its motivation, written in scientific style. This needn't be long, but it should be clear enough that one of your fellow students could re-implement it exactly.
- You do not need any other data than what is provided. You should feel free to use additional codebases and libraries except for those expressly intended to decode machine translation models. You must write your own decoder. If you would like to base your solution on finite-state toolkits or generic solvers for traveling salesman problems or integer linear programming, that is fine. But machine translation software including (but not limited to) Moses, cdec, Joshua, or phrasal is off-limits. You may of course inspect these systems if you want to understand how they work. But be warned: they are generally quite complicated because they provide a great deal of other functionality that is not the focus of this assignment. It is possible to complete the assignment with a quite modest amount of python code. If you aren't sure whether something is permitted, ask us.
If you have any questions or you're confused about anything, just ask.