Hyperparameters to reproduce paper results #32
Comments
I also had a hard time reproducing the unsupervised MT result. I ran exactly as suggested in the README on en-fr. At epoch 4, my "valid_fr-en_mt_bleu" is only a little above 1, but you reported "valid_fr-en_mt_bleu -> 10.55". I ran it a few times.
@xianxl Our implementation based on fairseq contains some methods that differ from our paper, so there are some slightly different experiment settings in fairseq.
@StillKeepTry thanks for your reply. So what do you recommend setting word_mask_keep_rand to in the MASS-fairseq implementation? The default is "0,0,1" (which means no mask?), and this arg is not set in the training command you shared.
I followed exactly the same settings as the git page shows, running on a machine with 8 V100 GPUs as the paper describes. The git page claims that after 4 epochs, even without back-translation, the unsupervised BLEU should already be close to the numbers it reports for epoch 4. However, this is not what I got; my numbers at epoch 4 are much worse. Could you please let us know if any param is wrong, or whether there is some hidden recipe that we're not aware of to reproduce the results? On the other hand, I also loaded your pre-trained en-fr model, and the results are much better. So alternatively, could you share the settings you used to train the pre-trained model?
After some investigation, it seems that the suggested epoch size (200000) is really small and not the one used to produce the paper results. Could you confirm this hypothesis?
Can we conclude that the results are not reproducible?
First, we used 50M monolingual sentences for each language during training; this may be one reason (pre-training usually needs more data). Second, to explain this, I have uploaded to this link some logs from my previous experiments with epoch_size = 200000. That run obtains 8.24/5.45 BLEU at 10 epochs, 11.95/8.21 at 50 epochs, 13.38/9.34 at 100 epochs, and 14.26/10.06 at 146 epochs. This is just the result after 146 epochs, while we ran over 500 epochs of pre-training in our experiments. In the latest code, you can also try the updated pre-training hyperparameters, which result in better performance.
@StillKeepTry could you also please provide the log for bt steps for the en_de model you pretrained? Also log for pretraining and bt for en_fr would be highly appreciated as well! |
@k888 have you found the hyperparameters for the en_de BT steps?
Hello, thanks for your great work. You said that you used 50M monolingual sentences (50,000,000) for each language during training, and the epoch_size is 200,000, so why does one pass over the 50M data take 62 epochs rather than 250? @StillKeepTry
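For reference, a minimal sketch of the arithmetic behind this question, assuming epoch_size counts sentences processed per epoch (the actual bookkeeping in the code, e.g. per-GPU or per-token batching, may account for the different epoch count):

```python
# Back-of-the-envelope check of the expectation above. Assumption: epoch_size
# counts sentences seen per epoch; the real accounting may be per GPU or per
# token, which would change the result.
monolingual_sentences = 50_000_000   # per language, as stated by the authors
epoch_size = 200_000                 # suggested epoch size

epochs_per_full_pass = monolingual_sentences / epoch_size
print(epochs_per_full_pass)          # 250.0 -> ~250 epochs to see the data once
```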
Could you share the full command (and default hyperparameters) you used? For example, I found --word_mask_keep_rand is "0.8,0.1,0.1" in MASS but "0,0,1" in MASS-fairseq.
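For readers unsure what the three numbers control, here is a minimal sketch that assumes the XLM/BERT-style reading of --word_mask_keep_rand as (mask, keep, random) probabilities applied to each position selected for prediction; apply_word_mask, MASK_ID, and VOCAB_SIZE are illustrative names, not taken from the MASS code:

```python
import random

MASK_ID = 0          # hypothetical <mask> token id (not from the MASS code)
VOCAB_SIZE = 32000   # hypothetical vocabulary size

def apply_word_mask(tokens, positions, probs=(0.8, 0.1, 0.1)):
    """Apply a BERT-style (mask, keep, random) split, e.g. "0.8,0.1,0.1",
    to the token positions selected for prediction."""
    p_mask, p_keep, _p_rand = probs
    out = list(tokens)
    for i in positions:
        r = random.random()
        if r < p_mask:                  # e.g. 80%: replace with <mask>
            out[i] = MASK_ID
        elif r < p_mask + p_keep:       # e.g. 10%: keep the original token
            out[i] = tokens[i]
        else:                           # e.g. 10%: replace with a random token
            out[i] = random.randrange(VOCAB_SIZE)
    return out

# Under this reading, "0,0,1" never inserts <mask>: every selected position
# is replaced with a random token instead.
```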