Has anybody pre-trained successfully on en-de translation with MASS? #62
Comments
I have fixed some params in the uploaded model. Could you give it a try?
I did not mean to reload the uploaded model. I want to reproduce the results from scratch (from pre-training to fine-tuning), but I failed at the pre-training stage. Anyway, which params did you fix?
I also failed at pre-training from scratch. Here is my training script.
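For reference, a MASS pre-training invocation following the en-fr recipe in the repository README looks roughly like the sketch below. The experiment name, data path, and schedule values are illustrative assumptions rather than the poster's actual settings, and --emb_dim 512 reflects the transformer-base configuration mentioned below (the released big models use 1024):

```bash
# Sketch of MASS pre-training with the XLM-style train.py. Values are
# assumptions adapted from the README's en-fr example, not the actual script.
python train.py \
  --exp_name unsupMT_ende \
  --data_path ./data/processed/en-de/ \
  --lgs 'en-de' \
  --mass_steps 'en,de' \
  --encoder_only false \
  --emb_dim 512 \
  --n_layers 6 \
  --n_heads 8 \
  --dropout 0.1 \
  --attention_dropout 0.1 \
  --gelu_activation true \
  --tokens_per_batch 3000 \
  --optimizer adam_inverse_sqrt,beta1=0.9,beta2=0.98,lr=0.0001 \
  --epoch_size 200000 \
  --max_epoch 100 \
  --word_mass 0.5 \
  --min_len 5
```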
I use 170,000,000 German sentences and 100,000,000 English sentences; due to memory constraints, I use transformer-base instead of transformer-big. Here is my fine-tuning script.
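For the unsupervised en-de setting, fine-tuning reloads the pre-trained checkpoint and trains with back-translation steps. A sketch along the lines of the README follows; the checkpoint path and batch settings are assumptions:

```bash
# Sketch of back-translation fine-tuning from a pre-trained MASS checkpoint.
# MODEL points at a hypothetical checkpoint path; values are assumptions.
MODEL=./dumped/unsupMT_ende/checkpoint.pth

python train.py \
  --exp_name unsupMT_ende_ft \
  --data_path ./data/processed/en-de/ \
  --lgs 'en-de' \
  --bt_steps 'en-de-en,de-en-de' \
  --encoder_only false \
  --emb_dim 512 \
  --n_layers 6 \
  --n_heads 8 \
  --dropout 0.1 \
  --attention_dropout 0.1 \
  --gelu_activation true \
  --tokens_per_batch 2000 \
  --batch_size 32 \
  --bptt 256 \
  --optimizer adam_inverse_sqrt,beta1=0.9,beta2=0.98,lr=0.0001 \
  --epoch_size 200000 \
  --max_epoch 30 \
  --eval_bleu true \
  --reload_model "$MODEL,$MODEL"
```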
I could only get 22.86 BLEU translating German to English on newstest2016, which is far from what is reported in the paper. Could you give me some advice on pre-training from scratch and how to fully reproduce your results?
@StillKeepTry Would you be able to share the training logs of the pre-trained models that are offered as downloads?
Thanks!
@StillKeepTry Quick question: the logs indicate that the training was done with 5 million sentences. Does this mean that the pre-trained models offered were trained with a subset of the monolingual data?
@StillKeepTry Can you confirm that the provided pre-trained model was only trained with 5 million sentences?
@StillKeepTry Could you confirm that the pre-trained models provided are trained on a subsample? If so, did you randomly subsample the newscrawl data, or how were the 5 (or 50) million sentences selected?
@tdomhan It is trained on a subsample (50 million sentences). The corpus is first tokenized with mosesdecoder, then I remove sentences whose length is > 175 after tokenization. Finally, I randomly choose 50M sentences from the tokenized data.
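A minimal shell sketch of that preprocessing pipeline, assuming the standard Moses tokenizer script and hypothetical file names:

```bash
# Tokenize with Moses, drop sentences longer than 175 tokens, and
# randomly sample 50M lines (file names here are illustrative).
perl mosesdecoder/scripts/tokenizer/tokenizer.perl -l en -threads 8 \
  < news.en.raw > news.en.tok
awk 'NF <= 175' news.en.tok > news.en.filtered
shuf -n 50000000 news.en.filtered > news.en.50M
```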
Thanks!
I have tried a lot to reproduce the results on en-de, but I failed.