The training script produces a WER of 2.57% on LibriSpeech test-clean #13
Conversation
Thanks!
Could you also add a file
Please also put a link to the uploaded model after model averaging in the added file so that others can use it. See #10 (comment)
Did you observe any WER improvements due to recent changes in bucketing on either of the test sets?
The changes in bucketing were included in the model, but there were also some other changes, so I am not sure how much of the improvement comes from the changes in the sampler.
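For context, duration bucketing groups utterances of similar length into the same batch so that less time is wasted on padding. A minimal sketch of the idea (this is illustrative, not the actual sampler used in the recipe; the function name is made up):

```python
def duration_sorted_batches(durations, batch_size):
    """Group utterance indices into batches of similar duration.

    Sorting by duration before batching keeps each batch's utterances
    close in length, which reduces padding and wasted computation.
    """
    order = sorted(range(len(durations)), key=lambda i: durations[i])
    return [order[i:i + batch_size] for i in range(0, len(order), batch_size)]


# Example: four utterances with durations in seconds.
batches = duration_sorted_batches([5.0, 1.0, 3.0, 2.0], batch_size=2)
# Short utterances (1.0 s, 2.0 s) land in one batch, long ones in the other.
```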
👍 👍
Alright -- anyway, good job :) |
@pkufool when you have time, would you mind telling me an earlier WER, possibly one from --epoch 23? |
I have only decoded epochs greater than 30 so far; I will search more epochs and update the results here.
Best WERs for some epochs; the *.txt filename means
Averaging epochs 15 - 34 (decoding with flags --epoch 34 --avg 20).
Decoding with HLG + 4-gram LM rescoring + attention decoder rescoring.
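Checkpoint averaging of this kind is an element-wise mean over the parameters of the selected epoch checkpoints. A hedged sketch of the idea in plain Python (real checkpoints are tensors loaded from disk; here plain lists stand in for parameter tensors, and the function name is illustrative):

```python
def average_checkpoints(states):
    """Element-wise mean of several model states.

    Each state maps a parameter name to its values; the result averages
    over all given states, as --epoch 34 --avg 20 does by averaging the
    last 20 epoch checkpoints up to and including epoch 34.
    """
    n = len(states)
    avg = {k: [0.0] * len(v) for k, v in states[0].items()}
    for state in states:
        for k, values in state.items():
            for i, x in enumerate(values):
                avg[k][i] += x / n
    return avg


# Two toy "checkpoints" with one parameter each.
avg = average_checkpoints([{"w": [1.0, 2.0]}, {"w": [3.0, 4.0]}])
# avg["w"] is the element-wise mean [2.0, 3.0]
```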