Design thoughts #1
OK, that is pretty similar to Kaldi. I can work with that. I also recently started to dislike the "configuration layer" and somehow prefer to write just the necessary things in a given script/notebook... Some suggestions:
Mid/long-term considerations:
One more suggestion: let's aim to make the library friendly to use with Jupyter notebooks by making most of the functionality importable (even if it's something used by just one recipe, I guess); notebooks are really great for prototyping/tinkering with these things.
Cool!
I guess I'm OK with that, if we mean things like the conformer being in the "library", but I think it could be a good idea to use a local model.py to "put pieces together". The point is, we may want to experiment with a lot of new things, and I want that to be easy to do.
OK.
After seeing a draft I might agree. I'm OK to centralize some code after patterns emerge. But I want any utilities and centralized classes to be fairly simple and easily separable, so that you can understand one piece without understanding the whole.
Sure, in the longer term hopefully they can become much shorter.
I'm OK with settling on a particular order of tensors, e.g. (N, C, T); not sure if that's what you mean? (I think speechbrain does that?)
My point was about the number of args/kwargs: currently the signature of forward in our models is inconsistent, so you can’t use them as drop-in replacements in inference code. 100% plug-and-play doesn’t seem attainable (we may want to keep adding inputs to try out new things), but at the very least we could modify all signatures to accept (*args, **kwargs) so that simpler models can work with extra inputs.
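A minimal sketch of that idea, assuming PyTorch-style modules; the class, argument, and input names below are hypothetical, not existing icefall/snowfall code:

```python
import torch
import torch.nn as nn


class SimpleCtcModel(nn.Module):
    # Hypothetical model: forward() accepts **kwargs so that shared inference
    # code can pass extra inputs (e.g. supervisions) that this simpler model
    # does not need and simply ignores.
    def __init__(self, num_features: int, num_classes: int):
        super().__init__()
        self.linear = nn.Linear(num_features, num_classes)

    def forward(self, features: torch.Tensor, **kwargs) -> torch.Tensor:
        return self.linear(features).log_softmax(dim=-1)


# Inference code can then call every model with the same argument set:
model = SimpleCtcModel(num_features=80, num_classes=500)
features = torch.randn(4, 100, 80)  # (N, T, C)
log_probs = model(features=features, supervisions=None)  # extra kwarg is ignored
```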
Mm, do speechbrain or ESPNet do that?
I'm cautious about this. Seems to me potentially a recipe for bugs.
I don't think they do. My knowledge might be out of date, but the last time I checked, none of the existing PyTorch-based ASR frameworks I know of seriously addressed the matter of model deployment.
We can consider such things later on. We are working on a deadline right now, and I want to consider this a demo of k2/lhotse for now, so we can explore architectures before really settling on common APIs.
Guys (mostly Piotr but also anyone who's listening),
Firstly, on the timeline: we need something working well in time for September 1st-ish, when we give the tutorial.
So we can't be too ambitious: think a cleaned-up and reorganized version of Snowfall, working by early-to-mid August. Sorry I have delayed this for so long. Liyong is working on replicating ESPNet results with k2 mechanisms; he is making good progress, and we may want to incorporate parts of that.
I want to avoid big centralized APIs at the moment.
I also want to avoid the phenomenon in SpeechBrain and ESPNet where there is a kind of "configuration layer": you pass in configs, and these get parsed into actual Python code by some other code. I would rather keep it all Python code. Suppose we have a directory (this doesn't have to be the real name):
egs/librispeech/ASR/
then I am thinking we can have subdirectories of that where the scripts for different versions of experiments live.
We might have some data-prep scripts:
egs/librispeech/ASR/{prepare.sh,local/blahblah,...}
and these would write to some subdirectory, e.g. egs/librispeech/ASR/data/...
Then for different experiments we'd have the scripts in subdirectories, like:
egs/librispeech/ASR/tdnn_lstm_ctc/{model.py,train.py,decode.py,README.md}
and we might have
egs/librispeech/ASR/conformer_mmi/{model.py,train.py,decode.py,README.md}
that would refer to the alignment model in e.g. ../tdnn_lstm_ctc/8.pt, and to the data in ../data/blah...
The basic idea here is that if you want to change an experiment locally, you would copy and modify the scripts in conformer_mmi/ to e.g. conformer_mmi_1a/, and add them to your git repo if desired. We would avoid overloading the scripts in these experiment directories with command-line options. Any back-compatibility would be at the level of the icefall Python libraries themselves. We could perhaps introduce versions of the data directories as well, e.g. data/, data2/ and so on (I am not sure whether it would make sense to have multiple versions of the data-prep scripts or to use options).
In order to avoid overloading the model code and utility code with excessive back-compatibility, I suggest that we keep versions of the model code and maybe even parts of the other libraries, e.g. snowfall/models1/. Then we can add options etc., but when that becomes oppressive we can just copy-and-modify to models2/ and strip out most of the options. This will tend to reduce "cyclomatic complexity" by keeping any given version of the code simple. At this point, let's think of this to some extent as a demo tool for k2 and lhotse; we don't have to think of it as some vast toolkit with zillions of features.
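A hedged sketch of what that versioning could look like in practice; the class names, options, and layer choices below are made up for illustration and are not real snowfall/icefall code:

```python
import torch.nn as nn


# What snowfall/models1/tdnn.py might grow into as options accumulate
# (hypothetical class and option names):
class Tdnn(nn.Module):
    def __init__(self, num_features, num_classes,
                 use_batchnorm=True, dropout=0.0, extra_projection=False):
        super().__init__()
        layers = [nn.Conv1d(num_features, 512, kernel_size=3, padding=1), nn.ReLU()]
        if use_batchnorm:
            layers.append(nn.BatchNorm1d(512))
        if dropout > 0.0:
            layers.append(nn.Dropout(dropout))
        if extra_projection:
            layers.append(nn.Conv1d(512, 512, kernel_size=1))
        layers.append(nn.Conv1d(512, num_classes, kernel_size=1))
        self.net = nn.Sequential(*layers)

    def forward(self, x, **kwargs):  # x: (N, C, T)
        return self.net(x)


# What a copied-and-stripped snowfall/models2/tdnn.py might look like once the
# option set becomes oppressive; new experiment directories would import from
# models2 while older ones keep importing models1:
class Tdnn2(nn.Module):
    def __init__(self, num_features, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(num_features, 512, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.BatchNorm1d(512),
            nn.Conv1d(512, num_classes, kernel_size=1),
        )

    def forward(self, x, **kwargs):  # x: (N, C, T)
        return self.net(x)
```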