sed_eval bindings? #184
Comments
I think this could be of interest, not only for MIR tasks (e.g. instrument ID) but also directly for environmental sound eval via JAMS files. A possible complication, however: if I remember correctly, the sed_eval paradigm is that, given a collection of recordings for evaluation (a test set), intermediate statistics are aggregated across all files before computing the final set of metrics. This might be a little tricky to support given the collection-agnostic paradigm JAMS currently follows.
I would just simplify that to a collection of 1 for each evaluation call.
Yes and no. Yes in that it allows you to get per-file scores. No in that averaging per-file scores gives a different result than pooling per-file intermediate stats and then computing the final set of metrics, and the latter is what the sed_eval folks (and consequently DCASE) advocate. By only supporting per-file metrics, we might encourage someone to do the former, which would give them eval results that are inconsistent with what's expected in the literature (or what will eventually be expected once the dust settles on SED).
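To make the difference concrete, here's a toy sketch (made-up counts, nothing to do with any real dataset):

```python
# Two files with (true positive, false positive, false negative) counts.
# Macro: average the per-file F-scores. Micro: pool the counts first and
# compute a single F-score -- the aggregation sed_eval / DCASE expects.

def f_measure(tp, fp, fn):
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return (2 * precision * recall / (precision + recall)
            if (precision + recall) else 0.0)

file_stats = [(9, 1, 0),   # long file, nearly perfect
              (0, 0, 1)]   # short file, one missed event

macro = sum(f_measure(*s) for s in file_stats) / len(file_stats)
tp, fp, fn = (sum(col) for col in zip(*file_stats))
micro = f_measure(tp, fp, fn)

print(macro)  # ~0.47 -- average of per-file scores
print(micro)  # 0.90  -- score from pooled intermediate stats
```

Same predictions, two quite different headline numbers.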
Well, I've never liked collection-wise error reporting, but you could handle it gracefully by accepting a sed_eval object as an optional parameter. If none is provided, one is constructed. That way, you get track-wise metrics easily, and collection-wise metrics with a bit more work.
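Rough sketch of what I mean (names are hypothetical, not actual jams API; assumes sed_eval's current `SegmentBasedMetrics` interface):

```python
import sed_eval

def sound_event(ref_events, est_events, labels, metrics=None):
    """Score one track's events; optionally accumulate into an existing
    sed_eval metrics object for collection-wise reporting.

    ref_events / est_events: lists of dicts in sed_eval's event-list format,
    e.g. {'event_onset': 0.5, 'event_offset': 2.0, 'event_label': 'guitar'}.
    """
    if metrics is None:
        # No accumulator supplied: construct one, so a single call
        # still yields track-wise results.
        metrics = sed_eval.sound_event.SegmentBasedMetrics(
            event_label_list=labels, time_resolution=1.0)

    metrics.evaluate(reference_event_list=ref_events,
                     estimated_event_list=est_events)
    return metrics

# Track-wise: call once per file and read metrics.results() right away.
# Collection-wise: pass the same `metrics` object for every file in the
# test set, then read metrics.results() once at the end.
```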
That sounds like a reasonable solution to me.
(Delayed update) This one is stalled for a couple of reasons relating to the sed_eval dependency chain.
[Tagging @justinsalamon]
The `jams.eval` module provides a unified interface between JAMS annotations and `mir_eval` metrics. Would it be possible to add bindings to `sed_eval` as well, for evaluating `tag_*` annotations? I haven't used `sed_eval` directly, but this seems like it would be useful for handling things like instrument detection.
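For context, the conversion on the JAMS side seems like it would be small; something like this hypothetical helper (not actual jams code, and assuming the Observation-based data model):

```python
def annotation_to_event_list(ann):
    """Flatten a JAMS tag/event annotation into the list-of-dicts format
    that sed_eval expects (onset, offset, label per event).

    Hypothetical helper -- assumes each observation carries a time, a
    duration, and a string value, as tag_* annotations do.
    """
    return [{'event_onset': obs.time,
             'event_offset': obs.time + obs.duration,
             'event_label': obs.value}
            for obs in ann.data]
```

From there it would just be a matter of wiring the event lists up to whichever sed_eval metric class makes sense.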