For discussion: numba_scipy.stats #42
I really wanted to use `stats.skew()` and `stats.kurtosis()`; is there any way to do that?
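For these two particular functions, very little machinery is actually needed. The following is a minimal stand-in (not numba-scipy code) that matches scipy's default settings for `skew` and `kurtosis` (`bias=True`, `fisher=True`); writing the moment loops explicitly would make the same functions straightforward to compile with numba's `@njit`:

```python
# Plain-Python sketch of scipy.stats.skew / scipy.stats.kurtosis
# under their default settings (biased moments, Fisher's excess kurtosis).

def skew(a):
    """Fisher-Pearson coefficient of skewness: m3 / m2**1.5 (bias=True)."""
    n = len(a)
    mean = sum(a) / n
    m2 = sum((x - mean) ** 2 for x in a) / n
    m3 = sum((x - mean) ** 3 for x in a) / n
    return m3 / m2 ** 1.5

def kurtosis(a):
    """Excess kurtosis: m4 / m2**2 - 3 (fisher=True, bias=True)."""
    n = len(a)
    mean = sum(a) / n
    m2 = sum((x - mean) ** 2 for x in a) / n
    m4 = sum((x - mean) ** 4 for x in a) / n
    return m4 / m2 ** 2 - 3.0
```

This is a sketch under the stated assumptions, not a drop-in replacement; scipy additionally handles axes, NaN policies, and bias correction.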
None of these alternative routes seems particularly appealing; the upside seems very limited indeed. Thanks for your perspective, @luk-f-a. Happy New Year!
To apply the method of maximum likelihood, fast implementations of the pdfs and cdfs are needed; option 3 would not do. There are currently speed gains of a factor of 100 if, for example, `norm.cdf` is replaced by a custom implementation based on the `erf` in `scipy.special.cython_special`.
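As a concrete illustration of the `norm.cdf` replacement described above (a sketch, not the numba-stats code itself), the normal CDF reduces to a single call to the error function, which the standard library also provides; `math.erf` is supported by numba in nopython mode, so an `@njit`-decorated version works the same way:

```python
import math

# Normal CDF via the error function:
#   Phi(x) = 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))
# One C-level erf call, with none of the dispatch layers of scipy.stats.norm.cdf.

def norm_cdf(x, mu=0.0, sigma=1.0):
    z = (x - mu) / (sigma * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))
```

Under numba, decorating this with `@njit` keeps the whole call in compiled code, which is where the reported factor-100 gains for scalar-heavy workloads come from.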
I started a repository with fast implementations here: https://github.com/HDembinski/numba-stats. They work for me. It would be great to merge this into numba-scipy, but it is not straightforward, since I did not implement the scipy API; I just added some fast versions of `norm.pdf`, etc. For now, numba-stats wraps the special functions from `scipy.special.cython_special` independently of numba-scipy, but eventually, once numba-scipy is stable, I would prefer to depend on numba-scipy.
Adding to that, the speed gains are dramatic, as mentioned before: I see up to a factor of 100 in some cases, less for large arrays. There seems to be a very large call overhead in scipy. I added some benchmarks with pytest-benchmark to my repo; just run pytest and see what you get.
In my field (high energy physics), having fast stats translates directly into fast turnaround when developing non-linear fits, which is the default for us. The speed-up in the stats functions translates very nicely into equivalent speed-ups of the fits, which means we can build more complex fits and bootstrap our fit results.
If you need fast code for stats and don't need to follow the scipy API, then rvlib is a good library. It is sadly unmaintained, but the code is there if you want to use it.
Thank you for pointing this out. rvlib claims to have a better API than scipy, but I could not see that from a quick look. I really want numba-scipy to offer this functionality. In the meantime, I realized that wrapping scipy is not that hard; a lot of scipy's implementations just call some C function. I am puzzled why it is so slow if the actual work is done in C anyway.
What functionality? Fast pdf and cdf under the scipy API?
There might be some work being done that you are not considering. You say that it's slow; are you comparing the speed to a pure C implementation?
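The overhead question can be illustrated with a small stdlib-only sketch. This is hypothetical and simplified, not scipy's actual code path: the point is that Python-level validation and dispatch layered in front of a C kernel can dominate per-call cost for scalar inputs, even when the numerical work itself is done in C:

```python
import math
import timeit

def layered_erf(x):
    # Mimic the kind of argument handling a high-level wrapper does
    # before reaching the C kernel (hypothetical, greatly simplified).
    if not isinstance(x, float):
        x = float(x)
    if x != x:  # propagate NaN without calling the kernel
        return float("nan")
    return math.erf(x)

direct = timeit.timeit(lambda: math.erf(0.5), number=50_000)
layered = timeit.timeit(lambda: layered_erf(0.5), number=50_000)
# `layered` is typically noticeably slower than `direct`, even though both
# spend the same time inside the C erf implementation.
```

scipy's real dispatch does far more than this (shape handling, broadcasting, parameter checking across the distribution framework), which is consistent with the large per-call overhead reported above.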
Hi, @luk-f-a. Using the current master code in this project, could I implement …, based on your example on …?
@dlee992 I don't think you can take what I did for …

In your case, it does not sound like either of these points applies to you, so you shouldn't read too much into my conclusions, because the starting point is not the same. On the first point, it seems that you don't need an identical API or jit transparency. Please note that JAX supports the … Also, each function in the …

Going back to your question: if you want to implement the functionality, i.e. build a function that produces the same results, the effort will be similar to building it in JAX. If you want to implement the functionality and replicate the API when called from normal Python code, i.e. a non-jit function that calls …
@luk-f-a, many thanks for this thoughtful and detailed explanation! Now I know my situation very well. I will figure out the pros and cons of implementing it using numba. In fact, JAX uses multithreading and sometimes creates too many threads, even more than the max limit on Linux. Some JAX issues (e.g., jax-ml/jax#11168) confirm this point, and the JAX developers don't seem to want to "fix" this.
hi everyone!

this is meant as a way to gather feedback on the current status of `numba_scipy.stats`. I'm pinging people that have expressed interest in `numba_scipy.stats` and/or are involved in numba and scipy. I'd like to share what I've learned so far, and hopefully you'll share your perspective on this.

Since last year I've been looking into `scipy.stats` with a view of getting `numba-scipy.stats` started. I created a prototype in #16. It's viable, but the experience has led me to question the cost/benefit tradeoff of following that path.

The main technical complication with `scipy.stats` is that it is not function based, but object based. It relies on several of Python's OO features like inheritance and operator overloading. Numba has great support for function-based libraries (or method based, when the number of objects is limited) like Numpy. However, the support for classes (via jitclasses) is more limited and experimental. Outside of jitclasses, the only other option is to use the `extending` module, with the added effort that it implies.

The consequence of the above is that it will not be possible to fully imitate the behaviour of `scipy.stats`, at least not in the medium term, and not without a lot of work.

Even if jitclasses worked exactly like Python classes, `scipy.stats` has more than a hundred distributions, each of them with more than 10 methods. If we followed the way numba supports numpy, we are talking about 1000+ methods to re-write. In some cases there will be performance improvements, but in some cases there won't.

Look at the following example:
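(The code block from the original post appears to have been lost in extraction. As a hedged stand-in, not the original example, the same point can be shown with only the standard library: when the heavy lifting already happens in C, wrapping or compiling the Python wrapper leaves little to gain.)

```python
import math
import timeit

# math.gamma and math.exp are C implementations in CPython, so nearly all
# of the time in this wrapper is spent in C already; a jit compiler can
# only shave off the thin Python glue around those calls.

def gamma_pdf(x, a):
    # Density of the Gamma(a, scale=1) distribution at x > 0.
    return x ** (a - 1.0) * math.exp(-x) / math.gamma(a)

elapsed = timeit.timeit(lambda: gamma_pdf(2.0, 3.0), number=10_000)
```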
There's no performance improvement at all, because most of the work is already done in C. This will be the case for many `scipy.stats` functions.

To summarize, I see a few ways forward, each with pros and cons:

1. jitclass based solution
2. low-level numba extension (http://numba.pydata.org/numba-doc/latest/extending/low-level.html)
3. `objmode` approach = no jitted solution. `objmode` has a cost, both in runtime and in boilerplate code. This last point might be made lighter by these: "Calling objectmode function from nopython mode" (numba#5461) and "Pass thru pyobjects" (numba#3282).

I personally lean towards option 3 at the moment. I might write some custom code that calls `special` functions if I really need performance. But I'm not feeling very attracted to the idea of re-implementing such a large module as `scipy.stats`.

It would be great to hear your perspective on this.

cc: @gioxc88 @francoislauger @stnatter @remidebette @rileymcdowell @person142 @stuartarchibald