Overhauling the st.floats() internals (#2907)
Comment excerpts:

- "If you can reproduce, that sounds like a bug that we'll want to patch on a way shorter timeframe than overhauling the whole […]"
- "We'll also want to support non-IEEE float types like […]"
- "This is mostly solved by #3327, with only 'mask out bits that can't be set in narrower float types' remaining (plus the careful handling required for non-finite numbers under this plan)."
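To make "mask out bits that can't be set in narrower float types" concrete, here is a minimal sketch of the idea, assuming IEEE-754 binary64/32/16 layouts. The helper names (`mask_to_width`, `bits_to_float`) are hypothetical, not Hypothesis's actual internals:

```python
import struct

# Mantissa (fraction) bits for IEEE-754 binary64 / binary32 / binary16.
MANTISSA_BITS = {64: 52, 32: 23, 16: 10}

def mask_to_width(bits64: int, width: int) -> int:
    """Zero out the low mantissa bits that a width-bit float can't represent.

    Exponent range and subnormals still need separate handling (e.g.
    rejection sampling), so this is only part of the story.
    """
    drop = MANTISSA_BITS[64] - MANTISSA_BITS[width]
    return bits64 & ~((1 << drop) - 1)

def bits_to_float(bits64: int) -> float:
    """Reinterpret a 64-bit pattern as an IEEE-754 double."""
    return struct.unpack("!d", struct.pack("!Q", bits64))[0]

# Example: 1/3 masked to float32 precision survives a round-trip
# through a 32-bit float unchanged (its exponent is in float32 range).
b = struct.unpack("!Q", struct.pack("!d", 1 / 3))[0]
narrowed = bits_to_float(mask_to_width(b, 32))
assert struct.unpack("!f", struct.pack("!f", narrowed))[0] == narrowed
```

Because the full 64-bit pattern is always drawn and then masked, the same underlying bytes can be reinterpreted at any width without redrawing.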
Floating-point numbers are a fundamental datatype, but our backend for st.floats() could be better. As per #1704, bounded floats have some weird distributional problems, and much worse shrinking behaviour than unbounded floats. On a related-ish note, the helper functions in hypothesis.internal.conjecture.floats are not width-aware, which makes 32-bit and 16-bit float generation much less efficient than it could be.

Fortunately, @rsokl and I are pretty sure that we can exploit a combination of rejection sampling, our existing custom bitwise float encoding, and bitmasks to solve both of these problems - and as a side effect, we'll have a single #2878-style FloatsStrategy which can grow #2701 filter rewriting in a future PR.

The implementation trick is to always draw a full 64 bits, then mask or reject down to the requested width. It's fiddly, but it ensures that changing the width (e.g. a NumPy dtype) doesn't invalidate the rest of the test, as would happen if we only drew 32 bits for 32-bit floats.
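The rejection-sampling half of the plan can be sketched as follows. This is an illustration, not Hypothesis's implementation: `draw_float64` and `sample_bounded` are hypothetical names, and real code would shrink the candidate space with bitmasks rather than retrying blindly:

```python
import random
import struct

def draw_float64(rng: random.Random) -> float:
    # Reinterpret a uniformly random 64-bit pattern as a double.
    return struct.unpack("!d", struct.pack("!Q", rng.getrandbits(64)))[0]

def sample_bounded(min_value: float, max_value: float,
                   rng: random.Random, max_tries: int = 10_000) -> float:
    """Rejection-sample a float in [min_value, max_value].

    NaN compares false against everything, so non-finite draws are
    rejected for free by the bounds check.
    """
    for _ in range(max_tries):
        x = draw_float64(rng)
        if min_value <= x <= max_value:
            return x
    raise RuntimeError("bounds too narrow for naive rejection sampling")

x = sample_bounded(0.0, 1.0, random.Random(0))
assert 0.0 <= x <= 1.0
```

For wide bounds like [0, 1] roughly a quarter of raw 64-bit patterns are accepted, so this converges quickly; for very narrow bounds the acceptance rate collapses, which is why combining rejection with bitmasking matters.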