Further fixes for Scipy 1.15 update for PR and nightly CI #6213
Conversation
```diff
@@ -629,7 +629,7 @@ def test_logistic_regression_model_default(dtype):
 @given(
-    dtype=floating_dtypes(sizes=(32, 64)),
+    dtype=st.sampled_from((np.float32, np.float64)),
```
For my education: can you explain a bit why this is needed? A quick look at `floating_dtypes` makes me think it also uses `sampled_from` internally. Or maybe it returns strings instead of dtype objects?
`floating_dtypes` will generate all possible dtypes for the given sizes, including those with different endianness:

```python
>>> from hypothesis.extra.numpy import floating_dtypes
>>> f = floating_dtypes(sizes=(16, 32))
>>> f.example()
dtype('float16')
>>> f.example()
dtype('>f2')
>>> f.example()
dtype('float32')
>>> f.example()
dtype('>f4')
>>> f.example()
dtype('float16')
```

The change here represents a stronger assumption on the expected types.
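To illustrate why the byte-swapped variants matter, here is a minimal sketch using only NumPy (standard NumPy dtype semantics; nothing cuML-specific):

```python
import numpy as np

# A byte-swapped float dtype has the same size but is a distinct dtype,
# so code that compares against np.float32 (or hashes dtypes) treats
# '>f4' and '<f4' differently.
native = np.dtype(np.float32)
swapped = native.newbyteorder()

print(native.itemsize == swapped.itemsize)  # True: both are 4 bytes
print(swapped == np.float32)                # False on little-endian machines
```

This is why a strategy that only emits `np.float32` and `np.float64` sidesteps the problem entirely.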
LGTM.
I would suggest that maybe long-term we aim to use a centrally defined list of supported types. I had previously made an attempt to establish this here.
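A centrally defined list might look something like this (purely a hypothetical sketch; `SUPPORTED_FLOAT_DTYPES` and `check_supported` are illustrative names, not an existing module):

```python
import numpy as np

# Hypothetical central registry of supported float dtypes that tests
# could import instead of redefining the tuple in each test file.
SUPPORTED_FLOAT_DTYPES = (np.float32, np.float64)

def check_supported(dtype):
    """Return the canonical dtype, or raise if it is not supported.

    Byte-swapped variants like '>f4' compare unequal to np.float32,
    so they are rejected here as well.
    """
    if np.dtype(dtype) not in {np.dtype(t) for t in SUPPORTED_FLOAT_DTYPES}:
        raise TypeError(f"unsupported dtype: {dtype}")
    return np.dtype(dtype)

print(check_supported("float32"))  # prints float32
```

A test strategy could then be built with `st.sampled_from(SUPPORTED_FLOAT_DTYPES)`, keeping the supported-type assumption in one place.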
Nightly CI revealed a bug between hypothesis' `floating_dtypes(sizes=(32, 64))` and building sparse matrices, so this PR uses `st.sampled_from((np.float32, np.float64))` to solve the issue. Additionally, having cudf.pandas active made one dataset in the ARIMA pytests fail, so that one is disabled while we look further into it.
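Disabling a single case in a pytest parametrization can be done with a skip mark. This is a hedged sketch: the dataset names and test function below are illustrative, not the actual ARIMA test code.

```python
import pytest

# Illustrative parametrization: one dataset is skipped while the
# cudf.pandas interaction is investigated; the rest keep running.
@pytest.mark.parametrize(
    "dataset",
    [
        "dataset_a",
        pytest.param(
            "dataset_b",
            marks=pytest.mark.skip(reason="fails with cudf.pandas active"),
        ),
    ],
)
def test_arima_fit(dataset):
    assert dataset == "dataset_a"  # only the unskipped case runs
```

Using `pytest.param(..., marks=...)` keeps the skip scoped to the one failing dataset instead of disabling the whole test.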