Pytest parameterize support #26

Open

deadPix3l opened this issue Jun 10, 2023 · 2 comments
@deadPix3l

I know the docs say fixtures are not supported, and that's fine - I don't use them, with one exception:

@pytest.mark.parametrize("cls", getSubclasses())
@given(...)
def test_subclass(cls, ...)

Currently I work around this by just copying the function and hardcoding it, but there are 20+ subclasses.

I get that fixtures introduce state that could muddy things, but I don't believe parametrize does?

@Zac-HD
Owner

Zac-HD commented Jun 10, 2023

I'm definitely keen to support this, and would happily accept a pull request - it's just been low-priority until pretty recently.

A sample of the complications: we'll need to test this (HypoFuzz is currently quite under-tested, but new integrations will need tests so that we don't break things in future). The parametrize marker can be applied to the function, applied to a class, or assigned to a module-level pytestmark variable for implicit application - presumably we can hook pytest internals to pick up all of these. There may be other markers like xfail or skipif - currently those are ignored, but perhaps we should handle some? Pytest node IDs often get weirder once parametrize is involved, so we probably can't use them as URLs any more ("now you have two systems"...)

All this is manageable and we can reassign the inner_test with a partial() that sets the params, but you can probably see why it's been on the backburner!
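
For concreteness, a minimal sketch of that idea - bind_params is a hypothetical helper, and it relies on the Hypothesis-internal inner_test attribute, not on anything HypoFuzz currently ships:

import functools

def bind_params(test_fn, **params):
    # Hypothesis-decorated tests expose the undecorated function as
    # test_fn.hypothesis.inner_test (a Hypothesis internal).  Wrapping it in
    # a partial pre-fills the parametrized arguments, so the fuzzer only has
    # to supply the @given arguments.
    inner = test_fn.hypothesis.inner_test
    test_fn.hypothesis.inner_test = functools.partial(inner, **params)
    return test_fn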

@aarchiba

aarchiba commented Feb 19, 2024

I just got bitten by xfail - that is, HypoFuzz helpfully reported a failure that I had already marked xfail. Not necessarily serious, but it's annoying that I can no longer just skim through the results and look for failures to fix.

In general, though, couldn't one replace parametrize with sampled_from? It's not quite the same - in particular you don't get separate test results for the different cases - but it still exercises all the variants. You also can't cleanly label certain cases with xfail. Here's an example of where this is useful:

https://github.com/nanograv/PINT/blob/master/tests/test_precision.py#L725

I know that the library we depend on fails for certain values of certain parameters, so I have to use parametrize so I can mark which ones fail.
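
As a rough sketch of the sampled_from version (assuming a getSubclasses() helper like the one in the example at the top of this thread, with integers() as a stand-in for the other inputs):

from hypothesis import given, strategies as st

@given(cls=st.sampled_from(getSubclasses()), value=st.integers())
def test_subclass(cls, value):
    # each subclass is just another drawn input here, so all cases share a
    # single test result and there is no per-case xfail marking
    ...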
