[Pytest] The XFail option doesn't work for pytest.xfail("..."). Only works with @pytest.mark.xfail. #321

Open
vnaithani opened this issue Jan 9, 2025 · 1 comment

vnaithani commented Jan 9, 2025

Description:

As per the pytest docs¹, an xfail means that you expect a test to fail for some reason. A common example is a test for a feature not yet implemented, or a bug not yet fixed.

You can use the xfail marker to indicate that you expect a test to fail:

```python
import pytest

@pytest.mark.xfail
def test_function(): ...
```

Alternatively, you can mark a test as XFAIL imperatively from within the test or its setup function:

```python
import pytest

def test_function():
    # valid_config() is a placeholder from the pytest docs
    if not valid_config():
        pytest.xfail("failing configuration (but should work)")
```

Problem:

Currently, we support the X-Fail and X-Pass statuses with the following options:

| Description | Config file | Environment variable | CLI option | Default value | Required | Possible values |
| --- | --- | --- | --- | --- | --- | --- |
| XFail status for failed tests | `framework.pytest.xfailStatus.xfail` | `QASE_PYTEST_XFAIL_STATUS_XFAIL` | `--qase-pytest-xfail-status-xfail` | Skipped | No | Any string |
| XFail status for passed tests | `framework.pytest.xfailStatus.xpass` | `QASE_PYTEST_XFAIL_STATUS_XPASS` | `--qase-pytest-xfail-status-xpass` | Passed | No | Any string |
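
For reference, a sketch of how these options can be supplied, using the environment variable and CLI option names from the table above. The status values `Failed`/`Passed` are illustrative, and the exact `--option=value` CLI syntax is an assumption:

```bash
# Via environment variables (names taken from the table; values are illustrative)
export QASE_PYTEST_XFAIL_STATUS_XFAIL=Failed
export QASE_PYTEST_XFAIL_STATUS_XPASS=Passed
pytest

# Or via the equivalent CLI options (assuming the usual --option=value syntax)
pytest --qase-pytest-xfail-status-xfail=Failed --qase-pytest-xfail-status-xpass=Passed
```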

Right now, only the @pytest.mark.xfail marker works. When pytest.xfail("...") is called inside the test, the result is still reported as Skipped, even though the X-Fail option is configured for the reporter.
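
A minimal reproduction sketch (test names and reason strings are illustrative). With a custom xfail status configured as above, the first test picks it up, while the second is still reported as Skipped:

```python
import pytest

@pytest.mark.xfail(reason="known bug")
def test_marker_xfail():
    # Reported with the configured xfail status (works as expected).
    assert 1 == 2

def test_imperative_xfail():
    # Reported as Skipped instead of the configured xfail status (this issue).
    pytest.xfail("known bug")
```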


Why is this important?

When the @pytest.mark.xfail decorator is used, it applies at the test function level and generally indicates that the entire test is expected to fail. pytest marks the test as expected to fail during execution and reports it accordingly.

However, pytest.xfail() is an imperative function call made from within the test body: it raises an internal exception that marks the test as expected to fail, so no code after the call executes. It's generally used when the expectation of failure can only be determined at runtime, rather than declared up front for the whole test function.
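
A small sketch of that behavioral difference (the test name is illustrative): unlike the marker, pytest.xfail() stops the test at the point of the call:

```python
import pytest

def test_imperative_xfail_halts():
    pytest.xfail("expected failure detected at runtime")
    # Never reached: pytest.xfail() raises an internal exception,
    # so nothing after the call executes.
    assert False
```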

Footnotes

1. About the xfail mark: https://docs.pytest.org/en/stable/how-to/skipping.html#xfail-mark-test-functions-as-expected-to-fail

@vnaithani vnaithani changed the title The XFail option doesn't work for pytest.xfail("..."). Only works with @pytest.mark.xfail. [Pytest] The XFail option doesn't work for pytest.xfail("..."). Only works with @pytest.mark.xfail. Jan 9, 2025
gibiw added a commit that referenced this issue Jan 16, 2025

Resolved the following issues:

1. Custom statuses did not work when using `pytest.xfail` within the test body. #321
2. The test status was displayed incorrectly when using the `skipif` mark. #320
gibiw (Contributor) commented Jan 16, 2025

@vnaithani Hi! Thank you for reporting the issue. We’ve fixed it in version 6.1.11.
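
To pick up the fix, upgrading the reporter package should be enough (assuming the package name qase-pytest for this repository's pytest reporter):

```bash
pip install --upgrade "qase-pytest>=6.1.11"
```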
