Description
Describe the bug
Some unit tests that verify validation failures have mistakes in their setup (e.g. parsing with the wrong model), but because the test only checks that a validation error is raised, it still passes. As a result, these tests do not actually verify the library's behavior.
See this test, which mixes attributes and amounts:
openjd-model-for-python/test/openjd/model/v2023_09/test_step_host_requirements.py
Lines 268 to 282 in 96651fa
def test_non_standard_attribute_capability_noncompliant_value_string(
    self, field: str, value: str, error_count: int
) -> None:
    # Test the constraints on an amount capability value within the
    # anyOf and allOf clauses raise validation errors when they're violated.
    # GIVEN
    data = {"name": "attr.custom", field: [value]}
    # WHEN
    with pytest.raises(ValidationError) as excinfo:
        _parse_model(model=AmountRequirementTemplate, obj=data)
    # THEN
    assert len(excinfo.value.errors()) == error_count, str(excinfo.value)
See this PR, which fixes a test that mixes the HostRequirements and AmountRequirements models: https://github.com/OpenJobDescription/openjd-model-for-python/pull/265/changes
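To illustrate the failure mode, here is a minimal sketch using toy validators (hypothetical stand-ins, not the real openjd models). An attribute payload with a bad value is invalid under the amount model too, just for different reasons, so a test that only checks "a validation error was raised" passes even with the wrong model:

```python
# Toy "amount" validator standing in for a real model parser; raises
# ValueError carrying a list of (location, message) errors on failure.
def parse_amount_requirement(data: dict) -> None:
    errors = []
    if not data.get("name", "").startswith("amount."):
        errors.append(("name", "must start with 'amount.'"))
    for key in sorted(set(data) - {"name", "min", "max"}):
        errors.append((key, "extra field not permitted"))
    if errors:
        raise ValueError(errors)

# An *attribute* payload with a noncompliant value, parsed with the
# *amount* model -- the setup mistake from the issue.
data = {"name": "attr.custom", "anyOf": ["x" * 101]}
try:
    parse_amount_requirement(data)
    raised = False
except ValueError as exc:
    raised = True
    errors = exc.args[0]

# A validation error is raised either way, so a test that only checks
# that errors occurred cannot tell the models apart.
assert raised
```

Note that none of the errors raised here mention the constraint the test meant to exercise (the value's length); they are all wrong-model artifacts.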
Recommendation: establish a new pattern for validation-failure tests that causes a test to fail when the wrong model is used, then apply that pattern across the codebase. The PR above is one possible solution.
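One such pattern can be sketched as follows (a library-agnostic sketch with hypothetical names, not the openjd API): before asserting the failure, the helper first parses a known-valid baseline payload with the same model, so a wrong-model mix-up fails immediately instead of passing by accident.

```python
# Toy "attribute" validator standing in for a real model parser; raises
# ValueError carrying a list of (location, message) errors on failure.
def parse_attribute_requirement(data: dict) -> dict:
    errors = []
    allowed = {"name", "anyOf", "allOf"}
    for key in sorted(set(data) - allowed):
        errors.append((key, "extra field not permitted"))
    if not data.get("name", "").startswith("attr."):
        errors.append(("name", "must start with 'attr.'"))
    for clause in ("anyOf", "allOf"):
        for i, value in enumerate(data.get(clause, [])):
            if not isinstance(value, str) or len(value) > 100:
                errors.append((f"{clause}[{i}]", "must be a string of at most 100 chars"))
    if errors:
        raise ValueError(errors)
    return data

def assert_rejects(parse, valid_data: dict, invalid_data: dict, error_count: int) -> None:
    """Failure-test helper: the valid baseline must parse first, so a
    wrong-model mix-up fails loudly instead of passing by accident."""
    parse(valid_data)  # sanity check: raises if the model is wrong
    try:
        parse(invalid_data)
    except ValueError as exc:
        errors = exc.args[0]
    else:
        raise AssertionError("expected validation to fail")
    assert len(errors) == error_count, errors

# Usage: same payload as before, but the baseline guards the model choice.
assert_rejects(
    parse_attribute_requirement,
    valid_data={"name": "attr.custom", "anyOf": ["ok"]},
    invalid_data={"name": "attr.custom", "anyOf": ["x" * 101]},
    error_count=1,
)
```

Had `parse_amount_requirement`-style wrong model been passed here, the baseline parse would raise first, surfacing the setup mistake. A complementary refinement is to assert on the error locations or messages rather than only the count.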
Expected Behaviour
...
Current Behaviour
...
Reproduction Steps
...
Environment
...