
Conversation

@nghielme
Contributor

Description


This PR updates the pytest test naming conventions across the hls4ml test suite, addressing issue #1436:

  • Adds utility functions in conftest.py for generating unique test case identifiers in pytest
  • Updates test parametrization across 60+ test files to use consistent naming patterns
  • Renames synthesis output JSON files to follow the new naming convention (e.g., test_activations-elu-Vitis-io_parallel.json instead of older patterns)

The motivation is to improve test discoverability, reduce naming conflicts, and make test output more readable when running subsets of tests.

Type of change


  • Bug fix (non-breaking change that fixes an issue)
  • Documentation update
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • A new research paper code implementation
  • Other (Specify)

Tests


Test Configuration:

The modification only affects the tests themselves, so no additional tests are needed.

Checklist

  • I have read the guidelines for contributing.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have made corresponding changes to the documentation.
  • My changes generate no new warnings.
  • I have installed and run pre-commit on the files I edited or added.
  • I have added tests that prove my fix is effective or that my feature works.

nghielme and others added 6 commits February 12, 2026 15:39
…pytest

- Introduced `get_pytest_case_id` and `get_pytest_baseline_name` functions in `conftest.py` to create unique identifiers for parametrized test cases and baseline files.
- Updated multiple test files to utilize these functions for generating output directory names, ensuring consistency and avoiding collisions in test outputs.
- Refactored test function signatures to include the `request` parameter where necessary for accessing the pytest request context.
- Introduced `_sanitize_test_id` function to handle sanitization of test identifiers for file paths.
- Updated test files to utilize the new `get_pytest_case_id` function for generating baseline file names, ensuring uniformity in output directory naming.
- Removed obsolete baseline files and added new ones to reflect updated test cases.
@nghielme nghielme added the please test Trigger testing by creating local PR branch label Feb 12, 2026
@marco66colombo
Contributor

Thanks for the PR; the naming consistency goal is very helpful.

One comment: fixtures in conftest.py are auto-discovered by pytest, so there's no need to import helpers from conftest.py directly.
The pytest best practice is to expose helpers as fixtures (e.g., case_id) rather than importing them from conftest.py.

A possible alternative solution could be adding a fixture like:

# conftest.py
@pytest.fixture
def case_id(request):
    return get_pytest_case_id(request)

Then use it where needed:

def test_dense(case_id, backend, io_type):
    ... 
    output_dir = str(test_root_path / case_id)
    ...

@marco66colombo
Contributor

marco66colombo commented Feb 12, 2026

On this job failure:

FAILED test_keras_api.py::test_activations[io_parallel-Vivado-activation_function2]
Baseline file .../test_keras_api_test_activations_io_parallel-Vivado-activation_function2.json not found

It looks like this specific baseline file (the activation_function2 one) is missing. The naming is consistent between the test output and the new baseline names, but that single file isn't present, so the job fails (unless I missed something). The same happens for other test cases.
Probably parametrization index 2 is skipped, so the IDs go 0, 1, 3, 4, 5. I think this is also why the assertions fail: baseline 3 refers to actual test case 2 and so on; basically they're shifted by one.

Also, while digging into this I realized that the new names rely on pytest's auto-generated param IDs when the parameters are objects (like Keras layer instances). That's why we get activation_function0/1/2/... instead of names like relu, elu, p_re_lu. If you want more readable names (and to keep the same behavior as the old baseline names), you can add explicit ids=:

@pytest.mark.parametrize(
    "activation_function",
    [...],
    ids=["relu", "leaky_relu", "elu", "p_re_lu", "sigmoid"],
)

- Replaced the `get_pytest_case_id` function with a new fixture `test_case_id` to streamline the generation of unique identifiers for test cases.
- Updated multiple test files to utilize the new `test_case_id` fixture, ensuring consistent output directory naming across tests.
- Removed obsolete references to the old identifier function.
- Updated the `garnet_models` and `garnet_stack_models` fixtures to replace the `request` parameter with `test_case_id` f