Conversation

@bajertom (Contributor) commented Dec 12, 2025

Pull Request Checklist

  • implement the feature
  • extend the test coverage
  • include a release note

dependencies_to_test gathers both requires and recommends, and tmt is going to fail when something could not be installed already in the require phase. If all required packages are available to install but a recommended one is not, it will fail as well, and the message {package}: required by {test_name} might be a bit misleading, because the package is not required, only recommended. Should the messaging be handled separately, or is a simple "needed by" enough to cover both requires and recommends?

Output:

---------------------------✄-------------------------------------
        prepare task #3: requires on default-0
        how: install
        summary: Install required packages
        name: requires
        where: default-0
        package: sed, grep, coreutils and 5 more
        warn: Installation failed, trying again after metadata refresh.
        package: sed, grep, coreutils and 5 more
        fail: Command 'rpm -q --whatprovides sed grep coreutils bash absent wrong nonexistent missing || dnf5 install -y  sed grep coreutils bash absent wrong nonexistent ridiculous' returned 1.
    
        failed packages: 
            missing: required by /tests/FIRST
            absent: required by /tests/FIRST, /tests/SECOND
            wrong: required by /tests/FIRST, /tests/SECOND
            nonexistent: required by /tests/FIRST
    report
        how: display
        summary: 2 pending
    cleanup
---------------------------✄-------------------------------------

Test coverage needed?

Fixes #4302

@happz (Contributor) commented Dec 12, 2025

> dependencies_to_test gathers both requires and recommends, and tmt is going to fail when something could not be installed already in the require phase. If all required packages are available to install but a recommended one is not, it will fail as well, and the message {package}: required by {test_name} might be a bit misleading, because the package is not required, only recommended. Should the messaging be handled separately, or is a simple "needed by" enough to cover both requires and recommends?

I'd say it should be separated, with proper messaging, e.g. "failed to install required packages, see the details below, we're done here" vs "failed to install recommended packages, see the details below, and let's move on".


> Test coverage needed?

Yes. We should have a test somewhere checking whether a non-existent package failed to install, it should be enough to extend it with an assert or two to check for new messages.
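The separation suggested here could look roughly like the following sketch. The function name, argument shapes, and messages are hypothetical illustrations, not tmt's actual API:

```python
# Hypothetical sketch of separated messaging for required vs. recommended
# package failures; names and structure are assumptions, not tmt's real code.
def report_failures(
    failed_required: dict[str, list[str]],
    failed_recommended: dict[str, list[str]],
) -> None:
    if failed_required:
        # Hard failure: missing required packages should stop the plan
        print("Required packages failed to install, aborting:")
        for package, tests in sorted(failed_required.items()):
            print(f"    {package}: required by {', '.join(tests)}")
    if failed_recommended:
        # Soft failure: missing recommended packages only produce a warning
        print("Recommended packages failed to install, continuing:")
        for package, tests in sorted(failed_recommended.items()):
            print(f"    {package}: recommended by {', '.join(tests)}")

report_failures(
    {"absent": ["/tests/FIRST", "/tests/SECOND"]},
    {"optional-tool": ["/tests/FIRST"]},
)
```

The key design point is the two distinct verbs ("required by" vs "recommended by") and the two distinct outcomes (abort vs continue).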

@LecrisUT LecrisUT self-assigned this Dec 13, 2025
@github-project-automation github-project-automation bot moved this to backlog in planning Dec 13, 2025
@LecrisUT LecrisUT moved this from backlog to implement in planning Dec 13, 2025
@LecrisUT LecrisUT added this to the 1.64 milestone Dec 13, 2025
@AthreyVinay AthreyVinay self-assigned this Jan 6, 2026
@psss psss modified the milestones: 1.64, 1.65 Jan 8, 2026
@bajertom bajertom force-pushed the tbajer-improve-logging-of-requires branch from 795a3dc to b46f5c7 Compare January 9, 2026 21:34
@bajertom (Contributor, Author) commented Jan 9, 2026

I have addressed the comments and suggestions in b46f5c7.
If recommended packages cannot be installed, tmt does not raise an exception, so I had to use CommandOutput instead and parse the packages from there.

As for the "unattributed_packages" - I don't know whether it can even happen that packages are neither required nor recommended, but maybe the "essential requires" like beakerlib and /usr/bin/flock could fall into this category?

@thrix thrix self-requested a review January 12, 2026 11:47
@bajertom bajertom moved this from implement to review in planning Jan 12, 2026
@bajertom bajertom moved this from review to implement in planning Jan 13, 2026
@bajertom (Contributor, Author)

I've addressed the comments in the latest commits; extending the tests is what remains.

@bajertom bajertom requested a review from happz January 13, 2026 19:07
@happz happz added the step | prepare, code | logging and ci | full test labels Jan 14, 2026
@thrix thrix modified the milestones: 1.65, 1.66 Jan 15, 2026
@skycastlelily (Collaborator)

There are some conflicts.
You should probably extend the tests, at least with verification that the correct package names are extracted from various error message formats.
You might also want to add debug logging for the regex pattern matching results in tmt/steps/prepare/install.py, for the following reasons:

### Benefits of This Logging

  1. Troubleshooting: when package failures aren't detected, you can see exactly what the package manager output was and whether the patterns matched
  2. Pattern validation: helps verify that the regex patterns work correctly across different package manager versions
  3. Performance monitoring: shows if certain patterns never match (indicating they might be obsolete)
  4. User support: provides detailed information when users report attribution issues

This debug logging would only appear when running with verbose logging enabled (-v or --debug), so it won't clutter normal output but provides valuable insight when needed.

Here is an example change:

def extract_package_name_from_package_manager_output(self, output: str) -> Iterator[str]:
    """Extract package names from package manager output using regex patterns."""

    self.logger.debug(f"Analyzing package manager output for failed packages:\n{output}")

    matched_packages = []
    for i, pattern in enumerate(self._PACKAGE_NAME_IN_PM_OUTPUT_PATTERNS):
        pattern_matches = []
        for match in pattern.finditer(output):
            package_name = match.group(1)
            pattern_matches.append(package_name)
            matched_packages.append(package_name)
            yield package_name

        if pattern_matches:
            self.logger.debug(f"Pattern {i} matched packages: {pattern_matches}")
        else:
            self.logger.debug(f"Pattern {i} found no matches")

    if not matched_packages:
        self.logger.debug("No failed packages extracted from output")
    else:
        self.logger.debug(f"Total extracted packages: {matched_packages}")

@skycastlelily (Collaborator) left a comment

Other than the nitpick and extending test suggestion, LGTM

Comment on lines +297 to +628
for dependency in test.require:
    required_dependencies_to_tests[str(dependency)].append(test_name)
for dependency in test.recommend:
    recommended_dependencies_to_tests[str(dependency)].append(test_name)
Contributor

I think this would include the library dependency types also

Contributor Author

It does work with uninstallable library:

        package: 5 packages requested
            bash
            coreutils
            grep
            library(yum/false-library)
            sed
        
            Required packages failed to install, aborting:
                library(yum/false-library): required by: /tests/FIRST
        fail: Command 'rpm -q --whatprovides coreutils 'library(yum/false-library)' sed grep bash || dnf5 install -y  coreutils 'library(yum/false-library)' sed grep bash' returned 1.

however only if the library is requested in this form: library(yum/false-library).
Due to recent changes - beaker-tasks repository being dropped from Testing Farm Image Mode - this format is now needed:

require:
  - type: library
    url: https://github.com/beakerlib/yum
    name: /false-library

However this ends up with an "After beakerlib processing, tests may have only simple requirements" error, which is kind of confusing for the user. It originates from the Prepare class, where it checks whether a dependency is 'simple' and throws an exception if it isn't, so it does not get a chance to pass the info to anything else. Not sure how to proceed here..

Contributor

> It does work with uninstallable library [...] however only if the library is requested in this form: library(yum/false-library). Due to recent changes - beaker-tasks repository being dropped from Testing Farm Image Mode - this format is now needed [...] However this ends up with After beakerlib processing, tests may have only simple requirements error, which is kind of confusing for the user. [...] Not sure how to proceed here..

This error message appears because the library was either not recognized, failed to install, or something similar, and remained on the list. First, the libraries are resolved, cloned, installed, etc., and removed from the require and recommend lists; then this check verifies that only package-like items are left. So something is not working correctly in the processing of libraries on the list of required packages.

@LecrisUT could it be connected to one of your PRs?

Contributor

> Due to recent changes - beaker-tasks repository being dropped from Testing Farm Image Mode - this format is now needed:

Would be nice if we could officially drop it. I have not done the impact check. On the Fedora side there is none, so probably beaker-tasks is the only provider for those?

> However this ends up with After beakerlib processing, tests may have only simple requirements error, which is kind of confusing for the user. [...] Not sure how to proceed here..

The error handling for that should be done in the discover step directly, not in PrepareInstall. I need to check what is going on there, because I thought that would already be the case; can you help me with a minimal reproducer?

As for proceeding with this, can you filter out all Library types and, for now, check explicitly for BeakerlibLibrary + .format == "rpm", as those are the only sources of library(...)? All the other Library types we drop and try to revisit after the Library refactors.

> So something is not working correctly in the processing of libraries on the list of required packages.
>
> @LecrisUT could it be connected to one of your PRs?

Yes and no. I have not touched that logic yet, so cannot say for certain what is going on, but something is wrong for sure. The main logic that is handling Library dependencies is in

def dependencies(
    *,
    original_require: list[Dependency],
    original_recommend: Optional[list[Dependency]] = None,
    parent: Optional[tmt.utils.Common] = None,
    imported_lib_ids: ImportedIdentifiersType = None,
    logger: tmt.log.Logger,
    source_location: Optional[Path] = None,
    target_location: Optional[Path] = None,
) -> LibraryDependenciesType:

def install_libraries(self, source: Path, target: Path) -> None:
    """
    Install required beakerlib libraries for discovered tests.

    :param source: Source directory of the tests.
    :param target: Target directory where libraries should be installed.
    """

My understanding so far is that it tries to go through the list, pick out all the Library types, and drop all of them unless they were defined as library(yum/false-library), in which case it forwards them as normal dependencies for PrepareInstall to handle [1]. Thinking about it more, normal Library types that are not BeakerlibLibrary + .format == "rpm" should really not be present there, so maybe there is a regression; let me check more closely.

Footnotes

  [1] The logic in there is super convoluted and unnecessary; will try to refactor it

Contributor Author

Reproducer: any test with a non-existing library like this

 - type: library
   url: https://github.com/beakerlib/yum
   name: /false-library

or just

- url: https://github.com/beakerlib/yum
  name: /false-library

in the require section of its main.fmf. Running it results in https://github.com/teemtee/tmt/blob/main/tmt/base.py#L712-L735 throwing

fail: Invalid requirement: DependencyFmfId(fmf_root=None, git_root=None, default_branch=None, url='https://github.com/beakerlib/yum', ref=None, path=None, name='/false-library', destination=None, nick=None, type='library')

plus the exception

plan failed.

The exception was caused by 1 earlier exceptions

Cause number 1:

    After beakerlib processing, tests may have only simple requirements

coming from here https://github.com/teemtee/tmt/blob/main/tmt/steps/prepare/__init__.py#L275-L317

Contributor

Yeah, I'd expect these library-like requirements to be consumed by the layer responsible for beakerlib libraries and other complex requirements.

Contributor Author

I've made code changes to address the libraries problem. Now it raises at the Discover level without passing anything to Prepare.

Having uninstallable library results in:

        fail: Failed to process beakerlib libraries (/bad-library) for test '/tests/SECOND', '/tests/THIRD'.
        fail: Failed to process beakerlib libraries (/false-library) for test '/tests/THIRD'.
        fail: Failed to process beakerlib libraries (/wrong-library) for test '/tests/FIRST', '/tests/SECOND', '/tests/THIRD'.
    report
        how: display
        summary: no results found
    cleanup
        warn: Nothing to cleanup, no guests provisioned.

plan failed.

The exception was caused by 1 earlier exceptions

Cause number 1:

    Failed to process beakerlib libraries.

@bajertom bajertom force-pushed the tbajer-improve-logging-of-requires branch from 34eae31 to b86927f Compare January 21, 2026 21:38
@bajertom bajertom force-pushed the tbajer-improve-logging-of-requires branch from b86927f to 2c6a8b2 Compare January 23, 2026 18:00
@bajertom bajertom requested a review from LecrisUT January 23, 2026 18:05
_UNABLE_TO_LOCATE_PATTERN = re.compile(r'unable to locate package\s+([^\s]+)', re.IGNORECASE)
_NO_SUCH_PACKAGE_PATTERN = re.compile(r'ERROR:\s+([^\s:]+):\s+No such package', re.IGNORECASE)

_FAILED_PACKAGE_INSTALLATION_PATTERNS = [_UNABLE_TO_LOCATE_PATTERN, _NO_SUCH_PACKAGE_PATTERN]
Contributor

More polishing :) If _UNABLE_TO_LOCATE_PATTERN is not used, it does not need to have a name, and it can be simply an item of the _FAILED_PACKAGE_INSTALLATION_PATTERNS list:

_FAILED_PACKAGE_INSTALLATION_PATTERNS = [
    re.compile(r'unable to locate package\s+([^\s]+)', re.IGNORECASE),
    re.compile(r'ERROR:\s+([^\s:]+):\s+No such package', re.IGNORECASE)
]
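As a quick sanity check of the two patterns in this suggestion, here is a standalone snippet; the sample failure lines are illustrative, not captured from a real package manager run:

```python
import re

_FAILED_PACKAGE_INSTALLATION_PATTERNS = [
    re.compile(r'unable to locate package\s+([^\s]+)', re.IGNORECASE),
    re.compile(r'ERROR:\s+([^\s:]+):\s+No such package', re.IGNORECASE),
]

# Illustrative apt-style and apk-style failure lines (made up for this demo)
sample_output = (
    "E: Unable to locate package missing\n"
    "ERROR: absent: No such package\n"
)

found = [
    match.group(1)
    for pattern in _FAILED_PACKAGE_INSTALLATION_PATTERNS
    for match in pattern.finditer(sample_output)
]
print(found)  # → ['missing', 'absent']
```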

)

for line in lines:
    self._logger.fail(line)
Contributor

You can emit these lines right from the loop above, no need to collect them in lines.

}

for library_name in library_names:
    library_failures[library_name].add(test.name)
Contributor

I think it would be nice to also report the other unresolved dependencies - we learn about libraries only, but since we are in the business of reporting unresolved libraries, we can take it all the way. There's at least DependencyFile. Maybe it would be possible to collect unresolved dependencies in general, e.g. separate them into buckets by their class names, then by their "name", and collect "their" tests there? A final report would do the libraries in some nice way, then the DependencyFile, and then if there's any other class name left, some generic "don't know how to present it nicer yet, but this we failed to resolve" output.

Maybe something along these lines:

# Collect all unresolved dependencies, per class, each dependency with its own list of tests.
unresolved_dependencies = collections.defaultdict(lambda: collections.defaultdict(list))

...

for dependency in (*test.require, *test.recommend):
    if isinstance(dependency, DependencySimple):
        continue

    unresolved_dependencies[dependency.__class__.__name__][dependency].append(test)

...

# This is the end, right?

# Report unresolved libraries
for dependency, tests in unresolved_dependencies.pop(tmt.base.DependencyFmfId.__name__, {}).items():
    test_names = ', '.join(f"'{name}'" for name in sorted(test.name for test in tests))
    self._logger.fail(
        f"Failed to process beakerlib library {dependency.name or '/'}, required by tests {test_names}."
    )

if unresolved_dependencies:
    # Rough reporting for dependencies we don't know how to report in a better way
    ...
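To show the bucketing idea end to end, here is a minimal runnable version; DependencySimple and DependencyFmfId below are stub stand-ins for tmt's real classes, and the demo keys buckets by dependency name so equal libraries merge:

```python
import collections

# Stub stand-ins for tmt's dependency classes (not the real implementations)
class DependencySimple(str):
    pass

class DependencyFmfId:
    def __init__(self, name: str) -> None:
        self.name = name

# class name -> dependency name -> tests that need it
unresolved = collections.defaultdict(lambda: collections.defaultdict(list))

tests = {
    "/tests/FIRST": [DependencySimple("bash"), DependencyFmfId("/false-library")],
    "/tests/SECOND": [DependencyFmfId("/false-library")],
}

for test_name, dependencies in tests.items():
    for dependency in dependencies:
        if isinstance(dependency, DependencySimple):
            continue  # simple package requirements are installed elsewhere
        unresolved[dependency.__class__.__name__][dependency.name].append(test_name)

# Report unresolved libraries first, then anything left over generically
for name, test_names in unresolved.pop(DependencyFmfId.__name__, {}).items():
    quoted = ", ".join(f"'{t}'" for t in sorted(test_names))
    print(f"Failed to process beakerlib library {name}, required by tests {quoted}.")

for class_name, buckets in unresolved.items():
    print(f"Unresolved {class_name} dependencies: {sorted(buckets)}")
```

Keying the inner bucket by name (instead of by instance, as in the sketch above) avoids duplicate buckets when the dependency class does not define value-based equality.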


Labels

  • ci | full test: Pull request is ready for the full test execution
  • code | logging: Changes related to debug logging
  • step | prepare: Stuff related to the prepare step

Projects

Status: implement

Development

Successfully merging this pull request may close these issues.

Improve logging of requires

8 participants