Conversation

@TalShafir1 (Member)

No description provided.

@gemini-code-assist (Contributor)

Summary of Changes

Hello @TalShafir1, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request primarily focuses on a significant architectural change within the mp tool, transitioning its parallel execution strategy from using multiprocessing (processes) to asyncio.to_thread (threads). This shift aims to leverage Python's asynchronous capabilities for more efficient concurrency, particularly for I/O-bound operations, while also simplifying the underlying task management. The changes involve introducing new asynchronous utility functions, updating configuration settings to reflect the thread-based model, and performing minor refactoring in display components for improved code consistency. It's worth noting an apparent inconsistency where playbooks_repo.py still uses multiprocessing.Pool but now retrieves a 'threads' number, which might warrant further review.
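As a general illustration of the shift (a minimal sketch, not the project's actual code; build_item and the example paths are placeholders):

import asyncio
from multiprocessing import Pool


def build_item(path: str) -> str:
    # Placeholder for a blocking, largely I/O-bound build step.
    return f"built {path}"


# Before: process-based parallelism with multiprocessing.Pool.
def build_all(paths: list[str]) -> list[str]:
    with Pool() as pool:
        return pool.map(build_item, paths)


# After: the same blocking function off-loaded to worker threads from an event loop.
async def build_all_async(paths: list[str]) -> list[str]:
    return await asyncio.gather(*(asyncio.to_thread(build_item, p) for p in paths))


# Usage: print(asyncio.run(build_all_async(["Integration_A", "Integration_B"])))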

Highlights

  • Concurrency Model Shift: The core mp tool has been refactored to utilize Python's asyncio.to_thread for parallel execution, moving away from multiprocessing.Pool in several key areas like building integrations and running pre-build tests.
  • Thread-Based Task Management: New asynchronous utility functions (threaded_build_items, threaded_test_items, threaded_validate_items, run_threaded) have been introduced in mp.core.utils.common.utils.py to manage tasks concurrently using threads, with configurable thread limits via asyncio.Semaphore (a sketch of this pattern is shown after the highlights below).
  • Configuration Update: Configuration keys and associated logic for managing parallel execution have been updated from 'processes' to 'threads' across the mp.config and mp.core.config modules. This includes a new MAX_THREADS_SENTINEL value for dynamic thread allocation.
  • Display Class Refactoring: Several display-related classes (CliDisplay, HtmlFormat, MarkdownFormat) across run_pre_build_tests and validate modules have had their internal report variable names consistently renamed (e.g., tests_report to report, validation_results to results) for improved clarity and consistency.
  • Version and Dependency Updates: The mp package version has been incremented to 1.23.2, and dependencies ty and uv have been updated to their latest versions in uv.lock.
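
To make the thread-limiting behaviour concrete, here is a minimal sketch under assumptions: the helper signatures, the sentinel value of -1, and the cpu_count fallback are illustrative only, and the PR's actual helpers in mp.core.utils may obtain their limit differently (e.g. from the tool's configuration rather than a parameter).

import asyncio
import os
from collections.abc import Callable, Iterable
from typing import TypeVar

_T = TypeVar("_T")
_R = TypeVar("_R")

MAX_THREADS_SENTINEL = -1  # assumption: -1 requests the maximum available threads


def _thread_limit(threads: int) -> int:
    # Assumption: the sentinel expands to a machine-dependent default.
    return (os.cpu_count() or 1) if threads == MAX_THREADS_SENTINEL else threads


async def run_threaded(semaphore: asyncio.Semaphore, fn: Callable[..., _R], *args: object) -> _R:
    # Cap concurrency with the semaphore, then off-load the blocking call to a worker thread.
    async with semaphore:
        return await asyncio.to_thread(fn, *args)


async def threaded_build_items(
    builder: Callable[[_T], _R],
    items: Iterable[_T],
    threads: int = MAX_THREADS_SENTINEL,
) -> list[_R]:
    semaphore = asyncio.Semaphore(_thread_limit(threads))
    return await asyncio.gather(*[run_threaded(semaphore, builder, item) for item in items])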


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot left a comment

Code Review

This pull request successfully refactors the multiprocessing logic to use asyncio with threads for parallel execution, which is a great improvement for I/O-bound tasks. The introduction of the threaded_*_items helper functions in mp.core.utils centralizes the concurrency logic nicely.

My review focuses on ensuring this change is applied consistently and correctly across the codebase. I've found a few areas that need attention:

  • A critical bug in the playbook validation flow where it incorrectly filters for successful validations instead of failures (a minimal illustration follows this comment).
  • A couple of functions in the playbook repository that were missed in the migration and still use multiprocessing.Pool.
  • Some minor inconsistencies in naming and documentation related to the "processes" vs. "threads" change.

Overall, this is a solid refactoring. Addressing these points will complete the migration and improve the code's consistency.
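
For illustration only, the first bullet describes a filter inverted on the success flag; with hypothetical names (results and is_success are not taken from this PR), the intended fix is the kind of one-line change shown here:

# Hypothetical names, shown only to illustrate the inverted filter.
failed = [result for result in results if result.is_success]      # bug: collects successes
failed = [result for result in results if not result.is_success]  # intended: collects failures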

@TalShafir1
Copy link
Member Author

/gemini review

@gemini-code-assist bot left a comment

Code Review

This pull request introduces a significant refactoring to transition the project from multiprocessing to asyncio and anyio for improved concurrency, particularly in build and validation processes. Key changes include updating pyproject.toml to bump the version and add anyio as a dependency, and modifying various build and validation modules (integrations_repo.py, playbooks_repo.py, dependencies.py, integration.py, run_pre_build_tests/__init__.py, validate/flow/integrations/flow.py, validate/flow/playbooks/flow.py) to use async functions, asyncio.gather, and new mp.core.utils async helpers like run_threaded and async_build_items. The mp.core.unix module's run_pip_command and download_wheels_from_requirements were also converted to async. The configuration for parallel execution was updated, renaming 'processes' to 'threads' in mp.config and mp.core.config, and allowing a -1 value to signify maximum available threads. Display classes in mp.core.display_utils, mp.run_pre_build_tests/display, and mp.validate/display were slightly refactored for consistency. A review comment highlights that the _is_processes_in_range function in mp.config/__init__.py should be renamed to _is_threads_in_range to match the new terminology and avoid confusion, with corresponding updates to its call site.
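
As an illustration of the kind of conversion described for mp.core.unix (a hedged sketch only; the real run_pip_command signature, arguments, and error handling are not shown on this page):

import asyncio


async def run_pip_command(*args: str) -> str:
    # Illustrative only: run pip in a subprocess without blocking the event loop.
    process = await asyncio.create_subprocess_exec(
        "pip",
        *args,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.STDOUT,
    )
    stdout, _ = await process.communicate()
    if process.returncode != 0:
        raise RuntimeError(f"pip {' '.join(args)} failed:\n{stdout.decode()}")
    return stdout.decode()


# Hypothetical usage, e.g. downloading wheels for several requirement files concurrently:
# await asyncio.gather(*(run_pip_command("download", "-r", req) for req in requirement_files))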

Comment on lines +103 to +112
 def _set_threads_number(threads: int) -> None:
     if not isinstance(threads, int) or not _is_processes_in_range(threads):
         msg: str = "Processes must be a positive integer or -1 to set the maximum available"
         raise ValueError(msg)

-    mp.core.config.set_processes_number(processes)
+    mp.core.config.set_threads_number(threads)


-def _is_processes_in_range(processes: int) -> bool:
-    return mp.core.config.PROCESSES_MIN_VALUE <= processes <= mp.core.config.PROCESSES_MAX_VALUE
+def _is_processes_in_range(threads: int) -> bool:
+    return threads > 0 or threads == mp.core.config.MAX_THREADS_SENTINEL
Contributor

Severity: medium

The function _is_processes_in_range is now validating the number of threads, but its name still refers to "processes". This could be confusing for future maintenance. It should be renamed to _is_threads_in_range to accurately reflect its purpose, and the call site should be updated accordingly.

Suggested change (replacing the lines quoted above):

def _set_threads_number(threads: int) -> None:
    if not isinstance(threads, int) or not _is_threads_in_range(threads):
        msg: str = "Processes must be a positive integer or -1 to set the maximum available"
        raise ValueError(msg)

    mp.core.config.set_threads_number(threads)


def _is_threads_in_range(threads: int) -> bool:
    return threads > 0 or threads == mp.core.config.MAX_THREADS_SENTINEL

    validator: Callable[[_T], _R],
    items: Iterable[_T],
) -> Iterable[_R]:
    """Run builder functions in parallel threads.
Contributor

Change to "Run validate functions"

        await mp.core.utils.async_build_items(self.build_integration, paths)

-    def build_integration(self, integration_path: Path) -> None:
+    async def build_integration(self, integration_path: Path) -> None:
Contributor

Should this be a class-private function?

Contributor

Revert this to use multiprocessing.

Comment on lines +58 to +68
async def threaded_test_items(
    builder: Callable[[_T, _T], _R],
    items: Iterable[tuple[_T, _T]],
) -> Iterable[_R]:
    """Run builder functions in parallel threads.

    Returns:
        An iterable of all the return values of the builder functions.
    """
    return await asyncio.gather(*[run_threaded(builder, p1, p2) for p1, p2 in items])
Contributor

Revert if we remove the threaded tests

@AmitJ98 self-requested a review on December 31, 2025 at 09:59.
@TalShafir1 added the Enhancement (New feature or request) and mp (mp related change) labels on Jan 4, 2026.