Conversation

@binary-husky binary-husky commented Dec 24, 2025

Description

Support SwanLab as a logger.

@gemini-code-assist

Summary of Changes

Hello @binary-husky, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates the SwanLab experiment tracking platform by introducing a new SwanlabMonitor class. This addition allows users to effectively log, visualize, and manage their machine learning experiments with SwanLab, offering detailed configuration options for experiment setup, data logging, and run lifecycle management. The implementation focuses on flexible initialization and robust data handling, including support for complex tabular data structures.

Highlights

  • SwanLab Integration: A new SwanlabMonitor class enables integration with the SwanLab experiment tracking platform for monitoring machine learning experiments.
  • Configurable Experiment Tracking: The SwanlabMonitor is configured through config.monitor.monitor_args, which lets users specify project details, workspace, experiment name, description, tags, logging mode, and resume policy (see the configuration sketch after this list).
  • API Key Management: SwanLab login is handled by checking the SWANLAB_API_KEY environment variable, allowing secure and flexible authentication.
  • Enhanced Data Logging: The monitor provides dedicated methods for logging scalar metrics (log) and tabular data (log_table); log_table can convert pandas DataFrames into SwanLab ECharts tables or CSV strings (a logging sketch also follows this list).
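The summary doesn't include the initialization code itself, so here is a minimal sketch of how monitor_args could drive SwanLab setup, assuming swanlab's public login/init API; the helper name and the exact argument keys are illustrative, not the PR's actual implementation:

```python
import os

import swanlab


def init_swanlab(monitor_args: dict):
    """Hypothetical helper mirroring the highlights above; not the PR's code."""
    # Prefer an explicit api_key in monitor_args, otherwise fall back to
    # the SWANLAB_API_KEY environment variable mentioned above.
    api_key = monitor_args.get("api_key") or os.getenv("SWANLAB_API_KEY")
    if api_key:
        swanlab.login(api_key=api_key)
    return swanlab.init(
        project=monitor_args.get("project"),
        workspace=monitor_args.get("workspace"),
        experiment_name=monitor_args.get("name"),
        description=monitor_args.get("description"),
        mode=monitor_args.get("mode", "cloud"),  # "cloud", "local", or "disabled"
    )
```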
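Similarly, a hedged sketch of the DataFrame conversion described in the last bullet; it assumes swanlab.echarts.Table follows the pyecharts-style .add(headers, rows) interface and that swanlab.Text is available for the CSV fallback — if either assumption does not hold, the fallback branch shows the intent:

```python
import pandas as pd

import swanlab


def log_table(name: str, df: pd.DataFrame, step: int | None = None):
    """Hypothetical sketch: log a DataFrame as an ECharts table, else as CSV text."""
    try:
        table = swanlab.echarts.Table()
        # Headers come from the DataFrame columns, rows from its values.
        table.add(list(df.columns), df.values.tolist())
        swanlab.log({name: table}, step=step)
    except Exception:
        # CSV string fallback keeps the data visible without ECharts support.
        swanlab.log({name: swanlab.Text(df.to_csv(index=False))}, step=step)
```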


@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces SwanlabMonitor for experiment tracking. The implementation is a solid foundation. I've provided feedback focusing on improving robustness, consistency, and maintainability. Key points include a critical fix for API key handling to support all documented authentication methods, improving exception handling to prevent silent failures, and aligning the new monitor with existing conventions in the project for default arguments and documentation. These changes will help make the new monitor more reliable and user-friendly.

@modelscope modelscope deleted a comment from gemini-code-assist bot Dec 24, 2025
binary-husky and others added 3 commits December 25, 2025 03:00
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@binary-husky binary-husky requested a review from pan-x-c December 24, 2025 19:03
```python
This monitor integrates with SwanLab (https://swanlab.cn/) to track experiments.
Supported monitor_args in config.monitor.monitor_args:
- api_key (Optional[str]): API key for swanlab.login(). If omitted, will read from env
```

Fix the indentation.

@pan-x-c

pan-x-c commented Dec 25, 2025

/unittest-module-utils

@github-actions

Summary

| Tests 📝 | Passed ✅ | Failed ❌ | Skipped ⏭️ | Other ❓ | Flaky 🍂 | Duration ⏱️ |
| --- | --- | --- | --- | --- | --- | --- |
| 28 | 27 | 1 | 0 | 0 | 0 | 1m |

Failed Tests

| Failed Tests ❌ | Fail Message |
| --- | --- |
| tests/utils/swanlab_test.py::TestSwanlabMonitor::test_swanlab_monitor_smoke | The test failed in the call phase due to an assertion error |

Tests

| Test Name | Status | Flaky | Duration |
| --- | --- | --- | --- |
| tests/utils/eval_utils_test.py::TestComputeScore::test_both_boxed_and_equivalent | ✅ | | 53ms |
| tests/utils/eval_utils_test.py::TestComputeScore::test_both_boxed_and_not_equivalent | ✅ | | 1ms |
| tests/utils/eval_utils_test.py::TestComputeScore::test_empty_ground_truth | ✅ | | 2ms |
| tests/utils/eval_utils_test.py::TestComputeScore::test_empty_solution_string | ✅ | | 1ms |
| tests/utils/eval_utils_test.py::TestComputeScore::test_multiple_boxed_answers_in_solution | ✅ | | 2ms |
| tests/utils/eval_utils_test.py::TestComputeScore::test_solution_boxed_truth_raw_and_equivalent | ✅ | | 1ms |
| tests/utils/eval_utils_test.py::TestComputeScore::test_solution_boxed_truth_raw_and_not_equivalent | ✅ | | 1ms |
| tests/utils/eval_utils_test.py::TestComputeScore::test_solution_not_boxed | ✅ | | 1ms |
| tests/utils/eval_utils_test.py::TestComputeScore::test_solution_raw_and_ground_truth_boxed_equivalent | ✅ | | 1ms |
| tests/utils/eval_utils_test.py::TestMathEvalUtils::test_extract_answer | ✅ | | 4ms |
| tests/utils/eval_utils_test.py::TestMathEvalUtils::test_verify_math_answer | ✅ | | 144ms |
| tests/utils/eval_utils_test.py::TestEvalUtils::test_is_equiv | ✅ | | 5ms |
| tests/utils/log_test.py::LogTest::test_actor_log | ✅ | | 7.2s |
| tests/utils/log_test.py::LogTest::test_group_by_node | ✅ | | 1.8s |
| tests/utils/log_test.py::LogTest::test_no_actor_log | ✅ | | 709ms |
| tests/utils/plugin_test.py::TestPluginLoader::test_load_plugins_local_0__workspace_tests_utils_plugins | ✅ | | 99ms |
| tests/utils/plugin_test.py::TestPluginLoader::test_load_plugins_local_1_tests_utils_plugins | ✅ | | 96ms |
| tests/utils/plugin_test.py::TestPluginLoader::test_load_plugins_remote_0__workspace_tests_utils_plugins | ✅ | | 12.2s |
| tests/utils/plugin_test.py::TestPluginLoader::test_load_plugins_remote_1_tests_utils_plugins | ✅ | | 10.6s |
| tests/utils/plugin_test.py::TestPluginLoader::test_passing_custom_class_0__workspace_tests_utils_plugins | ✅ | | 6.0s |
| tests/utils/plugin_test.py::TestPluginLoader::test_passing_custom_class_1_tests_utils_plugins | ✅ | | 5.7s |
| tests/utils/registry_test.py::TestRegistryWithRay::test_dynamic_import | ✅ | | 5.7s |
| tests/utils/registry_test.py::TestRegistry::test_algorithm_registry_mapping | ✅ | | 14ms |
| tests/utils/registry_test.py::TestRegistry::test_buffer_module_registry_mapping | ✅ | | 493ms |
| tests/utils/registry_test.py::TestRegistry::test_common_module_registry_mapping | ✅ | | 47ms |
| tests/utils/registry_test.py::TestRegistry::test_register_module | ✅ | | 1ms |
| tests/utils/registry_test.py::TestRegistry::test_utils_module_registry_mapping | ✅ | | 1ms |
| tests/utils/swanlab_test.py::TestSwanlabMonitor::test_swanlab_monitor_smoke | ❌ | | 1ms |

Github Test Reporter by CTRF 💚

@pan-x-c

pan-x-c commented Dec 25, 2025

/unittest-module-utils

@github-actions

Summary

| Tests 📝 | Passed ✅ | Failed ❌ | Skipped ⏭️ | Other ❓ | Flaky 🍂 | Duration ⏱️ |
| --- | --- | --- | --- | --- | --- | --- |
| 28 | 27 | 1 | 0 | 0 | 0 | 1m 4s |

Failed Tests

| Failed Tests ❌ | Fail Message |
| --- | --- |
| tests/utils/swanlab_test.py::TestSwanlabMonitor::test_swanlab_monitor_smoke | The test failed in the call phase |

Tests

| Test Name | Status | Flaky | Duration |
| --- | --- | --- | --- |
| tests/utils/eval_utils_test.py::TestComputeScore::test_both_boxed_and_equivalent | ✅ | | 210ms |
| tests/utils/eval_utils_test.py::TestComputeScore::test_both_boxed_and_not_equivalent | ✅ | | 1ms |
| tests/utils/eval_utils_test.py::TestComputeScore::test_empty_ground_truth | ✅ | | 2ms |
| tests/utils/eval_utils_test.py::TestComputeScore::test_empty_solution_string | ✅ | | 1ms |
| tests/utils/eval_utils_test.py::TestComputeScore::test_multiple_boxed_answers_in_solution | ✅ | | 2ms |
| tests/utils/eval_utils_test.py::TestComputeScore::test_solution_boxed_truth_raw_and_equivalent | ✅ | | 1ms |
| tests/utils/eval_utils_test.py::TestComputeScore::test_solution_boxed_truth_raw_and_not_equivalent | ✅ | | 1ms |
| tests/utils/eval_utils_test.py::TestComputeScore::test_solution_not_boxed | ✅ | | 1ms |
| tests/utils/eval_utils_test.py::TestComputeScore::test_solution_raw_and_ground_truth_boxed_equivalent | ✅ | | 1ms |
| tests/utils/eval_utils_test.py::TestMathEvalUtils::test_extract_answer | ✅ | | 4ms |
| tests/utils/eval_utils_test.py::TestMathEvalUtils::test_verify_math_answer | ✅ | | 144ms |
| tests/utils/eval_utils_test.py::TestEvalUtils::test_is_equiv | ✅ | | 5ms |
| tests/utils/log_test.py::LogTest::test_actor_log | ✅ | | 6.7s |
| tests/utils/log_test.py::LogTest::test_group_by_node | ✅ | | 1.8s |
| tests/utils/log_test.py::LogTest::test_no_actor_log | ✅ | | 708ms |
| tests/utils/plugin_test.py::TestPluginLoader::test_load_plugins_local_0__workspace_tests_utils_plugins | ✅ | | 98ms |
| tests/utils/plugin_test.py::TestPluginLoader::test_load_plugins_local_1_tests_utils_plugins | ✅ | | 94ms |
| tests/utils/plugin_test.py::TestPluginLoader::test_load_plugins_remote_0__workspace_tests_utils_plugins | ✅ | | 13.2s |
| tests/utils/plugin_test.py::TestPluginLoader::test_load_plugins_remote_1_tests_utils_plugins | ✅ | | 11.6s |
| tests/utils/plugin_test.py::TestPluginLoader::test_passing_custom_class_0__workspace_tests_utils_plugins | ✅ | | 6.5s |
| tests/utils/plugin_test.py::TestPluginLoader::test_passing_custom_class_1_tests_utils_plugins | ✅ | | 5.9s |
| tests/utils/registry_test.py::TestRegistryWithRay::test_dynamic_import | ✅ | | 6.2s |
| tests/utils/registry_test.py::TestRegistry::test_algorithm_registry_mapping | ✅ | | 15ms |
| tests/utils/registry_test.py::TestRegistry::test_buffer_module_registry_mapping | ✅ | | 413ms |
| tests/utils/registry_test.py::TestRegistry::test_common_module_registry_mapping | ✅ | | 45ms |
| tests/utils/registry_test.py::TestRegistry::test_register_module | ✅ | | 1ms |
| tests/utils/registry_test.py::TestRegistry::test_utils_module_registry_mapping | ✅ | | 1ms |
| tests/utils/swanlab_test.py::TestSwanlabMonitor::test_swanlab_monitor_smoke | ❌ | | 2ms |

Github Test Reporter by CTRF 💚


@pan-x-c pan-x-c left a comment


others LGTM

```python
cls.env_keys = ["SWANLAB_API_KEY", "SWANLAB_APIKEY", "SWANLAB_KEY", "SWANLAB_TOKEN"]
cls._original_env = {k: os.environ.get(k) for k in cls.env_keys}
if not any(os.getenv(k) for k in cls.env_keys):
    os.environ["SWANLAB_API_KEY"] = "dummy_key_for_smoke_test"
```
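For context, a minimal sketch of the test-class shape this setup usually implies, assuming a unittest.TestCase layout; the class name and the teardown are illustrative, not the PR's actual test:

```python
import os
import unittest


class TestSwanlabMonitorEnv(unittest.TestCase):
    """Hypothetical shape of the test class quoted above."""

    env_keys = ["SWANLAB_API_KEY", "SWANLAB_APIKEY", "SWANLAB_KEY", "SWANLAB_TOKEN"]

    @classmethod
    def setUpClass(cls):
        # Capture the pre-test values so they can be restored afterwards,
        # and set a dummy key only when none of the variants is present.
        cls._original_env = {k: os.environ.get(k) for k in cls.env_keys}
        if not any(os.getenv(k) for k in cls.env_keys):
            os.environ["SWANLAB_API_KEY"] = "dummy_key_for_smoke_test"

    @classmethod
    def tearDownClass(cls):
        # Restore whatever setUpClass captured, dropping keys that were unset.
        for key, value in cls._original_env.items():
            if value is None:
                os.environ.pop(key, None)
            else:
                os.environ[key] = value
```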

Change the key to avoid SwanLab's API key format check error.

```python
}


@MONITOR.register_module("swanlab")
```

Delete this line and register it directly in default_mapping like the other monitors.
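For illustration, mapping-based registration might look like the sketch below; MONITOR, default_mapping, and the module paths are assumptions about this project's registry conventions, not taken from the diff:

```python
# Hypothetical registry mapping: the monitor name resolves to an import
# path alongside the existing monitors, instead of a decorator call.
# All module paths here are placeholders.
default_mapping = {
    "tensorboard": "trinity.utils.monitor.TensorboardMonitor",
    "wandb": "trinity.utils.monitor.WandbMonitor",
    "swanlab": "trinity.utils.monitor.SwanlabMonitor",
}
```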
