Test/cli jira integration #5
Conversation
Add 4 tests covering the Jira extraction with quality metrics in the CLI:
- Full flow with a single issue and comment
- Multiple issues across multiple projects
- Empty-results handling
- Changelog retrieval for reopen detection

This improves coverage for the Feature 003 CLI integration code.
Add comprehensive tests for CLI functionality:
- Interactive project selection (L option, EOF handling, retries)
- GitHub analyzer run flow with close() in finally block
- Error handling (KeyboardInterrupt, ConfigurationError)
- CLI argument overrides (--output, --repos)
- Auto-detect sources with no available sources
- Interactive prompts when CLI args not provided
- Truncated Jira project list display (>5 projects)

Coverage improved from 89% to 99% (691 tests pass).
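As a minimal sketch of the mocking pattern such EOF-handling tests typically rely on (the select_projects helper below is a hypothetical stand-in, not the project's actual code), stdin closure can be simulated by patching builtins.input to raise EOFError:

```python
from unittest.mock import patch


def select_projects():
    # Hypothetical stand-in for the CLI's interactive prompt:
    # returns None when the user closes stdin (Ctrl-D).
    try:
        return input("Select a project: ")
    except EOFError:
        return None


def test_eof_returns_none():
    # Simulate the user pressing Ctrl-D at the prompt.
    with patch("builtins.input", side_effect=EOFError):
        assert select_projects() is None


test_eof_returns_none()
```

Patching the built-in rather than feeding a pipe keeps the test hermetic: no subprocess or pty is needed to exercise the EOF branch.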
Summary of Changes

Hello @amargiovanni, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly expands the test suite for the CLI, focusing on the newly integrated Jira functionality. It ensures the interactive project selection process is robust and handles various user inputs and edge cases gracefully. Additionally, it validates the end-to-end Jira data extraction, metric calculation, and export within the CLI, alongside improving general error handling and argument parsing test coverage.
Codecov Report

✅ All modified and coverable lines are covered by tests.
Code Review
This pull request adds a comprehensive suite of integration and unit tests for the new Jira CLI integration features. The tests cover interactive project selection, the main extraction flow, error handling, and CLI argument overrides. The coverage of different scenarios and edge cases is excellent. I have two main suggestions to improve the new test code. First, one test doesn't fully verify what its name implies, and I've suggested adding the missing assertion. Second, there is significant code duplication in test setup, which could be refactored into pytest fixtures to improve maintainability. Overall, this is a solid contribution that greatly improves test coverage.
# =============================================================================


class TestJiraIntegrationInCLI:
There's a lot of repeated setup code for creating mock configurations across the test methods in this file (e.g., mock_config, mock_jira_config). This makes the tests verbose and harder to maintain.
Consider using pytest fixtures to handle this setup. This will reduce duplication and make the tests cleaner.
You could define fixtures like this at the module level or within a conftest.py:

@pytest.fixture
def mock_analyzer_config(tmp_path):
    """Create a mock AnalyzerConfig."""
    mock_config = Mock(spec=AnalyzerConfig)
    mock_config.output_dir = tmp_path
    mock_config.days = 30
    mock_config.verbose = False
    mock_config.validate = Mock()
    return mock_config

@pytest.fixture
def mock_jira_config():
    """Create a mock JiraConfig."""
    mock_jira_config = Mock()
    mock_jira_config.base_url = "https://test.atlassian.net"
    return mock_jira_config

Then, your test methods can simply accept these fixtures as arguments:

def test_jira_extraction_full_flow(self, tmp_path, mock_analyzer_config, mock_jira_config):
    # ... test logic using mock_analyzer_config and mock_jira_config
    # No need to create them inside the test anymore.

Applying this pattern to all new test classes in this file (TestJiraIntegrationInCLI, TestGitHubAnalyzerInMain, etc.) would significantly improve the code's maintainability.
def test_many_jira_projects_shows_truncated_list(self, tmp_path):
    """Test more than 5 Jira projects shows truncated list."""
    from datetime import datetime, timezone

    from src.github_analyzer.api.jira_client import JiraIssue

    mock_config = Mock(spec=AnalyzerConfig)
    mock_config.output_dir = tmp_path
    mock_config.days = 30
    mock_config.verbose = False
    mock_config.validate = Mock()

    mock_jira_config = Mock()
    mock_jira_config.base_url = "https://test.atlassian.net"
    mock_jira_config.jira_projects_file = "jira_projects.txt"

    # 7 projects (more than 5)
    project_keys = ["PROJ1", "PROJ2", "PROJ3", "PROJ4", "PROJ5", "PROJ6", "PROJ7"]

    test_issue = JiraIssue(
        key="PROJ1-1",
        summary="Test",
        description="Test",
        status="Done",
        issue_type="Task",
        priority="Medium",
        assignee="Test",
        reporter="Test",
        created=datetime(2025, 11, 1, tzinfo=timezone.utc),
        updated=datetime(2025, 11, 1, tzinfo=timezone.utc),
        resolution_date=datetime(2025, 11, 1, tzinfo=timezone.utc),
        project_key="PROJ1",
    )

    mock_client = Mock()
    mock_client.search_issues.return_value = iter([test_issue])
    mock_client.get_comments.return_value = []
    mock_client.get_issue_changelog.return_value = []

    with (
        patch("sys.argv", ["prog", "--sources", "jira", "--quiet", "--days", "30", "--full"]),
        patch.dict(
            os.environ,
            {
                "JIRA_URL": "https://test.atlassian.net",
                "JIRA_EMAIL": "test@example.com",
                "JIRA_API_TOKEN": "test_token",
            },
            clear=True,
        ),
        patch.object(main_module, "AnalyzerConfig") as MockConfig,
        patch.object(main_module, "JiraConfig") as MockJiraConfig,
        patch.object(main_module, "select_jira_projects", return_value=project_keys),
        patch.object(main_module, "prompt_yes_no", return_value=True),
        patch("src.github_analyzer.api.jira_client.JiraClient", return_value=mock_client),
    ):
        MockConfig.from_env.return_value = mock_config
        MockJiraConfig.from_env.return_value = mock_jira_config

        result = main()

    assert result == 0
This test is named test_many_jira_projects_shows_truncated_list and its docstring says it tests that a truncated list is shown, but it doesn't actually assert that the output is truncated. It only asserts that the main function returns 0.
To make this test effective and match its description, you should patch TerminalOutput and assert that its log method is called with the truncation message.
Here's how you can update the test:
1. Add from unittest.mock import call to your imports inside the test method.

2. Patch TerminalOutput in your with block:

   with (
       # ... other patches
       patch.object(main_module, "TerminalOutput") as MockOutput,
   ):
       mock_output_instance = MockOutput.return_value
       # ... rest of the setup
       main()

3. Add an assertion after main() is called to check the log output:

   # Verify that the truncated list message was logged
   mock_output_instance.log.assert_any_call(" ... and 2 more", "info")