Conversation

@amargiovanni (Contributor)

No description provided.

Add 4 tests covering Jira extraction with quality metrics in the CLI:
- Full flow with single issue and comment
- Multiple issues across multiple projects
- Empty results handling
- Changelog retrieval for reopen detection

This improves coverage for the Feature 003 CLI integration code.

Add comprehensive tests for CLI functionality:
- Interactive project selection (L option, EOF handling, retries)
- GitHub analyzer run flow with close() in finally block
- Error handling (KeyboardInterrupt, ConfigurationError)
- CLI argument overrides (--output, --repos)
- Auto-detect sources with no available sources
- Interactive prompts when CLI args not provided
- Truncated Jira project list display (>5 projects)

Coverage improved from 89% to 99% (691 tests pass).
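
For reviewers who want a concrete picture, below is a minimal sketch of one of the error-handling cases. It assumes the CLI entry point main() traps KeyboardInterrupt and maps it to a non-zero exit code; the module paths, the GitHubAnalyzer patch target, and the argv are illustrative stand-ins, not the PR's actual code:

from unittest.mock import Mock, patch

# Assumed module layout, for illustration only; the real tests import the
# project's actual CLI module.
from src.github_analyzer import main as main_module
from src.github_analyzer.main import main


def test_keyboard_interrupt_is_handled(tmp_path):
    """A KeyboardInterrupt during the run should surface as a failure exit code."""
    mock_config = Mock()
    mock_config.output_dir = tmp_path
    mock_config.validate = Mock()

    with (
        patch("sys.argv", ["prog", "--sources", "github", "--quiet"]),
        patch.object(main_module, "AnalyzerConfig") as MockConfig,
        # Hypothetical patch target: whatever object starts the analysis run.
        patch.object(main_module, "GitHubAnalyzer", side_effect=KeyboardInterrupt),
    ):
        MockConfig.from_env.return_value = mock_config
        result = main()

    # Assumed convention: the CLI catches the interrupt and returns non-zero.
    assert result != 0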
@gemini-code-assist

Summary of Changes

Hello @amargiovanni, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands the test suite for the CLI, focusing on the newly integrated Jira functionality. It ensures the interactive project selection process is robust and handles various user inputs and edge cases gracefully. Additionally, it validates the end-to-end Jira data extraction, metric calculation, and export within the CLI, alongside improving general error handling and argument parsing test coverage.

Highlights

  • Jira Interactive Project Selection Tests: Added comprehensive integration tests for the interactive selection of Jira projects within the CLI, covering various user input scenarios, including selecting by list number, handling EOF errors, retrying on invalid or empty inputs, and gracefully ignoring invalid project keys (a sketch of the stdin-mocking technique these tests rely on follows this list).
  • Jira CLI Extraction Workflow Tests: Introduced unit tests for the CLI's main function to validate the full Jira data extraction workflow, ensuring correct metrics calculation, export of issues and comments, and proper handling of multiple projects, empty results, and issue changelogs for reopen detection.
  • CLI Error Handling and Argument Override Tests: Enhanced test coverage for general CLI robustness, including tests for KeyboardInterrupts, unexpected exceptions, and configuration errors. Also added tests to verify that command-line arguments correctly override configuration settings for output directories and repository files.
  • GitHub Analyzer Flow Tests: Included new tests to ensure the GitHub analyzer flow within the main CLI function operates as expected, verifying analyzer creation, execution, and proper resource closure even in the event of exceptions.
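
The first highlight relies on a technique worth spelling out: the tests script stdin by patching builtins.input, feeding it a fixed sequence of answers (or an EOFError) and asserting on the resulting selection. A minimal sketch, assuming a select_jira_projects helper that prompts until it receives valid input; its real signature and fallback behaviour in the PR may differ:

from unittest.mock import patch

# Assumed import path and signature, for illustration only.
from src.github_analyzer.main import select_jira_projects


def test_selection_retries_on_invalid_input():
    """Empty and invalid answers should be retried until a valid key arrives."""
    # side_effect as a list makes input() return these answers in order.
    with patch("builtins.input", side_effect=["", "NOPE", "PROJ1"]):
        selected = select_jira_projects(["PROJ1", "PROJ2"])
    # Assumed return shape: a list of the chosen project keys.
    assert selected == ["PROJ1"]


def test_selection_handles_eof():
    """Closing stdin (EOF) must not crash the prompt loop."""
    with patch("builtins.input", side_effect=EOFError):
        selected = select_jira_projects(["PROJ1", "PROJ2"])
    # Assumed fallback: some default selection rather than an exception.
    assert isinstance(selected, list)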

@codecov

codecov bot commented Nov 29, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.


@amargiovanni merged commit 3c28953 into main on Nov 29, 2025 (6 checks passed).
@amargiovanni deleted the test/cli-jira-integration branch on November 29, 2025 at 03:36.

@gemini-code-assist bot left a comment


Code Review

This pull request adds a comprehensive suite of integration and unit tests for the new Jira CLI integration features. The tests cover interactive project selection, the main extraction flow, error handling, and CLI argument overrides. The coverage of different scenarios and edge cases is excellent. I have two main suggestions to improve the new test code. First, one test doesn't fully verify what its name implies, and I've suggested adding the missing assertion. Second, there is significant code duplication in test setup, which could be refactored into pytest fixtures to improve maintainability. Overall, this is a solid contribution that greatly improves test coverage.

# =============================================================================


class TestJiraIntegrationInCLI:


Severity: medium

There's a lot of repeated setup code for creating mock configurations across the test methods in this file (e.g., mock_config, mock_jira_config). This makes the tests verbose and harder to maintain.

Consider using pytest fixtures to handle this setup. This will reduce duplication and make the tests cleaner.

You could define fixtures like this at the module level or within a conftest.py:

@pytest.fixture
def mock_analyzer_config(tmp_path):
    """Create a mock AnalyzerConfig."""
    mock_config = Mock(spec=AnalyzerConfig)
    mock_config.output_dir = tmp_path
    mock_config.days = 30
    mock_config.verbose = False
    mock_config.validate = Mock()
    return mock_config

@pytest.fixture
def mock_jira_config():
    """Create a mock JiraConfig."""
    mock_jira_config = Mock()
    mock_jira_config.base_url = "https://test.atlassian.net"
    return mock_jira_config

Then, your test methods can simply accept these fixtures as arguments:

def test_jira_extraction_full_flow(self, tmp_path, mock_analyzer_config, mock_jira_config):
    # ... test logic using mock_analyzer_config and mock_jira_config
    # No need to create them inside the test anymore.

Applying this pattern to all new test classes in this file (TestJiraIntegrationInCLI, TestGitHubAnalyzerInMain, etc.) would significantly improve the code's maintainability.

Comment on lines +1083 to +1144
def test_many_jira_projects_shows_truncated_list(self, tmp_path):
    """Test more than 5 Jira projects shows truncated list."""
    from datetime import datetime, timezone

    from src.github_analyzer.api.jira_client import JiraIssue

    mock_config = Mock(spec=AnalyzerConfig)
    mock_config.output_dir = tmp_path
    mock_config.days = 30
    mock_config.verbose = False
    mock_config.validate = Mock()

    mock_jira_config = Mock()
    mock_jira_config.base_url = "https://test.atlassian.net"
    mock_jira_config.jira_projects_file = "jira_projects.txt"

    # 7 projects (more than 5)
    project_keys = ["PROJ1", "PROJ2", "PROJ3", "PROJ4", "PROJ5", "PROJ6", "PROJ7"]

    test_issue = JiraIssue(
        key="PROJ1-1",
        summary="Test",
        description="Test",
        status="Done",
        issue_type="Task",
        priority="Medium",
        assignee="Test",
        reporter="Test",
        created=datetime(2025, 11, 1, tzinfo=timezone.utc),
        updated=datetime(2025, 11, 1, tzinfo=timezone.utc),
        resolution_date=datetime(2025, 11, 1, tzinfo=timezone.utc),
        project_key="PROJ1",
    )

    mock_client = Mock()
    mock_client.search_issues.return_value = iter([test_issue])
    mock_client.get_comments.return_value = []
    mock_client.get_issue_changelog.return_value = []

    with (
        patch("sys.argv", ["prog", "--sources", "jira", "--quiet", "--days", "30", "--full"]),
        patch.dict(
            os.environ,
            {
                "JIRA_URL": "https://test.atlassian.net",
                "JIRA_EMAIL": "test@example.com",
                "JIRA_API_TOKEN": "test_token",
            },
            clear=True,
        ),
        patch.object(main_module, "AnalyzerConfig") as MockConfig,
        patch.object(main_module, "JiraConfig") as MockJiraConfig,
        patch.object(main_module, "select_jira_projects", return_value=project_keys),
        patch.object(main_module, "prompt_yes_no", return_value=True),
        patch("src.github_analyzer.api.jira_client.JiraClient", return_value=mock_client),
    ):
        MockConfig.from_env.return_value = mock_config
        MockJiraConfig.from_env.return_value = mock_jira_config

        result = main()

        assert result == 0


Severity: medium

This test is named test_many_jira_projects_shows_truncated_list and its docstring says it tests that a truncated list is shown, but it doesn't actually assert that the output is truncated. It only asserts that the main function returns 0.

To make this test effective and match its description, you should patch TerminalOutput and assert that its log method is called with the truncation message.

Here's how you can update the test:

  1. Patch TerminalOutput in your with block:
    with (
        # ... other patches
        patch.object(main_module, "TerminalOutput") as MockOutput,
    ):
        mock_output_instance = MockOutput.return_value
        # ... rest of the setup
        main()
  2. Add an assertion after main() is called to check the log output:
    # Verify that the truncated list message was logged
    mock_output_instance.log.assert_any_call("  ... and 2 more", "info")
