feat(jira): implement quality metrics for Jira issues (Feature 003) #4
Conversation
Add comprehensive quality metrics calculation and export for Jira issues.

Issue-level metrics (10 new columns in jira_issues_export.csv):
- cycle_time_days: Days from creation to resolution
- aging_days: Days since creation for open issues
- comments_count: Total number of comments
- description_quality_score: Quality score 0-100 based on length, acceptance criteria, and formatting
- acceptance_criteria_present: Detection of Given/When/Then, "AC:", or checkboxes
- comment_velocity_hours: Time to first comment
- silent_issue: Issues with no comments
- same_day_resolution: Resolved on the creation day
- cross_team_score: Collaboration score based on distinct commenters
- reopen_count: Status transitions from Done back to non-Done

New aggregated CSV exports:
- jira_project_metrics.csv: Per-project aggregations (14 columns)
- jira_person_metrics.csv: Per-assignee metrics (6 columns)
- jira_type_metrics.csv: Per-issue-type metrics (5 columns)

Implementation details:
- MetricsCalculator class built with a TDD approach (665 tests, 92% coverage)
- JiraMetricsExporter for the aggregated CSV exports
- get_issue_changelog() with graceful 403/404 degradation
- CLI integration: metrics are auto-calculated with --sources jira
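To make a few of the issue-level definitions above concrete, here is a minimal sketch of how cycle_time_days, aging_days, and silent_issue could be computed. The `Issue` dataclass and its field names are illustrative assumptions, not the PR's actual MetricsCalculator API.

```python
# Illustrative sketch only -- the Issue dataclass and field names are
# assumptions; the PR's MetricsCalculator may be structured differently.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Issue:
    created: datetime                 # issue creation timestamp (tz-aware)
    resolved: datetime | None = None  # resolution timestamp, None while open
    comments: list[str] = field(default_factory=list)

def cycle_time_days(issue: Issue) -> float | None:
    """Days from creation to resolution; None while the issue is open."""
    if issue.resolved is None:
        return None
    return (issue.resolved - issue.created).total_seconds() / 86400

def aging_days(issue: Issue, now: datetime | None = None) -> float | None:
    """Days since creation for open issues; None once resolved."""
    if issue.resolved is not None:
        return None
    now = now or datetime.now(timezone.utc)
    return (now - issue.created).total_seconds() / 86400

def silent_issue(issue: Issue) -> bool:
    """True when an issue has accumulated no comments at all."""
    return len(issue.comments) == 0
```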
Summary of Changes

Hello @amargiovanni, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly enhances the Jira integration by introducing a robust system for calculating and exporting quality metrics. It provides granular insight into individual issue quality, team performance, and project health through new and extended CSV reports, giving users deeper analytical capability over their Jira data, from issue lifecycle to collaboration patterns.
Code Review
This is an excellent and comprehensive pull request that adds significant value with the new Jira quality metrics. The implementation is well structured, following the plan laid out in the specification documents. The code is robust, with good error handling (especially the graceful degradation for the changelog API) and thorough test coverage spanning unit tests, integration tests, and fixture updates. I've included a few minor suggestions to improve maintainability by tightening type annotations and reducing code duplication, but overall this is a very high-quality contribution.
src/github_analyzer/cli/main.py (outdated)

```python
output.log("Fetching comments...", "info")
all_comments = []
issue_comments_map: dict[str, list] = {}  # Map issue key to comments
```
The type hint for `issue_comments_map` is `dict[str, list]`, which is a bit vague. For better type safety and readability, you could use a more specific type like `dict[str, list[JiraComment]]`. A similar improvement can be made for `issues_by_project` on line 744, which could become `dict[str, list[IssueMetrics]]`.
Suggested change:

```diff
-issue_comments_map: dict[str, list] = {}  # Map issue key to comments
+issue_comments_map: dict[str, list[JiraComment]] = {}  # Map issue key to comments
```
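For the second occurrence mentioned in the comment, the analogous change might look like the sketch below; the `IssueMetrics` stand-in and the grouping loop are illustrative assumptions, not the PR's code.

```python
from dataclasses import dataclass

@dataclass
class IssueMetrics:       # minimal stand-in for the PR's metrics model
    project_key: str
    cycle_time_days: float | None

all_issue_metrics: list[IssueMetrics] = []  # populated by the calculator

# The specific annotation lets mypy check what each project's list holds.
issues_by_project: dict[str, list[IssueMetrics]] = {}
for metrics in all_issue_metrics:
    issues_by_project.setdefault(metrics.project_key, []).append(metrics)
```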
```python
@staticmethod
def _format_float(value: float | None) -> str:
    """Format float with 2 decimal places, or empty string if None."""
    if value is None:
        return ""
    return f"{value:.2f}"
```
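For context, here is a hedged sketch of how a formatter like this might be used when writing an aggregated CSV row. The two-column layout and values are hypothetical; the real jira_project_metrics.csv described in this PR has 14 columns.

```python
import csv

def format_float(value: float | None) -> str:
    """Same behavior as the static method above, as a free function."""
    return "" if value is None else f"{value:.2f}"

# Hypothetical two-column example showing how None becomes an empty cell.
with open("jira_project_metrics.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["project", "avg_cycle_time_days"])
    writer.writerow(["ABC", format_float(3.14159)])  # -> "3.14"
    writer.writerow(["XYZ", format_float(None)])     # -> ""
```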
Add a type annotation for the changelog `values` return value to satisfy mypy's `no-any-return` check.
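As a hedged illustration of that point, combined with the graceful 403/404 degradation mentioned in the description, the sketch below shows a fully annotated changelog fetcher. The `JiraClient` class, session handling, and URL shape are assumptions, not the PR's actual implementation.

```python
# Illustrative sketch: JiraClient and the changelog URL shape are
# assumptions based on the PR description, not the PR's actual code.
import requests

class JiraClient:
    def __init__(self, base_url: str, session: requests.Session) -> None:
        self.base_url = base_url
        self._session = session

    def get_issue_changelog(self, issue_key: str) -> list[dict[str, object]]:
        """Fetch an issue's changelog; return [] on 403/404.

        The explicit return annotation (instead of leaking Any from
        response.json()) keeps mypy's no-any-return check satisfied.
        """
        url = f"{self.base_url}/rest/api/3/issue/{issue_key}/changelog"
        response = self._session.get(url)
        if response.status_code in (403, 404):
            # No permission or missing issue: degrade gracefully so
            # changelog-based metrics (e.g. reopen_count) are skipped
            # rather than failing the whole export.
            return []
        response.raise_for_status()
        payload: dict[str, object] = response.json()
        values = payload.get("values", [])
        return values if isinstance(values, list) else []
```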
Improve type safety by replacing generic `dict[str, list]` with specific types `dict[str, list[JiraComment]]` and `dict[str, list[IssueMetrics]]`.
Summary
Adds quality metrics calculation and CSV export for Jira issues (Feature 003): a MetricsCalculator class, a JiraMetricsExporter, and CLI integration so metrics are auto-calculated with --sources jira.

New CSV Exports
jira_project_metrics.csv (per-project, 14 columns), jira_person_metrics.csv (per-assignee, 6 columns), and jira_type_metrics.csv (per-issue-type, 5 columns).

Issue-Level Metrics
10 new columns in jira_issues_export.csv, covering cycle time, aging, comment counts and velocity, description quality, acceptance-criteria detection, silent issues, same-day resolution, cross-team collaboration, and reopen counts.

Test Plan
665 tests passing with 92% coverage, spanning unit tests, integration tests, and fixture updates.