
chore(nix): update flake.lock #1

Open
github-actions[bot] wants to merge 1 commit into main from update_flake_lock_action

Conversation

@github-actions

Automated changes by the update-flake-lock GitHub Action.

Flake lock file updates:

• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/00c21e4' (2026-02-04)
  → 'github:NixOS/nixpkgs/a82ccc3' (2026-02-13)

Running GitHub Actions on this PR

GitHub Actions will not run workflows on pull requests which are opened by a GitHub Action.

To run GitHub Actions workflows on this PR, close and re-open this pull request.

timvisher-dd pushed a commit that referenced this pull request Feb 16, 2026
…veyegge#1706)

* perf(doctor): fix O(n) full-table scans causing 130s doctor runs

Three performance fixes for bd doctor on large databases (23k+ issues):

1. CheckDuplicateIssues (66s → 10ms): Replace SearchIssues() that loaded
   ALL issues into memory with SQL GROUP BY aggregation. The old code
   transferred 23k full issue rows (50+ columns) over MySQL wire protocol
   just to count duplicates.

2. CheckStaleClosedIssues (57s → 4ms): Replace SearchIssues() that loaded
   ALL closed issues with SELECT COUNT(*) SQL query. Same root cause as #1.

3. ResolvePartialID (60s+ → <1s for missing IDs): The substring search
   fallback loaded ALL issues when exact match failed. Now passes the hash
   as a search query to leverage SQL-level id LIKE filtering instead of
   transferring the entire database to Go for in-memory matching.
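
The three fixes above all move counting and filtering into SQL. As an illustration of what fix 1's GROUP BY computes (this is not the actual bd code; the Issue fields and the grouping key are assumptions for the sketch), the same aggregation can be mirrored in Go over an in-memory slice:

```go
package main

import "fmt"

// Issue is a hypothetical, minimal stand-in for bd's issue rows;
// the real table has 50+ columns.
type Issue struct {
	ID    string
	Title string
}

// countDuplicates mirrors what a SQL aggregation such as
//   SELECT title, COUNT(*) FROM issues GROUP BY title HAVING COUNT(*) > 1
// computes: the number of duplicate groups, and the total number of
// issues sitting inside those groups.
func countDuplicates(issues []Issue) (groupCount, dupCount int) {
	byTitle := map[string]int{}
	for _, is := range issues {
		byTitle[is.Title]++
	}
	for _, n := range byTitle {
		if n > 1 {
			groupCount++
			dupCount += n
		}
	}
	return groupCount, dupCount
}

func main() {
	issues := []Issue{
		{"a1", "crash on start"},
		{"b2", "crash on start"},
		{"c3", "typo in docs"},
		{"d4", "slow query"},
		{"e5", "slow query"},
		{"f6", "slow query"},
	}
	g, d := countDuplicates(issues)
	fmt.Println(g, d) // prints: 2 5
}
```

The point of the fix is that the database evaluates this grouping itself, so only two integers cross the MySQL wire protocol instead of 23k full rows.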

Total bd doctor runtime: 130s → 6s (22x speedup).
Total gt doctor runtime: infinite hang → 15s.

Root cause: These checks used store.SearchIssues() which does SELECT id
then GetIssuesByIDs() (full row fetch). On Dolt server mode with 23k+
issues, transferring all rows over MySQL wire protocol is catastrophically
slow. SQL aggregation and filtering avoid the data transfer entirely.
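
For fix 3, a minimal sketch of pushing the partial-ID match into SQL, assuming a prefix-style match (the actual query shape, table, and column names are not shown here and are assumptions):

```go
package main

import (
	"fmt"
	"strings"
)

// resolvePartialSQL builds an id LIKE filter so the database narrows the
// candidate set, instead of shipping every row to Go for in-memory matching.
func resolvePartialSQL(partial string) (query, pattern string) {
	// Escape LIKE wildcards in the user-supplied fragment before
	// appending our own trailing wildcard.
	esc := strings.NewReplacer("%", `\%`, "_", `\_`).Replace(partial)
	return "SELECT id FROM issues WHERE id LIKE ?", esc + "%"
}

// matchPrefix mirrors in Go what a LIKE 'abc%' predicate selects,
// for a quick check against a small in-memory list.
func matchPrefix(ids []string, partial string) []string {
	var out []string
	for _, id := range ids {
		if strings.HasPrefix(id, partial) {
			out = append(out, id)
		}
	}
	return out
}

func main() {
	q, p := resolvePartialSQL("4f2a")
	fmt.Println(q, p)
	fmt.Println(matchPrefix([]string{"4f2a9c", "4f2b00", "9e11aa"}, "4f2a"))
}
```

With an index on id, the database can satisfy such a prefix LIKE without a full-table scan, which is what turns the 60s+ fallback into sub-second lookups.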

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* test(doctor): add 7 missing tests for performance PR

CheckStaleClosedIssues (6 tests):
- DisabledSmallCount: threshold=0, <10k closed → OK
- DisabledLargeCount: threshold=0, ≥10k closed → warning
- EnabledWithCleanable: threshold=30d, old issues → correct count
- EnabledNoneCleanable: threshold=30d, recent issues → OK
- PinnedExcluded: all pinned → 0 cleanable
- MixedPinnedAndStale: 5 stale + 3 pinned → reports 5

CheckDuplicateIssues (2 tests):
- MultipleDuplicateGroups: 2+ groups → correct groupCount/dupCount
- ZeroDuplicatesNullHandling: SUM() NULL → defaults to 0

ResolvePartialID (1 test):
- TitleFalsePositive: hash in title but different ID → rejected

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
