Skip to content

feat: add comprehensive integration tests and documentation#94

Merged
omsherikar merged 2 commits intomainfrom
feature/pattern-learning-phase9
Jan 27, 2026
Merged

feat: add comprehensive integration tests and documentation#94
omsherikar merged 2 commits intomainfrom
feature/pattern-learning-phase9

Conversation

@omsherikar
Copy link
Contributor

@omsherikar omsherikar commented Jan 27, 2026

Phase 9: Testing & Documentation

Integration Tests (test_patterns_integration.py):

  • Full workflow: refactor → feedback → learn → rank
  • Multiple feedback improves pattern statistics
  • Multi-project pattern isolation
  • Pattern database persistence across sessions
  • Feedback persistence across sessions
  • CLI feedback collection workflow
  • Learning improves ranking over time
  • Pattern cleanup preserves recent patterns

Documentation (docs/PATTERN_LEARNING.md):

  • Comprehensive guide covering all aspects of pattern learning
  • Configuration examples (YAML and Python API)
  • Usage examples for CLI and Python API
  • Complete API reference for all pattern learning classes
  • Storage location and file structure documentation
  • Best practices and troubleshooting guide
  • Advanced usage examples
  • CI/CD integration examples

Test Coverage:

  • 132 total tests (124 unit + 8 integration)
  • All tests passing
  • Comprehensive coverage of all pattern learning components

Deliverables:

  • Complete integration test suite
  • Comprehensive user documentation
  • API documentation with examples
  • Best practices guide

Summary by CodeRabbit

  • Documentation

    • Added a comprehensive Pattern Learning System guide covering feedback-driven learning, pattern fingerprinting, ranking, project-specific adaptation, persistent storage, YAML and Python configuration, core API usage, examples, best practices, troubleshooting, advanced scenarios, performance notes, CI/CD guidance, and operational details.
  • Tests

    • Added an integration test suite validating end-to-end learning workflows, persistence across sessions, per-project isolation, CLI feedback collection, ranking improvements over time, and pattern maintenance behavior.

✏️ Tip: You can customize this high-level summary in your review settings.

…rn learning

Phase 9: Testing & Documentation

Integration Tests (test_patterns_integration.py):
- Full workflow: refactor → feedback → learn → rank
- Multiple feedback improves pattern statistics
- Multi-project pattern isolation
- Pattern database persistence across sessions
- Feedback persistence across sessions
- CLI feedback collection workflow
- Learning improves ranking over time
- Pattern cleanup preserves recent patterns

Documentation (docs/PATTERN_LEARNING.md):
- Comprehensive guide covering all aspects of pattern learning
- Configuration examples (YAML and Python API)
- Usage examples for CLI and Python API
- Complete API reference for all pattern learning classes
- Storage location and file structure documentation
- Best practices and troubleshooting guide
- Advanced usage examples
- CI/CD integration examples

Test Coverage:
- 132 total tests (124 unit + 8 integration)
- All tests passing
- Comprehensive coverage of all pattern learning components

Deliverables:
- Complete integration test suite
- Comprehensive user documentation
- API documentation with examples
- Best practices guide
Copilot AI review requested due to automatic review settings January 27, 2026 19:29
@github-actions github-actions bot added documentation Improvements or additions to documentation enhancement New feature or request size: x-large testing labels Jan 27, 2026
@coderabbitai
Copy link

coderabbitai bot commented Jan 27, 2026

Caution

Review failed

The pull request is closed.

📝 Walkthrough

Walkthrough

Adds a new Pattern Learning System guide (docs/PATTERN_LEARNING.md) and a comprehensive integration test suite (tests/test_patterns_integration.py) that validate end-to-end learning workflows, persistence, CLI interaction, multi-project isolation, and ranking behavior.

Changes

Cohort / File(s) Summary
Documentation
docs/PATTERN_LEARNING.md
New comprehensive guide describing fingerprinting, feedback collection, learning workflows, ranking, storage locations, YAML/Python configuration, API references (PatternStorage, PatternFingerprinter, PatternLearner, RefactoringRanker, RuleTuner, LearningService), examples, troubleshooting, CI/CD notes.
Integration Tests
tests/test_patterns_integration.py
New integration test suite adding fixtures (temp_storage_dir, refactron_with_storage) and multiple test classes covering full refactor→feedback→learn→rank workflow, multi-project isolation, persistence across sessions, CLI feedback collection, ranking improvements over time, and pattern cleanup/retention checks.

Sequence Diagram(s)

(omitted)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Poem

🐰 I nibble notes of patterns learned,
Hopping through tests that proudly turned,
Feedback seeds and fingerprints grow,
Rankings rise where insights flow,
Storage safe — onward we go! 🥕✨

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title 'feat: add comprehensive integration tests and documentation' accurately reflects the main changes: a new integration test module (test_patterns_integration.py with 463 lines) and comprehensive documentation (PATTERN_LEARNING.md with 568 lines).
Docstring Coverage ✅ Passed Docstring coverage is 90.91% which is sufficient. The required threshold is 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
  • 📝 Generate docstrings

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

Copy link
Contributor

Copilot AI left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Pull request overview

Adds Phase 9 integration tests and comprehensive user documentation for Refactron’s Pattern Learning system.

Changes:

  • Introduces an integration test suite covering end-to-end pattern learning workflows, persistence, and CLI feedback invocation.
  • Adds a detailed Pattern Learning guide including configuration, API reference, and usage examples.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 5 comments.

File Description
tests/test_patterns_integration.py New integration tests for refactor → feedback → learn → rank workflows, persistence, and CLI feedback path.
docs/PATTERN_LEARNING.md New comprehensive documentation for pattern learning concepts, configuration, storage, and examples.

💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.

Comment on lines 107 to 114
# If ranking is enabled, operations should be ranked
if refactron.pattern_ranker:
for op in result2.operations:
assert "ranking_score" in op.metadata
# The operation we accepted should have a higher score
if op.operation_id == first_op.operation_id:
assert op.metadata["ranking_score"] > 0

Copy link

Copilot AI Jan 27, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

In the second refactor pass, this check compares operation_id values across two separate refactor() runs. RefactoringOperation.operation_id is a UUID generated per instance, so the IDs won’t match and this assertion will never execute, meaning the test doesn’t actually validate that the previously-accepted pattern gets a higher ranking score. Consider matching on code_pattern_hash (or another stable signature such as operation_type + file_path + line_number) and asserting its score increases vs. the others or vs. the pre-learning run.

Suggested change
# If ranking is enabled, operations should be ranked
if refactron.pattern_ranker:
for op in result2.operations:
assert "ranking_score" in op.metadata
# The operation we accepted should have a higher score
if op.operation_id == first_op.operation_id:
assert op.metadata["ranking_score"] > 0
# If ranking is enabled, operations should be ranked and
# the previously accepted pattern should receive a higher score
if refactron.pattern_ranker:
# Baseline score from the first run for the accepted operation
baseline_score = first_op.metadata.get("ranking_score")
accepted_pattern_hash = first_op.metadata["code_pattern_hash"]
# Find operations in the second run that match the accepted pattern
matching_ops = [
op
for op in result2.operations
if op.metadata.get("code_pattern_hash") == accepted_pattern_hash
]
assert matching_ops, "No operations in second run match accepted pattern"
for op in result2.operations:
assert "ranking_score" in op.metadata
# The matching operations should have a higher ranking score than before learning
if baseline_score is not None:
for op in matching_ops:
assert op.metadata["ranking_score"] > baseline_score

Copilot uses AI. Check for mistakes.
Comment on lines 392 to 413
# Record feedback on first operation (accept it)
if initial_ops:
op1 = initial_ops[0]
refactron.record_feedback(
operation_id=op1.operation_id,
action="accepted",
operation=op1,
)

# Refactor again - pattern should be learned
result2 = refactron.refactor(test_file, preview=True)

# Verify pattern was learned
patterns = refactron.pattern_storage.load_patterns()
assert len(patterns) > 0

# If ranking is enabled, the accepted operation should have a ranking score
if refactron.pattern_ranker and result2.operations:
for op in result2.operations:
if op.operation_type == op1.operation_type:
assert "ranking_score" in op.metadata
assert op.metadata["ranking_score"] >= 0
Copy link

Copilot AI Jan 27, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

op1 is only defined inside the if initial_ops: block, but it’s referenced later when asserting on op1.operation_type. If initial_ops is empty, this test will raise UnboundLocalError. Either pytest.skip() when no operations are generated, or structure the assertions so they only run when op1 is set.

Copilot uses AI. Check for mistakes.
Comment on lines 345 to 357
# Mock Refactron to use our temp storage
def mock_refactron_init(self, config=None):
from refactron.core.config import RefactronConfig

if config is None:
config = RefactronConfig(
enable_pattern_learning=True,
pattern_storage_dir=temp_storage_dir,
)
# Call original __init__ but override storage
original_init(self, config)
self.pattern_storage = PatternStorage(storage_dir=temp_storage_dir)

Copy link

Copilot AI Jan 27, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This test defines mock_refactron_init but never applies it via monkeypatch, and it calls original_init which isn’t defined anywhere in this scope. As written, this is dead code that’s also broken if someone tries to wire it up later. Either remove the unused mock (and the unused monkeypatch fixture / PatternStorage import), or properly patch Refactron.__init__ by capturing the original and restoring it after the test.

Copilot uses AI. Check for mistakes.
Comment on lines 231 to 241
# Verify patterns are stored separately
patterns1 = refactron1.pattern_storage.load_patterns()
patterns2 = refactron2.pattern_storage.load_patterns()

# Verify storage isolation (different directories)
assert storage1_dir != storage2_dir

# If operations were generated and patterns learned, verify isolation
if op1 and op2 and (len(patterns1) > 0 or len(patterns2) > 0):
# Patterns should be stored in separate directories
assert storage1_dir.exists() or storage2_dir.exists()
Copy link

Copilot AI Jan 27, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This test is named/claimed as "patterns isolated by project", but the assertions only verify the two storage directories differ and that at least one directory exists. It doesn’t actually validate isolation (e.g., that patterns learned in storage1 are absent from storage2 and vice versa). Consider asserting on the contents of patterns1/patterns2 (e.g., their learned pattern_hash sets don’t overlap when feedback is recorded in both projects).

Copilot uses AI. Check for mistakes.
Comment on lines 552 to 554
refactron refactor . --preview
env:
REFACTRON_PATTERN_STORAGE_DIR: ${{ github.workspace }}/.refactron/patterns
Copy link

Copilot AI Jan 27, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The GitHub Actions example sets REFACTRON_PATTERN_STORAGE_DIR, but the codebase doesn’t appear to read this environment variable (pattern storage is configured via RefactronConfig.pattern_storage_dir or auto-detected defaults). This will mislead users. Either document the supported configuration mechanism(s) here, or implement reading this env var and mapping it to pattern_storage_dir.

Suggested change
refactron refactor . --preview
env:
REFACTRON_PATTERN_STORAGE_DIR: ${{ github.workspace }}/.refactron/patterns
# Pattern storage is configured via RefactronConfig.pattern_storage_dir
# or your project's Refactron configuration file (see Configuration Guide).
refactron refactor . --preview

Copilot uses AI. Check for mistakes.
Copy link

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 5

🤖 Fix all issues with AI agents
In `@docs/PATTERN_LEARNING.md`:
- Around line 1-3: Replace the emphasized tagline "**Learn from your refactoring
decisions to improve suggestions over time**" with a real heading or plain
sentence to satisfy markdownlint MD036; for example, change that bold line to an
H2/H3 (e.g., "## Learn from your refactoring decisions to improve suggestions
over time") or a normal sentence under the existing "# Pattern Learning System
Guide" heading so the text is not formatted as emphasis-as-heading.
- Around line 132-140: The fenced code block that starts with "📊 Refactoring
Suggestions (ranked by learned patterns)" is missing a language specifier;
update the opening backticks of that code fence to include a language hint (use
`text`) so the fence becomes ```text to satisfy markdownlint MD040 and correctly
mark the console/output snippet.

In `@tests/test_patterns_integration.py`:
- Around line 372-414: The test test_learning_improves_ranking_over_time can
reference an undefined op1 when no operations are returned; update the test to
early-skip when refactor results contain no operations: after result1 =
refactron.refactor(...) check if result1.operations is empty and return/skip the
test (or pytest.skip) so op1 is not referenced, and likewise skip subsequent
checks that inspect result2.operations, record_feedback, and the pattern
assertions (pattern_storage.load_patterns and pattern_ranker-related assertions)
when there are no operations to avoid NameError and false failures.
- Around line 334-366: The test test_cli_feedback_collection_workflow defines
mock_refactron_init but references undefined original_init and never applies
monkeypatch; fix by capturing the original initializer (e.g., original_init =
Refactron.__init__) before defining mock_refactron_init, use monkeypatch.setattr
to replace Refactron.__init__ with mock_refactron_init, and ensure
PatternStorage is imported/used as in the mock so the test uses temp_storage_dir
deterministically; restore or rely on test teardown to avoid side effects.
- Around line 1-4: Several test function signatures and comment lines exceed
Black's 88-character limit; wrap long function signatures and comments to <=88
characters and reflow long parameter lists or long descriptions into multiple
indented lines consistent with Black/PEP8 style (specifically update
test_refactor_feedback_learn_rank_workflow,
test_multiple_feedback_improves_pattern,
test_learning_improves_ranking_over_time,
test_pattern_cleanup_preserves_recent_patterns and the long comments around
lines ~358, ~365, ~459). Break long argument lists across lines, place each
argument on its own line or group logically, wrap long comment text at ~88
chars, and then run Black to reformat the file to ensure compliance.
🧹 Nitpick comments (1)
tests/test_patterns_integration.py (1)

107-113: Match by fingerprint instead of operation_id across runs.
operation_id is often regenerated on each refactor, so this assertion can become a no-op. Use the stable code_pattern_hash (already computed) or operation type.

♻️ Proposed fix
-                if op.operation_id == first_op.operation_id:
+                if op.metadata.get("code_pattern_hash") == pattern_hash:
                     assert op.metadata["ranking_score"] > 0

Comment on lines +1 to +3
# Pattern Learning System Guide

**Learn from your refactoring decisions to improve suggestions over time**
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# Check if the file exists and read its content
if [ -f "docs/PATTERN_LEARNING.md" ]; then
  echo "=== File exists. First 50 lines: ==="
  head -50 "docs/PATTERN_LEARNING.md" | cat -n
else
  echo "File not found. Searching for similar files..."
  find . -name "*PATTERN_LEARNING*" -o -name "*pattern*learning*"
fi

Repository: Refactron-ai/Refactron_lib

Length of output: 1867


🏁 Script executed:

# Also check the file size to understand its structure
if [ -f "docs/PATTERN_LEARNING.md" ]; then
  wc -l "docs/PATTERN_LEARNING.md"
fi

Repository: Refactron-ai/Refactron_lib

Length of output: 98


🏁 Script executed:

# Check if markdownlint is available and run it on the file
if command -v markdownlint &> /dev/null; then
  echo "=== markdownlint results: ==="
  markdownlint "docs/PATTERN_LEARNING.md" || true
else
  echo "markdownlint not available, checking file content manually..."
fi

Repository: Refactron-ai/Refactron_lib

Length of output: 131


🏁 Script executed:

# Check content around line 132 to verify the MD040 issue mentioned
sed -n '125,140p' "docs/PATTERN_LEARNING.md" | cat -n

Repository: Refactron-ai/Refactron_lib

Length of output: 450


🏁 Script executed:

# Also verify the exact content at lines 1-3 to confirm the snippet
sed -n '1,3p' "docs/PATTERN_LEARNING.md" | cat -n

Repository: Refactron-ai/Refactron_lib

Length of output: 198


Replace the bold tagline with a real heading.
markdownlint MD036 flags emphasis-as-heading here; use a heading or plain sentence instead.

♻️ Proposed fix
-**Learn from your refactoring decisions to improve suggestions over time**
+## Learn from your refactoring decisions to improve suggestions over time
🧰 Tools
🪛 GitHub Actions: Pre-commit

[error] 1-1: Trailing whitespace detected. Removed trailing spaces.


[error] 1-1: End-of-file fixer modified the file.

🪛 markdownlint-cli2 (0.18.1)

3-3: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)

🤖 Prompt for AI Agents
In `@docs/PATTERN_LEARNING.md` around lines 1 - 3, Replace the emphasized tagline
"**Learn from your refactoring decisions to improve suggestions over time**"
with a real heading or plain sentence to satisfy markdownlint MD036; for
example, change that bold line to an H2/H3 (e.g., "## Learn from your
refactoring decisions to improve suggestions over time") or a normal sentence
under the existing "# Pattern Learning System Guide" heading so the text is not
formatted as emphasis-as-heading.

Comment on lines +132 to +140
```
📊 Refactoring Suggestions (ranked by learned patterns)

Operation 1: Extract Constant
File: myfile.py:42
Ranking Score: 0.85 ⭐
Risk: Low (0.2)
...
```
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if the file exists and view the specific lines
fd PATTERN_LEARNING.md

# Check line count and read the specific lines (132-140)
wc -l docs/PATTERN_LEARNING.md
sed -n '130,145p' docs/PATTERN_LEARNING.md | cat -n

Repository: Refactron-ai/Refactron_lib

Length of output: 524


🏁 Script executed:

#!/bin/bash
# Search for markdownlint configuration files
fd -e ".markdownlintrc" -e "markdownlint.json" -e ".markdownlint.yaml" -e ".markdownlint.yml" -e ".markdownlintignore"

# Also check if markdownlint is mentioned in package.json or other config
find . -maxdepth 2 -name "*.json" -o -name "*.yaml" -o -name "*.yml" | head -20 | xargs grep -l "markdownlint" 2>/dev/null

Repository: Refactron-ai/Refactron_lib

Length of output: 54


🌐 Web query:

markdownlint MD040 rule language specifier code fence

💡 Result:

markdownlint rule MD040 (fenced-code-language) warns when you use a fenced code block (````` or ~~~) without a language info string after the opening fence. Adding a language enables syntax highlighting and satisfies the rule. [1][2]

Violates MD040

```
echo "hello"
```

Fix (specify a language)

```bash
echo "hello"
```

If it’s not a real language, explicitly mark as plain text

```text
some output / generic text
```

(You can also disable/configure the rule in your markdownlint config if you truly need unlabeled fences.) [1]

Sources: [1] [2]


Specify a language for the output code fence.

The code fence at line 133 lacks a language specifier. markdownlint MD040 requires a language hint after the opening backticks; text is appropriate for console output.

♻️ Proposed fix
-```
+```text
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
```
📊 Refactoring Suggestions (ranked by learned patterns)
Operation 1: Extract Constant
File: myfile.py:42
Ranking Score: 0.85 ⭐
Risk: Low (0.2)
...
```
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)

132-132: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🤖 Prompt for AI Agents
In `@docs/PATTERN_LEARNING.md` around lines 132 - 140, The fenced code block that
starts with "📊 Refactoring Suggestions (ranked by learned patterns)" is missing
a language specifier; update the opening backticks of that code fence to include
a language hint (use `text`) so the fence becomes ```text to satisfy
markdownlint MD040 and correctly mark the console/output snippet.

Comment on lines 334 to 366
def test_cli_feedback_collection_workflow(self, temp_storage_dir, monkeypatch):
"""Test that CLI feedback collection works end-to-end."""
from click.testing import CliRunner

from refactron.cli import feedback

runner = CliRunner()

# Create a test operation ID
operation_id = "test-op-123"

# Mock Refactron to use our temp storage
def mock_refactron_init(self, config=None):
from refactron.core.config import RefactronConfig

if config is None:
config = RefactronConfig(
enable_pattern_learning=True,
pattern_storage_dir=temp_storage_dir,
)
# Call original __init__ but override storage
original_init(self, config)
self.pattern_storage = PatternStorage(storage_dir=temp_storage_dir)

# This is a simplified test - full CLI integration is tested in test_patterns_feedback.py
# Here we just verify the command exists and can be invoked
result = runner.invoke(
feedback,
[operation_id, "--action", "accepted", "--reason", "Test"],
)

# Command should exist (may fail if storage not set up, but command should be callable)
assert result.exit_code in (0, 1) # 0 = success, 1 = error (but command exists)
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# First, find the test file
fd "test_patterns_integration.py" --type f

Repository: Refactron-ai/Refactron_lib

Length of output: 104


🏁 Script executed:

#!/bin/bash
# Read the test file around lines 334-366
wc -l tests/test_patterns_integration.py

Repository: Refactron-ai/Refactron_lib

Length of output: 108


🏁 Script executed:

#!/bin/bash
# Check the relevant section
sed -n '330,370p' tests/test_patterns_integration.py | cat -n

Repository: Refactron-ai/Refactron_lib

Length of output: 1921


🏁 Script executed:

#!/bin/bash
# Check imports at the top of the file and in the test method
head -50 tests/test_patterns_integration.py | cat -n

Repository: Refactron-ai/Refactron_lib

Length of output: 1885


🏁 Script executed:

#!/bin/bash
# Look for Refactron and PatternStorage imports/usage
rg -n "(from refactron|import.*Refactron|PatternStorage)" tests/test_patterns_integration.py | head -20

Repository: Refactron-ai/Refactron_lib

Length of output: 545


🏁 Script executed:

#!/bin/bash
# Check Refactron class __init__ definition
rg -n "class Refactron" refactron/ -A 20 | head -40

Repository: Refactron-ai/Refactron_lib

Length of output: 2491


🏁 Script executed:

#!/bin/bash
# Verify the Refactron class can be patched this way
fd "*.py" refactron/ -x grep -l "class Refactron" 2>/dev/null

Repository: Refactron-ai/Refactron_lib

Length of output: 52


🏁 Script executed:

#!/bin/bash
# Get the full __init__ method of Refactron
sed -n '52,80p' refactron/core/refactron.py | cat -n

Repository: Refactron-ai/Refactron_lib

Length of output: 1350


Fix undefined original_init and apply monkeypatch.

The test references an undefined original_init variable (F821) on line 26, and the monkeypatch argument is unused. Capture the original initializer before the mock definition and apply the patch so the test uses temp storage deterministically.

Proposed fix
-        # Mock Refactron to use our temp storage
-        def mock_refactron_init(self, config=None):
+        # Mock Refactron to use our temp storage
+        original_init = Refactron.__init__
+        def mock_refactron_init(self, config=None, *args, **kwargs):
             from refactron.core.config import RefactronConfig

             if config is None:
                 config = RefactronConfig(
                     enable_pattern_learning=True,
                     pattern_storage_dir=temp_storage_dir,
                 )
             # Call original __init__ but override storage
-            original_init(self, config)
+            original_init(self, config, *args, **kwargs)
             self.pattern_storage = PatternStorage(storage_dir=temp_storage_dir)
+
+        monkeypatch.setattr(Refactron, "__init__", mock_refactron_init)
🧰 Tools
🪛 GitHub Actions: Pre-commit

[error] 351-351: Flake8: F821 undefined name 'original_init'.

🪛 Ruff (0.14.14)

334-334: Unused method argument: monkeypatch

(ARG002)


355-355: Undefined name original_init

(F821)

🤖 Prompt for AI Agents
In `@tests/test_patterns_integration.py` around lines 334 - 366, The test
test_cli_feedback_collection_workflow defines mock_refactron_init but references
undefined original_init and never applies monkeypatch; fix by capturing the
original initializer (e.g., original_init = Refactron.__init__) before defining
mock_refactron_init, use monkeypatch.setattr to replace Refactron.__init__ with
mock_refactron_init, and ensure PatternStorage is imported/used as in the mock
so the test uses temp_storage_dir deterministically; restore or rely on test
teardown to avoid side effects.

Critical Fixes:
- Fix operation ID comparison bug: match by code_pattern_hash instead of operation_id
  - Operation IDs are unique UUIDs per instance, so comparison across refactor() calls never matches
  - Now matches operations by pattern hash and compares ranking scores correctly
  - Changed assertion to >= baseline_score (score may be same or higher after learning)

- Fix UnboundLocalError risk: guard against op1 being undefined
  - Added pytest.skip() when no operations generated
  - Prevents UnboundLocalError when initial_ops is empty

Important Fixes:
- Remove dead/broken mock code: unused mock_refactron_init function
  - Removed unused mock function that referenced undefined original_init
  - Removed unused PatternStorage import
  - Simplified test to just verify CLI command exists

- Improve isolation test: verify actual pattern isolation
  - Added assertions to verify patterns from project1 are not in project2 storage
  - Verify storage directories are separate (isolation mechanism)
  - Check pattern hashes don't overlap when both projects learn patterns

- Fix misleading documentation: remove non-existent env var
  - Removed REFACTRON_PATTERN_STORAGE_DIR env var example
  - Added note that pattern storage is configured via config file or Python API
  - Clarified actual configuration mechanism

All tests passing (8/8 integration tests)
@omsherikar omsherikar merged commit 703aaf4 into main Jan 27, 2026
15 of 16 checks passed
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

documentation Improvements or additions to documentation enhancement New feature or request performance size: x-large testing

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant