diff --git a/.cursorrules b/.cursorrules new file mode 100644 index 0000000..740e7c5 --- /dev/null +++ b/.cursorrules @@ -0,0 +1,15 @@ +# Cursor Rules + +You are working in a project that follows a strict Task Documentation System. + +## Task System +- **Source of Truth**: The `docs/tasks/` directory contains the state of all work. +- **Workflow**: + 1. Check context: `./scripts/tasks context` + 2. Create task if needed: `./scripts/tasks create ...` + 3. Update status: `./scripts/tasks update ...` +- **Reference**: See `docs/tasks/GUIDE.md` for details. + +## Tools +- Use `./scripts/tasks` for all task operations. +- Use `--format json` if you need to parse output. diff --git a/AGENTS.md b/AGENTS.md index 336bc15..ee2b5be 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -1,10 +1,100 @@ -# AGENTS.md - -This file provides guidance to Qoder (qoder.com) when working with code in this repository. +# AI Agent Instructions + +You are an expert Software Engineer working on this project. Your primary responsibility is to implement features and fixes while strictly adhering to the **Task Documentation System**. + +## Core Philosophy +**"If it's not documented in `docs/tasks/`, it didn't happen."** + +## Workflow +1. **Pick a Task**: Run `python3 scripts/tasks.py next` to find the best task, `context` to see active tasks, or `list` to see pending ones. +2. **Plan & Document**: + * **Memory Check**: Run `python3 scripts/memory.py list` (or use the Memory Skill) to recall relevant long-term information. + * **Security Check**: Ask the user about specific security considerations for this task. + * If starting a new task, use `scripts/tasks.py create` (or `python3 scripts/tasks.py create`) to generate a new task file. + * Update the task status: `python3 scripts/tasks.py update [TASK_ID] in_progress`. +3. **Implement**: Write code, run tests. +4. **Update Documentation Loop**: + * As you complete sub-tasks, check them off in the task document. + * If you hit a blocker, update status to `wip_blocked` and describe the issue in the file. + * Record key architectural decisions in the task document. + * **Memory Update**: If you learn something valuable for the long term, use `scripts/memory.py create` to record it. +5. **Review & Verify**: + * Once implementation is complete, update status to `review_requested`: `python3 scripts/tasks.py update [TASK_ID] review_requested`. + * Ask a human or another agent to review the code. + * Once approved and tested, update status to `verified`. +6. **Finalize**: + * Update status to `completed`: `python3 scripts/tasks.py update [TASK_ID] completed`. + * Record actual effort in the file. + * Ensure all acceptance criteria are met. + +## Tools +* **Wrapper**: `./scripts/tasks` (Checks for Python, recommended). +* **Next**: `./scripts/tasks next` (Finds the best task to work on). +* **Create**: `./scripts/tasks create [category] "Title"` +* **List**: `./scripts/tasks list [--status pending]` +* **Context**: `./scripts/tasks context` +* **Update**: `./scripts/tasks update [ID] [status]` +* **Migrate**: `./scripts/tasks migrate` (Migrate legacy tasks to new format) +* **Memory**: `./scripts/memory.py [create|list|read]` +* **JSON Output**: Add `--format json` to any command for machine parsing. + +## Documentation Reference +* **Guide**: Read `docs/tasks/GUIDE.md` for strict formatting and process rules. +* **Architecture**: Refer to `docs/architecture/` for system design. +* **Features**: Refer to `docs/features/` for feature specifications. 
+* **Security**: Refer to `docs/security/` for risk assessments and mitigations. +* **Memories**: Refer to `docs/memories/` for long-term project context. + +## Code Style & Standards +* Follow the existing patterns in the codebase. +* Ensure all new code is covered by tests (if testing infrastructure exists). + +## PR Review Methodology +When performing a PR review, follow this "Human-in-the-loop" process to ensure depth and efficiency. + +### 1. Preparation +1. **Create Task**: `python3 scripts/tasks.py create review "Review PR #: "` +2. **Fetch Details**: Use `gh` to get the PR context. + * `gh pr view <N>` + * `gh pr diff <N>` + +### 2. Analysis & Planning (The "Review Plan") +**Do not review line-by-line yet.** Instead, analyze the changes and document a **Review Plan** in the task file (or present it for approval). + +Your plan must include: +* **High-Level Summary**: Purpose, new APIs, breaking changes. +* **Dependency Check**: New libraries, maintenance status, security. +* **Impact Assessment**: Effect on existing code/docs. +* **Focus Areas**: Prioritized list of files/modules to check. +* **Suggested Comments**: Draft comments for specific lines. + * Format: `File: <path> | Line: <N> | Comment: <suggestion>` + * Tone: Friendly, suggestion-based ("Consider...", "Nit: ..."). + +### 3. Execution +Once the human approves the plan and comments: +1. **Pending Review**: Create a pending review using `gh`. + * `COMMIT_SHA=$(gh pr view <N> --json headRefOid -q .headRefOid)` + * `gh api repos/{owner}/{repo}/pulls/{N}/reviews -f commit_id="$COMMIT_SHA"` +2. **Batch Comments**: Add comments to the pending review. + * `gh api repos/{owner}/{repo}/pulls/{N}/comments -f body="..." -f path="..." -f commit_id="$COMMIT_SHA" -F line=<L> -f side="RIGHT"` +3. **Submit**: + * `gh pr review <N> --approve --body "Summary..."` (or `--request-changes`). + +### 4. Close Task +* Update task status to `completed`. + +## Agent Interoperability +- **Task Manager Skill**: `.claude/skills/task_manager/` +- **Memory Skill**: `.claude/skills/memory/` +- **Tool Definitions**: `docs/interop/tool_definitions.json` + +--- + +# Project Specific Instructions ## Project Overview -llm-nvim is a Neovim plugin that integrates with Simon Willison's llm CLI tool, enabling users to interact with large language models directly from Neovim. The plugin provides a unified interface for prompting LLMs, managing models, API keys, fragments, templates, and schemas. +llm-nvim is a Neovim plugin that integrates with Simon Willison's llm CLI tool, enabling users to interact with large language models directly from Neovim. The plugin provides a unified interface for prompting LLMs, managing models, API keys, and fragments. ## Requirements @@ -14,93 +104,15 @@ llm-nvim is a Neovim plugin that integrates with Simon Willison's llm CLI tool, **Lua Environment**: Neovim uses LuaJIT 2.1+ which provides Lua 5.1 base with 5.2+ extensions. This plugin uses Lua 5.2+ APIs (`table.unpack`) for forward compatibility. See TESTING-001 for full compatibility audit results. 
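A minimal illustration of the compatibility point above (not taken from the plugin source): the deprecated global works only on Lua 5.1, while `table.unpack` is the Lua 5.2+ API that Neovim's LuaJIT also exposes, which is why the plugin standardizes on it.

```lua
-- Lua 5.1 style: the global `unpack` was removed in Lua 5.2.
-- local first, second = unpack(args)

-- Style this plugin standardizes on: `table.unpack` is the Lua 5.2+ API,
-- provided by Neovim's LuaJIT as one of its 5.2 extensions.
local function apply(fn, args)
  return fn(table.unpack(args))
end

print(apply(math.max, { 3, 7, 5 }))  --> 7
```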
-## Documentation +## Documentation (Legacy) **IMPORTANT**: Always consult these documents during planning and development: -- `docs/features.md`: Complete feature list and requirements -- `docs/architecture.md`: Architecture decisions, data flows, and technical rationale +- `docs/features/`: Complete feature list and requirements +- `docs/architecture/`: Architecture decisions, data flows, and technical rationale - `docs/tasks/README.md`: Overview of task system and current task status - Individual task files in `docs/tasks/[category]/`: Detailed implementation tasks -When planning new features or refactoring: -1. First read `docs/features.md` to understand existing functionality -2. Review `docs/architecture.md` to understand design patterns and decisions -3. Check `docs/tasks/README.md` for pending tasks and priorities -4. Read specific task documents before implementing -5. Update task documents as work progresses -6. Update relevant docs when making architectural changes - -## Task Documentation System - -All implementation tasks are documented in `docs/tasks/` following a standardized format. - -### Task Categories - -- **critical/**: Blocking issues affecting functionality (P0) -- **code-quality/**: Code cleanup and maintainability (P1) -- **testing/**: Test infrastructure and quality (P1-P2) -- **documentation/**: Documentation improvements (P2) -- **performance/**: Performance optimizations (P3) - -### Working with Tasks - -**Before starting work**: -1. Check `docs/tasks/README.md` for task overview and status -2. Read the specific task document in `docs/tasks/[category]/` -3. Verify dependencies are completed -4. Update task status to `in_progress` - -**During implementation**: -1. Follow acceptance criteria in the task document -2. Use implementation notes as guidance -3. Update task document with: - - Completed acceptance criteria (check boxes) - - Decisions made and why - - Blockers encountered and resolution - - Git commits with references -4. Create new tasks if you discover additional work needed - -**After completion**: -1. Mark all acceptance criteria as complete -2. Update status to `completed` -3. Record actual effort and completion date -4. Update `docs/tasks/README.md` status table -5. Create follow-up tasks if needed - -### Finding Tasks - -```bash -# View all pending tasks by category -ls docs/tasks/critical/ -ls docs/tasks/code-quality/ -ls docs/tasks/testing/ - -# Find tasks with no dependencies (can start immediately) -grep -r "Dependencies**: None" docs/tasks/ - -# Find blocked tasks -grep -r "Status**: blocked" docs/tasks/ -``` - -### Creating New Tasks - -1. Choose appropriate category (critical, code-quality, testing, documentation, performance) -2. Generate task ID: `[CATEGORY]-NNN` (use next available number in category) -3. Create file: `docs/tasks/[category]/[TASK-ID]-[descriptive-slug].md` -4. Use template from `task-documentation-guide.md` -5. Include: - - Clear description and problem statement - - Acceptance criteria (specific and measurable) - - Implementation notes with file references - - Architecture components affected - - Dependencies on other tasks -6. Add to `docs/tasks/README.md` task list - -### Completed Tasks - -When a task is completed, it should be moved from its category directory to the `docs/tasks/completed/` directory. This keeps the main task directories focused on pending work. 
- ## Current Status and Quick Start ### 🎯 Current Priority Tasks @@ -118,7 +130,7 @@ When a task is completed, it should be moved from its category directory to the - No blocking technical debt **📋 What to Work On Next**: -1. **New Features**: Check `docs/features.md` for planned features +1. **New Features**: Check `docs/features/` for planned features 2. **Enhancements**: Review `docs/tasks/README.md` for pending improvements 3. **Documentation**: Update docs when adding new functionality diff --git a/README.md b/README.md index 1bb8a64..a0a04cb 100644 --- a/README.md +++ b/README.md @@ -13,8 +13,6 @@ https://github.com/user-attachments/assets/b326370e-5752-46af-ba5c-6ae08d157f01 ### Fragment Management https://github.com/user-attachments/assets/2fc30538-6fd5-4cfa-9b7b-7fd7757f20c1 -### Template Management -https://github.com/user-attachments/assets/d9e16473-90fe-4ccc-a480-d5452070afc2 ## Feature List @@ -26,15 +24,11 @@ https://github.com/user-attachments/assets/d9e16473-90fe-4ccc-a480-d5452070afc2 - Support for custom models and system prompts - API key management for multiple providers - Fragment management (files, URLs, GitHub repos) -- Template creation and execution -- Schema management and execution - Unified manager window (`:LLMConfig`) with views for: - Models - - Plugins + - Plugins - API Keys - Fragments - - Templates - - Schemas - Markdown-formatted responses with syntax highlighting - Asynchronous command execution @@ -145,19 +139,15 @@ This helps ensure your `llm` tool stays up-to-date with the latest features and - `:LLM selection [{prompt}]` - Send visual selection with optional prompt - `:LLM explain` - Explain current buffer's code - `:LLM fragments` - Interactive prompt with fragment selection -- `:LLM schema` - Select and run schema -- `:LLM template` - Select and run template - `:LLM update` - Manually trigger an update check for the underlying `llm` CLI tool. #### Unified Manager - `:LLMConfig [view]` - Open unified manager window - - Optional views: `models`, `plugins`, `keys`, `fragments`, `templates`, `schemas` + - Optional views: `models`, `plugins`, `keys`, `fragments` - `:LLMConfig models` - Open Models view -- `:LLMConfig plugins` - Open Plugins view +- `:LLMConfig plugins` - Open Plugins view - `:LLMConfig keys` - Open API Keys view - `:LLMConfig fragments` - Open Fragments view -- `:LLMConfig templates` - Open Templates view -- `:LLMConfig schemas` - Open Schemas view ### Basic Prompting diff --git a/doc/llm.txt b/doc/llm.txt index f0732e9..a3dd85b 100644 --- a/doc/llm.txt +++ b/doc/llm.txt @@ -149,35 +149,25 @@ Custom mappings: *:LLM explain* Explain code in current buffer - *:LLM fragments* - Interactive prompt with fragment selection - - *:LLM schema* - Select and run schema - - *:LLM template* - Select and run template + *:LLM fragments* + Interactive prompt with fragment selection *:LLM update* Manually trigger update check for llm CLI tool *:LLMConfig* [{view}] Open or close the unified manager window. This window - allows managing Models, Plugins, API Keys, Fragments, - Templates, and Schemas. - Optionally specify an initial {view} to open: - "models", "plugins", "keys", "fragments", "templates", - "schemas". - Inside the window, use [M], [P], [K], [F], [T], [S] to + allows managing Models, Plugins, API Keys, and Fragments. + Optionally specify an initial {view} to open: + "models", "plugins", "keys", "fragments". + Inside the window, use [M], [P], [K], [F] to switch between views, and [q] or <Esc> to close. 
- Alternatively use subcommands to open specific views: - *:LLMConfig models* - *:LLMConfig plugins* - *:LLMConfig keys* - *:LLMConfig fragments* - *:LLMConfig templates* - *:LLMConfig schemas* + Alternatively use subcommands to open specific views: + *:LLMConfig models* + *:LLMConfig plugins* + *:LLMConfig keys* + *:LLMConfig fragments* ============================================================= ================= diff --git a/docs/architecture.md b/docs/architecture/README.md similarity index 100% rename from docs/architecture.md rename to docs/architecture/README.md diff --git a/docs/features.md b/docs/features/README.md similarity index 87% rename from docs/features.md rename to docs/features/README.md index 4882545..5adf0e3 100644 --- a/docs/features.md +++ b/docs/features/README.md @@ -20,8 +20,6 @@ - Plugins view: Manage LLM plugins - API Keys view: Manage API keys for multiple providers - Fragments view: Manage files, URLs, and GitHub repos as fragments -- Templates view: Create and execute templates -- Schemas view: Manage and execute schemas ### Fragment Management - Add files as fragments @@ -30,15 +28,6 @@ - Reference fragments by alias or hash - Use fragments in prompts to provide context -### Template System -- Create reusable prompt templates -- Execute templates with variable substitution -- Manage template library - -### Schema System -- Define structured interaction schemas -- Execute schemas for consistent workflows -- Manage schema library ### API Key Management - Store API keys for multiple LLM providers diff --git a/docs/memories/.keep b/docs/memories/.keep new file mode 100644 index 0000000..e69de29 diff --git a/docs/security/README.md b/docs/security/README.md new file mode 100644 index 0000000..1716f6b --- /dev/null +++ b/docs/security/README.md @@ -0,0 +1,13 @@ +# Security Documentation + +Use this section to document security considerations, risks, and mitigations. + +## Risk Assessment +* [ ] Threat Model +* [ ] Data Privacy + +## Compliance +* [ ] Requirements + +## Secrets Management +* [ ] Policy diff --git a/docs/tasks/GUIDE.md b/docs/tasks/GUIDE.md new file mode 100644 index 0000000..3d0a944 --- /dev/null +++ b/docs/tasks/GUIDE.md @@ -0,0 +1,122 @@ +# Task Documentation System Guide + +This guide explains how to create, maintain, and update task documentation. It provides a reusable system for tracking implementation work, decisions, and progress. + +## Core Philosophy +**"If it's not documented in `docs/tasks/`, it didn't happen."** + +## Directory Structure +Tasks are organized by category in `docs/tasks/`: +- `foundation/`: Core architecture and setup +- `infrastructure/`: Services, adapters, platform code +- `domain/`: Business logic, use cases +- `presentation/`: UI, state management +- `features/`: End-to-end feature implementation +- `migration/`: Refactoring, upgrades +- `testing/`: Testing infrastructure +- `review/`: Code reviews and PR analysis + +## Task Document Format + +We use **YAML Frontmatter** for metadata and **Markdown** for content. 
+ +### Frontmatter (Required) +```yaml +--- +id: FOUNDATION-20250521-103000 # Auto-generated Timestamp ID +status: pending # Current status +title: Initial Project Setup # Task Title +priority: medium # high, medium, low +created: 2025-05-21 10:30:00 # Creation timestamp +category: foundation # Category +type: task # task, story, bug, epic (Optional) +sprint: Sprint 1 # Iteration identifier (Optional) +estimate: 3 # Story points / T-shirt size (Optional) +dependencies: TASK-001, TASK-002 # Comma separated list of IDs (Optional) +--- +``` + +### Status Workflow +1. `pending`: Created but not started. +2. `in_progress`: Active development. +3. `review_requested`: Implementation done, awaiting code review. +4. `verified`: Reviewed and approved. +5. `completed`: Merged and finalized. +6. `wip_blocked` / `blocked`: Development halted. +7. `cancelled` / `deferred`: Stopped or postponed. + +### Content Template +```markdown +# [Task Title] + +## Task Information +- **Dependencies**: [List IDs] + +## Task Details +[Description of what needs to be done] + +### Acceptance Criteria +- [ ] Criterion 1 +- [ ] Criterion 2 + +## Implementation Status +### Completed Work +- ✅ Implemented X (file.py) + +### Blockers +[Describe blockers if any] +``` + +## Tools + +Use the `scripts/tasks` wrapper to manage tasks. + +```bash +# Create a new task (standard) +./scripts/tasks create foundation "Task Title" + +# Create an Agile Story in a Sprint +./scripts/tasks create features "User Login" --type story --sprint "Sprint 1" --estimate 5 + +# List tasks (can filter by sprint) +./scripts/tasks list +./scripts/tasks list --sprint "Sprint 1" + +# Find the next best task to work on (Smart Agent Mode) +./scripts/tasks next + +# Update status +./scripts/tasks update [TASK_ID] in_progress +./scripts/tasks update [TASK_ID] review_requested +./scripts/tasks update [TASK_ID] verified +./scripts/tasks update [TASK_ID] completed + +# Migrate legacy tasks (if updating from older version) +./scripts/tasks migrate +``` + +## Agile Methodology + +This system supports Agile/Scrum workflows for LLM-Human collaboration. + +### Sprints +- Tag tasks with `sprint: [Name]` to group them into iterations. +- Use `./scripts/tasks list --sprint [Name]` to view the sprint backlog. + +### Estimation +- Use `estimate: [Value]` (e.g., Fibonacci numbers 1, 2, 3, 5, 8) to size tasks. + +### Auto-Pilot +- The `./scripts/tasks next` command uses an algorithm to determine the optimal next task based on: + 1. Status (In Progress > Pending) + 2. Dependencies (Unblocked > Blocked) + 3. Sprint (Current Sprint > Backlog) + 4. Priority (High > Low) + 5. Type (Stories/Bugs > Tasks) + +## Agent Integration + +Agents (Claude, etc.) use this system to track their work. +- Always check `./scripts/tasks context` or use `./scripts/tasks next` before starting. +- Keep the task file updated with your progress. +- Use `review_requested` when you need human feedback. 
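For agents driving this system from inside Neovim, a sketch of consuming the machine-readable output the agent instructions mention (`--format json`). The output shape assumed here (a JSON array of objects with `id`, `status`, and `title` fields) is not specified by this guide; inspect the real output before parsing.

```lua
-- Sketch of an agent-side helper that consumes `--format json` output.
-- The JSON shape (array of {id, status, title}) is an assumption.
local function get_task_context()
  local output = vim.fn.system({ './scripts/tasks', 'context', '--format', 'json' })
  if vim.v.shell_error ~= 0 then
    vim.notify('tasks context failed: ' .. output, vim.log.levels.ERROR)
    return {}
  end
  return vim.fn.json_decode(output)
end

for _, task in ipairs(get_task_context()) do
  print(string.format('[%s] %s (%s)', task.status, task.title, task.id))
end
```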
diff --git a/docs/tasks/code-quality/CODE-QUALITY-005-increase-code-coverage-to-80.md b/docs/tasks/code-quality/CODE-QUALITY-005-increase-code-coverage-to-80.md index c219bfb..b1c9ead 100644 --- a/docs/tasks/code-quality/CODE-QUALITY-005-increase-code-coverage-to-80.md +++ b/docs/tasks/code-quality/CODE-QUALITY-005-increase-code-coverage-to-80.md @@ -1,14 +1,15 @@ -# Task: Increase Code Coverage to 80% +--- +id: CODE-QUALITY-005 +status: pending +title: Increase Code Coverage to 80% +priority: Low +created: 2025-12-11 06:18:18 +category: unknown +type: task +--- -## Task Information -- **Task ID**: CODE-QUALITY-005 -- **Status**: pending -- **Priority**: Low (P3) -- **Phase**: 8 -- **Effort Estimate**: 5 days -- **Dependencies**: CRITICAL-007 +# Increase Code Coverage to 80% -## Task Details ### Description To further improve the quality and reliability of the codebase, this task is to increase the code coverage from 70% to at least 80%. @@ -34,7 +35,3 @@ To further improve the quality and reliability of the codebase, this task is to ## Git History - *No commits yet* - ---- -*Created: 2025-11-16* -*Last updated: 2025-11-16* diff --git a/docs/tasks/completed/CODE-QUALITY-001-remove-debug-logging.md b/docs/tasks/completed/CODE-QUALITY-001-remove-debug-logging.md index 4846e49..0f3eab8 100644 --- a/docs/tasks/completed/CODE-QUALITY-001-remove-debug-logging.md +++ b/docs/tasks/completed/CODE-QUALITY-001-remove-debug-logging.md @@ -1,16 +1,14 @@ -# Task: Remove Excessive Debug Logging - -## Task Information -- **Task ID**: CODE-QUALITY-001 -- **Status**: completed -- **Priority**: high -- **Phase**: 2 -- **Estimated Effort**: 0.5 days -- **Actual Effort**: 0.5 days -- **Completed**: 2025-02-11 -- **Dependencies**: None +--- +id: CODE-QUALITY-001 +status: completed +title: Remove Excessive Debug Logging +priority: high +created: 2025-12-11 06:18:18 +category: unknown +type: task +--- -## Task Details +# Remove Excessive Debug Logging ### Description Remove or gate debug logging statements throughout the codebase. Currently there are 109+ `vim.notify` calls at DEBUG/INFO levels that clutter the notification area for users. 
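The remediation pattern CODE-QUALITY-001 settled on, and that the Lua hunks later in this diff apply, is to gate verbose messages behind the `debug` option and demote them to DEBUG level. A compact illustration; the `debug_notify` helper is illustrative only, since the actual changes inline the check instead of sharing a helper.

```lua
local config = require('llm.config')

-- Verbose messages are only emitted when debug mode is on, and at DEBUG level
-- so they never clutter the default notification area.
local function debug_notify(msg)
  if config.get('debug') then
    vim.notify(msg, vim.log.levels.DEBUG)
  end
end

debug_notify('Fetching plugins from URL...')
```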
@@ -138,9 +136,3 @@ These are all intentional user-facing notifications that provide valuable feedba - Debug mode still provides verbose logging when enabled - No notification spam during normal operations - ~103 INFO/DEBUG statements remaining, but all are intentional UX or properly gated - ---- - -*Created: 2025-02-11* -*Completed: 2025-02-11* -*Status: completed - Clean notification experience for users* diff --git a/docs/tasks/completed/CODE-QUALITY-002-remove-duplicate-command.md b/docs/tasks/completed/CODE-QUALITY-002-remove-duplicate-command.md index ce789fb..55080cb 100644 --- a/docs/tasks/completed/CODE-QUALITY-002-remove-duplicate-command.md +++ b/docs/tasks/completed/CODE-QUALITY-002-remove-duplicate-command.md @@ -1,16 +1,14 @@ -# Task: Remove Duplicate LLMChat Command Registration - -## Task Information -- **Task ID**: CODE-QUALITY-002 -- **Status**: completed -- **Priority**: medium -- **Phase**: 2 -- **Estimated Effort**: 0.1 days -- **Actual Effort**: 0.05 days (5 minutes) -- **Completed**: 2025-02-11 -- **Dependencies**: None +--- +id: CODE-QUALITY-002 +status: completed +title: Remove Duplicate LLMChat Command Registration +priority: medium +created: 2025-12-11 06:18:18 +category: unknown +type: task +--- -## Task Details +# Remove Duplicate LLMChat Command Registration ### Description The `:LLMChat` command is registered twice in `plugin/llm.lua` with identical implementations (lines 109-122 and 160-172). This is redundant and confusing for maintenance. @@ -88,9 +86,3 @@ Only one registration remains ✅ - No functional changes, just code cleanup - Reduces plugin load time (minimal) - Cleaner codebase for maintenance - ---- - -*Created: 2025-02-11* -*Completed: 2025-02-11* -*Status: completed - Duplicate removed successfully* diff --git a/docs/tasks/completed/CODE-QUALITY-003-remove-unused-validation.md b/docs/tasks/completed/CODE-QUALITY-003-remove-unused-validation.md index a7654ed..d955702 100644 --- a/docs/tasks/completed/CODE-QUALITY-003-remove-unused-validation.md +++ b/docs/tasks/completed/CODE-QUALITY-003-remove-unused-validation.md @@ -1,16 +1,14 @@ -# Task: Remove Unused validate_view_name Function - -## Task Information -- **Task ID**: CODE-QUALITY-003 -- **Status**: completed -- **Priority**: low -- **Phase**: 3 -- **Estimated Effort**: 0.1 days -- **Actual Effort**: 0.05 days (5 minutes) -- **Completed**: 2025-02-11 -- **Dependencies**: None +--- +id: CODE-QUALITY-003 +status: completed +title: Remove Unused validate_view_name Function +priority: low +created: 2025-12-11 06:18:18 +category: unknown +type: task +--- -## Task Details +# Remove Unused validate_view_name Function ### Description The `validate_view_name` function in `plugin/llm.lua:126-145` is defined but never used anywhere in the codebase. This is dead code that should be removed or put to use. @@ -102,9 +100,3 @@ $ grep -rn "validate_view_name" . 
--include="*.lua" - Clean removal, no dependencies - Reduces code complexity - If view validation is needed in future, it should be in unified_manager, not plugin layer - ---- - -*Created: 2025-02-11* -*Completed: 2025-02-11* -*Status: completed - Dead code removed* diff --git a/docs/tasks/completed/CODE-QUALITY-004-add-model-alias-management.md b/docs/tasks/completed/CODE-QUALITY-004-add-model-alias-management.md index 97da802..9f42c8e 100644 --- a/docs/tasks/completed/CODE-QUALITY-004-add-model-alias-management.md +++ b/docs/tasks/completed/CODE-QUALITY-004-add-model-alias-management.md @@ -1,21 +1,15 @@ -# Task: Add Model Alias Management - -## Task Information -- **Task ID**: CODE-QUALITY-004 -- **Status**: completed - -### Investigation Summary (2025-11-16) -This task was verified as **Implemented (with integration into existing views)**. -- `lua/llm/ui/views/aliases_view.lua` does not exist, and there is no separate "Aliases" view in the unified manager. -- However, the functionality is integrated into the existing models view (`lua/llm/ui/views/models_view.lua`). -- `lua/llm/managers/models_manager.lua` contains the necessary backend functions for alias management. +--- +id: CODE-QUALITY-004 +status: completed +title: Add Model Alias Management +priority: Low +created: 2025-12-11 06:18:18 +category: unknown +type: task +--- -- **Priority**: Low (P3) -- **Phase**: 6 -- **Effort Estimate**: 3 days -- **Dependencies**: None +# Add Model Alias Management -## Task Details ### Description The `llm` CLI allows users to create and manage aliases for models. While the `llm-nvim` plugin can use these aliases, it does not provide a way to manage them. This task is to add a new view to the unified manager to list, create, and delete model aliases. @@ -45,7 +39,3 @@ The `llm` CLI allows users to create and manage aliases for models. While the `l ## Git History - *No commits yet* - ---- -*Created: 2025-11-14* -*Last updated: 2025-11-14* diff --git a/docs/tasks/completed/CRITICAL-001-fix-unpack-compatibility.md b/docs/tasks/completed/CRITICAL-001-fix-unpack-compatibility.md index 4d4884b..9beeadf 100644 --- a/docs/tasks/completed/CRITICAL-001-fix-unpack-compatibility.md +++ b/docs/tasks/completed/CRITICAL-001-fix-unpack-compatibility.md @@ -1,16 +1,14 @@ -# Task: Fix Lua 5.2+ Compatibility - Replace unpack with table.unpack - -## Task Information -- **Task ID**: CRITICAL-001 -- **Status**: completed -- **Priority**: critical -- **Phase**: 1 -- **Estimated Effort**: 0.25 days -- **Actual Effort**: 0.1 days (15 minutes) -- **Completed**: 2025-02-11 -- **Dependencies**: None +--- +id: CRITICAL-001 +status: completed +title: Fix Lua 5.2+ Compatibility - Replace unpack with table.unpack +priority: critical +created: 2025-12-11 06:18:18 +category: unknown +type: task +--- -## Task Details +# Fix Lua 5.2+ Compatibility - Replace unpack with table.unpack ### Description The chat module uses the deprecated global `unpack` function which was removed in Lua 5.2 and replaced with `table.unpack`. This breaks all chat functionality when running on Lua 5.2+. 
@@ -74,9 +72,3 @@ Error -> tests/spec/chat_spec.lua @ 90 - Completed faster than estimated (15 min vs 2 hours) - Confirms Neovim's LuaJIT supports table.unpack - The 2 remaining test failures are unrelated to this fix (mock infrastructure issues) - ---- - -*Created: 2025-02-11* -*Completed: 2025-02-11* -*Status: completed - All chat functionality now working* diff --git a/docs/tasks/completed/CRITICAL-002-implement-line-buffering.md b/docs/tasks/completed/CRITICAL-002-implement-line-buffering.md index 20e15b4..949f33c 100644 --- a/docs/tasks/completed/CRITICAL-002-implement-line-buffering.md +++ b/docs/tasks/completed/CRITICAL-002-implement-line-buffering.md @@ -1,16 +1,14 @@ -# Task: Implement Proper Line Buffering in job.lua - -## Task Information -- **Task ID**: CRITICAL-002 -- **Status**: completed -- **Priority**: critical -- **Phase**: 1 -- **Estimated Effort**: 1 day -- **Actual Effort**: 0.5 days -- **Completed**: 2025-02-11 -- **Dependencies**: None +--- +id: CRITICAL-002 +status: completed +title: Implement Proper Line Buffering in job.lua +priority: critical +created: 2025-12-11 06:18:18 +category: unknown +type: task +--- -## Task Details +# Implement Proper Line Buffering in job.lua ### Description The `job.lua` module currently passes raw stdout chunks to callbacks without proper line buffering and splitting. This causes inconsistent streaming behavior and test failures. @@ -139,9 +137,3 @@ end - Both stdout and stderr buffering implemented - Handles edge cases (empty lines, partial lines, multi-line chunks) - No performance impact - buffering is minimal overhead - ---- - -*Created: 2025-02-11* -*Completed: 2025-02-11* -*Status: completed - Streaming output now reliable across all LLM commands* diff --git a/docs/tasks/completed/CRITICAL-003-redesign-chat-feature.md b/docs/tasks/completed/CRITICAL-003-redesign-chat-feature.md index 8765147..0ef5afb 100644 --- a/docs/tasks/completed/CRITICAL-003-redesign-chat-feature.md +++ b/docs/tasks/completed/CRITICAL-003-redesign-chat-feature.md @@ -1,21 +1,15 @@ -# Task: Redesign and Fix Chat Feature - -## Task Information -- **Task ID**: CRITICAL-003 -- **Status**: completed - -### Investigation Summary (2025-11-16) -This task was verified as **Implemented**. -- `lua/llm/chat/session.lua` and `lua/llm/chat/buffer.lua` exist and contain the expected logic. -- `lua/llm/chat.lua` orchestrates these modules. -- `tests/spec/chat_spec.lua` contains tests reflecting the new architecture. +--- +id: CRITICAL-003 +status: completed +title: Redesign and Fix Chat Feature +priority: Critical +created: 2025-12-11 06:18:18 +category: unknown +type: task +--- -- **Priority**: Critical (P0) -- **Phase**: 5 -- **Effort Estimate**: 12-16 hours -- **Dependencies**: None +# Redesign and Fix Chat Feature -## Task Details ### Description The current chat implementation is unstable, with multiple test failures and a user experience that does not match the capabilities of the `llm` CLI. This task is to completely redesign and rewrite the chat functionality to be robust, stable, and feature-rich, using the proposed architecture outlined in this document. 
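Returning to CRITICAL-002 above: a minimal sketch of the line-buffering approach it describes for `job.lua`, not the actual implementation. Raw chunks are accumulated, only complete lines are passed to the callback, and the trailing partial line is kept until more data (or end of stream) arrives.

```lua
-- Minimal line-buffering sketch (not the actual job.lua code): accumulate raw
-- chunks, emit only complete lines, keep the trailing partial line, and flush
-- whatever remains when the stream closes.
local function make_line_buffer(on_line)
  local pending = ''
  return function(chunk)
    if chunk == nil then                 -- stream closed: flush the remainder
      if pending ~= '' then on_line(pending) end
      pending = ''
      return
    end
    pending = pending .. chunk
    while true do
      local line, rest = pending:match('^(.-)\n(.*)$')
      if not line then break end
      on_line(line)
      pending = rest
    end
  end
end
```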
@@ -70,7 +64,3 @@ The implementation should follow the "Proposed Architecture" below, which was pr ## Git History - *No commits yet* - ---- -*Created: 2025-11-14* -*Last updated: 2025-11-14* diff --git a/docs/tasks/completed/DOCUMENTATION-001-lua-version-requirements.md b/docs/tasks/completed/DOCUMENTATION-001-lua-version-requirements.md index bce5b98..9cf3a1e 100644 --- a/docs/tasks/completed/DOCUMENTATION-001-lua-version-requirements.md +++ b/docs/tasks/completed/DOCUMENTATION-001-lua-version-requirements.md @@ -1,16 +1,14 @@ -# Task: Document Lua Version Requirements - -## Task Information -- **Task ID**: DOCUMENTATION-001 -- **Status**: completed -- **Priority**: medium -- **Phase**: 2 -- **Estimated Effort**: 0.25 days -- **Actual Effort**: 0.25 days -- **Completed**: 2025-02-11 -- **Dependencies**: TESTING-001 (audit results inform requirements) +--- +id: DOCUMENTATION-001 +status: completed +title: Document Lua Version Requirements +priority: medium +created: 2025-12-11 06:18:18 +category: unknown +type: task +--- -## Task Details +# Document Lua Version Requirements ### Description Explicitly document the minimum Lua version required for llm-nvim and explain Neovim's Lua environment to help users troubleshoot compatibility issues. @@ -158,9 +156,3 @@ If you encounter errors like "attempt to call a nil value (global 'unpack')": - Based on actual audit results from TESTING-001 - Helps users understand why certain code patterns are used - Provides clear troubleshooting path for compatibility issues - ---- - -*Created: 2025-02-11* -*Completed: 2025-02-11* -*Status: completed - Lua requirements fully documented* diff --git a/docs/tasks/completed/DOCUMENTATION-002-add-adrs.md b/docs/tasks/completed/DOCUMENTATION-002-add-adrs.md index 74086e9..4940018 100644 --- a/docs/tasks/completed/DOCUMENTATION-002-add-adrs.md +++ b/docs/tasks/completed/DOCUMENTATION-002-add-adrs.md @@ -1,16 +1,14 @@ -# Task: Add Architectural Decision Records - -## Task Information -- **Task ID**: DOCUMENTATION-002 -- **Status**: completed -- **Priority**: low -- **Phase**: 3 -- **Estimated Effort**: 0.5 days -- **Actual Effort**: 0.5 days -- **Completed**: 2025-02-11 -- **Dependencies**: None +--- +id: DOCUMENTATION-002 +status: completed +title: Add Architectural Decision Records +priority: low +created: 2025-12-11 06:18:18 +category: unknown +type: task +--- -## Task Details +# Add Architectural Decision Records ### Description Create architectural decision records (ADRs) documenting key design decisions, particularly the streaming implementation refactoring mentioned in docs/tasks.md. @@ -73,189 +71,3 @@ The plugin has undergone significant architectural evolution (streaming unificat - [Related code] - [Related tasks] - [External resources] - ---- -*Date: YYYY-MM-DD* -*Author: [Name]* -``` - -**Key ADRs to Create**: - -1. **ADR-001: Unified Streaming Command Execution** - - Context: Multiple command types needed streaming - - Decision: Single `run_streaming_command` with callbacks - - Consequences: DRY, but requires callback-based design - - Reference: docs/tasks.md streaming unification - -2. **ADR-002: LLM CLI Conversation Management** - - Context: Need chat history across prompts - - Decision: Use llm CLI's `--continue` flag - - Consequences: Depends on llm CLI, but consistent UX - - Reference: docs/tasks.md chat handling - -3. 
**ADR-003: Lazy-Loaded Manager Facade** - - Context: Startup performance vs feature access - - Decision: Facade with on-demand loading - - Consequences: Fast startup, slight complexity - - Reference: lua/llm/facade.lua - -4. **ADR-004: Temporary Files for Visual Selection** - - Context: How to pass selections to llm CLI - - Decision: Write to temp file, pass as fragment - - Consequences: Consistent with fragments, requires cleanup - - Reference: docs/architecture.md #4 - -**ADR Index** (docs/adr/README.md): -```markdown -# Architectural Decision Records - -## Index - -- [ADR-000](ADR-000-template.md) - Template for new ADRs -- [ADR-001](ADR-001-streaming-unification.md) - Unified Streaming Command Execution -- [ADR-002](ADR-002-chat-conversation.md) - LLM CLI Conversation Management -- [ADR-003](ADR-003-lazy-manager-facade.md) - Lazy-Loaded Manager Facade -- [ADR-004](ADR-004-temp-file-selection.md) - Temporary Files for Visual Selection -- [ADR-005](ADR-005-configuration-system.md) - Centralized Configuration System -- [ADR-006](ADR-006-manager-pattern.md) - Domain-Specific Manager Pattern -- [ADR-007](ADR-007-auto-update-system.md) - Auto-Update System for LLM CLI -- [ADR-008](ADR-008-command-system.md) - Command System Architecture - -## Status Summary -- Accepted: 8 -- Proposed: 0 -- Deprecated: 0 -``` - -## Implementation Status - -### Completed Work - -**✅ Created `docs/adr/` directory structure** - -**✅ ADR-000: Template** (`docs/adr/ADR-000-template.md`) -- Complete template with all required sections -- Usage guidelines and lifecycle documentation -- When to create ADRs and numbering conventions - -**✅ ADR-001: Unified Streaming Command Execution** (`docs/adr/ADR-001-streaming-unification.md`) -- Context: Multiple command types needed streaming -- Decision: Single `run_streaming_command()` with callbacks -- Consequences: DRY principle, flexible callbacks, easy testing -- Alternatives: Separate functions, OOP, coroutines (all rejected) -- References: `lua/llm/api.lua`, CRITICAL-002, task history - -**✅ ADR-002: LLM CLI Native Conversation Management** (`docs/adr/ADR-002-chat-conversation.md`) -- Context: Need for conversation history in chat -- Decision: Use llm CLI's `--continue` flag -- Consequences: Leverage existing features, no reimplementation -- Alternatives: Plugin storage, full history, hybrid (all rejected) -- References: `lua/llm/chat.lua`, llm CLI docs, task history - -**✅ ADR-003: Lazy-Loaded Manager Facade** (`docs/adr/ADR-003-lazy-manager-facade.md`) -- Context: Startup performance vs feature access -- Decision: Facade with on-demand loading -- Consequences: Fast startup (50ms vs 150ms), memory efficient -- Alternatives: Eager loading, direct require, DI (all rejected) -- Performance analysis: 100ms faster startup, 5-10ms first-use cost -- References: `lua/llm/facade.lua` - -**✅ ADR-004: Temporary Files for Visual Selection** (`docs/adr/ADR-004-temp-file-selection.md`) -- Context: How to pass selections to llm CLI -- Decision: Write to temp file, pass as fragment -- Consequences: No escaping issues, consistent with fragments -- Alternatives: stdin, shell escaping, named pipes, in-memory (all rejected) -- Performance: <1ms for small, 1-20ms for large selections -- References: `lua/llm/commands.lua`, `lua/llm/core/utils/text.lua` - -**✅ ADR-005: Centralized Configuration System** (`docs/adr/ADR-005-configuration-system.md`) -- Context: Need for robust configuration system -- Decision: Centralized config with validation and change listeners -- Consequences: Type safety, 
reactive updates, single source of truth -- Alternatives: Global variables, simple table, external library (all rejected) -- References: `lua/llm/config.lua`, `lua/llm/core/utils/validate.lua` - -**✅ ADR-006: Domain-Specific Manager Pattern** (`docs/adr/ADR-006-manager-pattern.md`) -- Context: Multiple domains need clear separation -- Decision: Domain-specific managers with facade access -- Consequences: Separation of concerns, testability, maintainability -- Alternatives: Monolithic module, functional approach, OOP classes (all rejected) -- References: `lua/llm/facade.lua`, `lua/llm/managers/`, `lua/llm/ui/views/` - -**✅ ADR-007: Auto-Update System for LLM CLI** (`docs/adr/ADR-007-auto-update-system.md`) -- Context: Need to keep llm CLI current -- Decision: Background update checks with multiple package manager support -- Consequences: Current dependencies, flexible installation, non-intrusive -- Alternatives: Manual updates, prompt-based, external manager (all rejected) -- References: `lua/llm/core/utils/shell.lua`, `lua/llm/init.lua`, `plugin/llm.lua` - -**✅ ADR-008: Command System Architecture** (`docs/adr/ADR-008-command-system.md`) -- Context: Need flexible command system with subcommands -- Decision: Multi-layered command system with dispatcher -- Consequences: Flexible, consistent, discoverable, testable -- Alternatives: Monolithic handler, per-command modules, event-driven (all rejected) -- References: `plugin/llm.lua`, `lua/llm/commands.lua`, `lua/llm/api.lua` - -**✅ ADR Index** (`docs/adr/README.md`) -- Complete index with all ADRs -- Status summary table (8 accepted ADRs) -- Guidelines for creating new ADRs -- Reading order for new contributors -- Links to related documentation - -**✅ Updated `docs/architecture.md`** -- Added quick links section referencing ADRs -- Linked each architectural decision to its ADR -- All major decisions now reference their detailed ADRs: - - ADR-001: Streaming (Decision #2) - - ADR-002: Chat management (Decision #8) - - ADR-003: Manager facade (Decision #1) - - ADR-004: Selection handling (Decision #4) - - ADR-005: Configuration system (Decision #3) - - ADR-006: Manager pattern (Decision #5) - - ADR-007: Auto-update system (Decision #10) - - ADR-008: Command system (Data Flow section) - -### ADR Content Quality - -Each ADR includes: -- **Clear context**: Problem statement and constraints -- **Explicit decision**: What was chosen and why -- **Consequences**: Both positive and negative outcomes -- **Alternatives**: What else was considered and why rejected -- **Implementation details**: Where to find the code -- **References**: Links to code, tasks, external resources -- **Real data**: Performance measurements where applicable - -### Files Created -- `docs/adr/ADR-000-template.md` -- `docs/adr/ADR-001-streaming-unification.md` -- `docs/adr/ADR-002-chat-conversation.md` -- `docs/adr/ADR-003-lazy-manager-facade.md` -- `docs/adr/ADR-004-temp-file-selection.md` -- `docs/adr/ADR-005-configuration-system.md` -- `docs/adr/ADR-006-manager-pattern.md` -- `docs/adr/ADR-007-auto-update-system.md` -- `docs/adr/ADR-008-command-system.md` -- `docs/adr/README.md` - -### Files Modified -- `docs/architecture.md` (added ADR references) - -### Git History -- Commit: Add architectural decision records (ADRs) - -### Notes -- ADRs document decisions already implemented and tested -- All ADRs status: "Accepted" (production-ready) -- Comprehensive coverage of major architectural patterns -- Clear writing suitable for new contributors -- Links provide traceability to 
implementation -- **Additional ADRs created**: Configuration system, manager pattern, auto-update system, command system -- **Complete coverage**: All major architectural decisions now documented - ---- - -*Created: 2025-02-11* -*Completed: 2025-02-11* -*Status: completed - All major architectural decisions documented* diff --git a/docs/tasks/completed/PERFORMANCE-001-implement-manager-caching.md b/docs/tasks/completed/PERFORMANCE-001-implement-manager-caching.md index 1597356..e03bf0a 100644 --- a/docs/tasks/completed/PERFORMANCE-001-implement-manager-caching.md +++ b/docs/tasks/completed/PERFORMANCE-001-implement-manager-caching.md @@ -1,21 +1,14 @@ -# Task: Implement Caching for Manager LLM CLI Calls - -## Task Information -- **Task ID**: PERFORMANCE-001 -- **Status**: completed - -### Investigation Summary (2025-11-16) -This task was verified as **Implemented (without explicit TTL configuration)**. -- The `cache.lua` module is used in manager files to store and retrieve results of `llm_cli.run_llm_command`. -- `lua/llm/managers/models_manager.lua` demonstrates the use of `cache.get()`, `cache.set()`, and `cache.invalidate()`. -- There is no explicit cache TTL configuration in `lua/llm/config.lua`. - -- **Priority**: low -- **Phase**: 4 -- **Estimated Effort**: 1 day -- **Dependencies**: None +--- +id: PERFORMANCE-001 +status: completed +title: Implement Caching for Manager LLM CLI Calls +priority: low +created: 2025-12-11 06:18:18 +category: unknown +type: task +--- -## Task Details +# Implement Caching for Manager LLM CLI Calls ### Description Implement TTL-based caching for frequently-called llm CLI commands in manager modules to improve responsiveness of the unified manager UI. @@ -129,8 +122,3 @@ time llm models list # ~5ms (from cache) - **Con**: Stale data if models change externally - **Con**: Additional memory usage (minimal) - **Con**: Cache invalidation complexity - ---- - -*Created: 2025-02-11* -*Status: pending - Nice-to-have performance optimization* diff --git a/docs/tasks/completed/TESTING-001-audit-lua-compatibility.md b/docs/tasks/completed/TESTING-001-audit-lua-compatibility.md index f007cab..daa375e 100644 --- a/docs/tasks/completed/TESTING-001-audit-lua-compatibility.md +++ b/docs/tasks/completed/TESTING-001-audit-lua-compatibility.md @@ -1,16 +1,14 @@ -# Task: Audit Codebase for Lua 5.1 vs 5.2+ Compatibility - -## Task Information -- **Task ID**: TESTING-001 -- **Status**: completed -- **Priority**: high -- **Phase**: 2 -- **Estimated Effort**: 1 day -- **Actual Effort**: 0.25 days -- **Completed**: 2025-02-11 -- **Dependencies**: CRITICAL-001 (provides pattern for fixes) +--- +id: TESTING-001 +status: completed +title: Audit Codebase for Lua 5.1 vs 5.2+ Compatibility +priority: high +created: 2025-12-11 06:18:18 +category: unknown +type: task +--- -## Task Details +# Audit Codebase for Lua 5.1 vs 5.2+ Compatibility ### Description Systematically audit the entire codebase for Lua version compatibility issues. The `unpack` issue in CRITICAL-001 suggests there may be other Lua 5.1-specific code that needs updating. 
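PERFORMANCE-001 above describes TTL-based caching for manager CLI calls, while its investigation note records that the shipped cache has no explicit TTL configuration. A self-contained sketch of the idea with the same `get`/`set`/`invalidate` surface; the 300-second TTL is an example value, not the plugin's behavior.

```lua
-- Illustrative TTL cache; this is not the plugin's cache.lua.
local M = { entries = {}, ttl_seconds = 300 }

function M.get(key)
  local entry = M.entries[key]
  if entry and (os.time() - entry.at) < M.ttl_seconds then
    return entry.value
  end
  return nil
end

function M.set(key, value)
  M.entries[key] = { value = value, at = os.time() }
end

function M.invalidate(key)
  M.entries[key] = nil
end

return M
```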
@@ -205,9 +203,3 @@ lua/llm/facade.lua:65: error("Failed to load unified manager") - Codebase is already well-written with modern Lua practices - CRITICAL-001 was the only compatibility issue in entire codebase - No additional fix tasks needed - ---- - -*Created: 2025-02-11* -*Completed: 2025-02-11* -*Status: completed - Codebase is fully Lua 5.2+ compatible* diff --git a/docs/tasks/completed/TESTING-002-add-ci-pipeline.md b/docs/tasks/completed/TESTING-002-add-ci-pipeline.md index 652431f..bb8dc1b 100644 --- a/docs/tasks/completed/TESTING-002-add-ci-pipeline.md +++ b/docs/tasks/completed/TESTING-002-add-ci-pipeline.md @@ -1,21 +1,14 @@ -# Task: Add CI/CD Pipeline for Automated Testing - -## Task Information -- **Task ID**: TESTING-002 -- **Status**: completed - -### Investigation Summary (2025-11-16) -This task was verified as **Implemented**. -- A CI workflow file exists at `.github/workflows/ci.yml`. -- The workflow runs on push and pull_request events. -- It includes a `coverage` job that runs tests with Luacov and checks if the total coverage is above a certain threshold, failing the build if it's not. - -- **Priority**: medium -- **Phase**: 3 -- **Estimated Effort**: 1 day -- **Dependencies**: CRITICAL-001, CRITICAL-002 (tests must pass first) +--- +id: TESTING-002 +status: completed +title: Add CI/CD Pipeline for Automated Testing +priority: medium +created: 2025-12-11 06:18:18 +category: unknown +type: task +--- -## Task Details +# Add CI/CD Pipeline for Automated Testing ### Description Implement GitHub Actions workflow to automatically run tests on every push and pull request. This will catch issues like the `unpack` compatibility bug before they reach users. @@ -101,8 +94,3 @@ jobs: - Code coverage with luacov - Performance benchmarks - Integration tests with Neovim headless mode - ---- - -*Created: 2025-02-11* -*Status: pending - Improves code quality and prevents regressions* diff --git a/docs/tasks/critical/CRITICAL-004-add-embeddings-support.md b/docs/tasks/critical/CRITICAL-004-add-embeddings-support.md index 8e1540f..c05c973 100644 --- a/docs/tasks/critical/CRITICAL-004-add-embeddings-support.md +++ b/docs/tasks/critical/CRITICAL-004-add-embeddings-support.md @@ -1,21 +1,15 @@ -# Task: Add Embeddings Support - -## Task Information -- **Task ID**: CRITICAL-004 -- **Status**: pending - -### Investigation Summary (2025-11-16) -This task was verified as **Not Implemented**. -- `lua/llm/managers/embeddings_manager.lua` does not exist. -- `lua/llm/ui/views/embeddings_view.lua` does not exist. -- The commands `:LLMEmbed` and `:LLMSimilar` do not exist. +--- +id: CRITICAL-004 +status: pending +title: Add Embeddings Support +priority: High +created: 2025-12-11 06:18:18 +category: unknown +type: task +--- -- **Priority**: High (P1) -- **Phase**: 5 -- **Effort Estimate**: 10 days -- **Dependencies**: None +# Add Embeddings Support -## Task Details ### Description The `llm` CLI provides a comprehensive suite of tools for working with embeddings, including creating embeddings, finding similar items, and managing collections. This feature set is entirely missing from the `llm-nvim` plugin. This task is to implement a user interface and the underlying logic to expose the `llm` CLI's embedding functionality within Neovim. 
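A hypothetical sketch of what a first slice of CRITICAL-004 could look like; none of this exists in llm-nvim yet. It assumes the llm CLI's `embed` subcommand with `-c` (content) and `-m` (model) flags, which should be confirmed against `llm embed --help`, and it reuses the `run_llm_command` runner the existing managers call.

```lua
-- Hypothetical sketch for CRITICAL-004; nothing here exists in the plugin yet.
local llm_cli = require('llm.llm_cli') -- module path assumed; reuse whatever the existing managers require

local function embed_text(text, model)
  local cmd = 'embed -c ' .. vim.fn.shellescape(text)
  if model then
    cmd = cmd .. ' -m ' .. vim.fn.shellescape(model)
  end
  return llm_cli.run_llm_command(cmd) -- raw CLI output
end
```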
@@ -49,7 +43,3 @@ The `llm` CLI provides a comprehensive suite of tools for working with embedding ## Git History - *No commits yet* - ---- -*Created: 2025-11-14* -*Last updated: 2025-11-14* diff --git a/docs/tasks/critical/CRITICAL-005-add-tools-support.md b/docs/tasks/critical/CRITICAL-005-add-tools-support.md index 064d1aa..ea1c772 100644 --- a/docs/tasks/critical/CRITICAL-005-add-tools-support.md +++ b/docs/tasks/critical/CRITICAL-005-add-tools-support.md @@ -1,20 +1,15 @@ -# Task: Add Tools (Function Calling) Support - -## Task Information -- **Task ID**: CRITICAL-005 -- **Status**: pending - -### Investigation Summary (2025-11-16) -This task was verified as **Not Implemented**. -- `tools_manager.lua` does not exist. -- The `:LLM` command in `lua/llm/commands.lua` does not handle a `--tool` or `-t` flag. +--- +id: CRITICAL-005 +status: pending +title: Add Tools (Function Calling) Support +priority: High +created: 2025-12-11 06:18:18 +category: unknown +type: task +--- -- **Priority**: High (P1) -- **Phase**: 5 -- **Effort Estimate**: 8 days -- **Dependencies**: None +# Add Tools (Function Calling) Support -## Task Details ### Description The `llm` CLI supports "tools", which allow the language model to execute predefined functions to retrieve information or perform actions. This is a powerful feature for creating more interactive and capable agents, and it is currently not implemented in the `llm-nvim` plugin. This task is to add support for using tools in prompts. @@ -48,7 +43,3 @@ The `llm` CLI supports "tools", which allow the language model to execute predef ## Git History - *No commits yet* - ---- -*Created: 2025-11-14* -*Last updated: 2025-11-14* diff --git a/docs/tasks/critical/CRITICAL-006-add-multimodal-attachments-support.md b/docs/tasks/critical/CRITICAL-006-add-multimodal-attachments-support.md index 74cd3a3..3c6f422 100644 --- a/docs/tasks/critical/CRITICAL-006-add-multimodal-attachments-support.md +++ b/docs/tasks/critical/CRITICAL-006-add-multimodal-attachments-support.md @@ -1,19 +1,15 @@ -# Task: Add Multi-modal Attachments Support - -## Task Information -- **Task ID**: CRITICAL-006 -- **Status**: pending - -### Investigation Summary (2025-11-16) -This task was verified as **Not Implemented**. -- The `:LLM` command in `lua/llm/commands.lua` does not handle an `--attach` or `-a` flag. +--- +id: CRITICAL-006 +status: pending +title: Add Multi-modal Attachments Support +priority: Medium +created: 2025-12-11 06:18:18 +category: unknown +type: task +--- -- **Priority**: Medium (P2) -- **Phase**: 6 -- **Effort Estimate**: 5 days -- **Dependencies**: None +# Add Multi-modal Attachments Support -## Task Details ### Description The `llm` CLI can process images, audio, and video files as attachments to a prompt. The `llm-nvim` plugin is currently limited to text-based inputs. This task is to add support for attaching multi-modal files to prompts. 
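A hypothetical sketch for CRITICAL-006, which is not implemented in the plugin. It builds the attachment portion of an `llm` invocation using the `-a` flag the task refers to; the exact flag spelling should be checked against `llm prompt --help` before use.

```lua
-- Hypothetical sketch for CRITICAL-006; the plugin does not handle this yet.
local function append_attachments(cmd, paths)
  for _, path in ipairs(paths or {}) do
    cmd = cmd .. ' -a ' .. vim.fn.shellescape(vim.fn.expand(path))
  end
  return cmd
end

-- e.g. append_attachments('prompt "Describe this image"', { '~/screenshot.png' })
```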
@@ -39,7 +35,3 @@ The `llm` CLI can process images, audio, and video files as attachments to a pro ## Git History - *No commits yet* - ---- -*Created: 2025-11-14* -*Last updated: 2025-11-14* diff --git a/docs/tasks/critical/CRITICAL-007-increase-code-coverage-to-70.md b/docs/tasks/critical/CRITICAL-007-increase-code-coverage-to-70.md index a6e33fb..a5d9087 100644 --- a/docs/tasks/critical/CRITICAL-007-increase-code-coverage-to-70.md +++ b/docs/tasks/critical/CRITICAL-007-increase-code-coverage-to-70.md @@ -1,14 +1,15 @@ -# Task: Increase Code Coverage to 70% +--- +id: CRITICAL-007 +status: in_progress +title: Increase Code Coverage to 70% +priority: High +created: 2025-12-11 06:18:18 +category: unknown +type: task +--- -## Task Information -- **Task ID**: CRITICAL-007 -- **Status**: in_progress -- **Priority**: High (P1) -- **Phase**: 7 -- **Effort Estimate**: 3 days -- **Dependencies**: None +# Increase Code Coverage to 70% -## Task Details ### Description The current code coverage is below the 70% threshold required by the CI/CD pipeline. This task is to increase the code coverage to at least 70% to ensure the stability and reliability of the codebase. @@ -34,7 +35,3 @@ The current code coverage is below the 70% threshold required by the CI/CD pipel ## Git History - *No commits yet* - ---- -*Created: 2025-11-16* -*Last updated: 2025-11-16* diff --git a/docs/tasks/domain/.keep b/docs/tasks/domain/.keep new file mode 100644 index 0000000..e69de29 diff --git a/docs/tasks/features/.keep b/docs/tasks/features/.keep new file mode 100644 index 0000000..e69de29 diff --git a/docs/tasks/foundation/.keep b/docs/tasks/foundation/.keep new file mode 100644 index 0000000..e69de29 diff --git a/docs/tasks/infrastructure/.keep b/docs/tasks/infrastructure/.keep new file mode 100644 index 0000000..e69de29 diff --git a/docs/tasks/migration/.keep b/docs/tasks/migration/.keep new file mode 100644 index 0000000..e69de29 diff --git a/docs/tasks/presentation/.keep b/docs/tasks/presentation/.keep new file mode 100644 index 0000000..e69de29 diff --git a/docs/tasks/review/.keep b/docs/tasks/review/.keep new file mode 100644 index 0000000..e69de29 diff --git a/docs/tasks/security/.keep b/docs/tasks/security/.keep new file mode 100644 index 0000000..e69de29 diff --git a/docs/tasks/testing/.keep b/docs/tasks/testing/.keep new file mode 100644 index 0000000..e69de29 diff --git a/llms.txt b/llms.txt index fbb907c..767ff1f 100644 --- a/llms.txt +++ b/llms.txt @@ -3,7 +3,7 @@ # GitHub: https://github.com/julwrites/llm-nvim ## Overview -julwrites/llm-nvim is a Neovim plugin designed to seamlessly integrate with the llm CLI tool by Simon Willison. It enables users to interact with large language models (LLMs) directly from Neovim, offering features like prompting, code explanation, and management of models, API keys, fragments, templates, and schemas. +julwrites/llm-nvim is a Neovim plugin designed to seamlessly integrate with the llm CLI tool by Simon Willison. It enables users to interact with large language models (LLMs) directly from Neovim, offering features like prompting, code explanation, and management of models, API keys, and fragments. ## Key Features - Send prompts to LLMs directly from Neovim for quick responses. @@ -11,7 +11,6 @@ julwrites/llm-nvim is a Neovim plugin designed to seamlessly integrate with the - Explain code in the current buffer with detailed insights from LLMs. - Manage API keys, custom models, and system prompts for various LLM providers. 
- Use fragments (files, URLs, GitHub repos) to enrich prompts. -- Create and manage templates and schemas for structured interactions. - Access a unified manager window for easy navigation of models, plugins, keys, and more. ## Installation @@ -40,7 +39,7 @@ The plugin doesn't set default key mappings. Users can create their own, such as - Enable debug mode for troubleshooting integration issues. ## Keywords -Neovim, LLM, AI plugin, code assistance, text generation, chat with AI, code explanation, Neovim AI integration, llm CLI, Simon Willison, GPT, Claude, Llama, API key management, fragments, templates, schemas. +Neovim, LLM, AI plugin, code assistance, text generation, chat with AI, code explanation, Neovim AI integration, llm CLI, Simon Willison, GPT, Claude, Llama, API key management, fragments. ## License Apache 2.0 diff --git a/lua/llm/commands.lua b/lua/llm/commands.lua index 0ef1394..5232979 100644 --- a/lua/llm/commands.lua +++ b/lua/llm/commands.lua @@ -466,19 +466,22 @@ end -- Test function to verify terminal creation function M.test_terminal_creation() - vim.notify("Testing terminal creation...", vim.log.levels.INFO) - vim.cmd('new') - local buf = vim.api.nvim_get_current_buf() - vim.notify("Created buffer: " .. buf, vim.log.levels.INFO) - - local cmd = "echo 'Test terminal'" - vim.notify("Executing: terminal " .. cmd, vim.log.levels.INFO) - vim.cmd('terminal ' .. cmd) - - local term_buf = vim.api.nvim_get_current_buf() - vim.notify("Terminal buffer: " .. term_buf, vim.log.levels.INFO) - local buf_type = vim.api.nvim_buf_get_option(term_buf, 'buftype') - vim.notify("Buffer type: " .. buf_type, vim.log.levels.INFO) + local config = require('llm.config') + if config.get('debug') then + vim.notify("Testing terminal creation...", vim.log.levels.DEBUG) + vim.cmd('new') + local buf = vim.api.nvim_get_current_buf() + vim.notify("Created buffer: " .. buf, vim.log.levels.DEBUG) + + local cmd = "echo 'Test terminal'" + vim.notify("Executing: terminal " .. cmd, vim.log.levels.DEBUG) + vim.cmd('terminal ' .. cmd) + + local term_buf = vim.api.nvim_get_current_buf() + vim.notify("Terminal buffer: " .. term_buf, vim.log.levels.DEBUG) + local buf_type = vim.api.nvim_buf_get_option(term_buf, 'buftype') + vim.notify("Buffer type: " .. buf_type, vim.log.levels.DEBUG) + end vim.cmd('startinsert') end diff --git a/lua/llm/managers/custom_openai.lua b/lua/llm/managers/custom_openai.lua index 8a49b72..f459a0d 100644 --- a/lua/llm/managers/custom_openai.lua +++ b/lua/llm/managers/custom_openai.lua @@ -30,7 +30,7 @@ function M.load_custom_openai_models() local _, yaml_path = file_utils.get_config_path("extra-openai-models.yaml") if config.get("debug") then - vim.notify("Looking for custom OpenAI models at: " .. (yaml_path or "path not found"), vim.log.levels.INFO) + vim.notify("Looking for custom OpenAI models at: " .. (yaml_path or "path not found"), vim.log.levels.DEBUG) end if not yaml_path then @@ -41,7 +41,7 @@ function M.load_custom_openai_models() local file = io.open(yaml_path, "r") if not file then if config.get("debug") then - vim.notify("extra-openai-models.yaml not found at: " .. yaml_path .. ". No custom models loaded.", vim.log.levels.INFO) + vim.notify("extra-openai-models.yaml not found at: " .. yaml_path .. ". No custom models loaded.", vim.log.levels.DEBUG) end return {} end @@ -50,7 +50,7 @@ function M.load_custom_openai_models() file:close() if not content or content == "" then if config.get("debug") then - vim.notify("extra-openai-models.yaml is empty. 
No custom models loaded.", vim.log.levels.INFO) + vim.notify("extra-openai-models.yaml is empty. No custom models loaded.", vim.log.levels.DEBUG) end return {} end diff --git a/lua/llm/managers/fragments_manager.lua b/lua/llm/managers/fragments_manager.lua index 2cd80fb..ce38f54 100644 --- a/lua/llm/managers/fragments_manager.lua +++ b/lua/llm/managers/fragments_manager.lua @@ -33,7 +33,7 @@ function M.populate_fragments_buffer(bufnr) local lines = { "# Fragment Management", "", - "Navigate: [M]odels [P]lugins [K]eys [T]emplates [S]chemas", + "Navigate: [M]odels [P]lugins [K]eys", "Actions: [v]iew [a]dd alias [r]emove alias [n]ew file [g]itHub [p]rompt [t]oggle view [q]uit", "──────────────────────────────────────────────────────────────", "", diff --git a/lua/llm/managers/keys_manager.lua b/lua/llm/managers/keys_manager.lua index 48babbf..9b92a83 100644 --- a/lua/llm/managers/keys_manager.lua +++ b/lua/llm/managers/keys_manager.lua @@ -44,6 +44,7 @@ end function M.set_api_key(key_name, key_value) local result = llm_cli.run_llm_command('keys set ' .. key_name .. ' --value ' .. key_value) cache.invalidate('keys') + cache.invalidate('models') -- Invalidate models cache when keys change return result ~= nil end @@ -89,6 +90,7 @@ function M.remove_api_key(key_name) keys_file_write:close() cache.invalidate('keys') + cache.invalidate('models') -- Invalidate models cache when keys change return true else vim.notify("Key '" .. key_name .. "' not found in keys.json", vim.log.levels.WARN) @@ -105,7 +107,7 @@ function M.populate_keys_buffer(bufnr) local lines = { "# API Key Management", "", - "Navigate: [M]odels [P]lugins [F]ragments [T]emplates [S]chemas", + "Navigate: [M]odels [P]lugins [F]ragments", "Actions: [s]et key [r]emove key [A]dd custom [q]uit", "──────────────────────────────────────────────────────────────", "", diff --git a/lua/llm/managers/models_manager.lua b/lua/llm/managers/models_manager.lua index c0e0ff2..9df174b 100644 --- a/lua/llm/managers/models_manager.lua +++ b/lua/llm/managers/models_manager.lua @@ -50,10 +50,10 @@ function M.get_available_providers() return { -- OpenAI only requires the API key, not a plugin OpenAI = keys_manager.is_key_set("openai"), - Anthropic = keys_manager.is_key_set("anthropic"), - Mistral = keys_manager.is_key_set("mistral"), - Gemini = keys_manager.is_key_set("gemini"), -- Corrected key name from "google" to "gemini" - Groq = keys_manager.is_key_set("groq"), + Anthropic = keys_manager.is_key_set("anthropic") and plugins_manager.is_plugin_installed("llm-anthropic"), + Mistral = keys_manager.is_key_set("mistral") and plugins_manager.is_plugin_installed("llm-mistral"), + Gemini = keys_manager.is_key_set("gemini") and plugins_manager.is_plugin_installed("llm-gemini"), + Groq = keys_manager.is_key_set("groq") and plugins_manager.is_plugin_installed("llm-groq"), Ollama = plugins_manager.is_plugin_installed("llm-ollama"), -- Corrected plugin name from "ollama" to "llm-ollama" -- Local models are always available Local = true @@ -75,7 +75,7 @@ function M.is_model_available(model_line) model_line:match("^Custom OpenAI:") or model_line:match("^Azure OpenAI:") then if config.get("debug") then - vim.notify("Checking custom model availability: " .. model_name, vim.log.levels.INFO) + vim.notify("Checking custom model availability: " .. 
model_name, vim.log.levels.DEBUG) end -- For custom models, check validity using the dedicated function and the extracted name/id return custom_openai.is_custom_openai_model_valid(model_name) @@ -83,7 +83,7 @@ function M.is_model_available(model_line) -- Check if this standard-looking OpenAI model is actually a custom one if custom_openai.is_custom_openai_model_valid(model_name) then if config.get("debug") then - vim.notify("Identified standard OpenAI line as custom model: " .. model_name, vim.log.levels.INFO) + vim.notify("Identified standard OpenAI line as custom model: " .. model_name, vim.log.levels.DEBUG) end return true -- Validity is checked by is_custom_openai_model_valid end @@ -311,7 +311,7 @@ function M.generate_models_list() local lines = { "# Model Management", "", - "Navigate: [P]lugins [K]eys [F]ragments [T]emplates [S]chemas", + "Navigate: [P]lugins [K]eys [F]ragments", "Actions: [s]et default [a]dd alias [r]emove alias [c]ustom model [q]uit", -- Updated actions "──────────────────────────────────────────────────────────────", "" diff --git a/lua/llm/managers/plugins_manager.lua b/lua/llm/managers/plugins_manager.lua index 04e6521..2ce571b 100644 --- a/lua/llm/managers/plugins_manager.lua +++ b/lua/llm/managers/plugins_manager.lua @@ -40,23 +40,33 @@ end -- Get available plugins from the plugin directory function M.get_available_plugins() - vim.notify("Getting available plugins...", vim.log.levels.INFO) + if config.get('debug') then + vim.notify("Getting available plugins...", vim.log.levels.DEBUG) + end local cached_plugins = cache.get('available_plugins') if cached_plugins then - vim.notify("Returning cached plugins.", vim.log.levels.INFO) + if config.get('debug') then + vim.notify("Returning cached plugins.", vim.log.levels.DEBUG) + end return cached_plugins end - vim.notify("Fetching plugins from URL...", vim.log.levels.INFO) + if config.get('debug') then + vim.notify("Fetching plugins from URL...", vim.log.levels.DEBUG) + end local plugins_html = vim.fn.system('curl -s https://llm.datasette.io/en/stable/plugins/directory.html') if not plugins_html or plugins_html == "" then vim.notify("Failed to fetch HTML from URL.", vim.log.levels.ERROR) return {} end - vim.notify("Fetched HTML content, length: " .. #plugins_html, vim.log.levels.INFO) + if config.get('debug') then + vim.notify("Fetched HTML content, length: " .. #plugins_html, vim.log.levels.DEBUG) + end local plugins = parse_plugins_html(plugins_html) - vim.notify("Parsed " .. #plugins .. " plugins from HTML.", vim.log.levels.INFO) + if config.get('debug') then + vim.notify("Parsed " .. #plugins .. 
" plugins from HTML.", vim.log.levels.DEBUG) + end cache.set('available_plugins', plugins) return plugins @@ -72,14 +82,20 @@ end -- Get installed plugins from llm CLI function M.get_installed_plugins() - vim.notify("Getting installed plugins...", vim.log.levels.INFO) + if config.get('debug') then + vim.notify("Getting installed plugins...", vim.log.levels.DEBUG) + end local cached_plugins = cache.get('installed_plugins') if cached_plugins then - vim.notify("Returning cached installed plugins.", vim.log.levels.INFO) + if config.get('debug') then + vim.notify("Returning cached installed plugins.", vim.log.levels.DEBUG) + end return cached_plugins end - vim.notify("Running 'llm plugins' command...", vim.log.levels.INFO) + if config.get('debug') then + vim.notify("Running 'llm plugins' command...", vim.log.levels.DEBUG) + end local plugins_output = llm_cli.run_llm_command('plugins') if not plugins_output then vim.notify("'llm plugins' command returned no output.", vim.log.levels.WARN) @@ -97,14 +113,18 @@ function M.get_installed_plugins() for _, plugin_data in ipairs(decoded_plugins) do if plugin_data and plugin_data.name then table.insert(plugins, { name = plugin_data.name }) - vim.notify("Parsed installed plugin (JSON): '" .. plugin_data.name .. "'", vim.log.levels.INFO) + if config.get('debug') then + vim.notify("Parsed installed plugin (JSON): '" .. plugin_data.name .. "'", vim.log.levels.DEBUG) + end end end else vim.notify("Failed to decode JSON from 'llm plugins' command: " .. tostring(decoded_plugins), vim.log.levels.ERROR) return {} end - vim.notify("Finished parsing installed plugins, found " .. #plugins .. ".", vim.log.levels.INFO) + if config.get('debug') then + vim.notify("Finished parsing installed plugins, found " .. #plugins .. ".", vim.log.levels.DEBUG) + end cache.set('installed_plugins', plugins) return plugins end @@ -125,6 +145,7 @@ function M.install_plugin(plugin_name) local result = llm_cli.run_llm_command('install ' .. plugin_name) cache.invalidate('installed_plugins') cache.invalidate('available_plugins') + cache.invalidate('models') -- Invalidate models cache when plugins change return result ~= nil end @@ -133,14 +154,19 @@ function M.uninstall_plugin(plugin_name) local result = llm_cli.run_llm_command('uninstall ' .. plugin_name .. ' -y') cache.invalidate('installed_plugins') cache.invalidate('available_plugins') + cache.invalidate('models') -- Invalidate models cache when plugins change return result ~= nil end -- Populate the buffer with plugin management content function M.populate_plugins_buffer(bufnr) - vim.notify("Populating plugins buffer...", vim.log.levels.INFO) + if config.get('debug') then + vim.notify("Populating plugins buffer...", vim.log.levels.DEBUG) + end local available_plugins = M.get_available_plugins() - vim.notify("Got " .. #available_plugins .. " available plugins.", vim.log.levels.INFO) + if config.get('debug') then + vim.notify("Got " .. #available_plugins .. " available plugins.", vim.log.levels.DEBUG) + end if not available_plugins or #available_plugins == 0 then vim.notify("No available plugins found. Displaying error message.", vim.log.levels.WARN) @@ -159,7 +185,9 @@ function M.populate_plugins_buffer(bufnr) vim.notify("DEBUG: Raw installed_plugins: " .. vim.inspect(installed_plugins), vim.log.levels.DEBUG) end local installed_set = {} - vim.notify("--- INSTALLED PLUGINS (" .. #installed_plugins .. ") ---", vim.log.levels.INFO) + if config.get('debug') then + vim.notify("--- INSTALLED PLUGINS (" .. #installed_plugins .. 
") ---", vim.log.levels.DEBUG) + end for _, plugin in ipairs(installed_plugins) do installed_set[plugin.name] = true -- vim.notify("Installed: '" .. plugin.name .. "' (added to set)", vim.log.levels.INFO) -- Removed per-plugin log @@ -169,13 +197,15 @@ function M.populate_plugins_buffer(bufnr) for _, plugin in ipairs(available_plugins) do table.insert(available_plugin_names, plugin.name) end - vim.notify("--- AVAILABLE PLUGINS (" .. #available_plugins .. ") ---\n" .. table.concat(available_plugin_names, ", "), - vim.log.levels.INFO) + if config.get('debug') then + vim.notify("--- AVAILABLE PLUGINS (" .. #available_plugins .. ") ---\n" .. table.concat(available_plugin_names, ", "), + vim.log.levels.DEBUG) + end local lines = { "# Plugin Management", "", - "Navigate: [M]odels [K]eys [F]ragments [T]emplates [S]chemas", + "Navigate: [M]odels [K]eys [F]ragments", "Actions: [i]nstall [x]uninstall [r]efresh [q]uit", "──────────────────────────────────────────────────────────────", "" @@ -197,7 +227,9 @@ function M.populate_plugins_buffer(bufnr) line_to_plugin[current_line] = plugin.name current_line = current_line + 1 end - vim.notify("Prepared " .. #lines .. " lines for the buffer.", vim.log.levels.INFO) + if config.get('debug') then + vim.notify("Prepared " .. #lines .. " lines for the buffer.", vim.log.levels.DEBUG) + end api.nvim_buf_set_lines(bufnr, 0, -1, false, lines) -- Apply syntax highlighting and line-specific highlights @@ -218,7 +250,9 @@ function M.populate_plugins_buffer(bufnr) vim.b[bufnr].line_to_plugin = line_to_plugin vim.b[bufnr].plugin_data = plugin_data - vim.notify("Finished populating plugins buffer.", vim.log.levels.INFO) + if config.get('debug') then + vim.notify("Finished populating plugins buffer.", vim.log.levels.DEBUG) + end return line_to_plugin, plugin_data -- Return for direct use if needed end @@ -307,13 +341,18 @@ function M.uninstall_plugin_under_cursor(bufnr) end function M.refresh_available_plugins(callback) - vim.notify("Refreshing available plugins...", vim.log.levels.INFO) + if config.get('debug') then + vim.notify("Refreshing available plugins...", vim.log.levels.DEBUG) + end cache.invalidate('available_plugins') cache.invalidate('installed_plugins') + cache.invalidate('models') -- Invalidate models cache when refreshing plugins -- Fetch in the background vim.defer_fn(function() local plugins = M.get_available_plugins() - vim.notify("Finished refreshing plugins: " .. #plugins .. " found.", vim.log.levels.INFO) + if config.get('debug') then + vim.notify("Finished refreshing plugins: " .. #plugins .. 
" found.", vim.log.levels.DEBUG) + end if callback then callback() end diff --git a/lua/llm/managers/schemas_manager.lua b/lua/llm/managers/schemas_manager.lua index c65dacd..0a004fd 100644 --- a/lua/llm/managers/schemas_manager.lua +++ b/lua/llm/managers/schemas_manager.lua @@ -382,7 +382,7 @@ function M.build_buffer_lines(schemas, show_named_only) local lines = { "# Schema Management", "", - "Navigate: [M]odels [P]lugins [K]eys [F]ragments [T]emplates", + "Navigate: [M]odels [P]lugins [K]eys [F]ragments", "Actions: [c]reate [r]un [v]iew [e]dit [a]lias [d]elete alias [t]oggle view [q]uit", "──────────────────────────────────────────────────────────────", "", diff --git a/lua/llm/managers/templates_manager.lua b/lua/llm/managers/templates_manager.lua index 528bcdd..0a4fdde 100644 --- a/lua/llm/managers/templates_manager.lua +++ b/lua/llm/managers/templates_manager.lua @@ -409,7 +409,10 @@ end function M.continue_template_creation_params(template) local params = M.extract_params(template) if #params > 0 then - vim.notify("Found parameters: " .. table.concat(params, ", "), vim.log.levels.INFO) + local config = require('llm.config') + if config.get('debug') then + vim.notify("Found parameters: " .. table.concat(params, ", "), vim.log.levels.DEBUG) + end M.set_param_defaults_loop(template, params, 1, function() M.continue_template_creation_extract(template) end) @@ -489,7 +492,7 @@ function M.build_buffer_data(templates) local lines = { "# Template Management", "", - "Navigate: [M]odels [P]lugins [K]eys [F]ragments [S]chemas", + "Navigate: [M]odels [P]lugins [K]eys [F]ragments", "Actions: [c]reate [r]un [e]dit [d]elete [v]iew details [q]uit", "──────────────────────────────────────────────────────────────", "" diff --git a/lua/llm/ui/unified_manager.lua b/lua/llm/ui/unified_manager.lua index 48b878d..3e0b7e8 100644 --- a/lua/llm/ui/unified_manager.lua +++ b/lua/llm/ui/unified_manager.lua @@ -55,18 +55,6 @@ local views = { title = "Fragments", manager_module = fragments_manager, }, - Templates = { - populate = templates_manager.populate_templates_buffer, - setup_keymaps = templates_manager.setup_templates_keymaps, - title = "Templates", - manager_module = templates_manager, - }, - Schemas = { - populate = schemas_manager.populate_schemas_buffer, - setup_keymaps = schemas_manager.setup_schemas_keymaps, - title = "Schemas", - manager_module = schemas_manager, - }, } -- Close the unified window @@ -107,8 +95,6 @@ local function setup_common_keymaps(bufnr) set_keymap('n', 'P', '<Cmd>lua require("llm.ui.unified_manager").switch_view("Plugins")<CR>') set_keymap('n', 'K', '<Cmd>lua require("llm.ui.unified_manager").switch_view("Keys")<CR>') set_keymap('n', 'F', '<Cmd>lua require("llm.ui.unified_manager").switch_view("Fragments")<CR>') - set_keymap('n', 'T', '<Cmd>lua require("llm.ui.unified_manager").switch_view("Templates")<CR>') - set_keymap('n', 'S', '<Cmd>lua require("llm.ui.unified_manager").switch_view("Schemas")<CR>') end -- Switch the view within the unified window diff --git a/plugin/llm.lua b/plugin/llm.lua index 5d60e6a..4591c1e 100644 --- a/plugin/llm.lua +++ b/plugin/llm.lua @@ -71,8 +71,6 @@ local command_handlers = { require('llm.commands').prompt_with_selection(prompt, nil, is_range, nil) end, explain = function() require('llm.commands').explain_code(nil, nil) end, - schema = function() require('llm.managers.schemas_manager').select_schema() end, - template = function() require('llm.managers.templates_manager').select_template() end, fragments = function() 
llm.interactive_prompt_with_fragments() end, update = manual_cli_update } @@ -108,8 +106,6 @@ end, { "file", -- :LLM file "selection", -- :LLM selection "explain", -- :LLM explain - "schema", -- :LLM schema - "template", -- :LLM template "fragments", -- :LLM fragments "update" -- :LLM update } @@ -149,6 +145,6 @@ vim.api.nvim_create_user_command('LLMConfig', function(opts) end, { nargs = '?', complete = function() - return { "Models", "Plugins", "Keys", "Fragments", "Templates", "Schemas" } + return { "Models", "Plugins", "Keys", "Fragments" } end }) diff --git a/scripts/bootstrap.py b/scripts/bootstrap.py new file mode 100644 index 0000000..e180f12 --- /dev/null +++ b/scripts/bootstrap.py @@ -0,0 +1,230 @@ +#!/usr/bin/env python3 +import os +import sys +import shutil +import subprocess + +SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__)) +REPO_ROOT = os.path.dirname(SCRIPT_DIR) +AGENTS_FILE = os.path.join(REPO_ROOT, "AGENTS.md") +CLAUDE_FILE = os.path.join(REPO_ROOT, "CLAUDE.md") +TEMPLATE_MAINTENANCE = os.path.join(REPO_ROOT, "templates", "maintenance_mode.md") + +STANDARD_HEADERS = [ + "Helper Scripts", + "Agent Interoperability", + "Step 1: Detect Repository State", + "Step 2: Execution Strategy", + "Step 3: Finalize & Switch to Maintenance Mode" +] + +PREAMBLE_IGNORE_PATTERNS = [ + "# AI Agent Bootstrap Instructions", + "# AI Agent Instructions", + "**CURRENT STATUS: BOOTSTRAPPING MODE**", + "You are an expert Software Architect", + "Your current goal is to bootstrap", +] + +def is_ignored_preamble_line(line): + l = line.strip() + # Keep empty lines to preserve spacing in custom content, + # but we will strip the final result to remove excess whitespace. + if not l: + return False + + for p in PREAMBLE_IGNORE_PATTERNS: + if p in l: + return True + return False + +def extract_custom_content(content): + lines = content.splitlines() + custom_sections = [] + preamble_lines = [] + current_header = None + current_lines = [] + + for line in lines: + if line.startswith("## "): + header = line[3:].strip() + + # Flush previous section + if current_header: + if current_header not in STANDARD_HEADERS: + custom_sections.append((current_header, "\n".join(current_lines))) + else: + # Capture preamble (lines before first header) + for l in current_lines: + if not is_ignored_preamble_line(l): + preamble_lines.append(l) + + current_header = header + current_lines = [] + else: + current_lines.append(line) + + # Flush last section + if current_header: + if current_header not in STANDARD_HEADERS: + custom_sections.append((current_header, "\n".join(current_lines))) + else: + # If no headers found, everything is preamble + for l in current_lines: + if not is_ignored_preamble_line(l): + preamble_lines.append(l) + + return "\n".join(preamble_lines).strip(), custom_sections + +def check_state(): + print("Repository Analysis:") + + # Check if already in maintenance mode + if os.path.exists(AGENTS_FILE): + with open(AGENTS_FILE, "r") as f: + content = f.read() + if "BOOTSTRAPPING MODE" not in content: + print("Status: MAINTENANCE MODE (AGENTS.md is already updated)") + print("To list tasks: python3 scripts/tasks.py list") + return + + files = [f for f in os.listdir(REPO_ROOT) if not f.startswith(".")] + print(f"Files in root: {len(files)}") + + if os.path.exists(os.path.join(REPO_ROOT, "src")) or os.path.exists(os.path.join(REPO_ROOT, "lib")) or os.path.exists(os.path.join(REPO_ROOT, ".git")): + print("Status: EXISTING REPOSITORY (Found src/, lib/, or .git/)") + else: + print("Status: NEW REPOSITORY 
(Likely)") + + # Check for hooks + hook_path = os.path.join(REPO_ROOT, ".git", "hooks", "pre-commit") + if not os.path.exists(hook_path): + print("\nTip: Run 'python3 scripts/tasks.py install-hooks' to enable safety checks.") + + print("\nNext Steps:") + print("1. Run 'python3 scripts/tasks.py init' to scaffold directories.") + print("2. Run 'python3 scripts/tasks.py create foundation \"Initial Setup\"' to track your work.") + print("3. Explore docs/architecture/ and docs/features/.") + print("4. When ready to switch to maintenance mode, run: python3 scripts/bootstrap.py finalize --interactive") + +def finalize(): + interactive = "--interactive" in sys.argv + print("Finalizing setup...") + if not os.path.exists(TEMPLATE_MAINTENANCE): + print(f"Error: Template {TEMPLATE_MAINTENANCE} not found.") + sys.exit(1) + + # Safety check + if os.path.exists(AGENTS_FILE): + with open(AGENTS_FILE, "r") as f: + content = f.read() + if "BOOTSTRAPPING MODE" not in content and "--force" not in sys.argv: + print("Error: AGENTS.md does not appear to be in bootstrapping mode.") + print("Use --force to overwrite anyway.") + sys.exit(1) + + # Ensure init is run + print("Ensuring directory structure...") + tasks_script = os.path.join(SCRIPT_DIR, "tasks.py") + try: + subprocess.check_call([sys.executable, tasks_script, "init"]) + except subprocess.CalledProcessError: + print("Error: Failed to initialize directories.") + sys.exit(1) + + # Analyze AGENTS.md for custom sections + custom_sections = [] + custom_preamble = "" + if os.path.exists(AGENTS_FILE): + try: + with open(AGENTS_FILE, "r") as f: + current_content = f.read() + custom_preamble, custom_sections = extract_custom_content(current_content) + except Exception as e: + print(f"Warning: Failed to parse AGENTS.md for custom sections: {e}") + + if interactive: + print("\n--- Merge Analysis ---") + if custom_preamble: + print("[PRESERVED] Custom Preamble (lines before first header)") + print(f" Snippet: {custom_preamble.splitlines()[0][:60]}...") + else: + print("[INFO] No custom preamble found.") + + if custom_sections: + print(f"[PRESERVED] {len(custom_sections)} Custom Sections:") + for header, _ in custom_sections: + print(f" - {header}") + else: + print("[INFO] No custom sections found.") + + print("\n[REPLACED] The following standard bootstrapping sections will be replaced by Maintenance Mode instructions:") + for header in STANDARD_HEADERS: + print(f" - {header}") + + print(f"\n[ACTION] AGENTS.md will be backed up to AGENTS.md.bak") + + try: + # Use input if available, but handle non-interactive environments + response = input("\nProceed with finalization? 
[y/N] ") + except EOFError: + response = "n" + + if response.lower() not in ["y", "yes"]: + print("Aborting.") + sys.exit(0) + + # Backup AGENTS.md + if os.path.exists(AGENTS_FILE): + backup_file = AGENTS_FILE + ".bak" + try: + shutil.copy2(AGENTS_FILE, backup_file) + print(f"Backed up AGENTS.md to {backup_file}") + if not custom_sections and not custom_preamble and not interactive: + print("IMPORTANT: If you added custom instructions to AGENTS.md, they are now in .bak") + print("Please review AGENTS.md.bak and merge any custom context into the new AGENTS.md manually.") + elif not interactive: + print(f"NOTE: Custom sections/preamble were preserved in the new AGENTS.md.") + print("Please review AGENTS.md.bak to ensure no other context was lost.") + except Exception as e: + print(f"Warning: Failed to backup AGENTS.md: {e}") + + # Read template + with open(TEMPLATE_MAINTENANCE, "r") as f: + content = f.read() + + # Prepend custom preamble + if custom_preamble: + content = custom_preamble + "\n\n" + content + + # Append custom sections + if custom_sections: + content += "\n" + for header, body in custom_sections: + content += f"\n## {header}\n{body}" + if not interactive: + print(f"Appended {len(custom_sections)} custom sections to new AGENTS.md") + + # Overwrite AGENTS.md + with open(AGENTS_FILE, "w") as f: + f.write(content) + + print(f"Updated {AGENTS_FILE} with maintenance instructions.") + + # Check CLAUDE.md symlink + if os.path.islink(CLAUDE_FILE): + print(f"{CLAUDE_FILE} is a symlink. Verified.") + else: + print(f"{CLAUDE_FILE} is NOT a symlink. Recreating it...") + if os.path.exists(CLAUDE_FILE): + os.remove(CLAUDE_FILE) + os.symlink("AGENTS.md", CLAUDE_FILE) + print("Symlink created.") + + print("\nBootstrapping Complete! The agent is now in Maintenance Mode.") + +if __name__ == "__main__": + if len(sys.argv) > 1 and sys.argv[1] == "finalize": + finalize() + else: + check_state() diff --git a/scripts/memory.py b/scripts/memory.py new file mode 100755 index 0000000..f82fef4 --- /dev/null +++ b/scripts/memory.py @@ -0,0 +1,239 @@ +#!/usr/bin/env python3 +import os +import sys +import argparse +import json +import datetime +import re + +# Determine the root directory of the repo +SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__)) +# Allow overriding root for testing, similar to tasks.py +REPO_ROOT = os.getenv("TASKS_REPO_ROOT", os.path.dirname(SCRIPT_DIR)) +MEMORY_DIR = os.path.join(REPO_ROOT, "docs", "memories") + +def init_memory(): + """Ensures the memory directory exists.""" + os.makedirs(MEMORY_DIR, exist_ok=True) + if not os.path.exists(os.path.join(MEMORY_DIR, ".keep")): + with open(os.path.join(MEMORY_DIR, ".keep"), "w") as f: + pass + +def slugify(text): + """Creates a URL-safe slug from text.""" + text = text.lower().strip() + return re.sub(r'[^a-z0-9-]', '-', text).strip('-') + +def create_memory(title, content, tags=None, output_format="text"): + init_memory() + tags = tags or [] + if isinstance(tags, str): + tags = [t.strip() for t in tags.split(",") if t.strip()] + + date_str = datetime.date.today().isoformat() + slug = slugify(title) + if not slug: + slug = "untitled" + + filename = f"{date_str}-{slug}.md" + filepath = os.path.join(MEMORY_DIR, filename) + + # Handle duplicates by appending counter + counter = 1 + while os.path.exists(filepath): + filename = f"{date_str}-{slug}-{counter}.md" + filepath = os.path.join(MEMORY_DIR, filename) + counter += 1 + + # Create Frontmatter + fm = f"""--- +date: {date_str} +title: "{title}" +tags: {json.dumps(tags)} +created: 
{datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")} +--- +""" + + full_content = fm + "\n" + content + "\n" + + try: + with open(filepath, "w") as f: + f.write(full_content) + + if output_format == "json": + print(json.dumps({ + "success": True, + "filepath": filepath, + "title": title, + "date": date_str + })) + else: + print(f"Created memory: {filepath}") + except Exception as e: + msg = f"Error creating memory: {e}" + if output_format == "json": + print(json.dumps({"error": msg})) + else: + print(msg) + sys.exit(1) + +def list_memories(tag=None, limit=20, output_format="text"): + if not os.path.exists(MEMORY_DIR): + if output_format == "json": + print(json.dumps([])) + else: + print("No memories found.") + return + + memories = [] + try: + files = [f for f in os.listdir(MEMORY_DIR) if f.endswith(".md") and f != ".keep"] + except FileNotFoundError: + files = [] + + for f in files: + path = os.path.join(MEMORY_DIR, f) + try: + with open(path, "r") as file: + content = file.read() + + # Extract basic info from frontmatter + title = "Unknown" + date = "Unknown" + tags = [] + + # Simple regex parsing to avoid YAML dependency + m_title = re.search(r'^title:\s*"(.*)"', content, re.MULTILINE) + if m_title: + title = m_title.group(1) + else: + # Fallback: unquoted title + m_title_uq = re.search(r'^title:\s*(.*)', content, re.MULTILINE) + if m_title_uq: title = m_title_uq.group(1).strip() + + m_date = re.search(r'^date:\s*(.*)', content, re.MULTILINE) + if m_date: date = m_date.group(1).strip() + + m_tags = re.search(r'^tags:\s*(\[.*\])', content, re.MULTILINE) + if m_tags: + try: + tags = json.loads(m_tags.group(1)) + except: + pass + + if tag and tag not in tags: + continue + + memories.append({ + "filename": f, + "title": title, + "date": date, + "tags": tags, + "path": path + }) + except Exception: + # Skip unreadable files + pass + + # Sort by date desc (filename usually works for YYYY-MM-DD prefix) + memories.sort(key=lambda x: x["filename"], reverse=True) + memories = memories[:limit] + + if output_format == "json": + print(json.dumps(memories)) + else: + if not memories: + print("No memories found.") + return + + print(f"{'Date':<12} {'Title'}") + print("-" * 50) + for m in memories: + print(f"{m['date']:<12} {m['title']}") + +def read_memory(filename, output_format="text"): + path = os.path.join(MEMORY_DIR, filename) + if not os.path.exists(path): + # Try finding by partial match if not exact + if os.path.exists(MEMORY_DIR): + matches = [f for f in os.listdir(MEMORY_DIR) if filename in f and f.endswith(".md")] + if len(matches) == 1: + path = os.path.join(MEMORY_DIR, matches[0]) + elif len(matches) > 1: + msg = f"Error: Ambiguous memory identifier '{filename}'. Matches: {', '.join(matches)}" + if output_format == "json": + print(json.dumps({"error": msg})) + else: + print(msg) + sys.exit(1) + else: + msg = f"Error: Memory file '{filename}' not found." + if output_format == "json": + print(json.dumps({"error": msg})) + else: + print(msg) + sys.exit(1) + else: + msg = f"Error: Memory directory does not exist." 
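As an aside on `scripts/memory.py`: every subcommand above accepts `--format json`, so other tooling can sit on top of it without scraping its text output. A minimal sketch of that pattern, assuming it is run from the repository root and that the subcommands and flags stay as defined in this file:

```python
#!/usr/bin/env python3
"""Sketch: drive scripts/memory.py through its JSON output (illustrative only)."""
import json
import subprocess
import sys

def run_memory(*args):
    # memory.py prints a single JSON document when --format json is passed.
    result = subprocess.run(
        [sys.executable, "scripts/memory.py", *args, "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

if __name__ == "__main__":
    # Record a long-term note, then list everything tagged "lua".
    run_memory("create", "Debug logging convention",
               "Gate vim.notify calls behind config.get('debug') and use DEBUG level.",
               "--tags", "lua,logging")
    for memory in run_memory("list", "--tag", "lua"):
        print(memory["date"], memory["title"])
```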
+ if output_format == "json": + print(json.dumps({"error": msg})) + else: + print(msg) + sys.exit(1) + + try: + with open(path, "r") as f: + content = f.read() + + if output_format == "json": + print(json.dumps({"filename": os.path.basename(path), "content": content})) + else: + print(content) + except Exception as e: + msg = f"Error reading file: {e}" + if output_format == "json": + print(json.dumps({"error": msg})) + else: + print(msg) + sys.exit(1) + +def main(): + # Common argument for format + parent_parser = argparse.ArgumentParser(add_help=False) + parent_parser.add_argument("--format", choices=["text", "json"], default="text", help="Output format") + + parser = argparse.ArgumentParser(description="Manage long-term memories") + + subparsers = parser.add_subparsers(dest="command") + + # Create + create_parser = subparsers.add_parser("create", parents=[parent_parser], help="Create a new memory") + create_parser.add_argument("title", help="Title of the memory") + create_parser.add_argument("content", help="Content of the memory") + create_parser.add_argument("--tags", help="Comma-separated tags") + + # List + list_parser = subparsers.add_parser("list", parents=[parent_parser], help="List memories") + list_parser.add_argument("--tag", help="Filter by tag") + list_parser.add_argument("--limit", type=int, default=20, help="Max results") + + # Read + read_parser = subparsers.add_parser("read", parents=[parent_parser], help="Read a memory") + read_parser.add_argument("filename", help="Filename or slug part") + + args = parser.parse_args() + + # Default format to text if not present (though parents default handles it) + fmt = getattr(args, "format", "text") + + if args.command == "create": + create_memory(args.title, args.content, args.tags, fmt) + elif args.command == "list": + list_memories(args.tag, args.limit, fmt) + elif args.command == "read": + read_memory(args.filename, fmt) + else: + parser.print_help() + +if __name__ == "__main__": + main() diff --git a/scripts/tasks b/scripts/tasks new file mode 100755 index 0000000..9c4d703 --- /dev/null +++ b/scripts/tasks @@ -0,0 +1,15 @@ +#!/bin/bash + +# Wrapper for tasks.py to ensure Python 3 is available + +if ! command -v python3 &> /dev/null; then + echo "Error: Python 3 is not installed or not in PATH." + echo "Please install Python 3 to use the task manager." 
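One note on this wrapper: because `tasks.py` (added below) gives every subcommand a `--format json` flag, the wrapper doubles as a small machine-readable interface. A sketch of how an agent or script might consume it, assuming it runs from the repository root with the wrapper kept executable:

```python
#!/usr/bin/env python3
"""Sketch: consume the task CLI's JSON output (illustrative only)."""
import json
import subprocess

def tasks(*args):
    # The wrapper execs tasks.py, which prints one JSON document per call.
    result = subprocess.run(
        ["./scripts/tasks", *args, "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

if __name__ == "__main__":
    pending = tasks("list", "--status", "pending")
    print(f"{len(pending)} pending task(s)")

    suggestion = tasks("next")
    if "id" in suggestion:
        # Claim the recommended task before starting work on it.
        tasks("update", suggestion["id"], "in_progress")
```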
+ exit 1 +fi + +# Get the directory of this script +SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" + +# Execute tasks.py +exec python3 "$SCRIPT_DIR/tasks.py" "$@" diff --git a/scripts/tasks.py b/scripts/tasks.py new file mode 100755 index 0000000..a585378 --- /dev/null +++ b/scripts/tasks.py @@ -0,0 +1,949 @@ +#!/usr/bin/env python3 +import os +import sys +import shutil +import argparse +import re +import json +import random +import string +from datetime import datetime + +# Determine the root directory of the repo +# Assumes this script is in scripts/ +SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__)) +REPO_ROOT = os.getenv("TASKS_REPO_ROOT", os.path.dirname(SCRIPT_DIR)) +DOCS_DIR = os.path.join(REPO_ROOT, "docs", "tasks") +TEMPLATES_DIR = os.path.join(REPO_ROOT, "templates") + +CATEGORIES = [ + "foundation", + "infrastructure", + "domain", + "presentation", + "migration", + "features", + "testing", + "review", + "security", +] + +VALID_STATUSES = [ + "pending", + "in_progress", + "wip_blocked", + "review_requested", + "verified", + "completed", + "blocked", + "cancelled", + "deferred" +] + +VALID_TYPES = [ + "epic", + "story", + "task", + "bug" +] + +ARCHIVE_DIR_NAME = "archive" + +def init_docs(): + """Scaffolds the documentation directory structure.""" + print("Initializing documentation structure...") + + # Create docs/tasks/ directories + for category in CATEGORIES: + path = os.path.join(DOCS_DIR, category) + os.makedirs(path, exist_ok=True) + # Create .keep file to ensure git tracks the directory + with open(os.path.join(path, ".keep"), "w") as f: + pass + + # Copy GUIDE.md if missing + guide_path = os.path.join(DOCS_DIR, "GUIDE.md") + guide_template = os.path.join(TEMPLATES_DIR, "GUIDE.md") + if not os.path.exists(guide_path) and os.path.exists(guide_template): + shutil.copy(guide_template, guide_path) + print(f"Created {guide_path}") + + # Create other doc directories + for doc_type in ["architecture", "features", "security"]: + path = os.path.join(REPO_ROOT, "docs", doc_type) + os.makedirs(path, exist_ok=True) + readme_path = os.path.join(path, "README.md") + if not os.path.exists(readme_path): + if doc_type == "security": + content = """# Security Documentation + +Use this section to document security considerations, risks, and mitigations. 
+ +## Risk Assessment +* [ ] Threat Model +* [ ] Data Privacy + +## Compliance +* [ ] Requirements + +## Secrets Management +* [ ] Policy +""" + else: + content = f"# {doc_type.capitalize()} Documentation\n\nAdd {doc_type} documentation here.\n" + + with open(readme_path, "w") as f: + f.write(content) + + # Create memories directory + memories_path = os.path.join(REPO_ROOT, "docs", "memories") + os.makedirs(memories_path, exist_ok=True) + if not os.path.exists(os.path.join(memories_path, ".keep")): + with open(os.path.join(memories_path, ".keep"), "w") as f: + pass + + print(f"Created directories in {os.path.join(REPO_ROOT, 'docs')}") + +def generate_task_id(category): + """Generates a timestamp-based ID to avoid collisions.""" + timestamp = datetime.now().strftime("%Y%m%d-%H%M%S") + suffix = ''.join(random.choices(string.ascii_uppercase, k=3)) + return f"{category.upper()}-{timestamp}-{suffix}" + +def extract_frontmatter(content): + """Extracts YAML frontmatter if present.""" + # Check if it starts with --- + if not re.match(r"^\s*---\s*(\n|$)", content): + return None, content + + # Find the second --- + lines = content.splitlines(keepends=True) + if not lines: + return None, content + + yaml_lines = [] + body_start_idx = -1 + + # Skip the first line (delimiter) + for i, line in enumerate(lines[1:], 1): + if re.match(r"^\s*---\s*(\n|$)", line): + body_start_idx = i + 1 + break + yaml_lines.append(line) + + if body_start_idx == -1: + # No closing delimiter found + return None, content + + yaml_block = "".join(yaml_lines) + body = "".join(lines[body_start_idx:]) + + data = {} + for line in yaml_block.splitlines(): + line = line.strip() + if not line or line.startswith("#"): + continue + if ":" in line: + key, val = line.split(":", 1) + data[key.strip()] = val.strip() + + return data, body + +def parse_task_content(content, filepath=None): + """Parses task markdown content into a dictionary.""" + + # Try Frontmatter first + frontmatter, body = extract_frontmatter(content) + if frontmatter: + deps_str = frontmatter.get("dependencies") or "" + deps = [d.strip() for d in deps_str.split(",") if d.strip()] + + return { + "id": frontmatter.get("id", "unknown"), + "status": frontmatter.get("status", "unknown"), + "title": frontmatter.get("title", "No Title"), + "priority": frontmatter.get("priority", "medium"), + "type": frontmatter.get("type", "task"), + "sprint": frontmatter.get("sprint", ""), + "estimate": frontmatter.get("estimate", ""), + "dependencies": deps, + "filepath": filepath, + "content": content + } + + # Fallback to Legacy Regex Parsing + id_match = re.search(r"\*\*Task ID\*\*: ([\w-]+)", content) + status_match = re.search(r"\*\*Status\*\*: ([\w_]+)", content) + title_match = re.search(r"# Task: (.+)", content) + priority_match = re.search(r"\*\*Priority\*\*: ([\w]+)", content) + + task_id = id_match.group(1) if id_match else "unknown" + status = status_match.group(1) if status_match else "unknown" + title = title_match.group(1).strip() if title_match else "No Title" + priority = priority_match.group(1) if priority_match else "unknown" + + return { + "id": task_id, + "status": status, + "title": title, + "priority": priority, + "type": "task", + "sprint": "", + "estimate": "", + "dependencies": [], + "filepath": filepath, + "content": content + } + +def create_task(category, title, description, priority="medium", status="pending", dependencies=None, task_type="task", sprint="", estimate="", output_format="text"): + if category not in CATEGORIES: + msg = f"Error: Category 
'{category}' not found. Available: {', '.join(CATEGORIES)}" + if output_format == "json": + print(json.dumps({"error": msg})) + else: + print(msg) + sys.exit(1) + + task_id = generate_task_id(category) + + slug = title.lower().replace(" ", "-") + # Sanitize slug + slug = re.sub(r'[^a-z0-9-]', '', slug) + filename = f"{task_id}-{slug}.md" + filepath = os.path.join(DOCS_DIR, category, filename) + + # New YAML Frontmatter Format + deps_str = "" + if dependencies: + deps_str = ", ".join(dependencies) + + extra_fm = "" + if task_type: + extra_fm += f"type: {task_type}\n" + if sprint: + extra_fm += f"sprint: {sprint}\n" + if estimate: + extra_fm += f"estimate: {estimate}\n" + + content = f"""--- +id: {task_id} +status: {status} +title: {title} +priority: {priority} +created: {datetime.now().strftime("%Y-%m-%d %H:%M:%S")} +category: {category} +dependencies: {deps_str} +{extra_fm}--- + +# {title} + +{description} +""" + + os.makedirs(os.path.dirname(filepath), exist_ok=True) + with open(filepath, "w") as f: + f.write(content) + + if output_format == "json": + print(json.dumps({ + "id": task_id, + "title": title, + "filepath": filepath, + "status": status, + "priority": priority, + "type": task_type + })) + else: + print(f"Created task: {filepath}") + +def find_task_file(task_id): + """Finds the file path for a given task ID.""" + task_id = task_id.upper() + + # Optimization: Check if ID starts with a known category + parts = task_id.split('-') + if len(parts) > 1: + category = parts[0].lower() + if category in CATEGORIES: + category_dir = os.path.join(DOCS_DIR, category) + if os.path.exists(category_dir): + for file in os.listdir(category_dir): + if file.startswith(task_id) and file.endswith(".md"): + return os.path.join(category_dir, file) + # Fallback to full search if not found in expected category (e.g. moved to archive) + + for root, _, files in os.walk(DOCS_DIR): + for file in files: + # Match strictly on ID at start of filename or substring + # New ID: FOUNDATION-2023... + # Old ID: FOUNDATION-001 + if file.startswith(task_id) and file.endswith(".md"): + return os.path.join(root, file) + return None + +def show_task(task_id, output_format="text"): + filepath = find_task_file(task_id) + if not filepath: + msg = f"Error: Task ID {task_id} not found." + if output_format == "json": + print(json.dumps({"error": msg})) + else: + print(msg) + sys.exit(1) + + try: + with open(filepath, "r") as f: + content = f.read() + + if output_format == "json": + task_data = parse_task_content(content, filepath) + print(json.dumps(task_data)) + else: + print(content) + except Exception as e: + msg = f"Error reading file: {e}" + if output_format == "json": + print(json.dumps({"error": msg})) + else: + print(msg) + sys.exit(1) + +def delete_task(task_id, output_format="text"): + filepath = find_task_file(task_id) + if not filepath: + msg = f"Error: Task ID {task_id} not found." + if output_format == "json": + print(json.dumps({"error": msg})) + else: + print(msg) + sys.exit(1) + + try: + os.remove(filepath) + if output_format == "json": + print(json.dumps({"success": True, "id": task_id, "message": "Deleted task"})) + else: + print(f"Deleted task: {task_id}") + except Exception as e: + msg = f"Error deleting file: {e}" + if output_format == "json": + print(json.dumps({"error": msg})) + else: + print(msg) + sys.exit(1) + +def archive_task(task_id, output_format="text"): + filepath = find_task_file(task_id) + if not filepath: + msg = f"Error: Task ID {task_id} not found." 
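For orientation while reading `create_task` and `find_task_file`: the ID and filename convention can be restated in a few lines. This is a local re-derivation for illustration (the sample title is made up), not a replacement for the helpers in this script:

```python
"""Sketch: the task ID / filename convention used by scripts/tasks.py."""
import random
import re
import string
from datetime import datetime

def task_id(category: str) -> str:
    # e.g. FEATURES-20250521-103000-QXZ: timestamp plus a random suffix.
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    suffix = "".join(random.choices(string.ascii_uppercase, k=3))
    return f"{category.upper()}-{stamp}-{suffix}"

def task_filename(tid: str, title: str) -> str:
    # Slug: lowercase, spaces to dashes, then drop anything outside [a-z0-9-].
    slug = re.sub(r"[^a-z0-9-]", "", title.lower().replace(" ", "-"))
    return f"{tid}-{slug}.md"

if __name__ == "__main__":
    tid = task_id("features")
    # find_task_file() resolves a task by prefix-matching the filename,
    # so the ID has to stay at the front of the name.
    print(task_filename(tid, "Add fragment picker!"))
```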
+ if output_format == "json": + print(json.dumps({"error": msg})) + else: + print(msg) + sys.exit(1) + + try: + archive_dir = os.path.join(DOCS_DIR, ARCHIVE_DIR_NAME) + os.makedirs(archive_dir, exist_ok=True) + filename = os.path.basename(filepath) + new_filepath = os.path.join(archive_dir, filename) + + os.rename(filepath, new_filepath) + + if output_format == "json": + print(json.dumps({"success": True, "id": task_id, "message": "Archived task", "new_path": new_filepath})) + else: + print(f"Archived task: {task_id} -> {new_filepath}") + + except Exception as e: + msg = f"Error archiving task: {e}" + if output_format == "json": + print(json.dumps({"error": msg})) + else: + print(msg) + sys.exit(1) + +def migrate_to_frontmatter(content, task_data): + """Converts legacy content to Frontmatter format.""" + # Strip the header section from legacy content + + body = content + if "## Task Details" in content: + parts = content.split("## Task Details") + if len(parts) > 1: + body = parts[1].strip() + + description = body + # Remove footer + if "*Created:" in description: + description = description.split("---")[0].strip() + + # Check for extra keys in task_data that might need preservation + extra_fm = "" + if task_data.get("type"): extra_fm += f"type: {task_data['type']}\n" + if task_data.get("sprint"): extra_fm += f"sprint: {task_data['sprint']}\n" + if task_data.get("estimate"): extra_fm += f"estimate: {task_data['estimate']}\n" + + new_content = f"""--- +id: {task_data['id']} +status: {task_data['status']} +title: {task_data['title']} +priority: {task_data['priority']} +created: {datetime.now().strftime("%Y-%m-%d %H:%M:%S")} +category: unknown +{extra_fm}--- + +# {task_data['title']} + +{description} +""" + return new_content + +def update_task_status(task_id, new_status, output_format="text"): + if new_status not in VALID_STATUSES: + msg = f"Error: Invalid status '{new_status}'. Valid statuses: {', '.join(VALID_STATUSES)}" + if output_format == "json": + print(json.dumps({"error": msg})) + else: + print(msg) + sys.exit(1) + + filepath = find_task_file(task_id) + if not filepath: + msg = f"Error: Task ID {task_id} not found." 
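Since `update_task_status` rewrites the `status:` line in place, it helps to see the frontmatter shape it expects. A toy sketch with made-up sample data; the real parser is `extract_frontmatter()` earlier in this script:

```python
"""Sketch: parsing the key/value frontmatter block of a task file."""

SAMPLE = """---
id: FEATURES-20250521-103000-QXZ
status: pending
title: Add fragment picker
priority: medium
created: 2025-05-21 10:30:00
category: features
dependencies: FOUNDATION-20250520-090000-ABC
---

# Add fragment picker

Details go here.
"""

def split_frontmatter(text):
    # Same idea as extract_frontmatter(): everything between the first two
    # `---` delimiters is read as simple `key: value` pairs.
    _, block, body = text.split("---", 2)
    data = {}
    for line in block.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            data[key.strip()] = value.strip()
    return data, body

if __name__ == "__main__":
    meta, _ = split_frontmatter(SAMPLE)
    print(meta["id"], meta["status"])  # FEATURES-20250521-103000-QXZ pending
```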
+ if output_format == "json": + print(json.dumps({"error": msg})) + else: + print(msg) + sys.exit(1) + + with open(filepath, "r") as f: + content = f.read() + + frontmatter, body = extract_frontmatter(content) + + if frontmatter: + # Update Frontmatter + lines = content.splitlines() + new_lines = [] + in_fm = False + updated = False + + # Simple finite state machine for update + for line in lines: + if re.match(r"^\s*---\s*$", line): + if not in_fm: + in_fm = True + new_lines.append(line) + continue + else: + in_fm = False + new_lines.append(line) + continue + + match = re.match(r"^(\s*)status:", line) + if in_fm and match: + indent = match.group(1) + new_lines.append(f"{indent}status: {new_status}") + updated = True + else: + new_lines.append(line) + + new_content = "\n".join(new_lines) + "\n" + + else: + # Legacy Format: Migrate on Update + task_data = parse_task_content(content, filepath) + task_data['status'] = new_status # Set new status + new_content = migrate_to_frontmatter(content, task_data) + if output_format == "text": + print(f"Migrated task {task_id} to new format.") + + with open(filepath, "w") as f: + f.write(new_content) + + if output_format == "json": + print(json.dumps({"success": True, "id": task_id, "status": new_status})) + else: + print(f"Updated {task_id} status to {new_status}") + + +def list_tasks(status=None, category=None, sprint=None, include_archived=False, output_format="text"): + tasks = [] + + for root, dirs, files in os.walk(DOCS_DIR): + rel_path = os.path.relpath(root, DOCS_DIR) + + # Exclude archive unless requested + if not include_archived: + if rel_path == ARCHIVE_DIR_NAME or rel_path.startswith(ARCHIVE_DIR_NAME + os.sep): + continue + + # Filter by category if provided + if category: + if rel_path != category and not rel_path.startswith(category + os.sep): + continue + + for file in files: + if not file.endswith(".md") or file in ["GUIDE.md", "README.md"]: + continue + + path = os.path.join(root, file) + try: + with open(path, "r") as f: + content = f.read() + except Exception as e: + if output_format == "text": + print(f"Error reading {path}: {e}") + continue + + # Parse content + task = parse_task_content(content, path) + + # Skip files that don't look like tasks (no ID) + if task["id"] == "unknown": + continue + + if status and status.lower() != task["status"].lower(): + continue + + if sprint and sprint != task.get("sprint"): + continue + + tasks.append(task) + + if output_format == "json": + summary = [{k: v for k, v in t.items() if k != 'content'} for t in tasks] + print(json.dumps(summary)) + else: + # Adjust width for ID to handle longer IDs + print(f"{'ID':<25} {'Status':<20} {'Type':<8} {'Title'}") + print("-" * 85) + for t in tasks: + t_type = t.get("type", "task")[:8] + print(f"{t['id']:<25} {t['status']:<20} {t_type:<8} {t['title']}") + +def get_context(output_format="text"): + """Lists tasks that are currently in progress.""" + if output_format == "text": + print("Current Context (in_progress):") + list_tasks(status="in_progress", output_format=output_format) + +def migrate_all(): + """Migrates all legacy tasks to Frontmatter format.""" + print("Migrating tasks to Frontmatter format...") + count = 0 + for root, dirs, files in os.walk(DOCS_DIR): + for file in files: + if not file.endswith(".md") or file in ["GUIDE.md", "README.md"]: + continue + + path = os.path.join(root, file) + with open(path, "r") as f: + content = f.read() + + if content.startswith("---\n") or content.startswith("--- "): + continue # Already migrated (simple check) + 
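For reference, the legacy layout this loop migrates away from is the one the regex fallback in `parse_task_content()` still understands. A compact illustration based on the old CRITICAL-007 header shown earlier in this change:

```python
"""Sketch: the legacy header fields the migration path recognises."""
import re

LEGACY = """# Task: Increase Code Coverage to 70%

## Task Information
- **Task ID**: CRITICAL-007
- **Status**: in_progress
- **Priority**: High (P1)
"""

def grab(pattern, content):
    match = re.search(pattern, content)
    return match.group(1) if match else None

def parse_legacy(content):
    # Mirrors the regex fallback in parse_task_content().
    return {
        "id": grab(r"\*\*Task ID\*\*: ([\w-]+)", content),
        "status": grab(r"\*\*Status\*\*: ([\w_]+)", content),
        "title": grab(r"# Task: (.+)", content),
        "priority": grab(r"\*\*Priority\*\*: ([\w]+)", content),
    }

if __name__ == "__main__":
    print(parse_legacy(LEGACY))
    # {'id': 'CRITICAL-007', 'status': 'in_progress', ...}
```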
+ task_data = parse_task_content(content, path) + if task_data['id'] == "unknown": + continue + + new_content = migrate_to_frontmatter(content, task_data) + with open(path, "w") as f: + f.write(new_content) + + print(f"Migrated {task_data['id']}") + count += 1 + + print(f"Migration complete. {count} tasks updated.") + +def validate_all(output_format="text"): + """Validates all task files.""" + errors = [] + all_tasks = {} # id -> {path, deps} + + # Pass 1: Parse and Basic Validation + for root, dirs, files in os.walk(DOCS_DIR): + for file in files: + if not file.endswith(".md") or file in ["GUIDE.md", "README.md"]: + continue + path = os.path.join(root, file) + try: + with open(path, "r") as f: + content = f.read() + + # Check 1: Frontmatter exists + frontmatter, body = extract_frontmatter(content) + if not frontmatter: + errors.append(f"{file}: Missing valid frontmatter") + continue + + # Check 2: Required fields + required_fields = ["id", "status", "title", "created"] + missing = [field for field in required_fields if field not in frontmatter] + if missing: + errors.append(f"{file}: Missing required fields: {', '.join(missing)}") + continue + + task_id = frontmatter["id"] + + # Check 3: Valid Status + if "status" in frontmatter and frontmatter["status"] not in VALID_STATUSES: + errors.append(f"{file}: Invalid status '{frontmatter['status']}'") + + # Check 4: Valid Type + if "type" in frontmatter and frontmatter["type"] not in VALID_TYPES: + errors.append(f"{file}: Invalid type '{frontmatter['type']}'") + + # Parse dependencies + deps_str = frontmatter.get("dependencies") or "" + deps = [d.strip() for d in deps_str.split(",") if d.strip()] + + # Check for Duplicate IDs + if task_id in all_tasks: + errors.append(f"{file}: Duplicate Task ID '{task_id}' (also in {all_tasks[task_id]['path']})") + + all_tasks[task_id] = {"path": path, "deps": deps} + + except Exception as e: + errors.append(f"{file}: Error reading/parsing: {str(e)}") + + # Pass 2: Dependency Validation & Cycle Detection + visited = set() + recursion_stack = set() + + def detect_cycle(curr_id, path): + visited.add(curr_id) + recursion_stack.add(curr_id) + + if curr_id in all_tasks: + for dep_id in all_tasks[curr_id]["deps"]: + # Dependency Existence Check + if dep_id not in all_tasks: + # This will be caught in the loop below, but we need to handle it here to avoid error + continue + + if dep_id not in visited: + if detect_cycle(dep_id, path + [dep_id]): + return True + elif dep_id in recursion_stack: + path.append(dep_id) + return True + + recursion_stack.remove(curr_id) + return False + + for task_id, info in all_tasks.items(): + # Check dependencies exist + for dep_id in info["deps"]: + if dep_id not in all_tasks: + errors.append(f"{os.path.basename(info['path'])}: Invalid dependency '{dep_id}' (task not found)") + + # Check cycles + if task_id not in visited: + cycle_path = [task_id] + if detect_cycle(task_id, cycle_path): + errors.append(f"Circular dependency detected: {' -> '.join(cycle_path)}") + + if output_format == "json": + print(json.dumps({"valid": len(errors) == 0, "errors": errors})) + else: + if not errors: + print("All tasks validated successfully.") + else: + print(f"Found {len(errors)} errors:") + for err in errors: + print(f" - {err}") + sys.exit(1) + +def visualize_tasks(output_format="text"): + """Generates a Mermaid diagram of task dependencies.""" + tasks = [] + # Collect all tasks + for root, dirs, files in os.walk(DOCS_DIR): + for file in files: + if not file.endswith(".md") or file in ["GUIDE.md", 
"README.md"]: + continue + path = os.path.join(root, file) + try: + with open(path, "r") as f: + content = f.read() + task = parse_task_content(content, path) + if task["id"] != "unknown": + tasks.append(task) + except: + pass + + if output_format == "json": + nodes = [{"id": t["id"], "title": t["title"], "status": t["status"]} for t in tasks] + edges = [] + for t in tasks: + for dep in t.get("dependencies", []): + edges.append({"from": dep, "to": t["id"]}) + print(json.dumps({"nodes": nodes, "edges": edges})) + return + + # Mermaid Output + print("graph TD") + + status_colors = { + "completed": "#90EE90", + "verified": "#90EE90", + "in_progress": "#ADD8E6", + "review_requested": "#FFFACD", + "wip_blocked": "#FFB6C1", + "blocked": "#FF7F7F", + "pending": "#D3D3D3", + "deferred": "#A9A9A9", + "cancelled": "#696969" + } + + # Nodes + for t in tasks: + # Sanitize title for label + safe_title = t["title"].replace('"', '').replace('[', '').replace(']', '') + print(f' {t["id"]}["{t["id"]}: {safe_title}"]') + + # Style + color = status_colors.get(t["status"], "#FFFFFF") + print(f" style {t['id']} fill:{color},stroke:#333,stroke-width:2px") + + # Edges + for t in tasks: + deps = t.get("dependencies", []) + for dep in deps: + print(f" {dep} --> {t['id']}") + +def get_next_task(output_format="text"): + """Identifies the next best task to work on.""" + # 1. Collect all tasks + all_tasks = {} + for root, _, files in os.walk(DOCS_DIR): + for file in files: + if not file.endswith(".md") or file in ["GUIDE.md", "README.md"]: + continue + path = os.path.join(root, file) + try: + with open(path, "r") as f: + content = f.read() + task = parse_task_content(content, path) + if task["id"] != "unknown": + all_tasks[task["id"]] = task + except: + pass + + candidates = [] + + # Priority mapping + prio_score = {"high": 3, "medium": 2, "low": 1, "unknown": 1} + + for tid, task in all_tasks.items(): + # Filter completed + if task["status"] in ["completed", "verified", "cancelled", "deferred", "blocked"]: + continue + + # Check dependencies + deps = task.get("dependencies", []) + blocked = False + for dep_id in deps: + if dep_id not in all_tasks: + blocked = True # Missing dependency + break + + dep_status = all_tasks[dep_id]["status"] + if dep_status not in ["completed", "verified"]: + blocked = True + break + + if blocked: + continue + + # Calculate Score + score = 0 + + # Status Bonus + if task["status"] == "in_progress": + score += 1000 + elif task["status"] == "pending": + score += 100 + elif task["status"] == "wip_blocked": + # Unblocked now + score += 500 + + # Priority + score += prio_score.get(task.get("priority", "medium"), 1) * 10 + + # Sprint Bonus + if task.get("sprint"): + score += 50 + + # Type Bonus (Stories/Bugs > Tasks > Epics) + t_type = task.get("type", "task") + if t_type in ["story", "bug"]: + score += 20 + elif t_type == "task": + score += 10 + + candidates.append((score, task)) + + candidates.sort(key=lambda x: x[0], reverse=True) + + if not candidates: + msg = "No suitable tasks found (all completed or blocked)." 
+ if output_format == "json": + print(json.dumps({"message": msg})) + else: + print(msg) + return + + best = candidates[0][1] + + if output_format == "json": + print(json.dumps(best)) + else: + print(f"Recommended Next Task (Score: {candidates[0][0]}):") + print(f"ID: {best['id']}") + print(f"Title: {best['title']}") + print(f"Status: {best['status']}") + print(f"Priority: {best['priority']}") + print(f"Type: {best.get('type', 'task')}") + if best.get("sprint"): + print(f"Sprint: {best.get('sprint')}") + print(f"\nRun: scripts/tasks show {best['id']}") + +def install_hooks(): + """Installs the git pre-commit hook.""" + hook_path = os.path.join(REPO_ROOT, ".git", "hooks", "pre-commit") + if not os.path.exists(os.path.join(REPO_ROOT, ".git")): + print("Error: Not a git repository.") + sys.exit(1) + + script_path = os.path.relpath(os.path.abspath(__file__), REPO_ROOT) + + hook_content = f"""#!/bin/sh +# Auto-generated by scripts/tasks.py +echo "Running task validation..." +python3 {script_path} validate --format text +""" + + try: + with open(hook_path, "w") as f: + f.write(hook_content) + os.chmod(hook_path, 0o755) + print(f"Installed pre-commit hook at {hook_path}") + except Exception as e: + print(f"Error installing hook: {e}") + sys.exit(1) + +def main(): + parser = argparse.ArgumentParser(description="Manage development tasks") + + # Common argument for format + parent_parser = argparse.ArgumentParser(add_help=False) + parent_parser.add_argument("--format", choices=["text", "json"], default="text", help="Output format") + + subparsers = parser.add_subparsers(dest="command", help="Command to run") + + # Init + subparsers.add_parser("init", help="Initialize documentation structure") + + # Create + create_parser = subparsers.add_parser("create", parents=[parent_parser], help="Create a new task") + create_parser.add_argument("category", choices=CATEGORIES, help="Task category") + create_parser.add_argument("title", help="Task title") + create_parser.add_argument("--desc", default="To be determined", help="Task description") + create_parser.add_argument("--priority", default="medium", help="Task priority") + create_parser.add_argument("--status", choices=VALID_STATUSES, default="pending", help="Task status") + create_parser.add_argument("--dependencies", help="Comma-separated list of task IDs this task depends on") + create_parser.add_argument("--type", choices=VALID_TYPES, default="task", help="Task type") + create_parser.add_argument("--sprint", default="", help="Sprint name/ID") + create_parser.add_argument("--estimate", default="", help="Estimate (points/size)") + + # List + list_parser = subparsers.add_parser("list", parents=[parent_parser], help="List tasks") + list_parser.add_argument("--status", help="Filter by status") + list_parser.add_argument("--category", choices=CATEGORIES, help="Filter by category") + list_parser.add_argument("--sprint", help="Filter by sprint") + list_parser.add_argument("--archived", action="store_true", help="Include archived tasks") + + # Show + show_parser = subparsers.add_parser("show", parents=[parent_parser], help="Show task details") + show_parser.add_argument("task_id", help="Task ID (e.g., FOUNDATION-001)") + + # Update + update_parser = subparsers.add_parser("update", parents=[parent_parser], help="Update task status") + update_parser.add_argument("task_id", help="Task ID (e.g., FOUNDATION-001)") + update_parser.add_argument("status", help=f"New status: {', '.join(VALID_STATUSES)}") + + # Delete + delete_parser = subparsers.add_parser("delete", 
parents=[parent_parser], help="Delete a task") + delete_parser.add_argument("task_id", help="Task ID (e.g., FOUNDATION-001)") + + # Archive + archive_parser = subparsers.add_parser("archive", parents=[parent_parser], help="Archive a task") + archive_parser.add_argument("task_id", help="Task ID") + + # Context + subparsers.add_parser("context", parents=[parent_parser], help="Show current context (in_progress tasks)") + + # Next + subparsers.add_parser("next", parents=[parent_parser], help="Suggest the next task to work on") + + # Migrate + subparsers.add_parser("migrate", parents=[parent_parser], help="Migrate legacy tasks to new format") + + # Complete + complete_parser = subparsers.add_parser("complete", parents=[parent_parser], help="Mark a task as completed") + complete_parser.add_argument("task_id", help="Task ID (e.g., FOUNDATION-001)") + + # Validate + subparsers.add_parser("validate", parents=[parent_parser], help="Validate task files") + + # Visualize + subparsers.add_parser("visualize", parents=[parent_parser], help="Visualize task dependencies (Mermaid)") + + # Install Hooks + subparsers.add_parser("install-hooks", parents=[parent_parser], help="Install git hooks") + + args = parser.parse_args() + + # Default format to text if not present (e.g. init doesn't have it) + fmt = getattr(args, "format", "text") + + if args.command == "create": + deps = [] + if args.dependencies: + deps = [d.strip() for d in args.dependencies.split(",") if d.strip()] + create_task(args.category, args.title, args.desc, priority=args.priority, status=args.status, dependencies=deps, task_type=args.type, sprint=args.sprint, estimate=args.estimate, output_format=fmt) + elif args.command == "list": + list_tasks(args.status, args.category, sprint=args.sprint, include_archived=args.archived, output_format=fmt) + elif args.command == "init": + init_docs() + elif args.command == "show": + show_task(args.task_id, output_format=fmt) + elif args.command == "delete": + delete_task(args.task_id, output_format=fmt) + elif args.command == "archive": + archive_task(args.task_id, output_format=fmt) + elif args.command == "update": + update_task_status(args.task_id, args.status, output_format=fmt) + elif args.command == "context": + get_context(output_format=fmt) + elif args.command == "next": + get_next_task(output_format=fmt) + elif args.command == "migrate": + migrate_all() + elif args.command == "complete": + update_task_status(args.task_id, "completed", output_format=fmt) + elif args.command == "validate": + validate_all(output_format=fmt) + elif args.command == "visualize": + visualize_tasks(output_format=fmt) + elif args.command == "install-hooks": + install_hooks() + else: + parser.print_help() + +if __name__ == "__main__": + main() diff --git a/scripts/upgrade.py b/scripts/upgrade.py new file mode 100755 index 0000000..12e9f74 --- /dev/null +++ b/scripts/upgrade.py @@ -0,0 +1,45 @@ +#!/usr/bin/env python3 +import os +import sys +import shutil +import subprocess + +SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__)) +REPO_ROOT = os.path.dirname(SCRIPT_DIR) +TASKS_SCRIPT = os.path.join(SCRIPT_DIR, "tasks.py") + +def upgrade(): + print("Starting repository upgrade...") + + # 1. Migrate Tasks + print("\n[1/3] Checking for legacy tasks...") + try: + subprocess.check_call([sys.executable, TASKS_SCRIPT, "migrate"]) + except subprocess.CalledProcessError: + print("Warning: Task migration failed.") + + # 2. 
Update Tooling Configs + print("\n[2/3] Verifying agent configuration...") + # Check CLAUDE.md + claude_md = os.path.join(REPO_ROOT, "CLAUDE.md") + agents_md = os.path.join(REPO_ROOT, "AGENTS.md") + + if os.path.exists(agents_md): + if not os.path.exists(claude_md): + print("Creating CLAUDE.md symlink...") + try: + os.symlink("AGENTS.md", claude_md) + except OSError: + # Fallback for Windows or no symlink support + shutil.copy(agents_md, claude_md) + elif os.path.islink(claude_md): + pass # Good + else: + print("CLAUDE.md exists but is not a symlink. Leaving it as is.") + + # 3. Finalize + print("\n[3/3] Upgrade complete.") + print("Please review AGENTS.md to ensure it reflects the latest workflow (Code Review).") + +if __name__ == "__main__": + upgrade() diff --git a/templates/GUIDE.md b/templates/GUIDE.md new file mode 100644 index 0000000..3d0a944 --- /dev/null +++ b/templates/GUIDE.md @@ -0,0 +1,122 @@ +# Task Documentation System Guide + +This guide explains how to create, maintain, and update task documentation. It provides a reusable system for tracking implementation work, decisions, and progress. + +## Core Philosophy +**"If it's not documented in `docs/tasks/`, it didn't happen."** + +## Directory Structure +Tasks are organized by category in `docs/tasks/`: +- `foundation/`: Core architecture and setup +- `infrastructure/`: Services, adapters, platform code +- `domain/`: Business logic, use cases +- `presentation/`: UI, state management +- `features/`: End-to-end feature implementation +- `migration/`: Refactoring, upgrades +- `testing/`: Testing infrastructure +- `review/`: Code reviews and PR analysis + +## Task Document Format + +We use **YAML Frontmatter** for metadata and **Markdown** for content. + +### Frontmatter (Required) +```yaml +--- +id: FOUNDATION-20250521-103000 # Auto-generated Timestamp ID +status: pending # Current status +title: Initial Project Setup # Task Title +priority: medium # high, medium, low +created: 2025-05-21 10:30:00 # Creation timestamp +category: foundation # Category +type: task # task, story, bug, epic (Optional) +sprint: Sprint 1 # Iteration identifier (Optional) +estimate: 3 # Story points / T-shirt size (Optional) +dependencies: TASK-001, TASK-002 # Comma separated list of IDs (Optional) +--- +``` + +### Status Workflow +1. `pending`: Created but not started. +2. `in_progress`: Active development. +3. `review_requested`: Implementation done, awaiting code review. +4. `verified`: Reviewed and approved. +5. `completed`: Merged and finalized. +6. `wip_blocked` / `blocked`: Development halted. +7. `cancelled` / `deferred`: Stopped or postponed. + +### Content Template +```markdown +# [Task Title] + +## Task Information +- **Dependencies**: [List IDs] + +## Task Details +[Description of what needs to be done] + +### Acceptance Criteria +- [ ] Criterion 1 +- [ ] Criterion 2 + +## Implementation Status +### Completed Work +- ✅ Implemented X (file.py) + +### Blockers +[Describe blockers if any] +``` + +## Tools + +Use the `scripts/tasks` wrapper to manage tasks. 
+ +```bash +# Create a new task (standard) +./scripts/tasks create foundation "Task Title" + +# Create an Agile Story in a Sprint +./scripts/tasks create features "User Login" --type story --sprint "Sprint 1" --estimate 5 + +# List tasks (can filter by sprint) +./scripts/tasks list +./scripts/tasks list --sprint "Sprint 1" + +# Find the next best task to work on (Smart Agent Mode) +./scripts/tasks next + +# Update status +./scripts/tasks update [TASK_ID] in_progress +./scripts/tasks update [TASK_ID] review_requested +./scripts/tasks update [TASK_ID] verified +./scripts/tasks update [TASK_ID] completed + +# Migrate legacy tasks (if updating from older version) +./scripts/tasks migrate +``` + +## Agile Methodology + +This system supports Agile/Scrum workflows for LLM-Human collaboration. + +### Sprints +- Tag tasks with `sprint: [Name]` to group them into iterations. +- Use `./scripts/tasks list --sprint [Name]` to view the sprint backlog. + +### Estimation +- Use `estimate: [Value]` (e.g., Fibonacci numbers 1, 2, 3, 5, 8) to size tasks. + +### Auto-Pilot +- The `./scripts/tasks next` command uses an algorithm to determine the optimal next task based on: + 1. Status (In Progress > Pending) + 2. Dependencies (Unblocked > Blocked) + 3. Sprint (Current Sprint > Backlog) + 4. Priority (High > Low) + 5. Type (Stories/Bugs > Tasks) + +## Agent Integration + +Agents (Claude, etc.) use this system to track their work. +- Always check `./scripts/tasks context` or use `./scripts/tasks next` before starting. +- Keep the task file updated with your progress. +- Use `review_requested` when you need human feedback. diff --git a/templates/maintenance_mode.md b/templates/maintenance_mode.md new file mode 100644 index 0000000..3d53c80 --- /dev/null +++ b/templates/maintenance_mode.md @@ -0,0 +1,88 @@ +# AI Agent Instructions + +You are an expert Software Engineer working on this project. Your primary responsibility is to implement features and fixes while strictly adhering to the **Task Documentation System**. + +## Core Philosophy +**"If it's not documented in `docs/tasks/`, it didn't happen."** + +## Workflow +1. **Pick a Task**: Run `python3 scripts/tasks.py context` to see active tasks, or `list` to see pending ones. +2. **Plan & Document**: + * **Memory Check**: Run `python3 scripts/memory.py list` (or use the Memory Skill) to recall relevant long-term information. + * **Security Check**: Ask the user about specific security considerations for this task. + * If starting a new task, use `scripts/tasks.py create` (or `python3 scripts/tasks.py create`) to generate a new task file. + * Update the task status: `python3 scripts/tasks.py update [TASK_ID] in_progress`. +3. **Implement**: Write code, run tests. +4. **Update Documentation Loop**: + * As you complete sub-tasks, check them off in the task document. + * If you hit a blocker, update status to `wip_blocked` and describe the issue in the file. + * Record key architectural decisions in the task document. + * **Memory Update**: If you learn something valuable for the long term, use `scripts/memory.py create` to record it. +5. **Review & Verify**: + * Once implementation is complete, update status to `review_requested`: `python3 scripts/tasks.py update [TASK_ID] review_requested`. + * Ask a human or another agent to review the code. + * Once approved and tested, update status to `verified`. +6. **Finalize**: + * Update status to `completed`: `python3 scripts/tasks.py update [TASK_ID] completed`. + * Record actual effort in the file. 
+ * Ensure all acceptance criteria are met. + +## Tools +* **Wrapper**: `./scripts/tasks` (Checks for Python, recommended). +* **Create**: `./scripts/tasks create [category] "Title"` +* **List**: `./scripts/tasks list [--status pending]` +* **Context**: `./scripts/tasks context` +* **Update**: `./scripts/tasks update [ID] [status]` +* **Migrate**: `./scripts/tasks migrate` (Migrate legacy tasks to new format) +* **Memory**: `./scripts/memory.py [create|list|read]` +* **JSON Output**: Add `--format json` to any command for machine parsing. + +## Documentation Reference +* **Guide**: Read `docs/tasks/GUIDE.md` for strict formatting and process rules. +* **Architecture**: Refer to `docs/architecture/` for system design. +* **Features**: Refer to `docs/features/` for feature specifications. +* **Security**: Refer to `docs/security/` for risk assessments and mitigations. +* **Memories**: Refer to `docs/memories/` for long-term project context. + +## Code Style & Standards +* Follow the existing patterns in the codebase. +* Ensure all new code is covered by tests (if testing infrastructure exists). + +## PR Review Methodology +When performing a PR review, follow this "Human-in-the-loop" process to ensure depth and efficiency. + +### 1. Preparation +1. **Create Task**: `python3 scripts/tasks.py create review "Review PR #<N>: <Title>"` +2. **Fetch Details**: Use `gh` to get the PR context. + * `gh pr view <N>` + * `gh pr diff <N>` + +### 2. Analysis & Planning (The "Review Plan") +**Do not review line-by-line yet.** Instead, analyze the changes and document a **Review Plan** in the task file (or present it for approval). + +Your plan must include: +* **High-Level Summary**: Purpose, new APIs, breaking changes. +* **Dependency Check**: New libraries, maintenance status, security. +* **Impact Assessment**: Effect on existing code/docs. +* **Focus Areas**: Prioritized list of files/modules to check. +* **Suggested Comments**: Draft comments for specific lines. + * Format: `File: <path> | Line: <N> | Comment: <suggestion>` + * Tone: Friendly, suggestion-based ("Consider...", "Nit: ..."). + +### 3. Execution +Once the human approves the plan and comments: +1. **Pending Review**: Create a pending review using `gh`. + * `COMMIT_SHA=$(gh pr view <N> --json headRefOid -q .headRefOid)` + * `gh api repos/{owner}/{repo}/pulls/{N}/reviews -f commit_id="$COMMIT_SHA"` +2. **Batch Comments**: Add comments to the pending review. + * `gh api repos/{owner}/{repo}/pulls/{N}/comments -f body="..." -f path="..." -f commit_id="$COMMIT_SHA" -F line=<L> -f side="RIGHT"` +3. **Submit**: + * `gh pr review <N> --approve --body "Summary..."` (or `--request-changes`). + +### 4. Close Task +* Update task status to `completed`. + +## Agent Interoperability +- **Task Manager Skill**: `.claude/skills/task_manager/` +- **Memory Skill**: `.claude/skills/memory/` +- **Tool Definitions**: `docs/interop/tool_definitions.json` diff --git a/templates/task.md b/templates/task.md new file mode 100644 index 0000000..58a9703 --- /dev/null +++ b/templates/task.md @@ -0,0 +1,23 @@ +# Task: {title} + +## Task Information +- **Task ID**: {task_id} +- **Status**: pending +- **Priority**: medium +- **Phase**: 1 +- **Estimated Effort**: 1 day +- **Dependencies**: None + +## Task Details + +### Description +{description} + +### Acceptance Criteria +- [ ] Criterion 1 +- [ ] Criterion 2 + +--- + +*Created: {date}* +*Status: pending*
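For agents that drive this tooling programmatically, every command on `scripts/tasks.py` accepts `--format json` (as noted in AGENTS.md and `.cursorrules`). Below is a minimal sketch of how an agent wrapper might consume the output of the `next` command; the `pick_next_task` helper and its location are illustrative and not part of this patch, but the command, the flag, and the `id`/`title`/`status`/`message` fields it reads all come from `get_next_task()` above.

```python
# Sketch: consume the JSON output of `scripts/tasks.py next` from an agent wrapper.
# The helper name and repo_root parameter are illustrative; the command, the
# --format json flag, and the field names match the script added in this patch.
import json
import subprocess

def pick_next_task(repo_root="."):
    """Return the recommended next task as a dict, or None if nothing is actionable."""
    result = subprocess.run(
        ["python3", "scripts/tasks.py", "next", "--format", "json"],
        cwd=repo_root,
        capture_output=True,
        text=True,
        check=True,
    )
    payload = json.loads(result.stdout)
    # With no candidates, get_next_task() emits {"message": "..."} instead of a task.
    if "message" in payload:
        return None
    return payload

if __name__ == "__main__":
    task = pick_next_task()
    if task:
        print(f"Next up: {task['id']} - {task['title']} (status: {task['status']})")
    else:
        print("No actionable tasks found.")
```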