Merged
2 changes: 1 addition & 1 deletion AGENTS.md
@@ -6,7 +6,7 @@ You are an expert Software Engineer working on this project. Your primary respon
**"If it's not documented in `docs/tasks/`, it didn't happen."**

## Workflow
-1. **Pick a Task**: Run `python3 scripts/tasks.py context` to see active tasks, or `list` to see pending ones.
+1. **Pick a Task**: Run `python3 scripts/tasks.py next` to find the best task, `context` to see active tasks, or `list` to see pending ones.
2. **Plan & Document**:
* **Memory Check**: Run `python3 scripts/memory.py list` (or use the Memory Skill) to recall relevant long-term information.
* **Security Check**: Ask the user about specific security considerations for this task.
117 changes: 117 additions & 0 deletions AGENTS.md.bak
@@ -0,0 +1,117 @@
# AI Agent Instructions

You are an expert Software Engineer working on this project. Your primary responsibility is to implement features and fixes while strictly adhering to the **Task Documentation System**.

## Core Philosophy
**"If it's not documented in `docs/tasks/`, it didn't happen."**

## Workflow
1. **Pick a Task**: Run `python3 scripts/tasks.py context` to see active tasks, or `list` to see pending ones.
2. **Plan & Document**:
* **Memory Check**: Run `python3 scripts/memory.py list` (or use the Memory Skill) to recall relevant long-term information.
* **Security Check**: Ask the user about specific security considerations for this task.
* If starting a new task, use `scripts/tasks.py create` (or `python3 scripts/tasks.py create`) to generate a new task file.
* Update the task status: `python3 scripts/tasks.py update [TASK_ID] in_progress`.
3. **Implement**: Write code, run tests.
4. **Update Documentation Loop**:
* As you complete sub-tasks, check them off in the task document.
* If you hit a blocker, update status to `wip_blocked` and describe the issue in the file.
* Record key architectural decisions in the task document.
* **Memory Update**: If you learn something valuable for the long term, use `scripts/memory.py create` to record it.
5. **Review & Verify**:
* Once implementation is complete, update status to `review_requested`: `python3 scripts/tasks.py update [TASK_ID] review_requested`.
* Ask a human or another agent to review the code.
* Once approved and tested, update status to `verified`.
6. **Finalize**:
* Update status to `completed`: `python3 scripts/tasks.py update [TASK_ID] completed`.
* Record actual effort in the file.
* Ensure all acceptance criteria are met.

## Tools
* **Wrapper**: `./scripts/tasks` (checks that Python is available; recommended entry point).
* **Next**: `./scripts/tasks next` (Finds the best task to work on).
* **Create**: `./scripts/tasks create [category] "Title"`
* **List**: `./scripts/tasks list [--status pending]`
* **Context**: `./scripts/tasks context`
* **Update**: `./scripts/tasks update [ID] [status]`
* **Migrate**: `./scripts/tasks migrate` (Migrate legacy tasks to new format)
* **Link**: `./scripts/tasks link [ID] [DEP_ID]` (Add dependency).
* **Unlink**: `./scripts/tasks unlink [ID] [DEP_ID]` (Remove dependency).
* **Index**: `./scripts/tasks index` (Generate INDEX.yaml).
* **Graph**: `./scripts/tasks graph` (Visualize dependencies).
* **Validate**: `./scripts/tasks validate` (Check task files).
* **Memory**: `./scripts/memory.py [create|list|read]`
* **JSON Output**: Add `--format json` to any command for machine parsing.

## Documentation Reference
* **Guide**: Read `docs/tasks/GUIDE.md` for strict formatting and process rules.
* **Architecture**: Refer to `docs/architecture/` for system design.
* **Features**: Refer to `docs/features/` for feature specifications.
* **Security**: Refer to `docs/security/` for risk assessments and mitigations.
* **Memories**: Refer to `docs/memories/` for long-term project context.

## Code Style & Standards
* Follow the existing patterns in the codebase.
* Ensure all new code is covered by tests (if testing infrastructure exists).

## PR Review Methodology
When performing a PR review, follow this "Human-in-the-loop" process to ensure depth and efficiency.

### 1. Preparation
1. **Create Task**: `python3 scripts/tasks.py create review "Review PR #<N>: <Title>"`
2. **Fetch Details**: Use `gh` to get the PR context.
* `gh pr view <N>`
* `gh pr diff <N>`

### 2. Analysis & Planning (The "Review Plan")
**Do not review line-by-line yet.** Instead, analyze the changes and document a **Review Plan** in the task file (or present it for approval).

Your plan must include:
* **High-Level Summary**: Purpose, new APIs, breaking changes.
* **Dependency Check**: New libraries, maintenance status, security.
* **Impact Assessment**: Effect on existing code/docs.
* **Focus Areas**: Prioritized list of files/modules to check.
* **Suggested Comments**: Draft comments for specific lines.
* Format: `File: <path> | Line: <N> | Comment: <suggestion>`
* Tone: Friendly, suggestion-based ("Consider...", "Nit: ...").
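
A concrete example of this comment format (file, line number, and wording are hypothetical):

`File: pkg/app/passage.go | Line: 42 | Comment: Consider wrapping this error with context before returning.`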

### 3. Execution
Once the human approves the plan and comments:
1. **Pending Review**: Create a pending review using `gh`.
* `COMMIT_SHA=$(gh pr view <N> --json headRefOid -q .headRefOid)`
* `gh api repos/{owner}/{repo}/pulls/{N}/reviews -f commit_id="$COMMIT_SHA"`
2. **Batch Comments**: Add comments to the pending review.
* `gh api repos/{owner}/{repo}/pulls/{N}/comments -f body="..." -f path="..." -f commit_id="$COMMIT_SHA" -F line=<L> -f side="RIGHT"`
3. **Submit**:
* `gh pr review <N> --approve --body "Summary..."` (or `--request-changes`).

### 4. Close Task
* Update task status to `completed`.

## Project Specific Instructions

### Core Directives
- **API First**: The Bible AI API is the primary source for data. Scraping (`pkg/app/passage.go` fallback) is deprecated and should be avoided for new features.
- **Secrets**: Do not commit secrets. Use `pkg/secrets` to retrieve them from Environment or Google Secret Manager.
- **Testing**: Run tests from the root using `go test ./pkg/...`.

### Code Guidelines
- **Go Version**: 1.24+
- **Naming**:
- Variables: `camelCase`
- Functions: `PascalCase` (exported), `camelCase` (internal)
- Packages: `underscore_case`
- **Structure**:
- `pkg/app`: Business logic.
- `pkg/bot`: Platform integration.
- `pkg/utils`: Shared utilities.

### Local Development
- **Setup**: Create a `.env` file with `TELEGRAM_ID` and `TELEGRAM_ADMIN_ID`.
- **Run**: `go run main.go`
- **Testing**: Use `ngrok` to tunnel webhooks or send mock HTTP requests.

## Agent Interoperability
- **Task Manager Skill**: `.claude/skills/task_manager/`
- **Memory Skill**: `.claude/skills/memory/`
- **Tool Definitions**: `docs/interop/tool_definitions.json`
112 changes: 0 additions & 112 deletions CLAUDE.md

This file was deleted.

1 change: 1 addition & 0 deletions CLAUDE.md
48 changes: 48 additions & 0 deletions docs/TESTING.md
@@ -0,0 +1,48 @@
# Testing Strategy

This project employs a hybrid testing strategy to ensure code quality while minimizing external dependencies and costs.

## Test Categories

### 1. Unit Tests (Standard)
* **Default Behavior:** By default, all tests run in "mock mode".
* **Goal:** Fast, reliable, and cost-free verification of logic.
* **Mechanism:** External services (Bible AI API, BibleGateway scraping) are mocked using function replacement (e.g., `SubmitQuery`, `GetPassageHTML`) or interface mocking.
* **Execution:** These tests run automatically on every Pull Request.

### 2. Integration Tests (Live)
* **Conditional Behavior:** Specific tests switch to "live mode" when the appropriate environment variables are detected.
* **Goal:** Verify that the application correctly interacts with real external services (Contract Testing) and that credentials/configurations are valid.
* **Execution:** These tests should be run on a scheduled basis (e.g., nightly or weekly) or manually when verifying infrastructure changes.

## Live Tests & Configuration

The following tests support live execution:

### `TestSubmitQuery`
* **File:** `pkg/app/api_client_test.go`
* **Description:** Verifies connectivity to the Bible AI API.
* **Trigger:**
* `BIBLE_API_URL` is set AND
* `BIBLE_API_URL` is NOT `https://example.com`
* **Required Variables:**
* `BIBLE_API_URL`: The endpoint of the Bible AI API.
* `BIBLE_API_KEY`: A valid API key.
* **Rationale:** Ensures that the client code (request marshaling, auth headers) matches the actual API expectation and that the API is reachable.

### `TestUserDatabaseIntegration`
* **File:** `pkg/app/database_integration_test.go`
* **Description:** Verifies Read/Write operations to Google Cloud Firestore/Datastore.
* **Trigger:**
* `GCLOUD_PROJECT_ID` is set.
* **Required Variables:**
* `GCLOUD_PROJECT_ID`: The Google Cloud Project ID.
* *Note:* Requires active Google Cloud credentials (e.g., `GOOGLE_APPLICATION_CREDENTIALS` or `gcloud auth`).
* **Rationale:** Verifies that database permissions and client initialization are correct, preventing runtime errors in production. Uses a specific test user ID (`test-integration-user-DO-NOT-DELETE`) to avoid affecting real user data.

## Rationale for Strategy

1. **Cost Reduction:** The Bible AI API may incur costs per call. Mocking avoids incurring these costs during routine development.
2. **Speed:** Live calls are slow. Mocked tests run instantly.
3. **Reliability:** External services can be flaky. Mocked tests only fail if the code is broken.
4. **Verification:** We still need to know if the API changed or if our secrets are wrong. The conditional integration tests provide this safety net without the daily cost/latency penalty.
20 changes: 8 additions & 12 deletions docs/tasks/GUIDE.md
@@ -91,21 +91,17 @@ Use the `scripts/tasks` wrapper to manage tasks.
./scripts/tasks update [TASK_ID] verified
./scripts/tasks update [TASK_ID] completed

-# Migrate legacy tasks (if updating from older version)
-./scripts/tasks migrate
-
 # Manage Dependencies
-./scripts/tasks link [TASK_ID] [DEPENDENCY_ID]
-./scripts/tasks unlink [TASK_ID] [DEPENDENCY_ID]
+./scripts/tasks link [TASK_ID] [DEP_ID]
+./scripts/tasks unlink [TASK_ID] [DEP_ID]
 
-# Generate Dependency Index (docs/tasks/INDEX.yaml)
-./scripts/tasks index
+# Visualization & Analysis
+./scripts/tasks graph     # Show dependency graph
+./scripts/tasks index     # Generate INDEX.yaml
+./scripts/tasks validate  # Check for errors
 
-# Visualize Dependencies (Mermaid Graph)
-./scripts/tasks graph
-
-# Validate Task Files
-./scripts/tasks validate
+# Migrate legacy tasks (if updating from older version)
+./scripts/tasks migrate
```

## Agile Methodology
@@ -0,0 +1,14 @@
---
id: MIGRATION-20251229-060122-RTG
status: completed
title: Update Agent Harness
priority: medium
created: 2025-12-29 06:01:22
category: migration
dependencies:
type: task
---

# Update Agent Harness

To be determined
19 changes: 15 additions & 4 deletions pkg/app/api_client_test.go
@@ -1,15 +1,26 @@
 package app
 
 import (
+	"os"
 	"testing"
 )
 
 func TestSubmitQuery(t *testing.T) {
 	t.Run("Success", func(t *testing.T) {
-		// Force cleanup of environment to ensure we test Secret Manager fallback
-		// This handles cases where the runner might have lingering env vars
-		defer SetEnv("BIBLE_API_URL", "https://example.com")()
-		defer SetEnv("BIBLE_API_KEY", "api_key")()
+		// Check if we should run integration test against real API
+		// If BIBLE_API_URL is set and not example.com, we assume integration test mode
+		realURL, hasURL := os.LookupEnv("BIBLE_API_URL")
+		if hasURL && realURL != "" && realURL != "https://example.com" {
+			t.Logf("Running integration test against real API: %s", realURL)
+			// Ensure we have a key
+			if _, hasKey := os.LookupEnv("BIBLE_API_KEY"); !hasKey {
+				t.Log("Warning: BIBLE_API_URL set but BIBLE_API_KEY missing. Test might fail.")
+			}
+		} else {
+			// Mock mode
+			defer SetEnv("BIBLE_API_URL", "https://example.com")()
+			defer SetEnv("BIBLE_API_KEY", "api_key")()
+		}
 
 		ResetAPIConfigCache()
15 changes: 15 additions & 0 deletions pkg/app/devo_test.go
@@ -4,6 +4,8 @@ import (
 	"testing"
 	"time"
 
+	"golang.org/x/net/html"
+
 	"github.com/julwrites/BotPlatform/pkg/def"
 	"github.com/julwrites/ScriptureBot/pkg/utils"
 )
@@ -81,6 +83,19 @@ func TestGetDevotionalData(t *testing.T) {
 	defer UnsetEnv("BIBLE_API_KEY")()
 	ResetAPIConfigCache()
 
+	// Mock GetPassageHTML to prevent external calls during fallback
+	originalGetPassageHTML := GetPassageHTML
+	defer func() { GetPassageHTML = originalGetPassageHTML }()
+
+	GetPassageHTML = func(ref, ver string) *html.Node {
+		return mockGetPassageHTML(`
+			<div class="bcv">Genesis 1</div>
+			<div class="passage-text">
+				<p>Mock devotional content.</p>
+			</div>
+		`)
+	}
+
 	var env def.SessionData
 	env.Props = map[string]interface{}{"ResourcePath": "../../resource"}
 	env.Res = GetDevotionalData(env, "DTMSV")