Conversational troubleshooting assistant for CAT engine manuals using Claude Sonnet 4.5 + the existing FTS5 RAG search. Streams responses via SSE with source citations. Graceful fallback to search when the API is unavailable.

New files:
- `src/services/llm_service.py` — Anthropic SDK wrapper (sync, retry)
- `src/services/chat_service.py` — RAG pipeline orchestration
- `src/prompts/manuals_assistant.py` — system prompt + context formatting
- `src/routes/chat.py` — chat endpoints with SSE streaming
- `templates/manuals/chat.html` — mobile-first chat UI
- `tests/test_chat.py` — 21 unit + integration tests

Modified:
- `src/config.py` — Anthropic API config vars
- `src/models.py` — ChatSession model
- `src/services/manuals_service.py` — `get_context_for_llm()`
- `src/app.py` — blueprint registration + LLM init
- `requirements.txt` — anthropic SDK

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
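The SSE streaming and search fallback described above could look roughly like the sketch below. This is a minimal illustration, not the actual implementation: `get_chat_service()`, `stream_reply()`, and `search_fallback()` are hypothetical names, and the route path is assumed from the review comments later in this thread.

```python
# Hypothetical sketch of the SSE chat endpoint — helper names are illustrative.
import json

from flask import Blueprint, Response, jsonify, request

chat_bp = Blueprint("chat", __name__, url_prefix="/manuals/chat")


@chat_bp.post("/api/message")
def post_message():
    question = (request.get_json(silent=True) or {}).get("message", "")
    service = get_chat_service()  # assumed factory; returns None when no API key is configured
    if service is None:
        # Graceful fallback: answer with plain FTS5 search results instead of an LLM reply.
        return jsonify({"fallback": True, "results": search_fallback(question)})

    def sse():
        # Stream the model's answer chunk by chunk as Server-Sent Events.
        for chunk in service.stream_reply(question):
            yield f"data: {json.dumps({'text': chunk})}\n\n"
        yield "data: [DONE]\n\n"

    return Response(sse(), mimetype="text/event-stream")
```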
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: b313fd3253
```js
    if (c.indexOf('csrf_token=') === 0) return c.substring(11);
  }
  return '';
}
```
**Inject a real CSRF token for chat POST requests**

This request path is CSRF-protected, but the client code can only read a token from a `meta[name="csrf-token"]` tag or a `csrf_token` cookie, and this template does not render either value. In non-testing configs (`WTF_CSRF_ENABLED=True`), `getCSRFToken()` falls through to `''`, so `/manuals/chat/api/message` will be rejected with a 400 and the chat UI cannot send messages.
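One possible shape for the fix, sketched on the Flask side under the assumption that Flask-WTF provides the CSRF protection (the view name and template variable are illustrative): render a token into the page so the `meta[name="csrf-token"]` lookup that `getCSRFToken()` already performs actually finds something.

```python
# Hypothetical sketch — assumes Flask-WTF; the view name and variable are illustrative.
from flask import render_template
from flask_wtf.csrf import generate_csrf


@chat_bp.get("/")
def chat_page():
    # chat.html can then emit:
    #   <meta name="csrf-token" content="{{ csrf_token }}">
    # which getCSRFToken() already reads before falling back to the cookie.
    return render_template("manuals/chat.html", csrf_token=generate_csrf())
```

With `CSRFProtect` initialized, the template could equally call the `csrf_token()` Jinja global directly; either way, the important part is that a real token reaches the meta tag.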
```python
    if not api_key:
        logger.warning("ANTHROPIC_API_KEY not set — chat assistant disabled")
        return None
```
**Reset singleton when chat service is disabled**

When `ANTHROPIC_API_KEY` is missing, this branch returns `None` but leaves the module-level `_service` untouched. If the process previously initialized a valid client (e.g., a prior app-factory call), `get_llm_service()` will keep returning the stale instance, so chat remains enabled with old credentials instead of being gracefully disabled.
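A sketch of the suggested reset, assuming the factory lives in `src/services/llm_service.py` and the singleton is the module-level `_service` named in the comment; the `LLMService` class and `init_llm_service()` signature are illustrative.

```python
# Hypothetical sketch — names other than _service and get_llm_service() are illustrative.
_service = None


def init_llm_service(api_key):
    global _service
    if not api_key:
        logger.warning("ANTHROPIC_API_KEY not set — chat assistant disabled")
        _service = None  # drop any client left over from an earlier app-factory call
        return None
    _service = LLMService(api_key)
    return _service


def get_llm_service():
    return _service
```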
Replace the duplicate FTS5 query in `get_context_for_llm()` with a thin wrapper around `search_manuals()`. Both the search UI and the LLM now share one search path with equipment filters, authority boost, and tag-aware ranking.

- Rewrite system prompt for a collaborative guide role (triage, not oracle)
- Add `format_search_results()` and `format_page_content()` context formats
- Add `detect_equipment()` auto-detection from query text
- Add equipment filter dropdown to chat UI
- Add `get_pages_content()` for future deep-dive phase
- Expand tests from 21 to 40, covering all new functions

Co-authored-by: Cursor <cursoragent@cursor.com>
## Search-Integrated Chat Redesign

Rewired the LLM chat to use the proven `search_manuals()` path.

### Changes
### Test validation

Query "valve lash" + equipment "3516" should now return identical top results in both the search UI and the chat assistant.
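A rough sketch of the shared path described above; the signatures of `search_manuals()`, `detect_equipment()`, and `format_search_results()` are assumptions based on the names in this PR rather than the actual code.

```python
# Hypothetical sketch — get_context_for_llm() as a thin wrapper over search_manuals().
def get_context_for_llm(query, equipment=None, limit=5):
    # Reuse the exact search path the UI uses, so equipment filters,
    # authority boost, and tag-aware ranking behave identically in both places.
    equipment = equipment or detect_equipment(query)  # e.g. picks up "3516" from the question
    results = search_manuals(query, equipment=equipment, limit=limit)
    # Format the hits into the context block the system prompt expects.
    return format_search_results(results)
```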
Conversational chat input (e.g., "What is the valve lash procedure for the 3516?") returned 0 results because FTS5's implicit AND required every word, including stop words, to appear on a single page. Added `_extract_search_query()`, which strips stop words and joins the remaining terms with OR when there are more than 3 content words. BM25 ranking then naturally scores multi-match pages higher.

Co-authored-by: Cursor <cursoragent@cursor.com>
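A minimal sketch of what `_extract_search_query()` might do, based only on the behavior described above; the stop-word list is illustrative, and the >3-word threshold comes from the commit message.

```python
# Hypothetical sketch of the query relaxation described above.
STOP_WORDS = {"what", "is", "the", "for", "a", "an", "of", "to", "how", "do", "on", "in"}


def _extract_search_query(text):
    words = [w.strip("?,.!").lower() for w in text.split()]
    content = [w for w in words if w and w not in STOP_WORDS]
    if len(content) > 3:
        # FTS5's implicit AND is too strict for conversational questions;
        # OR lets BM25 rank pages that match the most terms highest.
        return " OR ".join(content)
    return " ".join(content)
```

For example, "What is the valve lash procedure for the 3516?" reduces to `valve OR lash OR procedure OR 3516`, so a page only needs to match most of those terms to rank well.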
## Summary

### New Files

- `src/services/llm_service.py` — Anthropic SDK wrapper (sync, retry, streaming, cost tracking)
- `src/services/chat_service.py` — RAG pipeline orchestration, conversation history, token budget
- `src/prompts/manuals_assistant.py` — System prompt, context formatting, message building
- `src/routes/chat.py` — Chat endpoints with SSE streaming
- `templates/manuals/chat.html` — Mobile-first chat UI (XSS-safe DOM rendering)
- `tests/test_chat.py` — 21 unit + integration tests

### Modified Files

- `src/config.py` — Anthropic API config vars
- `src/models.py` — ChatSession model for conversation persistence
- `src/services/manuals_service.py` — `get_context_for_llm()` for RAG context retrieval
- `src/app.py` — Blueprint registration + LLM service init
- `requirements.txt` — anthropic SDK

## Test plan
- Run `flask db upgrade`
- Open `/manuals/chat` and test with real queries

🤖 Generated with Claude Code