Conversation
…t AI instruction files

Migrate pipeline files from .plx to .mthds extension, restructure the inference deck into separate files (llm, img_gen, extract, custom), update backend configs, and consolidate AI coding instructions into CLAUDE.md by removing .blackboxrules, .cursor/rules, .windsurfrules.md, AGENTS.md, and .github/copilot-instructions.md. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace typing.List with builtin list[str] and remove unnecessary from __future__ import annotations to fix annotation resolution failure after pipelex update. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
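The annotation change this commit describes can be sketched as follows; the function name and body are hypothetical, chosen only to illustrate the builtin-generic style (PEP 585, Python 3.9+) that replaces `typing.List`:

```python
# Hypothetical example of the change described in the commit above.
# Before, the annotation needed an import (and sometimes a __future__ import):
#
#   from typing import List
#   def split_lines(text: str) -> List[str]: ...
#
# After: the builtin generic needs no typing import at all.
def split_lines(text: str) -> list[str]:
    """Split text into its lines, dropping line-break characters."""
    return text.splitlines()

print(split_lines("a\nb"))
```

Because `list[str]` is resolved directly as a builtin, it avoids the string-annotation resolution path that `from __future__ import annotations` triggers.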
Cursor Bugbot has reviewed your changes and found 2 potential issues.
```toml
best-gemini = "nano-banana-pro"
best-blackforestlabs = "flux-2-pro"
default-general = "flux-2-pro"
```
Image generation alias references non-existent model
High Severity
The aliases best-blackforestlabs and default-general both point to "flux-2-pro", but no backend configuration file defines a model with that name. The fal.toml backend only defines flux-pro, flux-pro/v1.1, flux-pro/v1.1-ultra, flux-2, and fast-lightning-sdxl. Since default-general is used by the gen-image preset (which is the choice_default), all default image generation will fail to resolve a model. The intended model is likely flux-2.
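If the reviewer's diagnosis is right and `flux-2` is the intended target, a minimal fix could look like the sketch below; this assumes the alias table keeps its current layout, and the exact model name should be verified against fal.toml:

```toml
# Sketch of a possible fix, assuming flux-2 (which fal.toml does define)
# is the intended model; verify the name before applying.
best-blackforestlabs = "flux-2"
default-general = "flux-2"
```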
```toml
[extract.aliases]
default-premium = "azure-document-intelligence"
default-extract-document = "mistral-document-ai-2505"
default-extract-image = "mistral-document-ai-2505"
```
Extract alias references non-existent extraction model
High Severity
The aliases default-extract-document and default-extract-image both point to "mistral-document-ai-2505", but no backend defines this model. The Mistral backend only defines mistral-ocr-2503, mistral-ocr-2505, mistral-ocr-2512, and mistral-ocr. Since default-extract-document is the choice_default for extraction, all default document extraction will fail to resolve a model.
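Assuming the intended model is one of the OCR entries the Mistral backend actually defines (`mistral-ocr-2505` is the closest match to the dangling name, but this is a guess), a hedged fix could be:

```toml
# Sketch only: mistral-ocr-2505 is assumed to be the intended model
# because it is the nearest name defined in the Mistral backend.
[extract.aliases]
default-premium = "azure-document-intelligence"
default-extract-document = "mistral-ocr-2505"
default-extract-image = "mistral-ocr-2505"
```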
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 94541fb8eb
```diff
@@ -28,7 +28,7 @@ description = "Find code files that implement or use elements mentioned in docs"
 inputs = { doc_file = "DocumentationFile", repo_map = "RepositoryMap" }
 output = "FilePath[]"
 model = { model = "llm_for_large_codebase", temperature = 0.1 }
```
Replace removed model alias with a valid deck reference
This pipe still uses model = "llm_for_large_codebase", but this commit deletes .pipelex/inference/deck/base_deck.toml (where that identifier was defined) and the new deck files (1_llm_deck.toml + x_custom_llm_deck.toml) do not reintroduce it, so model resolution for this step will fail at runtime (the same stale reference is also present in cocode/pipelines/text_utils.mthds).
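One way to repair the stale reference, sketched under the assumption that the new deck files accept an alias table like the deleted base_deck.toml did; the target model name below is a placeholder, not something the diff defines:

```toml
# Sketch only: reintroduce the deleted alias in x_custom_llm_deck.toml so
# existing pipes (including cocode/pipelines/text_utils.mthds) keep resolving.
# The target model is a hypothetical placeholder; substitute a model that
# the new decks actually define.
[llm.aliases]
llm_for_large_codebase = "gemini-2.5-pro"  # hypothetical target
```

Alternatively, the pipe itself could be updated to reference an alias the new decks do define, which avoids reintroducing the old name.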
Note
Medium Risk
Touches core inference configuration (model availability, defaults, and inputs), so misconfigurations could break model selection or pipeline execution across environments.
Overview
Refactors Pipelex inference configuration by removing large editor/assistant rule files (`.blackboxrules`, `.cursor/rules/*`, `.github/copilot-instructions.md`) and restructuring the model deck from a single `base_deck.toml` into separate decks for LLMs, image generation, and extraction.

Updates backend model catalogs across providers to add a `thinking_mode` field, expand model input support to include `pdf` for many LLMs, adjust/extend model lists and token limits (notably new Claude/GPT/Gemini/Mistral entries), and tighten some image-gen prompt rules (e.g. `positive_only`).

Written by Cursor Bugbot for commit 94541fb.