Add documentation for model selection and custom providers #1
+389
−0
Description
Users needed clarity on whether they can use custom local models (e.g., LM Studio instances) or are restricted to predefined models. This PR adds comprehensive documentation explaining model provider configuration.
Changes
New documentation:
- `docs/models.md` (384 lines) covering:
  - Local model providers: LM Studio (default model `openai/gpt-oss-20b`) and Ollama (port 11434, default model `gpt-oss:20b`)
  - `~/.codex/config.toml` with all available options (auth, wire protocols, HTTP headers, retry settings)
  - Environment variables (`CODEX_OSS_BASE_URL`, `CODEX_OSS_PORT`, `OPENAI_BASE_URL`)

Updated links:
- `README.md`: added the model selection documentation to the docs section
- `docs/config.md`: added a reference to the model selection guide

Example Configuration
Configuration applies to both CLI and IDE usage.
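Since the PR points users at provider settings in `~/.codex/config.toml`, a minimal sketch of what such a configuration might look like follows. The key names, port, and model names here are illustrative assumptions, not copied from the PR; `docs/models.md` in this PR is the authoritative reference.

```toml
# Illustrative sketch only; keys and values are assumptions.
# See docs/models.md for the full list of supported options.

# Select the model and the provider that serves it.
model = "gpt-oss:20b"
model_provider = "ollama"

# Define a custom/local provider (here, an Ollama instance on its
# default port 11434, as mentioned in the PR description).
[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"
```

Because the CLI and IDE integrations read the same file, a provider defined once this way would apply to both.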