
Add LLM fallback chains, multi-LLM init wizard, and OpenAI OAuth#7

Merged
initializ-mk merged 2 commits into main from core/llm-fallback
Feb 25, 2026
Conversation

@initializ-mk
Contributor

Summary

  • LLM Fallback Chains: Adds FallbackChain (implements llm.Client) with cooldown tracking and error classification. When the primary provider fails (rate limit, auth error, server error), requests automatically failover to the next provider in the chain.
  • Runtime Integration: ResolveModelConfig resolves fallback providers from forge.yaml, FORGE_MODEL_FALLBACKS env var, or auto-detection from available API keys. Runner builds a FallbackChain wrapping primary + fallback clients.
  • Multi-LLM Init Wizard: New FallbackStep in the TUI wizard lets users select fallback providers and enter API keys during forge init. Also supports --fallbacks flag for non-interactive mode.
  • OpenAI OAuth: Browser-based login (Authorization Code + PKCE) via auth.openai.com with a local callback server. OAuthClient auto-refreshes expired tokens. ResponsesClient implements llm.Client for the OpenAI Responses API (required by ChatGPT OAuth/Codex endpoint).
  • Model Selection: Provider-specific model choices during init based on auth method (OAuth vs API key), with friendly display names.
  • Bug Fixes: OAuth/API-key routing (prevents Codex endpoint misuse when API key is configured), multi-select UX (auto-check cursor item on Enter).

Test plan

  • cd forge-core && go test ./llm/... ./runtime/... — fallback chain, cooldown, errors, config resolution, OAuth packages
  • cd forge-cli && go build ./... — compilation check
  • Run forge init, select a primary provider, verify fallback step appears, add a fallback, verify forge.yaml includes fallbacks section
  • Run forge init, select OpenAI, choose "Login with OpenAI", verify browser opens and OAuth completes
  • Configure OpenAI with API key, verify it uses Chat Completions (not Codex Responses endpoint)
  • Set both ANTHROPIC_API_KEY and OPENAI_API_KEY, verify auto-detected fallback in startup banner
  • forge init my-agent --model-provider anthropic --fallbacks openai,gemini --non-interactive — verify forge.yaml has fallbacks

Commits

- Fallback chain: FallbackChain, CooldownTracker, error classification
  in forge-core/llm/ with full test coverage
- Runtime integration: ResolveModelConfig resolves fallbacks from
  forge.yaml, FORGE_MODEL_FALLBACKS env var, and auto-detection
- Runner builds FallbackChain wrapping primary + fallback clients
- TUI wizard: FallbackStep for selecting fallback providers and
  collecting API keys during forge init
- OpenAI OAuth: browser-based login (Authorization Code + PKCE) via
  auth.openai.com with local callback server
- ResponsesClient: new llm.Client for the OpenAI Responses API used
  by ChatGPT OAuth tokens (Codex endpoint)
- OAuthClient: auto-refreshing wrapper that detects token expiry
- Model selection: provider-specific model choices during init based
  on auth method (OAuth vs API key)
- Fix OAuth/API-key routing: only use stored OAuth credentials when
  no API key is configured, preventing Codex endpoint misuse
- Fix multi-select UX: auto-check cursor item on Enter when nothing
  is checked, so single-selection works intuitively
- Update TestBuildTemplateData_DefaultModels to expect gpt-5.2-2025-12-11
  instead of the old gpt-4o-mini default
initializ-mk merged commit 1533a15 into main on Feb 25, 2026
9 checks passed