Align Codex health check payload with OpenAI Responses protocol #13
Conversation
Co-authored-by: lwaay2025 <218015601+lwaay2025@users.noreply.github.com>
@copilot Please confirm whether `max_tokens` is supported, and whether the current request payload follows the Responses API protocol.
Pull request overview
This PR fixes a protocol mismatch in Codex health check requests by aligning them with the OpenAI Responses API specification. Previously, health checks were incorrectly sending Chat Completions-style payloads (with `messages` and an implicit `max_tokens`) to the `/responses` endpoint, which caused response protocol errors.
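For illustration, a minimal Go sketch contrasting the two payload shapes (the variable names and the example model are assumptions; the field names follow the respective OpenAI APIs):

```go
package main

import "fmt"

func main() {
	model := "gpt-4o-mini" // example model name, not prescribed by the PR

	// Chat Completions shape — what the health check sent before (invalid for /responses):
	oldBody := map[string]interface{}{
		"model": model,
		"messages": []map[string]string{
			{"role": "user", "content": "hi"},
		},
	}

	// Responses API shape — what the health check sends now:
	newBody := map[string]interface{}{
		"model":             model,
		"input":             "hi",
		"max_output_tokens": 1, // Responses API field; max_tokens is the Chat Completions name
	}

	fmt.Println(oldBody, newBody)
}
```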
Changes:
- Updated Codex health check payload to use the Responses API format: an `input` field with `max_output_tokens` instead of `messages`
- Modified test cases to validate platform-specific request formats (Codex vs. Claude) with separate field expectations
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| `services/healthcheckservice.go` | Changed Codex request body from Chat Completions format to Responses API format (`input` + `max_output_tokens`) |
| `services/healthcheckservice_test.go` | Updated test structure to validate Codex-specific fields (`input`, `max_output_tokens`) vs. Claude fields (`messages`, `max_tokens`) |
```diff
 // Codex Responses protocol
 if platform == "codex" {
 	reqBody := map[string]interface{}{
-		"model": model,
-		"messages": []map[string]string{
-			{"role": "user", "content": "hi"},
-		},
+		"model":             model,
+		"input":             "hi",
+		"max_output_tokens": 1,
 	}
```
Copilot AI · Jan 12, 2026
The Codex request format has been updated to use the OpenAI Responses API protocol (`input` field and `max_output_tokens`), but `ConnectivityTestService` at `services/connectivitytestservice.go` lines 257-266 still uses the old Chat Completions format (`messages` field) for `/responses` endpoints. This inconsistency means health checks and connectivity tests will send different request formats to the same Codex endpoints, which could cause confusion or divergent behavior between the two services. Consider updating `ConnectivityTestService` to use the same Responses API format for consistency.
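One way to address the consistency concern would be a shared builder both services call. The helper below is a hypothetical sketch, not code from this PR; the function name and package placement are assumptions:

```go
package services

// buildCodexProbeBody builds the minimal Responses API payload used to probe
// a Codex /responses endpoint. Hypothetical shared helper, assumed to live
// where both HealthCheckService and ConnectivityTestService can reach it,
// so the payload shape stays defined in one place.
func buildCodexProbeBody(model string) map[string]interface{} {
	return map[string]interface{}{
		"model":             model,
		"input":             "hi", // Responses API prompt field (replaces messages)
		"max_output_tokens": 1,    // smallest budget that still exercises the endpoint
	}
}
```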
```go
switch v := reqData["input"].(type) {
case string:
	if strings.TrimSpace(v) == "" {
		t.Error("input string should not be empty")
	}
case []interface{}:
	if len(v) == 0 {
		t.Error("input array is empty")
		return
	}
default:
	t.Errorf("unexpected input type %T", v)
}
```
Copilot AI · Jan 12, 2026
The test accepts both string and array types for the `input` field (lines 369-381), but the actual implementation at `healthcheckservice.go:753` only sends a string (`"hi"`). The test should be more specific and only expect a string type to match the actual implementation, unless there's a requirement to support array inputs that isn't currently implemented. This flexibility could mask bugs if the implementation accidentally produces an array.
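A tightened assertion along the lines of this comment might look like the sketch below. It reuses the `reqData`, `t`, and `strings` identifiers from the test snippet above and is not part of this PR:

```go
// Require exactly the string type the implementation produces; anything else fails fast.
input, ok := reqData["input"].(string)
if !ok {
	t.Fatalf("input should be a string, got %T", reqData["input"])
}
if strings.TrimSpace(input) == "" {
	t.Error("input string should not be empty")
}
```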
Codex availability checks were using a chat-style request body instead of the OpenAI Responses schema, causing response protocol mismatches.
input: "hi"withmax_output_tokensper Responses API instead ofmessages.Example request body (Codex health check):
{ "model": "gpt-4o-mini", "input": "hi", "max_output_tokens": 1 }Original prompt