fix: keep message id/status for Responses API round-trip #1754
base: main
Conversation
When a reasoning item is present in the conversation history, OpenAI's Responses API requires the following message item to have `id` and `status` fields. Without these, the API returns: "Item of type 'reasoning' was provided without its required following item."

Changes:
- Add `responses_message_id` and `responses_message_status` fields to `Message`
- Capture message id/status in `from_llm_responses_output()`
- Emit id/status in `to_responses_dict()` when available
- Add tests for the new behavior

Co-authored-by: openhands <openhands@all-hands.dev>
Co-authored-by: smolpaws <engel@enyst.org>
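A rough sketch of the intended round-trip. The field names follow this PR, but the surrounding `Message` model and the exact signatures of `from_llm_responses_output()` / `to_responses_dict()` are simplified assumptions, not the SDK's real API:

```python
from pydantic import BaseModel, Field


class Message(BaseModel):
    # Simplified stand-in for the SDK's Message model (assumption).
    role: str = "assistant"
    content: list[dict] = Field(default_factory=list)
    responses_message_id: str | None = Field(
        default=None,
        description="OpenAI Responses message id from model output",
    )
    responses_message_status: str | None = Field(
        default=None,
        description="OpenAI Responses message status from model output",
    )

    @classmethod
    def from_llm_responses_output(cls, item: dict) -> "Message":
        # Capture id/status from the Responses API output item so they can be
        # echoed back on the next request.
        return cls(
            role=item.get("role", "assistant"),
            content=item.get("content", []),
            responses_message_id=item.get("id"),
            responses_message_status=item.get("status"),
        )

    def to_responses_dict(self) -> dict:
        out: dict = {"type": "message", "role": self.role, "content": self.content}
        # Emit id/status when available so the API can pair this message with
        # a preceding reasoning item.
        if self.responses_message_id:
            out["id"] = self.responses_message_id
        if self.responses_message_status:
            out["status"] = self.responses_message_status
        return out
```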
Validating this. It's weird because it started recently.
all-hands-bot
left a comment
The PR correctly implements message id/status preservation for OpenAI Responses API round-trips. However, there are a few areas that could be improved: missing test coverage for deserialization, potential type safety improvements, and clarifications around atomic field handling.
| assert msg["type"] == "message" | ||
| assert msg["role"] == "assistant" | ||
| assert "id" not in msg | ||
| assert "status" not in msg |
🟠 Important: Missing test coverage for the deserialization path. The current tests verify that `to_responses_dict()` correctly includes id/status fields, but there's no test verifying that `from_llm_responses_output()` correctly extracts these fields from OpenAI response items.
Consider adding a test that verifies the complete round-trip behavior by testing `Message.from_llm_responses_output()` with mock response items containing id and status fields.
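A hedged sketch of such a test, assuming `from_llm_responses_output()` accepts a single plain-dict output item (the real SDK may expect a typed response object or a full output list, and the import path is a guess):

```python
# Import path is an assumption; adjust to wherever Message actually lives.
from openhands.sdk.llm import Message


def test_round_trip_preserves_message_id_and_status():
    # Hypothetical mock of a Responses API output item; the shape is assumed.
    item = {
        "type": "message",
        "role": "assistant",
        "id": "msg_abc123",
        "status": "completed",
        "content": [{"type": "output_text", "text": "hello"}],
    }

    msg = Message.from_llm_responses_output(item)
    assert msg.responses_message_id == "msg_abc123"
    assert msg.responses_message_status == "completed"

    # Serializing back out should echo both fields for the API.
    out = msg.to_responses_dict()
    assert out["id"] == "msg_abc123"
    assert out["status"] == "completed"
```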
| "content": content_items, | ||
| } | ||
| # Include id/status if available (required with reasoning item) | ||
| if self.responses_message_id: |
🟡 Suggestion: The `id` and `status` fields are checked independently, which means it's possible to include one without the other in the output. According to the PR description, OpenAI requires both fields when a reasoning item is present.
Consider whether these should be set atomically (both or neither) with a single condition: `if self.responses_message_id and self.responses_message_status:`
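For illustration, a small helper that applies the check atomically; the helper name and dict shape here are hypothetical, not part of the PR:

```python
def attach_id_status(out: dict, msg_id: str | None, status: str | None) -> None:
    # Attach id/status together or not at all, since OpenAI requires both
    # when the message follows a reasoning item.
    if msg_id and status:
        out["id"] = msg_id
        out["status"] = status
```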
@OpenHands if one or both don't exist, but a reasoning item exists, let's NOT send back either of these two nor the "orphan" reasoning item.
I'm on it! enyst can track my progress at all-hands.dev
Final summary (since my last update)

What changed
- Updated `Message.to_responses_dict()` (assistant role) so that when a `responses_reasoning_item` is present but either `responses_message_id` or `responses_message_status` is missing, we emit neither:
  - the assistant `"message"` item, nor
  - the `"reasoning"` item

  This prevents sending an "orphan" reasoning item and matches the PR comment request.
- Also ensured that `id` and `status` are only ever emitted on the assistant `"message"` item atomically (both or neither).

Checklist against your request
- If one or both of (`responses_message_id`, `responses_message_status`) don't exist and a reasoning item exists, do not send back either of these two nor the reasoning item.
- Changes are minimal and localized (single file change).
- Tests run and passing (`uv run pytest -q tests/sdk/llm/test_message.py`).

Repo state
- Working tree clean; no extra diffs.
- Changes already committed and pushed to `origin/fix/responses-reasoning-message-id` (commit `49fe542c`).
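A sketch of the guard described above, written as a hypothetical standalone helper; the attribute names come from this PR (`responses_reasoning_item`, `responses_message_id`, `responses_message_status`), but the actual shape of `to_responses_dict()` may differ:

```python
from typing import Any


def assistant_items_for_responses(msg: Any) -> list[dict]:
    """Sketch: if a reasoning item exists but id/status are incomplete,
    emit neither the reasoning item nor the assistant message item."""
    has_ids = bool(msg.responses_message_id and msg.responses_message_status)

    if msg.responses_reasoning_item and not has_ids:
        # Avoid sending an "orphan" reasoning item that the API would reject.
        return []

    items: list[dict] = []
    if msg.responses_reasoning_item:
        items.append(msg.responses_reasoning_item)

    message_item: dict = {
        "type": "message",
        "role": "assistant",
        "content": msg.content,
    }
    if has_ids:
        # Both or neither, per the Responses API requirement.
        message_item["id"] = msg.responses_message_id
        message_item["status"] = msg.responses_message_status
    items.append(message_item)
    return items
```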
```python
# OpenAI Responses message id/status (required when reasoning item is present)
responses_message_id: str | None = Field(
    default=None,
    description="OpenAI Responses message id from model output",
```
🟢 Nit: The field descriptions could be more detailed. Consider explaining that these fields are required by OpenAI when re-sending conversation history that includes a reasoning item, to clarify that they're for round-trip preservation rather than general message metadata.
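For example, the descriptions could spell out the round-trip purpose; the wording below is only a suggestion, mirroring the excerpt above:

```python
responses_message_id: str | None = Field(
    default=None,
    description=(
        "Message item id from OpenAI Responses output; echoed back, together "
        "with status, when re-sending history that contains a reasoning item."
    ),
)
responses_message_status: str | None = Field(
    default=None,
    description=(
        "Message item status from OpenAI Responses output; round-trip "
        "metadata required alongside the id when a reasoning item precedes "
        "this message."
    ),
)
```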