Conversation

@enyst enyst commented Jan 17, 2026

When a reasoning item is present in the conversation history, OpenAI's Responses API requires the following message item to have 'id' and 'status' fields. Without these, the API returns:
"Item of type 'reasoning' was provided without its required following item."

Changes:

  • Add responses_message_id and responses_message_status fields to Message
  • Capture message id/status in from_llm_responses_output()
  • Emit id/status in to_responses_dict() when available
  • Add tests for the new behavior

Checklist

  • If the PR is changing/adding functionality, are there tests to reflect this?
  • If there is an example, have you run the example to make sure that it works?
  • If there are instructions on how to run the code, have you followed the instructions and made sure that it works?
  • If the feature is significant enough to require documentation, is there a PR open on the OpenHands/docs repository with the same branch name?
  • Is the GitHub CI passing?


Co-authored-by: openhands <openhands@all-hands.dev>
Co-authored-by: smolpaws <engel@enyst.org>
@enyst enyst marked this pull request as draft January 17, 2026 22:13

enyst commented Jan 17, 2026

Validating this. It's odd that this only started happening recently.

@enyst enyst changed the title fix: preserve message id/status for Responses API reasoning round-trip fix: keep message id/status for Responses API round-trip Jan 17, 2026

@all-hands-bot all-hands-bot left a comment


The PR correctly implements message id/status preservation for OpenAI Responses API round-trips. However, there are a few areas that could be improved: missing test coverage for deserialization, potential type safety improvements, and clarifications around atomic field handling.

assert msg["type"] == "message"
assert msg["role"] == "assistant"
assert "id" not in msg
assert "status" not in msg

🟠 Important: Missing test coverage for the deserialization path. The current tests verify that to_responses_dict() correctly includes id/status fields, but there's no test verifying that from_llm_responses_output() correctly extracts these fields from OpenAI response items.

Consider adding a test that verifies the complete round-trip behavior by testing Message.from_llm_responses_output() with mock response items containing id and status fields.
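A round-trip test along the lines the review asks for might look like this sketch; the Message stub and dict-shaped response item here are hypothetical stand-ins for the SDK's real types:

```python
class Message:
    # Stub mirroring the PR's behavior; a real test would import Message
    # from the SDK instead.
    def __init__(self, **kw):
        self.responses_message_id = kw.get("responses_message_id")
        self.responses_message_status = kw.get("responses_message_status")

    @classmethod
    def from_llm_responses_output(cls, item: dict) -> "Message":
        # The deserialization path under test: extract id/status from the
        # response item.
        return cls(
            responses_message_id=item.get("id"),
            responses_message_status=item.get("status"),
        )


def test_round_trip_captures_id_and_status():
    item = {
        "type": "message",
        "id": "msg_abc123",
        "status": "completed",
        "role": "assistant",
        "content": [],
    }
    msg = Message.from_llm_responses_output(item)
    assert msg.responses_message_id == "msg_abc123"
    assert msg.responses_message_status == "completed"
```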

"content": content_items,
}
# Include id/status if available (required with reasoning item)
if self.responses_message_id:

🟡 Suggestion: The id and status fields are checked independently, which means it's possible to include one without the other in the output. According to the PR description, OpenAI requires both fields when a reasoning item is present.

Consider whether these should be set atomically (both or neither) with a single condition: if self.responses_message_id and self.responses_message_status:

Collaborator Author

@OpenHands if one or both don't exist but a reasoning item exists, let's NOT send back either of these two fields or the "orphan" reasoning item.


I'm on it! enyst can track my progress at all-hands.dev


Final summary (since my last update)

What changed

  • Updated Message.to_responses_dict() (assistant role) so that when a responses_reasoning_item is present but either responses_message_id or responses_message_status is missing, we emit neither:
    • the assistant "message" item, nor
    • the "reasoning" item
    This prevents sending an "orphan" reasoning item and matches the PR comment request.
  • Also ensured that id and status are only ever emitted on the assistant "message" item atomically (both or neither).
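The suppression rule described above could be sketched like this; `to_responses_items` and the dict-shaped message are illustrative stand-ins, not the SDK's actual API:

```python
def to_responses_items(message: dict) -> list[dict]:
    # Sketch of the rule: if a reasoning item exists but the id/status pair
    # is incomplete, emit neither the reasoning item nor the message item.
    has_pair = bool(message.get("responses_message_id")) and bool(
        message.get("responses_message_status")
    )
    reasoning = message.get("responses_reasoning_item")
    if reasoning is not None and not has_pair:
        # Dropping both avoids sending an "orphan" reasoning item.
        return []
    items = []
    if reasoning is not None:
        items.append(reasoning)
    msg_item = {
        "type": "message",
        "role": "assistant",
        "content": message.get("content", []),
    }
    if has_pair:
        # Atomic: id and status are emitted together or not at all.
        msg_item["id"] = message["responses_message_id"]
        msg_item["status"] = message["responses_message_status"]
    items.append(msg_item)
    return items
```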

Checklist against your request

  • If one or both of (responses_message_id, responses_message_status) don’t exist and a reasoning item exists, do not send back either of these two nor the reasoning item.
  • Changes are minimal and localized (single file change).
  • Tests run and pass (uv run pytest -q tests/sdk/llm/test_message.py).

Repo state

  • Working tree clean; no extra diffs.
  • Changes already committed and pushed to origin/fix/responses-reasoning-message-id (commit 49fe542c).


# OpenAI Responses message id/status (required when reasoning item is present)
responses_message_id: str | None = Field(
default=None,
description="OpenAI Responses message id from model output",

🟢 Nit: The field descriptions could be more detailed. Consider explaining that these fields are required by OpenAI when re-sending conversation history that includes a reasoning item, to clarify that they're for round-trip preservation rather than general message metadata.
