Conversation

walterchenchn commented Jan 13, 2026

What does this PR do?

Unify the return values of the reward function to make the logic clearer

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data, cfg, reward
    • If this PR involves multiple modules, separate them with , like [megatron, fsdp, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results such as training curve plots, evaluation results, etc.

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all of the following items before requesting a review; otherwise, the reviewer might deprioritize this PR.

CLAassistant commented Jan 13, 2026

CLA assistant check
All committers have signed the CLA.

gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request aims to unify the return values of the reward function, which is a good refactoring goal. However, the current implementation introduces a critical bug in the REMAX baseline calculation and leaves the function's type hint and docstring incorrect and misleading. I've left two review comments (one high severity, one critical) addressing these issues. Please review them carefully.

Updated return type annotation for the reward computation method.
walterchenchn changed the title from "[data] feat: TransferQueue - Unify the return reward" to "[data] feat: TransferQueue - Unify the return of reward" on Jan 13, 2026
0oshowero0 requested a review from Copilot January 13, 2026 08:57
Copilot AI (Contributor) left a comment

Pull request overview

This PR refactors the _compute_or_extract_reward method in the PPO ray trainer to simplify and unify its return value logic. The parameter return_dict is renamed to reward_for_val to better reflect its purpose, and the return value handling is streamlined.

Changes:

  • Renamed parameter return_dict to reward_for_val to clarify that it controls validation vs training reward computation paths
  • Simplified return logic: now returns either a tensor (for REMAX with sum_reward) or a tuple (all other cases), removing the dictionary return type
  • Updated all call sites to match the new return signature (tuple unpacking instead of dictionary access)
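
To make the new convention concrete, here is a minimal sketch of a reward helper following the behavior described in this overview. The reward_for_val flag, the REMAX sum_reward case, and the tensor-vs-tuple return come from the PR description; the free-function form, the exact signature, and the val_reward_fn/reward_fn split are illustrative assumptions, not the actual diff.

def compute_or_extract_reward(batch, reward_fn, val_reward_fn=None,
                              reward_for_val=False, sum_reward=False):
    # reward_for_val selects the validation reward path instead of the training one
    # (assumed split; the PR only states that the flag distinguishes the two paths).
    fn = val_reward_fn if (reward_for_val and val_reward_fn is not None) else reward_fn
    result = fn(batch, return_dict=True)  # assumed: underlying reward fn returns a dict

    reward_tensor = result["reward_tensor"]                  # 2D: (batch, response_len)
    reward_extra_info = result.get("reward_extra_info", {})  # per-sample extras, may be empty

    if sum_reward:
        # REMAX baseline path: collapse per-token rewards to one scalar per sample.
        return reward_tensor.sum(dim=-1)                     # 1D tensor

    # Every other caller receives the same tuple, replacing the old dict return.
    return reward_tensor, reward_extra_info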

If return_dict=True: dict with "reward_tensor" and "reward_extra_info"
If return_dict=False and sum_reward=True: summed reward_tensor (1D tensor)
If return_dict=False and sum_reward=False: reward_tensor (2D tensor)
If reward_for_val=True: reward_tensor (2D tensor)
Collaborator

Add descriptions for reward_extra_infos

Author

fixed
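
For reference, a docstring fragment consistent with the unified behavior discussed in this thread might read roughly as follows; this is a sketch of the convention, not the PR's actual docstring text:

Returns:
    If sum_reward=True (REMAX baseline): a 1D tensor of per-sample summed rewards.
    Otherwise: a tuple (reward_tensor, reward_extra_info), where reward_tensor is a
    2D tensor of shape (batch, response_len) and reward_extra_info is a dict of
    per-sample metrics (empty if the reward function provides none).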

reward_tensor = reward_tensor.sum(dim=-1)
reward_extra_info = result.get("reward_extra_info", {})
return {"reward_tensor": reward_tensor, "reward_extra_info": reward_extra_info}
return reward_tensor, reward_extra_info
Collaborator

unify with reward_extra_info_dict?

Author

fixed
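
To illustrate the caller-side effect of the change quoted above, dictionary access is replaced by tuple unpacking. The call shape below reuses the sketch from the review overview; function and variable names are assumptions rather than the PR's code.

# Before this PR (illustrative): callers unpacked a dict.
result = compute_or_extract_reward(batch, reward_fn)
reward_tensor = result["reward_tensor"]
reward_extra_info = result["reward_extra_info"]

# After this PR: callers unpack the tuple directly.
reward_tensor, reward_extra_info = compute_or_extract_reward(batch, reward_fn)

# REMAX baseline callers still get a single summed tensor.
baseline_tensor = compute_or_extract_reward(batch, reward_fn, sum_reward=True)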

walterchenchn and others added 2 commits January 13, 2026 17:49
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
ji-huazhong (Collaborator)

please run pre-commit run first

walterchenchn (Author)

please run pre-commit run first

done
