[rl] refactor torchtitan model registry in vllm #2194
Open
wwwjn wants to merge 23 commits into gh/wwwjn/5/base from
Conversation
[ghstack-poisoned]
This was referenced Jan 2, 2026
tianyu-l reviewed Jan 19, 2026
tianyu-l reviewed Feb 19, 2026
    # model_flavor during registration because we cannot pass torchtitan job_config from the LLM() API
    model_flavor="0.6B",
    from torchtitan.experiments.rl.unified.infra.parallelism_utils import (
        create_parallel_dims_from_vllm_config,
Contributor
I hope we could put all torchtitan-vllm glue code in one file / folder and carefully document why we need each class / method. This one sounds like one of them.
Contributor
Author
Nice catch, refactored this part.
wwwjn added a commit that referenced this pull request Feb 22, 2026
…ight tying (#2410)

Stack from [ghstack](https://github.com/ezyang/ghstack/tree/0.13.0) (oldest at bottom):
* #2395
* #2244
* #2221
* #2194
* #2191
* __->__ #2410

This is an alternative fix to #2402 (comment). Weight updating between the trainer and the generator is totally broken because we called reload_weights when updating the weights. reload_weights has the following steps:
- initialize_layerwise_reload(model): saves the current real GPU tensors as info.kernel_tensors and replaces all parameters with meta tensors.
- model.load_weights(weights_iter): this function is written by us and calls set_model_state_dict. Internally, set_model_state_dict tries to do param.data.copy_(loaded_weight) for each parameter. When the parameters are meta tensors, this is a no-op, so the weights never get updated.

In this PR:
- Totally bypass reload_weights, and don't load from a file when we update the weights.
- Get the model via self.engine.model_executor.driver_worker.get_model().
- Iterate over model.named_parameters() to find the matching parameter by name.
- Do param.data.copy_(new_tensor) directly.
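The direct-copy approach described in the commit message can be sketched as below. This is a minimal illustration, not the PR's actual code: `update_weights_inplace` is a hypothetical helper name, and in the real setup the model would come from `self.engine.model_executor.driver_worker.get_model()` rather than being passed in directly.

```python
import torch


def update_weights_inplace(model: torch.nn.Module,
                           new_state: dict[str, torch.Tensor]) -> int:
    """Copy new tensors into matching parameters by name, in place.

    Unlike loading onto meta tensors, copy_ on real GPU/CPU tensors
    actually overwrites the storage, so the running model sees the
    updated weights. Returns the number of parameters updated.
    """
    updated = 0
    params = dict(model.named_parameters())
    for name, new_tensor in new_state.items():
        param = params.get(name)
        if param is None:
            continue  # no parameter with this name; skip it
        with torch.no_grad():
            # Move/cast the incoming tensor to match, then overwrite in place.
            param.data.copy_(new_tensor.to(device=param.device, dtype=param.dtype))
        updated += 1
    return updated
```

A hypothetical name-matching step like this avoids the reload_weights path entirely, at the cost of requiring the trainer's state-dict keys to line up with the generator model's parameter names.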
wwwjn commented Feb 24, 2026
tianyu-l reviewed Feb 25, 2026