[model] fix: qwen3-vl-30b npu_patch fix #4888
base: main
Conversation
Code Review
This pull request attempts to fix an issue in the NPU patch for Qwen3-VL MoE models. However, the change introduces a critical bug due to a tensor shape mismatch. The modification assumes routing_weights has the full expert dimension, but the calling function passes a tensor with only the top-k probabilities. This will cause an index-out-of-bounds error during execution. I've provided a comment with a detailed explanation and a suggestion to revert the change.
```python
num_tokens = hidden_states.shape[0]
top_k = router_indices.shape[1]
batch_idx = torch.arange(num_tokens, device=routing_weights.device)
batch_idx = batch_idx.unsqueeze(1).expand(-1, top_k)
selected_probs = routing_weights[batch_idx, router_indices]
next_states = torch_npu.npu_moe_token_unpermute(output, row_ids_map, probs=selected_probs)
```
This change appears to introduce a bug due to a mismatch in tensor shapes between the caller and this function.
The calling function, NPUQwen3VLMoeTextSparseMoeBlock.forward, reassigns routing_weights to the top-k probability values at line 225:
```python
# verl/models/transformers/npu_patch.py:225
routing_weights, router_indices = torch.topk(routing_weights, self.top_k, dim=-1)
```
This results in routing_weights having a shape of (num_tokens, top_k) when passed to this function during training.
However, the new code here attempts to index routing_weights using router_indices, which contains expert indices that can be larger than top_k - 1:
```python
# verl/models/transformers/npu_patch.py:192
selected_probs = routing_weights[batch_idx, router_indices]
```
This will lead to an index-out-of-bounds error.
The original implementation seems correct, as it directly uses the (num_tokens, top_k) routing_weights tensor, which appears to be the expected input for npu_moe_token_unpermute.
If the intent is for routing_weights to be the full (num_tokens, num_experts) probability distribution, the fix should be in NPUQwen3VLMoeTextSparseMoeBlock.forward to avoid overwriting routing_weights. Without that change in the caller, this PR is incorrect.
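To make the failure mode concrete, here is a minimal, self-contained sketch with toy sizes (plain torch only, not the actual npu_patch.py code; num_tokens, num_experts, and the logits are made up for illustration):

```python
import torch

# Toy sizes for illustration only (not the real model configuration).
num_tokens, num_experts, top_k = 4, 8, 2

# Deterministic logits so the top-k experts always have indices 6 and 7 (>= top_k).
router_logits = torch.arange(num_tokens * num_experts, dtype=torch.float32).reshape(num_tokens, num_experts)
routing_weights = torch.softmax(router_logits, dim=-1)  # (num_tokens, num_experts)

# What the caller does at npu_patch.py:225: routing_weights is overwritten with
# the top-k *values*, so its last dimension shrinks from num_experts to top_k.
routing_weights, router_indices = torch.topk(routing_weights, top_k, dim=-1)

# What the new code in this PR then does: index the (num_tokens, top_k) tensor
# with expert ids that can be as large as num_experts - 1.
batch_idx = torch.arange(num_tokens).unsqueeze(1).expand(-1, top_k)
try:
    selected_probs = routing_weights[batch_idx, router_indices]
except IndexError as e:
    print(f"out of bounds, as described above: {e}")
```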
Suggested change:
```diff
-num_tokens = hidden_states.shape[0]
-top_k = router_indices.shape[1]
-batch_idx = torch.arange(num_tokens, device=routing_weights.device)
-batch_idx = batch_idx.unsqueeze(1).expand(-1, top_k)
-selected_probs = routing_weights[batch_idx, router_indices]
-next_states = torch_npu.npu_moe_token_unpermute(output, row_ids_map, probs=selected_probs)
+next_states = torch_npu.npu_moe_token_unpermute(output, row_ids_map, probs=routing_weights)
```
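If the caller-side route mentioned above were taken instead of reverting, a hedged sketch of keeping the full distribution alongside the top-k values might look like the following. The helper name and shapes are illustrative only, not the actual NPUQwen3VLMoeTextSparseMoeBlock.forward:

```python
import torch

def route_tokens(router_logits: torch.Tensor, top_k: int):
    # Illustrative helper, not verl code: keep the full (num_tokens, num_experts)
    # distribution and the top-k values in separate variables instead of
    # overwriting one with the other.
    full_probs = torch.softmax(router_logits, dim=-1)                   # (num_tokens, num_experts)
    topk_probs, router_indices = torch.topk(full_probs, top_k, dim=-1)  # (num_tokens, top_k)
    return full_probs, topk_probs, router_indices

# The PR's experts-side gather would then consume full_probs, while the original
# code path keeps passing topk_probs to torch_npu.npu_moe_token_unpermute.
full_probs, topk_probs, router_indices = route_tokens(torch.randn(4, 8), top_k=2)
```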
What does this PR do?
Checklist Before Starting
- Format the PR title as `[{modules}] {type}: {description}` (this will be checked by the CI).
  - `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data`, `cfg`, `reward`, like `[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
  - If this PR breaks any API, add `[BREAKING]` to the beginning of the title, e.g. `[BREAKING][fsdp, megatron] feat: dynamic batching`
Test
API and Usage Example
```python
# Add code snippet or script demonstrating how to use this
```
Design & Code Changes
Checklist Before Submitting
Important
Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.
- Apply pre-commit checks: `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always`
- Once the PR is ready for CI, request a run in the `ci-request` channel in the `verl` Slack workspace. (If not accessible, please try the Feishu group (飞书群).)
- If this PR changes the `recipe` submodule, please also update the reference to the submodule commit via `git submodule update --remote` or `cd recipe && git pull origin main`.