[Feat] Fast-dLLM-v2 Decoding Strategy Support #16
drewjin wants to merge 8 commits into SJTU-DENG-Lab:feat/fast-dllm-v2 from
Conversation
…ctory; update GitHub workflows to grant write permissions for issues and pull requests
[Fix]: Fix PR Bot Workflow Bug
👋 Hi! Thank you for contributing to the Diffulex project. Please remember to run … We appreciate you taking this step! Our team will review your contribution, and we look forward to your awesome work! 🚀
…ribution guidelines
… modify uvicorn index URL, and improve error handling in attention module; remove unused profiling function from example scripts
Motivation
Description
This Pull Request introduces support for the Fast-dLLM-v2 (`fdv2`) native inference strategy. Building upon the existing implementation of Block Diffusion (`bd`), we are now integrating the `fdv2` strategy, which features a specialized Intra-block Dual-Cache mechanism.
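For intuition, here is a minimal sketch of what an intra-block dual-cache container could look like. The class name `IntraBlockDualCache` and the fields `block_kv` / `sub_block_kv` are illustrative assumptions for this sketch only, not the actual Diffulex API:

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

import torch


@dataclass
class IntraBlockDualCache:
    """Illustrative two-level cache container (names are assumptions).

    `block_kv` holds KV entries of blocks that have finished decoding, while
    `sub_block_kv` holds the in-flight entries of the block currently being
    refined; committing a block promotes its entries to the block level.
    """

    block_kv: Dict[int, torch.Tensor] = field(default_factory=dict)
    sub_block_kv: Optional[torch.Tensor] = None

    def commit_block(self, block_id: int) -> None:
        """Promote the in-flight sub-block entries to the block-level cache."""
        if self.sub_block_kv is not None:
            self.block_kv[block_id] = self.sub_block_kv
            self.sub_block_kv = None
```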
The implementation involves a hierarchical state management system, which increases the complexity of the orchestration logic (refer to the attached diagram):
- `ACTIVE`, `TO_CACHE`, and `IN_CACHE`.
- `CACHING` and `DECODING`.

Key Scheduling Logic:
The state management significantly alters the scheduling behavior compared to traditional methods.
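As a rough illustration, the sketch below models the two state groups with plain enums, assuming `ACTIVE`/`TO_CACHE`/`IN_CACHE` are block-level states and `CACHING`/`DECODING` are sequence-level phases; the transition helper is a hypothetical simplification, not the engine's actual scheduler:

```python
from enum import Enum, auto


class BlockState(Enum):
    """Assumed block-level states from the list above."""
    ACTIVE = auto()     # block is currently being decoded
    TO_CACHE = auto()   # block finished decoding, awaiting a cache commit
    IN_CACHE = auto()   # block's KV entries now live in the dual cache


class SequencePhase(Enum):
    """Assumed sequence-level phases from the list above."""
    CACHING = auto()    # sequence is committing finished blocks to the cache
    DECODING = auto()   # sequence is decoding its current active block


def step_block(state: BlockState, block_done: bool, committed: bool) -> BlockState:
    """Hypothetical transition rule: ACTIVE -> TO_CACHE -> IN_CACHE."""
    if state is BlockState.ACTIVE and block_done:
        return BlockState.TO_CACHE
    if state is BlockState.TO_CACHE and committed:
        return BlockState.IN_CACHE
    return state
```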
TODO List
- `fdv2` strategy engine.
- `fdv2` attention metadata.
- `fdv2` attention kernels for the new strategy.