Add Attention sink in export #215

Open

kirklandsign wants to merge 2 commits into huggingface:main from kirklandsign:attention-sink

Conversation

@kirklandsign

No description provided.

kirklandsign and others added 2 commits February 19, 2026 17:42
Introduce CustomRingKVCacheWithSink and ETCustomAttentionSinkCache that
preserve the first sink_size tokens while using a ring buffer for the
remaining window. Add get_custom_sdpa_for_attention_sink to build
per-layer attention masks with sink token preservation. Wire the
attention_sink parameter through replace_with_et_custom_kv_cache.

Co-authored-by: Claude <noreply@anthropic.com>
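For context, here is a minimal sketch of the slot-mapping idea behind a sink-aware ring KV cache: the first `sink_size` tokens keep fixed cache slots and are never evicted, while every later token cycles through a ring buffer of `window_size` slots. This is an illustration under those assumptions, not the code added by this PR; the function name and signature below are hypothetical.

```python
def cache_slot(pos: int, sink_size: int, window_size: int) -> int:
    """Map an absolute token position to a slot in a sink + ring KV cache.

    Hypothetical helper for illustration only; it is not part of
    CustomRingKVCacheWithSink or ETCustomAttentionSinkCache.
    """
    if pos < sink_size:
        # Sink tokens occupy the first slots and are never overwritten.
        return pos
    # All later tokens share a ring buffer placed after the sink region,
    # so only the most recent window_size of them stay in the cache.
    return sink_size + (pos - sink_size) % window_size
```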
Register a dedicated custom_sdpa_attention_sink attention implementation
when the attention_sink option is provided, with priority over the
existing ring KV cache SDPA path. Pass attention_sink through to the
cache setup at export time.
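To illustrate what sink token preservation means for the attention mask, here is a hedged sketch of a per-position visibility rule under the same assumptions (sink tokens always visible, only the last `window_size` non-sink keys retained); it is not the `_create_causal_mask_for_attention_sink` helper itself.

```python
def can_attend(q_pos: int, k_pos: int, sink_size: int, window_size: int) -> bool:
    """Illustrative attention-sink visibility rule (hypothetical helper)."""
    if k_pos > q_pos:
        # Causal masking: never attend to future positions.
        return False
    if k_pos < sink_size:
        # Sink tokens are kept in the cache, so they stay attendable.
        return True
    # Non-sink keys survive only while they are within the ring window.
    return q_pos - k_pos < window_size
```

A full mask would apply this rule over the query/key position grid for each layer.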
try:
    from executorch.examples.models.llama.source_transformation.attention_sink import (
        CachePositionsManagerWithSink,
        _create_causal_mask_for_attention_sink,
    )
except ImportError:
    # Hypothetical completion; these helpers come from a pending ExecuTorch-side change.
    raise
@kirklandsign (Author)


Will need to wait for the ExecuTorch-side change to merge first.

