Replies: 1 comment 1 reply
Hi! Could you open a PR / issue to add the Qasper dataset?
One thing specific to SnapKV is that it performs really well when the
question is included in the context (see the "with question" dashed line in
the README of the evaluation directory). SnapKV retrieves KV pairs based on
the latest tokens, which often contain... the question.
Have you excluded the question from the context in your comparison? This
is how we report results in kvpress, to ensure the compression is
independent from whatever comes after it.
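To illustrate why including the question matters, here is a minimal sketch of SnapKV-style selection (not the kvpress implementation; function name, shapes, and parameters are assumptions for illustration): each past position is scored by the attention it receives from the last `window` tokens, and only the top-k scoring KV pairs are kept. If the question sits in that observation window, the retained KV pairs are biased toward question-relevant context.

```python
# Hedged sketch of SnapKV-style KV selection, NOT the kvpress implementation.
# Score each prefix position by the softmax attention it receives from the
# last `window` query tokens, then keep the top-k prefix positions plus the
# window itself.
import numpy as np

def snapkv_select(queries, keys, window=4, top_k=8):
    """queries, keys: (seq_len, d) arrays. Returns sorted indices of KV pairs to keep."""
    seq_len, d = keys.shape
    obs = queries[-window:]                        # observation window (often contains the question)
    scores = obs @ keys.T / np.sqrt(d)             # (window, seq_len) attention logits
    scores = np.exp(scores - scores.max(axis=-1, keepdims=True))
    scores /= scores.sum(axis=-1, keepdims=True)   # softmax over key positions, per query
    votes = scores[:, : seq_len - window].sum(axis=0)  # aggregate scores for prefix positions
    keep = np.argsort(votes)[-top_k:]              # top-k most-attended prefix positions
    return np.sort(np.concatenate([keep, np.arange(seq_len - window, seq_len)]))
```

The point of the maintainer's remark follows directly from this mechanism: if the question is inside the observation window, the scoring is effectively a retrieval query for question-relevant KV pairs, which flatters SnapKV on QA benchmarks unless the question is excluded from the compressed context.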
I am uncertain whether the issue stems from my deployment environment or from variations in the input data. However, I have observed that, although the average input length is also roughly 4k, the performance of expected_attention on the Qasper dataset is significantly inferior to that of SnapKV. I would greatly appreciate it if we could cross-validate these results to ensure accuracy and consistency.