ggml-cpu: add RVV repack GEMM and GEMV for quantization types #6
Summary
This PR adds repacking functions and GEMM/GEMV kernels for several quantization types (Q2_K, Q4_0, Q4_K, IQ4_NL, Q8_0) for RVV (VLEN=256).
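The kernels themselves are not reproduced in this description. As rough orientation only, the sketch below shows the general shape of an RVV-intrinsics dot-product loop of the kind a GEMV kernel vectorizes; it uses plain f32 data rather than the quantized block formats, and the names `rvv_dot_f32` / `gemv_f32` are illustrative, not symbols from this PR.

```c
// Illustrative sketch only: f32 GEMV via an RVV intrinsics dot product.
// The real kernels operate on repacked quantized blocks; this shows the loop shape.
#include <riscv_vector.h>
#include <stddef.h>

static float rvv_dot_f32(const float *a, const float *b, size_t n) {
    // running sum lives in element 0 of a 1-register accumulator
    vfloat32m1_t acc = __riscv_vfmv_v_f_f32m1(0.0f, 1);
    size_t vl;
    for (size_t i = 0; i < n; i += vl) {
        vl = __riscv_vsetvl_e32m8(n - i);                      // strip-mine to the hardware VLEN
        vfloat32m8_t va   = __riscv_vle32_v_f32m8(a + i, vl);
        vfloat32m8_t vb   = __riscv_vle32_v_f32m8(b + i, vl);
        vfloat32m8_t prod = __riscv_vfmul_vv_f32m8(va, vb, vl);
        // fold the partial products into the running sum (clear but not the fastest scheme)
        acc = __riscv_vfredusum_vs_f32m8_f32m1(prod, acc, vl);
    }
    return __riscv_vfmv_f_s_f32m1_f32(acc);
}

// y[r] = dot(A row r, x): the scalar GEMV shape the vector kernel replaces
static void gemv_f32(const float *A, const float *x, float *y,
                     size_t nrows, size_t ncols) {
    for (size_t r = 0; r < nrows; ++r) {
        y[r] = rvv_dot_f32(A + r * ncols, x, ncols);
    }
}
```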
Key Changes
- `quantize_mat` for 4x8 and 4x1 (scalar) tiles; the repacked layout is sketched below
Tile Sizes
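A minimal sketch of the repacking idea, assuming hypothetical `block_q8_0` / `block_q8_0x4` structs that mirror ggml's block layout only conceptually: four consecutive rows are interleaved block-by-block, so the tiled kernel can load the same block index of all four rows from contiguous memory. The actual interleaving granularity and field layout in this PR may differ.

```c
// Illustrative repack sketch (not this PR's code): interleave 4 rows of
// Q8_0-style blocks so one contiguous read serves a 4-row tile.
#include <stdint.h>
#include <string.h>

#define QK8_0 32

typedef struct { uint16_t d;    int8_t qs[QK8_0];     } block_q8_0;   // fp16-bits scale + 32 quants (assumed layout)
typedef struct { uint16_t d[4]; int8_t qs[4 * QK8_0]; } block_q8_0x4; // 4 rows interleaved per block index

// Repack nblocks block-columns from 4 consecutive source rows into the x4 layout.
static void repack_q8_0_4rows(const block_q8_0 *rows[4], block_q8_0x4 *dst, int nblocks) {
    for (int b = 0; b < nblocks; ++b) {
        for (int r = 0; r < 4; ++r) {
            dst[b].d[r] = rows[r][b].d;
            memcpy(dst[b].qs + r * QK8_0, rows[r][b].qs, QK8_0);
        }
    }
}
```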
Testing
The kernels were functionally tested on QEMU at VLEN=128 and VLEN=256 across a range of input sizes.
Benchmarking Results
End-to-end benchmarking on a Banana Pi BPI-F3 (VLEN=256) covered prompt processing and token generation for Q2_K, Q4_0, Q4_K, IQ4_NL, and Q8_0.
Future Work
Subsequent PRs will extend these kernels to other VLENs.