ggml: aarch64: Implement SVE Kernels for Int 8 Quantization #14117
base: master
Conversation
The CI failures seem CMake-related and are also occurring in other PRs. Since I haven't modified any CMake files, they don't appear to be caused by this PR.
@Vithulep There seems to be one failure related to …; not sure if it's directly related, but it might be.
ggml/src/ggml-quants.c
@@ -340,20 +340,37 @@ void dequantize_row_q5_1(const block_q5_1 * GGML_RESTRICT x, float * GGML_RESTRICT y, int64_t k) {
    }
}

// SVE support added for scalar implementation
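For reference, a minimal sketch of what a vector-length-agnostic SVE dequantization kernel for this path can look like. This is not the PR's actual code: QK8_0 and block_q8_0 follow ggml's Q8_0 definitions, with __fp16 standing in for ggml_fp16_t.

```c
#include <arm_sve.h>
#include <stdint.h>

#define QK8_0 32                  // values per Q8_0 block, as in ggml

typedef struct {
    __fp16 d;                     // per-block scale (delta)
    int8_t qs[QK8_0];             // quantized values
} block_q8_0;

// Sketch only: dequantize k values (k assumed a multiple of QK8_0).
void dequantize_row_q8_0_sve(const block_q8_0 *x, float *y, int64_t k) {
    const int64_t nb = k / QK8_0;
    for (int64_t i = 0; i < nb; ++i) {
        const float d = (float) x[i].d;
        // Vector-length-agnostic loop: svcntw() 32-bit lanes per step.
        for (int64_t j = 0; j < QK8_0; j += (int64_t) svcntw()) {
            const svbool_t    pg = svwhilelt_b32_s32((int32_t) j, QK8_0);
            const svint32_t   qi = svld1sb_s32(pg, x[i].qs + j);  // int8 -> int32
            const svfloat32_t qf = svcvt_f32_s32_x(pg, qi);       // int32 -> float
            svst1_f32(pg, y + i*QK8_0 + j, svmul_n_f32_x(pg, qf, d));
        }
    }
}
```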
Dequantization from Q8_0 is not performed during inference, so it's not as time-sensitive (hence the plain scalar code, which is also simpler to maintain). Only quantization of the intermediate tensors matrix-multiplied with types having Q8_0 as their vec_dot_type is exercised during the perplexity and speed benchmarks you've shared. Did you test the dequantization changes for correctness outside of inference?
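For contrast, here is a scalar sketch of the direction that is hot during inference, i.e. quantizing a float activation row to Q8_0 before the dot-product kernels run. It is simplified from ggml's reference quantizer and reuses the assumed block_q8_0/QK8_0 definitions from the sketch above.

```c
#include <math.h>

// Sketch of float -> Q8_0 quantization using absmax scaling,
// as in ggml's reference implementation (simplified).
void quantize_row_q8_0_sketch(const float *x, block_q8_0 *y, int64_t k) {
    const int64_t nb = k / QK8_0;
    for (int64_t i = 0; i < nb; ++i) {
        float amax = 0.0f;                        // absolute max of the block
        for (int j = 0; j < QK8_0; ++j) {
            const float v = fabsf(x[i*QK8_0 + j]);
            if (v > amax) amax = v;
        }
        const float d  = amax / 127.0f;           // scale so values fit int8
        const float id = d != 0.0f ? 1.0f/d : 0.0f;
        y[i].d = (__fp16) d;
        for (int j = 0; j < QK8_0; ++j) {
            y[i].qs[j] = (int8_t) roundf(x[i*QK8_0 + j] * id);
        }
    }
}
```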
The dequantize_row_q8_0() kernel is called during inference, but only a very small number of times: one call per generated token. Hence its effect can't be seen in the speedup.
> dequantize_row_q8_0() kernel is called during inference, but only a very small number of times: one call per generated token.
Ah yes, I had forgotten that ggml_get_rows dequantizes what it extracts. It's used when converting tokens to embeddings at the beginning of the model graph. It's not really a bottleneck, though.
Thanks, I was wrong about it not being called.
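To illustrate why this is not a bottleneck: a get_rows-style lookup dequantizes one embedding row per token id, so single-token generation costs one dequantize call per step. A hedged sketch follows (the names and signature are illustrative, not ggml's actual API; it reuses the dequantize sketch above).

```c
// Illustrative only: one dequantize call per extracted row (per token).
void get_rows_q8_0_sketch(const block_q8_0 *src, const int32_t *ids,
                          int64_t n_ids, int64_t row_len, float *dst) {
    const int64_t blocks_per_row = row_len / QK8_0;
    for (int64_t i = 0; i < n_ids; ++i) {
        dequantize_row_q8_0_sve(src + ids[i] * blocks_per_row,
                                dst + i * row_len, row_len);
    }
}
```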
Just curious: since there's no performance gain (there even seems to be a slight drop) with the SVE version, why replace the NEON version?
This PR adds SVE kernel support for the Int8 datatype, specific to the ARM architecture.
Major code changes: SVE implementations for the Q8_0 (Int8) kernels in ggml/src/ggml-quants.c (e.g. dequantize_row_q8_0).
Performance
Performance remained nearly the same before and after these changes: the PR introduces an SVE intrinsic implementation for the Mamba Int8 path that achieves performance comparable to the existing implementation.
Task 1: Prompt length: 128 tokens, generated tokens: 1
Task 2: Prompt length: 1024 tokens, generated tokens: 1
Task 3: Prompt length: 8192 tokens, generated tokens: 1
The command used to measure the performance is:
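A representative invocation with llama.cpp's llama-bench tool is sketched below; the model path and flags are assumptions for illustration, not necessarily the exact command the author ran.

```sh
# Hypothetical example: -p sets the prompt lengths, -n the number of generated tokens.
./llama-bench -m model-q8_0.gguf -p 128,1024,8192 -n 1
```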
Perplexity
There is no change in model accuracy as a result of this PR. A summary is below.