model : disable SWA for Phi models #13676
Conversation
FWIW, many configs of phi-4-mini are hard-coded inside the Python code, so it won't even take the one from config.json.
// for backward compatibility ; see: https://github.com/ggerganov/llama.cpp/pull/8931
if ((hparams.n_layer == 32 || hparams.n_layer == 40) && hparams.n_ctx_train == 4096) {
    // default value for Phi-3-mini-4k-instruct and Phi-3-medium-4k-instruct
    LLAMA_LOG_WARN("%s: assuming n_swa = 2047 for Phi-3-mini-4k-instruct and Phi-3-medium-4k-instruct\n", __func__);
    hparams.n_swa = 2047;
}
Sorry, I didn't see your review request.
Hmm, I think these configs still need to be kept. The original issue was about phi-4, not phi-3, and these code branches were added to make sure old Phi-3 models work correctly.
Phi-4 probably conflicts with this, but I'll have a look. Anyway, I think n_swa should still be set (it can't be forced to 0), so that the model can mask out tokens correctly for long sequences.
The problem is that the n_swa value never does anything when n_swa_pattern == 1, and it has always been like this. So unless I am missing something, these configs never did anything.
Hmm, OK, it seemed to me like n_swa_pattern = 1 means all layers use SWA. I had a look at modeling_phi3.py and it seems like use_sliding_windows is set to the same value globally.
Anyway, I think we can just remove SWA altogether for Phi-3/Phi-4 if you feel we've spent too much effort fixing this.
With our current logic, n_swa_pattern = 1 actually means that none of the layers use SWA (llama.cpp/src/llama-hparams.cpp, lines 73 to 79 at 3398305):
bool llama_hparams::is_swa(uint32_t il) const {
    if (il < n_layer) {
        return n_swa > 0 && n_swa_pattern > 0 && il % n_swa_pattern < (n_swa_pattern - 1);
    }
    GGML_ABORT("fatal error");
}
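To make this concrete, here is a minimal standalone sketch (the helper name is hypothetical, not part of llama.cpp) showing why the predicate can never hold when n_swa_pattern == 1: il % 1 is always 0, and 0 < 0 is false for every layer, regardless of n_swa.

```cpp
#include <cstdint>
#include <cstdio>

// Same predicate as llama_hparams::is_swa(), extracted for demonstration.
static bool is_swa_sketch(uint32_t il, uint32_t n_swa, uint32_t n_swa_pattern) {
    return n_swa > 0 && n_swa_pattern > 0 && il % n_swa_pattern < (n_swa_pattern - 1);
}

int main() {
    // With n_swa_pattern == 1, `il % 1 < 0` never holds, so no layer is
    // treated as SWA no matter what n_swa is set to.
    for (uint32_t il = 0; il < 4; ++il) {
        printf("il=%u -> is_swa=%d\n", il, is_swa_sketch(il, 2047, 1)); // prints 0
    }
    return 0;
}
```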
Some models have a parameter dense_attention_every_n_layers:
https://huggingface.co/microsoft/Phi-3-small-128k-instruct/blob/main/config.json#L18
This probably corresponds to our n_swa_pattern, but the conversion scripts and the model loading logic completely ignore it atm.
OK, I hadn't noticed that dense_attention_every_n_layers. I think for now let's just disable the whole thing (as you are currently doing) and come back to this later. I don't think many users are still using these models, especially for long context, so it's probably not worth our time trying to fix this.
Yes, I think the only correct way to handle all Phi models is to update the convert scripts to provide information about the dense/SWA layer pattern, which we can then use to create the respective cache. We also have to rework n_swa_pattern in some way to support "all-SWA layers" - maybe n_swa_pattern == 0 could correspond to this (see the sketch after this list):
- n_swa_pattern == 0: all layers are SWA
- n_swa_pattern == 1: all layers are non-SWA
- n_swa_pattern == n: every nth layer is non-SWA
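A hedged sketch of how these proposed semantics could be encoded (the function name and placement are assumptions for illustration, not the actual llama.cpp implementation):

```cpp
#include <cstdint>

// Sketch only: the proposed n_swa_pattern semantics from the list above.
//   n_swa_pattern == 0 : all layers are SWA
//   n_swa_pattern == 1 : all layers are non-SWA
//   n_swa_pattern == n : every nth layer is non-SWA
bool is_swa_proposed(uint32_t il, uint32_t n_swa, uint32_t n_swa_pattern) {
    if (n_swa == 0) {
        return false; // SWA disabled entirely
    }
    if (n_swa_pattern == 0) {
        return true;  // all layers use SWA
    }
    // Every n_swa_pattern-th layer is dense (non-SWA); the rest use SWA.
    // With n_swa_pattern == 1 this is always false, i.e. all layers non-SWA.
    return il % n_swa_pattern < (n_swa_pattern - 1);
}
```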
Merging this for now, as I agree it would take more effort to resolve properly, and this was incorrect even before the recent SWA changes.
OK, so the problem seems to be isolated to Phi-4 only. They tricked the transformers code into disabling SWA by setting the sliding window to a value higher than n_ctx_orig: https://huggingface.co/microsoft/Phi-4-multimodal-instruct/blob/main/config.json#L214 So I think the fix in llama.cpp is to check whether n_swa is at least as large as the original context and, if so, disable SWA.
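A minimal sketch of that check, assuming the training context is available as hparams.n_ctx_train (hypothetical placement; the actual fix may differ):

```cpp
// Sketch: mirror the transformers behavior by treating a sliding window
// that covers the whole training context as "SWA disabled".
if (hparams.n_swa >= hparams.n_ctx_train) {
    hparams.n_swa = 0; // the window never masks anything, so drop SWA
}
```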
* model : disable SWA for Phi models ggml-ci
* model : update warning message
* model : print warning only if n_swa > 0
* model : fix typo
fix #13665
It seems that there is some confusion with the Phi models' configs. It is not clear what the SWA window is and to which layers it is applied.
As an example, if we look at Phi-3-mini-4k, it says that the sliding window is 2047, but there is no field that specifies the SWA layers. Another example is Phi-3.5-MoE, which has a sliding window equal to the context size - so technically there is no SWA.
So for now I think the simplest solution is to disable SWA for this family of models, which at least avoids breaking the latest Phi-4 models that do not rely on SWA.
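Based on the commit list above, the merged change presumably reduces to something like this sketch (exact placement and wording are assumptions, not the verbatim diff):

```cpp
// Sketch: disable SWA for the Phi family, warning only when a non-zero
// window was actually configured (per "print warning only if n_swa > 0").
if (hparams.n_swa > 0) {
    LLAMA_LOG_WARN("%s: disabling SWA for Phi models (n_swa was %u)\n", __func__, hparams.n_swa);
}
hparams.n_swa = 0;
```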