model : disable SWA for Phi models #13676

Merged
merged 4 commits into master from gg/kv-cache-fix-phi4-swa on May 21, 2025

Conversation

@ggerganov ggerganov (Member) commented May 21, 2025

fix #13665

It seems there is some confusion in the Phi models' configs: it is not clear what the SWA window is and to which layers it is applied.

As an example, if we look at Phi-3-mini-4k, it says that the sliding window is 2047, but there is no field that specifies the SWA layers. Another example is Phi-3.5-MoE, which has a sliding window equal to the context size - so technically there is no SWA.

So for now I think the simplest solution is to disable SWA for this family of models and at least avoid breaking the latest Phi-4 models, which do not rely on SWA.

@ggerganov ggerganov requested a review from ngxson May 21, 2025 07:24
@ngxson ngxson (Collaborator) commented May 21, 2025

FWIW, many configs of phi-4-mini are hard-coded inside the Python code, so it won't even take the ones from config.json. I had a hard time finding some of the configs in modeling_phi3.py (yes, that is the correct file name 😂) before giving up on adding vision support for it.

Comment on lines -856 to -859
// for backward compatibility ; see: https://github.com/ggerganov/llama.cpp/pull/8931
if ((hparams.n_layer == 32 || hparams.n_layer == 40) && hparams.n_ctx_train == 4096) {
    // default value for Phi-3-mini-4k-instruct and Phi-3-medium-4k-instruct
    LLAMA_LOG_WARN("%s: assuming n_swa = 2047 for Phi-3-mini-4k-instruct and Phi-3-medium-4k-instruct\n", __func__);
@ngxson ngxson (Collaborator) May 21, 2025

Sorry, I didn't see your review request.

Hmm, I think these configs still need to be kept. The original issue talked about phi-4 and not phi-3, and these branches of code were there to make sure the old phi-3 models work correctly.

Probably phi-4 conflicts with this, but I'll have a look. In any case, I think n_swa should still be set (it can't be forced to 0), so that the model can mask out tokens correctly for long sequences.

@ggerganov ggerganov (Member, Author)

The problem is that the n_swa value never does anything when n_swa_pattern == 1, and it has always been like this. So unless I am missing something, these configs didn't do anything.

@ngxson ngxson (Collaborator)

Hmm, OK, it seems like n_swa_pattern = 1 means all layers use SWA. I had a look at modeling_phi3.py and it seems like use_sliding_windows is set to the same value globally.

In any case, I think we can just remove SWA altogether for phi-3/phi-4 if you feel like we are spending too much effort fixing this.

@ggerganov ggerganov (Member, Author)

With our current logic, n_swa_pattern = 1 actually means that none of the layers use SWA:

bool llama_hparams::is_swa(uint32_t il) const {
    if (il < n_layer) {
        return n_swa > 0 && n_swa_pattern > 0 && il % n_swa_pattern < (n_swa_pattern - 1);
    }
    GGML_ABORT("fatal error");
}
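
For illustration, a standalone sketch (not llama.cpp source) that evaluates this predicate with n_swa_pattern == 1: since il % 1 == 0 and (n_swa_pattern - 1) == 0, the comparison is always false, so no layer is SWA no matter what n_swa is set to.

// Standalone sketch, not llama.cpp code: the same predicate, evaluated for
// n_swa_pattern == 1 to show that no layer can be SWA in that case.
#include <cstdint>
#include <cstdio>

static bool is_swa(uint32_t il, uint32_t n_swa, uint32_t n_swa_pattern) {
    return n_swa > 0 && n_swa_pattern > 0 && il % n_swa_pattern < (n_swa_pattern - 1);
}

int main() {
    for (uint32_t il = 0; il < 32; ++il) {
        // il % 1 == 0 and (1 - 1) == 0, so the comparison is always false
        printf("layer %2u: is_swa = %d\n", il, is_swa(il, /*n_swa=*/2047, /*n_swa_pattern=*/1));
    }
    return 0; // prints is_swa = 0 for every layer: n_swa = 2047 has no effect
}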

Some models have a parameter dense_attention_every_n_layers:

https://huggingface.co/microsoft/Phi-3-small-128k-instruct/blob/main/config.json#L18

This probably corresponds to our n_swa_pattern, but the conversion scripts and model loading logic completely ignore this at the moment.
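
Assuming dense_attention_every_n_layers means "every nth layer uses dense (non-SWA) attention", it would map directly onto n_swa_pattern under the existing check; a hypothetical, standalone sketch of that correspondence (the variable names and the example value are illustrative, not llama.cpp code):

// Hypothetical sketch: map dense_attention_every_n_layers onto n_swa_pattern,
// assuming it means "every nth layer uses dense (non-SWA) attention".
#include <cstdint>
#include <cstdio>

int main() {
    const uint32_t dense_attention_every_n_layers = 2; // illustrative value
    const uint32_t n_swa_pattern = dense_attention_every_n_layers;
    for (uint32_t il = 0; il < 8; ++il) {
        // same check as llama_hparams::is_swa() quoted above
        const bool swa = il % n_swa_pattern < (n_swa_pattern - 1);
        printf("layer %u: %s\n", il, swa ? "SWA" : "dense");
    }
    return 0; // with a pattern of 2, layers 1, 3, 5, 7 are dense, the rest SWA
}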

@ngxson ngxson (Collaborator)

OK, I hadn't noticed that dense_attention_every_n_layers. I think for now let's just disable the whole thing (as you are currently doing) and come back to this later. I don't think many users are still using these models, especially for long context, so it's probably not worth our time trying to fix this.

@ggerganov ggerganov (Member, Author)

Yes, I think the only correct way to handle all Phi models is to update the convert scripts to provide information about the dense/SWA layer pattern, which we can then use to create the respective cache. We also have to rework n_swa_pattern in some way to support "all-SWA layers" - maybe n_swa_pattern == 0 could correspond to this (see the sketch after the list):

  • n_swa_pattern == 0: all layers are SWA
  • n_swa_pattern == 1: all layers are non-SWA
  • n_swa_pattern == n: every nth layer is non-SWA
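
A standalone sketch of what the layer check could look like under this proposed convention (hypothetical; the current llama.cpp implementation is the is_swa() quoted above):

// Hypothetical sketch of the proposed n_swa_pattern convention:
//   0 -> all layers SWA, 1 -> all layers non-SWA, n -> every nth layer non-SWA.
// Not the current llama.cpp implementation.
#include <cstdint>
#include <cstdio>

static bool is_swa_proposed(uint32_t il, uint32_t n_swa, uint32_t n_swa_pattern) {
    if (n_swa == 0 || n_swa_pattern == 1) {
        return false;  // SWA disabled, or all layers are dense
    }
    if (n_swa_pattern == 0) {
        return true;   // all layers use the sliding window
    }
    // every n_swa_pattern-th layer is dense (non-SWA), the rest are SWA
    return il % n_swa_pattern != n_swa_pattern - 1;
}

int main() {
    // e.g. n_swa_pattern = 4: layers 3, 7, 11, ... are dense, all others SWA
    for (uint32_t il = 0; il < 8; ++il) {
        printf("layer %u: %s\n", il, is_swa_proposed(il, 2047, 4) ? "SWA" : "dense");
    }
    return 0;
}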

Merging this for now, as I agree it will take more effort to resolve properly and this was incorrect even before the recent SWA changes.

@ngxson ngxson (Collaborator) commented May 21, 2025

OK, so the problem seems to be isolated to phi-4 only. They tricked the transformers code into disabling SWA by setting the sliding window to a value higher than n_ctx_orig: https://huggingface.co/microsoft/Phi-4-multimodal-instruct/blob/main/config.json#L214

So I think the fix in llama.cpp is to check if n_swa >= n_ctx_orig. If that's the case, we disable n_swa. This will make sure phi-3 models with SWA still work correctly.
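
A minimal sketch of that check (hypothetical, not a committed patch; the field names follow the hparams members quoted earlier in this thread, with n_ctx_train standing in for the original context size):

// Hypothetical sketch of the suggested guard: if the declared sliding window
// is at least as large as the original context, SWA is effectively disabled,
// so clear n_swa and treat the model as dense-attention only.
#include <cstdint>
#include <cstdio>

struct hparams_sketch {
    uint32_t n_swa;       // sliding-window size from config.json
    uint32_t n_ctx_train; // original (training) context size
};

int main() {
    hparams_sketch hparams = { /*n_swa=*/262144, /*n_ctx_train=*/131072 }; // illustrative values
    if (hparams.n_swa >= hparams.n_ctx_train) {
        printf("n_swa (%u) >= n_ctx_train (%u) -> disabling SWA\n", hparams.n_swa, hparams.n_ctx_train);
        hparams.n_swa = 0;
    }
    return 0;
}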

@ggerganov ggerganov merged commit b44890d into master May 21, 2025
46 checks passed
@ggerganov ggerganov deleted the gg/kv-cache-fix-phi4-swa branch May 21, 2025 10:09
infil00p pushed a commit to baseweight/llama.cpp that referenced this pull request May 22, 2025
* model : disable SWA for Phi models

ggml-ci

* model : update warning message

* model : print warning only if n_swa > 0

* model : fix typo
Development

Successfully merging this pull request may close these issues.

Eval bug: phi-4 crashes with new versions