Description
Prerequisites
- I am running the latest code. Mention the version if possible as well.
- I carefully followed the README.md.
- I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- I reviewed the Discussions, and have a new and useful enhancement to share.
Feature Description
Update the README.md with a note on how to determine the best tensors to place on a GPU for a MoE model, using the llama-gguf tool.
Motivation
./build/bin/llama-gguf /path/to/model.gguf r n
(r: read, n: no check of tensor data)
It can be combined with an awk/sort one-liner to list tensors sorted by decreasing size, then by name:
./build/bin/llama-gguf /path/to/model.gguf r n \
| awk '/read_0.+size =/ { gsub(/[=,]+/, "", $0); print $6, $4 }' \
| sort -k1,1rn -k2,2 \
| less
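To illustrate the sort step in isolation, here is a minimal sketch on hypothetical "size name" lines (the exact field layout of llama-gguf's output may differ by version; the tensor names and sizes below are made up):

```shell
# Sample lines in the "size name" shape the awk stage emits (values are hypothetical).
# -k1,1rn: numeric reverse sort on the size column; -k2,2: tie-break on name.
printf '%s\n' \
  '1048576 blk.0.ffn_up_exps.weight' \
  '4096 blk.0.attn_norm.weight' \
  '1048576 blk.0.ffn_gate_exps.weight' \
  '65536 output_norm.weight' \
| sort -k1,1rn -k2,2
```

The largest tensors end up at the top, with equal-sized tensors grouped alphabetically, which is the view you want when deciding what to pin to GPU 0.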
I see testing emerging among GPU-poor folks running large MoEs on modest hardware suggesting that placing the biggest tensor layers on GPU 0 via the --override-tensor flag is best practice for speed.
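For context, a sketch of how the flag is typically used (the model path and regex here are hypothetical examples, not a recommendation; consult the llama.cpp docs for the exact --override-tensor syntax in your build):

```shell
# Hypothetical: keep the large per-expert FFN tensors on CPU so the rest fits on GPU 0.
./build/bin/llama-server -m /path/to/model.gguf -ngl 99 \
  --override-tensor "ffn_.*_exps\.=CPU"
```

The point of the README note would be that the llama-gguf size listing tells you which tensor name patterns are worth overriding in the first place.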
Possible Implementation
No response