LoRA training should in principle be achievable with the same tools as the full finetune; it's simply not implemented yet. I don't know the exact details of LoRA training, so I will need to read up on them. Investigate the quality difference between the following setups (a rough sketch of the underlying merge math follows the list):
- training a LoRA on top of a full-precision model, merging the LoRA, and then quantizing the model vs.
- training a LoRA on top of a quantized model and merging it vs.
- training a LoRA on top of a quantized model and not merging it.
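For reference, merging a LoRA is just adding the scaled low-rank product back into the base weight. Below is a minimal numpy sketch of that math and of the three orderings listed above; the shapes, rank, scaling factor, and the toy round-to-grid "quantizer" are placeholder assumptions for illustration, not anything implemented here.

```python
# Minimal sketch of the LoRA update W' = W + (alpha/r) * B @ A and the three
# merge/quantize orderings from the list above. All values are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 4.0

W = rng.normal(size=(d_out, d_in)).astype(np.float32)          # base weight
A = rng.normal(scale=0.01, size=(r, d_in)).astype(np.float32)  # LoRA down-projection
B = rng.normal(scale=0.01, size=(d_out, r)).astype(np.float32) # LoRA up-projection (pretend trained)

def quantize(w, step=0.05):
    """Toy uniform round-to-grid quantizer standing in for a real quant scheme."""
    return np.round(w / step) * step

x = rng.normal(size=(1, d_in)).astype(np.float32)
delta = (alpha / r) * (B @ A)                                   # low-rank update

# 1) train on the full-precision base, merge, then quantize the merged weight
y1 = x @ quantize(W + delta).T
# 2) train on the quantized base, then merge the adapter into it
y2 = x @ (quantize(W) + delta).T
# 3) train on the quantized base, keep the adapter separate at inference
y3 = x @ quantize(W).T + (x @ A.T) @ B.T * (alpha / r)

print(y1 - y2)  # differs: quantization error is taken before vs. after merging
print(y2 - y3)  # identical up to float rounding: merging and applying separately are the same math
```

The point of the sketch is that options 2 and 3 are mathematically equivalent apart from rounding, so the interesting quality comparison is really between quantizing the merged weights (option 1) and training/applying the adapter against an already-quantized base (options 2 and 3).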