LoRA training example #13485

Status: Open
Author: @JohannesGaessler

Description

LoRA training should in principle be achievable with the same tools as the full finetune; it's simply not implemented yet. I don't know the exact details of LoRA training, so I will need to read up on them. Investigate the quality differences between the following options (a sketch of the merge step follows the list):

  1. training a LoRA on top of a full-precision model, merging the LoRA, and then quantizing the model vs.
  2. training a LoRA on top of a quantized model and merging it vs.
  3. training a LoRA on top of a quantized model and not merging it.
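For context (my summary, not part of the original issue text): the usual LoRA formulation keeps the base weight W (d x k) frozen and trains two low-rank matrices A (r x k) and B (d x r), so the effective weight is W + (alpha/r) * B * A. "Merging" in the options above means baking that low-rank delta into W. Below is a minimal, self-contained C++ sketch of the merge step; it is illustrative only, does not use the llama.cpp/ggml APIs, and the function name `merge_lora` is made up for this example:

```cpp
// Minimal sketch of LoRA merging, independent of llama.cpp's actual code.
// The merged weight is W' = W + (alpha / r) * B * A, where
// A is r x k and B is d x r, so B*A has the same shape (d x k) as W.
#include <cstdio>
#include <vector>

// Merge a LoRA adapter into a dense weight matrix in place.
// W: d x k (row-major), A: r x k, B: d x r, scale = alpha / r.
static void merge_lora(std::vector<float> & W,
                       const std::vector<float> & A,
                       const std::vector<float> & B,
                       int d, int k, int r, float scale) {
    for (int i = 0; i < d; ++i) {
        for (int j = 0; j < k; ++j) {
            float delta = 0.0f;
            for (int t = 0; t < r; ++t) {
                delta += B[i*r + t] * A[t*k + j]; // (B*A)[i][j]
            }
            W[i*k + j] += scale * delta;
        }
    }
}

int main() {
    const int d = 2, k = 3, r = 1;
    std::vector<float> W = {1, 0, 0,
                            0, 1, 0};          // base weight, 2 x 3
    std::vector<float> A = {0.5f, 0.5f, 0.5f}; // 1 x 3
    std::vector<float> B = {1.0f, 2.0f};       // 2 x 1
    const float alpha = 2.0f;

    merge_lora(W, A, B, d, k, r, alpha / r);

    // prints the merged weight W' = W + 2 * B*A
    for (int i = 0; i < d; ++i) {
        for (int j = 0; j < k; ++j) {
            printf("%6.2f ", W[i*k + j]);
        }
        printf("\n");
    }
    return 0;
}
```

Under this view, option 3 keeps the B*A path as a separate term at inference time, while options 1 and 2 perform the merge above; in option 2 the base W is already quantized, so merging presumably requires a dequantize/requantize round trip, which is one likely source of the quality differences to investigate.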
