Description
Right now the finetuning example seems to work correctly only for CPU-only training or with the maximum number of GPU layers offloaded. In principle, however, it should be possible to reuse the partial offloading logic that is already used for prompt processing to accelerate the training of models that need more memory than there is VRAM available.
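For illustration, a minimal sketch of what requesting partial offloading for training could look like from the user side, assuming recent llama.cpp C API names (`llama_model_default_params`, `llama_model_load_from_file`) and that the finetuning example would wire `n_gpu_layers` through to model loading the same way the main example does; the training setup itself is omitted and the chosen layer count is purely hypothetical:

```cpp
#include "llama.h"

int main() {
    llama_backend_init();

    llama_model_params mparams = llama_model_default_params();
    // Hypothetical partial offload: today the finetuning example is reported to
    // train correctly only with 0 layers (CPU-only) or all layers on the GPU.
    mparams.n_gpu_layers = 16;

    llama_model * model = llama_model_load_from_file("model.gguf", mparams);
    if (model == nullptr) {
        return 1;
    }

    // ... set up the training context and optimizer here (omitted) ...

    llama_model_free(model);
    llama_backend_free();
    return 0;
}
```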