
Partial offload support for training #13486

Open
@JohannesGaessler

Description

Right now the finetuning example seems to work correctly only for CPU-only training or with the maximum number of GPU layers offloaded. But in principle it should be possible to use the same partial offloading logic that is already used for prompt processing to accelerate the training of models that need more memory than there is VRAM available.
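
For illustration only, here is a minimal CUDA sketch of the general idea (this is not llama.cpp code and the toy layer, buffer names, and sizes are all hypothetical): all layer weights stay in host memory, and before each layer's forward and backward compute its weights are copied into a single reusable device buffer, so a model larger than VRAM can still run its per-layer math on the GPU.

```cuda
// Conceptual sketch, not llama.cpp internals: stream one layer's weights at a
// time from pinned host memory into a single device buffer during training.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Toy "layer": y = w * x (elementwise); the weight gradient is dw = dy * x.
__global__ void forward(const float* w, const float* x, float* y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = w[i] * x[i];
}

__global__ void backward(const float* x, const float* dy, float* dw, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) dw[i] = dy[i] * x[i];
}

int main() {
    const int n_layers = 8;       // assume more layers than fit in VRAM at once
    const int n = 1 << 20;        // weights per layer
    const size_t bytes = n * sizeof(float);

    // All weights live in pinned host memory.
    std::vector<float*> h_w(n_layers);
    for (int l = 0; l < n_layers; ++l) {
        cudaMallocHost(&h_w[l], bytes);
        for (int i = 0; i < n; ++i) h_w[l][i] = 1.0f;
    }

    // Only one layer's weights/activations/gradients are resident on the GPU.
    float *d_w, *d_x, *d_y, *d_dw;
    cudaMalloc(&d_w, bytes);
    cudaMalloc(&d_x, bytes);
    cudaMalloc(&d_y, bytes);
    cudaMalloc(&d_dw, bytes);
    cudaMemset(d_x, 0, bytes);    // dummy input activations

    const int threads = 256, blocks = (n + threads - 1) / threads;

    // Forward pass: copy each layer's weights in just before its compute.
    for (int l = 0; l < n_layers; ++l) {
        cudaMemcpy(d_w, h_w[l], bytes, cudaMemcpyHostToDevice);
        forward<<<blocks, threads>>>(d_w, d_x, d_y, n);
    }

    // Backward pass: weights are streamed in again; d_y stands in for the
    // incoming gradient and the weight gradient is computed on the GPU.
    for (int l = n_layers - 1; l >= 0; --l) {
        cudaMemcpy(d_w, h_w[l], bytes, cudaMemcpyHostToDevice);
        backward<<<blocks, threads>>>(d_x, d_y, d_dw, n);
        // An optimizer step on d_dw (or a host copy of it) would go here.
    }

    cudaDeviceSynchronize();
    printf("done\n");

    cudaFree(d_w); cudaFree(d_x); cudaFree(d_y); cudaFree(d_dw);
    for (int l = 0; l < n_layers; ++l) cudaFreeHost(h_w[l]);
    return 0;
}
```

In practice the copies would be overlapped with compute (streams, double buffering), which is essentially what the existing partial-offload path does for prompt processing; the point of the sketch is only that the same weight-streaming pattern applies to the backward pass as well.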
