llama.cpp full-cuda--b1-f675b20 (Public, Latest)

Install from the command line
$ docker pull ghcr.io/puwei0000/llama.cpp:full-cuda--b1-f675b20
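A minimal run sketch follows. The image name is taken from the pull command above; the entrypoint flags (--run, -m, -p, -n, -ngl) assume this image keeps upstream llama.cpp's full-image tools.sh entrypoint, and the model path is a placeholder, so adjust both for your setup. Passing --gpus all requires the NVIDIA Container Toolkit on the host.

$ docker run --gpus all -v "$PWD/models":/models \
    ghcr.io/puwei0000/llama.cpp:full-cuda--b1-f675b20 \
    --run -m /models/your-model.gguf -p "Hello" -n 64 -ngl 99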

Details

Last published: 12 months ago
Total downloads: 120