Name and Version
I'm encountering an issue with llama.cpp after upgrading to CUDA 12.8. My setup includes an RTX A6000 Pro (Blackwell) and an A100 80GB GPU. The A6000 works fine with CUDA_VISIBLE_DEVICES=0, but switching to CUDA_VISIBLE_DEVICES=1 for the A100 makes the application crash.
I went back and compiled for all architectures, then tried compute capabilities 80 and 120 explicitly. The Blackwell card works, but the A100 (even though it shows up fine in nvidia-smi) fails at startup. Even running llama-cli --help prints this error, and it always falls back to CPU.
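To help isolate whether this is a driver/runtime problem or something specific to llama.cpp, here is a minimal standalone probe that tries to create a context on each visible device directly via the CUDA runtime API (file name and compile command are just a suggestion, e.g. `nvcc probe.cu -o probe`):

```cuda
// Minimal probe: initialize each visible CUDA device independently of
// llama.cpp, to see whether plain cudaSetDevice/cudaFree already
// reproduces the "initialization error" for device 1 (the A100).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    cudaError_t err = cudaGetDeviceCount(&n);
    if (err != cudaSuccess) {
        printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("found %d device(s)\n", n);
    for (int i = 0; i < n; i++) {
        cudaDeviceProp prop;
        err = cudaGetDeviceProperties(&prop, i);
        if (err != cudaSuccess) {
            printf("device %d: cudaGetDeviceProperties failed: %s\n",
                   i, cudaGetErrorString(err));
            continue;
        }
        err = cudaSetDevice(i);
        // cudaFree(0) forces context creation on the selected device
        if (err == cudaSuccess) err = cudaFree(0);
        printf("device %d (%s, sm_%d%d): %s\n", i, prop.name,
               prop.major, prop.minor,
               err == cudaSuccess ? "OK" : cudaGetErrorString(err));
    }
    return 0;
}
```

If this probe also fails on device 1, the problem is below llama.cpp (driver/runtime/device state) rather than in the ggml CUDA backend.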
Version:
ggml_cuda_init: failed to initialize CUDA: initialization error
version: 5400 (c6a2c9e7)
built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
Operating systems
Linux
GGML backends
CUDA
Hardware
RTX A6000 Pro (Blackwell)
A100 80GB
Models
Any model
Problem description & steps to reproduce
Compile llama.cpp with CUDA 12.8 and run on an A100: CUDA fails to initialize at startup.
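For reference, the build was along these lines (the exact architecture list is an assumption matching the "all arches, then 80 and 120" attempts described above; see docs/build.md for the documented flags):

```shell
# Build llama.cpp with the CUDA backend for both Ampere (A100, sm_80)
# and Blackwell (sm_120) explicitly.
cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES="80;120"
cmake --build build --config Release
```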
First Bad Commit
No response
Relevant log output
Cuda 12.8
NVIDIA-SMI 570.133.20 Driver Version: 570.133.20
Ubuntu 22.04
./lama.cpp/build/bin/llama-cli -hf lmstudio-community/Qwen3-4B-GGUF --n-gpu-layers 99 --ctx-size 32000
ggml_cuda_init: failed to initialize CUDA: initialization error
warning: no usable GPU found, --gpu-layers option will be ignored
warning: one possible reason is that llama.cpp was compiled without GPU support
warning: consult docs/build.md for compilation instructions