
Compile bug: ld returned 1 exit status (file bigger than 2gb) #13446

Open
@Z841973620

Description


Git commit

62d4250

Operating systems

Linux

GGML backends

CUDA

Problem description & steps to reproduce

When compiling for all CUDA architectures (50, 52, 60, 61, 70, 75, 80, 86, 89, 90), the generated ggml-cuda static/dynamic libraries are too large to be linked correctly.

Removing one target CUDA architecture, so that the generated file stays below 2 GB, allows the build to complete correctly (see the example below).
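For illustration only: the same configuration as in the compile command below, with a single architecture (50) dropped from the list. Any one architecture can be removed; 50 is just an example, all other flags are unchanged.

cmake -B build -DBUILD_SHARED_LIBS=OFF -DLLAMA_CURL=ON -DGGML_CUDA=ON -DGGML_CUDA_FA_ALL_QUANTS=ON -DCMAKE_CUDA_ARCHITECTURES="52;60;61;70;75;80;86;89;90"

cmake --build build --config Release -j --target llama-server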

First Bad Commit

No response

Compile command

cmake -B build -DBUILD_SHARED_LIBS=OFF -DLLAMA_CURL=ON -DGGML_CUDA=ON -DGGML_CUDA_FA_ALL_QUANTS=ON -DCMAKE_CUDA_ARCHITECTURES="50;52;60;61;70;75;80;86;89;90"

cmake --build build --config Release -j --target llama-server

Relevant log output

Consolidate compiler generated dependencies of target build_info
Consolidate compiler generated dependencies of target ggml-base
[  0%] Built target build_info
[  3%] Built target ggml-base
Consolidate compiler generated dependencies of target ggml-cpu
[ 10%] Built target ggml-cpu
Consolidate compiler generated dependencies of target ggml-cuda
[ 84%] Built target ggml-cuda
Consolidate compiler generated dependencies of target ggml
[ 84%] Building CXX object ggml/src/CMakeFiles/ggml.dir/ggml-backend-reg.cpp.o
[ 84%] Linking CXX static library libggml.a
[ 84%] Built target ggml
Consolidate compiler generated dependencies of target llama
[ 84%] Building CXX object src/CMakeFiles/llama.dir/llama.cpp.o
[ 84%] Building CXX object src/CMakeFiles/llama.dir/llama-adapter.cpp.o
[ 86%] Building CXX object src/CMakeFiles/llama.dir/llama-arch.cpp.o
[ 86%] Building CXX object src/CMakeFiles/llama.dir/llama-batch.cpp.o
[ 86%] Building CXX object src/CMakeFiles/llama.dir/llama-chat.cpp.o
[ 86%] Building CXX object src/CMakeFiles/llama.dir/llama-context.cpp.o
[ 87%] Building CXX object src/CMakeFiles/llama.dir/llama-grammar.cpp.o
[ 87%] Building CXX object src/CMakeFiles/llama.dir/llama-graph.cpp.o
[ 87%] Building CXX object src/CMakeFiles/llama.dir/llama-hparams.cpp.o
[ 87%] Building CXX object src/CMakeFiles/llama.dir/llama-impl.cpp.o
[ 89%] Building CXX object src/CMakeFiles/llama.dir/llama-io.cpp.o
[ 89%] Building CXX object src/CMakeFiles/llama.dir/llama-kv-cache.cpp.o
[ 89%] Building CXX object src/CMakeFiles/llama.dir/llama-memory.cpp.o
[ 89%] Building CXX object src/CMakeFiles/llama.dir/llama-mmap.cpp.o
[ 91%] Building CXX object src/CMakeFiles/llama.dir/llama-model-loader.cpp.o
[ 91%] Building CXX object src/CMakeFiles/llama.dir/llama-model.cpp.o
[ 91%] Building CXX object src/CMakeFiles/llama.dir/llama-quant.cpp.o
[ 91%] Building CXX object src/CMakeFiles/llama.dir/llama-sampling.cpp.o
[ 93%] Building CXX object src/CMakeFiles/llama.dir/llama-vocab.cpp.o
[ 93%] Building CXX object src/CMakeFiles/llama.dir/unicode.cpp.o
[ 93%] Building CXX object src/CMakeFiles/llama.dir/unicode-data.cpp.o
[ 94%] Linking CXX static library libllama.a
[ 94%] Built target llama
Consolidate compiler generated dependencies of target common
[ 94%] Building CXX object common/CMakeFiles/common.dir/arg.cpp.o
[ 96%] Building CXX object common/CMakeFiles/common.dir/chat.cpp.o
[ 96%] Building CXX object common/CMakeFiles/common.dir/common.cpp.o
[ 96%] Building CXX object common/CMakeFiles/common.dir/console.cpp.o
[ 96%] Building CXX object common/CMakeFiles/common.dir/json-schema-to-grammar.cpp.o
[ 98%] Building CXX object common/CMakeFiles/common.dir/llguidance.cpp.o
[ 98%] Building CXX object common/CMakeFiles/common.dir/log.cpp.o
[ 98%] Building CXX object common/CMakeFiles/common.dir/ngram-cache.cpp.o
[100%] Building CXX object common/CMakeFiles/common.dir/sampling.cpp.o
[100%] Building CXX object common/CMakeFiles/common.dir/speculative.cpp.o
[100%] Linking CXX static library libcommon.a
[100%] Built target common
Consolidate compiler generated dependencies of target llama-bench
[100%] Building CXX object tools/llama-bench/CMakeFiles/llama-bench.dir/llama-bench.cpp.o
[100%] Linking CXX executable ../../bin/llama-bench
/usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/Scrt1.o: in function `_start':
(.text+0x2a): relocation truncated to fit: R_X86_64_GOTPCRELX against symbol `__libc_start_main@@GLIBC_2.2.5' defined in .text section in /lib/x86_64-linux-gnu/libc.so.6
/usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/Scrt1.o:(.eh_frame+0x20): relocation truncated to fit: R_X86_64_PC32 against `.text'
/usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/crti.o: in function `_init':
(.init+0xb): relocation truncated to fit: R_X86_64_REX_GOTPCRELX against undefined symbol `__gmon_start__'
CMakeFiles/llama-bench.dir/llama-bench.cpp.o: in function `create_printer(output_formats)':
llama-bench.cpp:(.text+0x51): relocation truncated to fit: R_X86_64_PC32 against symbol `vtable for markdown_printer' defined in .data.rel.ro.local._ZTV16markdown_printer[_ZTV16markdown_printer] section in CMakeFiles/llama-bench.dir/llama-bench.cpp.o
llama-bench.cpp:(.text+0x6b): relocation truncated to fit: R_X86_64_PC32 against symbol `vtable for sql_printer' defined in .data.rel.ro.local._ZTV11sql_printer[_ZTV11sql_printer] section in CMakeFiles/llama-bench.dir/llama-bench.cpp.o
llama-bench.cpp:(.text+0x96): relocation truncated to fit: R_X86_64_PC32 against symbol `vtable for csv_printer' defined in .data.rel.ro.local._ZTV11csv_printer[_ZTV11csv_printer] section in CMakeFiles/llama-bench.dir/llama-bench.cpp.o
llama-bench.cpp:(.text+0xb8): relocation truncated to fit: R_X86_64_PC32 against symbol `vtable for json_printer' defined in .data.rel.ro.local._ZTV12json_printer[_ZTV12json_printer] section in CMakeFiles/llama-bench.dir/llama-bench.cpp.o
llama-bench.cpp:(.text+0xde): relocation truncated to fit: R_X86_64_PC32 against symbol `vtable for jsonl_printer' defined in .data.rel.ro.local._ZTV13jsonl_printer[_ZTV13jsonl_printer] section in CMakeFiles/llama-bench.dir/llama-bench.cpp.o
CMakeFiles/llama-bench.dir/llama-bench.cpp.o: in function `pair_str(std::pair<int, int> const&)':
llama-bench.cpp:(.text+0x801): relocation truncated to fit: R_X86_64_PC32 against `.bss'
llama-bench.cpp:(.text+0x834): relocation truncated to fit: R_X86_64_PC32 against `.bss'
llama-bench.cpp:(.text+0x88e): additional relocation overflows omitted from the output
../../bin/llama-bench: PC-relative offset overflow in PLT entry for `__printf_chk@@GLIBC_2.3.4'
collect2: error: ld returned 1 exit status
make[3]: *** [tools/llama-bench/CMakeFiles/llama-bench.dir/build.make:109: bin/llama-bench] Error 1
make[2]: *** [CMakeFiles/Makefile2:3549: tools/llama-bench/CMakeFiles/llama-bench.dir/all] Error 2
make[1]: *** [CMakeFiles/Makefile2:3556: tools/llama-bench/CMakeFiles/llama-bench.dir/rule] Error 2
make: *** [Makefile:1323: llama-bench] Error 2
