Name | Modified | Size
--- | --- | ---
koboldcpp_cu12.exe | 2025-02-17 | 606.3 MB
koboldcpp.exe | 2025-02-17 | 488.9 MB
koboldcpp-mac-arm64 | 2025-02-17 | 26.6 MB
koboldcpp-linux-x64-nocuda | 2025-02-17 | 77.4 MB
koboldcpp-linux-x64-cuda1210 | 2025-02-17 | 676.7 MB
koboldcpp-linux-x64-cuda1150 | 2025-02-17 | 593.3 MB
koboldcpp_oldcpu.exe | 2025-02-17 | 489.1 MB
koboldcpp_nocuda.exe | 2025-02-17 | 76.3 MB
koboldcpp-1.84.2 source code.tar.gz | 2025-02-17 | 33.8 MB
koboldcpp-1.84.2 source code.zip | 2025-02-17 | 34.2 MB
README.md | 2025-02-17 | 2.8 kB

Totals: 11 items, 3.1 GB

koboldcpp-1.84.2

This is mostly a bugfix release, since 1.83.1 had some issues, but there were too many changes to ship as just another patch release.

  • Added support for using aria2c and wget for model downloading, when detected on the system (credits @henk717).
  • It's also now possible to specify multiple URLs when loading multipart models online with --model [url1] [url2]... (CLI only), letting KoboldCpp download every part of a split model; see the example after this list.
  • Added automatic recovery in admin mode: if switching to a faulty config fails, KoboldCpp will attempt to roll back to the original known-good config.
  • Fixed the MoE experts override not working for DeepSeek.
  • Fixed multiple loader bugs when using the AutoGuess adapter.
  • Fixed images failing to generate when using the AutoGuess adapter.
  • Removed TTS caching, as it did not work well.
  • Updated Kobold Lite, with multiple fixes and improvements:
      ◦ Fixed web search button visibility.
      ◦ Improved instruct formatting in the classic UI.
      ◦ Fixed some LaTeX and markdown edge cases.
      ◦ Upped the max length slider to 1024 when the detected context is larger than 4096.
  • Merged fixes and improvements from upstream.
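
As an example of the new multi-URL download, a model split into two GGUF parts could be fetched in a single command (the URLs below are placeholders; aria2c or wget will be used automatically if present on the system):

    koboldcpp.exe --model https://example.com/model-00001-of-00002.gguf https://example.com/model-00002-of-00002.gguf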

Hotfix 1.84.1 - Adds Vulkan IQ1 support and fixes the Lite instruct icon display.
Hotfix 1.84.2 - Fixes AutoGuess errors and an incoherency issue caused by Flash Attention on the RTX 4090 with Mistral Small.

This build may still have minor issues. If you have problems, please use 1.82.4 for now; I am working on a fix.

To use, download and run koboldcpp.exe, which is a one-file pyinstaller. Pick the binary that matches your system:

  • koboldcpp.exe - the standard Windows build with CUDA support.
  • koboldcpp_nocuda.exe - much smaller; use this if you don't need CUDA.
  • koboldcpp_oldcpu.exe - use this if you have an Nvidia GPU but an old CPU and koboldcpp.exe does not work.
  • koboldcpp_cu12.exe - the CUDA 12 version for newer Nvidia GPUs (much larger, slightly faster).
  • On Linux, select the appropriate Linux binary instead (not an exe).
  • On modern MacOS (M1, M2, M3), try the koboldcpp-mac-arm64 binary.
  • On AMD, we recommend trying the Vulkan option first (available in all releases) for best support; alternatively, you can try the koboldcpp_rocm build from YellowRoseCx's fork.

Run it from the command line with the desired launch parameters (see --help), or manually select the model in the GUI. Once loaded, you can connect at http://localhost:5001 (or use the full KoboldAI client).
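
As a concrete example, a typical Linux launch and a quick test request might look like the following (mymodel.gguf is a placeholder filename; /api/v1/generate is the standard KoboldAI text-generation endpoint on the default port 5001):

    ./koboldcpp-linux-x64-nocuda --model mymodel.gguf --contextsize 4096
    curl -s http://localhost:5001/api/v1/generate -H "Content-Type: application/json" -d '{"prompt": "Hello,", "max_length": 32}'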

For more information, be sure to run the program from the command line with the --help flag. You can also refer to the readme and the wiki.
