Feature: Custom Self-Hosted Model Support [new Council Members] #75
Summary
This PR adds support for integrating custom self-hosted models (e.g., Ollama, vLLM, or private endpoints) into the LLM Council. Users can now define custom API endpoints and keys for specific models, allowing those models to participate in the council alongside standard OpenRouter models.
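Conceptually, each custom model is just an ID mapped to its own endpoint and credentials. The sketch below illustrates that shape; the `api_url`/`api_key` field names follow this description, while the model ID, URL, and environment variable are placeholders rather than values taken from the diff.

```python
# backend/config.py (illustrative sketch, not the exact committed code)
import os

# Maps a council model ID to the endpoint and key used to reach it.
CUSTOM_MODELS = {
    "vllm/my-private-model": {  # placeholder model ID
        "api_url": "https://models.example.com/v1/chat/completions",  # placeholder endpoint
        "api_key": os.environ.get("MY_PRIVATE_MODEL_KEY", ""),  # keep secrets out of the file
    },
}
```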
Changes
- Added a `CUSTOM_MODELS` dictionary to map model IDs to their specific `api_url` and `api_key`.
- Added `km/maxai` as an example/default custom model in the configuration.
- The backend now checks `CUSTOM_MODELS` before making requests.
- If a model is found in `CUSTOM_MODELS`, the request is sent to its configured URL with its specific headers; otherwise, it defaults to OpenRouter (see the sketch below).
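A minimal sketch of that dispatch, assuming the backend builds an OpenAI-style chat payload. The helper name `query_model`, the OpenRouter constants, and the use of `httpx` here are illustrative assumptions, not the exact code in this PR.

```python
import os
import httpx

from backend.config import CUSTOM_MODELS  # assumed import path

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
OPENROUTER_API_KEY = os.environ.get("OPENROUTER_API_KEY", "")

async def query_model(model_id: str, messages: list[dict]) -> dict:
    """Send one chat request, routing to a custom endpoint when one is configured."""
    custom = CUSTOM_MODELS.get(model_id)
    if custom:
        # Custom self-hosted model: use its own URL and its own key.
        url = custom["api_url"]
        headers = {"Authorization": f"Bearer {custom['api_key']}"}
    else:
        # Anything else falls back to OpenRouter.
        url = OPENROUTER_URL
        headers = {"Authorization": f"Bearer {OPENROUTER_API_KEY}"}

    async with httpx.AsyncClient(timeout=120) as client:
        resp = await client.post(
            url, headers=headers, json={"model": model_id, "messages": messages}
        )
        resp.raise_for_status()
        return resp.json()
```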
How to Test
1. In `backend/config.py`, add a custom model definition (see the sketch after this list).
2. Add `"my-local-model"` to the `COUNCIL_MODELS` list in `backend/config.py`.
3. Start the backend and frontend.
4. Send a query. The application should successfully send a request to `http://localhost:11434/...` for the custom model and display its response in the UI alongside other council members.
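For step 1, the entry might look like the following. The exact definition is not reproduced in this description, so the Ollama-style endpoint and the surrounding `COUNCIL_MODELS` values below are assumptions for illustration (Ollama does expose an OpenAI-compatible chat route on port 11434).

```python
# backend/config.py -- assumed test configuration, not the exact committed values.
import os

CUSTOM_MODELS = {
    "my-local-model": {
        # Ollama's OpenAI-compatible chat endpoint; adjust if your server differs.
        "api_url": "http://localhost:11434/v1/chat/completions",
        "api_key": os.environ.get("LOCAL_MODEL_KEY", "ollama"),  # Ollama ignores the key
    },
}

COUNCIL_MODELS = [
    "openai/gpt-4o",               # example OpenRouter members
    "anthropic/claude-3.5-sonnet",
    "my-local-model",              # the new self-hosted council member
]
```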