What happened?
Hi @vasilisazayka,
First of all, thanks for PR-16053; it's very helpful for our recent developments in SAP.
I was trying to pass thinking params for gemini-2.5-flash to disable thinking, but it's not working:
```json
{
  "thinking_config": {
    "thinking_budget": 0
  }
}
```

I think we also need to handle thinking params here somehow:
litellm/llms/sap/chat/transformation.py

```python
def get_supported_openai_params(self, model):
    params = [
        "frequency_penalty",
        "logit_bias",
        "logprobs",
        "top_logprobs",
        "max_tokens",
        "max_completion_tokens",
        "prediction",
        "n",
        "presence_penalty",
        "seed",
        "stop",
        "stream",
        "stream_options",
        "temperature",
        "top_p",
        "tools",
        "tool_choice",
        "function_call",
        "functions",
        "extra_headers",
        "parallel_tool_calls",
        "response_format",
        "timeout",
    ]
    if (
        model.startswith("anthropic")
        or model.startswith("amazon")
        or model.startswith("cohere")
        or model.startswith("alephalpha")
        or model == "gpt-4"
    ):
        params.remove("response_format")
    if model.startswith("gemini") or model.startswith("amazon"):
        params.remove("tool_choice")
    return params
```

Please help me with this issue.
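One possible direction, sketched below: in addition to advertising a thinking param in `get_supported_openai_params`, the transformation would need to map it into the Gemini-style `thinking_config` request body. This is a minimal illustrative sketch, not litellm's actual API; the function name `map_thinking_param` and the parameter shapes are assumptions for the sake of the example.

```python
# Hypothetical sketch: forward a "thinking" param for Gemini models and drop it
# for model families that do not support it. Names here are illustrative only,
# not the real litellm SAP transformation API.
def map_thinking_param(model: str, optional_params: dict) -> dict:
    mapped: dict = {}
    thinking = optional_params.get("thinking")
    if thinking is not None and model.startswith("gemini"):
        # Gemini-style request bodies expect a thinking_config object,
        # e.g. {"thinking_budget": 0} to disable thinking entirely.
        mapped["thinking_config"] = {
            "thinking_budget": thinking.get("budget_tokens", 0)
        }
    # For other families the param is silently dropped here; raising an
    # UnsupportedParamsError instead would be the stricter policy.
    return mapped


print(map_thinking_param("gemini-2.5-flash", {"thinking": {"budget_tokens": 0}}))
```

Running this maps `{"thinking": {"budget_tokens": 0}}` to `{"thinking_config": {"thinking_budget": 0}}` for gemini models and to an empty dict otherwise.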
Relevant log output
What part of LiteLLM is this about?
SDK (litellm Python package)
What LiteLLM version are you on?
v1.80.10
Twitter / LinkedIn details
No response