Does StopAtTools return the tool result directly to the user instead of to the LLM? #632
Comments
I modified the example here a little to use StopAtTools.
It generates the same graph, so it seems the result of get_weather is still sent to the LLM, and the LLM processes it.
The tool is executed and its result is returned to the user in … You can check that by running your example with …
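To make the documented StopAtTools contract concrete, here is a minimal self-contained simulation (the SDK types are stubbed out as plain dataclasses; the run-loop function and tool names are illustrative, not the library's actual implementation). The idea is that after tools run, the loop checks whether any executed tool's name appears in `stop_at_tool_names` and, if so, returns that tool's output as the final output instead of sending it back to the model:

```python
from dataclasses import dataclass, field

@dataclass
class StopAtTools:
    # Stand-in for the SDK's StopAtTools config: names that end the run.
    stop_at_tool_names: list[str] = field(default_factory=list)

def run_turn(tool_calls, tool_use_behavior):
    """Simplified run loop: execute tools, then decide whether to stop.

    tool_calls: list of (tool_name, tool_fn) pairs "requested by the model".
    Returns (final_output, stopped). stopped=True means the tool result
    becomes the final output instead of being sent back to the LLM.
    """
    results = [(name, fn()) for name, fn in tool_calls]
    if isinstance(tool_use_behavior, StopAtTools):
        for name, output in results:
            if name in tool_use_behavior.stop_at_tool_names:
                return output, True   # tool result returned directly
    return results, False             # results go back to the LLM

def get_weather():
    return "Sunny, 22C"

final, stopped = run_turn(
    [("get_weather", get_weather)],
    StopAtTools(stop_at_tool_names=["get_weather"]),
)
```

Under this contract, `final` is the raw string returned by `get_weather` and `stopped` is `True`; no second model call happens.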
@pakrym-oai
From the error message, it seems the LLM needs the output of the tool to continue.
This is the area of the library that definitely needs improvement. You are correct: you need to present the output manually via

```python
FunctionCallOutput(
    call_id=call_id,
    output=...,
    type="function_call_output",
)
```

The very unsatisfying workaround is to stream the agent response and pick up the call id from the stream:

```python
async for event in result.stream_events():
    if event.type == "run_item_stream_event":
        if event.item.type == "tool_call_item":
            if event.item.raw_item.type == "function_call":
                call_id = event.item.raw_item.call_id
```

Another problem you might hit is that the final output is stringified rather than being the original object returned by the tool. You can work around that by storing the result of the tool on the context object rather than returning it directly:

```python
@function_tool
async def last_tool(ctx: RunContextWrapper[MyContext]) -> None:
    ctx.context.last_tool_result = ...

result = Runner.run_streamed(
    assistant_agent,
    [...],
    context=context,
)
```

I'm sorry that this is so tricky today; we are looking at making the experience better.
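To show how the streaming workaround fits together without requiring the SDK, here is a self-contained sketch that simulates the event stream with plain dicts shaped like the `run_item_stream_event` / `tool_call_item` objects described above (the event payloads and the `call_abc123` id are invented for illustration):

```python
# Simulated stream events, shaped like the objects yielded by
# result.stream_events() (plain dicts here, not SDK types).
events = [
    {"type": "agent_updated_stream_event"},
    {
        "type": "run_item_stream_event",
        "item": {
            "type": "tool_call_item",
            "raw_item": {"type": "function_call", "call_id": "call_abc123"},
        },
    },
]

# Pick up the call id from the stream, mirroring the loop above.
call_id = None
for event in events:
    if event["type"] == "run_item_stream_event":
        item = event["item"]
        if item["type"] == "tool_call_item" and item["raw_item"]["type"] == "function_call":
            call_id = item["raw_item"]["call_id"]

# Build the item to include in the next run's input so the model
# sees the tool output and can continue.
function_call_output = {
    "call_id": call_id,
    "output": "Sunny, 22C",  # illustrative tool result
    "type": "function_call_output",
}
```

The resulting `function_call_output` item pairs the captured `call_id` with the tool's output, which is what the model needs in order to resume after the run was stopped at the tool.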
Please read this first
Question
From the documentation,

> A list of tool names, any of which will stop the agent from running further.

and the comment on tool_use_behavior in src/agents/agent.py, it seems the result of a tool listed in StopAtTools won't be sent to the LLM, but returned to the user directly.
I tried it, and I think StopAtTools is configured correctly, because execution enters
https://github.com/openai/openai-agents-python/blob/main/src/agents/_run_impl.py#L760-L761
but the result is still sent to the LLM rather than directly to the user.
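As a concrete illustration of the context-object workaround from the reply above (the `MyContext` type, the `last_tool_result` field, and the weather payload are all illustrative assumptions; `RunContextWrapper` is stubbed as a plain dataclass rather than the SDK class), storing the tool's result on the run context preserves the original object even when the run's final output is stringified:

```python
import asyncio
from dataclasses import dataclass
from typing import Optional

@dataclass
class MyContext:
    # Hypothetical field for carrying the raw tool result out of the run.
    last_tool_result: Optional[dict] = None

@dataclass
class RunContextWrapper:
    # Stand-in for the SDK's RunContextWrapper; only .context is needed here.
    context: MyContext

async def last_tool(ctx: RunContextWrapper) -> None:
    # Store the structured result on the context instead of returning it,
    # so the caller reads the original object, not a stringified copy.
    ctx.context.last_tool_result = {"city": "Tokyo", "temp_c": 22}

context = MyContext()
asyncio.run(last_tool(RunContextWrapper(context)))
```

After the run, the caller reads `context.last_tool_result` and gets the dict back intact, sidestepping the stringification of the final output.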