
Does StopAtTools return the tool result directly to the user instead of to the LLM? #632


Open
xinwo opened this issue Apr 30, 2025 · 4 comments
Labels: question (Question about using the SDK)


xinwo commented Apr 30, 2025

Please read this first

  • Have you read the docs? Agents SDK docs
  • Have you searched for related issues? Others may have had similar requests

Question

From the documentation,

    A list of tool names, any of which will stop the agent from running further.

and the comment on tool_use_behavior in src/agents/agent.py,

    - A list of tool names: The agent will stop running if any of the tools in the list are called.
        The final output will be the output of the first matching tool call. The LLM does not
        process the result of the tool call.

it seems the result of a tool listed in StopAtTools won't be sent to the LLM, but returned directly to the user.

I tried it, and I believe StopAtTools is configured correctly, because execution reaches
https://github.com/openai/openai-agents-python/blob/main/src/agents/_run_impl.py#L760-L761
but the result still seems to be sent to the LLM rather than directly to the user.

xinwo added the question label Apr 30, 2025

xinwo commented Apr 30, 2025

I modified the example here a little to use StopAtTools.

from agents import Agent, function_tool
from agents.extensions.visualization import draw_graph

@function_tool
def get_weather(city: str) -> str:
    return f"The weather in {city} is sunny."

spanish_agent = Agent(
    name="Spanish agent",
    instructions="You only speak Spanish.",
)

english_agent = Agent(
    name="English agent",
    instructions="You only speak English",
)

triage_agent = Agent(
    name="Triage agent",
    instructions="Handoff to the appropriate agent based on the language of the request.",
    handoffs=[spanish_agent, english_agent],
    tools=[get_weather],
    tool_use_behavior={"stop_at_tool_names": ["get_weather"]}
)

draw_graph(triage_agent)

It generates the same graph, so it seems the result of get_weather still goes to the LLM and the LLM will process it.
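
For a runtime check (draw_graph only renders the static agent/tool wiring, so it looks the same either way), here is a minimal sketch, assuming OPENAI_API_KEY is set and reusing the triage_agent defined above:

import asyncio

from agents import Runner

async def main() -> None:
    result = await Runner.run(triage_agent, "What's the weather in Tokyo?")
    # With stop_at_tool_names=["get_weather"], final_output should be the raw
    # tool string, not an LLM-generated reply.
    print(result.final_output)

asyncio.run(main())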

pakrym-oai (Contributor) commented

The tool is executed and its result is returned to the user in result.final_output; it is not added back to the conversation automatically or sent to the model.

You can check that by running your example with OPENAI_LOG=debug set.


xinwo commented May 1, 2025

@pakrym-oai
Thank you.
I tried it, and it works as you said, but an error occurs after the tool in stop_at_tool_names is called.

ERROR:openai.agents:Error streaming response: Error code: 400 - {'error': {'message': 'No tool output found for function call call_HIU9G7cu95eSi8aZQR3WYosj.', 'type': 'invalid_request_error', 'param': 'input', 'code': None}}

From the error message, it seems the LLM needs the output of the tool to continue.
So I have to send something to the LLM myself on behalf of the tool? How do I do that?

pakrym-oai (Contributor) commented

This is an area of the library that definitely needs improvement.

You are correct: you need to provide the output manually via FunctionCallOutput. But to do that you need the call_id, and the final result doesn't give you an easy way to continue the run or pass the output into the next run.

FunctionCallOutput(
    call_id=call_id,
    output=...,
    type="function_call_output",
)
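
As a sketch of feeding that into a follow-up run, assuming call_id and tool_output were captured from the stream as shown below, and that result.to_input_list() returns the conversation so far (the import path is my assumption for recent openai releases):

from openai.types.responses.response_input_param import FunctionCallOutput

# Conversation so far plus the tool output the model is still waiting for.
next_input = result.to_input_list() + [
    FunctionCallOutput(
        call_id=call_id,
        output=tool_output,
        type="function_call_output",
    )
]
# Continue in a new run; the model now sees the tool output.
followup = await Runner.run(triage_agent, next_input)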

The very unsatisfying workaround is to stream the agent response and pick up the call_id from the stream:

async for event in result.stream_events():
    if event.type == "run_item_stream_event":
        if event.item.type == "tool_call_item":
            if event.item.raw_item.type == "function_call":
                call_id = event.item.raw_item.call_id
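
Assuming current item names in the SDK, the tool's output can be captured the same way from "tool_call_output_item" events, which gives you both pieces needed for the follow-up run above:

tool_output = None
async for event in result.stream_events():
    if event.type == "run_item_stream_event":
        item = event.item
        if item.type == "tool_call_item" and item.raw_item.type == "function_call":
            call_id = item.raw_item.call_id
        elif item.type == "tool_call_output_item":
            tool_output = item.output  # the value the function tool returned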

Another problem you might hit is that the final output is stringified rather than being the original object returned by the tool. You can work around that by storing the result of the tool on the context object rather than returning it directly.

@function_tool
async def last_tool(ctx: RunContextWrapper[MyContext]) -> None:
    ctx.context.last_tool_result = ...

result = Runner.run_streamed(
    assistant_agent,
    [...],
    context=context,
)
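
Filled out, that pattern might look like the following sketch; MyContext, fetch_report, and the payload are illustrative names, not SDK ones:

import asyncio
from dataclasses import dataclass
from typing import Any

from agents import Agent, RunContextWrapper, Runner, function_tool

@dataclass
class MyContext:
    last_tool_result: Any = None

@function_tool
async def fetch_report(ctx: RunContextWrapper[MyContext]) -> str:
    report = {"status": "ok", "rows": 42}  # the real object you want back
    ctx.context.last_tool_result = report  # stash it on the context
    return "done"                          # what surfaces in final_output

assistant_agent = Agent(
    name="Assistant",
    tools=[fetch_report],
    tool_use_behavior={"stop_at_tool_names": ["fetch_report"]},
)

async def main() -> None:
    context = MyContext()
    result = await Runner.run(assistant_agent, "Fetch the report", context=context)
    print(result.final_output)       # "done" (stringified tool return)
    print(context.last_tool_result)  # the original dict, untouched

asyncio.run(main())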

I'm sorry that this is so tricky today; we are looking at making the experience better.
