Providing a pydantic model instead of docstring for tool parameters. #646

Open
HG2407 opened this issue May 5, 2025 · 2 comments
Labels: question (Question about using the SDK)

Comments

HG2407 commented May 5, 2025

Please read this first

  • Have you read the docs? Agents SDK docs
  • Have you searched for related issues? Others may have had similar requests

Question

@rm-openai What if I want some parameters to be required and others to be optional? I can easily do this via Pydantic, where any field that I have not wrapped in Optional will be required. Is there a way to replicate that here? As far as I understand, if I set strict_mode = False there is a chance, however small, that the LLM might not pass anything at all, and I want to avoid that.

Secondly, suppose my tool calls an API that is linked to the backend, and I want to pass an auth token to that API. I currently do this by adding the token as a parameter to each tool and asking the agent to provide it through its system prompt. Is there a better way to do it?

Edit: I just checked, and there is something called function_schema that you use to extract the parameters and description from the docstring and convert them into a Pydantic model. Is there a way to directly provide a Pydantic model to this?
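
For illustration, the plain-Pydantic behaviour I mean looks like this (the model and field names are just placeholders):

from typing import Optional

from pydantic import BaseModel

class FlightSearchParams(BaseModel):
    destination: str                # required: no default and not Optional
    cabin: Optional[str] = None     # optional: may be omitted or set to None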

HG2407 added the question label on May 5, 2025
pakrym-oai (Contributor) commented May 5, 2025

All parameters must be required for strict mode, but they can be nullable.
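
For example (a rough sketch; the tool and parameter names are made up): annotate the optional parameter as `str | None` so it should stay in the required list of the strict schema while still letting the model pass null for it:

from agents import function_tool

@function_tool
def search_flights(destination: str, cabin: str | None = None) -> str:
    """Search flights.

    Args:
        destination: Arrival airport code.
        cabin: Cabin class; the model can pass null when any cabin is fine.
    """
    return f"Searching flights to {destination} ({cabin or 'any cabin'})"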

Secondly, suppose my tool calls an API that is linked to the backend, and I want to pass an auth token to that API. I currently do this by adding the token as a parameter to each tool and asking the agent to provide it through its system prompt. Is there a better way to do it?

Use agent context for this. Context is available to tools but never leaves the process.

context: RunContextWrapper[AirlineAgentContext], confirmation_number: str, new_seat: str
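
A fuller sketch of that pattern (the auth_token field and the backend call are placeholders, not part of the SDK):

from dataclasses import dataclass

from agents import RunContextWrapper, function_tool

@dataclass
class AirlineAgentContext:
    auth_token: str  # placeholder credential; lives only in your process

@function_tool
async def update_seat(
    context: RunContextWrapper[AirlineAgentContext],
    confirmation_number: str,
    new_seat: str,
) -> str:
    token = context.context.auth_token  # read the token from local context
    # await call_backend_api(token, confirmation_number, new_seat)  # placeholder backend call
    return f"Updated seat to {new_seat} for confirmation {confirmation_number}"

Populate the context when you run the agent, e.g. Runner.run(agent, input_items, context=AirlineAgentContext(auth_token=...)). The schema the model sees only contains confirmation_number and new_seat; the context parameter never leaves the process.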

HG2407 commented May 6, 2025

Hey @pakrym-oai, thanks for the reply. I have another question.

import uuid

from agents import (
    Agent,
    HandoffOutputItem,
    ItemHelpers,
    MessageOutputItem,
    Runner,
    ToolCallItem,
    ToolCallOutputItem,
    TResponseInputItem,
    trace,
)

# triage_agent and AirlineAgentContext are defined elsewhere in my code.

async def main():
    current_agent: Agent[AirlineAgentContext] = triage_agent
    input_items: list[TResponseInputItem] = []
    context = AirlineAgentContext()

    # Normally, each input from the user would be an API request to your app, and you can wrap the request in a trace()
    # Here, we'll just use a random UUID for the conversation ID
    conversation_id = uuid.uuid4().hex[:16]

    while True:
        user_input = input("Enter your message: ")
        with trace("Customer service", group_id=conversation_id):
            input_items.append({"content": user_input, "role": "user"})
            result = await Runner.run(current_agent, input_items, context=context)

            for new_item in result.new_items:
                agent_name = new_item.agent.name
                if isinstance(new_item, MessageOutputItem):
                    print(f"{agent_name}: {ItemHelpers.text_message_output(new_item)}")
                elif isinstance(new_item, HandoffOutputItem):
                    print(
                        f"Handed off from {new_item.source_agent.name} to {new_item.target_agent.name}"
                    )
                elif isinstance(new_item, ToolCallItem):
                    print(f"{agent_name}: Calling a tool")
                elif isinstance(new_item, ToolCallOutputItem):
                    print(f"{agent_name}: Tool call output: {new_item.output}")
                else:
                    print(f"{agent_name}: Skipping item: {new_item.__class__.__name__}")
            input_items = result.to_input_list()
            current_agent = result.last_agent

Suppose that when I get the chat history through input_items, I also want to see which agent sent each message. Is there a way to do that? I just need it for debugging purposes.
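
The rough workaround I have in mind (sketch only) is to keep a separate debug log while iterating over result.new_items, since each item carries the agent that produced it:

debug_log: list[dict] = []  # (agent, item type, text) records, purely for debugging

for new_item in result.new_items:
    entry = {"agent": new_item.agent.name, "type": new_item.__class__.__name__}
    if isinstance(new_item, MessageOutputItem):
        entry["text"] = ItemHelpers.text_message_output(new_item)
    debug_log.append(entry)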

P.S.: I also want to use handoff_filters.remove_all_tools to filter out the tool calls.

input_items = result.to_input_list()
filtered_input_items = handoff_filters.remove_all_tools(input_items)

But I get this error:
AttributeError: 'list' object has no attribute 'input_history'
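
From the error it looks like remove_all_tools expects a HandoffInputData object (something with an input_history attribute) rather than a plain list. The intended usage seems to be attaching it to a handoff as an input_filter, roughly like this (untested sketch; the agent definitions are placeholders):

from agents import Agent, handoff
from agents.extensions import handoff_filters

faq_agent = Agent(name="FAQ Agent", instructions="Answer frequently asked questions.")

triage_agent = Agent(
    name="Triage Agent",
    instructions="Route the customer to the right agent.",
    handoffs=[
        # remove_all_tools is applied automatically to the history handed to faq_agent
        handoff(agent=faq_agent, input_filter=handoff_filters.remove_all_tools),
    ],
)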
