Human-In-The-Loop Architecture should be implemented on top priority! #636

Open
adhishthite opened this issue May 1, 2025 · 0 comments
Labels
enhancement New feature or request

Comments

@adhishthite

Please read this first

  • Have you read the docs? Agents SDK docs: Yes
  • Have you searched for related issues? Others may have had similar requests: Yes

Describe the feature

The OpenAI Agents SDK currently offers impressive capabilities for autonomous and tool-augmented agents. However, a critical gap exists in supporting Human-In-The-Loop (HITL) workflows, which are essential in many real-world applications where full automation is unsafe, undesirable, or legally restricted.

This feature request asks for native support of a Human-In-The-Loop architecture within the Agents SDK, enabling agent workflows to pause, await human input or approval, and then resume seamlessly. Such support should ideally be baked into the core execution model, with minimal implementation overhead for developers.
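
Today the closest approximation is to wrap the human step in an ordinary tool, roughly as in the sketch below. It only uses the documented `Agent` / `Runner` / `function_tool` surface; `request_human_approval` is a hypothetical helper written for this example, not part of the SDK. The sketch also illustrates the problem: the run simply blocks in-process, with no checkpointing, resumability, or audit trail.

```python
# Workaround available today (sketch): wrap human approval in a regular tool.
# Uses the documented Agent / Runner / function_tool API from the SDK docs;
# `request_human_approval` is a hypothetical helper, not part of the SDK.
from agents import Agent, Runner, function_tool


@function_tool
def request_human_approval(action: str) -> str:
    """Block the entire run until a human approves or rejects the action."""
    answer = input(f"Approve '{action}'? [y/N] ")  # blocks in-process; no checkpoint, no resumability
    return "approved" if answer.strip().lower() == "y" else "rejected"


agent = Agent(
    name="Refund agent",
    instructions="Call request_human_approval before proposing any refund.",
    tools=[request_human_approval],
)

result = Runner.run_sync(agent, "Customer asks for a refund of order #1234.")
print(result.final_output)
```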


Key capabilities expected (an illustrative API sketch follows this list):

  1. Agent Pausing and Checkpointing:

    • Agents should be able to pause execution at a given step and emit a structured "awaiting human input" signal.
    • The current state of the agent (tools used, inputs, intermediate steps) should be checkpointed and resumable.
  2. Human Feedback Integration:

    • Support for injecting human feedback (e.g., overriding outputs, providing context, or approving/rejecting steps).
    • This feedback should be accessible to the agent as a system message or function input on resumption.
  3. Out-of-the-box Interfaces:

    • Optional utilities (even CLI- or web-based scaffolds) to collect human approval and resume agent flows.
    • This would speed up prototyping and reduce boilerplate for developers needing human supervision.
  4. Auditability and Traceability:

    • When human intervention occurs, it should be logged as part of the agent's trace/run history.
    • This is especially valuable for regulated domains like healthcare, legal, or finance.
  5. Timeout and Escalation Support:

    • Developers should be able to set timeouts or fallback behaviors in case human input is not received within a window.
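
To make the request concrete, here is a purely illustrative sketch of what such an API could look like; the numbered comments map to the capabilities above. Only `Agent` and `Runner` are real SDK imports. `run_until_human_input`, `resume`, `timeout_seconds`, and `human_feedback` are names invented for this example and do not exist in the SDK.

```python
import asyncio

from agents import Agent, Runner  # real imports; everything marked "hypothetical" below is not


async def main() -> None:
    agent = Agent(
        name="Contract reviewer",
        instructions="Flag risky clauses and wait for human sign-off before concluding.",
    )

    # (1) + (2), hypothetical: the run pauses at a review point and returns a
    # serializable checkpoint instead of a final answer.
    checkpoint = await Runner.run_until_human_input(  # hypothetical method
        agent,
        "Review the attached contract.",
        timeout_seconds=3600,  # (5), hypothetical: timeout / escalation window
    )

    # (3) + (4), hypothetical: a reviewer approves or overrides through any
    # interface; the decision is injected as feedback on resume and recorded
    # in the run's trace.
    feedback = {"decision": "approve", "note": "Clause 4.2 needs legal sign-off."}
    result = await Runner.resume(checkpoint, human_feedback=feedback)  # hypothetical method
    print(result.final_output)


asyncio.run(main())
```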

Why it matters

Many enterprise and safety-critical applications—customer support, medical triage, contract review, moderation, compliance, etc.—require a responsible balance between automation and human oversight. Without native HITL support, developers are forced to build brittle, custom logic around agent runners, which breaks the elegance and composability the SDK otherwise offers.

Adding this will not only accelerate adoption but also make the SDK production-grade for a much broader range of use cases where trust, review, or decision-making delegation is non-trivial.

@adhishthite adhishthite added the enhancement New feature or request label May 1, 2025