Have you searched for related issues? Others may have had similar requests: Yes
Describe the feature
The OpenAI Agents SDK currently offers impressive capabilities for autonomous and tool-augmented agents. However, a critical gap exists in supporting Human-In-The-Loop (HITL) workflows, which are essential in many real-world applications where full automation is unsafe, undesirable, or legally restricted.
This feature request is to natively support a Human-In-The-Loop architecture within the Agents SDK, enabling agent workflows to pause, await human input/approval, and then resume seamlessly. Such support should ideally be baked into the core execution model, with minimal implementation overhead for developers.
Key capabilities expected:
Agent Pausing and Checkpointing:
Agents should be able to pause execution at a given step and emit a structured "awaiting human input" signal.
The current state of the agent (tools used, inputs, intermediate steps) should be checkpointed and resumable.
Human Feedback Integration:
Support for injecting human feedback (e.g., override outputs, provide context, or approve/reject steps).
This feedback should be accessible to the agent as a system message or function input on resumption.
Out-of-the-box Interfaces:
Optional utilities (even CLI/Web-based scaffolds) to collect human approval and resume agent flows.
This would speed up prototyping and reduce boilerplate for developers needing human supervision.
Auditability and Traceability:
When human intervention occurs, it should be logged as part of the agent's trace/run history.
This is especially valuable for regulated domains like healthcare, legal, or finance.
Timeout and Escalation Support:
Developers should be able to set timeouts or fallback behaviors in case human input is not received within a window.
Why it matters
Many enterprise and safety-critical applications—customer support, medical triage, contract review, moderation, compliance, etc.—require a responsible balance between automation and human oversight. Without native HITL support, developers are forced to build brittle, custom logic around agent runners, which breaks the elegance and composability the SDK otherwise offers.
Adding this will not only accelerate adoption but also make the SDK production-grade for a much broader range of use cases where trust, review, or decision-making delegation is non-trivial.