
When using Japanese in AzureOpenAI, answers may not be displayed #649


Open
sasaki000 opened this issue May 5, 2025 · 2 comments
Labels
bug Something isn't working

Comments

sasaki000 commented May 5, 2025

When using an Azure OpenAI model, if the prompt is written in Japanese, the answer is sometimes not displayed.
openai-agents version: 0.0.14

  • The following code sometimes returns a blank answer.
import agents

model_name = 'gpt-4.1-mini'

custom_client = agents.AsyncOpenAI(
    api_key='xxxx',
    base_url='https://yyyy.openai.azure.com/openai/deployments/gpt-4.1-mini',
    default_headers={'api-key': 'xxxx'},
    default_query={'api-version': '2025-04-01-preview'},
)
agents.set_default_openai_client(client=custom_client, use_for_tracing=False)
agents.set_default_openai_api(api='chat_completions')  # Use the Chat Completions API by default
agents.set_tracing_disabled(disabled=True)  # Disable Trace

model = agents.OpenAIChatCompletionsModel(
    model=model_name,
    openai_client=custom_client,
)


agent = agents.Agent(
    name='Assistant',
    instructions='あなたは役に立つアシスタントです',
    # instructions='You are a helpful assistant',
    model=model,
    # model_settings=agents.ModelSettings(temperature=0.0),  # Setting this increases how often results come back empty
    # hooks=LoggingAgentHooks()
)

# For Jupyter notebooks with existing event loops
result = await agents.Runner.run(
    starting_agent=agent,
    input='プログラミングにおける、再帰について俳句を書いてください。'
    # input='Write a haiku about recursion in programming.'
)

# print('----')
print(result.final_output)
# print('----')

This occurs intermittently, and only when the prompt is in Japanese; it does not occur when the prompt is in English.
The debug log is attached below.
Could this be because the content returned by the LLM is null? Why would it be null?

[2025-05-06 07:50:17 - openai.agents:90 - DEBUG] Tracing is disabled. Not creating trace Agent workflow
[2025-05-06 07:50:17 - openai.agents:90 - DEBUG] Setting current trace: no-op
[2025-05-06 07:50:17 - openai.agents:90 - DEBUG] Tracing is disabled. Not creating span <agents.tracing.span_data.AgentSpanData object at 0x7f751c227ed0>
[2025-05-06 07:50:17 - openai.agents:90 - DEBUG] Running agent Assistant (turn 1)
[2025-05-06 07:50:17 - openai.agents:90 - DEBUG] Tracing is disabled. Not creating span <agents.tracing.span_data.GenerationSpanData object at 0x7f751c21f6b0>
[2025-05-06 07:50:17 - openai.agents:90 - DEBUG] [
  {
    "content": "\u3042\u306a\u305f\u306f\u5f79\u306b\u7acb\u3064\u30a2\u30b7\u30b9\u30bf\u30f3\u30c8\u3067\u3059",
    "role": "system"
  },
  {
    "role": "user",
    "content": "\u30d7\u30ed\u30b0\u30e9\u30df\u30f3\u30b0\u306b\u304a\u3051\u308b\u3001\u518d\u5e30\u306b\u3064\u3044\u3066\u4ff3\u53e5\u3092\u66f8\u3044\u3066\u304f\u3060\u3055\u3044\u3002"
  }
]
Tools:
[]
Stream: False
Tool choice: NOT_GIVEN
Response format: NOT_GIVEN

[2025-05-06 07:50:17 - openai._base_client:90 - DEBUG] Request options: {'method': 'post', 'url': '/chat/completions', 'headers': {'User-Agent': 'Agents/Python 0.0.14'}, 'files': None, 'idempotency_key': 'stainless-python-retry-0ac5b9fb-f94b-474b-8652-bc053a2daaf7', 'json_data': {'messages': [{'content': 'あなたは役に立つアシスタントです', 'role': 'system'}, {'role': 'user', 'content': 'プログラミングにおける、再帰について俳句を書いてください。'}], 'model': 'gpt-4.1-mini', 'stream': False}}
[2025-05-06 07:50:17 - openai._base_client:90 - DEBUG] Sending HTTP Request: POST https://yyyy.openai.azure.com/openai/deployments/gpt-4.1-mini/chat/completions?api-version=2025-04-01-preview
[2025-05-06 07:50:18 - httpx:94 - INFO] HTTP Request: POST https://yyyy.openai.azure.com/openai/deployments/gpt-4.1-mini/chat/completions?api-version=2025-04-01-preview "HTTP/1.1 200 OK"
[2025-05-06 07:50:18 - openai._base_client:90 - DEBUG] HTTP Response: POST https://yyyy.openai.azure.com/openai/deployments/gpt-4.1-mini/chat/completions?api-version=2025-04-01-preview "200 OK" Headers({'content-length': '1086', 'content-type': 'application/json', 'apim-request-id': '909d22a1-2694-4147-8425-307754af658b', 'strict-transport-security': 'max-age=31536000; includeSubDomains; preload', 'x-content-type-options': 'nosniff', 'x-ms-region': 'East US 2', 'x-ratelimit-remaining-requests': '249', 'x-ratelimit-limit-requests': '250', 'x-ratelimit-remaining-tokens': '249347', 'x-ratelimit-limit-tokens': '250000', 'cmp-upstream-response-duration': '268', 'x-accel-buffering': 'no', 'x-ms-rai-invoked': 'true', 'x-envoy-upstream-service-time': '312', 'x-request-id': '04551af5-9cb9-4607-90c2-82ab2c72da32', 'ms-azureml-model-time': '310', 'x-ms-client-request-id': '909d22a1-2694-4147-8425-307754af658b', 'azureml-model-session': 'v20250415-5-168444713-5', 'x-ms-deployment-name': 'gpt-4.1-mini', 'date': 'Mon, 05 May 2025 22:50:18 GMT'})
[2025-05-06 07:50:18 - openai._base_client:90 - DEBUG] request_id: 04551af5-9cb9-4607-90c2-82ab2c72da32
[2025-05-06 07:50:18 - openai.agents:90 - DEBUG] LLM resp:
{
  "content": null,
  "refusal": null,
  "role": "assistant",
  "annotations": [],
  "audio": null,
  "function_call": null,
  "tool_calls": null
}

[2025-05-06 07:50:18 - openai.agents:90 - DEBUG] Resetting current trace
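
To check whether the null content comes from the Azure endpoint itself rather than from the agents SDK, here is a minimal sketch that calls the Chat Completions API directly with the plain openai client (it reuses the same placeholder endpoint, key, deployment name, and prompts as above; the function name is just for illustration):

import asyncio
from openai import AsyncOpenAI

raw_client = AsyncOpenAI(
    api_key='xxxx',
    base_url='https://yyyy.openai.azure.com/openai/deployments/gpt-4.1-mini',
    default_headers={'api-key': 'xxxx'},
    default_query={'api-version': '2025-04-01-preview'},
)

async def check_raw_content() -> None:
    resp = await raw_client.chat.completions.create(
        model='gpt-4.1-mini',
        messages=[
            {'role': 'system', 'content': 'あなたは役に立つアシスタントです'},
            {'role': 'user', 'content': 'プログラミングにおける、再帰について俳句を書いてください。'},
        ],
    )
    message = resp.choices[0].message
    # If content is None here as well, the empty answer originates in the
    # service response itself, not in the agents SDK post-processing.
    print('content is None:', message.content is None)
    print(message.content)

asyncio.run(check_raw_content())
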
sasaki000 added the bug (Something isn't working) label on May 5, 2025
pakrym-oai (Contributor) commented

When you say "The following code may return a blank answer," do you mean that you sometimes get the answer and sometimes an empty result?

Can you run the sample with OPENAI_LOG=debug and confirm that the model output has empty content?
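
In case it is useful: one way to turn this on from a notebook is sketched below (it assumes OPENAI_LOG is read when the openai package is imported, so the variable has to be set before the import; alternatively, launch the script from a shell with the variable already set):

import os
os.environ['OPENAI_LOG'] = 'debug'  # assumption: must be set before openai / agents are imported

import agents  # importing agents pulls in openai, with debug logging enabled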

sasaki000 changed the title from "When using AzureOpenAI, the answer is not displayed" to "When using Japanese in AzureOpenAI, answers may not be displayed" on May 5, 2025
sasaki000 (Author) commented

Thanks for your reply.
Yes, sometimes it returns an answer and sometimes it returns an empty result.
I ran the example with OPENAI_LOG=debug and added a log showing a case where the model output has empty content.
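
For now, a retry wrapper along these lines hides the intermittent empty results (a rough sketch; the retry count is arbitrary and it reuses the `agent` object from the code above):

MAX_RETRIES = 3  # arbitrary choice

async def run_with_retry(prompt: str) -> str:
    # Re-run the agent until it produces non-empty output or the retries run out.
    for _ in range(MAX_RETRIES):
        result = await agents.Runner.run(starting_agent=agent, input=prompt)
        if result.final_output:  # empty or None when the model returns null content
            return result.final_output
    return ''  # still empty after all retries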
