feat(semantic-conventions-ai): Add reasoning attributes #3330
Conversation
Important
Looks good to me! 👍
Reviewed everything up to c088e28 in 45 seconds.
- Reviewed 40 lines of code in 3 files
- Skipped 0 files when reviewing
- Skipped posting 3 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py:76
- Draft comment: New reasoning-related attributes (e.g., LLM_USAGE_REASONING_TOKENS, LLM_REQUEST_REASONING_EFFORT, etc.) are added consistently. Please ensure documentation and tests are updated to clarify their semantics.
- Reason this comment was not posted: Confidence changes required: 0% <= threshold 50%
2. packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/version.py:1
- Draft comment: Version bump to 0.4.13 in version.py. This change aligns with the new feature release.
- Reason this comment was not posted: Confidence changes required: 0% <= threshold 50%
3. packages/opentelemetry-semantic-conventions-ai/pyproject.toml:10
- Draft comment: Version bump in pyproject.toml to 0.4.13 is correctly updated.
- Reason this comment was not posted: Confidence changes required: 0% <= threshold 50%
Workflow ID: wflow_K9VxxnojVE6AJbTe
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Walkthrough
Added four GenAI reasoning span-attribute constants to SpanAttributes and bumped the semconv-ai package version to 0.4.13. Updated many instrumentation packages' pyproject dependency constraints to require opentelemetry-semantic-conventions-ai ^0.4.13, along with small dependency loosenings in a couple of pyprojects.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Actionable comments posted: 2
🧹 Nitpick comments (1)
packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py (1)
79-79: Reasoning tokens: confirm alignment with the current OTel GenAI registry and plan for migration.
- The OTel GenAI attributes registry currently lists gen_ai.usage.input_tokens and gen_ai.usage.output_tokens (and deprecates gen_ai.usage.prompt_tokens and gen_ai.usage.completion_tokens). It does not (yet) include gen_ai.usage.reasoning_tokens. Treat this as experimental and be ready to migrate if the spec lands a different shape (e.g., token-type dimensions on metrics or different attribute keys). (opentelemetry.io)
- Optional: consider recording “reasoning” counts primarily via the gen_ai.client.token.usage metric with a token-type dimension, to avoid proliferating token attributes on spans. (opentelemetry.io)

If you keep this attribute (reasonable for now), add an “unstable” note and a migration plan.
Example diff to annotate instability:
```diff
- LLM_USAGE_REASONING_TOKENS = "gen_ai.usage.reasoning_tokens"
+ # NOTE: Not (yet) in the OTel GenAI registry as of 2025-08-21. Subject to change.
+ LLM_USAGE_REASONING_TOKENS = "gen_ai.usage.reasoning_tokens"
```
Also consider adding forward-looking aliases for spec-compliant names to ease migration from deprecated keys:
```python
# New spec-compliant aliases (do not remove old ones yet)
LLM_USAGE_INPUT_TOKENS = "gen_ai.usage.input_tokens"
LLM_USAGE_OUTPUT_TOKENS = "gen_ai.usage.output_tokens"
```
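If the metric-based route is preferred, here is a minimal sketch using the standard OpenTelemetry Python metrics API; the "reasoning" token-type value and the meter name are assumptions, since the GenAI registry currently defines only "input" and "output" token types:
```python
# Minimal sketch: record reasoning counts on the gen_ai.client.token.usage histogram
# instead of a span attribute, keeping token attributes off the span itself.
from opentelemetry import metrics

meter = metrics.get_meter("opentelemetry.instrumentation.example")  # hypothetical meter name
token_usage = meter.create_histogram(
    name="gen_ai.client.token.usage",
    unit="{token}",
    description="Number of tokens used in GenAI requests",
)

def record_reasoning_tokens(count: int, model: str) -> None:
    # One data point per response; the token-type dimension carries the "reasoning" split.
    token_usage.record(
        count,
        attributes={
            "gen_ai.token.type": "reasoning",  # assumed value, not yet in the registry
            "gen_ai.request.model": model,
        },
    )
```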
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (3)
- packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py (1 hunks)
- packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/version.py (1 hunks)
- packages/opentelemetry-semantic-conventions-ai/pyproject.toml (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit Inference Engine (CLAUDE.md):
- Store API keys only in environment variables/secure vaults; never hardcode secrets in code
- Use Flake8 for code linting and adhere to its rules
Files:
- packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/version.py
- packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py
🧠 Learnings (1)
📚 Learning: 2025-08-17T15:06:48.109Z
Learnt from: CR
PR: traceloop/openllmetry#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-17T15:06:48.109Z
Learning: Semantic conventions must follow the OpenTelemetry GenAI specification
Applied to files:
packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py
🔇 Additional comments (1)
packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/version.py (1)
1-1: LGTM — version constant updated to 0.4.13 and matches pyproject.
Actionable comments posted: 0
🧹 Nitpick comments (14)
packages/opentelemetry-instrumentation-sagemaker/pyproject.toml (1)
26-26: LGTM: dependency aligned to ^0.4.13.
SageMaker instrumentation is now compatible with the new reasoning attributes set.
If you plan to emit the new attributes soon from this package, consider opening a small follow-up to add minimal tests/assertions for presence of the keys on emitted spans when applicable. I can help scaffold those.
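For reference, a minimal sketch of such a presence check using the OpenTelemetry SDK's in-memory exporter; the instrumented call is left as a placeholder and the guard assumes reasoning data may be absent:
```python
# Minimal sketch: assert the new reasoning keys appear on exported spans when returned.
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.sdk.trace.export.in_memory_span_exporter import InMemorySpanExporter
from opentelemetry.semconv_ai import SpanAttributes

def test_reasoning_attributes_present_when_returned():
    exporter = InMemorySpanExporter()
    provider = TracerProvider()
    provider.add_span_processor(SimpleSpanProcessor(exporter))

    # ... invoke the instrumented endpoint here using `provider` (placeholder) ...

    for span in exporter.get_finished_spans():
        attrs = span.attributes
        # Only assert on the value when the provider actually returned reasoning data.
        if SpanAttributes.LLM_USAGE_REASONING_TOKENS in attrs:
            assert attrs[SpanAttributes.LLM_USAGE_REASONING_TOKENS] >= 0
```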
packages/opentelemetry-instrumentation-vertexai/pyproject.toml (1)
31-31: LGTM: Vertex AI instrumentation moved to ^0.4.13.
Keeps dependencies in step with the newly added reasoning attributes.
Minor: consider adding a brief note in this package’s README “Compatibility” section (if present) that reasoning-related attributes require semconv‑ai >= 0.4.13. I can draft wording if helpful.
packages/opentelemetry-instrumentation-crewai/pyproject.toml (1)
21-27: CrewAI instrumentor pinned to ^0.4.13 — good; matches the repo-wide alignment.
Nothing blocking. The Python >=3.10 constraint here is stricter than some siblings; that's fine and unchanged.
If you plan a coordinated release train, bump the package version (currently 0.45.6) alongside the dependency constraint to surface the change on PyPI even though it’s “just” metadata.
packages/opentelemetry-instrumentation-cohere/pyproject.toml (1)
25-31: Cohere instrumentor dependency moved to ^0.4.13 — approved.
No compatibility concerns spotted. A follow-up PR should map Cohere's reasoning signals (if exposed) to the new attributes.
If helpful, I can draft a short matrix showing which providers expose tokens/effort/summary so you can uniformly populate the new fields.
packages/opentelemetry-instrumentation-bedrock/pyproject.toml (1)
29-30: Loosening the semconv pin looks fine; consider adding an upper bound to avoid future breaking majors.
Moving to >=0.55b0 is good to consume fixes without churn. To prevent accidental adoption of a future 1.x that might contain breaking changes, add an explicit ceiling.
Apply:
```diff
-opentelemetry-semantic-conventions = ">=0.55b0"
+opentelemetry-semantic-conventions = ">=0.55b0,<1.0"
```
packages/opentelemetry-instrumentation-haystack/pyproject.toml (1)
25-31: Follow-up: plan to emit the new reasoning attributes from this instrumentor.
With semconv-ai 0.4.13 available, ensure Haystack spans populate:
- gen_ai.usage.reasoning_tokens (when provided)
- gen_ai.request.reasoning_effort / reasoning_summary
- gen_ai.response.reasoning_effort
I can draft attribute-mapping snippets for common Haystack nodes in a follow-up PR.
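As a starting point, a minimal sketch of such a mapping, assuming a hypothetical response object and request parameters that expose reasoning metadata (the field names are placeholders, not a real Haystack API; the SpanAttributes constants are the ones added in this PR):
```python
# Minimal sketch: populate the new reasoning attributes on the current span, guarded so
# nothing is emitted when the underlying provider returns no reasoning metadata.
from opentelemetry import trace
from opentelemetry.semconv_ai import SpanAttributes

def set_reasoning_attributes(llm_response, request_params: dict) -> None:
    span = trace.get_current_span()

    effort = request_params.get("reasoning_effort")
    if effort:
        span.set_attribute(SpanAttributes.LLM_REQUEST_REASONING_EFFORT, effort)

    summary = request_params.get("reasoning_summary")
    if summary:
        span.set_attribute(SpanAttributes.LLM_REQUEST_REASONING_SUMMARY, summary)

    # Hypothetical response shape: usage object with a reasoning_tokens field.
    tokens = getattr(getattr(llm_response, "usage", None), "reasoning_tokens", None)
    if tokens is not None:
        span.set_attribute(SpanAttributes.LLM_USAGE_REASONING_TOKENS, tokens)
```
The same guard pattern keeps spans free of sparse or empty attributes when a provider exposes no reasoning data.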
packages/opentelemetry-instrumentation-alephalpha/pyproject.toml (1)
25-31: Optional: bump the package patch version to publish constraint changes.
If these packages are released independently, consider incrementing the version (e.g., 0.45.7) so consumers receive the relaxed constraints without needing to reinstall from source.
packages/opentelemetry-instrumentation-pinecone/pyproject.toml (1)
25-31: Reminder: consider adding emission for the new reasoning fields where applicable.
If Pinecone operations surface reasoning metadata via upstream client calls, emit the new attributes to keep spans uniform across vendors.
packages/opentelemetry-instrumentation-mistralai/pyproject.toml (1)
26-26: LGTM — version range updated to pull in the reasoning attributes.
No additional changes needed here. Consider adding a guard in the instrumentation to only set reasoning attributes if the provider response includes them, to avoid sparse attribute noise.
packages/opentelemetry-instrumentation-anthropic/pyproject.toml (2)
25-31: Do you intend to publish this instrumentation package? If yes, bump its version.
The dependency constraint changed but [tool.poetry].version remains 0.45.6. If you plan to publish, you'll need a patch bump to ship the new dependency spec on PyPI. If not publishing right now, keeping 0.45.6 is fine.
32-46: Optional: align the SDK dev dependency to ^1.28.0 for consistency with opentelemetry-api.
You have opentelemetry-api = ^1.28.0 while opentelemetry-sdk (test group) is ^1.27.0. Both resolve to 1.28.x, but aligning to ^1.28.0 reduces cognitive overhead.
Proposed change:
```diff
- opentelemetry-sdk = "^1.27.0"
+ opentelemetry-sdk = "^1.28.0"
```
packages/opentelemetry-instrumentation-openai/pyproject.toml (3)
36-45: Optional: relax or document the strict pin on the openai test dependency.
openai = 1.99.7 is very specific; if tests aren't tied to exact wire fixtures, consider a caret range to avoid frequent bumps. If you do need exact pinning (VCR cassettes, schema drift), add a short comment explaining why.
Example:
```diff
- openai = { extras = ["datalib"], version = "1.99.7" }
+ # Exact pin due to recorded fixtures; update alongside cassettes
+ openai = { extras = ["datalib"], version = "1.99.7" }
```
Or, if flexibility is acceptable:
```diff
- openai = { extras = ["datalib"], version = "1.99.7" }
+ openai = { extras = ["datalib"], version = "^1.99.7" }
```
25-31: If you plan to publish, consider a patch version bump here too.
As with the Anthropic package, a dependency-only change still requires a new package version to reach users via PyPI.
1-55: Next step (heads-up): instrument the new reasoning attributes in a follow-up PR.
Per our repo learning, instrumentations should emit attributes from the semantic-conventions package rather than hardcoded strings. When you wire OpenAI reasoning fields next, map provider fields (e.g., usage.reasoning_tokens, request/response effort, optional reasoning summary) to:
- gen_ai.usage.reasoning_tokens
- gen_ai.request.reasoning_effort
- gen_ai.request.reasoning_summary
- gen_ai.response.reasoning_effort
This comment is informational; no code change needed in this file.
I can help draft the attribute mapping and test matrix for OpenAI (and annotate gaps for Anthropic) in the follow-up PR.
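For the OpenAI case specifically, a rough sketch of that mapping, assuming the response exposes reasoning counts under usage.completion_tokens_details.reasoning_tokens and that reasoning_effort was passed as a request parameter — treat these field paths as assumptions to verify against the installed client version:
```python
# Rough sketch: map (assumed) OpenAI reasoning fields onto the new span attributes,
# guarded so nothing is set when the response carries no reasoning data.
from opentelemetry.semconv_ai import SpanAttributes

def map_openai_reasoning(span, request_kwargs: dict, response) -> None:
    effort = request_kwargs.get("reasoning_effort")
    if effort:
        span.set_attribute(SpanAttributes.LLM_REQUEST_REASONING_EFFORT, effort)

    details = getattr(getattr(response, "usage", None), "completion_tokens_details", None)
    reasoning_tokens = getattr(details, "reasoning_tokens", None)
    if reasoning_tokens is not None:
        span.set_attribute(SpanAttributes.LLM_USAGE_REASONING_TOKENS, reasoning_tokens)
```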
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (30)
- packages/opentelemetry-instrumentation-alephalpha/poetry.lock is excluded by !**/*.lock
- packages/opentelemetry-instrumentation-anthropic/poetry.lock is excluded by !**/*.lock
- packages/opentelemetry-instrumentation-bedrock/poetry.lock is excluded by !**/*.lock
- packages/opentelemetry-instrumentation-chromadb/poetry.lock is excluded by !**/*.lock
- packages/opentelemetry-instrumentation-cohere/poetry.lock is excluded by !**/*.lock
- packages/opentelemetry-instrumentation-crewai/poetry.lock is excluded by !**/*.lock
- packages/opentelemetry-instrumentation-google-generativeai/poetry.lock is excluded by !**/*.lock
- packages/opentelemetry-instrumentation-groq/poetry.lock is excluded by !**/*.lock
- packages/opentelemetry-instrumentation-haystack/poetry.lock is excluded by !**/*.lock
- packages/opentelemetry-instrumentation-lancedb/poetry.lock is excluded by !**/*.lock
- packages/opentelemetry-instrumentation-langchain/poetry.lock is excluded by !**/*.lock
- packages/opentelemetry-instrumentation-llamaindex/poetry.lock is excluded by !**/*.lock
- packages/opentelemetry-instrumentation-marqo/poetry.lock is excluded by !**/*.lock
- packages/opentelemetry-instrumentation-mcp/poetry.lock is excluded by !**/*.lock
- packages/opentelemetry-instrumentation-milvus/poetry.lock is excluded by !**/*.lock
- packages/opentelemetry-instrumentation-mistralai/poetry.lock is excluded by !**/*.lock
- packages/opentelemetry-instrumentation-ollama/poetry.lock is excluded by !**/*.lock
- packages/opentelemetry-instrumentation-openai-agents/poetry.lock is excluded by !**/*.lock
- packages/opentelemetry-instrumentation-openai/poetry.lock is excluded by !**/*.lock
- packages/opentelemetry-instrumentation-pinecone/poetry.lock is excluded by !**/*.lock
- packages/opentelemetry-instrumentation-qdrant/poetry.lock is excluded by !**/*.lock
- packages/opentelemetry-instrumentation-replicate/poetry.lock is excluded by !**/*.lock
- packages/opentelemetry-instrumentation-sagemaker/poetry.lock is excluded by !**/*.lock
- packages/opentelemetry-instrumentation-together/poetry.lock is excluded by !**/*.lock
- packages/opentelemetry-instrumentation-transformers/poetry.lock is excluded by !**/*.lock
- packages/opentelemetry-instrumentation-vertexai/poetry.lock is excluded by !**/*.lock
- packages/opentelemetry-instrumentation-watsonx/poetry.lock is excluded by !**/*.lock
- packages/opentelemetry-instrumentation-weaviate/poetry.lock is excluded by !**/*.lock
- packages/sample-app/poetry.lock is excluded by !**/*.lock
- packages/traceloop-sdk/poetry.lock is excluded by !**/*.lock
📒 Files selected for processing (29)
- packages/opentelemetry-instrumentation-alephalpha/pyproject.toml (1 hunks)
- packages/opentelemetry-instrumentation-anthropic/pyproject.toml (1 hunks)
- packages/opentelemetry-instrumentation-bedrock/pyproject.toml (1 hunks)
- packages/opentelemetry-instrumentation-chromadb/pyproject.toml (1 hunks)
- packages/opentelemetry-instrumentation-cohere/pyproject.toml (1 hunks)
- packages/opentelemetry-instrumentation-crewai/pyproject.toml (1 hunks)
- packages/opentelemetry-instrumentation-google-generativeai/pyproject.toml (1 hunks)
- packages/opentelemetry-instrumentation-groq/pyproject.toml (1 hunks)
- packages/opentelemetry-instrumentation-haystack/pyproject.toml (1 hunks)
- packages/opentelemetry-instrumentation-lancedb/pyproject.toml (1 hunks)
- packages/opentelemetry-instrumentation-langchain/pyproject.toml (1 hunks)
- packages/opentelemetry-instrumentation-llamaindex/pyproject.toml (1 hunks)
- packages/opentelemetry-instrumentation-marqo/pyproject.toml (1 hunks)
- packages/opentelemetry-instrumentation-mcp/pyproject.toml (1 hunks)
- packages/opentelemetry-instrumentation-milvus/pyproject.toml (1 hunks)
- packages/opentelemetry-instrumentation-mistralai/pyproject.toml (1 hunks)
- packages/opentelemetry-instrumentation-ollama/pyproject.toml (1 hunks)
- packages/opentelemetry-instrumentation-openai-agents/pyproject.toml (1 hunks)
- packages/opentelemetry-instrumentation-openai/pyproject.toml (1 hunks)
- packages/opentelemetry-instrumentation-pinecone/pyproject.toml (1 hunks)
- packages/opentelemetry-instrumentation-qdrant/pyproject.toml (1 hunks)
- packages/opentelemetry-instrumentation-replicate/pyproject.toml (1 hunks)
- packages/opentelemetry-instrumentation-sagemaker/pyproject.toml (1 hunks)
- packages/opentelemetry-instrumentation-together/pyproject.toml (1 hunks)
- packages/opentelemetry-instrumentation-transformers/pyproject.toml (1 hunks)
- packages/opentelemetry-instrumentation-vertexai/pyproject.toml (1 hunks)
- packages/opentelemetry-instrumentation-watsonx/pyproject.toml (1 hunks)
- packages/opentelemetry-instrumentation-weaviate/pyproject.toml (1 hunks)
- packages/traceloop-sdk/pyproject.toml (1 hunks)
✅ Files skipped from review due to trivial changes (7)
- packages/opentelemetry-instrumentation-together/pyproject.toml
- packages/opentelemetry-instrumentation-google-generativeai/pyproject.toml
- packages/opentelemetry-instrumentation-lancedb/pyproject.toml
- packages/opentelemetry-instrumentation-marqo/pyproject.toml
- packages/opentelemetry-instrumentation-ollama/pyproject.toml
- packages/opentelemetry-instrumentation-weaviate/pyproject.toml
- packages/opentelemetry-instrumentation-chromadb/pyproject.toml
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: CR
PR: traceloop/openllmetry#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-17T15:06:48.109Z
Learning: Instrumentation packages must leverage the semantic conventions package and emit OTel-compliant spans
📚 Learning: 2025-08-21T04:01:31.783Z
Learnt from: prane-eth
PR: traceloop/openllmetry#3330
File: packages/opentelemetry-semantic-conventions-ai/pyproject.toml:11-11
Timestamp: 2025-08-21T04:01:31.783Z
Learning: The openllmetry repository uses a monorepo structure where individual packages like opentelemetry-semantic-conventions-ai do not maintain their own CHANGELOG.md files. There is a single CHANGELOG.md at the repository root level instead.
Applied to files:
packages/opentelemetry-instrumentation-openai-agents/pyproject.toml
packages/opentelemetry-instrumentation-sagemaker/pyproject.toml
packages/opentelemetry-instrumentation-langchain/pyproject.toml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Test Packages (3.12)
- GitHub Check: Lint
- GitHub Check: Test Packages (3.10)
- GitHub Check: Build Packages (3.11)
- GitHub Check: Test Packages (3.11)
🔇 Additional comments (21)
packages/opentelemetry-instrumentation-openai-agents/pyproject.toml (1)
27-27: Audit complete: dependency bump applied consistently.
All consumer pyproject.toml files under packages/ now reference ^0.4.13 and there are no hard-pinned 0.4.12 occurrences. The only file flagged during the scan, packages/opentelemetry-semantic-conventions-ai/pyproject.toml, is the conventions package itself defining its own version (0.4.13), not a dependency reference. Additionally, the new LLM_* reasoning constants are already present in the semantic conventions package (opentelemetry/semconv_ai/__init__.py). No further changes are required.
packages/opentelemetry-instrumentation-qdrant/pyproject.toml (1)
30-30: LGTM: consistent bump to ^0.4.13 across all packages.
I ran the suggested script and confirmed that every pyproject.toml in packages/** now pins opentelemetry-semantic-conventions-ai = "^0.4.13". To prevent future drift, you can continue using:
```bash
#!/bin/bash
rg -n --glob 'packages/**/pyproject.toml' \
  -e 'opentelemetry-semantic-conventions-ai\s*=\s*".*"' \
  | sed -E 's#(.*/pyproject\.toml):.*"(.+)"#\1 -> \2#' | sort -u
```
Expect all lines to show -> ^0.4.13.
packages/opentelemetry-instrumentation-milvus/pyproject.toml (1)
30-30: Milvus instrumentation aligned to ^0.4.13.
LGTM: this update brings Milvus in line with the other AI-semantic-conventions packages at version 0.4.13.
Automated lockfile refresh could not be verified here because the Poetry CLI wasn't available in this environment. Please manually ensure any Poetry-managed packages have their poetry.lock updated to pick up 0.4.13. For example:
```bash
# From repo root
cd packages/opentelemetry-instrumentation-milvus
poetry update opentelemetry-semantic-conventions-ai
```
Repeat for any other packages that use Poetry lockfiles.
packages/opentelemetry-instrumentation-transformers/pyproject.toml (1)
25-31: I'm checking if any lockfiles still resolve opentelemetry-semantic-conventions-ai at 0.4.12.
packages/opentelemetry-instrumentation-llamaindex/pyproject.toml (1)
25-31: Dependency bump is consistent; unblock adding reasoning attrs in LlamaIndex spans.
The change is minimal and safe. Confirm you'll emit:
- gen_ai.usage.reasoning_tokens
- gen_ai.request.reasoning_effort
- gen_ai.request.reasoning_summary
- gen_ai.response.reasoning_effort
in the subsequent instrumentation PR.
Before release, consider whether this constraint change warrants a patch version bump of this package itself (currently 0.45.6) so downstreams pick it up without waiting for other changes.
packages/traceloop-sdk/pyproject.toml (1)
26-39: Dependency on semconv-ai ^0.4.13 confirmed; constant definitions present.
- Verified that the four new gen_ai.* constants are defined without typo in packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py (lines 79, 85–87).
- No occurrences of those constants were found in this repo's README or a local docs/ folder; documentation is managed in a separate repo (see docs PR #100).

Everything in the SDK code is in order. Please ensure that the external docs PR #100 includes updated examples or helper APIs referencing:
- gen_ai.usage.reasoning_tokens
- gen_ai.request.reasoning_effort
- gen_ai.request.reasoning_summary
- gen_ai.response.reasoning_effort
packages/opentelemetry-instrumentation-bedrock/pyproject.toml (2)
30-30: Approve: bump to ^0.4.13 aligns with the new reasoning attributes.
The range ^0.4.13 is correct and safely bounded (<0.5). No issues.
29-30: I've added searches to verify whether the Bedrock instrumentation actually uses newer semantic-conventions APIs that would necessitate bumping the baseline to >=0.55b0. Once we see the results, we can confirm if the version bump is intentional.
packages/opentelemetry-instrumentation-haystack/pyproject.toml (1)
30-30: Approve: dependency updated to ^0.4.13.
Matches the semconv-ai 0.4.13 release that introduces the reasoning attributes.
packages/opentelemetry-instrumentation-alephalpha/pyproject.toml (1)
30-30: Approve: dependency updated to ^0.4.13.
The range is correct; no other constraint changes observed.
packages/opentelemetry-instrumentation-mcp/pyproject.toml (2)
26-26: Approve: semconv-ai bumped to ^0.4.13.
Matches the repo-wide update.
26-27: No semver conflicts detected: OTLP exporter ^1.34.1 is compatible with the existing API and SDK pins.
- All instrumentation packages (including opentelemetry-instrumentation-mcp) constrain the OTel API to ^1.28.0 (i.e. ≥1.28.0, <2.0.0) and the OTel SDK in tests to ^1.27.0 (i.e. ≥1.27.0, <2.0.0).
- The exporter's own peer requirements are >=1.0.0,<2.0.0 for both API and SDK, so no resolver conflicts or runtime symbol mismatches are expected.

This version bump is safe to merge.
packages/opentelemetry-instrumentation-pinecone/pyproject.toml (1)
30-30: Approve: dependency updated to ^0.4.13.
Good alignment with the new reasoning attributes.
packages/opentelemetry-instrumentation-groq/pyproject.toml (1)
26-26: All semconv-ai constraints consistently updated to ^0.4.13.
- Verified every opentelemetry-semantic-conventions-ai entry across the repo is now ^0.4.13 (no remaining 0.4.10–0.4.12 pins).
- Confirmed __version__ = "0.4.13" in packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/version.py.

The caret range (>=0.4.13,<0.5.0) is correct; approving these changes.
packages/opentelemetry-instrumentation-langchain/pyproject.toml (1)
30-30: Dependency bump is correct; LangChain instrumentation now depends on the reasoning-aware semconv-ai.
Given that tests are planned in a follow-up PR, ensure upcoming LangChain spans actually set the new attributes (usage.reasoning_tokens, request/response.reasoning_effort, request.reasoning_summary) when available from providers.
If helpful, I can draft a small test matrix for providers that expose “reasoning” metadata so we can assert these attributes end-to-end. Want me to open a checklist?
packages/opentelemetry-instrumentation-replicate/pyproject.toml (1)
26-26: Consistent bump to ^0.4.13 — approved.
Good to keep all instrumentations aligned on the same semconv-ai minor. Nothing else to change in packaging.
packages/opentelemetry-instrumentation-watsonx/pyproject.toml (1)
18-18: Approved: adopts semconv-ai ^0.4.13.
Watsonx instrumentation can remain compatible; no need to change core semconv constraints for this bump. Ensure follow-up instrumentation PRs emit the new attributes when Watsonx surfaces reasoning details.
packages/opentelemetry-instrumentation-anthropic/pyproject.toml (2)
30-30: Bump to ^0.4.13 looks correct and future-safe (<0.5.0).
The caret range for 0.x correctly caps at <0.5.0. No issues spotted with the constraint itself.
25-31: Consistent semconv-ai constraints verified.
All pyproject.toml files across the packages/ directory declare opentelemetry-semantic-conventions-ai = "^0.4.13". No further updates required.
packages/opentelemetry-instrumentation-openai/pyproject.toml (2)
30-30: Dependency bump to ^0.4.13 is appropriate.
Matches the semconv-ai package version introduced in this PR and respects <0.5.0.
25-31: Repo-wide guardrail: double-check all instrumentations updated to ^0.4.13.
Same note as for the Anthropic package: ensure no stragglers remain on ^0.4.12.
Reuse the script from the Anthropic comment to scan all pyprojects.
This is a step toward solving issue #3257.
The next step is to update the version of the "semantic-conventions-ai" package in the pyproject.toml of the OpenAI instrumentation, followed by a pull request from prane-eth/openllmetry: feature/openai-reasoning.
feat(instrumentation): ... or fix(instrumentation): ...
Important
Add new reasoning attributes to SpanAttributes and update version to 0.4.13.
- Add LLM_USAGE_REASONING_TOKENS, LLM_REQUEST_REASONING_EFFORT, LLM_REQUEST_REASONING_SUMMARY, and LLM_RESPONSE_REASONING_EFFORT to SpanAttributes in __init__.py.
- Bump version to 0.4.13 in version.py and pyproject.toml.

This description was created automatically for c088e28. You can customize this summary. It will automatically update as commits are pushed.