Conversation

@hyukn (Collaborator) commented Sep 3, 2025

Triton triggers unexpected recompilation of the DG fused_moe _preprocess_after_permute_kernel: the token count was declared as a compile-time constant, so Triton treats each distinct value as a new specialization and recompiles. Passing it as a normal runtime int instead solves the issue.
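A minimal sketch of the two variants, with a placeholder kernel body (the real kernel's signature and mask are summarized in the review below):

```python
import triton
import triton.language as tl

@triton.jit
def kernel_const(out_ptr, TOTAL_TOKENS: tl.constexpr, BLOCK: tl.constexpr):
    # Triton keys its compilation cache on constexpr values, so every
    # distinct token count compiles a fresh specialization of this kernel.
    offs = tl.program_id(0) * BLOCK + tl.arange(0, BLOCK)
    tl.store(out_ptr + offs, offs, mask=offs < TOTAL_TOKENS)

@triton.jit
def kernel_runtime(out_ptr, total_tokens, BLOCK: tl.constexpr):
    # A runtime int is compared at execution time instead of being baked
    # into the binary, so varying token counts reuse the same compilation
    # (Triton may still specialize on value properties such as
    # divisibility, but not on every distinct value).
    offs = tl.program_id(0) * BLOCK + tl.arange(0, BLOCK)
    tl.store(out_ptr + offs, offs, mask=offs < total_tokens)
```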

At concurrency 4096, throughput with the fix vs. without: 51029.35 vs. 44774.58 TOPS, roughly a 14% improvement.

Summary by CodeRabbit

  • Refactor

    • Updated token handling to be dynamic at runtime, improving flexibility across varying batch sizes and sequence lengths and reducing reliance on compile-time constants.
    • Enhanced compatibility with a wider range of configurations without manual tuning.
  • Bug Fixes

    • Resolved errors and incorrect masking seen with larger or variable token counts.
    • Improved reliability and stability in edge cases involving diverse token distributions.

@hyukn hyukn requested a review from a team as a code owner September 3, 2025 06:24
@hyukn hyukn requested a review from yuxianq September 3, 2025 06:24
@hyukn hyukn changed the title [5488582][fix] Avoid unexpected Triton recompilation in DG fused_moe. [https://nvbugs/5488582][fix] Avoid unexpected Triton recompilation in DG fused_moe. Sep 3, 2025
coderabbitai bot (Contributor) commented Sep 3, 2025

📝 Walkthrough

Converted a Triton kernel parameter from a compile-time constant (tl.constexpr) to a runtime argument and updated its usage and invocation accordingly within tensorrt_llm/_torch/modules/fused_moe/fused_moe_deepgemm.py.

Changes

Cohort: Triton kernel API update
File(s): tensorrt_llm/_torch/modules/fused_moe/fused_moe_deepgemm.py
Summary: Changed the _preprocess_after_permute_kernel signature, replacing TOTAL_TOKENS: tl.constexpr with a runtime total_tokens argument; updated the token-mask computation to use total_tokens; adjusted the kernel launch to pass total_tokens positionally (no TOTAL_TOKENS= kwarg).

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Suggested reviewers

  • schetlur-nv
  • brb-nv
  • ruodil
  • amitz-nv
  • hypdeb

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 935c2c1 and 073a877.

📒 Files selected for processing (1)
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_deepgemm.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Filenames compiled into a target must be case-insensitively unique

Files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_deepgemm.py
**/*.{h,hpp,hh,hxx,cc,cpp,cxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use spaces, not tabs; indent 4 spaces

Files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_deepgemm.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Code must target Python 3.8+
Indent with 4 spaces; do not use tabs (Python)
Maintain module namespace on import: prefer from package.subpackage import foo; use foo.Symbol()
Python filenames use snake_case
Python class names use PascalCase
Python functions and methods use snake_case
Python local variables use snake_case; if starting with a number concept, prefix with k (e.g., k_99th_percentile)
Python global variables use G_ prefix with UPPER_SNAKE_CASE
Python constants use UPPER_SNAKE_CASE
Avoid shadowing variables from outer scopes
Initialize all externally visible class members in __init__
For public interfaces, prefer docstrings over comments; comments should be for in-function or file-local interfaces
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes and variables inline with docstrings immediately after assignment
Avoid reflection when a non-reflective approach suffices
Limit except clauses to specific exceptions where possible
When using try/except for duck-typing, keep try body minimal and move logic to else

Files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_deepgemm.py
**/*.{cpp,cc,cxx,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_deepgemm.py
🧠 Learnings (2)
📓 Common learnings
Learnt from: djns99
PR: NVIDIA/TensorRT-LLM#6915
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:4010-4012
Timestamp: 2025-08-14T23:23:27.449Z
Learning: For MOE (Mixture of Experts) code reviews in TensorRT-LLM, avoid repeatedly suggesting finalize fusion validation checks and safety assertions. The user djns99 has indicated these suggestions are repetitive and unwanted across multiple MOE-related changes.
Learnt from: sklevtsov-nvidia
PR: NVIDIA/TensorRT-LLM#3294
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu:118-127
Timestamp: 2025-08-09T20:57:04.084Z
Learning: In the CUTLASS MoE finalize fusion implementation (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu), when setting `fused_finalize_epilogue.stride_final_output` with shape `(hidden_size, num_output_tokens, 1)`, the `num_rows_in_final_output` should be set to `num_output_tokens` (not `hidden_size`) because of a swap+transpose operation that maps rows of the output tensor to `hidden_size` and columns to `num_output_tokens`.
Learnt from: sklevtsov-nvidia
PR: NVIDIA/TensorRT-LLM#3294
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:1198-1209
Timestamp: 2025-08-08T22:03:40.707Z
Learning: In the CUTLASS MoE kernels (cpp/tensorrt_llm/cutlass_extensions), when `layout_info.fusion` is set to `TmaWarpSpecializedGroupedGemmInput::EpilogueFusion::FINALIZE`, the `router_scales` parameter must be non-null by design. The fused finalize kernel epilogue does not perform nullptr checks and requires valid router scales to function correctly. This is an implicit contract that callers must satisfy when enabling the FINALIZE fusion mode.
Learnt from: djns99
PR: NVIDIA/TensorRT-LLM#7104
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:1475-1480
Timestamp: 2025-08-21T02:39:12.009Z
Learning: The min latency mode functionality in TensorRT-LLM MOE kernels (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu) is deprecated and no longer being maintained/updated, as confirmed by djns99. Bug reports and optimization suggestions for the computeStridesTmaWarpSpecializedLowLatencyKernel and related min latency code paths should be deprioritized.
📚 Learning: 2025-08-09T20:57:04.084Z (sklevtsov-nvidia, PR NVIDIA/TensorRT-LLM#3294; same learning as quoted in the common learnings above)

Applied to files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_deepgemm.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (3)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_deepgemm.py (3)

219-226: Good fix: make total_tokens a runtime arg to avoid Triton specialization

Switching from tl.constexpr to a runtime scalar is the right lever to stop recompiles tied to token count.


230-243: Masking is correct and prevents OOB writes

Using token_mask = token_offsets < total_tokens and applying it on the store guarantees safety for partial blocks at the tail.
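As a toy check of that tail-block arithmetic (values assumed for illustration, not taken from the kernel):

```python
import torch

BLOCK_SIZE_M, total_tokens = 256, 1000           # assumed example values
num_blocks = -(-total_tokens // BLOCK_SIZE_M)    # ceil-div: 4 blocks span 1024 slots
last_block = torch.arange(3 * BLOCK_SIZE_M, 4 * BLOCK_SIZE_M)
token_mask = last_block < total_tokens           # same predicate as the kernel
assert int(token_mask.sum()) == 1000 - 768       # 232 valid lanes; 24 masked off
```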


286-293: Launch site updated correctly

Passing total_tokens positionally aligns with the new kernel signature; NUM_EXPERTS and BLOCK_SIZE_M remain compile-time constants as intended.
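Putting the three comments together, a self-contained sketch of the new call pattern; the tensor names, dtypes, kernel body, and constant values here are assumptions, while total_tokens, NUM_EXPERTS, and BLOCK_SIZE_M follow the review notes:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def _preprocess_after_permute_kernel(src_ptr, dst_ptr, total_tokens,
                                     NUM_EXPERTS: tl.constexpr,
                                     BLOCK_SIZE_M: tl.constexpr):
    pid = tl.program_id(axis=0)
    token_offsets = pid * BLOCK_SIZE_M + tl.arange(0, BLOCK_SIZE_M)
    token_mask = token_offsets < total_tokens    # guards the partial tail block
    vals = tl.load(src_ptr + token_offsets, mask=token_mask, other=0)
    tl.store(dst_ptr + token_offsets, vals % NUM_EXPERTS, mask=token_mask)

def launch(src: torch.Tensor) -> torch.Tensor:
    total_tokens = src.numel()                   # plain Python int, not constexpr
    dst = torch.empty_like(src)
    grid = (triton.cdiv(total_tokens, 256),)
    # total_tokens is passed positionally as a runtime scalar; only
    # NUM_EXPERTS and BLOCK_SIZE_M remain compile-time constants.
    _preprocess_after_permute_kernel[grid](src, dst, total_tokens,
                                           NUM_EXPERTS=8, BLOCK_SIZE_M=256)
    return dst
```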

@hyukn hyukn requested a review from lfr-0531 September 3, 2025 06:32
@hyukn (Collaborator, Author) commented Sep 3, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator) commented

PR_Github #17491 [ run ] triggered by Bot

Triton will trigger unexpected recompilation for the DG fused_moe _preprocess_after_permute_kernel, where the token number was defined as a constant, which may be treated as a specialization by Triton. Replacing it with a normal int value solves the issue.

Signed-off-by: Yukun He <[email protected]>
@hyukn (Collaborator, Author) commented Sep 4, 2025

/bot run

@tensorrt-cicd (Collaborator) commented

PR_Github #17603 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator) commented

PR_Github #17491 [ run ] completed with state ABORTED

@litaotju litaotju enabled auto-merge (squash) September 4, 2025 06:32
@tensorrt-cicd (Collaborator) commented

PR_Github #17603 [ run ] completed with state SUCCESS
/LLM/release-1.1.0rc2/L0_MergeRequest_PR pipeline #55 completed with status: 'FAILURE'

@hyukn (Collaborator, Author) commented Sep 4, 2025

/bot run

@tensorrt-cicd (Collaborator) commented

PR_Github #17640 [ run ] triggered by Bot

@litaotju litaotju added the Release Blocker label (PRs that block the final release build or the branching out of the release branch) Sep 4, 2025
@litaotju (Collaborator) commented Sep 4, 2025

[Screenshot 2025-09-04 at 23:36:10]

I have to bypass and merge: the pipeline passed, and only the H100 check is still pending, but this code path does not affect H100 at all; it only affects B200.

@litaotju litaotju disabled auto-merge September 4, 2025 15:37
@litaotju litaotju merged commit 68f79d8 into NVIDIA:release/1.1.0rc2 Sep 4, 2025
4 of 5 checks passed
@tensorrt-cicd (Collaborator) commented

PR_Github #17640 [ run ] completed with state SUCCESS
/LLM/release-1.1.0rc2/L0_MergeRequest_PR pipeline #61 completed with status: 'SUCCESS'


Labels

Release Blocker: PRs that block the final release build or the branching out of the release branch
