
Conversation

Collaborator

@peaceh-nv commented Sep 29, 2025

@coderabbitai summary

Description

When DP padding is enabled, the number of chunks should be recalculated from the padded token count rather than the original all_rank_num_tokens. The OOM issue was observed on SM120 because DP padding is only enabled on that platform; see https://github.com/NVIDIA/TensorRT-LLM/pull/7965/files
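
A minimal sketch of the intended chunk-count calculation (the helper name and exact surrounding logic are illustrative, not the actual forward_impl code; the formula follows the review walkthrough below):

```python
import math

def compute_num_chunks(use_dp_padding: bool,
                       parallel_size: int,
                       all_rank_num_tokens: list,
                       num_rows: int,
                       moe_max_num_tokens: int) -> int:
    """Hypothetical helper mirroring the chunking logic described in this PR.

    With DP padding, every rank is padded up to the largest per-rank token
    count, so the effective workload is max(all_rank_num_tokens) * num_ranks
    rather than the unpadded num_rows.
    """
    if use_dp_padding and parallel_size > 1:
        padded_rows = max(all_rank_num_tokens) * len(all_rank_num_tokens)
        return math.ceil(padded_rows / moe_max_num_tokens)
    # Original behavior: size chunks from the unpadded row count.
    return math.ceil(num_rows / moe_max_num_tokens)
```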

@peaceh-nv requested a review from a team as a code owner on September 29, 2025 05:02
@peaceh-nv
Collaborator Author

/bot run

@peaceh-nv changed the title from "[None][fix] : Fix OOM issue when dp padding is enabled on SM120" to "[None][fix] : Fix OOM issue when dp padding is enabled" on Sep 29, 2025
Contributor

coderabbitai bot commented Sep 29, 2025

📝 Walkthrough

Refines chunking logic in fused MoE forward_impl for distributed mode: when DP with padding is enabled and parallel_size > 1, num_chunks is computed from max(all_rank_num_tokens) and world size; otherwise, the original num_rows-based computation remains. No public APIs changed.

Changes

Cohort / File(s): Fused MoE DP chunking — tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py
Summary: In forward_impl, conditional num_chunks calculation: for DP with padding and parallel_size > 1, use ceiling((max(all_rank_num_tokens) × num_ranks) ÷ moe_max_num_tokens); otherwise, use the prior num_rows-based formula. No other logic altered.

Sequence Diagram(s)

sequenceDiagram
    autonumber
    participant Caller
    participant FusedMoE as FusedMoE.forward_impl
    participant DPState as DP Context
    participant Chunker as Chunking Logic

    Caller->>FusedMoE: forward_impl(input, ...)
    FusedMoE->>DPState: check use_dp, parallel_size, use_dp_padding
    alt DP with padding enabled and parallel_size > 1
        DPState-->>FusedMoE: all_rank_num_tokens
        FusedMoE->>Chunker: compute num_chunks = ceil(max(all_rank_num_tokens)*num_ranks / moe_max_num_tokens)
        note right of Chunker: New branch for DP padding
    else
        FusedMoE->>Chunker: compute num_chunks = ceil(num_rows / moe_max_num_tokens)
        note right of Chunker: Original behavior
    end
    FusedMoE->>FusedMoE: split inputs into chunks
    FusedMoE-->>Caller: proceed with existing processing per chunk

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
Description Check — ⚠️ Warning: The description includes only the placeholder summary tag and a brief description of the fix, but omits the actual summary under "@coderabbitai summary" and is missing the required "## Test Coverage" and "## PR Checklist" sections from the repository's template. Resolution: populate the summary content under the "@coderabbitai summary" heading and add the "## Test Coverage" section with relevant tests, as well as the "## PR Checklist" section, to fully comply with the repository's description template.
Docstring Coverage — ⚠️ Warning: Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (1 passed)
Title Check — ✅ Passed: The title "[None][fix] : Fix OOM issue when dp padding is enabled" clearly and concisely identifies the primary change by specifying that an out-of-memory bug with data-parallel padding is being fixed, and it follows the required ticket and type tagging convention.

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9cea6bf and f17dc12.

📒 Files selected for processing (1)
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py
🧠 Learnings (2)
📓 Common learnings
Learnt from: nv-lschneider
PR: NVIDIA/TensorRT-LLM#7910
File: cpp/tensorrt_llm/kernels/nccl_device/config.cu:42-49
Timestamp: 2025-09-23T14:58:05.372Z
Learning: In TensorRT-LLM NCCL device kernels (cpp/tensorrt_llm/kernels/nccl_device/), the token partitioning intentionally uses ceil-like distribution (same token_per_rank for all ranks) to ensure all ranks launch the same number of blocks. This is required for optimal NCCL device API barrier performance, even though it may launch extra blocks for non-existent tokens on later ranks. Runtime bounds checking in the kernel (blockID validation) handles the overshoot cases.
📚 Learning: 2025-08-09T20:57:04.084Z
Learnt from: sklevtsov-nvidia
PR: NVIDIA/TensorRT-LLM#3294
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu:118-127
Timestamp: 2025-08-09T20:57:04.084Z
Learning: In the CUTLASS MoE finalize fusion implementation (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu), when setting `fused_finalize_epilogue.stride_final_output` with shape `(hidden_size, num_output_tokens, 1)`, the `num_rows_in_final_output` should be set to `num_output_tokens` (not `hidden_size`) because of a swap+transpose operation that maps rows of the output tensor to `hidden_size` and columns to `num_output_tokens`.

Applied to files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (1)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py (1)

472-478: LGTM! Conditional chunking logic correctly addresses OOM with DP padding.

The change properly handles the chunking calculation when DP padding is enabled:

  • Uses max(all_rank_num_tokens) * len(all_rank_num_tokens) for padded scenarios to ensure chunks are sized for the maximum token count across all ranks
  • Falls back to the original num_rows calculation for non-padded cases
  • This prevents OOM by accounting for the actual memory requirements when padding is applied

The implementation aligns with the retrieved learning about token partitioning using ceil-like distribution for optimal performance in distributed scenarios.

Based on learnings
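
A small worked example of why the padded count matters (the token counts, and the assumption that the prior num_rows is the unpadded sum across ranks in DP mode, are illustrative only):

```python
import math

# Hypothetical scenario: 4 DP ranks with uneven token counts.
all_rank_num_tokens = [5000, 8000, 3000, 6000]
moe_max_num_tokens = 8192

# Unpadded total (assumed prior num_rows in DP mode): 22000 tokens -> 3 chunks.
chunks_unpadded = math.ceil(sum(all_rank_num_tokens) / moe_max_num_tokens)   # 3

# With padding, every rank holds max(all_rank_num_tokens) = 8000 tokens, so the
# real workload is 8000 * 4 = 32000 tokens -> 4 chunks. Sizing chunks from the
# unpadded count would make each chunk exceed moe_max_num_tokens and can OOM.
chunks_padded = math.ceil(max(all_rank_num_tokens) * len(all_rank_num_tokens)
                          / moe_max_num_tokens)                              # 4
```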


@tensorrt-cicd
Collaborator

PR_Github #20215 [ run ] triggered by Bot

Signed-off-by: peaceh <[email protected]>
@peaceh-nv
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #20222 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #20215 [ run ] completed with state ABORTED
LLM/main/L0_MergeRequest_PR #15247 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd
Collaborator

PR_Github #20222 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15251 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

@peaceh-nv merged commit 808e556 into NVIDIA:main on Oct 1, 2025
5 checks passed
faradawn pushed a commit to faradawn/TensorRT-LLM that referenced this pull request Oct 2, 2025
evezhier pushed a commit to evezhier/TensorRT-LLM that referenced this pull request Oct 3, 2025
faradawn pushed a commit to faradawn/TensorRT-LLM that referenced this pull request Oct 3, 2025
