Conversation

@MatthiasKohl MatthiasKohl commented Aug 12, 2025

Summary by CodeRabbit

  • New Features

    • Added a distributed all-to-all operation to tensorrt_llm._torch.distributed with a CUDA backend and CPU/no-op fallback; exposed as a PyTorch-compatible alltoall and included in the public API.
    • Supports single or multiple tensors with configurable split dimensions and optional stacking/concatenation; build now includes the all-to-all implementation.
  • Tests

    • Added multi-GPU integration tests (2‑GPU and 4‑GPU) validating correctness across shapes, dtypes, and partition/stacking configurations.

Description

Adds an all-to-all op required for Helix parallelism support

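A minimal usage sketch of the new Python API (parameter names follow the walkthrough below; the 4-rank group, shapes, and keyword-style calls are illustrative assumptions, not tested code):

    import torch
    from tensorrt_llm._torch.distributed import alltoall

    group = [0, 1, 2, 3]                       # participating ranks (illustrative)
    x = torch.randn(128, 4096, device="cuda")  # dim 0 must be divisible by len(group)

    # Split dim 0 into len(group) chunks, exchange them across ranks, and
    # concatenate the received chunks along the same dimension.
    y = alltoall(x, group, dims=0)

    # Same exchange, but stack the received chunks along a new leading dimension.
    y_stacked = alltoall(x, group, dims=0, new_dims=0)

    # Multiple tensors with per-tensor split/stack configuration.
    a = torch.randn(64, 4096, device="cuda")
    b = torch.randn(4096, 64, device="cuda")
    out_a, out_b = alltoall([a, b], group, dims=[0, 1], new_dims=[None, 0])
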
Test Coverage

tests/unittest/_torch/multi_gpu/test_alltoall.py

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
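
For example, to launch the ordinary pre-merge pipeline while also forcing the multi-GPU test stages relevant to this change, a developer could comment:

    /bot run --add-multi-gpu-test --disable-fail-fast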

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since a lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since a lack of user care and validation can cause the top of tree to break.

@MatthiasKohl MatthiasKohl requested a review from a team as a code owner August 12, 2025 06:56
Contributor

coderabbitai bot commented Aug 12, 2025

Warning

Rate limit exceeded

@MatthiasKohl has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 27 minutes and 28 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between e616d33 and cead8db.

📒 Files selected for processing (6)
  • cpp/tensorrt_llm/thop/CMakeLists.txt (1 hunks)
  • cpp/tensorrt_llm/thop/alltoallOp.cpp (1 hunks)
  • tensorrt_llm/_torch/custom_ops/cpp_custom_ops.py (1 hunks)
  • tensorrt_llm/_torch/distributed/__init__.py (1 hunks)
  • tensorrt_llm/_torch/distributed/ops.py (1 hunks)
  • tests/unittest/_torch/multi_gpu/test_alltoall.py (1 hunks)
📝 Walkthrough

Walkthrough

Adds an NCCL-backed CUDA all-to-all operation: C++ implementation and build entry, Python high-level op and fake-op, package export, and MPI-driven multi-GPU unit tests. No existing public C++ API signatures were removed or modified.

Changes

Cohort / File(s) Summary
C++ all-to-all implementation
cpp/tensorrt_llm/thop/alltoallOp.cpp
New CUDA/C++ Torch extension implementing an AllToAllOp using NCCL with initialize/run logic and Torch library fragment + CUDA binding.
Build file
cpp/tensorrt_llm/thop/CMakeLists.txt
Adds alltoallOp.cpp to the th_common SHARED library source list.
High-level Python op
tensorrt_llm/_torch/distributed/ops.py
New alltoall function: normalizes inputs, splits per-rank, calls backend torch.ops.trtllm.alltoall, and reassembles outputs (concat or stack).
Module export
tensorrt_llm/_torch/distributed/__init__.py
Re-exports alltoall and adds it to __all__.
Fake-op (testing / fallback)
tensorrt_llm/_torch/custom_ops/cpp_custom_ops.py
Registers fake op trtllm::alltoall and provides a Python wrapper that validates inputs and returns placeholder outputs.
Tests
tests/unittest/_torch/multi_gpu/test_alltoall.py
New MPI-driven multi-GPU unit tests (2- and 4-GPU) exercising dtypes, split dims, and stacking/concatenation behaviors.

Sequence Diagram(s)

sequenceDiagram
    participant Py as Python alltoall()
    participant Split as Split & prepare (ops.py)
    participant Backend as torch.ops.trtllm.alltoall (C++)
    participant NCCL as NCCL communicator (AllToAllOp)
    Py->>Split: call alltoall(inputs, group, dims, new_dims)
    Split->>Split: normalize inputs, split tensors into per-rank chunks
    Split->>Backend: op_inputs (Tensor[]), group, num_lists
    Backend->>NCCL: getComm(group) / initialize
    Backend->>NCCL: ncclGroupStart -> per-list ncclSend/ncclRecv -> ncclGroupEnd
    NCCL-->>Backend: transfers complete
    Backend-->>Split: flattened list of received chunks
    Split->>Py: assemble per-input outputs (concat or stack) and return
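
For orientation, a rough Python sketch of the host-side flow in the diagram (a simplified illustration only; the chunk ordering and the exact torch.ops.trtllm.alltoall argument list are assumptions, not the verified implementation):

    import torch

    def alltoall_sketch(inputs, group, dims, new_dims):
        n_ranks = len(group)
        # Split each input along its split dimension into one contiguous chunk
        # per peer rank, then flatten everything into a single tensor list.
        op_inputs = []
        for inp, dim in zip(inputs, dims):
            op_inputs += [c.contiguous() for c in torch.chunk(inp, n_ranks, dim=dim)]
        # The backend wraps ncclGroupStart / ncclSend / ncclRecv / ncclGroupEnd
        # and returns the received chunks in the same flattened layout.
        received = torch.ops.trtllm.alltoall(op_inputs, group, len(inputs))
        # Reassemble per input: concatenate along the split dim, or stack along
        # the requested new dim.
        outputs = []
        for i, (dim, new_dim) in enumerate(zip(dims, new_dims)):
            chunks = received[i * n_ranks:(i + 1) * n_ranks]
            if new_dim is None:
                outputs.append(torch.cat(chunks, dim=dim))
            else:
                outputs.append(torch.stack(chunks, dim=new_dim))
        return outputs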

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~30–45 minutes

Suggested reviewers

  • shaharmor98
  • nv-guomingz
  • Superjomn
  • Barry-Delaney
  • litaotju

@MatthiasKohl
Collaborator Author

/bot run

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (2)
cpp/tensorrt_llm/thop/alltoallOp.cpp (1)

50-57: Consider more descriptive error messages.

The error message "group size should be greater than 0" could be more informative by including the actual group size.

Apply this diff to improve the error message:

-        TLLM_CHECK_WITH_INFO(mGroup.size() > 0, "group size should be greater than 0");
+        TLLM_CHECK_WITH_INFO(mGroup.size() > 0, 
+            "group size should be greater than 0, got: " + std::to_string(mGroup.size()));
tests/unittest/_torch/multi_gpu/test_alltoall.py (1)

89-146: Consider adding edge case tests.

While the test coverage is good, consider adding tests for edge cases such as:

  • Empty tensors
  • Single element tensors
  • Non-contiguous tensors
  • Mixed precision tensors

Would you like me to generate additional test cases for these edge scenarios?

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0dc4b4e and f7d5d75.

📒 Files selected for processing (5)
  • cpp/tensorrt_llm/thop/CMakeLists.txt (1 hunks)
  • cpp/tensorrt_llm/thop/alltoallOp.cpp (1 hunks)
  • tensorrt_llm/_torch/distributed/__init__.py (1 hunks)
  • tensorrt_llm/_torch/distributed/ops.py (1 hunks)
  • tests/unittest/_torch/multi_gpu/test_alltoall.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.py: Python code should conform to Python 3.8+.
Indent Python code with 4 spaces. Do not use tabs.
Always maintain the namespace when importing in Python, even if only one class or function from a module is used.
Python filenames should use snake_case (e.g., some_file.py).
Python classes should use PascalCase (e.g., class SomeClass).
Python functions and methods should use snake_case (e.g., def my_awesome_function():).
Python local variables should use snake_case. Prefix k for variable names that start with a number (e.g., k_99th_percentile).
Python global variables should use upper snake_case and prefix G (e.g., G_MY_GLOBAL).
Python constants should use upper snake_case (e.g., MY_CONSTANT).
Avoid shadowing variables declared in an outer scope in Python.
Initialize all externally visible members of a Python class in the constructor.
For interfaces that may be used outside a Python file, prefer docstrings over comments.
Comments in Python should be reserved for code within a function, or interfaces that are local to a file.
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx.
Attributes and variables in Python can be documented inline; attribute docstrings will be rendered under the class docstring.
Avoid using reflection in Python when functionality can be easily achieved without it.
When using try-except blocks in Python, limit the except to the smallest set of errors possible.
When using try-except blocks to handle multiple possible variable types in Python, keep the body of the try as small as possible, using the else block to implement the logic.

Files:

  • tests/unittest/_torch/multi_gpu/test_alltoall.py
  • tensorrt_llm/_torch/distributed/ops.py
  • tensorrt_llm/_torch/distributed/__init__.py
**/*.{cpp,h,hpp,cc,cxx,cu,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code should contain an NVIDIA copyright header that includes the current year. This includes .cpp, .h, .cu, .py, and any other source files which are compiled or interpreted.

Files:

  • tests/unittest/_torch/multi_gpu/test_alltoall.py
  • cpp/tensorrt_llm/thop/alltoallOp.cpp
  • tensorrt_llm/_torch/distributed/ops.py
  • tensorrt_llm/_torch/distributed/__init__.py
**/*.{cpp,h,hpp,cc,cxx}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.{cpp,h,hpp,cc,cxx}: Closing braces of namespaces should have a comment saying the namespace it closes (e.g., } // namespace foo).
Prefer const or constexpr variables over #defines whenever possible.
A variable that is not modified after its initialization should be declared as const.
Except 0 (used for checking signness/existence/emptiness), nullptr, true, false, all other literals should only be used for variable initialization.
Use the Allman indentation style for braces in C++ code.
Put the semicolon for an empty for or while loop in a new line.
The statement forming the body of a switch, while, do..while, or for statement shall be a compound statement (use brace-delimited statements).
If and else should always be followed by brace-delimited statements, even if empty or a single statement.
C++ filenames should use camel case with the first letter lowercase (e.g., thisIsAFilename.cpp), and all files involved in a compilation target must have case-insensitive unique filenames.
All types (including class names) should use camel case with uppercase first letter (e.g., FooBarClass).
Local variables, methods, and namespaces should use camel case with first letter lowercase (e.g., localFooBar).
Non-magic-number global variables that are non-static and not defined in anonymous namespace should use camel case prefixed by 'g' (e.g., gDontUseGlobalFoos).
Non-magic-number global variables that are static or defined in an anonymous namespace should use camel case prefixed by 's' (e.g., sMutableStaticGlobal).
Locally visible static variables should use camel case with lowercase prefix 's' as the first letter (e.g., static std::once_flag sFlag;).
Class member variables should use camel case prefixed with 'm' (e.g., mNbFooValues). Public member variables do not require the 'm' prefix but it is encouraged for clarity.
Enumerations, global constants, static constants at class-scope, and function-scope magic-number/literal constants should be uppercase snake case with prefix...

Files:

  • cpp/tensorrt_llm/thop/alltoallOp.cpp
🪛 Ruff (0.12.2)
tensorrt_llm/_torch/distributed/ops.py

247-247: Line too long (131 > 120)

(E501)

🔇 Additional comments (16)
cpp/tensorrt_llm/thop/CMakeLists.txt (2)

1-2: LGTM! Copyright header is properly formatted.

The copyright header follows the NVIDIA standard format and includes the current year (2024).


43-43: LGTM! Build integration is correct.

The alltoallOp.cpp file is properly added to the th_common shared library source list, maintaining alphabetical ordering between allreduceOp.cpp and attentionOp.cpp.

tensorrt_llm/_torch/distributed/__init__.py (2)

5-6: LGTM! Import statement correctly updated.

The alltoall function is properly imported from the .ops module alongside other collective operations.


10-10: LGTM! Public API export properly maintained.

The alltoall function is correctly added to the __all__ list, maintaining alphabetical ordering between allgather and reducescatter.

tensorrt_llm/_torch/distributed/ops.py (3)

249-268: LGTM! Input validation is comprehensive.

The function properly validates:

  • Group size and uniqueness
  • Single tensor vs. list handling
  • Type checking for dims and new_dims parameters
  • Matching lengths between inputs, dims, and new_dims

269-284: LGTM! Tensor splitting and preparation logic is correct.

The code properly:

  • Validates divisibility of the split dimension by number of ranks
  • Splits each input tensor into equal parts
  • Ensures tensors are contiguous for correct data pointer passing to the C++ op

285-303: LGTM! Output reconstruction logic is well-implemented.

The function correctly:

  • Calls the backend torch op with proper arguments
  • Reconstructs outputs per input tensor
  • Handles both concatenation (when new_dims[i] is None) and stacking (when specified); see the shape illustration below
  • Returns single tensor or list based on input type
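
As a concrete illustration of the two modes (assuming 4 ranks and per-rank chunks of shape [32, 4096]; the numbers are hypothetical):

    # new_dims[i] is None -> torch.cat(chunks, dim=dims[i])  -> shape [128, 4096]
    # new_dims[i] == 0    -> torch.stack(chunks, dim=0)      -> shape [4, 32, 4096]
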
cpp/tensorrt_llm/thop/alltoallOp.cpp (5)

1-16: LGTM! Copyright header follows NVIDIA standards.

The copyright header correctly includes the current year (2024) and follows the required format for TensorRT-LLM Open Source Software.


40-47: LGTM! Constructor properly follows RAII pattern.

The AllToAllOp class constructor correctly uses std::move for efficiency and follows the initialization list pattern.


97-113: LGTM! Public function implementation is clean and correct.

The alltoall function properly:

  • Converts the Python list to std::set for unique rank handling
  • Creates and initializes the AllToAllOp object
  • Returns input_list.vec() as fallback when ENABLE_MULTI_DEVICE is not defined

117-125: LGTM! PyTorch bindings are correctly implemented.

The Torch library declarations and implementations follow the standard pattern for PyTorch extensions.


33-35: Namespace closing-brace comment is present.

According to the coding guidelines, closing braces of namespaces should carry a comment naming the namespace (e.g., } // namespace torch_ext). The file already includes this comment at line 115, so no change is needed.

tests/unittest/_torch/multi_gpu/test_alltoall.py (4)

41-49: LGTM! Helper function is well-structured.

The run_single_rank function properly sets up the CUDA device and handles exceptions with traceback printing.


52-86: LGTM! Comprehensive test validation logic.

The run_alltoall_op function thoroughly validates (a condensed sketch follows the list):

  • Tensor movement to GPU
  • Output tensor count, dtype, device, and shape
  • Content correctness using torch.allclose
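
A condensed sketch of those checks (variable names are illustrative, not the actual test code):

    assert len(outputs) == len(expected_outputs)
    for out, ref in zip(outputs, expected_outputs):
        assert out.dtype == ref.dtype
        assert out.device == ref.device
        assert out.shape == ref.shape
        assert torch.allclose(out, ref)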

148-161: LGTM! 2-GPU test is well-parameterized.

The test properly:

  • Skips when insufficient GPUs are available
  • Tests various sequence lengths and hidden sizes
  • Covers different dimension configurations

164-175: LGTM! 4-GPU test extends coverage appropriately.

The test adds coverage for 4-GPU scenarios with different parameter ranges to ensure scalability.

@MatthiasKohl MatthiasKohl requested a review from a team as a code owner August 12, 2025 07:04
@MatthiasKohl MatthiasKohl requested a review from litaotju August 12, 2025 07:04
@tensorrt-cicd
Collaborator

PR_Github #14921 [ run ] triggered by Bot

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🔭 Outside diff range comments (1)
tensorrt_llm/_torch/custom_ops/cpp_custom_ops.py (1)

1-1: Missing NVIDIA copyright header with current year.

Per repository guidelines, Python sources must include an NVIDIA copyright header that includes the current year.

Add the standard header at the top of the file:

+# Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 from typing import List, Optional
🧹 Nitpick comments (1)
tensorrt_llm/_torch/custom_ops/cpp_custom_ops.py (1)

534-539: Optionally add clearer error messages and minimal validation for maintainability.

The added asserts are fine; consider slightly more descriptive messages to ease debugging of fake-mode trace failures.

If you prefer explicit exceptions (consistent with other fakes that use assert), you can keep asserts but include messages as in the previous diff. Otherwise, this minimal variant keeps the same structure:

-    assert len(input_list) > 0
-    assert len(input_list) == len(group)
+    assert len(input_list) > 0, "input_list must be non-empty"
+    group_size = len(group)
+    assert group_size > 0, "group must be non-empty"
+    assert len(input_list) % group_size == 0, "len(input_list) % len(group) must be 0"
+    if num_lists is not None:
+        assert len(input_list) == group_size * int(num_lists), \
+            "len(input_list) must equal len(group) * num_lists"
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f7d5d75 and 9effcd8.

📒 Files selected for processing (1)
  • tensorrt_llm/_torch/custom_ops/cpp_custom_ops.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.py: Python code should conform to Python 3.8+.
Indent Python code with 4 spaces. Do not use tabs.
Always maintain the namespace when importing in Python, even if only one class or function from a module is used.
Python filenames should use snake_case (e.g., some_file.py).
Python classes should use PascalCase (e.g., class SomeClass).
Python functions and methods should use snake_case (e.g., def my_awesome_function():).
Python local variables should use snake_case. Prefix k for variable names that start with a number (e.g., k_99th_percentile).
Python global variables should use upper snake_case and prefix G (e.g., G_MY_GLOBAL).
Python constants should use upper snake_case (e.g., MY_CONSTANT).
Avoid shadowing variables declared in an outer scope in Python.
Initialize all externally visible members of a Python class in the constructor.
For interfaces that may be used outside a Python file, prefer docstrings over comments.
Comments in Python should be reserved for code within a function, or interfaces that are local to a file.
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx.
Attributes and variables in Python can be documented inline; attribute docstrings will be rendered under the class docstring.
Avoid using reflection in Python when functionality can be easily achieved without it.
When using try-except blocks in Python, limit the except to the smallest set of errors possible.
When using try-except blocks to handle multiple possible variable types in Python, keep the body of the try as small as possible, using the else block to implement the logic.

Files:

  • tensorrt_llm/_torch/custom_ops/cpp_custom_ops.py
**/*.{cpp,h,hpp,cc,cxx,cu,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code should contain an NVIDIA copyright header that includes the current year. This includes .cpp, .h, .cu, .py, and any other source files which are compiled or interpreted.

Files:

  • tensorrt_llm/_torch/custom_ops/cpp_custom_ops.py

@MatthiasKohl MatthiasKohl force-pushed the user/mjoux/add-alltoall branch from 747fb70 to e4a4502 on August 12, 2025 07:32
@tensorrt-cicd
Collaborator

PR_Github #14921 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11263 completed with status: 'FAILURE'

@MatthiasKohl
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #15002 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15002 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11329 completed with status: 'FAILURE'

Collaborator

@brb-nv brb-nv left a comment

Comments are mostly questions I'm asking for my own understanding.

@MatthiasKohl MatthiasKohl force-pushed the user/mjoux/add-alltoall branch from 0ce6893 to e616d33 on August 13, 2025 05:54
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
tests/unittest/_torch/multi_gpu/test_alltoall.py (1)

103-114: Add a brief comment clarifying the per-rank slicing logic

This loop builds the per-destination slices (what each peer r should receive). A short inline comment will improve readability for future maintainers.

Apply this diff:

-            for r in range(world_size):
+            # Build the list of slices this rank would send to each peer r
+            for r in range(world_size):
                 idx = [slice(None)] * len(shape)
                 split = shape[d] // world_size
                 idx[d] = slice(r * split, (r + 1) * split)
                 send_tensors.append(tensor[idx])
🧹 Nitpick comments (6)
tensorrt_llm/_torch/distributed/ops.py (3)

234-246: Clarify preconditions in docstring (divisibility and homogeneity across ranks)

Make the invariants explicit: split dimension must be divisible by group size, and all per-rank tensors for a given input must have identical shape/dtype/device.

Apply this diff to extend the docstring:

     '''
     Add an operation that performs a collective all-to-all across both TP and CP groups.
     The operation is implemented using a torch op that wraps a NCCL group call of a series of
     NCCL send/recv operations to implement the all-to-all. See the following materials for details.
     https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/p2p.html#all-to-all,
     https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/api/group.html.
     Args:
         inputs (Union[Tensor, List[Tensor]]): The input tensor or tensor list.
         group (List[int]): The group of ranks to participate in the all-to-all.
         dims (Union[int, List[int]]): Split along given dimension (per tensor). By default -1.
         new_dims (Union[Optional[int], List[Optional[int]]]): The dimension to stack the splits along (per tensor).
             If None (default), the splits are concatenated along dimension given by `dims`.
+    Preconditions:
+        - For each input i, inputs[i].shape[dims[i]] must be divisible by len(group).
+        - For a given i, all ranks must provide tensors with identical shape, dtype, and device.
     Returns:
         The tensor when combining all splits from all participating ranks,
         or a list of tensors if `inputs` is a list of tensors.
     '''

250-269: Raise explicit exceptions instead of using asserts for user input validation

Asserts can be stripped with Python -O and produce less informative errors. Use ValueError/TypeError for robust, actionable feedback (consistent with public APIs).

Apply this diff:

-    n_ranks = len(group)
-    if n_ranks == 1:
-        return inputs
-
-    assert n_ranks > 0, "group must be non-empty"
-    assert n_ranks == len(set(group)), "group must be unique"
+    n_ranks = len(group)
+    if n_ranks == 1:
+        return inputs
+    if n_ranks <= 0:
+        raise ValueError("group must be non-empty")
+    if len(set(group)) != n_ranks:
+        raise ValueError("group must contain unique ranks")
     is_single_tensor = isinstance(inputs, torch.Tensor)
 
     if is_single_tensor:
-        assert isinstance(dims, int)
-        assert new_dims is None or isinstance(new_dims, int)
+        if not isinstance(dims, int):
+            raise TypeError("dims must be an int when inputs is a Tensor")
+        if new_dims is not None and not isinstance(new_dims, int):
+            raise TypeError("new_dims must be an int or None when inputs is a Tensor")
         inputs = [inputs]
         new_dims = [new_dims]
         dims = [dims]
-    assert len(dims) == len(inputs)
-    assert all(isinstance(dim, int) for dim in dims)
-    assert len(new_dims) == len(inputs)
-    assert all(new_dim is None or isinstance(new_dim, int)
-               for new_dim in new_dims)
+    if len(dims) != len(inputs):
+        raise ValueError("len(dims) must match len(inputs)")
+    if not all(isinstance(dim, int) for dim in dims):
+        raise TypeError("dims must contain ints")
+    if len(new_dims) != len(inputs):
+        raise ValueError("len(new_dims) must match len(inputs)")
+    if not all(nd is None or isinstance(nd, int) for nd in new_dims):
+        raise TypeError("new_dims must contain ints or None")

272-276: Improve error message on non-divisible splits and guard zero-size dims

Add a clearer error and consider guarding size_per_rank == 0 to fail fast with actionable context.

Apply this diff:

-    for inp, dim in zip(inputs, dims):
-        size_per_rank, rem = divmod(inp.shape[dim], n_ranks)
-        assert rem == 0, \
-            f"input.shape[{dim}] must be divisible by n_ranks ({n_ranks}), but got shape {inp.shape}"
+    for inp, dim in zip(inputs, dims):
+        size = inp.shape[dim]
+        size_per_rank, rem = divmod(size, n_ranks)
+        if rem != 0 or size_per_rank == 0:
+            raise ValueError(
+                f"inputs.shape[{dim}]={size} must be a positive multiple of len(group)={n_ranks}; got shape {tuple(inp.shape)}"
+            )
tests/unittest/_torch/multi_gpu/test_alltoall.py (3)

59-60: Avoid duplicate device selection

set_device is already called in run_single_rank; repeating it here is redundant.

Apply this diff:

-    torch.cuda.set_device(rank)

93-95: Unclear assertion — refine or remove

The assert enforces that if dims is a list, it must have length > 1. That’s a test harness artifact, not a correctness requirement. Suggest removing or replacing with a stricter check that dims and dtypes/new_dims lengths agree when lists are used.

Apply this diff:

-    assert not isinstance(dims, list) or len(dims) > 1
-    num_lists = len(dims) if isinstance(dims, list) else 1
+    num_lists = len(dims) if isinstance(dims, list) else 1
+    if isinstance(dims, list):
+        assert isinstance(new_dims, list) and len(new_dims) == num_lists

148-176: Broaden test coverage (optional): negative dims and single-rank group

Consider adding:

  • Cases where dims/new_dims use negative indices.
  • A trivial-case group with a single rank (alltoall should return inputs unchanged).

If helpful, I can draft an additional param set and a small test wrapper using the existing harness to cover these.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0ce6893 and e616d33.

📒 Files selected for processing (6)
  • cpp/tensorrt_llm/thop/CMakeLists.txt (1 hunks)
  • cpp/tensorrt_llm/thop/alltoallOp.cpp (1 hunks)
  • tensorrt_llm/_torch/custom_ops/cpp_custom_ops.py (1 hunks)
  • tensorrt_llm/_torch/distributed/__init__.py (1 hunks)
  • tensorrt_llm/_torch/distributed/ops.py (1 hunks)
  • tests/unittest/_torch/multi_gpu/test_alltoall.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • tensorrt_llm/_torch/custom_ops/cpp_custom_ops.py
  • tensorrt_llm/_torch/distributed/__init__.py
  • cpp/tensorrt_llm/thop/alltoallOp.cpp
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+
Python indentation: 4 spaces, no tabs
Maintain module namespace in imports (from package.subpackage import foo; then use foo.SomeClass())
Python file names use snake_case
Python class names use PascalCase
Python functions/methods and local variables use snake_case; variables starting with a number get k_ prefix (e.g., k_99th_percentile)
Global variables use G_ prefixed UPPER_SNAKE_CASE (e.g., G_MY_GLOBAL)
Constants use UPPER_SNAKE_CASE in Python
Avoid shadowing variables from outer scopes in Python
Initialize all externally visible members of a Python class in __init__
Prefer docstrings for interfaces used outside a file; comments for local code
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes/variables inline with short docstrings
Avoid reflection when simple alternatives exist (e.g., prefer explicit parameters over dict(**locals()))
In try/except, catch the narrowest exceptions possible
For duck-typing with try/except, keep try body minimal and put logic in else

Files:

  • tests/unittest/_torch/multi_gpu/test_alltoall.py
  • tensorrt_llm/_torch/distributed/ops.py
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • tests/unittest/_torch/multi_gpu/test_alltoall.py
  • tensorrt_llm/_torch/distributed/ops.py
🧠 Learnings (1)
📚 Learning: 2025-08-12T10:28:57.288Z
Learnt from: CR
PR: NVIDIA/TensorRT-LLM#0
File: CODING_GUIDELINES.md:0-0
Timestamp: 2025-08-12T10:28:57.288Z
Learning: Applies to **/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py} : Prepend NVIDIA copyright header (current year) to all source files

Applied to files:

  • tests/unittest/_torch/multi_gpu/test_alltoall.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (3)
cpp/tensorrt_llm/thop/CMakeLists.txt (2)

43-43: Adds alltoallOp.cpp to th_common sources — LGTM

The build integration looks correct and keeps the new op co-located with other collectives (right after allreduceOp.cpp). No concerns here.


115-119: Compilation Safe When Multi-Device Is Disabled

All NCCL and MPI headers, types, and calls in alltoallOp.cpp are enclosed within #if ENABLE_MULTI_DEVICE ... #endif blocks. When ENABLE_MULTI_DEVICE is OFF, the free function alltoall(...) compiles to a simple fallback (return input_list.vec();) with no references to NCCL or MPI, matching the CMake guard around linking. No unresolved symbols will occur.

tests/unittest/_torch/multi_gpu/test_alltoall.py (1)

1-14: Header complies with the NVIDIA SPDX requirement — LGTM

Header is present with the 2025 year, matching organizational guidance.

Note: Verified against retrieved team learning that all source files must carry the NVIDIA header.

@MatthiasKohl
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #15080 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15080 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11385 completed with status: 'FAILURE'

@MatthiasKohl MatthiasKohl force-pushed the user/mjoux/add-alltoall branch from b1e1539 to cead8db on August 13, 2025 11:28
@MatthiasKohl
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #15131 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15131 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11425 completed with status: 'FAILURE'

@MatthiasKohl
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #15306 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15306 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11557 completed with status: 'FAILURE'

@lfr-0531 lfr-0531 requested a review from zongfeijing August 15, 2025 02:25
@MatthiasKohl MatthiasKohl force-pushed the user/mjoux/add-alltoall branch from 31f0838 to e7bffad on September 23, 2025 10:14
@MatthiasKohl
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #19696 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #19696 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #14824 completed with status: 'FAILURE'

@brb-nv
Collaborator

brb-nv commented Sep 23, 2025

/bot run --disable-fail-fast

@brb-nv brb-nv enabled auto-merge (squash) September 23, 2025 16:28
@tensorrt-cicd
Collaborator

PR_Github #19708 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #19708 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #14833 completed with status: 'FAILURE'

@MatthiasKohl
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #19806 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #19806 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #14901 completed with status: 'FAILURE'

@brb-nv
Collaborator

brb-nv commented Sep 24, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #19834 [ run ] triggered by Bot

@brb-nv
Collaborator

brb-nv commented Sep 24, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #19843 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #19834 [ run ] completed with state ABORTED
LLM/main/L0_MergeRequest_PR #14924 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd
Collaborator

PR_Github #19843 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #14932 completed with status: 'FAILURE'

@brb-nv
Collaborator

brb-nv commented Sep 25, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #19874 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #19874 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #14956 completed with status: 'SUCCESS'

@brb-nv brb-nv merged commit eda1467 into NVIDIA:main Sep 25, 2025
5 checks passed
MatthiasKohl added a commit to MatthiasKohl/TensorRT-LLM that referenced this pull request Sep 29, 2025
MatthiasKohl added a commit to MatthiasKohl/TensorRT-LLM that referenced this pull request Sep 30, 2025
MatthiasKohl added a commit to MatthiasKohl/TensorRT-LLM that referenced this pull request Sep 30, 2025
MatthiasKohl added a commit to MatthiasKohl/TensorRT-LLM that referenced this pull request Sep 30, 2025