
Commit 6cd0e06

Brianjo torchrec update (#1838)
* Update torchrec_tutorial.rst: some formatting fixes.
* Update index.rst
* Add files via upload: adds torchrec thumbnail.
* Add files via upload
* Add files via upload
* Update index.rst
* Delete 60-min-blitz.png
* Update torchrec_tutorial.rst
* Update torchrec_tutorial.rst
1 parent 9fa3081 commit 6cd0e06

File tree: 3 files changed, +33 -16 lines changed


_static/img/thumbnails/torchrec.png

26.1 KB

index.rst

+17
@@ -575,6 +575,15 @@ Welcome to PyTorch Tutorials
    :link: beginner/deeplabv3_on_android.html
    :tags: Mobile
 
+.. Recommendation Systems
+
+.. customcarditem::
+   :header: Introduction to TorchRec
+   :card_description: TorchRec is a PyTorch domain library built to provide common sparsity & parallelism primitives needed for large-scale recommender systems.
+   :image: _static/img/thumbnails/torchrec.png
+   :link: intermediate/torchrec_tutorial.html
+   :tags: TorchRec,Recommender
+
 .. End of tutorial card section
 
 .. raw:: html
@@ -832,3 +841,11 @@ Additional Resources
 
    beginner/deeplabv3_on_ios
    beginner/deeplabv3_on_android
+
+.. toctree::
+   :maxdepth: 2
+   :includehidden:
+   :hidden:
+   :caption: Recommendation Systems
+
+   intermediate/torchrec_tutorial

intermediate_source/torchrec_tutorial.rst

+16 -16 lines changed
@@ -1,5 +1,5 @@
-Introduction to Torchrec
-====================================================
+Introduction to TorchRec
+========================
 
 .. tip::
    To get the most of this tutorial, we suggest using this
@@ -12,34 +12,34 @@ AI’s `Deep learning recommendation
 model <https://arxiv.org/abs/1906.00091>`__, or DLRM. As the number of
 entities grow, the size of the embedding tables can exceed a single
 GPU’s memory. A common practice is to shard the embedding table across
-devices, a type of model parallelism. To that end, **torchRec introduces
+devices, a type of model parallelism. To that end, TorchRec introduces
 its primary API
-called** |DistributedModelParallel|_ **,
-or DMP. Like pytorch’s DistributedDataParallel, DMP wraps a model to
-enable distributed training.**
+called |DistributedModelParallel|_,
+or DMP. Like PyTorch’s DistributedDataParallel, DMP wraps a model to
+enable distributed training.
 
-**Installation**
---------------------
+Installation
+------------
 
 Requirements:
 - python >= 3.7
 
-We highly recommend CUDA when using torchRec. If using CUDA:
+We highly recommend CUDA when using TorchRec. If using CUDA:
 - cuda >= 11.0
 
 
 .. code:: shell
 
    # install pytorch with cudatoolkit 11.3
    conda install pytorch cudatoolkit=11.3 -c pytorch-nightly -y
-   # install torchrec
+   # install TorchRec
    pip3 install torchrec-nightly
 
 
-**Overview**
-------------
+Overview
+--------
 
-This tutorial will cover three pieces of torchRec - the ``nn.module`` |EmbeddingBagCollection|_, the |DistributedModelParallel|_ API, and
+This tutorial will cover three pieces of TorchRec - the ``nn.module`` |EmbeddingBagCollection|_, the |DistributedModelParallel|_ API, and
 the datastructure |KeyedJaggedTensor|_.
 
 
@@ -75,7 +75,7 @@ GPU.
 From EmbeddingBag to EmbeddingBagCollection
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Pytorch represents embeddings through |torch.nn.Embedding|_ and |torch.nn.EmbeddingBag|_.
+PyTorch represents embeddings through |torch.nn.Embedding|_ and |torch.nn.EmbeddingBag|_.
 EmbeddingBag is a pooled version of Embedding.
 
 TorchRec extends these modules by creating collections of embeddings. We
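For context on the hunk above: EmbeddingBag pools the rows of an Embedding into one vector per example. A minimal sketch in plain PyTorch (the ID values and table sizes are illustrative, not taken from the tutorial):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# EmbeddingBag is a pooled version of Embedding: it looks up several
# rows of the weight table and reduces them (here by summation).
emb = nn.Embedding(num_embeddings=10, embedding_dim=4)
bag = nn.EmbeddingBag(num_embeddings=10, embedding_dim=4, mode="sum")
bag.weight = emb.weight  # share weights so the two modules are comparable

ids = torch.tensor([[1, 2, 3]])   # one example with three entity IDs
pooled = bag(ids)                 # shape (1, 4): the three rows, summed
manual = emb(ids).sum(dim=1)      # the same pooling done by hand
assert torch.allclose(pooled, manual)
```

TorchRec's EmbeddingBagCollection then groups several such bags, one table per feature, behind a single module; the tutorial's own code blocks show its actual constructor.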
@@ -121,7 +121,7 @@ Now, we’re ready to wrap our model with |DistributedModelParallel|_ (DMP). Ins
 embedding table on the appropriate device(s).
 
 In this toy example, since we have two EmbeddingTables and one GPU,
-torchRec will place both on the single GPU.
+TorchRec will place both on the single GPU.
 
 .. code:: python
 
@@ -161,7 +161,7 @@ Representing minibatches with KeyedJaggedTensor
 
 We need an efficient representation of multiple examples of an arbitrary
 number of entity IDs per feature per example. In order to enable this
-“jagged” representation, we use the torchRec datastructure
+“jagged” representation, we use the TorchRec datastructure
 |KeyedJaggedTensor|_ (KJT).
 
 Let’s take a look at **how to lookup a collection of two embedding
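The “jagged” idea the hunk above describes can be sketched without TorchRec: flatten the per-example ID lists into one values tensor plus a lengths tensor (KeyedJaggedTensor additionally keys this layout per feature; the tensor contents below are illustrative):

```python
import torch

# Three examples with 3, 1, and 2 entity IDs respectively, flattened into
# a single values tensor; lengths records how many IDs each example owns.
values = torch.tensor([1, 2, 3, 7, 7, 9])
lengths = torch.tensor([3, 1, 2])

# Offsets (the cumulative sum of lengths) recover each example's slice.
offsets = torch.cat([torch.zeros(1, dtype=torch.long), lengths.cumsum(0)])
examples = [values[offsets[i]:offsets[i + 1]] for i in range(len(lengths))]
# examples now holds the three ragged ID lists, e.g. the first is [1, 2, 3]
```

Conveniently, ``torch.nn.EmbeddingBag`` accepts exactly this flattened form, a 1-D ``input`` plus per-example ``offsets``, which is why the representation composes naturally with pooled embedding lookups.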

0 commit comments
