PyTorch 2.0 is the latest PyTorch version. It offers the same eager-mode development experience while adding a compiled mode via torch.compile. This compiled mode has the potential to speed up your models during training and inference.
</li>
<li><b>Why 2.0 instead of 1.14? </b><br>
PyTorch 2.0 is what 1.14 would have been. We were releasing substantial new features that we believe meaningfully change how you use PyTorch, so we are calling it 2.0 instead.
</li>
<li> <b>Is 2.0 code backwards-compatible with 1.X? </b><br>
Yes, using 2.0 will not require you to modify your PyTorch workflows. A single line of code, <code class="language-plaintext highlighter-rouge">model = torch.compile(model)</code>, can optimize your model to use the 2.0 stack and run smoothly with the rest of your PyTorch code. This is completely opt-in; you are not required to use the new compiler.
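As a minimal sketch of this opt-in (the toy model here is illustrative, and <code class="language-plaintext highlighter-rouge">backend="eager"</code> is used only so the sketch runs without a compiler toolchain; the default backend is "inductor"):

```python
import torch

# A plain PyTorch 1.x-style model: nothing in its definition changes.
model = torch.nn.Sequential(
    torch.nn.Linear(8, 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 4),
)

x = torch.randn(2, 8)
eager_out = model(x)

# The single opt-in line; backend="eager" is a debug backend used here
# only to keep this sketch portable (the default is "inductor").
compiled = torch.compile(model, backend="eager")
compiled_out = compiled(x)

# The compiled model is a drop-in replacement for the eager one.
torch.testing.assert_close(eager_out, compiled_out)
```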
</li>
<li><b>Is 2.0 enabled by default?</b><br>
No, you must explicitly enable 2.0 in your PyTorch code by optimizing your model with a single function call.
</li>
<li><b>How do I migrate my PT1.X code to PT2.0?</b><br>
Your code should be working as-is without the need for any migrations. If you want to use the new Compiled mode feature introduced in 2.0, then you can start by optimizing your model with one line:
<code class="language-plaintext highlighter-rouge">model = torch.compile(model)</code> While the speedups are primarily observed during training, you can also use it for inference if your model runs faster than eager mode.
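To make the one-line migration concrete, here is an illustrative sketch of a PT1.X-style training loop where only the torch.compile line is new (the model, optimizer, and data are made up; <code class="language-plaintext highlighter-rouge">backend="eager"</code> keeps the sketch portable, while the default "inductor" backend is what delivers speedups):

```python
import torch

# Illustrative PT1.X-style code: everything below except the
# torch.compile line is unchanged from a 1.x workflow.
torch.manual_seed(0)
model = torch.nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# The one-line migration.
model = torch.compile(model, backend="eager")

x, y = torch.randn(16, 4), torch.randn(16, 1)
for _ in range(3):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()  # backward runs through the compiled model too
    opt.step()
```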
</li>
<li><b>Why should I use PT2.0 instead of PT 1.X? </b><br>
See the answer to Question (2).
</li>
<li><b>Are there any applications where I should NOT use PT 2.0?</b><br>
The current release of PT 2.0 is still experimental and in the nightlies. Dynamic-shapes support in torch.compile is still early; you should not use it yet, and should wait until the Stable 2.0 release lands in March 2023.
That said, even with static-shaped workloads, we’re still building Compiled mode and there might be bugs. Disable Compiled mode for parts of your code that are crashing, and raise an <a href="https://github.com/pytorch/pytorch/issues" target="_blank">issue</a> (if it isn’t raised already).
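One way to disable Compiled mode for just one part of your code is the <code class="language-plaintext highlighter-rouge">torch._dynamo.disable</code> decorator (an internal API at the time of the 2.0 preview, so subject to change). A hedged sketch, where the "problematic" helper is hypothetical:

```python
import torch
import torch._dynamo

# Hypothetical helper that, for the sake of the example, we pretend
# crashes under compilation; excluding it keeps the rest compiled.
@torch._dynamo.disable
def problematic_part(x):
    return x.relu()

def fn(x):
    x = x * 2                 # traced and compiled
    x = problematic_part(x)   # always runs in plain eager mode
    return x + 1              # traced and compiled

compiled = torch.compile(fn, backend="eager")
out = compiled(torch.ones(3))
```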
</li>
<li><b>What is my code doing differently when running PyTorch 2.0? </b>
Out of the box, PyTorch 2.0 is the same as PyTorch 1.x: your models run in eager mode, i.e. every line of Python is executed one after the other. <br>
In 2.0, if you wrap your model in <code class="language-plaintext highlighter-rouge">model = torch.compile(model)</code>, your model goes through 3 steps before execution: <br>
</li>
<li><b>What new components does PT2.0 add to PT?</b><br>
<ul>
<li><strong>TorchDynamo</strong> generates FX Graphs from Python bytecode. It maintains the eager-mode capabilities using <a href="https://pytorch.org/docs/master/dynamo/guards-overview.html#caching-and-guards-overview" target="_blank">guards</a> to ensure the generated graphs are valid (<a href="https://dev-discuss.pytorch.org/t/torchdynamo-an-experiment-in-dynamic-python-bytecode-transformation/361" target="_blank">read more</a>)</li>
<li><strong>AOTAutograd</strong> generates the backward graph corresponding to the forward graph captured by TorchDynamo (<a href="https://dev-discuss.pytorch.org/t/tracing-with-primitives-update-2/645" target="_blank">read more</a>)</li>
</ul>
</li>
<li> <b> How can I learn more about PT2.0 developments?</b>
<p>The most likely reason for performance hits is too many graph breaks. For instance, something as innocuous as a print statement in your model’s forward triggers a graph break. We have ways to diagnose these; read more <a href="https://github.com/pytorch/torchdynamo/blob/main/documentation/FAQ.md#why-am-i-not-seeing-speedups" target="_blank">here</a> and <a href="https://pytorch.org/docs/master/dynamo/faq.html#why-is-my-code-crashing" target="_blank">here</a>.</p>
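To make the graph-break point concrete, here is a minimal, illustrative sketch (the function is made up, and <code class="language-plaintext highlighter-rouge">backend="eager"</code> is used only so the sketch runs anywhere). The print statement cannot be traced by TorchDynamo, so the captured graph is split at that point; the result is still correct, just potentially slower:

```python
import torch

def forward(x):
    x = x * 2
    # A seemingly innocuous print: TorchDynamo cannot trace it, so it
    # splits the captured graph here ("graph break") and falls back to
    # eager execution for this line.
    print("debug:", x.shape)
    return x + 1

compiled = torch.compile(forward, backend="eager")
out = compiled(torch.ones(3))  # still correct, just split into two graphs
```

Removing the print lets TorchDynamo capture the function as a single graph.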
</li>
<li> <b>Help! My code is running slower with 2.0’s Compiled Mode</b>
<p>The most likely reason for performance hits is too many graph breaks. For instance, something as innocuous as a print statement in your model’s forward triggers a graph break. We have ways to diagnose these; read more <a href="https://pytorch.org/docs/master/dynamo/faq.html#why-am-i-not-seeing-speedups" target="_blank">here</a>.</p>
</li>
<li> <b> My previously-running code is crashing with 2.0! How do I debug it?</b>
<p>Here are some techniques to triage where your code might be failing and print helpful logs: <a href="https://github.com/pytorch/torchdynamo/blob/main/documentation/FAQ.md#why-is-my-code-crashing" target="_blank">https://github.com/pytorch/torchdynamo/blob/main/documentation/FAQ.md#why-is-my-code-crashing</a></p>
</li>
</ol>