
Commit eec1722

updating css
1 parent: f08147a

2 files changed: +64, -69 lines


_get_started/pytorch.md

+17 -69
@@ -445,11 +445,11 @@ After all, we can’t claim we’ve created a breadth-first unless **YOUR** mode
 <h2 id="faqs" style="text-transform: none">FAQs<a class="anchorjs-link " href="#faqs" aria-label="Anchor" data-anchorjs-icon="" style="font: 1em / 1 anchorjs-icons; padding-left: 0.375em;"></a></h2>
 
 <ol>
-<li> <b> What is PT 2.0?</b> <br>
+<li><b>What is PT 2.0?</b><br>
 2.0 is the latest PyTorch version. PyTorch 2.0 offers the same eager-mode development experience, while adding a compiled mode via torch.compile. This compiled mode has the potential to speed up your models during training and inference.
 </li>
 
-<li> <b>Why 2.0 instead of 1.14? </b> <br>
+<li><b>Why 2.0 instead of 1.14?</b><br>
 PyTorch 2.0 is what 1.14 would have been. We were releasing substantial new features that we believe change how you meaningfully use PyTorch, so we are calling it 2.0 instead.
 </li>
 
@@ -462,29 +462,29 @@ PyTorch 2.0 is what 1.14 would have been. We were releasing substantial new feat
 
 <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>pip3 install numpy --pre torch[dynamo] torchvision torchaudio --force-reinstall --extra-index-url https://download.pytorch.org/whl/nightly/cu117
 
-</code></pre></div> </div>
+</code></pre></div></div>
 
 <p>CUDA 11.6</p>
 
 <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>pip3 install numpy --pre torch[dynamo] torchvision torchaudio --force-reinstall --extra-index-url https://download.pytorch.org/whl/nightly/cu116
 
-</code></pre></div> </div>
+</code></pre></div></div>
 
 <p>CPU</p>
 
 <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>pip3 install numpy --pre torch torchvision torchaudio --force-reinstall --extra-index-url https://download.pytorch.org/whl/nightly/cpu
 
-</code></pre></div> </div> </li>
+</code></pre></div></div></li>
 
-<li> <b>Is 2.0 code backwards-compatible with 1.X? </b> <br>
+<li><b>Is 2.0 code backwards-compatible with 1.X?</b><br>
 Yes, using 2.0 will not require you to modify your PyTorch workflows. A single line of code <code class="language-plaintext highlighter-rouge">model = torch.compile(model)</code> can optimize your model to use the 2.0 stack, and smoothly run with the rest of your PyTorch code. This is completely opt-in, and you are not required to use the new compiler.
 </li>
 
-<li><b>Is 2.0 enabled by default?</b> <br>
+<li><b>Is 2.0 enabled by default?</b><br>
 No, you must explicitly enable 2.0 in your PyTorch code by optimizing your model with a single function call.
 </li>
 
-<li> <b>How do I migrate my PT1.X code to PT2.0?</b> <br>
+<li><b>How do I migrate my PT1.X code to PT2.0?</b><br>
 Your code should be working as-is without the need for any migrations. If you want to use the new Compiled mode feature introduced in 2.0, then you can start by optimizing your model with one line:
 <code class="language-plaintext highlighter-rouge">model = torch.compile(model)</code> While the speedups are primarily observed during training, you can also use it for inference if your model runs faster than eager mode.
 
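The one-line opt-in described in the answers above can be exercised end to end. The following is a minimal, illustrative sketch, runnable on a 2.0 nightly installed with the commands above; the toy model, shapes, and optimizer are assumptions for the example, not part of this page:

import torch
import torch.nn as nn

# Toy model and data, purely illustrative; any existing PT 1.x model works unchanged.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# The single opt-in line from the FAQ: wraps the model in the 2.0 compiled stack.
model = torch.compile(model)

x, target = torch.randn(16, 64), torch.randint(0, 10, (16,))
loss = nn.functional.cross_entropy(model(x), target)  # first call compiles; later calls reuse the compiled graph
loss.backward()
optimizer.step()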
@@ -502,18 +502,18 @@ return model(\*\*input)
 
 </li>
 
-<li> <b> Why should I use PT2.0 instead of PT 1.X? </b> <br>
+<li><b>Why should I use PT2.0 instead of PT 1.X?</b><br>
 See the answer to Question (2).
 </li>
 
-<li> <b> Are there any applications where I should NOT use PT 2.0?</b> <br>
+<li><b>Are there any applications where I should NOT use PT 2.0?</b><br>
 The current release of PT 2.0 is still experimental and in the nightlies. Dynamic shapes support in torch.compile is still early; you should not use it yet, and should wait until the Stable 2.0 release lands in March 2023.
 
 That said, even with static-shaped workloads, we’re still building Compiled mode and there might be bugs. Disable Compiled mode for parts of your code that are crashing, and raise an <a href="https://github.com/pytorch/pytorch/issues" target="_blank">issue</a> (if it isn’t raised already).
 
 </li>
 
-<li> <b> What is my code doing differently when running PyTorch 2.0? </b>
+<li><b>What is my code doing differently when running PyTorch 2.0?</b>
 Out of the box, PyTorch 2.0 is the same as PyTorch 1.x: your models run in eager mode, i.e. every line of Python is executed one after the other. <br>
 
 In 2.0, if you wrap your model in `model = torch.compile(model)`, your model goes through 3 steps before execution: <br>
@@ -527,7 +527,7 @@ In 2.0, if you wrap your model in `model = torch.compile(model)`, your model goe
 
 </li>
 
-<li> <b>What new components does PT2.0 add to PT?</b> <br>
+<li><b>What new components does PT2.0 add to PT?</b><br>
 <ul>
 <li><strong>TorchDynamo</strong> generates FX Graphs from Python bytecode. It maintains the eager-mode capabilities using <a href="https://pytorch.org/docs/master/dynamo/guards-overview.html#caching-and-guards-overview" target="_blank">guards</a> to ensure the generated graphs are valid (<a href="https://dev-discuss.pytorch.org/t/torchdynamo-an-experiment-in-dynamic-python-bytecode-transformation/361" target="_blank">read more</a>)</li>
 <li><strong>AOTAutograd</strong> to generate the backward graph corresponding to the forward graph captured by TorchDynamo (<a href="https://dev-discuss.pytorch.org/t/tracing-with-primitives-update-2/645" target="_blank">read more</a>)</li>
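These components are not invoked directly; they sit behind torch.compile. A small sketch of the pipeline in use, with the backend named explicitly only for illustration (TorchInductor is the default backend in the 2.0 stack, so the argument is redundant in practice):

import torch

def fn(x):
    return torch.sin(x) + torch.cos(x)

# TorchDynamo captures fn's bytecode into an FX graph (guards keep it valid),
# AOTAutograd derives the matching backward graph, and the backend compiles both.
compiled_fn = torch.compile(fn, backend="inductor")

x = torch.randn(8, requires_grad=True)
out = compiled_fn(x).sum()
out.backward()  # gradients flow through the AOTAutograd-generated backward graph
print(x.grad.shape)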
@@ -547,15 +547,16 @@ In 2.0, if you wrap your model in `model = torch.compile(model)`, your model goe
 </li>
 
 <li> <b> How can I learn more about PT2.0 developments?</b>
-<p>The most likely reason for performance hits is too many graph breaks. For instance, something innocuous as a print statement in your model’s forward triggers a graph break. We have ways to diagnose these - read more<a href=" https://pytorch.org/docs/master/dynamo/faq.html#why-is-my-code-crashing" target="_blank"> here</a>.</p>
+<p>The most likely reason for performance hits is too many graph breaks. For instance, something as innocuous as a print statement in your model’s forward triggers a graph break. We have ways to diagnose these - read more <a href="https://github.com/pytorch/torchdynamo/blob/main/documentation/FAQ.md#why-am-i-not-seeing-speedups" target="_blank">here</a>.</p>
+<p>The most likely reason for performance hits is too many graph breaks. For instance, something as innocuous as a print statement in your model’s forward triggers a graph break. We have ways to diagnose these - read more <a href="https://pytorch.org/docs/master/dynamo/faq.html#why-is-my-code-crashing" target="_blank">here</a>.</p>
 </li>
 
 <li> <b>Help, my code is running slower with 2.0’s Compiled Mode</b>
-<p>The most likely reason for performance hits is too many graph breaks. For instance, something innocuous as a print statement in your model’s forward triggers a graph break. We have ways to diagnose these - read more<a href="https://pytorch.org/docs/master/dynamo/faq.html#why-am-i-not-seeing-speedups" target="_blank"> here</a>.</p>
+<p>The most likely reason for performance hits is too many graph breaks. For instance, something as innocuous as a print statement in your model’s forward triggers a graph break. We have ways to diagnose these - read more <a href="https://pytorch.org/docs/master/dynamo/faq.html#why-am-i-not-seeing-speedups" target="_blank">here</a>.</p>
 </li>
 
 <li> <b> My previously-running code is crashing with 2.0! How do I debug it?</b>
-<p>Here are some techniques to triage where your code might be failing, and printing helpful logs:<a href="https://github.com/pytorch/torchdynamo/blob/main/documentation/FAQ.md#why-is-my-code-crashing" target="_blank"> https://github.com/pytorch/torchdynamo/blob/main/documentation/FAQ.md#why-is-my-code-crashing</a></p>
+<p>Here are some techniques to triage where your code might be failing, and for printing helpful logs: <a href="https://github.com/pytorch/torchdynamo/blob/main/documentation/FAQ.md#why-is-my-code-crashing" target="_blank">https://github.com/pytorch/torchdynamo/blob/main/documentation/FAQ.md#why-is-my-code-crashing</a></p>
 </li>
 
 </ol>
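A minimal sketch of the graph-break failure mode those last answers describe. The function is illustrative; fullgraph is a torch.compile flag in the 2.0 nightlies that turns any graph break into a hard error, which makes the offending line easy to locate:

import torch

def forward(x):
    y = torch.relu(x)
    print("intermediate shape:", y.shape)  # a Python side effect like this forces a graph break
    return y * 2

# With the default fullgraph=False the function still runs, just split into
# several graphs (and potentially slower); fullgraph=True surfaces the break
# as an error instead, pointing at the print statement.
compiled = torch.compile(forward, fullgraph=True)

try:
    compiled(torch.randn(4))
except Exception as err:
    print("graph break detected:", err)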
@@ -668,57 +669,4 @@ We will be hosting a series of live Q&A sessions for the community to have deepe
 <li><a href="https://pytorch.org/events" target="_blank">Export Path</a></li>
 </ul>
 
-<script src="{{ site.baseurl }}/assets/get-started-sidebar.js"></script>
-<style type="text/css" rel="stylesheet">
-
-table,td{
-  border: 1px solid #A0A0A1;
-  padding: 10px;
-}
-
-article.pytorch-article table tr td:first-of-type {
-  padding-left: 10px;
-}
-
-ul{
-  margin: 1.5rem 0 1.5rem 0;
-}
-
-.pytorch-2 .article-wrapper article.pytorch-article p {
-  font-family: Verdana;
-  word-break: break-word;
-}
-
-.pytorch-2 .article-wrapper article.pytorch-article a {
-  font-family: Verdana;
-  word-break: break-word;
-}
-
-.pytorch-2 .article-wrapper article.pytorch-article h2 {
-  font-family: Verdana;
-}
-
-.pytorch-2 .article-wrapper article.pytorch-article ul li{
-  font-family: Verdana;
-}
-
-.pytorch-2 .article-wrapper article.pytorch-article ul li{
-  font-family: Verdana;
-}
-
-.pytorch-2 .article-wrapper article.pytorch-article li {
-  font-family: Verdana;
-}
-
-.pytorch-2 .article-wrapper article.pytorch-article h3 {
-  font-family: Verdana;
-}
-
-.pytorch-2 .article-wrapper article.pytorch-article .QnATable {
-  @media screen and (max-width: 418px) {
-    max-width: 95vw;
-  }
-}
-
-
-</style>
+<script src="{{ site.baseurl }}/assets/get-started-sidebar.js"></script>

_sass/get-started.scss

+47 -0
@@ -273,3 +273,50 @@
 padding-left: rem(20px);
 }
 }
+
+.pytorch-2 .article-wrapper article.pytorch-article table tr td:first-of-type {
+  padding-left: 10px;
+}
+
+.pytorch-2 .article-wrapper article.pytorch-article {
+  table,td{
+    border: 1px solid #A0A0A1;
+    padding: 10px;
+  }
+
+  h2 {
+    font-family: Verdana;
+  }
+
+  h3 {
+    font-family: Verdana;
+  }
+
+  b {
+    font-family: Verdana;
+  }
+
+  ul {
+    margin: 1.5rem 0 1.5rem 0;
+
+    li {
+      font-family: Verdana;
+    }
+  }
+
+  p {
+    font-family: Verdana;
+    word-break: break-word;
+  }
+
+  a {
+    font-family: Verdana;
+    word-break: break-word;
+  }
+
+  .QnATable {
+    @media screen and (max-width: 418px) {
+      max-width: 95vw;
+    }
+  }
+}
