Commit d9134e6

Fix links in blog post (#1772)
Signed-off-by: Chris Abraham <[email protected]>
1 parent c578787 · commit d9134e6

1 file changed: +2 −2 lines changed


_posts/2024-10-17-pytorch2-5.md

+2 −2
```diff
@@ -107,7 +107,7 @@ For more information and examples, please refer to the [official blog post](http
 
 Compiled Autograd is an extension to the PT2 stack allowing the capture of the entire backward pass. Unlike the backward graph traced by AOT dispatcher, Compiled Autograd tracing is deferred until backward execution time, which makes it impervious to forward pass graph breaks, and allows it to record backward hooks into the graph.
 
-Please refer to the [tutorial](https://www.google.com/url?q=https://pytorch.org/tutorials/intermediate/compiled_autograd_tutorial.html&sa=D&source=docs&ust=1728926110018133&usg=AOvVaw3AYnAUHOmsCc0nFy19R6O3) for more information.
+Please refer to the [tutorial](https://pytorch.org/tutorials/intermediate/compiled_autograd_tutorial.html) for more information.
 
 
 ### [Prototype] Flight Recorder
```
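
The corrected link in this hunk points to the Compiled Autograd tutorial. As a quick illustration only (not part of this commit), here is a minimal sketch of enabling the feature, assuming the `torch._dynamo.config.compiled_autograd` flag described in that tutorial; the model and input shapes are placeholders:

```python
import torch

# Assumption: this mirrors the linked Compiled Autograd tutorial. The flag
# defers backward-graph tracing to backward-execution time, so forward-pass
# graph breaks don't split it and backward hooks can be recorded.
torch._dynamo.config.compiled_autograd = True

model = torch.compile(torch.nn.Sequential(
    torch.nn.Linear(10, 10),
    torch.nn.ReLU(),
))

x = torch.randn(4, 10)
loss = model(x).sum()
loss.backward()  # the backward pass is captured here, at backward time
```
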
```diff
@@ -121,7 +121,7 @@ For more information please refer to the following [tutorial](https://pytorch.or
 
 Max-autotune mode for the Inductor CPU backend in torch.compile profiles multiple implementations of operations at compile time and selects the best-performing one. This is particularly beneficial for GEMM-related operations, using a C++ template-based GEMM implementation as an alternative to the ATen-based approach with oneDNN and MKL libraries. We support FP32, BF16, FP16, and INT8 with epilogue fusions for x86 CPUs. We’ve seen up to 7% geomean speedup on the dynamo benchmark suites and up to 20% boost in next-token latency for LLM inference.
 
-For more information please refer to the [tutorial](https://www.google.com/url?q=https://pytorch.org/tutorials/prototype/max_autotune_on_CPU_tutorial.html&sa=D&source=docs&ust=1728926070319900&usg=AOvVaw27_CteoNRwsbxRlrLy-aEd).
+For more information please refer to the [tutorial](https://pytorch.org/tutorials/prototype/max_autotune_on_CPU_tutorial.html).
 
 
 ### [Prototype] TorchInductor CPU on Windows
```
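
The corrected link in this hunk points to the max-autotune-on-CPU tutorial. As a rough illustration only (not part of this commit), a minimal sketch assuming the standard `mode="max-autotune"` argument to `torch.compile`; the model and shapes here are hypothetical placeholders:

```python
import torch

# Assumption: a generic linear layer stands in for a real GEMM-heavy model.
# mode="max-autotune" asks Inductor to profile candidate implementations
# (e.g. the C++ template GEMM vs. the ATen/oneDNN path on CPU) at compile
# time and keep the fastest one.
model = torch.nn.Linear(1024, 1024)
compiled = torch.compile(model, mode="max-autotune")

x = torch.randn(8, 1024)
with torch.no_grad():
    out = compiled(x)  # autotuning runs during this first compilation
```
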
