Commit 555889a

Update README_raw.md
1 parent eb68d8b commit 555889a

1 file changed (+7 −7 lines)

README_raw.md

Lines changed: 7 additions & 7 deletions
@@ -14,7 +14,7 @@ and the true output.
 ### Table of Contents
 - <a href='#warm-up-numpy'>Warm-up: numpy</a>
 - <a href='#pytorch-tensors'>PyTorch: Tensors</a>
-- <a href='#pytorch-variables-and-autograd'>PyTorch: Variables and autograd</a>
+- <a href='#pytorch-autograd'>PyTorch: Autograd</a>
 - <a href='#pytorch-defining-new-autograd-functions'>PyTorch: Defining new autograd functions</a>
 - <a href='#tensorflow-static-graphs'>TensorFlow: Static Graphs</a>
 - <a href='#pytorch-nn'>PyTorch: nn</a>
@@ -93,7 +93,7 @@ we usually don't want to backpropagate through the weight update steps when
 training a neural network. In such scenarios we can use the `torch.no_grad()`
 context manager to prevent the construction of a computational graph.
 
-Here we use PyTorch Variables and autograd to implement our two-layer network;
+Here we use PyTorch Tensors and autograd to implement our two-layer network;
 now we no longer need to manually implement the backward pass through the
 network:
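To make the edited passage concrete: a minimal sketch of the Tensor-and-autograd version it describes, with the weight update wrapped in `torch.no_grad()` (illustrative only, not the listing that follows this paragraph in the README):

```python
import torch

# Random input/output data and randomly initialized weights for a two-layer net.
N, D_in, H, D_out = 64, 1000, 100, 10
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)
w1 = torch.randn(D_in, H, requires_grad=True)
w2 = torch.randn(H, D_out, requires_grad=True)

learning_rate = 1e-6
for t in range(500):
    # Forward pass: autograd records these Tensor operations in a graph.
    y_pred = x.mm(w1).clamp(min=0).mm(w2)
    loss = (y_pred - y).pow(2).sum()

    # Backward pass: compute gradients of the loss w.r.t. w1 and w2.
    loss.backward()

    # Update weights without recording the updates in the graph.
    with torch.no_grad():
        w1 -= learning_rate * w1.grad
        w2 -= learning_rate * w2.grad
        w1.grad.zero_()
        w2.grad.zero_()
```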

@@ -111,7 +111,7 @@ with respect to that same scalar value.
 In PyTorch we can easily define our own autograd operator by defining a subclass
 of `torch.autograd.Function` and implementing the `forward` and `backward` functions.
 We can then use our new autograd operator by constructing an instance and calling it
-like a function, passing Variables containing input data.
+like a function, passing Tensors containing input data.
 
 In this example we define our own custom autograd function for performing the ReLU
 nonlinearity, and use it to implement our two-layer network:
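For reference, a minimal sketch of a custom ReLU autograd Function (illustrative, not the README's own listing; note that current PyTorch uses `@staticmethod` `forward`/`backward` and `.apply` rather than calling a constructed instance):

```python
import torch

class MyReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input):
        # Save the input so backward can zero out gradients where input < 0.
        ctx.save_for_backward(input)
        return input.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        input, = ctx.saved_tensors
        grad_input = grad_output.clone()
        grad_input[input < 0] = 0
        return grad_input

# Usage: apply the custom operator to a Tensor and backpropagate through it.
x = torch.randn(4, 5, requires_grad=True)
y = MyReLU.apply(x)
y.sum().backward()
```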
@@ -171,8 +171,8 @@ raw computational graphs that are useful for building neural networks.
 
 In PyTorch, the `nn` package serves this same purpose. The `nn` package defines a set of
 **Modules**, which are roughly equivalent to neural network layers. A Module receives
-input Variables and computes output Variables, but may also hold internal state such as
-Variables containing learnable parameters. The `nn` package also defines a set of useful
+input Tensors and computes output Tensors, but may also hold internal state such as
+Tensors containing learnable parameters. The `nn` package also defines a set of useful
 loss functions that are commonly used when training neural networks.
 
 In this example we use the `nn` package to implement our two-layer network:
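As a rough sketch of the `nn`-based version described above (illustrative, not the README's own listing):

```python
import torch

N, D_in, H, D_out = 64, 1000, 100, 10
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Define the model as a sequence of Modules and pick a loss function from nn.
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)
loss_fn = torch.nn.MSELoss(reduction='sum')

learning_rate = 1e-4
for t in range(500):
    y_pred = model(x)          # forward pass through the Modules
    loss = loss_fn(y_pred, y)

    model.zero_grad()          # clear gradients held by the parameters
    loss.backward()            # backward pass through the whole model
    with torch.no_grad():      # plain gradient-descent update
        for param in model.parameters():
            param -= learning_rate * param.grad
```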
@@ -203,8 +203,8 @@ will optimize the model using the Adam algorithm provided by the `optim` package
 ## PyTorch: Custom nn Modules
 Sometimes you will want to specify models that are more complex than a sequence of
 existing Modules; for these cases you can define your own Modules by subclassing
-`nn.Module` and defining a `forward` which receives input Variables and produces
-output Variables using other modules or other autograd operations on Variables.
+`nn.Module` and defining a `forward` which receives input Tensors and produces
+output Tensors using other modules or other autograd operations on Tensors.
 
 In this example we implement our two-layer network as a custom Module subclass:
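A minimal sketch of such a custom Module subclass (illustrative; the actual listing lives in the README, and plain SGD is used here rather than any particular optimizer from the file):

```python
import torch

class TwoLayerNet(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        super().__init__()
        # Child Modules registered here hold the learnable parameters.
        self.linear1 = torch.nn.Linear(D_in, H)
        self.linear2 = torch.nn.Linear(H, D_out)

    def forward(self, x):
        # forward receives input Tensors and produces output Tensors.
        h_relu = self.linear1(x).clamp(min=0)
        return self.linear2(h_relu)

N, D_in, H, D_out = 64, 1000, 100, 10
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

model = TwoLayerNet(D_in, H, D_out)
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)

for t in range(500):
    loss = criterion(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```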
