@@ -91,26 +91,6 @@ network:
:INCLUDE autograd/two_layer_net_autograd.py
```

-## PyTorch: Defining new autograd functions
-Under the hood, each primitive autograd operator is really two functions that
-operate on Tensors. The **forward** function computes output Tensors from input
-Tensors. The **backward** function receives the gradient of some scalar value
-with respect to the output Tensors, and computes the gradient of that same
-scalar value with respect to the input Tensors.
-
-In PyTorch we can easily define our own autograd operator by defining a subclass
-of `torch.autograd.Function` and implementing the `forward` and `backward` functions.
-We can then use our new autograd operator by constructing an instance and calling it
-like a function, passing Variables containing input data.
-
-In this example we define our own custom autograd function for performing the ReLU
-nonlinearity, and use it to implement our two-layer network:
-
-```python
-:INCLUDE autograd/two_layer_net_custom_function.py
-```
-
-
## PyTorch: nn
Computational graphs and autograd are a very powerful paradigm for defining
complex operators and automatically taking derivatives; however for large
@@ -148,10 +128,38 @@ provides implementations of commonly used optimization algorithms.
In this example we will use the `nn` package to define our model as before, but we
will optimize the model using the Adam algorithm provided by the `optim` package:

```python
:INCLUDE nn/two_layer_net_optim.py
```
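+
+If you don't want to open the included file, the loop below is a minimal sketch of
+the same idea, written against the current tensor-based API; the sizes, learning
+rate, and loss are illustrative choices, not necessarily those used in the included
+example:
+
+```python
+import torch
+
+# Illustrative sizes: batch size, input, hidden, and output dimensions.
+N, D_in, H, D_out = 64, 1000, 100, 10
+x = torch.randn(N, D_in)
+y = torch.randn(N, D_out)
+
+model = torch.nn.Sequential(
+    torch.nn.Linear(D_in, H),
+    torch.nn.ReLU(),
+    torch.nn.Linear(H, D_out),
+)
+loss_fn = torch.nn.MSELoss(reduction='sum')
+
+# The optim package wraps the update rule; Adam keeps per-parameter statistics.
+optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
+for t in range(500):
+    y_pred = model(x)          # forward pass
+    loss = loss_fn(y_pred, y)  # scalar loss
+    optimizer.zero_grad()      # clear gradients accumulated on the parameters
+    loss.backward()            # compute gradients of the loss w.r.t. all parameters
+    optimizer.step()           # let Adam update the parameters
+```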

+## PyTorch: RNNs
+
+
+## Data Loading
+
+
+# Advanced Topics
+
+## PyTorch: Defining new autograd functions
+Under the hood, each primitive autograd operator is really two functions that
+operate on Tensors. The **forward** function computes output Tensors from input
+Tensors. The **backward** function receives the gradient of some scalar value
+with respect to the output Tensors, and computes the gradient of that same
+scalar value with respect to the input Tensors.
+
+In PyTorch we can easily define our own autograd operator by defining a subclass
+of `torch.autograd.Function` and implementing the `forward` and `backward` functions.
+We can then use our new autograd operator by constructing an instance and calling it
+like a function, passing Variables containing input data.
+
+In this example we define our own custom autograd function for performing the ReLU
+nonlinearity, and use it to implement our two-layer network:
+
+```python
+:INCLUDE autograd/two_layer_net_custom_function.py
+```
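+
+For quick reference, here is a minimal sketch of such a custom ReLU. It uses the
+newer staticmethod-style interface (a context object `ctx` and `Function.apply`)
+rather than the instance-style interface described above, and the tensor shape is
+an arbitrary illustrative choice:
+
+```python
+import torch
+
+class MyReLU(torch.autograd.Function):
+    @staticmethod
+    def forward(ctx, input):
+        # Save the input so backward can recover the ReLU mask.
+        ctx.save_for_backward(input)
+        return input.clamp(min=0)
+
+    @staticmethod
+    def backward(ctx, grad_output):
+        # Pass the incoming gradient through only where the input was positive.
+        input, = ctx.saved_tensors
+        grad_input = grad_output.clone()
+        grad_input[input < 0] = 0
+        return grad_input
+
+x = torch.randn(5, requires_grad=True)
+loss = MyReLU.apply(x).sum()
+loss.backward()
+print(x.grad)  # 0 where x < 0, 1 elsewhere
+```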
+
+
## TensorFlow: Static Graphs
PyTorch autograd looks a lot like TensorFlow: in both frameworks we define
a computational graph, and use automatic differentiation to compute gradients.
@@ -186,6 +194,9 @@ fit a simple two-layer net:
:INCLUDE autograd/tf_two_layer_net.py
```

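+As a concrete sketch of the static-graph style (build the graph once, then run it
+repeatedly inside a session), here is a minimal version written against the
+TensorFlow 1.x API; the sizes and learning rate are illustrative, not necessarily
+those used in the included file:
+
+```python
+import numpy as np
+import tensorflow as tf  # TensorFlow 1.x-style API
+
+N, D_in, H, D_out = 64, 1000, 100, 10
+
+# Graph construction: no computation happens here.
+x = tf.placeholder(tf.float32, shape=(None, D_in))
+y = tf.placeholder(tf.float32, shape=(None, D_out))
+w1 = tf.Variable(tf.random_normal((D_in, H)))
+w2 = tf.Variable(tf.random_normal((H, D_out)))
+
+h = tf.maximum(tf.matmul(x, w1), 0.0)   # ReLU
+y_pred = tf.matmul(h, w2)
+loss = tf.reduce_sum((y - y_pred) ** 2.0)
+
+# Ask TensorFlow for symbolic gradients and build update ops into the graph.
+grad_w1, grad_w2 = tf.gradients(loss, [w1, w2])
+learning_rate = 1e-6
+new_w1 = w1.assign(w1 - learning_rate * grad_w1)
+new_w2 = w2.assign(w2 - learning_rate * grad_w2)
+
+# Graph execution: feed numpy arrays and run the same graph many times.
+with tf.Session() as sess:
+    sess.run(tf.global_variables_initializer())
+    x_val = np.random.randn(N, D_in).astype(np.float32)
+    y_val = np.random.randn(N, D_out).astype(np.float32)
+    for _ in range(500):
+        loss_val, _, _ = sess.run([loss, new_w1, new_w2],
+                                  feed_dict={x: x_val, y: y_val})
+```
+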
+# HOGWILD
+
+FIXME

## PyTorch: Control Flow + Weight Sharing
As an example of dynamic graphs and weight sharing, we implement a very strange