Commit f1d21a6
Add some examples to 'PyTorch Basics'
1 parent fcb53f3

1 file changed (10 additions, 14 deletions)

tutorials/00 - PyTorch Basics/main.py
@@ -10,12 +10,12 @@
 
 #========================== Table of Contents ==========================#
 # 1. Basic autograd example 1 (Line 21 to 36)
-# 2. Basic autograd example 2 (Line 39 to 80)
-# 3. Loading data from numpy (Line 83 to 86)
-# 4. Implementing the input pipline (Line 90 to 117)
-# 5. Input pipline for custom dataset (Line 119 to 139)
-# 6. Using pretrained model (Line142 to 156)
-# 7. Save and load model (Line 159 to L161)
+# 2. Basic autograd example 2 (Line 39 to 76)
+# 3. Loading data from numpy (Line 79 to 82)
+# 4. Implementing the input pipline (Line 86 to 113)
+# 5. Input pipline for custom dataset (Line 115 to 135)
+# 6. Using pretrained model (Line 138 to 152)
+# 7. Save and load model (Line 155 to L157)
 
 
 #======================= Basic autograd example 1 =======================#
@@ -25,23 +25,21 @@
 b = Variable(torch.Tensor([3]), requires_grad=True)
 
 # Build a computational graph.
-y = w * x + b # y = 2 * x + 3
+y = w * x + b    # y = 2 * x + 3
 
 # Compute gradients
 y.backward()
 
 # Print out the gradients
-print(x.grad) # x.grad = 2
-print(w.grad) # w.grad = 1
-print(b.grad) # b.grad = 1
+print(x.grad)    # x.grad = 2
+print(w.grad)    # w.grad = 1
+print(b.grad)    # b.grad = 1
 
 
 #======================== Basic autograd example 2 =======================#
 # Create tensors.
 x = Variable(torch.randn(5, 3))
 y = Variable(torch.randn(5, 2))
-print ('x: ', x)
-print ('y: ', y)
 
 # Build a linear layer.
 linear = nn.Linear(3, 2)
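
For reference, here is a minimal runnable version of the autograd snippet this hunk touches, using the old-style Variable API the file is written against. The definitions of x and w sit just above the hunk and are not shown in the diff; the values below are reconstructed from the printed gradients (w.grad = x = 1, x.grad = w = 2), so treat them as an assumption.

import torch
from torch.autograd import Variable

# x and w are not shown in the diff; values reconstructed from the
# gradient comments (w.grad = x = 1, x.grad = w = 2).
x = Variable(torch.Tensor([1]), requires_grad=True)
w = Variable(torch.Tensor([2]), requires_grad=True)
b = Variable(torch.Tensor([3]), requires_grad=True)

# Build a computational graph: y = 2 * x + 3.
y = w * x + b

# Compute gradients of y with respect to every leaf Variable.
y.backward()

print(x.grad)    # dy/dx = w = 2
print(w.grad)    # dy/dw = x = 1
print(b.grad)    # dy/db = 1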
@@ -54,7 +52,6 @@
 
 # Forward propagation.
 pred = linear(x)
-print('pred: ', pred)
 
 # Compute loss.
 loss = criterion(pred, y)
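
The criterion used in this hunk is defined in a part of main.py that the diff does not show. Assuming it is nn.MSELoss(), as the rest of the tutorial suggests, the loss line is equivalent to computing the mean squared error by hand:

import torch
import torch.nn as nn
from torch.autograd import Variable

pred = Variable(torch.randn(5, 2))
y = Variable(torch.randn(5, 2))

criterion = nn.MSELoss()                 # assumed; defined elsewhere in main.py
loss = criterion(pred, y)
loss_by_hand = ((pred - y) ** 2).mean()  # same scalar value as loss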
@@ -69,7 +66,6 @@
 
 # 1-step Optimization (gradient descent).
 optimizer.step()
-print ('Optimized..!')
 
 # You can also do optimization at the low level as shown below.
 # linear.weight.data.sub_(0.01 * linear.weight.grad.data)
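
Putting the hunks together, one full training step of 'Basic autograd example 2' looks roughly like the sketch below. criterion and optimizer are defined in parts of main.py not shown in this diff; nn.MSELoss and torch.optim.SGD with lr=0.01 are assumptions, inferred from the 0.01 factor in the low-level update above.

import torch
import torch.nn as nn
from torch.autograd import Variable

# Create tensors.
x = Variable(torch.randn(5, 3))
y = Variable(torch.randn(5, 2))

# Build a linear layer: 3 input features -> 2 outputs.
linear = nn.Linear(3, 2)

criterion = nn.MSELoss()                                   # assumed loss
optimizer = torch.optim.SGD(linear.parameters(), lr=0.01)  # assumed optimizer; lr matches the 0.01 above

# Forward propagation.
pred = linear(x)

# Compute loss.
loss = criterion(pred, y)

# Backpropagation.
optimizer.zero_grad()
loss.backward()

# 1-step optimization (gradient descent).
optimizer.step()

# Low-level equivalent of optimizer.step() for plain SGD:
# linear.weight.data.sub_(0.01 * linear.weight.grad.data)
# linear.bias.data.sub_(0.01 * linear.bias.grad.data)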
