
Commit adcf791

Removed size of training_data from backprop, and improved comment
1 parent: 4b01d44


code/network_basic.py

Lines changed: 7 additions & 9 deletions
@@ -66,15 +66,13 @@ def SGD(self, training_data, epochs, mini_batch_size, eta,
         else:
             print "Epoch %s complete" % j

-    def backprop(self, training_data, n, eta):
-        """Update the network's weights and biases by applying a
-        single iteration of gradient descent using backpropagation.
-        The ``training_data`` is a list of tuples ``(x, y)``. It need
-        not include the entire training data set --- it might be a
-        mini-batch, or even a single training example. ``n`` is the
-        size of the total training set (which may not be the same as
-        the size of ``training_data``). The other parameters are
-        self-explanatory."""
+    def backprop(self, training_data, eta):
+        """Update the network's weights and biases by applying a single
+        iteration of gradient descent using backpropagation. The
+        ``training_data`` is a list of tuples ``(x, y)``. It need not
+        include the entire training data set --- it might be a
+        mini-batch, or even a single training example. ``eta`` is the
+        learning rate."""
         nabla_b = [np.zeros(b.shape) for b in self.biases]
         nabla_w = [np.zeros(w.shape) for w in self.weights]
         for x, y in training_data:
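
The diff cuts off just as the per-example loop begins, so the update itself is not shown here. For orientation, below is a minimal standalone sketch of what the revised docstring describes: a single iteration of gradient descent with learning rate ``eta``. The function name ``gradient_descent_step`` and the averaging by mini-batch size are illustrative assumptions for this sketch, not code from network_basic.py.

    import numpy as np

    def gradient_descent_step(weights, biases, nabla_w, nabla_b, eta, batch_size):
        # One gradient-descent step: move each parameter against its
        # accumulated gradient, scaled by the learning rate eta.
        # Averaging by batch_size is an assumption of this sketch; the
        # scaling actually used in network_basic.py is not in the diff.
        new_weights = [w - (eta / batch_size) * nw
                       for w, nw in zip(weights, nabla_w)]
        new_biases = [b - (eta / batch_size) * nb
                      for b, nb in zip(biases, nabla_b)]
        return new_weights, new_biases

    # Tiny usage example, mirroring the nabla_b/nabla_w lists that
    # backprop builds from self.biases and self.weights:
    weights = [np.ones((3, 2)), np.ones((1, 3))]
    biases = [np.zeros((3, 1)), np.zeros((1, 1))]
    nabla_w = [0.1 * np.ones_like(w) for w in weights]
    nabla_b = [0.1 * np.ones_like(b) for b in biases]
    weights, biases = gradient_descent_step(weights, biases, nabla_w, nabla_b,
                                            eta=3.0, batch_size=10)

With the ``n`` parameter removed, a call inside SGD's mini-batch loop would presumably now read ``self.backprop(mini_batch, eta)`` rather than ``self.backprop(mini_batch, n, eta)``.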
