
Commit b31a548

cooperrc, rossbar, and melissawm
authored
added alt-text to tutorial-deep-learning-mnist (#100)
* added alt-text to tutorial-deep-learning-mnist

Co-authored-by: Ross Barnowski <[email protected]>
Co-authored-by: Melissa Weber Mendonça <[email protected]>
1 parent 9876df5 commit b31a548

File tree

1 file changed: +22 −2 lines changed

content/tutorial-deep-learning-on-mnist.md

```diff
@@ -19,7 +19,13 @@ Your deep learning model — one of the most basic artificial neural networks th
 
 Based on the image inputs and their labels ([supervised learning](https://en.wikipedia.org/wiki/Supervised_learning)), your neural network will be trained to learn their features using forward propagation and backpropagation ([reverse-mode](https://en.wikipedia.org/wiki/Automatic_differentiation#Reverse_accumulation) differentiation). The final output of the network is a vector of 10 scores — one for each handwritten digit image. You will also evaluate how good your model is at classifying the images on the test set.
 
-![Diagram showing operations detailed in this tutorial](_static/tutorial-deep-learning-on-mnist.png)
+![Diagram showing operations detailed in this tutorial (The input image
+is passed into a Hidden layer that creates a weighted sum of outputs.
+The weighted sum is passed to the Non-linearity, then regularization and
+into the output layer. The output layer creates a prediction which can
+then be compared to existing data. The errors are used to calculate the
+loss function and update weights in the hidden layer and output
+layer.)](_static/tutorial-deep-learning-on-mnist.png)
 
 This tutorial was adapted from the work by [Andrew Trask](https://github.com/iamtrask/Grokking-Deep-Learning) (with the author's permission).
 
```
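The alt text added here walks through one forward/backward pass: weighted sum, non-linearity, prediction, error, weight update. As a hypothetical illustration only (the shapes, names, and learning rate below are my own, not the tutorial's actual code), that flow can be sketched in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny setup: one flattened 28x28 image, 64 hidden units, 10 scores.
x = rng.random((1, 784))            # input image (flattened)
target = np.zeros((1, 10))
target[0, 3] = 1.0                  # one-hot label for digit 3

w1 = 0.01 * rng.standard_normal((784, 64))   # hidden-layer weights
w2 = 0.01 * rng.standard_normal((64, 10))    # output-layer weights

# Forward propagation: weighted sum -> ReLU non-linearity -> output scores.
hidden = np.maximum(0, x @ w1)
output = hidden @ w2

# Compare the prediction to existing data; the error drives the loss.
error = output - target
loss = np.sum(error ** 2)

# Backpropagation: use the error to update both layers' weights.
lr = 0.01
grad_w2 = hidden.T @ error
grad_hidden = (error @ w2.T) * (hidden > 0)  # ReLU derivative mask
grad_w1 = x.T @ grad_hidden
w2 -= lr * grad_w2
w1 -= lr * grad_w1
```

Repeating this loop over many images and epochs is what the diagram summarizes; regularization (e.g. dropout) would act on `hidden` before the output layer.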

````diff
@@ -165,6 +171,9 @@ for sample, ax in zip(rng.choice(x_train, size=num_examples, replace=False), axe
     ax.imshow(sample.reshape(28, 28), cmap='gray')
 ```
 
+_Above are five images taken from the MNIST training set. Various hand-drawn
+Arabic numerals are shown, with exact values chosen randomly with each run of the code._
+
 > **Note:** You can also visualize a sample image as an array by printing `x_train[59999]`. Here, `59999` is your 60,000th training image sample (`0` would be your first). Your output will be quite long and should contain an array of 8-bit integers:
 >
 >
````
```diff
@@ -334,7 +343,14 @@ Afterwards, you will construct the building blocks of a simple deep learning mod
 
 Here is a summary of the neural network model architecture and the training process:
 
-![Diagram showing operations detailed in this tutorial](_static/tutorial-deep-learning-on-mnist.png)
+
+![Diagram showing operations detailed in this tutorial (The input image
+is passed into a Hidden layer that creates a weighted sum of outputs.
+The weighted sum is passed to the Non-linearity, then regularization and
+into the output layer. The output layer creates a prediction which can
+then be compared to existing data. The errors are used to calculate the
+loss function and update weights in the hidden layer and output
+layer.)](_static/tutorial-deep-learning-on-mnist.png)
 
 - _The input layer_:
 
```
````diff
@@ -552,6 +568,10 @@ axes[1].set_xlabel("Epochs")
 plt.show()
 ```
 
+_The training and testing error is shown above in the left and right
+plots, respectively. As the number of Epochs increases, the total error
+decreases and the accuracy increases._
+
 The accuracy rates that your model reaches during training and testing may be somewhat plausible but you may also find the error rates to be quite high.
 
 To reduce the error during training and testing, you can consider changing the simple loss function to, for example, categorical [cross-entropy](https://en.wikipedia.org/wiki/Cross_entropy). Other possible solutions are discussed below.
````
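The context line above suggests swapping the simple loss for categorical cross-entropy. A minimal NumPy sketch of that loss, assuming softmax-normalized scores and one-hot labels (function names and the example values are my own, not from the tutorial):

```python
import numpy as np

def softmax(logits):
    # Subtract the row-wise max for numerical stability before exponentiating.
    shifted = logits - logits.max(axis=1, keepdims=True)
    exps = np.exp(shifted)
    return exps / exps.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels, eps=1e-12):
    # Mean negative log-likelihood of the true class; eps guards log(0).
    return -np.mean(np.sum(labels * np.log(probs + eps), axis=1))

# Hypothetical 3-class example: two samples with true classes 0 and 1.
logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 3.0, 0.3]])
labels = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
loss = cross_entropy(softmax(logits), labels)
```

Unlike a squared-error loss, cross-entropy penalizes a confident wrong prediction much more heavily than a hesitant one, which is why it is the usual choice for classification.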
