
Commit e8742bc: errata upd. ch12
1 parent 6b61d4b

File tree: 1 file changed (+24, -2 lines)

docs/errata.md

@@ -11,16 +11,17 @@ I would be happy if you just write me a short [mail](mailto:mail@sebastianraschk
 
 ## Donations
 
-- Current amount for the next donation: $2.25
+- Current amount for the next donation: $3.50
 - Amount donated to charity: $0.00
 
 ## Leaderboard
 
 1. Joseph Gordon ($0.75)
 2. T.S. Jayram ($0.50)
 3. S.R. ($0.50)
-4. Ryan S. ($0.25)
+4. Ryan S. ($1.25)
 5. Elias R. ($0.25)
+6. Haitham H. Saleh ($0.25)
 
...
@@ -48,12 +49,33 @@ I am really sorry about you seeing many typos up in the equations so far. Unfort
 
 - p.144: I wrote in the Linear Discriminant section that "Those who are a little more familiar with linear algebra may know that the rank of the d×d-dimensional covariance matrix can be at most *d − 1* ..." Sorry, this is a little bit out of context. First of all, this is only true if *d >> N* (where *d* is the number of dimensions and *N* is the number of samples), and this should have been in the Principal Component Analysis section. Secondly, in the context of Linear Discriminant Analysis, the number of linear discriminants is at most <em>c-1</em> where <em>c</em> is the number of class labels; the between-class scatter matrix <em>S<sub>B</sub></em> is the sum of <em>c</em> matrices with rank 1 or less. (S.R.)
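The claim in this erratum, that <em>S<sub>B</sub></em> is a sum of <em>c</em> matrices of rank 1 or less and therefore has rank at most <em>c − 1</em>, can be sanity-checked numerically. A minimal NumPy sketch (the toy data, class count, and dimensions here are made up for illustration, not taken from the book):

```python
import numpy as np

rng = np.random.default_rng(0)
c, d, n_per_class = 3, 10, 50  # 3 classes, 10 features (hypothetical sizes)

# Toy data: one Gaussian blob per class, each with a different mean.
X = np.vstack([rng.normal(loc=k, size=(n_per_class, d)) for k in range(c)])
y = np.repeat(np.arange(c), n_per_class)

overall_mean = X.mean(axis=0)

# Between-class scatter S_B: a sum of c outer products, each of rank <= 1.
S_B = np.zeros((d, d))
for k in range(c):
    n_k = (y == k).sum()
    mean_k = X[y == k].mean(axis=0)
    diff = (mean_k - overall_mean).reshape(-1, 1)
    S_B += n_k * diff @ diff.T

# The weighted class-mean deviations sum to zero, so rank(S_B) <= c - 1.
print(np.linalg.matrix_rank(S_B))
```

For generic data the rank comes out as exactly `c - 1` (here 2), never `c`, matching the correction above.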
 
+**Chapter 12**
+
+- p. 347: In the section "Introducing the multi-layer neural network architecture" it says "where *h* is the number of hidden units and *m + 1* is the number of hidden units plus bias unit." It should be "the number of **input** units plus bias unit." (Ryan S.)
+
+- p. 348: In the section "Activating a neural network via forward propagation", I wrote "... which passes the origin at z = 0.5, as shown in the following graph:" To be correct, *passing through the origin* means passing through the point (0, 0). So, I probably meant to say "which cuts the y-axis at z=0" (Ryan S.)
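The corrected wording is easy to verify: the logistic sigmoid evaluates to exactly 0.5 at z = 0, so it crosses the y-axis at 0.5 rather than passing through the origin. A minimal check (the function name is ours, not the book's):

```python
import numpy as np

def sigmoid(z):
    """Logistic sigmoid, phi(z) = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

# At z = 0 the sigmoid equals 0.5, so the curve crosses the
# y-axis at (0, 0.5) and does not pass through the origin (0, 0).
print(sigmoid(0.0))  # 0.5
```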
+
+- p. 366: In the section "Computing the logistic cost function", the generalized cost function (without the regularization term) is written as:
+"J(**w**) = - &sum;<sup>n</sup><sub>i=1</sub> &sum;<sup>t</sup><sub>k=1</sub> ... "
+Here, the inner &sum;<sup>t</sup><sub>k=1</sub> should be &sum;<sup>t</sup><sub>j=1</sub> (Ryan S.)
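Whatever the inner index is called, the double sum runs over the *n* samples and the *t* output units. A minimal NumPy sketch of that generalized cross-entropy cost (the toy arrays `Y` and `A` are invented for illustration; this is not the book's code):

```python
import numpy as np

def logistic_cost(Y, A):
    """J(w) = -sum_i sum_j [ y_ij*log(a_ij) + (1 - y_ij)*log(1 - a_ij) ],
    without the regularization term.
    Y: one-hot targets, shape (n, t); A: output activations, shape (n, t)."""
    return -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A))

# Toy example: n = 2 samples, t = 3 output units.
Y = np.array([[1, 0, 0],
              [0, 1, 0]])
A = np.array([[0.9, 0.05, 0.05],
              [0.1, 0.8, 0.1]])
print(round(logistic_cost(Y, A), 4))  # 0.6418
```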
+
+- p. 369: In the section "Training neural networks via backpropagation" there is another duplication error. The second "**&delta;**<sup>(2)</sup>" in
+"**&delta;**<sup>(2)</sup> = (**W**<sup>(2)</sup>)<sup>T</sup> **&delta;**<sup>(2)</sup> * [ &part; &phi;(z<sup>(2)</sup>)/ &part; z<sup>(2)</sup>]"
+should be "**&delta;**<sup>(3)</sup>" so that
+"**&delta;**<sup>(2)</sup> = (**W**<sup>(2)</sup>)<sup>T</sup> **&delta;**<sup>(3)</sup> * [ &part; &phi;(z<sup>(2)</sup>)/ &part; z<sup>(2)</sup>]" (Ryan S.)
 
 ### Language
 
 **Preface**
 
+- p. viii: At the bottom of this page ("Chapter 3, A Tour of Machine Learning Classifirs Using Scikit-learn") it should be "Classifiers" not "Classifirs" (Hamit H. Saleh)
+
 - p. x: the phrase "--whether you want start from..." should be "--whether you want to start from..." (Joseph Gordon)
 
+
 **Chapter 2**
 
 - p. 19: there should be a period between "otherwise" and "in" (this is towards the end of the page) (Joseph Gordon)
