Commit 2c55eff (parent 519abb0): Update posts. 3 files changed, +275 -0 lines.

---
layout: post
title: "Neural Networks And Deep Learning Book Chapter 1 Exercise 1.1 Solution"
date: 2018-09-04 21:00:00
comments: true
categories: blog
description: Solutions of "Neural Networks and Deep Learning by Michael Nielsen" Exercises Chapter 1 Part I
---

I must say [Neural Networks and Deep Learning by Michael Nielsen](http://neuralnetworksanddeeplearning.com/) is the best deep learning book I have come across. It strikes a great balance between theory and code, going as deep into the mathematics as it does into the implementation. The "from-scratch" approach to building neural networks, with exercises in between, really encourages you to think carefully about what's actually happening under the hood.

Following is my attempt at those exercises:

<h1 style="font-size: 40px;">Exercise 1</h1>
<hr>

<h1 style="font-size: 30px;">Sigmoid neurons simulating perceptrons, part I</h1>

Suppose we take all the weights and biases in a network of perceptrons, and multiply them by a positive constant, $$c>0$$. Show that the behaviour of the network doesn't change.
<hr>
**Solution 1:**

We know the perceptron rule can be written as:

$$
\begin{eqnarray}
\mbox{z} = \left\{
\begin{array}{ll}
0 & \mbox{if } w\cdot x + b \leq 0 \\
1 & \mbox{if } w\cdot x + b > 0
\end{array}
\right.
\tag{1}\end{eqnarray}
$$

<br>
where $$z, w, b$$ represent the *output*, *weights*, and *bias* respectively.

We are asked to examine the perceptron's behaviour after multiplying the weights and biases in a network of perceptrons by a positive constant $$c>0$$.

Multiplying the weights and biases in equation $$1$$ by $$c$$ gives:

$$
\begin{eqnarray}
\mbox{z} = \left\{
\begin{array}{ll}
0 & \mbox{if } cw\cdot x + cb \leq 0 \\
1 & \mbox{if } cw\cdot x + cb > 0
\end{array}
\right.
\end{eqnarray}
$$

Focusing on the conditions,
<br>

$$
cw \cdot x + cb \leq 0
$$

and

$$
cw \cdot x + cb > 0
$$

<br>
Using the distributive law $$a(b + c) = ab + ac$$, we can factor out the common constant $$c$$:

$$
c [w \cdot x + b] \leq 0
$$

$$
c [w \cdot x + b] > 0
$$

Since $$c$$ is a positive constant, dividing both sides by $$c$$ only changes magnitudes and never flips the direction of an inequality, so these conditions hold exactly when $$w \cdot x + b \leq 0$$ and $$w \cdot x + b > 0$$ respectively. The perceptron therefore produces the same output as before.

If you find the dot product confusing, you can put equation $$1$$ in basic algebraic form.
<hr>
**Solution 1 in basic algebraic form:**

$$
\begin{eqnarray}
\mbox{output} & = & \left\{ \begin{array}{ll}
0 & \mbox{if } \sum_j w_j x_j + b\leq \mbox{0} \\
1 & \mbox{if } \sum_j w_j x_j +b > \mbox{0}
\end{array} \right.
\tag{2}\end{eqnarray}
$$

<br>

Note that the dot product representation used above is the same as this algebraic form:

$$
w \cdot x + b \equiv \sum_j w_j x_j + b
$$

Multiplying the weights and bias by $$c$$,

$$
\begin{eqnarray}
\mbox{output} & = & \left\{ \begin{array}{ll}
0 & \mbox{if } \sum_j cw_j x_j + cb\leq \mbox{0} \\
1 & \mbox{if } \sum_j cw_j x_j + cb > \mbox{0}
\end{array} \right.
\end{eqnarray}
$$

Factoring out the common term $$c$$,

$$
\begin{eqnarray}
\mbox{output} & = & \left\{ \begin{array}{ll}
0 & \mbox{if } c[\sum_j w_j x_j + b]\leq \mbox{0} \\
1 & \mbox{if } c[\sum_j w_j x_j + b] > \mbox{0}
\end{array} \right.
\end{eqnarray}
$$

<br>
As you can see, after dividing each condition by the positive constant $$c$$ we recover exactly the behaviour of the perceptron in equation $$2$$. In other words, a perceptron is unaffected by multiplying its weights and bias by a positive constant $$c$$, and consequently the behaviour of the entire network of perceptrons doesn't change.
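
As a quick sanity check, here is a minimal Python sketch (using NumPy and made-up weights and inputs of my own, not anything from the book) that evaluates the same perceptron before and after scaling its weights and bias by a positive constant:

```python
import numpy as np

def perceptron(w, b, x):
    """Perceptron rule from equation (1): output 1 if w.x + b > 0, else 0."""
    return 1 if np.dot(w, x) + b > 0 else 0

# Hypothetical weights, bias, and a few sample inputs
w = np.array([0.7, -1.3, 2.0])
b = -0.5
inputs = [np.array([1.0, 0.0, 1.0]),
          np.array([0.2, 0.9, -0.4]),
          np.array([-1.0, 1.5, 0.3])]

c = 100.0  # any positive constant
for x in inputs:
    assert perceptron(w, b, x) == perceptron(c * w, c * b, x)
print("Outputs are identical after scaling weights and bias by c > 0.")
```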

I will post the rest of the solutions soon.

Hope you found it helpful. If you have any doubts or spot any mistakes, please comment below.
---
layout: post
title: "Neural Networks And Deep Learning Book Chapter 1 Exercise 1.2 Solution"
date: 2018-09-05 21:00:00
comments: true
categories: blog
description: Solutions of "Neural Networks and Deep Learning by Michael Nielsen" Exercises Chapter 1 Part II
---

I have been solving the exercises of [Neural Networks and Deep Learning Book by Michael Nielsen](http://neuralnetworksanddeeplearning.com/). If you are following along with my solutions, that's great. Thank you so much! If not, here is a link to the Chapter 1 Exercise 1.1 solution on [Sigmoid neurons simulating perceptrons, part I](https://nipunsadvilkar.github.io/blog/2017/09/04/neural-networks-and-deep-learning-book-chap1-ex1-part1-solution.html).

Following is my attempt at the second exercise:

<h1 style="font-size: 40px;">Exercise 1.2</h1>
<hr>

<h1 style="font-size: 30px;">Sigmoid neurons simulating perceptrons, part II</h1>

Suppose we have the same setup as the last problem - a network of perceptrons. Suppose also that the overall input to the network of perceptrons has been chosen. We won't need the actual input value, we just need the input to have been fixed. Suppose the weights and biases are such that $$w \cdot x + b \neq 0$$ for the input $$x$$ to any particular perceptron in the network. Now replace all the perceptrons in the network by sigmoid neurons, and multiply the weights and biases by a positive constant $$c > 0$$. Show that in the limit as $$c \rightarrow \infty$$ the behaviour of this network of sigmoid neurons is exactly the same as the network of perceptrons. How can this fail when $$w \cdot x + b = 0$$ for one of the perceptrons?
<hr>
**Solution:**

We are asked to keep the same setup as in [Exercise 1.1](https://nipunsadvilkar.github.io/blog/2017/09/04/neural-networks-and-deep-learning-book-chap1-ex1-part1-solution.html).
Referring to that,

$$
\begin{eqnarray}
\mbox{z} = \left\{
\begin{array}{ll}
0 & \mbox{if } w\cdot x + b \leq 0 \\
1 & \mbox{if } w\cdot x + b > 0
\end{array}
\right.
\tag{1}\end{eqnarray}
$$

<br>
We saw that for perceptrons, multiplying $$(w \cdot x + b)$$ by a positive constant $$c > 0$$ does not affect whether $$cw \cdot x + cb \leq 0$$ or $$cw \cdot x + cb > 0$$ holds. Similarly, the *sigmoid* squashing function doesn't change the behaviour of the network, provided its input $$z$$ takes values at the **extremities**.

To explain that point about extremities further:

As we saw in the first chapter, the sigmoid function looks like this:

<p align="center">
<img src="{{ site.url }}/assets/img/sigmoid.png" alt="sigmoid" border="5">
</p>

Algebraically, the sigmoid function is defined as:

$$
\begin{eqnarray}
\sigma(z) \equiv \frac{1}{1+e^{-z}}.
\tag{2}\end{eqnarray}
$$

<br>
Since multiplying the weights and biases by $$c$$ turns the sigmoid's input into $$z = c(w \cdot x + b)$$, letting $$c \rightarrow \infty$$ drives $$z$$ towards $$+\infty$$ or $$-\infty$$ whenever $$w \cdot x + b \neq 0$$. When $$z$$ is a large positive number, $$e^{-z} \approx 0$$ and so $$\sigma(z) \approx 1$$ (asymptotically). To put it in words: when $$w \cdot x + b$$ is positive, in the limit the output of the sigmoid neuron is $$1$$, just as it would have been for a perceptron. Referring to the diagram above, we can say $$\sigma(z) > 0.5$$. On the other hand, when $$w \cdot x + b$$ is negative, $$z$$ becomes very negative, $$e^{-z} \rightarrow \infty$$, and $$\sigma(z) \approx 0$$ (asymptotically), so the behaviour of the sigmoid neuron again closely approximates the perceptron; by the diagram, $$\sigma(z) < 0.5$$. However, when $$w \cdot x + b = 0$$ we get $$z = c \cdot 0 = 0$$ no matter how large $$c$$ is, so $$\sigma(z) = 0.5$$: the output sits exactly between the two classes, the binary decision cannot be made, and this is where the behaviour of the sigmoid neuron deviates from the perceptron model.

If the above paragraph seems difficult to grasp, refer back to the diagram after each sentence; it will help you understand the argument more concretely.

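To see the limit numerically, here is a small sketch (toy weights and inputs of my own choosing, not from the book) comparing a sigmoid neuron with weights and bias scaled by $$c$$ against the corresponding perceptron, including the problematic case $$w \cdot x + b = 0$$:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def perceptron(w, b, x):
    return 1 if np.dot(w, x) + b > 0 else 0

w, b = np.array([0.5, -1.0]), 0.25
cases = {
    "w.x + b > 0": np.array([2.0, 0.5]),    # w.x + b = 0.75
    "w.x + b < 0": np.array([-1.0, 1.0]),   # w.x + b = -1.25
    "w.x + b = 0": np.array([1.0, 0.75]),   # w.x + b = 0 exactly
}

for c in (1.0, 10.0, 100.0):
    print(f"c = {c}")
    for name, x in cases.items():
        z = np.dot(c * w, x) + c * b        # scaled weights and bias
        print(f"  {name}: sigmoid output = {sigmoid(z):.6f}, perceptron output = {perceptron(w, b, x)}")
```

As $$c$$ grows, the first two cases converge to the perceptron outputs $$1$$ and $$0$$, while the $$w \cdot x + b = 0$$ case stays stuck at $$0.5$$ for every $$c$$.
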
Thanks for reading! Hope you found it helpful. If you have any doubts or spot any mistakes, please comment below.

I will post the rest of the solutions soon.
---
layout: post
title: "Code Walkthrough: Tablib, a Python Module for Tabular Datasets"
date: 2018-10-08 21:00:00
comments: true
categories: blog
image: /assets/img/source_code.png
description: Reading Great Code and its benefits. Code walkthrough of the tablib Python module by Nipun Sadvilkar
---
<hr>

<h1 style="font-size: 30px;">Motivation</h1>

Oftentimes, I like to dive into open source projects to learn the best practices and design patterns programming pundits use to do things correctly and optimally. [Peter Norvig](https://en.wikipedia.org/wiki/Peter_Norvig) put it well in his famous blog post [Teach Yourself Programming in Ten Years](http://norvig.com/21-days.html):

> *Talk with other programmers; read other programs. This is more important than any book or training course.*

I am a big advocate of this. This blog post is meant to show how reading open source code helps you identify and understand efficient patterns and coding constructs.

<h1 style="font-size: 30px;">Tablib</h1>

I admire [Kenneth Reitz](https://github.com/kennethreitz) very much. Do read and follow his [The Hitchhiker’s Guide to Python!](https://docs.python-guide.org) to become a great Python programmer. A lesson from that book - [Reading Great Code](https://docs.python-guide.org/writing/reading/?highlight=tablib#reading-great-code) - is the main reason I decided to have a go at reading the source code of [Tablib](https://github.com/kennethreitz/tablib). Reading source code is daunting at first because of constructs that are obscure or unfamiliar, which is natural. Despite such hurdles, if you keep concentrating you will find lots of "Aha!" moments by spotting useful patterns. Here is my experience: I came across a very simple yet useful snippet for a task that is both important and widespread in data cleaning, namely removing duplicates.

[Source code: tablib's remove_duplicates method](http://docs.python-tablib.org/en/master/_modules/tablib/core/#Dataset.remove_duplicates):

```python
def remove_duplicates(self):
    """Removes all duplicate rows from the :class:`Dataset` object
    while maintaining the original order."""
    seen = set()
    self._data[:] = [row for row in self._data if not (tuple(row) in seen or seen.add(tuple(row)))]
```

Check the `if` condition inside the list comprehension (a close cousin of a [_generator expression_](https://dbader.org/blog/python-generator-expressions)). If you look closely, the technique used to check for duplicate rows is the [_short circuit technique_](https://www.geeksforgeeks.org/short-circuiting-techniques-python/) as implemented in Python.

[Short circuiting as explained by the official docs](https://docs.python.org/2/library/stdtypes.html#boolean-operations-and-or-not):

|Operation|Result|Notes|
|---|---|---|
|`x or y`|if x is false, then y, else x|Only evaluates the second argument (`y`) if the first one (`x`) is falsy.|
|`x and y`|if x is false, then x, else y|Only evaluates the second argument (`y`) if the first one (`x`) is truthy.|
|`not x`|if x is false, then True, else False|`not` has a lower priority than non-Boolean operators.|

<br>
The `remove_duplicates` method uses the 1st and 3rd operations from the table above.
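
To make the short-circuit behaviour concrete, here is a tiny REPL sketch of my own (not from the library) using the same `tuple(row) in seen or seen.add(tuple(row))` pattern:

```python
>>> seen = set()
>>> print(seen.add((1, 2, 3)))                       # set.add() always returns None, which is falsy
None
>>> (1, 2, 3) in seen or seen.add((1, 2, 3))         # already seen: left side is True, add() never runs
True
>>> (4, 5, 6) in seen or seen.add((4, 5, 6))         # unseen: add() runs, whole expression is None, so nothing prints
>>> not ((7, 8, 9) in seen or seen.add((7, 8, 9)))   # unseen row: the whole `not (...)` test is True, so the row is kept
True
```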

The key thing to remember is:

**The evaluation of the expression takes place from left to right.**

Explained with a toy example:
```python
>>> _data = [[1, 2, 3], [4, 5, 6], [1, 2, 3]]
>>> seen = set()
>>> data_deduplicated = [row for row in _data if not (tuple(row) in seen or seen.add(tuple(row)))]

>>> print(data_deduplicated)
# [[1, 2, 3], [4, 5, 6]]
```

To put it into words: within the list comprehension we iterate over the data row by row and check whether the given row is already present in the `seen` _set_. If it is not present,
```python
tuple(row) in seen
```
evaluates to `False`, so as per the 1st operation in the table the second argument is evaluated, which adds the row to the `seen` _set_; since `set.add()` returns `None`, the whole `or` expression is falsy, the `if not (...)` condition is satisfied, and the row is kept in the outer list. Subsequently, if the same row occurs again, we know it's already in the `seen` _set_, the `or` short-circuits without calling `add()`, the condition fails, and that row is not added to the outer list. Overall, this removes the duplicate rows.
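
If the one-liner still feels dense, here is a minimal equivalent written as an explicit loop (my own restatement for clarity, not code from the library):

```python
def remove_duplicates_verbose(rows):
    """Equivalent to the one-liner: keep the first occurrence of each row,
    preserving the original order."""
    seen = set()
    deduplicated = []
    for row in rows:
        key = tuple(row)          # lists are unhashable, so convert each row to a tuple
        if key not in seen:       # the same membership test the short-circuit expression performs
            seen.add(key)         # the side effect hidden inside the one-liner's `or`
            deduplicated.append(row)
    return deduplicated

print(remove_duplicates_verbose([[1, 2, 3], [4, 5, 6], [1, 2, 3]]))
# [[1, 2, 3], [4, 5, 6]]
```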

If you are more of a visual learner, the following demonstration using the [Python tutor tool](http://pythontutor.com/) - built by an outstanding academic and prolific blogger, [Philip Guo](http://pgbovine.net) - should help*:

> *If the IFrame below is not visible, please allow your browser to **"load unsecure scripts"**. Don't worry! It says unsecure only because [Python tutor](http://pythontutor.com/) is served over **http** rather than **https**.

<iframe width="820" height="650" frameborder="1.5" src="http://pythontutor.com/iframe-embed.html#code=_data%20%3D%20%5B%5B1,2,3%5D,%20%5B4,5,6%5D,%20%5B1,2,3%5D%5D%0Aseen%20%3D%20set%28%29%0Adata_deduplicated%20%3D%20%5Brow%20for%20row%20in%20_data%20if%20not%20%28tuple%28row%29%20in%20seen%20or%20seen.add%28tuple%28row%29%29%29%5D&codeDivHeight=400&codeDivWidth=350&cumulative=false&curInstr=6&heapPrimitives=nevernest&origin=opt-frontend.js&py=2&rawInputLstJSON=%5B%5D&textReferences=false"> </iframe>

I hope by now you have understood the [_short circuit technique_](https://www.geeksforgeeks.org/short-circuiting-techniques-python/) and the importance of reading open source code. So keep exploring and do share your experience with me. Thank you! :)
