From 930d4b1c13dd23c69250bc5011c60b10bf64bdc2 Mon Sep 17 00:00:00 2001
From: Nigel Liang
Date: Wed, 7 Nov 2018 12:01:05 -0800
Subject: [PATCH 01/19] Copy/paste bug in ex2

---
 Exercise2/exercise2.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Exercise2/exercise2.ipynb b/Exercise2/exercise2.ipynb
index 39983d90..79911886 100755
--- a/Exercise2/exercise2.ipynb
+++ b/Exercise2/exercise2.ipynb
@@ -817,7 +817,7 @@
     "print('Cost at test theta    : {:.2f}'.format(cost))\n",
     "print('Expected cost (approx): 3.16\\n')\n",
     "\n",
-    "print('Gradient at initial theta (zeros) - first five values only:')\n",
+    "print('Gradient at test theta - first five values only:')\n",
     "print('\\t[{:.4f}, {:.4f}, {:.4f}, {:.4f}, {:.4f}]'.format(*grad[:5]))\n",
     "print('Expected gradients (approx) - first five values only:')\n",
     "print('\\t[0.3460, 0.1614, 0.1948, 0.2269, 0.0922]')"

From 67f760436126a4b8161c02342bad4895db46db87 Mon Sep 17 00:00:00 2001
From: Uzair Fasih
Date: Tue, 19 Mar 2019 19:24:48 +0530
Subject: [PATCH 02/19] Changed lambda_ value from 100 to 0

---
 Exercise5/exercise5.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Exercise5/exercise5.ipynb b/Exercise5/exercise5.ipynb
index 66c4500e..c5e5c679 100755
--- a/Exercise5/exercise5.ipynb
+++ b/Exercise5/exercise5.ipynb
@@ -657,7 +657,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "lambda_ = 100\n",
+    "lambda_ = 0\n",
     "theta = utils.trainLinearReg(linearRegCostFunction, X_poly, y,\n",
     "                             lambda_=lambda_, maxiter=55)\n",
     "\n",

From f2edc4fbb8aac52c7fdc7287bca34a5d9a56a13e Mon Sep 17 00:00:00 2001
From: Gavin Hughes
Date: Thu, 25 Jul 2019 12:01:50 -1000
Subject: [PATCH 03/19] Fix typo.

---
 Exercise1/exercise1.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Exercise1/exercise1.ipynb b/Exercise1/exercise1.ipynb
index 6eea3ca9..0d245b5c 100755
--- a/Exercise1/exercise1.ipynb
+++ b/Exercise1/exercise1.ipynb
@@ -845,7 +845,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "*You should not submit your solutions.*"
+    "*You should now submit your solutions.*"
    ]
   },
   {

From ba5cb0389a440e0f697bf2805a0a488dbe22cc6d Mon Sep 17 00:00:00 2001
From: Furqan Amin
Date: Sat, 3 Aug 2019 10:35:49 +0500
Subject: [PATCH 04/19] Fixed exercise4.ipynb

Added the missing mathematical equation in Point #4 of 2.4
backpropagation. Also added an implementation note linking to the
discussions of the course so it could help fellow students to
implement backprop.
---
 Exercise4/exercise4.ipynb | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/Exercise4/exercise4.ipynb b/Exercise4/exercise4.ipynb
index d8ebee0c..ab2e6145 100755
--- a/Exercise4/exercise4.ipynb
+++ b/Exercise4/exercise4.ipynb
@@ -664,6 +664,7 @@
     "Note that the symbol $*$ performs element wise multiplication in `numpy`.\n",
     "\n",
     "1. Accumulate the gradient from this example using the following formula. Note that you should skip or remove $\\delta_0^{(2)}$. In `numpy`, removing $\\delta_0^{(2)}$ corresponds to `delta_2 = delta_2[1:]`.\n",
+    "$$ \\Delta^{(l)} = \\Delta^{(l)} + \\delta^{(l+1)} (a^{(l)})^{(T)} $$\n",
     "\n",
     "1. Obtain the (unregularized) gradient for the neural network cost function by dividing the accumulated gradients by $\\frac{1}{m}$:\n",
     "$$ \\frac{\\partial}{\\partial \\Theta_{ij}^{(l)}} J(\\Theta) = D_{ij}^{(l)} = \\frac{1}{m} \\Delta_{ij}^{(l)}$$\n",
@@ -672,7 +673,10 @@
     "**Python/Numpy tip**: You should implement the backpropagation algorithm only after you have successfully completed the feedforward and cost functions. While implementing the backpropagation alogrithm, it is often useful to use the `shape` function to print out the shapes of the variables you are working with if you run into dimension mismatch errors.\n",
     "\n",
     "\n",
-    "[Click here to go back and update the function `nnCostFunction` with the backpropagation algorithm](#nnCostFunction)."
+    "[Click here to go back and update the function `nnCostFunction` with the backpropagation algorithm](#nnCostFunction).\n",
+    "\n",
+    "\n",
+    "**Note:** If the iterative solution provided above is proving to be difficult to implement, try implementing the vectorized approach which is easier to implement in the opinion of the moderators of this course. You can find the tutorial for the vectorized approach [here](https://www.coursera.org/learn/machine-learning/discussions/all/threads/a8Kce_WxEeS16yIACyoj1Q)."
    ]
   },
   {

From 8433480788e7f5ea0cf372be446830321a8f9b75 Mon Sep 17 00:00:00 2001
From: Jonathan Dayton
Date: Thu, 24 Oct 2019 14:01:48 -0500
Subject: [PATCH 05/19] Minor grammar fixes

---
 README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index ca997de7..a8c7eb29 100755
--- a/README.md
+++ b/README.md
@@ -3,9 +3,9 @@
 
 ![](machinelearning.jpg)
 
-This repositry contains the python versions of the programming assignments for the [Machine Learning online class](https://www.coursera.org/learn/machine-learning) taught by Professor Andrew Ng. This is perhaps the most popular introductory online machine learning class. In addition to being popular, it is also one of the best Machine learning classes any interested student can take to get started with machine learning. An unfortunate aspect of this class is that the programming assignments are in MATLAB or OCTAVE, probably because this class was made before python become the go-to language in machine learning.
+This repositry contains the python versions of the programming assignments for the [Machine Learning online class](https://www.coursera.org/learn/machine-learning) taught by Professor Andrew Ng. This is perhaps the most popular introductory online machine learning class. In addition to being popular, it is also one of the best Machine learning classes any interested student can take to get started with machine learning. An unfortunate aspect of this class is that the programming assignments are in MATLAB or OCTAVE, probably because this class was made before python became the go-to language in machine learning.
 
-The Python machine learning ecosystem has grown exponentially in the past few years, and still gaining momentum. I suspect that many students who want to get started with their machine learning journey would like to start it with Python also. It is for those reasons I have decided to re-write all the programming assignments in Python, so students can get acquainted with its ecosystem from the start of their learning journey.
+The Python machine learning ecosystem has grown exponentially in the past few years, and is still gaining momentum. I suspect that many students who want to get started with their machine learning journey would like to start it with Python also. It is for those reasons I have decided to re-write all the programming assignments in Python, so students can get acquainted with its ecosystem from the start of their learning journey.
 
 These assignments work seamlessly with the class and do not require any of the materials published in the MATLAB assignments. Here are some new and useful features for these sets of assignments:
 
@@ -96,4 +96,4 @@
 
 - I would like to thank professor Andrew Ng and the crew of the Stanford Machine Learning class on Coursera for such an awesome class.
 
-- Some of the material used, especially the code for submitting assignments for grading is based on [`mstampfer`'s](https://github.com/mstampfer/Coursera-Stanford-ML-Python) python implementation of the assignments.
\ No newline at end of file
+- Some of the material used, especially the code for submitting assignments for grading is based on [`mstampfer`'s](https://github.com/mstampfer/Coursera-Stanford-ML-Python) python implementation of the assignments.

From 7be98ac3cd361b003f8661ffc83806ff293a6ef3 Mon Sep 17 00:00:00 2001
From: Filip
Date: Mon, 18 Nov 2019 14:26:25 +0100
Subject: [PATCH 06/19] Added a link to Deepnote

---
 README.md | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index ca997de7..6bb7cc8f 100755
--- a/README.md
+++ b/README.md
@@ -13,7 +13,11 @@
 - The original assignment instructions have been completely re-written and the parts which used to reference MATLAB/OCTAVE functionality have been changed to reference its `python` counterpart.
 - The re-written instructions are now embedded within the Jupyter Notebook along with the `python` starter code. For each assignment, all work is done solely within the notebook.
 - The `python` assignments can be submitted for grading. They were tested to work perfectly well with the original Coursera grader that is currently used to grade the MATLAB/OCTAVE versions of the assignments.
-- After each part of a given assignment, the Jupyter Notebook contains a cell which prompts the user for submitting the current part of the assignment for grading.
+- After each part of a given assignment, the Jupyter Notebook contains a cell which prompts the user for submitting the current part of the assignment for grading.
+
+ ## Online workspace
+
+ You can work on the assignments in an online workspace called [Deepnote](https://www.deepnote.com/). This allows you to play around with the code and access the assignments from your browser. [](https://beta.deepnote.com/launch?template=data-science&url=https%3A%2F%2Fgithub.com%2Fdibgerge%2Fml-coursera-python-assignments)
 
 ## Downloading the Assignments
 
@@ -24,7 +28,7 @@
 Each assignment is contained in a separate folder. For example, assignment 1 is contained within the folder `Exercise1`. Each folder contains two files:
 - The assignment `jupyter` notebook, which has a `.ipynb` extension. All the code which you need to write will be written within this notebook.
 - A python module `utils.py` which contains some helper functions needed for the assignment. Functions within the `utils` module are called from the python notebook. You do not need to modify or add any code to this file.
-
+ 
 ## Requirements
 
 These assignments has been tested and developed using the following libraries:

From fe6eff79a86c27367585b76853a03dce247d3fab Mon Sep 17 00:00:00 2001
From: Christopher Daigle
Date: Tue, 11 Feb 2020 07:34:57 -0500
Subject: [PATCH 07/19] Update requirements.txt

---
 requirements.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/requirements.txt b/requirements.txt
index 1a7539c8..7ae5f83b 100755
--- a/requirements.txt
+++ b/requirements.txt
@@ -20,7 +20,7 @@ ipython==6.5.0
 ipython-genutils==0.2.0
 ipywidgets==7.4.0
 jedi==0.12.1
-Jinja2==2.10
+Jinja2==2.10.1
 jsonschema==2.6.0
 jupyter==1.0.0
 jupyter-client==5.2.3
@@ -33,7 +33,7 @@ mkl-fft==1.0.4
 mkl-random==1.0.1
 nbconvert==5.3.1
 nbformat==4.4.0
-notebook==5.6.0
+notebook==5.7.8
 numpy==1.13.3
 pandocfilters==1.4.2
 parso==0.3.1
@@ -61,7 +61,7 @@ terminado==0.8.1
 testpath==0.3.1
 tornado==5.1
 traitlets==4.3.2
-Twisted==18.7.0
+twisted==19.7.0
 wcwidth==0.1.7
 webencodings==0.5.1
 widgetsnbextension==3.4.0

From 7998f93ceb40370c7990db7d7094d1325cdc4370 Mon Sep 17 00:00:00 2001
From: Christopher Daigle
Date: Tue, 11 Feb 2020 07:40:32 -0500
Subject: [PATCH 08/19] change to safe versions of packages Jinja2 (2.10 ->
 2.10.1), notebook (5.6.0 -> 5.7.8), and twisted (Twisted 18.7.0 -> twisted
 19.7.0)

---
 requirements.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/requirements.txt b/requirements.txt
index 7ae5f83b..2890a287 100755
--- a/requirements.txt
+++ b/requirements.txt
@@ -61,7 +61,7 @@ terminado==0.8.1
 testpath==0.3.1
 tornado==5.1
 traitlets==4.3.2
-twisted==19.7.0
+twisted==19.7.0 
 wcwidth==0.1.7
 webencodings==0.5.1
 widgetsnbextension==3.4.0

From 461a27df812418351412483e49a1a5f6f35e9b54 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=D0=9D=D0=B8=D0=BA=D0=BE=D0=BB=D0=B0=D0=B9=20=D0=94=D0=B0?=
 =?UTF-8?q?=D0=BD=D0=B0=D0=B8=D0=BB=D0=BE=D0=B2?=
Date: Tue, 19 May 2020 13:04:33 +0300
Subject: [PATCH 09/19] Mistaken indexes fix

---
 Exercise8/exercise8.ipynb | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Exercise8/exercise8.ipynb b/Exercise8/exercise8.ipynb
index 8cd68244..d8aeda3d 100755
--- a/Exercise8/exercise8.ipynb
+++ b/Exercise8/exercise8.ipynb
@@ -683,7 +683,7 @@
     "\n",
     "$$ \\frac{\\partial J}{\\partial x_k^{(i)}} = \\sum_{j:r(i,j)=1} \\left( \\left(\\theta^{(j)}\\right)^T x^{(i)} - y^{(i,j)} \\right) \\theta_k^{(j)} $$\n",
     "\n",
-    "$$ \\frac{\\partial J}{\\partial \\theta_k^{(j)}} = \\sum_{i:r(i,j)=1} \\left( \\left(\\theta^{(j)}\\right)^T x^{(i)}- y^{(i,j)} \\right) x_k^{(j)} $$\n",
+    "$$ \\frac{\\partial J}{\\partial \\theta_k^{(j)}} = \\sum_{i:r(i,j)=1} \\left( \\left(\\theta^{(j)}\\right)^T x^{(i)}- y^{(i,j)} \\right) x_k^{(i)} $$\n",
     "\n",
     "Note that the function returns the gradient for both sets of variables by unrolling them into a single vector. After you have completed the code to compute the gradients, the next cell run a gradient check\n",
     "(available in `utils.checkCostFunction`) to numerically check the implementation of your gradients (this is similar to the numerical check that you used in the neural networks exercise. If your implementation is correct, you should find that the analytical and numerical gradients match up closely.\n",
@@ -809,7 +809,7 @@
     "\n",
     "$$ \\frac{\\partial J}{\\partial x_k^{(i)}} = \\sum_{j:r(i,j)=1} \\left( \\left(\\theta^{(j)}\\right)^T x^{(i)} - y^{(i,j)} \\right) \\theta_k^{(j)} + \\lambda x_k^{(i)} $$\n",
     "\n",
-    "$$ \\frac{\\partial J}{\\partial \\theta_k^{(j)}} = \\sum_{i:r(i,j)=1} \\left( \\left(\\theta^{(j)}\\right)^T x^{(i)}- y^{(i,j)} \\right) x_k^{(j)} + \\lambda \\theta_k^{(j)} $$\n",
+    "$$ \\frac{\\partial J}{\\partial \\theta_k^{(j)}} = \\sum_{i:r(i,j)=1} \\left( \\left(\\theta^{(j)}\\right)^T x^{(i)}- y^{(i,j)} \\right) x_k^{(i)} + \\lambda \\theta_k^{(j)} $$\n",
     "\n",
     "This means that you just need to add $\\lambda x^{(i)}$ to the `X_grad[i,:]` variable described earlier, and add $\\lambda \\theta^{(j)}$ to the `Theta_grad[j, :]` variable described earlier.\n",
     "\n",

From 3e46ab61824946925fbe02506f068e149c4f3103 Mon Sep 17 00:00:00 2001
From: Mukund Choudhary
Date: Fri, 22 May 2020 14:27:33 +0530
Subject: [PATCH 10/19] possible typo fix

---
 Exercise1/exercise1.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Exercise1/exercise1.ipynb b/Exercise1/exercise1.ipynb
index 0d245b5c..fcc76a38 100755
--- a/Exercise1/exercise1.ipynb
+++ b/Exercise1/exercise1.ipynb
@@ -508,7 +508,7 @@
     "    X : array_like\n",
     "        The input dataset of shape (m x n+1).\n",
     "    \n",
-    "    y : arra_like\n",
+    "    y : array_like\n",
     "        Value at given features. A vector of shape (m, ).\n",
     "    \n",
     "    theta : array_like\n",

From 2fc7b44e557bc84a2b84772bd3fb2db3cdbf1c88 Mon Sep 17 00:00:00 2001
From: Gerges Dib
Date: Sat, 6 Jun 2020 23:31:31 -0700
Subject: [PATCH 11/19] remove requirement file, added conda environment file

---
 .gitignore                |  1 +
 Exercise1/exercise1.ipynb |  2 +-
 Exercise3/exercise3.ipynb |  2 +-
 Exercise4/exercise4.ipynb | 22 ++++++++++---
 Exercise5/exercise5.ipynb | 24 ++++----------
 README.md                 |  2 +-
 environment.yml           |  9 ++++++
 requirements.txt          | 68 ---------------------------------------
 8 files changed, 36 insertions(+), 94 deletions(-)
 create mode 100644 environment.yml
 delete mode 100755 requirements.txt

diff --git a/.gitignore b/.gitignore
index 848d2a0d..f04b506b 100644
--- a/.gitignore
+++ b/.gitignore
@@ -108,3 +108,4 @@ venv.bak/
 
 *.pkl
 *-solved.ipynb
+.idea/
\ No newline at end of file
diff --git a/Exercise1/exercise1.ipynb b/Exercise1/exercise1.ipynb
index 7ebc6fa0..ef8a53e0 100755
--- a/Exercise1/exercise1.ipynb
+++ b/Exercise1/exercise1.ipynb
@@ -1299,7 +1299,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.6.4"
+   "version": "3.6.6"
   }
  },
 "nbformat": 4,
diff --git a/Exercise3/exercise3.ipynb b/Exercise3/exercise3.ipynb
index e37be91f..33e782af 100755
--- a/Exercise3/exercise3.ipynb
+++ b/Exercise3/exercise3.ipynb
@@ -915,7 +915,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.6.4"
+   "version": "3.6.6"
   }
  },
 "nbformat": 4,
diff --git a/Exercise4/exercise4.ipynb b/Exercise4/exercise4.ipynb
index d8ebee0c..a07d63c9 100755
--- a/Exercise4/exercise4.ipynb
+++ b/Exercise4/exercise4.ipynb
@@ -710,15 +710,27 @@
     "\n",
     "\n",
     "\n",
-    "**Practical Tip:** Gradient checking works for any function where you are computing the cost and the gradient. Concretely, you can use the same `computeNumericalGradient` function to check if your gradient implementations for the other exercises are correct too (e.g., logistic regression’s cost function).\n",
+    " Practical Tip: Gradient checking works for any function where you are computing the cost and the gradient. Concretely, you can use the same `computeNumericalGradient` function to check if your gradient implementations for the other exercises are correct too (e.g., logistic regression’s cost function).\n",
     ""
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
+   "execution_count": 1,
+   "metadata": {},
+   "outputs": [
+    {
+     "ename": "NameError",
+     "evalue": "name 'utils' is not defined",
+     "output_type": "error",
+     "traceback": [
+      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
+      "\u001b[0;31mNameError\u001b[0m Traceback (most recent call last)",
+      "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0mutils\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mcheckNNGradients\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mnnCostFunction\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
+      "\u001b[0;31mNameError\u001b[0m: name 'utils' is not defined"
+     ]
+    }
+   ],
    "source": [
     "utils.checkNNGradients(nnCostFunction)"
    ]
@@ -916,7 +928,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.6.4"
+   "version": "3.6.6"
   }
  },
 "nbformat": 4,
diff --git a/Exercise5/exercise5.ipynb b/Exercise5/exercise5.ipynb
index 66c4500e..392cdede 100755
--- a/Exercise5/exercise5.ipynb
+++ b/Exercise5/exercise5.ipynb
@@ -19,9 +19,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": true
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "# used for manipulating directory paths\n",
@@ -141,9 +139,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": true
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "def linearRegCostFunction(X, y, theta, lambda_=0.0):\n",
@@ -359,9 +355,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": true
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "def learningCurve(X, y, Xval, yval, lambda_=0):\n",
@@ -528,9 +522,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": true
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "def polyFeatures(X, p):\n",
@@ -732,9 +724,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": true
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "def validationCurve(X, y, Xval, yval):\n",
@@ -896,9 +886,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": true
-   },
+   "metadata": {},
    "outputs": [],
    "source": []
   }
diff --git a/README.md b/README.md
index ca997de7..533b7a6a 100755
--- a/README.md
+++ b/README.md
@@ -57,7 +57,7 @@ If you are on a windows machine:
 
 Once you have installed python, create a new python environment will all the requirements using the following command:
 
-    conda create -n machine_learning python=3.6 scipy=1 numpy=1.13 matplotlib=2.1 jupyter
+    conda env create -f environment.yml
 
 After the new environment is setup, activate it using (windows)
 
diff --git a/environment.yml b/environment.yml
new file mode 100644
index 00000000..07a7a45d
--- /dev/null
+++ b/environment.yml
@@ -0,0 +1,9 @@
+name: machine_learning
+channels:
+  - defaults
+dependencies:
+  - jupyter=1.0.0
+  - matplotlib=2.1.2
+  - numpy=1.13.3
+  - python=3.6.4
+  - scipy=1.0.0
diff --git a/requirements.txt b/requirements.txt
deleted file mode 100755
index 1a7539c8..00000000
--- a/requirements.txt
+++ /dev/null
@@ -1,68 +0,0 @@
-appdirs==1.4.3
-asn1crypto==0.24.0
-attrs==18.1.0
-Automat==0.7.0
-backcall==0.1.0
-bleach==2.1.4
-certifi==2018.8.13
-cffi==1.11.5
-constantly==15.1.0
-cryptography==2.3.1
-cycler==0.10.0
-decorator==4.3.0
-entrypoints==0.2.3
-html5lib==1.0.1
-hyperlink==18.0.0
-idna==2.7
-incremental==17.5.0
-ipykernel==4.8.2
-ipython==6.5.0
-ipython-genutils==0.2.0
-ipywidgets==7.4.0
-jedi==0.12.1
-Jinja2==2.10
-jsonschema==2.6.0
-jupyter==1.0.0
-jupyter-client==5.2.3
-jupyter-console==5.2.0
-jupyter-core==4.4.0
-MarkupSafe==1.0
-matplotlib==2.1.2
-mistune==0.8.3
-mkl-fft==1.0.4
-mkl-random==1.0.1
-nbconvert==5.3.1
-nbformat==4.4.0
-notebook==5.6.0
-numpy==1.13.3
-pandocfilters==1.4.2
-parso==0.3.1
-pexpect==4.6.0
-pickleshare==0.7.4
-prometheus-client==0.3.1
-prompt-toolkit==1.0.15
-ptyprocess==0.6.0
-pyasn1==0.4.4
-pyasn1-modules==0.2.2
-pycparser==2.18
-Pygments==2.2.0
-pyOpenSSL==18.0.0
-pyparsing==2.2.0
-python-dateutil==2.7.3
-pytz==2018.5
-pyzmq==17.1.2
-qtconsole==4.3.1
-scipy==1.1.0
-Send2Trash==1.5.0
-service-identity==17.0.0
-simplegeneric==0.8.1
-six==1.11.0
-terminado==0.8.1
-testpath==0.3.1
-tornado==5.1
-traitlets==4.3.2
-Twisted==18.7.0
-wcwidth==0.1.7
-webencodings==0.5.1
-widgetsnbextension==3.4.0
-zope.interface==4.5.0

From 3ebf1b352b0cf58bf7aa3339ab8b94bd8d0bc96d Mon Sep 17 00:00:00 2001
From: SanderHestvik
Date: Sat, 9 Jan 2021 12:50:39 +0100
Subject: [PATCH 12/19] add minor fix in commented example code Exercise 6

---
 Exercise6/exercise6.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Exercise6/exercise6.ipynb b/Exercise6/exercise6.ipynb
index cdc43975..e7e1ffac 100755
--- a/Exercise6/exercise6.ipynb
+++ b/Exercise6/exercise6.ipynb
@@ -395,7 +395,7 @@
     "    You can use `svmPredict` to predict the labels on the cross\n",
     "    validation set. For example, \n",
     "    \n",
-    "        predictions = svmPredict(model, Xval)\n",
+    "        predictions = utils.svmPredict(model, Xval)\n",
     "\n",
     "    will return the predictions on the cross validation set.\n",
     "    \n",

From c9e00ff0376f1f99124e7a90ee808612b0a6b60b Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Agust=C3=ADn=20Aliaga=20Casaletti?=
Date: Fri, 28 May 2021 11:37:36 -0300
Subject: [PATCH 13/19] Fix formula typo in exercise 3 and other typos in
 exercise 5

---
 Exercise3/exercise3.ipynb | 2 +-
 Exercise5/exercise5.ipynb | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/Exercise3/exercise3.ipynb b/Exercise3/exercise3.ipynb
index 33e782af..044d3634 100755
--- a/Exercise3/exercise3.ipynb
+++ b/Exercise3/exercise3.ipynb
@@ -373,7 +373,7 @@
     "$$\n",
     "\\begin{align*}\n",
     "& \\frac{\\partial J(\\theta)}{\\partial \\theta_0} = \\frac{1}{m} \\sum_{i=1}^m \\left( h_\\theta\\left( x^{(i)} \\right) - y^{(i)} \\right) x_j^{(i)} & \\text{for } j = 0 \\\\\n",
-    "& \\frac{\\partial J(\\theta)}{\\partial \\theta_0} = \\left( \\frac{1}{m} \\sum_{i=1}^m \\left( h_\\theta\\left( x^{(i)} \\right) - y^{(i)} \\right) x_j^{(i)} \\right) + \\frac{\\lambda}{m} \\theta_j & \\text{for } j \\ge 1\n",
+    "& \\frac{\\partial J(\\theta)}{\\partial \\theta_j} = \\left( \\frac{1}{m} \\sum_{i=1}^m \\left( h_\\theta\\left( x^{(i)} \\right) - y^{(i)} \\right) x_j^{(i)} \\right) + \\frac{\\lambda}{m} \\theta_j & \\text{for } j \\ge 1\n",
     "\\end{align*}\n",
     "$$\n",
     "\n",
diff --git a/Exercise5/exercise5.ipynb b/Exercise5/exercise5.ipynb
index c0ca4f5d..5182c278 100755
--- a/Exercise5/exercise5.ipynb
+++ b/Exercise5/exercise5.ipynb
@@ -687,9 +687,9 @@
     "\n",
     "### 3.2 Optional (ungraded) exercise: Adjusting the regularization parameter\n",
     "\n",
-    "In this section, you will get to observe how the regularization parameter affects the bias-variance of regularized polynomial regression. You should now modify the the lambda parameter and try $\\lambda = 1, 100$. For each of these values, the script should generate a polynomial fit to the data and also a learning curve.\n",
+    "In this section, you will get to observe how the regularization parameter affects the bias-variance of regularized polynomial regression. You should now modify the lambda parameter and try $\\lambda = 1, 100$. For each of these values, the script should generate a polynomial fit to the data and also a learning curve.\n",
-    "For $\\lambda = 1$, the generated plots should look like the the figure below. You should see a polynomial fit that follows the data trend well (left) and a learning curve (right) showing that both the cross validation and training error converge to a relatively low value. This shows the $\\lambda = 1$ regularized polynomial regression model does not have the high-bias or high-variance problems. In effect, it achieves a good trade-off between bias and variance.\n",
+    "For $\\lambda = 1$, the generated plots should look like the figure below. You should see a polynomial fit that follows the data trend well (left) and a learning curve (right) showing that both the cross validation and training error converge to a relatively low value. This shows the $\\lambda = 1$ regularized polynomial regression model does not have the high-bias or high-variance problems. In effect, it achieves a good trade-off between bias and variance.\n",
     "\n",
     "\n",

From 3c04c2c2cfa2e14ae749c7f5b2fcbdc4ef10ecc1 Mon Sep 17 00:00:00 2001
From: Andy Wu
Date: Wed, 14 Jul 2021 18:39:55 +0800
Subject: [PATCH 14/19] Changed grader to use the new grading system

---
 Exercise1/utils.py |  4 +++-
 Exercise2/utils.py |  4 +++-
 Exercise3/utils.py |  5 +++--
 Exercise4/utils.py |  4 +++-
 Exercise5/utils.py |  4 +++-
 Exercise6/utils.py |  4 +++-
 Exercise7/utils.py |  4 +++-
 Exercise8/utils.py |  4 +++-
 submission.py      | 54 ++++++++++++++++++++++------------------------
 9 files changed, 50 insertions(+), 37 deletions(-)

diff --git a/Exercise1/utils.py b/Exercise1/utils.py
index d0c909d5..b92e3cf5 100755
--- a/Exercise1/utils.py
+++ b/Exercise1/utils.py
@@ -19,7 +19,9 @@ def __init__(self):
                       'Computing Cost (for multiple variables)',
                       'Gradient Descent (for multiple variables)',
                       'Normal Equations']
-        super().__init__('linear-regression', part_names)
+        part_names_key = ['DCRbJ', 'BGa4S', 'b65eO', 'BbS8u', 'FBlE2', 'RZAZC', '7m5Eu']
+        assignment_key = 'UkTlA-FyRRKV5ooohuwU6A'
+        super().__init__('linear-regression', assignment_key, part_names, part_names_key)
 
     def __iter__(self):
         for part_id in range(1, 8):
diff --git a/Exercise2/utils.py b/Exercise2/utils.py
index 7c52dbe4..8e5e6a98 100755
--- a/Exercise2/utils.py
+++ b/Exercise2/utils.py
@@ -119,7 +119,9 @@ def __init__(self):
                       'Predict',
                       'Regularized Logistic Regression Cost',
                       'Regularized Logistic Regression Gradient']
-        super().__init__('logistic-regression', part_names)
+        part_names_key = ['sFxIn', 'yvXBE', 'HerlY', '9fxV6', 'OddeL', 'aUo3H']
+        assignment_key = 'JvOPouj-S-ys8KjYcPYqrg'
+        super().__init__('logistic-regression', assignment_key, part_names, part_names_key)
 
     def __iter__(self):
         for part_id in range(1, 7):
diff --git a/Exercise3/utils.py b/Exercise3/utils.py
index 633a5636..4bf99715 100755
--- a/Exercise3/utils.py
+++ b/Exercise3/utils.py
@@ -79,8 +79,9 @@ def __init__(self):
                       'One-vs-All Classifier Training',
                       'One-vs-All Classifier Prediction',
                       'Neural Network Prediction Function']
-
-        super().__init__('multi-class-classification-and-neural-networks', part_names)
+        part_names_key = ['jzAIf', 'LjDnh', '3yxcY', 'yNspP']
+        assignment_key = '2KZRbGlpQnyzVI8Ki4uXjw'
+        super().__init__('multi-class-classification-and-neural-networks', assignment_key, part_names, part_names_key)
 
     def __iter__(self):
         for part_id in range(1, 5):
diff --git a/Exercise4/utils.py b/Exercise4/utils.py
index 6b7c3bdc..6d18b86f 100755
--- a/Exercise4/utils.py
+++ b/Exercise4/utils.py
@@ -193,7 +193,9 @@ def __init__(self):
                       'Sigmoid Gradient',
                       'Neural Network Gradient (Backpropagation)',
                       'Regularized Gradient']
-        super().__init__('neural-network-learning', part_names)
+        part_names_key = ['aAiP2', '8ajiz', 'rXsEO', 'TvZch', 'pfIYT']
+        assignment_key = 'xolSVXukR72JH37bfzo0pg'
+        super().__init__('neural-network-learning', assignment_key, part_names, part_names_key)
 
     def __iter__(self):
         for part_id in range(1, 6):
diff --git a/Exercise5/utils.py b/Exercise5/utils.py
index b2340ad7..da8b9be4 100755
--- a/Exercise5/utils.py
+++ b/Exercise5/utils.py
@@ -138,7 +138,9 @@ def __init__(self):
                       'Learning Curve',
                       'Polynomial Feature Mapping',
                       'Validation Curve']
-        super().__init__('regularized-linear-regression-and-bias-variance', part_names)
+        part_names_key = ['a6bvf', 'x4FhA', 'n3zWY', 'lLaa4', 'gyJbG']
+        assignment_key = '-wEfetVmQgG3j-mtasztYg'
+        super().__init__('regularized-linear-regression-and-bias-variance', assignment_key, part_names, part_names_key)
 
     def __iter__(self):
         for part_id in range(1, 6):
diff --git a/Exercise6/utils.py b/Exercise6/utils.py
index 6d99cec8..4ff2c2a8 100755
--- a/Exercise6/utils.py
+++ b/Exercise6/utils.py
@@ -695,7 +695,9 @@ def __init__(self):
                       'Parameters (C, sigma) for Dataset 3',
                       'Email Processing',
                       'Email Feature Extraction']
-        super().__init__('support-vector-machines', part_names)
+        part_names_key = ['drOLk', 'JYt9Q', 'UHwLk', 'RIiFh']
+        assignment_key = 'xHfBJWXxTdKXrUG7dHTQ3g'
+        super().__init__('support-vector-machines', assignment_key, part_names, part_names_key)
 
     def __iter__(self):
         for part_id in range(1, 5):
diff --git a/Exercise7/utils.py b/Exercise7/utils.py
index 81109173..668802b9 100755
--- a/Exercise7/utils.py
+++ b/Exercise7/utils.py
@@ -211,7 +211,9 @@ def __init__(self):
                       'PCA',
                       'Project Data (PCA)',
                       'Recover Data (PCA)']
-        super().__init__('k-means-clustering-and-pca', part_names)
+        part_names_key = ['7yN0U', 'G1WGM', 'ixOMV', 'AFoJK', 'vf9EL']
+        assignment_key = 'rGGTuM9gQoaikOnlhLII1A'
+        super().__init__('k-means-clustering-and-pca', assignment_key, part_names, part_names_key)
 
     def __iter__(self):
         for part_id in range(1, 6):
diff --git a/Exercise8/utils.py b/Exercise8/utils.py
index 938a7bb9..762b7524 100755
--- a/Exercise8/utils.py
+++ b/Exercise8/utils.py
@@ -235,7 +235,9 @@ def __init__(self):
                       'Collaborative Filtering Gradient',
                       'Regularized Cost',
                       'Regularized Gradient']
-        super().__init__('anomaly-detection-and-recommender-systems', part_names)
+        part_names_key = ['WGzrg', '80Tcg', 'KDzSh', 'wZud3', 'BP3th', 'YF0u1']
+        assignment_key = 'JvOPouj-S-ys8KjYcPYqrg'
+        super().__init__('anomaly-detection-and-recommender-systems', assignment_key, part_names, part_names_key)
 
     def __iter__(self):
         for part_id in range(1, 7):
diff --git a/submission.py b/submission.py
index 10113e47..135b19f5 100755
--- a/submission.py
+++ b/submission.py
@@ -1,21 +1,21 @@
-from urllib.parse import urlencode
-from urllib.request import urlopen
-import pickle
 import json
+import os
+import pickle
 from collections import OrderedDict
+
 import numpy as np
-import os
+import requests
 
 
 class SubmissionBase:
-
-    submit_url = 'https://www-origin.coursera.org/api/' \
-                 'onDemandProgrammingImmediateFormSubmissions.v1'
+    submit_url = 'https://www.coursera.org/api/onDemandProgrammingScriptSubmissions.v1?includes=evaluation'
     save_file = 'token.pkl'
 
-    def __init__(self, assignment_slug, part_names):
+    def __init__(self, assignment_slug, assignment_key, part_names, part_names_key):
         self.assignment_slug = assignment_slug
+        self.assignment_key = assignment_key
         self.part_names = part_names
+        self.part_names_key = part_names_key
         self.login = None
         self.token = None
         self.functions = OrderedDict()
@@ -28,24 +28,25 @@ def grade(self):
         # Evaluate the different parts of exercise
         parts = OrderedDict()
         for part_id, result in self:
-            parts[str(part_id)] = {'output': sprintf('%0.5f ', result)}
-        result, response = self.request(parts)
+            parts[self.part_names_key[part_id - 1]] = {'output': sprintf('%0.5f ', result)}
+        response = self.request(parts)
         response = json.loads(response.decode("utf-8"))
 
         # if an error was returned, print it and stop
-        if 'errorMessage' in response:
-            print(response['errorMessage'])
+        if 'errorCode' in response:
+            print(response['message'], response['details']['learnerMessage'])
             return
 
         # Print the grading table
         print('%43s | %9s | %-s' % ('Part Name', 'Score', 'Feedback'))
         print('%43s | %9s | %-s' % ('---------', '-----', '--------'))
-        for part in parts:
-            part_feedback = response['partFeedbacks'][part]
-            part_evaluation = response['partEvaluations'][part]
+        for index, part in enumerate(parts):
+            part_feedback = response['linked']['onDemandProgrammingScriptEvaluations.v1'][0]['parts'][str(part)][
+                'feedback']
+            part_evaluation = response['linked']['onDemandProgrammingScriptEvaluations.v1'][0]['parts'][str(part)]
             score = '%d / %3d' % (part_evaluation['score'], part_evaluation['maxScore'])
-            print('%43s | %9s | %-s' % (self.part_names[int(part) - 1], score, part_feedback))
-        evaluation = response['evaluation']
+            print('%43s | %9s | %-s' % (self.part_names[int(index) - 1], score, part_feedback))
+        evaluation = response['linked']['onDemandProgrammingScriptEvaluations.v1'][0]
         total_score = '%d / %d' % (evaluation['score'], evaluation['maxScore'])
         print(' --------------------------------')
         print('%43s | %9s | %-s\n' % (' ', total_score, ' '))
@@ -71,18 +72,15 @@ def login_prompt(self):
             pickle.dump((self.login, self.token), f)
 
     def request(self, parts):
-        params = {
-            'assignmentSlug': self.assignment_slug,
+        payload = {
+            'assignmentKey': self.assignment_key,
+            'submitterEmail': self.login,
             'secret': self.token,
-            'parts': parts,
-            'submitterEmail': self.login}
-
-        params = urlencode({'jsonBody': json.dumps(params)}).encode("utf-8")
-        f = urlopen(self.submit_url, params)
-        try:
-            return 0, f.read()
-        finally:
-            f.close()
+            'parts': dict(eval(str(parts)))}
+        headers = {}
+
+        r = requests.post(self.submit_url, data=json.dumps(payload), headers=headers)
+        return r.content
 
     def __iter__(self):
         for part_id in self.functions:

From e6d17fc42e09b71cdbab0f47644e0d57c1073cb2 Mon Sep 17 00:00:00 2001
From: Brandon Park
Date: Sun, 18 Jul 2021 15:59:02 -0700
Subject: [PATCH 15/19] Fixed Exercise8 assignment key

---
 Exercise8/utils.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Exercise8/utils.py b/Exercise8/utils.py
index 762b7524..5edf19cf 100755
--- a/Exercise8/utils.py
+++ b/Exercise8/utils.py
@@ -236,7 +236,7 @@ def __init__(self):
                       'Regularized Cost',
                       'Regularized Gradient']
         part_names_key = ['WGzrg', '80Tcg', 'KDzSh', 'wZud3', 'BP3th', 'YF0u1']
-        assignment_key = 'JvOPouj-S-ys8KjYcPYqrg'
+        assignment_key = 'gkyVYM98RcWlmQ9s84QNKA'
         super().__init__('anomaly-detection-and-recommender-systems', assignment_key, part_names, part_names_key)
 
     def __iter__(self):

From f20846416ecb4a5d89eab0600a77bd2ebe01ca10 Mon Sep 17 00:00:00 2001
From: enyoukai <52297896+enyoukai@users.noreply.github.com>
Date: Thu, 2 Sep 2021 19:50:11 -0700
Subject: [PATCH 16/19] typo of element as elemennt

---
 Exercise1/exercise1.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Exercise1/exercise1.ipynb b/Exercise1/exercise1.ipynb
index 38a9faed..6f921a48 100755
--- a/Exercise1/exercise1.ipynb
+++ b/Exercise1/exercise1.ipynb
@@ -371,7 +371,7 @@
     "\n",
     "As you perform gradient descent to learn minimize the cost function $J(\\theta)$, it is helpful to monitor the convergence by computing the cost. In this section, you will implement a function to calculate $J(\\theta)$ so you can check the convergence of your gradient descent implementation. \n",
     "\n",
-    "Your next task is to complete the code for the function `computeCost` which computes $J(\\theta)$. As you are doing this, remember that the variables $X$ and $y$ are not scalar values. $X$ is a matrix whose rows represent the examples from the training set and $y$ is a vector whose each elemennt represent the value at a given row of $X$.\n",
+    "Your next task is to complete the code for the function `computeCost` which computes $J(\\theta)$. As you are doing this, remember that the variables $X$ and $y$ are not scalar values. $X$ is a matrix whose rows represent the examples from the training set and $y$ is a vector whose each element represent the value at a given row of $X$.\n",
     ""
    ]
   },

From 0b5560d6f39613e35eda1571c4cfa0f4dc34b530 Mon Sep 17 00:00:00 2001
From: GSKW <63060445+GSKW@users.noreply.github.com>
Date: Fri, 26 Nov 2021 20:51:56 +0300
Subject: [PATCH 17/19] Fixed index system

---
 submission.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/submission.py b/submission.py
index 135b19f5..2236f4f2 100755
--- a/submission.py
+++ b/submission.py
@@ -45,7 +45,7 @@ def grade(self):
                 'feedback']
             part_evaluation = response['linked']['onDemandProgrammingScriptEvaluations.v1'][0]['parts'][str(part)]
             score = '%d / %3d' % (part_evaluation['score'], part_evaluation['maxScore'])
-            print('%43s | %9s | %-s' % (self.part_names[int(index) - 1], score, part_feedback))
+            print('%43s | %9s | %-s' % (self.part_names[int(index)], score, part_feedback))
         evaluation = response['linked']['onDemandProgrammingScriptEvaluations.v1'][0]
         total_score = '%d / %d' % (evaluation['score'], evaluation['maxScore'])
         print(' --------------------------------')

From 5ea6e17ede688fb34fa4df5c53726d6efe411521 Mon Sep 17 00:00:00 2001
From:
Zohair-coder <52404521+Zohair-coder@users.noreply.github.com> Date: Tue, 4 Jan 2022 17:21:36 +0500 Subject: [PATCH 18/19] Added requests dependency --- environment.yml | 1 + 1 file changed, 1 insertion(+) diff --git a/environment.yml b/environment.yml index 07a7a45d..624707a3 100644 --- a/environment.yml +++ b/environment.yml @@ -5,5 +5,6 @@ dependencies: - jupyter=1.0.0 - matplotlib=2.1.2 - numpy=1.13.3 + - requests=2.26.0 - python=3.6.4 - scipy=1.0.0 From 5d06e83b5fbf51098594b0ba5c47aacb57bf738d Mon Sep 17 00:00:00 2001 From: Andrew Low Date: Thu, 26 May 2022 23:55:18 +0800 Subject: [PATCH 19/19] fix typo in exercise 5 --- Exercise5/exercise5.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Exercise5/exercise5.ipynb b/Exercise5/exercise5.ipynb index 5182c278..5da0c5f5 100755 --- a/Exercise5/exercise5.ipynb +++ b/Exercise5/exercise5.ipynb @@ -396,7 +396,7 @@ " A vector of shape m. error_train[i] contains the training error for\n", " i examples.\n", " error_val : array_like\n", - " A vecotr of shape m. error_val[i] contains the validation error for\n", + " A vector of shape m. error_val[i] contains the validation error for\n", " i training examples.\n", " \n", " Instructions\n",