Commit 85817ac

removed unnecessary comments

Author: Quinn Liu
1 parent c88890f · commit 85817ac

File tree

2 files changed: +26 -43 lines changed

  • README.md
  • unsupervisedLearning/neuralNetworks/learningWithBackpropagation/ex4.m

README.md

Lines changed: 3 additions & 3 deletions
@@ -1,6 +1,9 @@
 Machine Learning
 ================
 
+The majority of the material here was created while taking Andrew Ng's free online
+[Machine Learning class](https://www.coursera.org/course/ml) which I highly recommend!
+
 Definition of Machine Learning by Tom Mitchell
 
 *"A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E."*
@@ -27,8 +30,5 @@ Definition of Machine Learning by Tom Mitchell
 - TODO: ReinforcementLearning
 - TODO: RecommenderSystems
 
-Most importantly, the majority of the material here was created while
-taking Andrew Ng's free online [Machine Learning class](https://www.coursera.org/course/ml).
-
 ===================================================================
 Feel free to e-mail me at [email protected] for any questions. Enjoy!

unsupervisedLearning/neuralNetworks/learningWithBackpropagation/ex4.m

Lines changed: 23 additions & 40 deletions
@@ -1,33 +1,17 @@
-%% Machine Learning Online Class - Exercise 4 Neural Network Learning
-
-%  Instructions
-%  ------------
-%
-%  This file contains code that helps you get started on the
-%  linear exercise. You will need to complete the following functions
-%  in this exericse:
-%
-%     sigmoidGradient.m
-%     randInitializeWeights.m
-%     nnCostFunction.m
-%
-%  For this exercise, you will not need to change any code in this file,
-%  or any other files other than those mentioned above.
-%
+%% Neural Network Learning with Backpropagation
 
 %% Initialization
 clear ; close all; clc
 
 %% Setup the parameters you will use for this exercise
-input_layer_size = 400;  % 20x20 Input Images of Digits
-hidden_layer_size = 25;  % 25 hidden units
-num_labels = 10;         % 10 labels, from 1 to 10
-                         % (note that we have mapped "0" to label 10)
+inputLayerSize = 400;    % 20x20 Input Images of Digits
+hiddenLayerSize = 25;    % 25 hidden units
+numberOfLabels = 10;     % 10 labels, from 1 to 10
+                         % (note that we have mapped "0" to label 10)
 
 %% =========== Part 1: Loading and Visualizing Data =============
 % We start the exercise by first loading and visualizing the dataset.
 % You will be working with a dataset that contains handwritten digits.
-%
 
 % Load Training Data
 fprintf('Loading and Visualizing Data ...\n')
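
Note (reviewer sketch, not part of the commit): the comment above says digit "0" is stored as label 10. That convention matters later, when nnCostFunction.m expands the label vector into one-hot rows. A minimal, hypothetical illustration:

% Hedged sketch: expanding labels to one-hot rows under the "0 -> 10" convention.
% yExample is hypothetical; the real expansion lives inside nnCostFunction.m.
yExample = [10; 3; 1];        % hypothetical labels for the digits 0, 3 and 1
I = eye(10);                  % 10 = numberOfLabels
Y = I(yExample, :);           % each row of Y is one-hot; row 1 marks column 10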
@@ -40,6 +24,7 @@
 sel = sel(1:100);
 
 displayData(X(sel, :));
+title('Example of 100 Randomly Selected Digits from Input Training Set');
 
 fprintf('Program paused. Press enter to continue.\n');
 pause;
@@ -54,7 +39,7 @@
 % Load the weights into variables Theta1 and Theta2
 load('ex4weights.mat');
 
-% Unroll parameters
+% Unroll parameters by converting the matrix into a single long column vector
 nn_params = [Theta1(:) ; Theta2(:)];
 
 %% ================ Part 3: Compute Cost (Feedforward) ================
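
Note (reviewer sketch, not part of the commit): the reworded comment above describes unrolling. A self-contained round trip showing that (:) stacks a matrix column-major and reshape recovers it, using small hypothetical matrices instead of Theta1/Theta2:

% Hedged sketch of unrolling parameters and recovering them with reshape.
A = magic(3);                           % hypothetical 3x3 "weight" matrix
B = ones(2, 4);                         % hypothetical 2x4 "weight" matrix
params = [A(:) ; B(:)];                 % unroll: column-major stack, 9 + 8 = 17 elements
A2 = reshape(params(1:9), 3, 3);        % recover A
B2 = reshape(params(10:end), 2, 4);     % recover B
isequal(A, A2) && isequal(B, B2)        % prints 1: the round trip is lossless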
@@ -74,8 +59,8 @@
 % Weight regularization parameter (we set this to 0 here).
 lambda = 0;
 
-J = nnCostFunction(nn_params, input_layer_size, hidden_layer_size, ...
-                   num_labels, X, y, lambda);
+J = nnCostFunction(nn_params, inputLayerSize, hiddenLayerSize, ...
+                   numberOfLabels, X, y, lambda);
 
 fprintf(['Cost at parameters (loaded from ex4weights): %f '...
          '\n(this value should be about 0.287629)\n'], J);
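
Note (reviewer sketch, not part of the commit): with lambda = 0 the value 0.287629 is the plain cross-entropy cost over all examples and output units. A tiny, hypothetical stand-in for what nnCostFunction.m computes, where H plays the role of the feedforward output and Y the one-hot labels:

% Hedged sketch of the unregularized cross-entropy cost.
H = [0.9 0.1; 0.2 0.8];                  % hypothetical m x K predictions
Y = [1 0; 0 1];                          % hypothetical m x K one-hot labels
m = size(Y, 1);
J = (1/m) * sum(sum(-Y .* log(H) - (1 - Y) .* log(1 - H)));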
@@ -93,8 +78,8 @@
 % Weight regularization parameter (we set this to 1 here).
 lambda = 1;
 
-J = nnCostFunction(nn_params, input_layer_size, hidden_layer_size, ...
-                   num_labels, X, y, lambda);
+J = nnCostFunction(nn_params, inputLayerSize, hiddenLayerSize, ...
+                   numberOfLabels, X, y, lambda);
 
 fprintf(['Cost at parameters (loaded from ex4weights): %f '...
          '\n(this value should be about 0.383770)\n'], J);
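
Note (reviewer sketch, not part of the commit): with lambda = 1 the only difference from Part 3 is a regularization term added to J. A hedged sketch of that term, excluding the bias column (first column) of each weight matrix; the weights and m below are assumptions for illustration only:

% Hedged sketch of the regularization term added when lambda > 0.
Theta1 = rand(25, 401);  Theta2 = rand(10, 26);   % hypothetical weight matrices
m = 5000;  lambda = 1;                            % m assumed here for illustration
reg = (lambda / (2*m)) * (sum(sum(Theta1(:, 2:end).^2)) + ...
                          sum(sum(Theta2(:, 2:end).^2)));
% regularized cost = unregularized J + reg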
@@ -128,8 +113,8 @@
 
 fprintf('\nInitializing Neural Network Parameters ...\n')
 
-initial_Theta1 = randInitializeWeights(input_layer_size, hidden_layer_size);
-initial_Theta2 = randInitializeWeights(hidden_layer_size, num_labels);
+initial_Theta1 = randInitializeWeights(inputLayerSize, hiddenLayerSize);
+initial_Theta2 = randInitializeWeights(hiddenLayerSize, numberOfLabels);
 
 % Unroll parameters
 initial_nn_params = [initial_Theta1(:) ; initial_Theta2(:)];
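
Note (reviewer sketch, not part of the commit): randInitializeWeights.m is one of the functions the exercise asks you to complete. A typical sketch breaks symmetry by drawing each weight uniformly from a small interval around zero; epsilon_init = 0.12 is an assumed, commonly used value:

% Hedged sketch of symmetry-breaking random initialization.
L_in = 400;  L_out = 25;                          % e.g. input layer -> hidden layer
epsilon_init = 0.12;                              % assumed small range around zero
W = rand(L_out, 1 + L_in) * 2 * epsilon_init - epsilon_init;   % includes bias column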
@@ -162,8 +147,8 @@
 checkNNGradients(lambda);
 
 % Also output the costFunction debugging values
-debug_J = nnCostFunction(nn_params, input_layer_size, ...
-                         hidden_layer_size, num_labels, X, y, lambda);
+debug_J = nnCostFunction(nn_params, inputLayerSize, ...
+                         hiddenLayerSize, numberOfLabels, X, y, lambda);
 
 fprintf(['\n\nCost at (fixed) debugging parameters (w/ lambda = 10): %f ' ...
          '\n(this value should be about 0.576051)\n\n'], debug_J);
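
Note (reviewer sketch, not part of the commit): checkNNGradients compares the backpropagation gradient against a numerical estimate. A self-contained sketch of the central-difference approximation that style of check relies on, using a hypothetical stand-in cost function:

% Hedged sketch of numerical gradient checking via central differences.
costFunc = @(p) sum(p.^2);                % hypothetical cost; analytic gradient is 2*p
theta = [1; 2; 3];
e = 1e-4;
numgrad = zeros(size(theta));
for i = 1:numel(theta)
  perturb = zeros(size(theta));
  perturb(i) = e;
  numgrad(i) = (costFunc(theta + perturb) - costFunc(theta - perturb)) / (2*e);
end
% numgrad should closely match 2*theta, element by element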
@@ -190,20 +175,20 @@
 
 % Create "short hand" for the cost function to be minimized
 costFunction = @(p) nnCostFunction(p, ...
-                                   input_layer_size, ...
-                                   hidden_layer_size, ...
-                                   num_labels, X, y, lambda);
+                                   inputLayerSize, ...
+                                   hiddenLayerSize, ...
+                                   numberOfLabels, X, y, lambda);
 
 % Now, costFunction is a function that takes in only one argument (the
 % neural network parameters)
 [nn_params, cost] = fmincg(costFunction, initial_nn_params, options);
 
 % Obtain Theta1 and Theta2 back from nn_params
-Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
-                 hidden_layer_size, (input_layer_size + 1));
+Theta1 = reshape(nn_params(1:hiddenLayerSize * (inputLayerSize + 1)), ...
+                 hiddenLayerSize, (inputLayerSize + 1));
 
-Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
-                 num_labels, (hidden_layer_size + 1));
+Theta2 = reshape(nn_params((1 + (hiddenLayerSize * (inputLayerSize + 1))):end), ...
+                 numberOfLabels, (hiddenLayerSize + 1));
 
 fprintf('Program paused. Press enter to continue.\n');
 pause;
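
Note (reviewer sanity check, not part of the commit): with the sizes set at the top of the file, the reshape calls above should recover a 25 x 401 Theta1 and a 10 x 26 Theta2, so nn_params must hold 25*401 + 10*26 = 10285 elements:

% Quick size check on the unrolled parameter vector.
inputLayerSize = 400;  hiddenLayerSize = 25;  numberOfLabels = 10;
n1 = hiddenLayerSize * (inputLayerSize + 1);      % 25 * 401 = 10025 weights in Theta1
n2 = numberOfLabels * (hiddenLayerSize + 1);      % 10 * 26  = 260 weights in Theta2
n1 + n2                                           % 10285, the expected length of nn_params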
@@ -229,6 +214,4 @@
 
 pred = predict(Theta1, Theta2, X);
 
-fprintf('\nTraining Set Accuracy: %f\n', mean(double(pred == y)) * 100);
-
-
+fprintf('\nTraining Set Accuracy: %f\n', mean(double(pred == y)) * 100);
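
Note (reviewer sketch, not part of the commit): predict.m is provided by the exercise; a minimal sketch of the two-layer feedforward it is expected to perform, assuming sigmoid activations and taking the most probable label per row (all data below is hypothetical):

% Hedged sketch of feedforward prediction, mirroring predict(Theta1, Theta2, X).
sigmoid = @(z) 1 ./ (1 + exp(-z));
X = rand(5, 400);                               % hypothetical 20x20 digit inputs
Theta1 = rand(25, 401);  Theta2 = rand(10, 26); % hypothetical trained weights
m = size(X, 1);
a1 = [ones(m, 1) X];                            % add bias unit to inputs
a2 = [ones(m, 1) sigmoid(a1 * Theta1')];        % hidden layer activations, m x 26
a3 = sigmoid(a2 * Theta2');                     % output layer, m x 10
[~, pred] = max(a3, [], 2);                     % predicted label = index of max output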
