
Commit 60e4c80

DOC fixed doctests for cache_size parameter
1 parent 8878ebe commit 60e4c80

File tree

2 files changed: +17 −10 lines changed


doc/modules/svm.rst

Lines changed: 13 additions & 6 deletions

@@ -53,6 +53,13 @@ The disadvantages of Support Vector Machines include:
     the SGDClassifier class instead. The objective function can be
     configured to be almost the same as the LinearSVC model.
 
+**Kernel cache size**
+For SVC, SVR, NuSVC and NuSVR, the size of the kernel cache
+has a strong impact on run times for larger problems.
+If you have enough RAM available, it is recommended
+to set `cache_size` to a higher value, such as 500 (MB)
+or 1000 (MB).
+
 
 .. _svm_classification:
 
@@ -89,8 +96,8 @@ training samples::
     >>> Y = [0, 1]
     >>> clf = svm.SVC()
     >>> clf.fit(X, Y)
-    SVC(C=1.0, coef0=0.0, degree=3, gamma=0.5, kernel='rbf', probability=False,
-      shrinking=True, tol=0.001)
+    SVC(C=1.0, cache_size=200, coef0=0.0, degree=3, gamma=0.5, kernel='rbf',
+      probability=False, shrinking=True, tol=0.001)
 
 After being fitted, the model can then be used to predict new values::
 
@@ -126,8 +133,8 @@ classifiers are constructed and each one trains data from two classes::
     >>> Y = [0, 1, 2, 3]
     >>> clf = svm.SVC()
     >>> clf.fit(X, Y)
-    SVC(C=1.0, coef0=0.0, degree=3, gamma=1.0, kernel='rbf', probability=False,
-      shrinking=True, tol=0.001)
+    SVC(C=1.0, cache_size=200, coef0=0.0, degree=3, gamma=1.0, kernel='rbf',
+      probability=False, shrinking=True, tol=0.001)
     >>> dec = clf.decision_function([[1]])
     >>> dec.shape[1] # 4 classes: 4*3/2 = 6
     6
@@ -215,8 +222,8 @@ floating point values instead of integer values::
     >>> y = [0.5, 2.5]
     >>> clf = svm.SVR()
     >>> clf.fit(X, y)
-    SVR(C=1.0, coef0=0.0, degree=3, epsilon=0.1, gamma=0.5, kernel='rbf',
-      probability=False, shrinking=True, tol=0.001)
+    SVR(C=1.0, cache_size=200, coef0=0.0, degree=3, epsilon=0.1, gamma=0.5,
+      kernel='rbf', probability=False, shrinking=True, tol=0.001)
     >>> clf.predict([[1, 1]])
     array([ 1.5])

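The `cache_size` advice added above can be tried directly. A minimal sketch, assuming scikit-learn is installed; note that `cache_size` is given in MB and only affects training speed, never the fitted model itself:

```python
# Sketch: fitting an SVC with a larger kernel cache than the 200 MB default.
# The cache holds recently computed kernel rows, so a bigger value mainly
# helps on larger training sets; the resulting model is identical.
from sklearn import svm

X = [[0, 0], [1, 1]]
Y = [0, 1]

clf = svm.SVC(cache_size=500)  # default is cache_size=200 (MB)
clf.fit(X, Y)
print(clf.predict([[2., 2.]]))  # predicts array([1])
```

On a toy problem like this the cache size makes no measurable difference; the speedup appears once the kernel matrix no longer fits in the default cache.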
doc/tutorial.rst

Lines changed: 4 additions & 4 deletions

@@ -151,8 +151,8 @@ set, let us use all the images of our dataset apart from the last
 one::
 
     >>> clf.fit(digits.data[:-1], digits.target[:-1])
-    SVC(C=1.0, coef0=0.0, degree=3, gamma=0.001, kernel='rbf', probability=False,
-      shrinking=True, tol=0.001)
+    SVC(C=1.0, cache_size=200, coef0=0.0, degree=3, gamma=0.001, kernel='rbf',
+      probability=False, shrinking=True, tol=0.001)
 
 Now you can predict new values; in particular, we can ask the
 classifier what is the digit of our last image in the `digits` dataset,
@@ -187,8 +187,8 @@ persistence model, namely `pickle <http://docs.python.org/library/pickle.html>`_
     >>> iris = datasets.load_iris()
     >>> X, y = iris.data, iris.target
     >>> clf.fit(X, y)
-    SVC(C=1.0, coef0=0.0, degree=3, gamma=0.25, kernel='rbf', probability=False,
-      shrinking=True, tol=0.001)
+    SVC(C=1.0, cache_size=200, coef0=0.0, degree=3, gamma=0.25, kernel='rbf',
+      probability=False, shrinking=True, tol=0.001)
 
     >>> import pickle
     >>> s = pickle.dumps(clf)
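The persistence doctest updated above can be sketched as a complete round trip. This assumes only scikit-learn and the standard-library `pickle` module; any fitted estimator works the same way:

```python
# Sketch: pickle-based model persistence, as in the tutorial's doctest.
# Serialize a fitted classifier to bytes, restore it, and check that the
# restored copy predicts exactly like the original.
import pickle

from sklearn import datasets, svm

iris = datasets.load_iris()
X, y = iris.data, iris.target

clf = svm.SVC().fit(X, y)

s = pickle.dumps(clf)       # serialize the fitted model to a bytes object
clf2 = pickle.loads(s)      # reconstruct an equivalent, already-fitted model

print(clf2.predict(X[:1]))  # same prediction as clf.predict(X[:1])
```

Pickled models are only portable across compatible library versions, so long-term storage usually calls for re-training or a versioned serialization strategy.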
