@@ -53,6 +53,13 @@ The disadvantages of Support Vector Machines include:
 the SGDClassifier class instead. The objective function can be
 configured to be almost the same as the LinearSVC model.

+**Kernel cache size**
+For SVC, SVR, NuSVC and NuSVR, the size of the kernel cache
+has a strong impact on run times for larger problems.
+If you have enough RAM available, it is recommended
+to set ``cache_size`` to a higher value, such as 500 (MB)
+or 1000 (MB).
+

 .. _svm_classification:

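The kernel-cache note added above can be illustrated with a minimal sketch (it assumes scikit-learn is installed; the value 500 is the illustrative figure from the note, not a universal recommendation):

```python
from sklearn import svm

# Sketch: raise the kernel cache from its default (in MB) to 500 MB,
# as the note suggests for larger problems when RAM allows.
X = [[0, 0], [1, 1]]
Y = [0, 1]
clf = svm.SVC(cache_size=500)  # cache_size is given in MB
clf.fit(X, Y)
print(clf.cache_size)  # -> 500
```

The cache holds recently computed kernel rows, so a larger cache mainly pays off when the same rows are revisited many times during optimization; it does not change the fitted model, only the run time.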
@@ -89,8 +96,8 @@ training samples::
     >>> Y = [0, 1]
     >>> clf = svm.SVC()
     >>> clf.fit(X, Y)
-    SVC(C=1.0, coef0=0.0, degree=3, gamma=0.5, kernel='rbf', probability=False,
-    shrinking=True, tol=0.001)
+    SVC(C=1.0, cache_size=200, coef0=0.0, degree=3, gamma=0.5, kernel='rbf',
+    probability=False, shrinking=True, tol=0.001)

 After being fitted, the model can then be used to predict new values::

@@ -126,8 +133,8 @@ classifiers are constructed and each one trains data from two classes::
     >>> Y = [0, 1, 2, 3]
     >>> clf = svm.SVC()
     >>> clf.fit(X, Y)
-    SVC(C=1.0, coef0=0.0, degree=3, gamma=1.0, kernel='rbf', probability=False,
-    shrinking=True, tol=0.001)
+    SVC(C=1.0, cache_size=200, coef0=0.0, degree=3, gamma=1.0, kernel='rbf',
+    probability=False, shrinking=True, tol=0.001)
     >>> dec = clf.decision_function([[1]])
     >>> dec.shape[1] # 4 classes: 4*3/2 = 6
     6
@@ -215,8 +222,8 @@ floating point values instead of integer values::
     >>> y = [0.5, 2.5]
     >>> clf = svm.SVR()
     >>> clf.fit(X, y)
-    SVR(C=1.0, coef0=0.0, degree=3, epsilon=0.1, gamma=0.5, kernel='rbf',
-    probability=False, shrinking=True, tol=0.001)
+    SVR(C=1.0, cache_size=200, coef0=0.0, degree=3, epsilon=0.1, gamma=0.5,
+    kernel='rbf', probability=False, shrinking=True, tol=0.001)
     >>> clf.predict([[1, 1]])
     array([ 1.5])
