
Commit 24a3150

LalliAcquarth authored and committed
DOC Docstrings validation in LinearSVC (scikit-learn#15496)
1 parent bcb8eda commit 24a3150

File tree: 1 file changed, +39 −39 lines


sklearn/svm/_classes.py

Lines changed: 39 additions & 39 deletions
@@ -26,12 +26,12 @@ class LinearSVC(BaseEstimator, LinearClassifierMixin,

    Parameters
    ----------
-    penalty : string, 'l1' or 'l2' (default='l2')
+    penalty : str, 'l1' or 'l2' (default='l2')
        Specifies the norm used in the penalization. The 'l2'
        penalty is the standard used in SVC. The 'l1' leads to ``coef_``
        vectors that are sparse.

-    loss : string, 'hinge' or 'squared_hinge' (default='squared_hinge')
+    loss : str, 'hinge' or 'squared_hinge' (default='squared_hinge')
        Specifies the loss function. 'hinge' is the standard SVM loss
        (used e.g. by the SVC class) while 'squared_hinge' is the
        square of the hinge loss.
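As a side note on the ``penalty`` behaviour documented above: an L1 penalty drives many entries of ``coef_`` to exactly zero. A minimal sketch, not part of this commit and using only public scikit-learn API (``penalty='l1'`` requires ``dual=False``):

    # Sketch: L1-penalized LinearSVC typically yields a sparse coef_ vector.
    from sklearn.datasets import make_classification
    from sklearn.svm import LinearSVC

    X, y = make_classification(n_features=20, n_informative=3, random_state=0)
    sparse_clf = LinearSVC(penalty='l1', loss='squared_hinge', dual=False,
                           C=0.1, tol=1e-5, max_iter=10000).fit(X, y)
    # Count coefficients that the L1 penalty zeroed out (typically several here).
    print((sparse_clf.coef_ == 0).sum())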
@@ -47,7 +47,7 @@ class LinearSVC(BaseEstimator, LinearClassifierMixin,
        Regularization parameter. The strength of the regularization is
        inversely proportional to C. Must be strictly positive.

-    multi_class : string, 'ovr' or 'crammer_singer' (default='ovr')
+    multi_class : str, 'ovr' or 'crammer_singer' (default='ovr')
        Determines the multi-class strategy if `y` contains more than
        two classes.
        ``"ovr"`` trains n_classes one-vs-rest classifiers, while
@@ -58,7 +58,7 @@ class LinearSVC(BaseEstimator, LinearClassifierMixin,
        If ``"crammer_singer"`` is chosen, the options loss, penalty and dual
        will be ignored.

-    fit_intercept : boolean, optional (default=True)
+    fit_intercept : bool, optional (default=True)
        Whether to calculate the intercept for this model. If set
        to false, no intercept will be used in calculations
        (i.e. data is expected to be already centered).
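A small sketch of the ``fit_intercept=False`` case documented above, under the stated assumption that the features are centered beforehand (not part of this commit):

    # Sketch: with fit_intercept=False no intercept term is learned.
    from sklearn.datasets import make_classification
    from sklearn.svm import LinearSVC

    X, y = make_classification(n_features=4, random_state=0)
    X_centered = X - X.mean(axis=0)   # center the data ourselves
    clf = LinearSVC(fit_intercept=False, tol=1e-5).fit(X_centered, y)
    print(clf.intercept_)             # no intercept was fitted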
@@ -80,7 +80,7 @@ class LinearSVC(BaseEstimator, LinearClassifierMixin,
        weight one.
        The "balanced" mode uses the values of y to automatically adjust
        weights inversely proportional to class frequencies in the input data
-        as ``n_samples / (n_classes * np.bincount(y))``
+        as ``n_samples / (n_classes * np.bincount(y))``.

    verbose : int, (default=0)
        Enable verbose output. Note that this setting takes advantage of a
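A worked example of the "balanced" formula quoted above, ``n_samples / (n_classes * np.bincount(y))``, on a small imbalanced label vector (illustrative only, not part of the diff):

    import numpy as np

    y = np.array([0, 0, 0, 0, 0, 0, 1, 1])   # 6 samples of class 0, 2 of class 1
    n_samples, n_classes = y.shape[0], np.unique(y).size
    weights = n_samples / (n_classes * np.bincount(y))
    # Roughly [0.67, 2.0]: the minority class receives the larger weight.
    print(weights)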
@@ -119,20 +119,26 @@ class LinearSVC(BaseEstimator, LinearClassifierMixin,
    n_iter_ : int
        Maximum number of iterations run across all classes.

-    Examples
+    See Also
    --------
-    >>> from sklearn.svm import LinearSVC
-    >>> from sklearn.datasets import make_classification
-    >>> X, y = make_classification(n_features=4, random_state=0)
-    >>> clf = LinearSVC(random_state=0, tol=1e-5)
-    >>> clf.fit(X, y)
-    LinearSVC(random_state=0, tol=1e-05)
-    >>> print(clf.coef_)
-    [[0.085... 0.394... 0.498... 0.375...]]
-    >>> print(clf.intercept_)
-    [0.284...]
-    >>> print(clf.predict([[0, 0, 0, 0]]))
-    [1]
+    SVC
+        Implementation of Support Vector Machine classifier using libsvm:
+        the kernel can be non-linear but its SMO algorithm does not
+        scale to large number of samples as LinearSVC does.
+
+        Furthermore SVC multi-class mode is implemented using one
+        vs one scheme while LinearSVC uses one vs the rest. It is
+        possible to implement one vs the rest with SVC by using the
+        :class:`sklearn.multiclass.OneVsRestClassifier` wrapper.
+
+        Finally SVC can fit dense data without memory copy if the input
+        is C-contiguous. Sparse data will still incur memory copy though.
+
+    sklearn.linear_model.SGDClassifier
+        SGDClassifier can optimize the same cost function as LinearSVC
+        by adjusting the penalty and loss parameters. In addition it requires
+        less memory, allows incremental (online) learning, and implements
+        various loss functions and regularization regimes.

    Notes
    -----
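The new See Also entry notes that SGDClassifier can optimize the same cost function as LinearSVC by adjusting the penalty and loss parameters. A hedged sketch of that equivalence, not part of the commit; the ``alpha ≈ 1 / (C * n_samples)`` mapping is an approximation:

    # Sketch: hinge loss + L2 penalty makes SGDClassifier a close relative of LinearSVC.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import SGDClassifier
    from sklearn.svm import LinearSVC

    X, y = make_classification(n_features=4, random_state=0)
    svc = LinearSVC(C=1.0, loss='hinge', random_state=0, max_iter=10000).fit(X, y)
    sgd = SGDClassifier(loss='hinge', penalty='l2', alpha=1.0 / X.shape[0],
                        random_state=0).fit(X, y)
    # Both fit a linear model under the hinge loss; SGDClassifier additionally
    # supports incremental learning via partial_fit.
    print(svc.score(X, y), sgd.score(X, y))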
@@ -153,27 +159,20 @@ class LinearSVC(BaseEstimator, LinearClassifierMixin,
    `LIBLINEAR: A Library for Large Linear Classification
    <https://www.csie.ntu.edu.tw/~cjlin/liblinear/>`__

-    See also
+    Examples
    --------
-    SVC
-        Implementation of Support Vector Machine classifier using libsvm:
-        the kernel can be non-linear but its SMO algorithm does not
-        scale to large number of samples as LinearSVC does.
-
-        Furthermore SVC multi-class mode is implemented using one
-        vs one scheme while LinearSVC uses one vs the rest. It is
-        possible to implement one vs the rest with SVC by using the
-        :class:`sklearn.multiclass.OneVsRestClassifier` wrapper.
-
-        Finally SVC can fit dense data without memory copy if the input
-        is C-contiguous. Sparse data will still incur memory copy though.
-
-    sklearn.linear_model.SGDClassifier
-        SGDClassifier can optimize the same cost function as LinearSVC
-        by adjusting the penalty and loss parameters. In addition it requires
-        less memory, allows incremental (online) learning, and implements
-        various loss functions and regularization regimes.
-
+    >>> from sklearn.svm import LinearSVC
+    >>> from sklearn.datasets import make_classification
+    >>> X, y = make_classification(n_features=4, random_state=0)
+    >>> clf = LinearSVC(random_state=0, tol=1e-5)
+    >>> clf.fit(X, y)
+    LinearSVC(random_state=0, tol=1e-05)
+    >>> print(clf.coef_)
+    [[0.085... 0.394... 0.498... 0.375...]]
+    >>> print(clf.intercept_)
+    [0.284...]
+    >>> print(clf.predict([[0, 0, 0, 0]]))
+    [1]
    """

    def __init__(self, penalty='l2', loss='squared_hinge', dual=True, tol=1e-4,
@@ -203,7 +202,7 @@ def fit(self, X, y, sample_weight=None):
            n_features is the number of features.

        y : array-like of shape (n_samples,)
-            Target vector relative to X
+            Target vector relative to X.

        sample_weight : array-like of shape (n_samples,), default=None
            Array of weights that are assigned to individual
@@ -213,6 +212,7 @@ def fit(self, X, y, sample_weight=None):
        Returns
        -------
        self : object
+            An instance of the estimator.
        """
        # FIXME Remove l1/l2 support in 0.23 ----------------------------------
        msg = ("loss='%s' has been deprecated in favor of "
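Since the added docstring line documents that ``fit`` returns the estimator instance, calls can be chained. A small illustrative sketch, not part of the diff:

    # Sketch: fit returns self, so fitting and predicting can be chained.
    from sklearn.datasets import make_classification
    from sklearn.svm import LinearSVC

    X, y = make_classification(n_features=4, random_state=0)
    preds = LinearSVC(random_state=0, tol=1e-5).fit(X, y).predict(X[:3])
    print(preds)  # predictions from the freshly fitted estimator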
