
Commit b53b573

Replaced abbreviated 'w.r.t' with 'with regards to'
1 parent 73246b1 commit b53b573


12 files changed: +398 −433 lines


doc/modules/clustering.rst

Lines changed: 2 additions & 2 deletions
@@ -1034,8 +1034,8 @@ Advantages
 Drawbacks
 ~~~~~~~~~
 
-- The previously introduced metrics are **not normalized w.r.t. random
-  labeling**: this means that depending on the number of samples,
+- The previously introduced metrics are **not normalized with regards to
+  random labeling**: this means that depending on the number of samples,
   clusters and ground truth classes, a completely random labeling will
   not always yield the same values for homogeneity, completeness and
   hence v-measure. In particular **random labeling won't yield zero
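
A minimal sketch of the drawback this hunk documents, assuming only ``sklearn.metrics`` and random integer labelings (exact scores vary with the seed and the number of clusters):

import numpy as np
from sklearn import metrics

rng = np.random.RandomState(0)
labels_true = rng.randint(0, 10, size=100)
labels_random = rng.randint(0, 10, size=100)

# a completely random labeling does not score zero: these metrics are
# not normalized with regards to random labeling
print(metrics.homogeneity_score(labels_true, labels_random))
print(metrics.completeness_score(labels_true, labels_random))
print(metrics.v_measure_score(labels_true, labels_random))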

doc/modules/dp-derivation.rst

Lines changed: 2 additions & 2 deletions
@@ -194,8 +194,8 @@ The updates
 
 The updates for mu essentially are just weighted expectations of
 :math:`X` regularized by the prior. We can see this by taking the
-gradient of the bound w.r.t. :math:`\nu_{\mu}` and setting it to zero. The
-gradient is
+gradient of the bound with regards to :math:`\nu_{\mu}` and setting it to zero.
+The gradient is
 
 .. math::
 
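As an illustration of the kind of expression this refers to (a generic sketch under simplifying assumptions of unit variances and a standard normal prior on the mean, not the exact formula of this document, whose ``.. math::`` body is not shown in the diff): with responsibilities :math:`\phi_{nk}`, setting the gradient of the bound with regards to :math:`\nu_{\mu_k}` to zero gives

\frac{\partial \mathcal{L}}{\partial \nu_{\mu_k}}
    = -\nu_{\mu_k} + \sum_n \phi_{nk} \, (x_n - \nu_{\mu_k}) = 0
\qquad\Longrightarrow\qquad
\nu_{\mu_k} = \frac{\sum_n \phi_{nk} \, x_n}{1 + \sum_n \phi_{nk}},

i.e. a weighted expectation of :math:`X` shrunk toward the prior mean of zero, matching the description above.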
doc/modules/ensemble.rst

Lines changed: 2 additions & 2 deletions
@@ -629,8 +629,8 @@ the parameter ``loss``:
     target values.
   * Huber (``'huber'``): Another robust loss function that combines
     least squares and least absolute deviation; use ``alpha`` to
-    control the sensitivity w.r.t. outliers (see [F2001]_ for more
-    details).
+    control the sensitivity with regards to outliers (see [F2001]_ for
+    more details).
   * Quantile (``'quantile'``): A loss function for quantile regression.
     Use ``0 < alpha < 1`` to specify the quantile. This loss function
     can be used to create prediction intervals
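
A short usage sketch of the option this hunk describes; the toy data is illustrative, while ``loss='huber'`` and ``alpha`` are the documented ``GradientBoostingRegressor`` parameters:

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.RandomState(0)
X = rng.uniform(size=(200, 1))
y = np.sin(4 * X[:, 0]) + 0.1 * rng.normal(size=200)
y[::20] += 5  # a few gross outliers

# alpha sets the quantile of the residuals at which the loss switches
# from squared error to absolute error, i.e. the sensitivity with
# regards to outliers
est = GradientBoostingRegressor(loss='huber', alpha=0.9)
est.fit(X, y)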

doc/modules/feature_extraction.rst

Lines changed: 1 addition & 1 deletion
@@ -640,7 +640,7 @@ for languages that use white-spaces for word separation as it generates
 significantly less noisy features than the raw ``char`` variant in
 that case. For such languages it can increase both the predictive
 accuracy and convergence speed of classifiers trained using such
-features while retaining the robustness w.r.t. misspellings and
+features while retaining the robustness with regards to misspellings and
 word derivations.
 
 While some local positioning information can be preserved by extracting
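
A small sketch of the robustness being described, assuming this passage refers to the ``char_wb`` analyzer of the text vectorizers:

from sklearn.feature_extraction.text import CountVectorizer

# character 3-grams drawn only from inside word boundaries
vect = CountVectorizer(analyzer='char_wb', ngram_range=(3, 3))
X = vect.fit_transform(['words', 'wprds'])  # the second is a misspelling
# the two spellings still share n-grams such as 'rds', so the
# representation degrades gracefully with regards to misspellings
print(sorted(vect.vocabulary_))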

examples/applications/plot_species_distribution_modeling.py

Lines changed: 1 addition & 1 deletion
@@ -193,7 +193,7 @@ def plot_species_distribution(species=["bradypus_variegatus_0",
         pl.title(species.name)
         pl.axis('equal')
 
-        # Compute AUC w.r.t. background points
+        # Compute AUC with regards to background points
         pred_background = Z[background_points[0], background_points[1]]
         pred_test = clf.decision_function((species.cov_test - mean)
                                           / std)[:, 0]
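
The comment in this hunk refers to scoring presence points against background points; a self-contained sketch of that ROC/AUC computation with stand-in scores (the real example uses ``clf.decision_function`` values):

import numpy as np
from sklearn import metrics

rng = np.random.RandomState(0)
# stand-ins for decision-function values at presence (test) points
# and at randomly sampled background points
pred_test = rng.normal(loc=1.0, size=50)
pred_background = rng.normal(loc=0.0, size=500)

scores = np.r_[pred_test, pred_background]
y = np.r_[np.ones(len(pred_test)), np.zeros(len(pred_background))]
fpr, tpr, _ = metrics.roc_curve(y, scores)
print("AUC with regards to background points: %.3f"
      % metrics.auc(fpr, tpr))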

sklearn/cluster/_k_means.c

Lines changed: 7 additions & 7 deletions
Some generated files are not rendered by default.

sklearn/cluster/_k_means.pyx

Lines changed: 2 additions & 2 deletions
@@ -202,8 +202,8 @@ def _mini_batch_update_csr(X, np.ndarray[DOUBLE, ndim=1] x_squared_norms,
             # no new sample: leave this center as it stands
             continue
 
-        # rescale the old center to reflect it previous accumulated
-        # weight w.r.t. the new data that will be incrementally contributed
+        # rescale the old center to reflect it previous accumulated weight
+        # with regards to the new data that will be incrementally contributed
         if compute_squared_diff:
            old_center[:] = centers[center_idx]
         centers[center_idx] *= old_count
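
The rescaling this comment describes is a weighted running mean: scale the old center by its accumulated count, add the new samples, and renormalize. A plain-numpy sketch of the idea (names are illustrative, not the variables of ``_mini_batch_update_csr``):

import numpy as np

def fold_in(center, old_count, new_points):
    """Incrementally fold a mini-batch of points into one center."""
    new_count = old_count + len(new_points)
    # rescale the old center to reflect its previously accumulated
    # weight with regards to the newly contributed data
    updated = (center * old_count + new_points.sum(axis=0)) / new_count
    return updated, new_count

center, count = fold_in(np.zeros(2), 0, np.array([[1., 1.], [3., 1.]]))
print(center, count)  # [2. 1.] 2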

sklearn/cluster/k_means_.py

Lines changed: 1 addition & 1 deletion
@@ -576,7 +576,7 @@ class KMeans(BaseEstimator, ClusterMixin, TransformerMixin):
         Precompute distances (faster but takes more memory).
 
     tol : float, optional default: 1e-4
-        Relative tolerance w.r.t. inertia to declare convergence
+        Relative tolerance with regards to inertia to declare convergence
 
     n_jobs : int
         The number of jobs to use for the computation. This works by breaking
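
A quick usage sketch for this docstring entry (the blobs are illustrative; ``tol`` is the documented ``KMeans`` parameter):

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(m, 0.1, size=(50, 2)) for m in (0.0, 5.0)])

# iterations stop once the relative improvement in inertia drops
# below tol, so a looser tolerance converges in fewer iterations
km = KMeans(n_clusters=2, tol=1e-4).fit(X)
print(km.n_iter_, km.inertia_)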

sklearn/cluster/tests/common.py

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@ def generate_clustered_data(seed=0, n_clusters=3, n_features=2,
     prng = np.random.RandomState(seed)
 
     # the data is voluntary shifted away from zero to check clustering
-    # algorithm robustness w.r.t. non centered data
+    # algorithm robustness with regards to non centered data
     means = np.array([[1, 1, 1, 0],
                       [-1, -1, 0, 1],
                       [1, -1, 1, 1],
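
A self-contained sketch of the same testing idea, drawing clusters around means shifted away from the origin so an algorithm cannot rely on centered data (the helper name and offsets here are illustrative, not the ones in ``common.py``):

import numpy as np

def make_noncentered_blobs(seed=0, n_clusters=3, n_features=2,
                           n_per_cluster=20, std=0.4):
    prng = np.random.RandomState(seed)
    # deliberately shift every cluster mean away from zero
    means = 10.0 + 3.0 * prng.uniform(size=(n_clusters, n_features))
    return np.vstack([m + std * prng.randn(n_per_cluster, n_features)
                      for m in means])

X = make_noncentered_blobs()
print(X.shape, X.mean(axis=0))  # the data mean is far from the origin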
