Commit 336650d

Pushing the docs to 0.22/ for branch: 0.22.X, commit 0a56df6dbbe4f1a56cb11d132e43641d7358dd7e
1 parent c01774d commit 336650d

File tree

1,354 files changed: +17092 −14430 lines


0.22/.buildinfo

+1-1
@@ -1,4 +1,4 @@
 # Sphinx build info version 1
 # This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: f4079f7774527c3c39933ee94ad93da2
+config: 4539849c394eaa7e7ae7935a4f98fbd5
 tags: 645f666f9bcd5a90fca523b33c5a78b7
Binary file not shown.

0.22/_downloads/5612f9c55259a4294f34843655f9c6af/plot_gpr_on_structured_data.ipynb

+1-1
@@ -15,7 +15,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"\n# Gaussian processes on discrete data structures\n\n\nThis example illustrates the use of Gaussian processes for regression and\nclassification tasks on data that are not in fixed-length feature vector form.\nThis is achieved through the use of kernel functions that operates directly\non discrete structures such as variable-length sequences, trees, and graphs.\n\nSpecifically, here the input variables are some gene sequences stored as\nvariable-length strings consisting of letters 'A', 'T', 'C', and 'G',\nwhile the output variables are floating point numbers and True/False labels\nin the regression and classification tasks, respectively.\n\nA kernel between the gene sequences is defined using R-convolution [1]_ by\nintegrating a binary letter-wise kernel over all pairs of letters among a pair\nof strings.\n\nThis example will generate three figures.\n\nIn the first figure, we visualize the value of the kernel, i.e. the similarity\nof the sequences, using a colormap. Brighter color here indicates higher\nsimilarity.\n\nIn the second figure, we show some regression result on a dataset of 6\nsequences. Here we use the 1st, 2nd, 4th, and 5th sequences as the training set\nto make predictions on the 3rd and 6th sequences.\n\nIn the third figure, we demonstrate a classification model by training on 6\nsequences and make predictions on another 5 sequences. The ground truth here is\nsimply whether there is at least one 'A' in the sequence. Here the model makes\nfour correct classifications and fails on one.\n\n.. [1] Haussler, D. (1999). Convolution kernels on discrete structures\n(Vol. 646). Technical report, Department of Computer Science, University of\nCalifornia at Santa Cruz.\n"
+"\n# Gaussian processes on discrete data structures\n\n\nThis example illustrates the use of Gaussian processes for regression and\nclassification tasks on data that are not in fixed-length feature vector form.\nThis is achieved through the use of kernel functions that operates directly\non discrete structures such as variable-length sequences, trees, and graphs.\n\nSpecifically, here the input variables are some gene sequences stored as\nvariable-length strings consisting of letters 'A', 'T', 'C', and 'G',\nwhile the output variables are floating point numbers and True/False labels\nin the regression and classification tasks, respectively.\n\nA kernel between the gene sequences is defined using R-convolution [1]_ by\nintegrating a binary letter-wise kernel over all pairs of letters among a pair\nof strings.\n\nThis example will generate three figures.\n\nIn the first figure, we visualize the value of the kernel, i.e. the similarity\nof the sequences, using a colormap. Brighter color here indicates higher\nsimilarity.\n\nIn the second figure, we show some regression result on a dataset of 6\nsequences. Here we use the 1st, 2nd, 4th, and 5th sequences as the training set\nto make predictions on the 3rd and 6th sequences.\n\nIn the third figure, we demonstrate a classification model by training on 6\nsequences and make predictions on another 5 sequences. The ground truth here is\nsimply whether there is at least one 'A' in the sequence. Here the model makes\nfour correct classifications and fails on one.\n\n.. [1] Haussler, D. (1999). Convolution kernels on discrete structures\n   (Vol. 646). Technical report, Department of Computer Science, University\n   of California at Santa Cruz.\n"
 ]
 },
 {

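The R-convolution construction described in this docstring can be sketched in a few lines. This is a hypothetical toy version for illustration only: the example itself defines a proper `SequenceKernel` class; the `string_kernel` function below is not part of it.

```python
import numpy as np

def string_kernel(s, t):
    # R-convolution sketch: integrate a binary letter-wise kernel
    # (1 if the letters match, 0 otherwise) over all pairs of letters
    # drawn from the two variable-length strings.
    return float(sum(a == b for a in s for b in t))

seqs = ["AGCT", "AGC", "TTTT"]
K = np.array([[string_kernel(s, t) for t in seqs] for s in seqs])

# Normalize so each sequence has unit self-similarity; this normalized
# matrix is the kind of quantity the first figure's colormap displays.
K_norm = K / np.sqrt(np.outer(np.diag(K), np.diag(K)))
```

Note that the kernel grows with sequence length ("TTTT" has self-similarity 16), which is why normalizing before visualizing similarity is useful.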
0.22/_downloads/5a693c97e821586539ab9d250762742c/plot_partial_dependence.ipynb

+1-1
@@ -15,7 +15,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"\n# Partial Dependence Plots\n\n\nPartial dependence plots show the dependence between the target function [2]_\nand a set of 'target' features, marginalizing over the values of all other\nfeatures (the complement features). Due to the limits of human perception, the\nsize of the target feature set must be small (usually, one or two) thus the\ntarget features are usually chosen among the most important features.\n\nThis example shows how to obtain partial dependence plots from a\n:class:`~sklearn.neural_network.MLPRegressor` and a\n:class:`~sklearn.ensemble.HistGradientBoostingRegressor` trained on the\nCalifornia housing dataset. The example is taken from [1]_.\n\nThe plots show four 1-way and two 1-way partial dependence plots (ommitted for\n:class:`~sklearn.neural_network.MLPRegressor` due to computation time). The\ntarget variables for the one-way PDP are: median income (`MedInc`), average\noccupants per household (`AvgOccup`), median house age (`HouseAge`), and\naverage rooms per household (`AveRooms`).\n\n.. [1] T. Hastie, R. Tibshirani and J. Friedman, \"Elements of Statistical\n    Learning Ed. 2\", Springer, 2009.\n\n.. [2] For classification you can think of it as the regression score before\n    the link function.\n"
+"\n# Partial Dependence Plots\n\n\nPartial dependence plots show the dependence between the target function [2]_\nand a set of 'target' features, marginalizing over the values of all other\nfeatures (the complement features). Due to the limits of human perception, the\nsize of the target feature set must be small (usually, one or two) thus the\ntarget features are usually chosen among the most important features.\n\nThis example shows how to obtain partial dependence plots from a\n:class:`~sklearn.neural_network.MLPRegressor` and a\n:class:`~sklearn.ensemble.HistGradientBoostingRegressor` trained on the\nCalifornia housing dataset. The example is taken from [1]_.\n\nThe plots show four 1-way and two 1-way partial dependence plots (omitted for\n:class:`~sklearn.neural_network.MLPRegressor` due to computation time). The\ntarget variables for the one-way PDP are: median income (`MedInc`), average\noccupants per household (`AvgOccup`), median house age (`HouseAge`), and\naverage rooms per household (`AveRooms`).\n\n.. [1] T. Hastie, R. Tibshirani and J. Friedman, \"Elements of Statistical\n    Learning Ed. 2\", Springer, 2009.\n\n.. [2] For classification you can think of it as the regression score before\n    the link function.\n"
 ]
 },
 {

0.22/_downloads/7ee55c12f8d3eb1dd8d2005d9dd7b6f1/plot_release_highlights_0_22_0.py

+4-5
@@ -246,11 +246,10 @@ def test_sklearn_compatible_estimator(estimator, check):
 # classification. Two averaging strategies are currently supported: the
 # one-vs-one algorithm computes the average of the pairwise ROC AUC scores, and
 # the one-vs-rest algorithm computes the average of the ROC AUC scores for each
-# class against all other classes. In both cases, the predicted labels are
-# provided in an array with values from 0 to ``n_classes``, and the scores
-# correspond to the probability estimates that a sample belongs to a particular
-# class. The OvO and OvR algorithms supports weighting uniformly
-# (``average='macro'``) and weighting by the prevalence
+# class against all other classes. In both cases, the multiclass ROC AUC scores
+# are computed from the probability estimates that a sample belongs to a
+# particular class according to the model. The OvO and OvR algorithms support
+# weighting uniformly (``average='macro'``) and weighting by the prevalence
 # (``average='weighted'``).
 #
 # Read more in the :ref:`User Guide <roc_metrics>`.

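The averaging strategies described in this hunk can be exercised directly with `roc_auc_score`. A minimal sketch, assuming a scikit-learn release with multiclass ROC AUC support (0.22 or later) is installed:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Three-class problem. As the corrected text says, roc_auc_score consumes
# per-class probability estimates, not predicted labels.
X, y = make_classification(n_samples=300, n_classes=3, n_informative=6,
                           random_state=0)
proba = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)

# One-vs-rest with uniform (macro) averaging of the per-class scores.
ovr_macro = roc_auc_score(y, proba, multi_class="ovr", average="macro")

# One-vs-one with prevalence-weighted averaging of the pairwise scores.
ovo_weighted = roc_auc_score(y, proba, multi_class="ovo", average="weighted")
```

Both values lie in [0, 1]; the choice between OvR and OvO mainly matters under class imbalance, where the pairwise OvO scores are less dominated by the majority class.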
0.22/_downloads/c101b602d0b3510ef47dd19d64a4a92b/plot_release_highlights_0_22_0.ipynb

+1-1
@@ -184,7 +184,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"ROC AUC now supports multiclass classification\n----------------------------------------------\nThe :func:`roc_auc_score` function can also be used in multi-class\nclassification. Two averaging strategies are currently supported: the\none-vs-one algorithm computes the average of the pairwise ROC AUC scores, and\nthe one-vs-rest algorithm computes the average of the ROC AUC scores for each\nclass against all other classes. In both cases, the predicted labels are\nprovided in an array with values from 0 to ``n_classes``, and the scores\ncorrespond to the probability estimates that a sample belongs to a particular\nclass. The OvO and OvR algorithms supports weighting uniformly\n(``average='macro'``) and weighting by the prevalence\n(``average='weighted'``).\n\nRead more in the `User Guide <roc_metrics>`.\n\n"
+"ROC AUC now supports multiclass classification\n----------------------------------------------\nThe :func:`roc_auc_score` function can also be used in multi-class\nclassification. Two averaging strategies are currently supported: the\none-vs-one algorithm computes the average of the pairwise ROC AUC scores, and\nthe one-vs-rest algorithm computes the average of the ROC AUC scores for each\nclass against all other classes. In both cases, the multiclass ROC AUC scores\nare computed from the probability estimates that a sample belongs to a\nparticular class according to the model. The OvO and OvR algorithms support\nweighting uniformly (``average='macro'``) and weighting by the prevalence\n(``average='weighted'``).\n\nRead more in the `User Guide <roc_metrics>`.\n\n"
 ]
 },
 {

0.22/_downloads/d2c3d354a93eca3b78b2436d5a8e7164/plot_gpr_on_structured_data.py

+2-2
@@ -33,8 +33,8 @@
 four correct classifications and fails on one.
 
 .. [1] Haussler, D. (1999). Convolution kernels on discrete structures
-(Vol. 646). Technical report, Department of Computer Science, University of
-California at Santa Cruz.
+   (Vol. 646). Technical report, Department of Computer Science, University
+   of California at Santa Cruz.
 """
 print(__doc__)
 

Binary file not shown.

0.22/_downloads/fa25d310c75e4ff65e62ab2cd8fdcef4/plot_partial_dependence.py

+1-1
@@ -14,7 +14,7 @@
 :class:`~sklearn.ensemble.HistGradientBoostingRegressor` trained on the
 California housing dataset. The example is taken from [1]_.
 
-The plots show four 1-way and two 1-way partial dependence plots (ommitted for
+The plots show four 1-way and two 1-way partial dependence plots (omitted for
 :class:`~sklearn.neural_network.MLPRegressor` due to computation time). The
 target variables for the one-way PDP are: median income (`MedInc`), average
 occupants per household (`AvgOccup`), median house age (`HouseAge`), and

0.22/_downloads/scikit-learn-docs.pdf

-22.5 KB
Binary file not shown.

0.22/_images/iris.png

(binary size changes for iris.png and other images; individual file names not shown)

0.22/_sources/auto_examples/applications/plot_face_recognition.rst.txt

+11-7

0.22/_sources/auto_examples/applications/plot_model_complexity_influence.rst.txt

+23-18
