Commit c490691

Merge pull request scikit-learn#2163 from ianozsvald/fix_plot_forest_iris_docs

Updated docs to fix formatting errors

2 parents: d1bc252 + 0bf3988

examples/ensemble/plot_forest_iris.py

Lines changed: 9 additions & 7 deletions
@@ -20,21 +20,23 @@
 ExtraTreesClassifier()  # 0.95 score
 RandomForestClassifier()  # 0.94 score
 AdaBoost(DecisionTree(max_depth=3))  # 0.94 score
-DecisionTree(max_depth=None)  # 0.94 score``
+DecisionTree(max_depth=None)  # 0.94 score
 
 Increasing `max_depth` for AdaBoost lowers the standard deviation of the scores (but
 the average score does not improve).
 
 See the console's output for further details about each model.
 
 In this example you might try to:
-1) vary the `max_depth` for the DecisionTreeClassifier and AdaBoostClassifier, perhaps
-   try ``max_depth=3`` for the DecisionTreeClassifier or ``max_depth=None``
-   for AdaBoostClassifier
-2) vary `n_estimators`
 
-Remember that RandomForests and ExtraTrees can be fitted in parallel (each tree is
-built independently of the others), AdaBoost's samples are built iteratively.
+1) vary the ``max_depth`` for the ``DecisionTreeClassifier`` and
+   ``AdaBoostClassifier``, perhaps try ``max_depth=3`` for the
+   ``DecisionTreeClassifier`` or ``max_depth=None`` for ``AdaBoostClassifier``
+2) vary ``n_estimators``
+
+It is worth noting that RandomForests and ExtraTrees can be fitted in parallel
+on many cores as each tree is built independently of the others. AdaBoost's
+samples are built sequentially and so do not use multiple cores.
 """
 print(__doc__)
 
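The docstring edited above describes comparing four tree-based models on the iris dataset and notes that forest ensembles parallelize across trees (via `n_jobs`) while AdaBoost fits its trees sequentially. Below is a minimal sketch of that comparison, not the example script itself: it uses the current scikit-learn module layout (`sklearn.model_selection` postdates this 2013 commit), and the `n_estimators=30` choice and 5-fold CV are illustrative assumptions, so the scores printed will not exactly match the 0.94/0.95 figures quoted in the docstring.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import (AdaBoostClassifier, ExtraTreesClassifier,
                              RandomForestClassifier)

X, y = load_iris(return_X_y=True)

models = {
    # Forests build each tree independently, so they can use all cores.
    "ExtraTrees": ExtraTreesClassifier(n_estimators=30, n_jobs=-1),
    "RandomForest": RandomForestClassifier(n_estimators=30, n_jobs=-1),
    # Boosting reweights samples between rounds, so trees are fit sequentially.
    "AdaBoost(DecisionTree(max_depth=3))": AdaBoostClassifier(
        DecisionTreeClassifier(max_depth=3), n_estimators=30),
    "DecisionTree(max_depth=None)": DecisionTreeClassifier(max_depth=None),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean={scores.mean():.2f} std={scores.std():.2f}")
```

Per the docstring's suggestions, varying `max_depth` on the AdaBoost base tree or `n_estimators` on any of the ensembles in this loop shows how the mean and spread of the scores respond.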
