
Commit 0bc3a6f

Merge pull request scikit-learn#5245 from ogrisel/lda-acronym-deprecation
[MRG+1] Deprecate LDA/QDA in favor of expanded names
2 parents b099a59 + 679936f commit 0bc3a6f

18 files changed: +1037 / -1034 lines changed


doc/modules/classes.rst

Lines changed: 5 additions & 22 deletions
@@ -603,10 +603,10 @@ From text
 
 .. _lda_ref:
 
-:mod:`sklearn.lda`: Linear Discriminant Analysis
-================================================
+:mod:`sklearn.discriminant_analysis`: Discriminant Analysis
+===========================================================
 
-.. automodule:: sklearn.lda
+.. automodule:: sklearn.discriminant_analysis
    :no-members:
    :no-inherited-members:
 
@@ -618,7 +618,8 @@ From text
    :toctree: generated
    :template: class.rst
 
-   lda.LDA
+   discriminant_analysis.LinearDiscriminantAnalysis
+   discriminant_analysis.QuadraticDiscriminantAnalysis
 
 
 .. _learning_curve_ref:
@@ -1136,24 +1137,6 @@ See the :ref:`metrics` section of the user guide for further details.
    preprocessing.scale
 
 
-
-:mod:`sklearn.qda`: Quadratic Discriminant Analysis
-===================================================
-
-.. automodule:: sklearn.qda
-   :no-members:
-   :no-inherited-members:
-
-**User guide:** See the :ref:`lda_qda` section for further details.
-
-.. currentmodule:: sklearn
-
-.. autosummary::
-   :toctree: generated
-   :template: class.rst
-
-   qda.QDA
-
 .. _random_projection_ref:
 
 :mod:`sklearn.random_projection`: Random projection
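For readers skimming this diff, the user-facing effect of the rename is simply a new import path. A minimal sketch (assuming scikit-learn 0.17+, where the old ``sklearn.lda`` and ``sklearn.qda`` modules survive only as deprecated aliases, per the PR title)::

    # Preferred imports after this PR:
    from sklearn.discriminant_analysis import (
        LinearDiscriminantAnalysis,
        QuadraticDiscriminantAnalysis,
    )

    # Deprecated spellings that this PR phases out:
    #     from sklearn.lda import LDA
    #     from sklearn.qda import QDA

    clf = LinearDiscriminantAnalysis()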

doc/modules/lda_qda.rst

Lines changed: 85 additions & 64 deletions
@@ -1,112 +1,131 @@
 .. _lda_qda:
 
 ==========================================
-Linear and quadratic discriminant analysis
+Linear and Quadratic Discriminant Analysis
 ==========================================
 
 .. currentmodule:: sklearn
 
-Linear discriminant analysis (:class:`lda.LDA`) and
-quadratic discriminant analysis (:class:`qda.QDA`)
-are two standard classifiers, with, as their names suggest, a linear and a
-quadratic decision surface, respectively.
+Linear Discriminant Analysis
+(:class:`discriminant_analysis.LinearDiscriminantAnalysis`) and Quadratic
+Discriminant Analysis
+(:class:`discriminant_analysis.QuadraticDiscriminantAnalysis`) are two classic
+classifiers, with, as their names suggest, a linear and a quadratic decision
+surface, respectively.
 
 These classifiers are attractive because they have closed-form solutions that
-can be easily computed, are inherently multiclass, have proven to work well in practice and have
-no hyperparameters to tune.
+can be easily computed, are inherently multiclass, have proven to work well in
+practice and have no hyperparameters to tune.
 
 .. |ldaqda| image:: ../auto_examples/classification/images/plot_lda_qda_001.png
         :target: ../auto_examples/classification/plot_lda_qda.html
         :scale: 80
 
 .. centered:: |ldaqda|
 
-The plot shows decision boundaries for LDA and QDA. The first row shows that,
-when the classes covariances are the same, LDA and QDA yield the same result
-(up to a small difference resulting from the implementation). The bottom row demonstrates that in general,
-LDA can only learn linear boundaries, while QDA can learn
-quadratic boundaries and is therefore more flexible.
+The plot shows decision boundaries for Linear Discriminant Analysis and
+Quadratic Discriminant Analysis. The bottom row demonstrates that Linear
+Discriminant Analysis can only learn linear boundaries, while Quadratic
+Discriminant Analysis can learn quadratic boundaries and is therefore more
+flexible.
 
 .. topic:: Examples:
 
-    :ref:`example_classification_plot_lda_qda.py`: Comparison of LDA and QDA on synthetic data.
+    :ref:`example_classification_plot_lda_qda.py`: Comparison of LDA and QDA
+    on synthetic data.
 
-Dimensionality reduction using LDA
-==================================
-
-:class:`lda.LDA` can be used to perform supervised dimensionality reduction, by
-projecting the input data to a linear subspace consisting of the directions which maximize the
-separation between classes (in a precise sense discussed in the mathematics section below).
-The dimension of the output is necessarily less that the number of classes,
-so this is a in general a rather strong dimensionality reduction, and only makes senses
-in a multiclass setting.
+Dimensionality reduction using Linear Discriminant Analysis
+===========================================================
 
-This is implemented in :func:`lda.LDA.transform`. The desired
-dimensionality can be set using the ``n_components`` constructor
-parameter. This parameter has no influence on :func:`lda.LDA.fit` or :func:`lda.LDA.predict`.
+:class:`discriminant_analysis.LinearDiscriminantAnalysis` can be used to
+perform supervised dimensionality reduction, by projecting the input data to a
+linear subspace consisting of the directions which maximize the separation
+between classes (in a precise sense discussed in the mathematics section
+below). The dimension of the output is necessarily less than the number of
+classes, so this is in general a rather strong dimensionality reduction, and
+only makes sense in a multiclass setting.
+
+This is implemented in
+:func:`discriminant_analysis.LinearDiscriminantAnalysis.transform`. The desired
+dimensionality can be set using the ``n_components`` constructor parameter.
+This parameter has no influence on
+:func:`discriminant_analysis.LinearDiscriminantAnalysis.fit` or
+:func:`discriminant_analysis.LinearDiscriminantAnalysis.predict`.
 
 .. topic:: Examples:
 
-    :ref:`example_decomposition_plot_pca_vs_lda.py`: Comparison of LDA and PCA for dimensionality reduction of the Iris dataset
+    :ref:`example_decomposition_plot_pca_vs_lda.py`: Comparison of LDA and PCA
+    for dimensionality reduction of the Iris dataset
 
 Mathematical formulation of the LDA and QDA classifiers
 =======================================================
 
-Both LDA and QDA can be derived from simple probabilistic models
-which model the class conditional distribution of the data :math:`P(X|y=k)`
-for each class :math:`k`. Predictions can then be obtained by using Bayes' rule:
+Both LDA and QDA can be derived from simple probabilistic models which model
+the class conditional distribution of the data :math:`P(X|y=k)` for each class
+:math:`k`. Predictions can then be obtained by using Bayes' rule:
 
 .. math::
    P(y=k | X) = \frac{P(X | y=k) P(y=k)}{P(X)} = \frac{P(X | y=k) P(y = k)}{ \sum_{l} P(X | y=l) \cdot P(y=l)}
 
 and we select the class :math:`k` which maximizes this conditional probability.
 
-More specifically, for linear and quadratic discriminant analysis, :math:`P(X|y)`
-is modelled as a multivariate Gaussian distribution with density:
+More specifically, for linear and quadratic discriminant analysis,
+:math:`P(X|y)` is modelled as a multivariate Gaussian distribution with
+density:
 
 .. math:: p(X | y=k) = \frac{1}{(2\pi)^n |\Sigma_k|^{1/2}}\exp\left(-\frac{1}{2} (X-\mu_k)^t \Sigma_k^{-1} (X-\mu_k)\right)
 
-To use this model as a classifier, we just need to estimate from the training data
-the class priors :math:`P(y=k)` (by the proportion of instances of class :math:`k`), the
-class means :math:`\mu_k` (by the empirical sample class means) and the covariance matrices
-(either by the empirical sample class covariance matrices, or by a regularized estimator: see the section on shrinkage below).
+To use this model as a classifier, we just need to estimate from the training
+data the class priors :math:`P(y=k)` (by the proportion of instances of class
+:math:`k`), the class means :math:`\mu_k` (by the empirical sample class means)
+and the covariance matrices (either by the empirical sample class covariance
+matrices, or by a regularized estimator: see the section on shrinkage below).
 
-In the case of LDA, the Gaussians for each class are assumed
-to share the same covariance matrix: :math:`\Sigma_k = \Sigma` for all :math:`k`.
-This leads to linear decision surfaces between, as can be seen by comparing the the log-probability ratios
-:math:`\log[P(y=k | X) / P(y=l | X)]`:
+In the case of LDA, the Gaussians for each class are assumed to share the same
+covariance matrix: :math:`\Sigma_k = \Sigma` for all :math:`k`. This leads to
+linear decision surfaces, as can be seen by comparing the
+log-probability ratios :math:`\log[P(y=k | X) / P(y=l | X)]`:
 
 .. math::
    \log\left(\frac{P(y=k|X)}{P(y=l | X)}\right) = 0 \Leftrightarrow (\mu_k-\mu_l)\Sigma^{-1} X = \frac{1}{2} (\mu_k^t \Sigma^{-1} \mu_k - \mu_l^t \Sigma^{-1} \mu_l)
 
-In the case of QDA, there are no assumptions on the covariance matrices :math:`\Sigma_k` of the Gaussians,
-leading to quadratic decision surfaces. See [#1]_ for more details.
+In the case of QDA, there are no assumptions on the covariance matrices
+:math:`\Sigma_k` of the Gaussians, leading to quadratic decision surfaces. See
+[#1]_ for more details.
 
 .. note:: **Relation with Gaussian Naive Bayes**
 
-    If in the QDA model one assumes that the covariance matrices are diagonal, then
-    this means that we assume the classes are conditionally independent,
-    and the resulting classifier is equivalent to the Gaussian Naive Bayes classifier :class:`GaussianNB`.
+    If in the QDA model one assumes that the covariance matrices are diagonal,
+    then this means that we assume the classes are conditionally independent,
+    and the resulting classifier is equivalent to the Gaussian Naive Bayes
+    classifier :class:`naive_bayes.GaussianNB`.
 
 Mathematical formulation of LDA dimensionality reduction
-===========================================================
+========================================================
 
 To understand the use of LDA in dimensionality reduction, it is useful to start
 with a geometric reformulation of the LDA classification rule explained above.
-We write :math:`K` for the total number of target classes.
-Since in LDA we assume that all classes have the same estimated covariance :math:`\Sigma`, we can rescale the
-data so that this covariance is the identity:
+We write :math:`K` for the total number of target classes. Since in LDA we
+assume that all classes have the same estimated covariance :math:`\Sigma`, we
+can rescale the data so that this covariance is the identity:
 
 .. math:: X^* = D^{-1/2}U^t X\text{ with }\Sigma = UDU^t
 
-Then one can show that to classify a data point after scaling is equivalent to finding the estimated class mean :math:`\mu^*_k` which is
-closest to the data point in the Euclidean distance. But this can be done just as well after projecting on the :math:`K-1` affine subspace :math:`H_K`
-generated by all the :math:`\mu^*_k` for all classes. This shows that, implicit in the LDA classifier, there is
-a dimensionality reduction by linear projection onto a :math:`K-1` dimensional space.
-
-We can reduce the dimension even more, to a chosen :math:`L`, by projecting onto the linear subspace :math:`H_L` which
-maximize the variance of the :math:`\mu^*_k` after projection (in effect, we are doing a form of PCA for the transformed class means :math:`\mu^*_k`).
-This :math:`L` corresponds to the ``n_components`` parameter in the :func:`lda.LDA.transform` method. See [#1]_ for more details.
+Then one can show that to classify a data point after scaling is equivalent to
+finding the estimated class mean :math:`\mu^*_k` which is closest to the data
+point in the Euclidean distance. But this can be done just as well after
+projecting on the :math:`K-1` affine subspace :math:`H_K` generated by all the
+:math:`\mu^*_k` for all classes. This shows that, implicit in the LDA
+classifier, there is a dimensionality reduction by linear projection onto a
+:math:`K-1` dimensional space.
+
+We can reduce the dimension even more, to a chosen :math:`L`, by projecting
+onto the linear subspace :math:`H_L` which maximizes the variance of the
+:math:`\mu^*_k` after projection (in effect, we are doing a form of PCA for the
+transformed class means :math:`\mu^*_k`). This :math:`L` corresponds to the
+``n_components`` parameter used in the
+:func:`discriminant_analysis.LinearDiscriminantAnalysis.transform` method. See
+[#1]_ for more details.
 
 Shrinkage
 =========
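Before the shrinkage hunk below, a minimal sketch tying together the two uses documented in the hunk above: classification via Bayes' rule (``predict_proba`` returns the posterior :math:`P(y=k|X)`) and supervised dimensionality reduction (``transform`` with ``n_components``). It assumes the iris dataset, which is not part of this diff::

    from sklearn.datasets import load_iris
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    iris = load_iris()
    X, y = iris.data, iris.target  # 3 classes, 4 features

    # n_components must be smaller than the number of classes, here 3 - 1 = 2.
    lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)

    print(lda.predict_proba(X[:2]))  # posterior probabilities P(y=k | X)
    print(lda.transform(X).shape)    # (150, 2): projected onto 2 directions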
@@ -115,10 +134,11 @@ Shrinkage is a tool to improve estimation of covariance matrices in situations
 where the number of training samples is small compared to the number of
 features. In this scenario, the empirical sample covariance is a poor
 estimator. Shrinkage LDA can be used by setting the ``shrinkage`` parameter of
-the :class:`lda.LDA` class to 'auto'. This automatically determines the
-optimal shrinkage parameter in an analytic way following the lemma introduced
-by Ledoit and Wolf [#2]_. Note that currently shrinkage only works when setting the
-``solver`` parameter to 'lsqr' or 'eigen'.
+the :class:`discriminant_analysis.LinearDiscriminantAnalysis` class to 'auto'.
+This automatically determines the optimal shrinkage parameter in an analytic
+way following the lemma introduced by Ledoit and Wolf [#2]_. Note that
+currently shrinkage only works when setting the ``solver`` parameter to 'lsqr'
+or 'eigen'.
 
 The ``shrinkage`` parameter can also be manually set between 0 and 1. In
 particular, a value of 0 corresponds to no shrinkage (which means the empirical
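A minimal sketch of the shrinkage API documented in the hunk above (assuming a synthetic dataset with many features and few samples, the regime where shrinkage is expected to help; as noted, shrinkage requires the 'lsqr' or 'eigen' solver)::

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.RandomState(0)
    X = rng.randn(20, 50)          # 20 samples, 50 features
    y = rng.randint(2, size=20)

    # Ledoit-Wolf shrinkage intensity, determined analytically:
    clf = LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto').fit(X, y)

    # A fixed intensity between 0 (empirical covariance) and 1 (diagonal
    # matrix of variances) can also be passed explicitly:
    clf = LinearDiscriminantAnalysis(solver='lsqr', shrinkage=0.3).fit(X, y)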
@@ -154,12 +174,13 @@ a high number of features.
 
 .. topic:: Examples:
 
-    :ref:`example_classification_plot_lda.py`: Comparison of LDA classifiers with and without shrinkage.
+    :ref:`example_classification_plot_lda.py`: Comparison of LDA classifiers
+    with and without shrinkage.
 
 .. topic:: References:
 
     .. [#1] "The Elements of Statistical Learning", Hastie T., Tibshirani R.,
-       Friedman J., Section 4.3, p.106-119, 2008.
+        Friedman J., Section 4.3, p.106-119, 2008.
 
-    .. [#2] Ledoit O, Wolf M. Honey, I Shrunk the Sample Covariance Matrix. The Journal of Portfolio
-       Management 30(4), 110-119, 2004.
+    .. [#2] Ledoit O, Wolf M. Honey, I Shrunk the Sample Covariance Matrix.
+        The Journal of Portfolio Management 30(4), 110-119, 2004.

doc/modules/multiclass.rst

Lines changed: 2 additions & 2 deletions
@@ -33,7 +33,7 @@ by decomposing such problems into binary classification problems.
   several joint classification tasks. This is a generalization
   of the multi-label classification task, where the set of classification
   problem is restricted to binary classification, and of the multi-class
-  classification task. *The output format is a 2d numpy array or sparse 
+  classification task. *The output format is a 2d numpy array or sparse
   matrix.*
 
   The set of labels can be different for each output variable.
@@ -65,7 +65,7 @@ if you're using one of these unless you want custom multiclass behavior:
   :ref:`Nearest Neighbors <neighbors>`,
   setting ``multi_class='multinomial'`` in
   :class:`sklearn.linear_model.LogisticRegression`.
-- Support multilabel: :ref:`Decision Trees <tree>`, 
+- Support multilabel: :ref:`Decision Trees <tree>`,
   :ref:`Random Forests <forest>`, :ref:`Nearest Neighbors <neighbors>`,
   :ref:`Ridge Regression <ridge_regression>`.
 - One-Vs-One: :class:`sklearn.svm.SVC`.

doc/modules/neighbors.rst

Lines changed: 3 additions & 3 deletions
@@ -467,9 +467,9 @@ similar to the label updating phase of the :class:`sklearn.KMeans` algorithm.
 It also has no parameters to choose, making it a good baseline classifier. It
 does, however, suffer on non-convex classes, as well as when classes have
 drastically different variances, as equal variance in all dimensions is
-assumed. See Linear Discriminant Analysis (:class:`sklearn.lda.LDA`) and
-Quadratic Discriminant Analysis (:class:`sklearn.qda.QDA`) for more complex
-methods that do not make this assumption. Usage of the default
+assumed. See Linear Discriminant Analysis (:class:`sklearn.discriminant_analysis.LinearDiscriminantAnalysis`)
+and Quadratic Discriminant Analysis (:class:`sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis`)
+for more complex methods that do not make this assumption. Usage of the default
 :class:`NearestCentroid` is simple:
 
     >>> from sklearn.neighbors.nearest_centroid import NearestCentroid
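The doctest that the truncated context line above leads into continues along these lines in the rendered documentation (a sketch of typical usage on a toy two-class dataset; the data values are illustrative assumptions, not part of this diff)::

    >>> from sklearn.neighbors.nearest_centroid import NearestCentroid
    >>> import numpy as np
    >>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
    >>> y = np.array([1, 1, 1, 2, 2, 2])
    >>> clf = NearestCentroid()
    >>> clf.fit(X, y)  # doctest: +ELLIPSIS
    NearestCentroid(...)
    >>> print(clf.predict([[-0.8, -1]]))
    [1]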

doc/whats_new.rst

Lines changed: 9 additions & 9 deletions
@@ -350,8 +350,8 @@ New features
    - Add :class:`cluster.Birch`, an online clustering algorithm. By
      `Manoj Kumar`_, `Alexandre Gramfort`_ and `Joel Nothman`_.
 
-   - Added shrinkage support to :class:`lda.LDA` using two new solvers. By
-     `Clemens Brunner`_ and `Martin Billinger`_.
+   - Added shrinkage support to :class:`discriminant_analysis.LinearDiscriminantAnalysis`
+     using two new solvers. By `Clemens Brunner`_ and `Martin Billinger`_.
 
    - Added :class:`kernel_ridge.KernelRidge`, an implementation of
      kernelized ridge regression.
@@ -758,8 +758,8 @@ Bug fixes
    - Explicitly close open files to avoid ``ResourceWarnings`` under Python 3.
      By Calvin Giles.
 
-   - The ``transform`` of :class:`lda.LDA` now projects the input on the most
-     discriminant directions. By Martin Billinger.
+   - The ``transform`` of :class:`discriminant_analysis.LinearDiscriminantAnalysis`
+     now projects the input on the most discriminant directions. By Martin Billinger.
 
    - Fixed potential overflow in ``_tree.safe_realloc`` by `Lars Buitinck`_.
 
@@ -2266,9 +2266,9 @@ API changes summary
    - Fixed API inconsistency: :meth:`linear_model.SGDClassifier.predict_proba` now
      returns 2d array when fit on two classes.
 
-   - Fixed API inconsistency: :meth:`qda.QDA.decision_function` and
-     :meth:`lda.LDA.decision_function` now return 1d arrays when fit on two
-     classes.
+   - Fixed API inconsistency: :meth:`discriminant_analysis.QuadraticDiscriminantAnalysis.decision_function`
+     and :meth:`discriminant_analysis.LinearDiscriminantAnalysis.decision_function` now return 1d arrays
+     when fit on two classes.
 
    - Grid of alphas used for fitting :class:`linear_model.LassoCV` and
      :class:`linear_model.ElasticNetCV` is now stored
@@ -3053,8 +3053,8 @@ Some other modules benefited from significant improvements or cleanups.
 
    - Add attribute converged to Gaussian Mixture Models by Vincent Schut.
 
-   - Implemented ``transform``, ``predict_log_proba`` in :class:`lda.LDA`
-     By `Mathieu Blondel`_.
+   - Implemented ``transform``, ``predict_log_proba`` in
+     :class:`discriminant_analysis.LinearDiscriminantAnalysis`. By `Mathieu Blondel`_.
 
    - Refactoring in the :ref:`svm` module and bug fixes by `Fabian Pedregosa`_,
      `Gael Varoquaux`_ and Amit Aides.

examples/classification/plot_classifier_comparison.py

Lines changed: 6 additions & 5 deletions
@@ -39,13 +39,14 @@
 from sklearn.tree import DecisionTreeClassifier
 from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
 from sklearn.naive_bayes import GaussianNB
-from sklearn.lda import LDA
-from sklearn.qda import QDA
+from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
+from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
 
 h = .02  # step size in the mesh
 
 names = ["Nearest Neighbors", "Linear SVM", "RBF SVM", "Decision Tree",
-         "Random Forest", "AdaBoost", "Naive Bayes", "LDA", "QDA"]
+         "Random Forest", "AdaBoost", "Naive Bayes", "Linear Discriminant Analysis",
+         "Quadratic Discriminant Analysis"]
 classifiers = [
     KNeighborsClassifier(3),
     SVC(kernel="linear", C=0.025),
@@ -54,8 +55,8 @@
     RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1),
     AdaBoostClassifier(),
     GaussianNB(),
-    LDA(),
-    QDA()]
+    LinearDiscriminantAnalysis(),
+    QuadraticDiscriminantAnalysis()]
 
 X, y = make_classification(n_features=2, n_redundant=0, n_informative=2,
                            random_state=1, n_clusters_per_class=1)
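For downstream code still using the old spellings, migration is mechanical. A sketch of what the deprecation period is expected to look like (it assumes the old ``sklearn.lda``/``sklearn.qda`` modules remain importable as deprecated wrappers for a release cycle, per the PR title; the exact warning text is not shown in this diff)::

    import warnings

    # Old, deprecated spelling: still runs, but should warn.
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        from sklearn.lda import LDA
        clf = LDA()
    print([w.category.__name__ for w in caught])  # expect a DeprecationWarning

    # New, equivalent spelling:
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    clf = LinearDiscriminantAnalysis()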
