Discriminant analysis is used to build supervised classification models. It works by deriving and combining the probability functions that calculate the likelihood of values of each predictor variable being in each class. The most frequently used types of discriminant analysis make the following assumptions about the data:
- Both linear discriminant analysis and quadratic discriminant analysis assume that each predictor variable follows a normal (Gaussian) distribution.
- Additionally, linear discriminant analysis assumes that the covariance (interdependence) between the predictor variables is the same in every class; quadratic discriminant analysis relaxes this assumption and allows each class its own covariance matrix.
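The shared-covariance assumption above can be sketched as follows. This is a minimal, illustrative implementation in numpy, not any library's API; the function names (`fit_lda`, `predict_lda`) and the synthetic two-blob data are assumptions introduced for the example.

```python
import numpy as np

def fit_lda(X, y):
    # Per-class means and priors, plus one covariance matrix pooled across
    # all classes -- the defining assumption of *linear* discriminant analysis.
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    priors = {c: np.mean(y == c) for c in classes}
    pooled = sum(
        (X[y == c] - means[c]).T @ (X[y == c] - means[c]) for c in classes
    ) / (len(X) - len(classes))
    return classes, means, priors, np.linalg.inv(pooled)

def predict_lda(model, X):
    classes, means, priors, cov_inv = model
    # Linear discriminant score per class:
    #   x' S^-1 m_c - 1/2 m_c' S^-1 m_c + log prior_c
    scores = np.stack([
        X @ cov_inv @ means[c]
        - 0.5 * means[c] @ cov_inv @ means[c]
        + np.log(priors[c])
        for c in classes
    ], axis=1)
    return classes[np.argmax(scores, axis=1)]

# Two well-separated Gaussian blobs with the same covariance (toy data).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 1.0, (50, 2)),
               rng.normal([4, 4], 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
model = fit_lda(X, y)
print(predict_lda(model, np.array([[0.0, 0.0], [4.0, 4.0]])))  # → [0 1]
```

Because the covariance is shared, the class score is linear in x, which is what makes the decision boundary a hyperplane; a per-class covariance (quadratic discriminant analysis) would add a quadratic term.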
Discriminant analysis is closely related to the Naive Bayesian Classifier, which also uses probability functions for classification and (for the relevant set of use cases) assumes a Gaussian distribution within each predictor variable. However, discriminant analysis is more versatile because it does not presume complete independence between the predictor variables.
It is also normally expected to outperform logistic regression when its more rigorous assumptions about the input data hold true.
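The independence contrast with Naive Bayes can be made concrete. In the sketch below (an illustration, not a library API), a Gaussian naive Bayes model scores a point with a diagonal covariance, i.e. treating the predictors as independent, while discriminant analysis uses the full covariance matrix; the correlated test data are an assumption introduced for the example.

```python
import numpy as np

def gaussian_log_density(x, mean, cov):
    # Log density of a multivariate normal, written out explicitly.
    d = len(mean)
    diff = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d * np.log(2 * np.pi) + logdet
                   + diff @ np.linalg.inv(cov) @ diff)

mean = np.array([0.0, 0.0])
full_cov = np.array([[1.0, 0.9],
                     [0.9, 1.0]])       # strongly correlated predictors
diag_cov = np.diag(np.diag(full_cov))   # naive Bayes keeps only the diagonal

x = np.array([1.0, 1.0])  # a point lying along the correlation direction
# The full-covariance model (discriminant analysis) rates this point as
# more likely than the independence (naive Bayes) model does.
print(gaussian_log_density(x, mean, full_cov))
print(gaussian_log_density(x, mean, diag_cov))
```

When predictors really are correlated, the full-covariance likelihood captures that structure and the diagonal model systematically mis-scores points along the correlation direction, which is the versatility the text refers to.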
As with regression, there are more complex types of discriminant analysis, including the non-parametric flexible discriminant analysis, which covers cases that cannot be modelled linearly or quadratically.
- Linear discriminant analysis, Fisher's linear discriminant, Quadratic discriminant analysis, Flexible discriminant analysis
- has functional building block
- FBB_Classification FBB_Dimensionality reduction
- has input data type
- IDT_Vector of quantitative variables
- has internal model
- INM_Function INM_Probability
- has output data type
- ODT_Classification ODT_Probability
- has learning style
- has parametricity
- PRM_Parametric PRM_Nonparametric with hyperparameter(s)
- has relevance
- sometimes supports
- mathematically similar to