
In statistics, explained variation measures the proportion to which a mathematical model accounts for the variation (dispersion) of a given data set. Often, variation is quantified as variance; then, the more specific term explained variance can be used.

The complementary part of the total variation is called unexplained or residual variation.

Definition in terms of information gain

Information gain by better modelling

Following Kent (1983),[1] we use the Fraser information (Fraser 1965)[2]

F(θ) = ∫ g(r) ln f(r; θ) dr,

where g(r) is the probability density of a random variable R, and f(r; θᵢ) with θᵢ ∈ Θᵢ (i = 0, 1) are two families of parametric models. Model family 0 is the simpler one, with a restricted parameter space Θ₀ ⊂ Θ₁.

Parameters θᵢ are determined by maximum likelihood estimation,

θ̂ᵢ = arg max_{θ ∈ Θᵢ} F(θ).

The information gain of model 1 over model 0 is written as

Γ(θ̂₁ : θ̂₀) = 2 [F(θ̂₁) − F(θ̂₀)] ≥ 0,

where a factor of 2 is included for convenience. Γ is always nonnegative; it measures the extent to which the best model of family 1 is better than the best model of family 0 in explaining g(r).

Information gain by a conditional model

Assume a two-dimensional random variable R = (X, Y), where X shall be considered as an explanatory variable and Y as a dependent variable. Models of family 1 "explain" Y in terms of X,

f(y | x; θ),

whereas in family 0, X and Y are assumed to be independent. We define the randomness of Y by D(Y) = exp[−2 F(θ̂₀)], and the randomness of Y, given X, by D(Y | X) = exp[−2 F(θ̂₁)]. Then

ρ_C² = 1 − D(Y | X) / D(Y)

can be interpreted as the proportion of the data dispersion which is "explained" by X.
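These definitions can be illustrated numerically. The following is a minimal sketch, assuming Gaussian model families and simulated data (all variable names are illustrative): the Fraser information at the maximum likelihood estimate is approximated by the average log-likelihood, D(Y) and D(Y | X) are obtained by exponentiation, and the resulting ρ_C² is compared with the squared sample correlation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: Y depends linearly on X with Gaussian noise.
n = 10_000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=1.5, size=n)

# Family 0: Y ~ N(mu, sigma^2), independent of X (Gaussian MLE fit).
sigma2_marginal = y.var()  # MLE variance uses 1/n, the numpy default
F0 = -0.5 * np.log(2 * np.pi * sigma2_marginal) - 0.5  # average log-likelihood

# Family 1: Y | X ~ N(a + b*x, sigma^2); least squares is the Gaussian MLE.
b, a = np.polyfit(x, y, 1)
resid = y - (a + b * x)
sigma2_cond = resid.var()
F1 = -0.5 * np.log(2 * np.pi * sigma2_cond) - 0.5

# Randomness D = exp(-2F); explained dispersion rho_C^2 = 1 - D(Y|X)/D(Y).
D_y = np.exp(-2 * F0)
D_y_given_x = np.exp(-2 * F1)
rho_C2 = 1 - D_y_given_x / D_y

# In this linear-Gaussian setting, rho_C^2 matches the squared correlation.
r2 = np.corrcoef(x, y)[0, 1] ** 2
print(rho_C2, r2)
```

Since exp[−2F] reduces to 2πe·σ̂² for a fitted Gaussian, the ratio D(Y | X)/D(Y) collapses to the ratio of residual to marginal variance, which is why the two printed values agree.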

Special cases and generalized usage

Linear regression

The fraction of variance unexplained is an established concept in the context of linear regression. The usual definition of the coefficient of determination is based on the fundamental concept of explained variance.
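In the regression setting, the coefficient of determination is one minus the fraction of variance unexplained. A short sketch with hypothetical toy data:

```python
import numpy as np

# Hypothetical toy data with an approximately linear trend.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])

slope, intercept = np.polyfit(x, y, 1)
y_hat = intercept + slope * x

ss_res = np.sum((y - y_hat) ** 2)     # unexplained (residual) variation
ss_tot = np.sum((y - y.mean()) ** 2)  # total variation

r_squared = 1 - ss_res / ss_tot       # explained variance = 1 - FVU
print(r_squared)
```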

Correlation coefficient as measure of explained variance

Let X be a random vector, and Y a random variable that is modeled by a normal distribution with centre μ + ΨᵀX. In this case, the above-derived proportion of explained variation ρ_C² equals the squared correlation coefficient R².

Note the strong model assumptions: the centre of the Y distribution must be a linear function of X, and for any given x, the Y distribution must be normal. In other situations, it is generally not justified to interpret R² as the proportion of explained variance.

In principal component analysis

Explained variance is routinely used in principal component analysis. The relation to the Fraser–Kent information gain remains to be clarified.
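In PCA, the proportion of variance explained by each component is the corresponding eigenvalue of the covariance matrix divided by the sum of all eigenvalues. A minimal sketch on simulated, strongly correlated data (the data-generating choices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated 3-dimensional data: two columns share a common factor z.
n = 5_000
z = rng.normal(size=n)
X = np.column_stack([
    z + 0.1 * rng.normal(size=n),
    2 * z + 0.1 * rng.normal(size=n),
    rng.normal(size=n),  # independent third column
])

# Eigendecomposition of the sample covariance matrix.
cov = np.cov(X, rowvar=False)
eigvals = np.linalg.eigvalsh(cov)[::-1]  # sort descending

# Proportion of total variance explained by each principal component.
explained_ratio = eigvals / eigvals.sum()
print(explained_ratio)
```

Because the first two columns are nearly proportional, the leading component captures most of the total variance here.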

Criticism

As the fraction of "explained variance" equals the squared correlation coefficient R², it shares all the disadvantages of the latter: it reflects not only the quality of the regression, but also the distribution of the independent (conditioning) variables.

In the words of one critic: "Thus R² gives the 'percentage of variance explained' by the regression, an expression that, for most social scientists, is of doubtful meaning but great rhetorical value. If this number is large, the regression gives a good fit, and there is little point in searching for additional variables. Other regression equations on different data sets are said to be less satisfactory or less powerful if their R² is lower. Nothing about R² supports these claims."[3]: 58 And, after constructing an example where R² is enhanced just by jointly considering data from two different populations: "'Explained variance' explains nothing."[3][page needed][4]: 183
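The pooling effect behind this criticism is easy to reproduce. In the sketch below (the populations and parameters are hypothetical), X explains Y only weakly within each population; merely pooling two populations whose X distributions are far apart inflates R² dramatically, without the regression fitting any individual observation better.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample(n, x_loc):
    """Draw from a population with the same weak linear relation Y = 0.5*X + noise."""
    x = rng.normal(loc=x_loc, scale=1.0, size=n)
    y = 0.5 * x + rng.normal(scale=2.0, size=n)
    return x, y

def r_squared(x, y):
    return np.corrcoef(x, y)[0, 1] ** 2

x1, y1 = sample(2_000, x_loc=0.0)
x2, y2 = sample(2_000, x_loc=20.0)  # identical relation, X shifted far away

r2_within = r_squared(x1, y1)
r2_pooled = r_squared(np.concatenate([x1, x2]), np.concatenate([y1, y2]))

print(r2_within, r2_pooled)  # pooling inflates "explained variance"
```

The inflation comes entirely from the spread of the conditioning variable: pooling widens the variance of X, and hence of the fitted values, while the residual noise per observation is unchanged.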

See also

  • Analysis of variance
  • Variance reduction
  • Variance-based sensitivity analysis

References

  1. ^ Kent, J. T. (1983). "Information gain and a general measure of correlation". Biometrika. 70 (1): 163–173. doi:10.1093/biomet/70.1.163. JSTOR 2335954.
  2. ^ Fraser, D. A. S. (1965). "On Information in Statistics". Ann. Math. Statist. 36 (3): 890–896. doi:10.1214/aoms/1177700061.
  3. ^ a b Achen, C. H. (1982). Interpreting and Using Regression. Beverly Hills: Sage. pp. 58–59. ISBN 0-8039-1915-8.
  4. ^ Achen, C. H. (1990). "'What Does "Explained Variance" Explain?: Reply". Political Analysis. 2 (1): 173–184. doi:10.1093/pan/2.1.173.

External links

  • Explained and Unexplained Variance on a graph
