In psychometrics, the Kuder–Richardson formulas, first published by G. F. Kuder and M. W. Richardson in 1937, are measures of internal consistency reliability for measures with dichotomous choices.

Kuder–Richardson Formula 20 (KR-20)

The name of this formula stems from the fact that it is the twentieth formula discussed in Kuder and Richardson's seminal paper on test reliability.[1]

It is a special case of Cronbach's α, computed for dichotomous scores.[2][3] It is often claimed that a high KR-20 coefficient (e.g., > 0.90) indicates a homogeneous test. However, like Cronbach's α, homogeneity (that is, unidimensionality) is actually an assumption, not a conclusion, of reliability coefficients. It is possible, for example, to have a high KR-20 with a multidimensional scale, especially with a large number of items.

Values can range from 0.00 to 1.00 (sometimes expressed as 0 to 100), with high values indicating that the examination is likely to correlate with alternate forms (a desirable characteristic). The KR-20 may be affected by the difficulty of the test, the spread in scores, and the length of the examination.

When the scores are not tau-equivalent (for example, when the examination items are not homogeneous but instead increase in difficulty), the KR-20 is an indication of the lower bound of internal consistency (reliability).

The formula for KR-20 for a test with K test items numbered i = 1 to K is

    r_{KR20} = \frac{K}{K-1}\left(1 - \frac{\sum_{i=1}^{K} p_i q_i}{\sigma_X^2}\right)

where pi is the proportion of correct responses to test item i, qi is the proportion of incorrect responses to test item i (so that pi + qi = 1), and the variance for the denominator is

    \sigma_X^2 = \frac{1}{n}\sum_{j=1}^{n}\left(X_j - \bar{X}\right)^2

where Xj is the total score of examinee j, X̄ is the mean total score, and n is the total sample size.

If it is important to use unbiased estimators, then the sum of squares should be divided by the degrees of freedom (n − 1), and the products pi qi should be multiplied by n/(n − 1).
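The computation above can be sketched in plain Python. This is a minimal illustration, not a library implementation; the function name `kr20` and the row-per-examinee input layout are assumptions for the example. It uses the population (divide-by-n) variance convention described above.

```python
def kr20(responses):
    """KR-20 reliability for dichotomously scored items.

    responses: list of rows, one per examinee; each row is a list of
    0/1 item scores, and all rows have the same length K.
    """
    n = len(responses)          # number of examinees
    k = len(responses[0])       # number of items, K
    # p_i: proportion of correct responses to item i; q_i = 1 - p_i
    p = [sum(row[i] for row in responses) / n for i in range(k)]
    sum_pq = sum(pi * (1 - pi) for pi in p)
    # variance of total scores, population convention (divide by n)
    totals = [sum(row) for row in responses]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n
    return (k / (k - 1)) * (1 - sum_pq / var)
```

For the unbiased-estimator variant, divide the sum of squares by n − 1 instead of n and multiply each pi qi product by n/(n − 1); the two adjustments cancel in the ratio only when applied consistently, so both must be changed together.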

Kuder–Richardson Formula 21 (KR-21)

Often discussed in tandem with KR-20 is Kuder–Richardson Formula 21 (KR-21).[4] KR-21 is a simplified version of KR-20 that can be used when the difficulty of all items on the test is known to be equal. Like KR-20, KR-21 was first set forth as the twenty-first formula discussed in Kuder and Richardson's 1937 paper.

The formula for KR-21 is:

    r_{KR21} = \frac{K}{K-1}\left(1 - \frac{K\,\bar{p}\,\bar{q}}{\sigma_X^2}\right), \qquad \bar{q} = 1 - \bar{p}

As with KR-20, K is the number of items. The difficulty level of the items, p̄, is assumed to be the same for each item; in practice, however, KR-21 can be applied by using the average item difficulty across the entire test. KR-21 tends to be a more conservative estimate of reliability than KR-20, which in turn is a more conservative estimate than Cronbach's α.[4]
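The KR-21 shortcut needs only the total-score mean and variance, since the average difficulty p̄ equals the mean total score divided by K. A minimal sketch under the same assumed input layout as before (the function name `kr21` is an illustration, not an established API):

```python
def kr21(responses):
    """KR-21 reliability, assuming equal item difficulty.

    responses: list of rows, one per examinee; each row is a list of
    0/1 item scores, and all rows have the same length K.
    """
    n = len(responses)          # number of examinees
    k = len(responses[0])       # number of items, K
    totals = [sum(row) for row in responses]
    mean = sum(totals) / n      # mean total score
    var = sum((t - mean) ** 2 for t in totals) / n
    p_bar = mean / k            # average item difficulty p-bar
    return (k / (k - 1)) * (1 - k * p_bar * (1 - p_bar) / var)
```

On data with items of unequal difficulty, this yields a value at or below KR-20, consistent with KR-21 being the more conservative estimate.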

References

  1. Kuder, G. F., & Richardson, M. W. (1937). The theory of the estimation of test reliability. Psychometrika, 2(3), 151–160.
  2. Cortina, J. M., (1993). What Is Coefficient Alpha? An Examination of Theory and Applications. Journal of Applied Psychology, 78(1), 98–104.
  3. Ritter, Nicola L. (18 February 2010). Understanding a Widely Misunderstood Statistic: Cronbach's "Alpha". Annual meeting of the Southwest Educational Research Association. New Orleans.
  4. "Kuder and Richardson Formula 20 | Real Statistics Using Excel". Retrieved 8 March 2019.