The Measurement of Interrater Agreement
Wiley Series in Probability and Statistics, 2003, pp. 598–626
Abstract
In this chapter we consider the measurement of interrater agreement when the ratings are on categorical scales. First, we discuss the case of the same two raters per subject. Next, we consider weighted kappa, which incorporates a notion of distance between rating categories, followed by the case of multiple ratings per subject with different sets of raters. We then discuss applications to other problems and relate the results of the preceding sections to the theory presented in an earlier chapter on correlated binary variables. A problem-solving section appears at the end of the chapter.
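As a minimal sketch of the two statistics the abstract mentions, the snippet below computes Cohen's kappa and a weighted kappa (with linear or quadratic disagreement weights) from a two-rater contingency table. The function names, the 3×3 example table, and the choice of weighting scheme are illustrative assumptions, not the chapter's own notation or data.

```python
import numpy as np

def cohen_kappa(table):
    """Unweighted Cohen's kappa from a k x k cross-classification of two raters."""
    table = np.asarray(table, dtype=float)
    p = table / table.sum()                    # joint proportions p_ij
    po = np.trace(p)                           # observed agreement
    pe = p.sum(axis=1) @ p.sum(axis=0)         # chance agreement from the marginals
    return (po - pe) / (1.0 - pe)

def weighted_kappa(table, weights="linear"):
    """Weighted kappa using linear or quadratic disagreement weights."""
    table = np.asarray(table, dtype=float)
    k = table.shape[0]
    i, j = np.indices((k, k))
    d = np.abs(i - j) / (k - 1)                # normalized distance between categories
    w = d if weights == "linear" else d ** 2   # disagreement weight for cell (i, j)
    p = table / table.sum()
    expected = np.outer(p.sum(axis=1), p.sum(axis=0))
    return 1.0 - (w * p).sum() / (w * expected).sum()

if __name__ == "__main__":
    # Hypothetical counts: rows are rater A's categories, columns rater B's
    counts = [[20, 5, 1],
              [4, 15, 3],
              [2, 3, 12]]
    print(f"kappa          = {cohen_kappa(counts):.3f}")
    print(f"weighted kappa = {weighted_kappa(counts, 'quadratic'):.3f}")
```

With identical marginal weights, the weighted version reduces to the unweighted kappa when all off-diagonal disagreements are weighted equally; the quadratic weights penalize disagreements between distant categories more heavily, which is the "distance between rating categories" idea referred to above.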
Related Papers
- Interrater reliability: the kappa statistic (2012) — 17,961 citations
- Simple Procedures to Estimate Chance Agreement and Kappa for the Interrater Reliability of Response Segments Using the Rorschach Comprehensive System (1999) — 57 citations
- Reliability of a Crohn's disease clinical classification scheme based on disease behaviour (1998) — 50 citations
- Evaluating interrater agreement in SPICE-based assessments (2003) — 22 citations