Computing Inter-Rater Reliability for Observational Data: An Overview and Tutorial
Abstract
Many research designs require the assessment of inter-rater reliability (IRR) to demonstrate consistency among observational ratings provided by multiple coders. However, many studies use incorrect statistical procedures, fail to fully report the information necessary to interpret their results, or do not address how IRR affects the power of their subsequent analyses for hypothesis testing. This paper provides an overview of methodological issues related to the assessment of IRR, with a focus on study design, selection of appropriate statistics, and the computation, interpretation, and reporting of some commonly used IRR statistics. Computational examples include SPSS and R syntax for computing Cohen's kappa and intraclass correlations to assess IRR.
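As a minimal sketch of the R side of such computations (not the paper's own tutorial syntax), the `irr` package's `kappa2()` and `icc()` functions compute Cohen's kappa for two raters and intraclass correlations for two or more raters; the data frames below are hypothetical, and the appropriate ICC model, type, and unit depend on the study design.

```r
# Sketch: Cohen's kappa and an intraclass correlation with the 'irr' package.
# install.packages("irr")  # if not already installed
library(irr)

# Hypothetical data: two coders assign one of three categories to 10 subjects
kappa_data <- data.frame(
  coder1 = c("A", "B", "B", "C", "A", "A", "C", "B", "A", "C"),
  coder2 = c("A", "B", "C", "C", "A", "B", "C", "B", "A", "C")
)
kappa2(kappa_data)  # unweighted Cohen's kappa for two raters

# Hypothetical data: three coders give continuous ratings to 6 subjects
icc_data <- data.frame(
  coder1 = c(4.2, 3.8, 5.1, 2.9, 4.5, 3.3),
  coder2 = c(4.0, 3.5, 5.3, 3.1, 4.4, 3.6),
  coder3 = c(4.4, 3.9, 4.9, 2.8, 4.7, 3.2)
)
# Two-way model, absolute-agreement ICC for a single rater;
# choose model/type/unit to match how coders were assigned and
# how ratings will be used in subsequent analyses.
icc(icc_data, model = "twoway", type = "agreement", unit = "single")
```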