Sampling Variability of Performance Assessments
(Among the top 1% most-cited papers of 1993.)
Abstract
In this article, performance assessments are cast within a sampling framework. More specifically, a performance assessment is viewed as a sample of student performance drawn from a complex universe defined by a combination of all possible tasks, occasions, raters, and measurement methods. Using generalizability theory, we present evidence bearing on the generalizability and convergent validity of performance assessments sampled from a range of measurement facets and measurement methods. Results at both the individual and school level indicate that task-sampling variability is the major source of measurement error. Large numbers of tasks are needed to obtain a reliable measure of mathematics and science achievement at the elementary level. With respect to convergent validity, results suggest that methods do not converge. Students' performance scores, then, depend on both the tasks and the methods sampled.
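The variance decomposition behind these claims can be illustrated with a small sketch. The code below is not the authors' analysis; it is a minimal, assumed persons-by-tasks (p × t) generalizability study with simulated data, estimating variance components from the crossed-design mean squares and showing how the relative generalizability (G) coefficient grows as more tasks are sampled. All parameter values (numbers of students, tasks, and the component standard deviations) are illustrative assumptions chosen so that the person-by-task interaction dominates, mirroring the article's finding that task sampling is the major source of measurement error.

```python
import numpy as np

rng = np.random.default_rng(0)
n_p, n_t = 50, 8  # students x tasks, fully crossed, one score per cell

# Simulated score components (illustrative SDs): a modest person effect,
# a task effect, and a dominant person-x-task interaction/error term,
# mimicking the finding that task sampling dominates measurement error.
person = rng.normal(0, 0.5, size=(n_p, 1))
task = rng.normal(0, 0.3, size=(1, n_t))
resid = rng.normal(0, 1.0, size=(n_p, n_t))
X = person + task + resid

grand = X.mean()
pm = X.mean(axis=1, keepdims=True)  # person means
tm = X.mean(axis=0, keepdims=True)  # task means

# Mean squares for the crossed p x t design
ms_p = n_t * ((pm - grand) ** 2).sum() / (n_p - 1)
ms_t = n_p * ((tm - grand) ** 2).sum() / (n_t - 1)
ms_res = ((X - pm - tm + grand) ** 2).sum() / ((n_p - 1) * (n_t - 1))

# Variance-component estimates via expected mean squares
var_p = max((ms_p - ms_res) / n_t, 0.0)   # universe-score (person) variance
var_pt = ms_res                           # p x t interaction, confounded with error

def g_coefficient(n_tasks):
    """Relative G coefficient for a measure averaged over n_tasks tasks."""
    return var_p / (var_p + var_pt / n_tasks)

for n in (1, 5, 10, 20):
    print(f"{n:2d} tasks -> G = {g_coefficient(n):.2f}")
```

Because the interaction component is large relative to the person component, the G coefficient stays low with few tasks and rises only as many tasks are averaged, which is the sampling logic behind the article's conclusion that large numbers of tasks are needed for a reliable measure.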