Conceptualizing Disagreement in Qualitative Coding
Top 18% of 2018 papers
Abstract
Collaborative qualitative coding often involves coders assigning different labels to the same instance, leading to ambiguity. We refer to such an instance of ambiguity as disagreement in coding. Analyzing the reasons for such disagreement is essential, both for bolstering the understanding users gain from coding and reinterpreting the data collaboratively, and for negotiating user-assigned labels when building effective machine learning models. We propose a conceptual definition of collective disagreement based on diversity and divergence within the coding distributions. This perspective on disagreement translates to diverse coding contexts and groups of coders irrespective of discipline. We introduce two tree-based ranking metrics as standardized ways of comparing disagreements in how data instances have been coded. We empirically validate that, of the two tree-based metrics, coders' perceptions of disagreement match more closely with the n-ary tree metric than with the post-traversal tree metric.
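The abstract does not specify how the tree-based metrics are computed, but the notion of "diversity within the coding distributions" can be illustrated with a simple entropy-based measure. The sketch below is our own assumption for illustration, not the paper's metric: it scores a set of labels assigned by different coders, returning zero when all coders agree and the maximum value when labels are spread evenly.

```python
from collections import Counter
from math import log2

def coding_diversity(labels):
    """Shannon entropy of a coding distribution (illustrative only):
    0 when all coders assign the same label, log2(k) when k labels
    are used equally often."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Full agreement among three coders: zero diversity.
unanimous = coding_diversity(["pos", "pos", "pos"])

# An even three-way split: maximal diversity, log2(3).
split = coding_diversity(["pos", "neg", "neu"])
```

A higher score flags an instance whose coding distribution is more contested, which is the kind of instance the paper's ranking metrics are meant to surface for collaborative reinterpretation.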