Validity of Online Screening for Autism: Crowdsourcing Study Comparing Paid and Unpaid Diagnostic Tasks
Journal of Medical Internet Research, 2019, Vol. 21(5), e13668
Top 10% of 2019 papers by citations
Peter Washington, Haik Kalantarian, Qandeel Tariq, Jessey Schwartz, Kaitlyn Dunlap, Brianna Chrisman, Maya Varma, Michael Ning, Aaron Kline, Nate Stockham, Kelley Paskov, Catalin Voss, Nick Haber, Dennis P. Wall
Abstract
Many paid crowd workers on Amazon Mechanical Turk (AMT) reported enjoying answering autism screening questions about home videos, suggesting intrinsic motivation to produce quality assessments. Paid crowdsourcing yields promising screening assessments of pediatric autism, with ratings deviating on average by less than 20% from those of professional gold-standard raters, a level of agreement that is potentially clinically informative for parents. Parents of children with autism likely overfit their intuitions to their own affected child. This work provides preliminary demographic data on raters who may be better able to recognize and measure features of autism across its wide range of phenotypic manifestations.
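The "less than 20% average deviation" figure is an agreement metric between crowd and professional ratings. A minimal sketch of how such a number can be computed is shown below, assuming ratings on a shared numeric scale; the function name, example values, and 0–4 scale are illustrative assumptions, not taken from the paper's scoring rubric.

```python
import numpy as np

def mean_percent_deviation(crowd_scores, gold_scores, scale_max):
    """Mean absolute deviation between crowd and gold-standard ratings,
    expressed as a percentage of the full rating scale.
    (Hypothetical metric sketch; not the paper's exact formula.)"""
    crowd = np.asarray(crowd_scores, dtype=float)
    gold = np.asarray(gold_scores, dtype=float)
    return 100.0 * np.mean(np.abs(crowd - gold)) / scale_max

# Example: crowd ratings vs. professional ratings on a 0-4 scale
crowd = [3, 2, 4, 1, 3]
gold = [3, 3, 4, 2, 2]
print(f"{mean_percent_deviation(crowd, gold, scale_max=4):.1f}% deviation")
# -> 15.0% deviation, i.e. within the <20% band reported in the abstract
```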