The EasyCog Dataset: Towards Easier Cognitive Assessment with Passive Video Watching
Abstract
As the global population ages, the prevalence of cognitive impairment continues to rise, highlighting the urgent need for accessible, low-burden cognitive assessment. Current scale-based clinical assessments are often hindered by subjectivity, significant user burden, and practice effects, which limit their applicability. We observe that passive visual stimuli can engage multiple cognitive domains while minimizing the need for active participation. Given the lack of related datasets, we establish EasyCog, the first large-scale multimodal dataset for low-burden cognitive assessment. EasyCog records synchronized forehead/ear EEG and contactless eye-tracking data while participants passively watch a short, cognitively structured video followed by an eyes-closed rest period. The dataset includes 101 participants, spanning healthy controls and patients with Parkinson's disease (PD), Alzheimer's disease (AD), and vascular dementia (VaD), together with clinician-administered MoCA/MMSE scores collected in daily settings. We provide detailed collection procedures, quality validation, implementation details, and benchmark baselines. Results demonstrate the feasibility of assessment while highlighting generalization challenges. By integrating passive visual stimuli with affordable sensing, EasyCog provides a foundation for future research on accessible, scalable cognitive monitoring in both clinical and community settings.