Exploring an Affective and Responsive Virtual Environment to Improve Remote Learning
Top 13% of 2023 papers by citations
Abstract
Online classes are typically conducted using video conferencing software such as Zoom, Microsoft Teams, and Google Meet. Research has identified drawbacks of online learning, such as “Zoom fatigue”, characterized by distraction and lack of engagement. This study presents the CUNY Affective and Responsive Virtual Environment (CARVE) Hub, a novel virtual reality hub that uses a facial emotion classification model to generate emojis for affective and informal responsive interaction in a 3D virtual classroom setting. A web-based machine learning model performs facial emotion classification, enabling students to communicate four basic emotions live through automated web camera capture in the virtual classroom without sharing their camera feeds. The experiment is conducted in undergraduate classes on both Zoom and CARVE, and survey results indicate that students perceive interactions in the proposed virtual classroom more positively than those on Zoom. Correlations between automated emojis and interactions are also observed. This study discusses potential explanations for the improved interactions, including reduced pressure on students when their faces are not shown. In addition, video panels in traditional remote classrooms may be useful for communication but not for interaction. Students favor virtual reality features such as spatial audio and the ability to move around, with collaboration identified as the most helpful feature.
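The core interaction the abstract describes is mapping a browser-side facial emotion classifier's output to an emoji shown in the virtual classroom. A minimal sketch of that mapping step is below; the four emotion labels, the score format, and the confidence threshold are assumptions for illustration and do not reflect the paper's actual model API.

```typescript
// Hypothetical emotion-to-emoji mapping, sketched from the abstract's
// description of four basic emotions communicated via emojis.
type Emotion = "happy" | "sad" | "angry" | "surprised";

const EMOJI: Record<Emotion, string> = {
  happy: "😊",
  sad: "😢",
  angry: "😠",
  surprised: "😮",
};

// Pick the highest-scoring emotion from classifier probabilities and
// return its emoji, or null when no score clears the confidence threshold
// (so no emoji is displayed for uncertain predictions).
function emotionToEmoji(
  scores: Record<Emotion, number>,
  threshold = 0.5
): string | null {
  const [best, score] = (Object.entries(scores) as [Emotion, number][]).reduce(
    (a, b) => (b[1] > a[1] ? b : a)
  );
  return score >= threshold ? EMOJI[best] : null;
}

console.log(
  emotionToEmoji({ happy: 0.8, sad: 0.05, angry: 0.05, surprised: 0.1 })
); // → 😊
```

In a real deployment, `scores` would come from a web-based model running on periodic webcam captures, so only the resulting emoji (never the video) leaves the student's machine, consistent with the camera-off design described above.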