Sight and Sound Converge to Form Modality-Invariant Representations in Temporoparietal Cortex
Top 10% of 2012 papers
Abstract
People can identify objects in the environment with remarkable accuracy, regardless of the sensory modality they use to perceive them. This suggests that information from different sensory channels converges somewhere in the brain to form modality-invariant representations, i.e., representations that reflect an object independently of the modality through which it has been apprehended. In this functional magnetic resonance imaging study of human subjects, we first identified brain areas that responded to both visual and auditory stimuli and then used crossmodal multivariate pattern analysis to evaluate the neural representations in these regions for content specificity (i.e., do different objects evoke different representations?) and modality invariance (i.e., do the sight and the sound of the same object evoke a similar representation?). While several areas became activated in response to both auditory and visual stimulation, only the neural patterns recorded in a region around the posterior part of the superior temporal sulcus displayed both content specificity and modality invariance. This region thus appears to play an important role in our ability to recognize objects in our surroundings through multiple sensory channels and to process them at a supramodal (i.e., conceptual) level.