Perceiving Possibilities for Action: On the Necessity of Calibration and Perceptual Learning for the Visual Guidance of Action
Abstract
Tasks such as steering, braking, and intercepting moving objects constitute a class of behaviors, known as visually guided actions, that are typically carried out under continuous control on the basis of visual information. Several decades of research on visually guided action have resulted in an inventory of control laws that describe, for each task, how information about the sufficiency of one's current state is used to make ongoing adjustments. Although a considerable amount of important research has been generated within this framework, it fails to capture several aspects of these tasks that are essential for successful performance. The purpose of this paper is to provide an overview of the existing framework, discuss its limitations, and introduce a new framework that emphasizes the necessity of calibration and perceptual learning. Within the proposed framework, successful human performance on these tasks is a matter of learning to detect and calibrate optical information about the boundaries that separate possible from impossible actions. This resolves a long-standing incompatibility between theories of visually guided action and the concept of an affordance. The implications of adopting this framework for the design of experiments and models of visually guided action are discussed.