Title: Capacity limits in visual processing and puzzles of visual perception
Observers can grasp the essence of a scene in an instant, yet when probed for details they are at a loss. People have trouble finding their keys, yet the keys may be quite visible once found. How does one explain this combination of marvelous successes with quirky failures? Researchers have attempted to provide a unifying explanation in terms of mechanisms for dealing with limited capacity. In particular, a popular proposal posits that access to higher-level processing is limited, that a mechanism known as selective attention serially gates that access, and that the gate operates early in visual processing. This account, however, has been problematic.
An efficient encoding in peripheral vision provides a more parsimonious explanation. More than 99% of the visual field lies outside the fovea. Peripheral vision must condense this mass of information into a succinct representation that nonetheless carries the information that is needed for vision at a glance. I will describe a model in which the visual system deals with limited capacity by encoding the visual input in terms of a rich set of local image statistics, where the local regions grow — and the representation becomes less precise — with distance from fixation. This representation computes sophisticated image features at the expense of spatial localization of those features. This tradeoff is critical to modeling vision, and once understood, a great many visual phenomena can be explained without further ado.
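The core of the proposed encoding can be illustrated with a minimal sketch: summarize each local region by a small set of statistics, with the region size growing with distance from fixation. The function names, the linear growth rate, and the use of mean and standard deviation as the statistics are all illustrative assumptions; the actual model uses a far richer statistic set and a different growth law.

```python
import numpy as np

def pooling_radius(eccentricity, slope=0.5, min_radius=1.0):
    # Pooling-region radius grows linearly with distance from fixation.
    # (Hypothetical parameters, for illustration only.)
    return max(min_radius, slope * eccentricity)

def encode(image, fixation):
    # Summarize each pixel's neighborhood by simple local statistics;
    # mean and standard deviation stand in for the model's richer set.
    h, w = image.shape
    fy, fx = fixation
    stats = np.zeros((h, w, 2))
    for y in range(h):
        for x in range(w):
            ecc = np.hypot(y - fy, x - fx)        # distance from fixation
            r = int(round(pooling_radius(ecc)))   # region grows with ecc
            patch = image[max(0, y - r):y + r + 1,
                          max(0, x - r):x + r + 1]
            stats[y, x] = patch.mean(), patch.std()  # lossy local summary
    return stats

img = np.random.rand(32, 32)
code = encode(img, fixation=(16, 16))
```

Near fixation the pooling regions are small, so the statistics pin features down precisely; far in the periphery the regions are large, so sophisticated features are still computed but their spatial localization is lost, which is the tradeoff described above.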