Rolls and colleagues discovered face-selective neurons in the inferior
temporal visual cortex; showed that these and object-selective
neurons have translation, size, contrast, spatial frequency, and
in some cases even view invariance; and showed that neurons encode
information using mainly sparse distributed firing rate encoding. These
neurophysiological investigations are complemented by VisNet, one of the
few biologically plausible models of how face and object recognition are
implemented in the brain. Key descriptions are in 639, B15, and 508.
Discovery of face-selective neurons in the amygdala (38, 91, 97), the
inferior temporal visual cortex (38A, 96, 162), and the orbitofrontal
cortex (397) (see 412, 451, 501, B11, B12, B15).
Discovery of face expression selective neurons in the cortex in the superior temporal
sulcus (126) and orbitofrontal cortex (397). Reduced connectivity in this system is found in autism (541, 609).
Discovery of how neurons in the inferior temporal visual cortex implement translation-
and view-invariant representations of faces and objects (91, 108, 127, 191, 248, B12, B15).
Discovery that in complex natural scenes the receptive fields of inferior temporal cortex
neurons shrink to approximately the size of objects, revealing a mechanism that
facilitates object recognition in natural scenes (320, 516, B12).
Discovery of the attentional control of visual processing by inferior temporal cortex
neurons in complex natural scenes (445).
Discovery that in natural scenes inferior temporal visual cortex neurons encode
information about the locations of objects relative to the fovea, thus encoding
information useful in scene representations (395, 455, 516).
Discovery that the inferior temporal visual cortex encodes information about the
identity of objects, but not about their reward value, as shown by reversal and devaluation
investigations (32, 320, B11). This provides a foundation for a key principle in primates
including humans: the reward value and emotional valence of visual stimuli are
represented in the orbitofrontal cortex, as shown by one-trial reversal learning and
devaluation investigations (79, 212, 216) (and to some extent in the amygdala: 38, 383, B11),
whereas before that stage, in visual cortical areas, the representations are of objects and
stimuli independently of their value (B11, B13, B14, B15). This provides for the
separation of emotion from perception.
Discovery that neurons encode information using a sparse distributed graded firing
rate representation, with information encoded approximately independently by
different neurons (at least up to tens) (172, 196, 204, 225, 227,
255, 321, 419, 474, 508, 553, 561, B12, B15). (These
discoveries argue against ‘grandmother cells’.) The representation can be read
by neuronally plausible dot product decoding, and is thus suitable for the
associative computations performed in the brain (231, B12).
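As an illustration of why such a code suits associative computation, dot product decoding of a sparse distributed firing rate representation can be sketched as follows (the neuron count, firing rates, and stimulus labels are invented for the example, not recorded data):

```python
# Sketch of dot product decoding of a sparse distributed rate code.
# The rate vectors below are illustrative, not recorded data.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Mean firing rates (spikes/s) of 8 neurons to 3 stimuli: each stimulus
# activates a different small subset of neurons (sparse, graded).
prototypes = {
    "face_A": [40, 25, 0, 0, 5, 0, 0, 0],
    "face_B": [0, 0, 35, 20, 0, 10, 0, 0],
    "object": [0, 5, 0, 0, 0, 0, 30, 45],
}

def decode(rates):
    """Return the stimulus whose stored mean rate vector has the
    largest dot product with the observed population response."""
    return max(prototypes, key=lambda s: dot(prototypes[s], rates))

# A noisy single-trial response resembling face_A:
trial = [35, 30, 2, 0, 8, 1, 0, 3]
print(decode(trial))  # -> face_A
```

The decoder is just a weighted sum of firing rates, which is what a receiving neuron computes across its synapses; no neuron-by-neuron lookup is required.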
Discovery that relatively little information is encoded and transmitted by
cross-correlations between neurons (265, 329, 348, 351, 369, 517). Much of the
information is available from the firing rates very rapidly, in 20-50 ms
(197, 257, 407). All these discoveries are important in our understanding of
computation and information transmission in the brain (B12, B15).
A theory and model of invariant visual object recognition in the ventral
visual system that is closely related to the empirical discoveries (162,
485, 516, 535, 536, 554, 589, 639, B12, B15).
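A minimal sketch of the trace learning rule central to such models may help (the network size, inputs, and parameters below are invented for illustration, and are far simpler than VisNet itself): because a short-memory trace of each output neuron's firing persists across the successive transforms of an object seen in temporal continuity, the neuron that responds to one transform also strengthens its weights onto the next, and so comes to respond invariantly.

```python
import math

# Trace rule sketch: delta_w = alpha * y_trace * x, with
# y_trace = (1 - eta) * y + eta * previous_trace.
# All sizes and parameter values are illustrative, not VisNet's.
alpha, eta = 0.5, 0.8

def norm(w):
    n = math.sqrt(sum(x * x for x in w))
    return [x / n for x in w]

# 4 inputs: 0,1 = object A at two positions; 2,3 = object B at two.
view_A1, view_A2 = [1, 0, 0, 0], [0, 1, 0, 0]
view_B1, view_B2 = [0, 0, 1, 0], [0, 0, 0, 1]

# Two competing output neurons with slight initial preferences.
w = [norm([1.0, 0.1, 0.1, 0.1]),   # neuron 0 starts preferring A1
     norm([0.1, 0.1, 1.0, 0.1])]   # neuron 1 starts preferring B1

def present(seq):
    trace = [0.0, 0.0]             # the trace is reset between objects
    for x in seq:
        y = [sum(wi * xi for wi, xi in zip(w[n], x)) for n in range(2)]
        winner = max(range(2), key=lambda n: y[n])
        rate = [y[n] if n == winner else 0.0 for n in range(2)]  # competition
        for n in range(2):
            trace[n] = (1 - eta) * rate[n] + eta * trace[n]
            w[n] = norm([wj + alpha * trace[n] * xj
                         for wj, xj in zip(w[n], x)])

for _ in range(5):                 # each object sweeps across its positions
    present([view_A1, view_A2])
    present([view_B1, view_B2])

def resp(n, x):
    return sum(wi * xi for wi, xi in zip(w[n], x))

# Neuron 0 now responds to both transforms of object A.
print([round(resp(0, v), 2) for v in (view_A1, view_A2, view_B1, view_B2)])
```

Here the competition is a simple winner-take-all and the weight vectors are kept normalized, standing in for the competitive learning of the full model.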
A theory and model of coordinate transforms in the dorsal visual system using a
combination of gain modulation and slow or trace rule competitive
learning. The theory starts with retinal position inputs gain-modulated
by eye position to produce a head-centred representation, followed by
gain modulation by head direction, followed by gain modulation by
place, to produce an allocentric representation in spatial view
coordinates useful for the idiothetic update of hippocampal spatial
view cells (612).
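The gain-modulation step can be illustrated with a one-dimensional sketch (the tuning widths, preference grid, and positions are invented for the example): units with Gaussian retinal position tuning, multiplicatively gain-modulated by eye position, form a basis from which a downstream unit can read out head-centred position (head-centred = retinal + eye).

```python
import math

# One-dimensional sketch of a gain-field coordinate transform.
# All tuning widths and preference grids are illustrative.

def gauss(x, mu, sigma=5.0):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# Basis layer: units at all (retinal preference, eye preference) pairs.
prefs = [(r, e) for r in range(-20, 21, 5) for e in range(-20, 21, 5)]

def basis(retinal, eye):
    # Multiplicative gain modulation of retinal tuning by eye position.
    return [gauss(retinal, r) * gauss(eye, e) for r, e in prefs]

def head_unit(retinal, eye, head_pref=10):
    # Read-out weights select basis units whose preferences sum to the
    # preferred head-centred position (r + e = head_pref).
    weights = [gauss(r + e, head_pref) for r, e in prefs]
    acts = basis(retinal, eye)
    return sum(wi * ai for wi, ai in zip(weights, acts))

# The same head-centred location (retinal + eye = 10) drives the unit
# similarly however the location is split between eye and retina:
print(round(head_unit(10, 0), 2), round(head_unit(0, 10), 2),
      round(head_unit(-5, 15), 2))
# A different head-centred location (retinal + eye = 20) drives it less:
print(round(head_unit(10, 10), 2))
```

The same multiplicative scheme iterates for the later stages described above, with head direction and then place supplying the gain signal at each stage.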
These coordinate transforms are used for self-motion update in the
theory of navigation using hippocampal spatial view cells (633).
Binaural sound recording to allow 3-dimensional sound localization (11A;
patent, Binaural sound recording).