People see with feeling. We ‘gaze’, ‘behold’, ‘stare’, ‘gape’ and ‘glare’. In this paper, we develop the hypothesis that the brain's ability to see in the present incorporates a representation of the affective impact of those visual sensations in the past. This representation makes up part of the brain's prediction of what the visual sensations stand for in the present, including how to act on them in the near future. The affective prediction hypothesis implies that responses signalling an object's salience, relevance or value do not occur as a separate step after the object is identified. Instead, affective responses support vision from the very moment that visual stimulation begins.
Michael May lost the ability to see when he was 3 years old, after an accident destroyed his left eye and damaged his right cornea. Some 40 years later, Mr May received a corneal transplant that restored his brain's ability to absorb normal visual input from the world (Fine et al. 2003). With the hardware finally working, Mr May saw only simple movements, colours and shapes rather than, as most people do, a world of faces and objects and scenes. It was as if he lived in two different worlds: one where sound, touch, smell and taste were all integrated, and a second world of vision that stood apart. His visual sensations seemed foreign, similar to a language he was just learning to speak. As time passed, and Mr May gained experience with the visual world in context, he slowly became fluent in vision. Two years after his surgery, Mr May commented: ‘The difference between today and over 2 years ago is that I can better guess at what I am seeing. What is the same is that I am still guessing.’ (p. 916, italics in the original).
What Mr May did not know is that sighted people automatically make the guesses he was forced to make with effort. From birth, the human brain captures statistical regularities in sensory–motor patterns and stores them as internal representations. The brain then uses these stored representations, almost instantaneously, to predict continuously and unintentionally what incoming visual sensations stand for in the world (Bar 2003, 2007, 2009; Kveraga et al. 2007a,b). When the brain receives new sensory input from the world in the present, it generates a hypothesis based on what it knows from the past to guide recognition and action in the immediate future. This is how people learn that the sounds of a voice come from moving lips on a face, that the red and green blotches on a round Macintosh apple are associated with a certain tartness of taste, and that the snowy white petals of an English rose have a velvety texture and hold a certain fragrance.
External sensations do not occur in a vacuum, but in a context of internal sensations from the body that holds the brain. The sensory–motor patterns being stored for future use include sensations from organs, muscles and joints (or ‘interoceptive’ sensations), as well as representations of sights, sounds, smells, tastes and touch (‘exteroceptive’ sensations). Although bodily sensations are rarely experienced consciously or with perfect fidelity (Barrett et al. 2004), they are continually detected and internally represented (Craig 2002, 2009). Psychology refers to these internal bodily changes as ‘affective’. In addition to directly experiencing changes in their breathing, muscle tension or stomach motility, people routinely experience more diffuse feelings: pleasure or discomfort, feeling worked up or slowed down. Therefore, in addition to learning that the sounds of a voice come from moving lips, people also learn whether or not they liked the sound of the voice (Bliss-Moreau submitted) and the person that it belongs to (Bliss-Moreau et al. 2008); they learn that they enjoy tartly flavoured red and green (Macintosh) apples or the milder tasting yellow apples (Golden Delicious); and they learn whether or not they prefer the strong fragrance of white English roses over the milder smell of a deep red American Beauty, or even the smell of roses over lilies. When the brain detects visual sensations from the eye in the present moment, and tries to interpret them by generating a prediction about what those visual sensations refer to or stand for in the world, it uses not only previously encountered patterns of sound, touch, smell and taste, along with semantic knowledge; it also uses affective representations: prior experiences of how those external sensations have influenced internal sensations from the body. Often these representations reach subjective awareness and are experienced as affective feelings, but they need not (e.g. Zajonc 1980; for a recent discussion, see Barrett & Bliss-Moreau in press). Guessing at what an object is requires, in part, knowing its value.
In this paper, we explore the hypothesis that the brain routinely makes affective predictions during visual recognition. We suggest that the brain's prediction about the meaning of visual sensations of the present includes some representation of the affective impact of those (or similar) sensations in the past. An affective prediction, in effect, allows the brain to anticipate and prepare to act on those sensations in the future. Furthermore, affective predictions are made quickly and efficiently, only milliseconds after visual sensations register on the retina. From this perspective, sensations from the body are a dimension of knowledge: they help us to identify what an object is when we encounter it, based, in part, on past reactions. If this hypothesis is correct, then affective responses signalling an object's salience, relevance or value do not occur as a separate step after the object is identified; rather, an affective response assists in seeing an object as what it is from the very moment that visual stimulation begins. Whether or not he was aware of it, Mr May's predictions about the visual world probably became more effective, in part, as his affective state became more responsive to his visual sensations. Learning to see meant experiencing visual sensations as value added. Whether experienced consciously or not, a pleasant or unpleasant affective change in response to visual stimulation might have helped Mr May (as it helps all of us) to recognize moving lips as a particular person speaking, a blotch of red and green as an apple or white silky petals as a rose.
2. Affect defined
In English, the word ‘affect’ means ‘to produce a change’. To be affected by something is to be influenced by it. In psychology, affect refers to a specific kind of influence—something's ability to influence a person's body state. Sometimes, the resulting bodily sensations in the core of the body are experienced as physical symptoms (such as being wound up from drinking too much coffee, fatigued from not enough sleep or energetic from exercise). Much of the time, sensations from the body are experienced as simple feelings of pleasure or displeasure with some degree of activation, either alone or as an emotion (figure 1; see Russell & Barrett 1999; Barrett & Bliss-Moreau in press). At still other times, bodily changes are too subtle to be consciously experienced at all. Whether consciously felt or not, an object is said to have affective ‘value’ if it has the capacity to influence a person's breathing, or heart rate, hormonal secretions, etc. In fact, objects are said to be ‘positive’ or ‘negative’ by virtue of their capacity to influence a person's body state (just as objects are said to be ‘red’ if they reflect light at 600 nm).
When affect is experienced so that it is reportable, it can be in the background or foreground of consciousness. When in the background, it is perceived as a property of the world, rather than as the person's reaction to it. ‘Unconscious affect’ (as it is called) is why a drink tastes delicious or is unappetizing (e.g. Berridge & Winkielman 2003; Winkielman et al. 2005; Koenigs & Tranel 2008), why we experience some people as nice and others as mean (Li et al. 2007) and why some paintings are beautiful while others are ugly (for a discussion of affect and preferences, see Clore & Storbeck 2006). When in the foreground, it is perceived as a personal reaction to the world: people like or dislike a drink, a person or a painting. Affect can be experienced as emotional (such as being anxious at an uncertain outcome, depressed from a loss or happy at a reward; cf. Barrett 2006a,b).
3. Affect and perception
For centuries, philosophers have believed that every moment of waking life is to some degree pleasant or unpleasant, with some degree of arousal; that is, that affect is a basic ingredient of mental life. This idea continues to be incorporated into contemporary perspectives on consciousness, including Damasio's somatic marker hypothesis (Damasio 1999), Edelman's theory of neural Darwinism (Edelman 1987; Edelman & Tononi 2000), Searle's theory of consciousness (Searle 1992, 2004) and Humphrey's (2006) theory of conscious sensation. It can also be found in the early psychological writings of Spencer (1855), James (1890), Sully (1892) and Wundt (1998). During the behaviourist era in psychology, scientists no longer regarded affect as central to perception and thought, and this trend continued as psychology emerged from the grips of behaviourism during the cognitive revolution. Affective responses were largely ignored in cognitive science, and questions about affect were relegated to the study of emotion, where it was assumed that affect occurred after object perception and in reaction to it (e.g. Arnold 1960). First, a red, round, hard object is perceived as an apple, and only then is the object related to past experiences of enjoying the crunchy sweetness of the first bite, or to a breezy trip to the farm for apple picking on a fine autumn afternoon.
As it turns out, philosophers and early psychologists were right, at least when it comes to the place of affect in mental life. We contend that prior affective reactions to apples might actually help the brain to predict that visual sensations refer to an apple in the first place. Not only does the brain draft the prediction of an apple into a conscious perception of an apple, but the resulting affective response may also become conscious, so that the apple is experienced as pleasant or perceivers experience themselves as reacting to the apple with a pleasant, mild, heightened state of arousal.
It is easy to see evidence for this logic by referring briefly to the neuronal workspace that realizes affective responses (presented in figure 2). The centrepiece of this circuitry is the orbitofrontal cortex (OFC). In this paper, we use the acronym ‘OFC’ to refer to the entire orbital sector of the prefrontal cortex. The OFC is a heteromodal association area that integrates sensory input from the world and the body (i.e. from extra- and intrapersonal space) to create a contextually sensitive, multimodal representation of the world and its value to the person at a particular moment in time (Barbas 1993, 2000; Carmichael & Price 1995; McDonald 1998; Cavada et al. 2000; Mesulam 2000; Ghashghaei & Barbas 2002). The OFC plays a role in representing reward and threat (e.g. Kringelbach & Rolls 2004), as well as hedonic experience (Kringelbach 2005; Wager et al. 2008), but it also plays a role in processing olfactory, auditory and visual information (see fig. 6 in Kringelbach 2005; also see Price 2007). The OFC's ongoing integration of sensory information from the external world with that from the body indicates that conscious percepts are indeed intrinsically infused with affective value, so that the affective salience or significance of an object is not computed after the fact. As it turns out, the OFC plays a crucial role in forming the predictions that support object perception. This suggests the hypothesis that the predictions generated during object perception carry affective value as a necessary and normal part of visual experience.
4. Evidence for affective predictions
To formulate the affective prediction hypothesis, it is important to consider a more general aspect about the way the brain seems to predict. There is accumulating evidence that during object perception, the brain quickly makes an initial prediction about the ‘gist’ of the scene or object to which visual sensations refer (Schyns & Oliva 1994; Bar et al. 2001, 2006; Oliva & Torralba 2001; Torralba & Oliva 2003). Like a Dutch artist from the sixteenth or seventeenth century, the brain uses low spatial frequency visual information available from the object in context to produce a rough sketch, and then begins to fill in the details using information from memory (for a review, see Bar 2004, 2007, 2009). Effectively, the brain is performing a basic-level categorization that serves as a gist-level prediction about the class to which the object belongs.
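The role of low spatial frequency information in gist extraction can be caricatured computationally. The sketch below is a toy illustration, not a model from the paper: the synthetic scene, the cutoff value and the function name are our own assumptions. It low-pass filters a noisy image and shows that the coarse layout (where the ‘object’ is) survives even though the fine detail is discarded, much as the magnocellular pathway is thought to deliver a blurred but serviceable first pass.

```python
import numpy as np

def low_pass(image, cutoff=0.1):
    """Keep only spatial frequencies below `cutoff` (cycles per pixel).

    A crude analogue of the low spatial frequency input carried by the
    magnocellular pathway: the result preserves the coarse layout
    ('gist') of the image while discarding fine detail.
    """
    f = np.fft.fft2(image)
    fy = np.fft.fftfreq(image.shape[0])[:, None]
    fx = np.fft.fftfreq(image.shape[1])[None, :]
    mask = np.sqrt(fx**2 + fy**2) <= cutoff
    return np.real(np.fft.ifft2(f * mask))

# A toy 'scene': one bright blob (the object) plus fine-grained noise.
rng = np.random.default_rng(0)
y, x = np.mgrid[0:64, 0:64]
scene = np.exp(-((x - 20)**2 + (y - 40)**2) / 50.0)  # blob at row 40, col 20
noisy = scene + 0.3 * rng.standard_normal(scene.shape)

gist = low_pass(noisy, cutoff=0.1)

# The coarse sketch still peaks where the object is, even though most
# of the high-frequency detail (including the noise) is gone.
print(np.unravel_index(np.argmax(gist), gist.shape))
```

The same intuition applies to the gist-level prediction described above: a blurred image is often sufficient for basic-level categorization (‘some round fruit’) even when it cannot support subordinate-level identification (‘a Macintosh apple’).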
With back and forth between visual cortex and the areas of the prefrontal cortex (via the direct projections that connect them), a finer level of categorization is achieved until a precise representation of the object is finally constructed. Like an artist who successively creates a representation of objects by applying smaller and smaller pieces of paint to represent the light of different colours and intensities, the brain gradually adds high spatial frequency information until a specific object is consciously seen.
As the brain generates its initial prediction about an object, it uses information from the OFC, supporting the hypothesis that affect is an ingredient of prediction. Studies that precisely measure the timing of neuronal activity indicate that information about an object is instantly propagated to the front of the brain. OFC activation has been observed between 80 and 130 ms after stimulus onset (when objects are presented in isolation; for a review, see Lamme & Roelfsema 2000; Bullier 2001; also see Thorpe et al. 1983; Bar et al. 2006; Rudrauf et al. 2008). Two studies indicate that this early activity is driven by low spatial frequency and magnocellular visual input characteristic of early-stage prediction (Bar et al. 2006; Kveraga et al. 2007a,b). This propagation of neuronal firing is sometimes called a ‘feed-forward sweep’ (Lamme & Roelfsema 2000), where sensory information from the world is projected rapidly from the back to the front of the brain after an image is presented to the visual system (Bar 2003, 2007; Bar et al. 2006). Many of these same studies show another wave of OFC activity between 200 and 450 ms, which might represent the refinement of initial affective predictions.
Similar findings are reported in the face perception literature. Early event-related potentials (ERPs) over frontal regions begin at approximately 80 ms, but are typically observed between 120 and 180 ms after stimulus onset (depending on whether the face is presented foveally or parafoveally). These early components reflect the categorization of the face as a face (versus a non-face), as generally affective (neutral versus valenced), as valenced (e.g. happy versus sad) or as portraying some degree of arousal (for reviews, see Eimer & Holmes 2007; Palermo & Rhodes 2007; Vuilleumier & Pourtois 2007). Later ERP components, between 200 and 350 ms, correspond to the conscious perception of fear, disgust, sadness and anger (for a review, see Barrett et al. 2007b).
5. The circuitry for seeing with feeling
Neuroanatomical evidence provides the strongest support for the notion that affect informs visual perception and allows the affective prediction hypothesis to be further specified. The two functionally related circuits within the OFC (figure 3; for reviews, see Barbas & Pandya 1989; Carmichael & Price 1996; Öngür & Price 2000; Öngür et al. 2003) are differentially connected to the dorsal ‘where is it’ visual stream and the ventral ‘what is it’ stream (figure 4), suggesting two different roles for affect during object perception.
In the next section, we describe how medial parts of the OFC are connected to the dorsal ‘where’ visual stream, and help to produce the brain's gist-level prediction by providing initial affective information about what an object might mean for a person's well-being. With gist-level visual information about an object, the medial OFC initiates the internal bodily changes that are needed to guide subsequent actions on that object in context. The ability to reach for a round object and pick it up for a bite depends on the prediction that it is an apple and that it will enhance one's well-being in the immediate future because it has done so in the past.
While the medial OFC is directing the body to prepare a physical response (or, what might be called ‘crafting an initial affective response’), the lateral parts of the OFC are integrating the sensory feedback from this bodily state with cues from the five external senses. Based on the anatomical connections that we review in §7, we hypothesize that the resulting multimodal representation helps to create a unified experience of specific objects in context. The ability to consciously see a particular apple not only requires integration of information from the five senses, but also requires that this information is integrated with the ‘sixth sense’ of affect (e.g., that such apples are delicious).
6. Basic-level affective predictions in object perception
The medial portions of the OFC guide autonomic, endocrine and behavioural responses to an object (Barbas & De Olmos 1990; Carmichael & Price 1995, 1996; Öngür et al. 1998; Rempel-Clower & Barbas 1998; Ghashghaei & Barbas 2002; Barbas et al. 2003; Price 2007). The medial OFC has strong reciprocal connections to the lateral parietal areas (MT and MST) within the dorsal ‘where is it’ visual stream (Barbas 1988, 1995; Carmichael & Price 1995; Cavada et al. 2000; Kondo et al. 2003; figure 4). The dorsal stream carries achromatic visual information of low spatial and fast temporal resolution through posterior parietal cortex, processing spatial information and visual motion, and providing the basis for spatial localization (Ungerleider & Mishkin 1982) and visually guided action (Goodale & Milner 1992).
Through largely magnocellular pathways, the medial OFC receives the same low spatial frequency visual information (Kveraga et al. 2007a,b), devoid of specific visual detail, that is used to create a basic-level category representation about the object's identity (Bar 2003, 2007). Through strong projections to hypothalamic, midbrain, brainstem and spinal column control centres, the medial OFC uses this low spatial frequency visual information to modify the perceiver's bodily state to re-create the affective context in which the object was experienced in the past (to allow subsequent behaviour in the immediate future). Based on its neuroanatomical connections to the lateral parietal cortex, we hypothesize that the OFC's representation of these autonomic and endocrine changes is relayed back to the dorsal ‘where is it’ stream as an initial estimate of the affective value and motivational relevance. With bidirectional processing between the medial OFC and the lateral parietal cortex, a person's physiological and behavioural response is coordinated with the information about the spatial location of the object. This prediction is part of the brain's preparation to respond to an object (based on this gist-level prediction) even before the object is consciously perceived.
There is some additional neuroimaging evidence to support our hypothesis that affective changes are part of a basic gist prediction during object perception. Areas in the medial OFC, including the ventromedial prefrontal cortex (vmPFC) and the portion of the rostral anterior cingulate cortex (ACC) beneath the corpus callosum, typically show increased activity during processing of contextual associations, where an object triggers cortical representations of other objects that have predicted relevance in a particular situation (Bar & Aminoff 2003; Bar 2004). For example, a picture of a traffic light activates visual representations of other objects that typically share a ‘street’ context, such as cars, pedestrians and so on. These findings suggest that an object has the capacity to reinstate the context with which it has been associated in prior experience. Given that the vmPFC and rostral ACC project directly to autonomic and endocrine output centres in the hypothalamus, midbrain, brainstem and spinal cord, it is likely that this reinstatement includes reconstituting the internal affective context that is associated with past exposures to the object. The internal world of the body may be one element in the ‘context frame’ that facilitates object recognition (for a discussion of context frames, see Bar 2004).
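To make the idea of a context frame concrete, one might caricature it as an associative lookup in which a glimpsed object reinstates its typical context, its usual companions and, on the affective prediction hypothesis, an internal bodily state as well. The sketch below is purely illustrative: the entries, the valence/arousal numbers and the function name are invented for the example and are not drawn from the paper.

```python
# A deliberately toy caricature of a 'context frame' (cf. Bar 2004):
# recognizing one object reinstates representations of other objects
# that share its typical context and, on the affective prediction
# hypothesis, an associated internal bodily state as well.
# All entries below are illustrative assumptions.
CONTEXT_FRAMES = {
    "traffic light": {
        "context": "street",
        "associated_objects": ["cars", "pedestrians", "crosswalk"],
        "affective_state": {"valence": -0.1, "arousal": 0.6},  # mild vigilance
    },
    "apple": {
        "context": "kitchen",
        "associated_objects": ["bowl", "counter", "knife"],
        "affective_state": {"valence": 0.7, "arousal": 0.3},  # pleasant, calm
    },
}

def reinstate(cue):
    """Return the context, co-occurring objects and bodily state that a
    glimpsed object would reinstate, or None for unfamiliar objects."""
    # No stored experience means no prediction: the perceiver must
    # guess deliberately, as Mr May did after his surgery.
    return CONTEXT_FRAMES.get(cue)

frame = reinstate("traffic light")
print(frame["context"], frame["associated_objects"])
```

In this caricature, the `affective_state` entry plays the role that the vmPFC and rostral ACC are hypothesized to play: re-creating the internal bodily context alongside the external one.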
7. Affective predictions in visual consciousness
The lateral portions of the OFC receive information about the nuanced bodily changes that occur as the result of gist-level affective predictions (from the vmPFC and anterior insula). The lateral OFC then integrates this bodily information with sensory information from the world to establish an experience-dependent representation of an object in context. Many neurons within the lateral OFC are multimodal and respond to a variety of different sensory inputs (Kringelbach & Rolls 2004). The lateral OFC has robust reciprocal connections to the inferior temporal areas (TEO, TE and temporal pole) of the ventral ‘what’ visual stream (figure 4; Barbas 1988, 1995; Carmichael & Price 1995; Cavada et al. 2000; Rolls & Deco 2002; Kondo et al. 2003). The ventral stream carries higher resolution visual details (including colour) through inferior temporal cortex that give rise to the experience of seeing.
Through largely parvocellular pathways, the lateral OFC receives this high spatial frequency visual information full of rich detail about the visual features of objects used to create a specific representation of an object. The internal sensory information received from the anterior insula is itself important for the conscious experience of affect (Craig 2002, 2009). The resulting multimodal representation then influences processing in the ventral ‘what’ stream, and with additional back and forth a specific, polysensory, contextually relevant representation of the object is generated. This conscious percept includes the affective value of the object. Sometimes, this value is represented as a property of the object, and other times it is represented as a person's reaction to that object.
8. Coordinating affective predictions in the brain
On the basis of neuroanatomical evidence, we have, thus far, proposed that affect plays two related roles in object perception. The medial OFC estimates the value of gist-level representations. A small, round object resting in a bowl on a kitchen counter might be an apple, and might be associated with an unpleasant affective response if the perceiver does not enjoy the taste of apples, a pleasant affective response if he or she is an apple lover and is hungry, or even no real affective change in an apple lover who is already satiated. The medial OFC not only realizes the affective significance of the apple but also prepares the perceiver to act: to turn away from the apple, to pick it up and bite, or to ignore it, respectively. The lateral OFC integrates sensory information from this bodily context with information from other sensory modalities, as well as more detailed visual information, producing the visual experience of a specific apple, ball or balloon. These two aspects of affective prediction do not occur in stages per se, but there might be slight differences in the time at which the two are computed.
Autonomic, hormonal or muscular changes in the body that are generated as part of a gist-level prediction via the medial OFC might be initiated before, and incorporated into, the multimodal representation of the world that is constructed in the lateral OFC. Visual information reaches the medial OFC more quickly owing to a ‘magnocellular advantage’ in visual processing (a term coined by Laycock et al. 2007). Magnocellular neurons projecting from the lateral geniculate nucleus (in the thalamus) rapidly conduct low spatial frequency visual information to V1 and the dorsal ‘where’ stream areas (compared with the parvocellular neurons that carry highly specific and detailed visual information to V1 and to the ventral ‘what’ stream). In humans, magnocellular neurons in V1 fire from 25 ms (Klistorner et al. 1997) to 40 ms (Paulus et al. 1999) earlier than parvocellular neurons in V1. Even more strikingly, under certain conditions, some neurons within the dorsal stream that receive input directly from the lateral geniculate nucleus (e.g. V5/MT; Sincich et al. 2004) become active even before V1 (Ffytche et al. 1995; Buchner et al. 1997). As a result, low spatial frequency visual information about an object reaches the medial OFC before high spatial frequency visual information reaches the lateral OFC. Consistent with this view, Fox & Simpson (2002) found that neurons in the prefrontal cortex became active approximately 10 ms after neurons in the dorsal ‘where’ stream, but coincident with activation in the ventral ‘what’ stream.
A magnocellular advantage extending to the medial OFC would help to resolve the debate over how the brain processes the affective value of objects and faces when they are unattended or presented outside of visual awareness (either because of experimental procedures or brain damage). Some researchers argue for a subcortical route by which low spatial frequency visual information about objects and faces can bypass V1, reaching the amygdala to represent affective value in the body and behaviour (e.g. LeDoux 1996, 2000; Weiskrantz 1996; Morris et al. 1998; Catani et al. 2003), whereas others argue that such a circuit is not functional in primates (Rolls 2000; Pessoa & Ungerleider 2004). Either way, affective predictions do not necessarily require a purely subcortical route. Nor is a subcortical route necessary to explain how objects presented outside of visual awareness influence the body and behaviour, how blindsighted patients can respond to the affective tone of a stimulus despite damage to V1, or why patients with amygdala damage have deficits in face perception. And because OFC lesions have been linked to memory deficits (for a discussion, see Frey & Petrides 2000), the absence of a magnocellular advantage may also help to explain why people suffering from agnosia can experience deficits in affective responses to visual stimulation but not to other sensory stimuli (Bauer 1982; Damasio et al. 1982; Habib 1986; Sierra et al. 2002). The amygdala's importance in object and face processing (for a review, see Vuilleumier & Pourtois 2007) may reflect its participation in affective predictions as much as (if not more than) its ability to work around early cortical processing in perception. These findings also suggest that the enhanced activity observed in the fusiform gyrus in response to unattended emotional faces (e.g. Vuilleumier et al. 2001) may be influenced, in part, by the OFC (which is strongly connected to the amygdala) rather than by the amygdala per se.
That being said, there is other evidence to suggest that both components of affective prediction happen more or less simultaneously. There is something like a parvocellular advantage in visual processing as well, in that visual information reaches the ventral ‘what’ stream very quickly and, like the input to the dorsal ‘where’ stream, arrives without the benefit of cortical processing in early visual cortex. The lateral geniculate nucleus not only projects directly to the dorsal stream but also appears to project directly to the anterior temporal lobe areas that are connected to the ventral stream (including the parahippocampal gyrus and amygdala; Catani et al. 2003). As a consequence, upon the presentation of a stimulus, some of the neurons in the anterior temporal cortex fire almost coincidentally with those in the occipital cortex (e.g. 47 versus 45 ms, respectively; Wilson et al. 1983). Without further study, however, it is difficult to say whether these are magno- or parvocellular connections.
Taken together, these findings indicate that it may be more appropriate to describe the affective predictions generated by the medial and lateral OFC as phases in a single affective prediction evolving over time, rather than as two separate ‘types’ of affective prediction (with one informing the other). This interpretation is supported by several observations: the medial and lateral OFC are strongly connected by intermediate areas (figure 3); the lateral OFC receives some low spatial frequency visual information and the medial OFC some high spatial frequency information; and magnocellular and parvocellular projections are not as strongly anatomically segregated as was first believed (for a review, see Laycock et al. 2007). Furthermore, there are strong connections throughout the dorsal ‘where’ and ventral ‘what’ streams at all levels of processing (Merigan & Maunsell 1993; Chen et al. 2007). Finally, the OFC has widespread connections to a variety of thalamic nuclei that receive highly processed visual input and therefore cannot be treated as solely bottom-up structures in visual processing.¹ As a result of these interconnections, the timing differences between the medial and lateral OFC affective predictions will be small and perhaps difficult to measure with current technology, even if they prove to be functionally significant in the time course of object perception.
A tremendous amount of research has now established that object recognition is a complex process that relies on many different sources of information from the world (e.g. contrast, colour, texture, low spatial frequency cues). In this paper, we suggest that object recognition uses another source of information: sensory cues from the body that represent the object's value in a particular context. We have laid the foundation for the hypothesis that people do not wait to evaluate an object for its personal significance until after they know what the object is. Rather, an affective reaction is one component of the prediction that helps a person see the object in the first place. Specifically, we hypothesize that very shortly after being exposed to objects, the brain predicts their value for the person's well-being based on prior experiences with those objects, and these affective representations shape the person's visual experience and guide action in the immediate future. When the brain effortlessly guesses an object's identity, that guess is partially based on how the person feels.
Our ideas about affective prediction suggest that people do not come to know the world exclusively through their senses; rather, their affective states influence the processing of sensory stimulation from the very moment an object is encountered. These ideas also suggest the intriguing possibility that exposure to visual sensations alone is not sufficient for visual experience. And even more exciting, plasticity in visual cortex areas might require, at least to some extent, the formation of connections between visual processing areas in the back of the brain and affective circuitry in the front. This suggests that affective predictions may not be produced by feedback from the OFC alone. As unlikely as it may seem, affective predictions might also influence plasticity in the visual system so that visual processing is changed from the bottom-up.
This, of course, brings us back to Mr May. Even several years after his surgery, Mr May's brain did not have a sufficiently complex and nuanced cache of multimodal representations involving visual sensations to allow him to easily predict the meaning of novel input from the visual world around him. Said a different way, his paucity of knowledge about the visual world forced him to think through every guess. Our hypothesis is that the guessing became easier, and more automatic, as Mr May's visual sensations took on affective value. He went from having what the philosopher Humphrey (2006) called ‘affectless vision’ (p. 67) to seeing with feeling.
We have proposed that a person's affective state has a top-down influence in normal object perception. Specifically, we have proposed that the medial OFC participates in an initial phase of affective prediction (‘what is the relevance of this class of objects for me’), whereas the lateral OFC provides more subordinate-level and contextually relevant affective prediction (‘what is the relevance of this particular object in this particular context for me at this particular moment in time’). If this view is correct, then personal relevance and salience are not computed after an object is already identified, but may be part of object perception itself.
Deep thanks to Michael May for sharing his thoughts and experiences. We also thank Eliza Bliss-Moreau, Krysal Yu and Jasmine Boshyan who helped with the preparation of figures. Thanks to Daniel Gilbert and the members of the Barrett and Bar laboratories who made helpful comments on the previous drafts of this manuscript. Preparation of this paper was supported by NIH grant R01NS050615 to M.B., and a National Institutes of Health Director's Pioneer Award (DP1OD003312), grants from the National Institute of Aging (AG030311) and the National Science Foundation (BCS 0721260; BCS 0527440) and a contract with the Army Research Institute (W91WAW), as well as by a Cattell Award and a fellowship from the American Philosophical Society to L.F.B.
One contribution of 18 to a Theme Issue ‘Predictions in the brain: using our past to prepare for the future’.
¹ For example, the midbrain's superior colliculus and thalamic nuclei such as the pulvinar and the mediodorsal nucleus receive cortically processed visual input from V1, the ventral visual stream (area IT) and the sensory integration network in the OFC (Abramson & Chalupa 1985; Casanova 1993; Webster et al. 1993, 1995).
© 2009 The Royal Society