The Automaticity of Emotion Recognition

Jessica L. Tracy
University of British Columbia

Richard W. Robins
University of California, Davis

Evolutionary accounts of emotion typically assume that humans evolved to quickly and efficiently recognize emotion expressions because these expressions convey fitness-enhancing messages. The present research tested this assumption in 2 studies. Specifically, the authors examined (a) how quickly perceivers could recognize expressions of anger, contempt, disgust, embarrassment, fear, happiness, pride, sadness, shame, and surprise; (b) whether accuracy is improved when perceivers deliberate about each expression’s meaning (vs. respond as quickly as possible); and (c) whether accurate recognition can occur under cognitive load. Across both studies, perceivers quickly and efficiently (i.e., under cognitive load) recognized most emotion expressions, including the self-conscious emotions of pride, embarrassment, and shame. Deliberation improved accuracy in some cases, but these improvements were relatively small. Discussion focuses on the implications of these findings for the cognitive processes underlying emotion recognition.

Keywords: emotion recognition, nonverbal expression, efficiency, automaticity

Building on Darwin’s (1872) seminal work on the nonverbal expression of emotion, researchers have argued that emotion expressions evolved, in part, to communicate needs that facilitate survival and reproduction.
Supporting this account, a large body of research suggests that each of the so-called “basic” emotions (anger, disgust, fear, happiness, sadness, and surprise), as well as several more cognitively complex emotions (contempt, embarrassment, pride, and shame), are associated with distinct, cross-culturally recognized nonverbal expressions (Ekman, 2003; Izard, 1971; Haidt & Keltner, 1999; Tracy & Robins, 2007a).[1]

Most evolutionary accounts of emotion expressions assume that the ability to recognize each expression is also an evolutionary adaptation (Öhman, 2000). Logically, if expressions send fitness-enhancing messages, observers must be equipped with the cognitive capacity to accurately perceive these signals and achieve conscious awareness of the emotion conveyed. Such knowledge would assist observers in obtaining a full understanding of the situation and mentally preparing a flexible and appropriate response (Scherer, 1994). Furthermore, observers should be able to accurately recognize emotion expressions under the conditions in which they are typically displayed: briefly and with considerable surrounding noise and distraction. Thus, the emotion recognition process should be quick, and accuracy should be independent of an observer’s level of attentional focus or available cognitive resources.
Given that expressions are typically displayed in everyday interactions in which individuals must attend to other elements of the environment, to be adaptive in real-world situations emotion recognition should be an efficient process; that is, accurate even when observers’ cognitive resources are allocated elsewhere.[2]

Yet, despite the large body of research on the nonverbal expressions associated with each emotion, we are aware of no previous study that has specifically tested whether recognition of each distinct emotion can occur quickly, efficiently (i.e., with minimal cognitive resources), and without conscious deliberation, or whether the ability to discriminate among similarly valenced emotions becomes impaired when individuals are forced to recognize emotions quickly and under cognitive load.

Several lines of research are, however, consistent with the possibility that emotion recognition is an automatic process. First, a number of studies have found that subliminally displayed emotion expressions can influence observers’ behaviors without their awareness. These expressions have been shown to generate automatic facial mimicry, interfere with the perception of incongruent emotion words, and influence subsequent behavioral choices and judgments (Dimberg & Thunberg, 1998; Dimberg, Thunberg, & Elmehed, 2000; Niedenthal, 1990; Stenberg, Wiking, & Dahl, 1998; Winkielman, Berridge, & Wilbarger, 2005).

[1] Contempt also has been classified as a basic emotion (Ekman, 1992), but its expression is recognized at lower levels than all other emotions (Elfenbein & Ambady, 2002) and is typically not recognized at above-chance levels when a free-response format is used (Russell, 1991; Wagner, 2000). Furthermore, most researchers would agree that contempt is unlikely to meet all nine of Ekman’s (1992) criteria for basic emotions; for example, there is no direct evidence that any animal other than humans experiences contempt.

[2] Other evolutionary accounts hold that emotion expressions did not evolve to communicate senders’ emotional states, but rather that expressions are intentionally displayed by senders for the purpose of social communication (e.g., Fridlund, 1994; Parkinson, 2005; Russell, Bachorowski, & Fernandez-Dols, 2003; see Kappas, 1997, for a review of this issue). However, given that automatic emotion recognition could occur either because recognition is an innate ability (as the Ekman, 1992, view would suggest) or because recognition is learned and automatized early in life (as the Fridlund view might suggest), the present research does not address these competing hypotheses.

Jessica L. Tracy, Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada; Richard W. Robins, Psychology Department, University of California, Davis. We thank the National Institute of Mental Health (MH2006) and the Social Sciences and Humanities Research Council of Canada (File #410-2006-1593) for supporting this research. Correspondence concerning this article should be addressed to Jessica L. Tracy, Department of Psychology, University of British Columbia, 2136 West Mall, Vancouver, British Columbia, Canada V6T 1Z4. E-mail: jltracy@psych.ubc.ca.

Emotion, 2008, Vol. 8, No. 1, 81–95. Copyright 2008 by the American Psychological Association. 1528-3542/08/$12.00 DOI: 10.1037/1528-3542.8.1.81
These findings suggest that at some level, observers can obtain knowledge of emotion expressions’ meaning without awareness of the expressions, and this knowledge is fairly specific to the content of the emotion portrayed; for example, subliminal presentation of negative expressions leads perceivers to subsequently judge a beverage less favorably (Winkielman et al., 2005). Second, distinct brain responses have been found to occur in response to distinct, subliminally presented expressions. For example, greater amygdala activation has been found in response to fearful versus happy expressions (Whalen et al., 1998).

Third, a handful of studies have examined the conscious recognition of briefly displayed (i.e., less than 8 s) emotion expressions and shown that observers can accurately identify several distinct emotions—including anger, disgust, fear, happiness, sadness, and surprise—when they are provided with unlimited time to respond (Ducci, 1981; Kirouac & Dore, 1983; McAndrew, 1986). Given that the expressions presented in these studies were not followed by a “mask” (i.e., presentation of another stimulus that corrodes any residue of the target expression in mental imagery), participants would have been able to use mental imagery to recognize the expressions after they were no longer displayed onscreen, making it difficult to know how quickly recognition occurred. In the few studies that used this methodology but also masked stimuli (Kirouac & Dore, 1984; Matsumoto & Ekman, 2004), participants accurately recognized expressions displayed at very brief latencies (maximum durations ranged from 50 ms to 200 ms).
However, in both studies participants were given an unlimited amount of time to respond, and responses were assessed in a nonautomatic fashion by asking participants to press one of six keys that represented each emotion (Kirouac & Dore, 1984), to choose the best option from a list of seven emotion words, or to rate the extent to which each expression conveyed each of seven emotions (Matsumoto & Ekman, 2004). To assess the speed of the recognition process, studies must limit both the display and the response time, and participants must be able to respond in an automatic fashion (i.e., press keys without needing to think through what each key represents). In the few studies that have met these requirements, participants have been found to quickly and accurately distinguish between emotions that differ in valence (e.g., anger vs. happiness; Kestenbaum & Nelson, 1992; Stanners, Byrd, & Gabriel, 1985), but these researchers did not examine whether participants could automatically discriminate among similarly valenced emotions (e.g., anger vs. sadness).

To summarize, previous studies either (a) contrasted only positive versus negative emotions or (b) did not restrict the time allotted for participants to respond, thereby allowing participants to make use of mental imagery or respond in a thoughtful manner. Thus, no previous study has tested how quickly observers can recognize and distinguish among similarly valenced emotions when expressions are shown only briefly and participants are forced to respond quickly. It remains unclear whether, for example, observers can quickly determine that an ally or foe is showing anger rather than fear. From an evolutionary perspective, this question is critical, given that these two emotions send very different messages about how the perceiver should behave (i.e., prepare to fight vs. flee).
In addition, no previous studies have examined how quickly the cognitively complex emotions, such as contempt, embarrassment, pride, and shame, can be recognized. Finally, no previous studies have addressed the question of whether accurate recognition can occur efficiently; that is, whether recognition can occur when cognitive resources are depleted.

The Present Research

We tested whether a broad range of emotion expressions can be recognized and discriminated from each other quickly, efficiently, and without deliberation, as would be predicted by an evolutionary account. We did so using expressions that have been verified to represent each emotion according to the Emotion-Facial Action Coding Scheme (EM-FACS; Ekman & Rosenberg, 1997; Ekman & Friesen, 1978) and a research design that restricts both the latency of the stimulus presentation and participants’ responses. The present research also tests whether emotion recognition can occur under cognitive load; that is, whether it is an efficient process.

In Study 1, we examined emotion recognition for the small set of basic emotions found by Ekman and colleagues (Ekman & Friesen, 1971; Ekman, Sorenson, & Friesen, 1969) to be universally recognized (anger, disgust, fear, happiness, sadness, and surprise), as well as for two more cognitively complex emotions: contempt and pride, which also have cross-culturally recognized nonverbal expressions (Ekman & Friesen, 1986; Tracy & Robins, 2007a).
In Study 2, we replicated Study 1 but included two additional cognitively complex emotions with recognizable expressions: embarrassment and shame (Izard, 1971; Keltner, 1995). One novel feature of the present research is that three of the complex emotions we examined—embarrassment, pride, and shame—belong to the unique class of “self-conscious” emotions. These emotions emerge later in the course of development than the basic emotions, most likely because their experience requires a higher level of cognitive capacities, including the ability to self-reflect and to understand others’ mental states (Izard, Ackerman, & Schultz, 1999; Lagattuta & Thompson, 2007; Tracy & Robins, 2007b). All three of these emotions have been associated with cross-culturally recognized expressions (Haidt & Keltner, 1999; Izard, 1971; Tracy & Robins, 2007a), but the evidence for their universality is, in some cases, not as strong as for the basic emotions. The present research is the first to examine recognition of all emotion expressions known to generalize across cultures (at some level) under speeded and distracting conditions.

Study 1

Study 1 tested whether accurate recognition of anger, contempt, disgust, fear, happiness, pride, sadness, and surprise can occur quickly, efficiently, and without deliberation. Participants were asked to identify and discriminate among these expressions under one of three conditions: (a) a fast condition, in which participants viewed and responded to each expression as quickly as possible and were forced to respond within a restricted time period; (b) a deliberated condition, in which participants were encouraged to take their time and deliberate about their response to each expression; and (c) a cognitive load condition, in which participants were distracted while viewing and responding to each expression.
By restricting the exposure and response times in Conditions 1 and 3, we were able to assess how quickly individuals can perceive and consciously identify each distinct emotion. By including a deliberated condition, we were able to test whether recognition rates are improved when participants directly allocate their cognitive resources to the task compared with when they are prevented from doing so.

Method

Participants and procedure. One hundred one undergraduate students (65% women) participated in exchange for course credit. Participants were assigned to one of three conditions: fast (n = 26), deliberated (n = 48), or cognitive load (n = 27).[3] In all conditions, participants viewed photos of emotion expressions displayed on a 17-in. (43.18-cm) computer monitor approximately 12 in. (30.48 cm) from their faces. They responded to each expression by pressing one of two keys, J or F, representing “yes” and “no,” and were instructed to keep their index fingers on the relevant keys at all times. The J key was clearly marked with a green sticker and the F key was clearly marked with a red sticker, so that participants who moved their hands during the experiment could easily and quickly replace their fingers.

Expressions of the eight emotions—anger, contempt, disgust, fear, happiness, pride, sadness, and surprise—were shown in eight blocks of 22 photos each. Each block was assigned a different target emotion, and participants determined whether each expression in the block did or did not represent the target emotion for that block (e.g., anger in the anger block). Before viewing each block, participants were informed of that block’s target emotion. The order of photos within each block, and the order of blocks, were randomized across participants. In the fast condition, participants were instructed,

As you view each photo, decide as quickly as you can whether or not the target emotion is being expressed.
Make sure to respond quickly. A good way to do this is to use your intuition—just go with your first impression.

The question “Is this anger [contempt, disgust, fear, happiness, pride, sadness, surprise]?” was also displayed in a large (32-pt) font above each photo on the screen to prompt participants and remind them of the target emotion. However, participants did not need to read the target question to respond because it was identical throughout the block. Each photo appeared on screen for a maximum of 1,000 ms; it disappeared as soon as participants responded and was replaced by the next photo. If participants did not respond within 1,000 ms, no response was recorded and a message appeared on screen telling participants to respond more quickly. We chose 1,000 ms as the stimulus duration and maximum response time because pilot testing demonstrated that this time frame forced participants to respond as quickly as possible but did not cause them to disengage from the task out of frustration with its difficulty.

In the deliberated condition, participants were instructed to “think carefully about whether or not the target emotion is being shown. You will have plenty of time to really think through your decision.” Each expression remained on screen for 8,000 ms, and participants were prevented from responding during this period; pilot testing demonstrated that 8,000 ms was more than enough time for participants to generate a thoughtful response (when instructed to take as long as they want to respond, virtually all participants respond within a few seconds). After 8,000 ms, the photo was replaced with the question “Was that anger [contempt, disgust, fear, happiness, pride, sadness, surprise]?” displayed in a large (32-pt) font in the center of the screen.
Participants completed the trial by pressing the “yes” or “no” key at any time.

In the cognitive load condition, participants were given the same instructions as in the fast condition but were also told,

Before you see each set of photos, you will see a number flash on the screen. We want to test whether you can remember this number throughout the whole task. When you see the number, say it aloud twice. This will help you remember it. After each set of photos is complete, you will be asked to recall the number.

Expressions and target emotion prompts were displayed in the same manner as in the fast condition, but before the start of each block, participants viewed a seven-digit number on screen, and experimenters verified that they read the number aloud twice. Participants were asked to recall and enter the number at the end of each block.[4] Similar dual-task cognitive load manipulations (e.g., asking participants to rehearse a six-digit or eight-digit number while viewing a single stimulus or a set of stimuli; Bargh & Tota, 1988; Gilbert & Osbourne, 1989) have been used effectively in studies examining the automaticity of social judgments, such as perceptions of others; of note, these studies found that the number-memory task distracted attention from the primary judgment task even though the two tasks were unrelated (Bargh & Chartrand, 2000). Each expression remained on screen for 1,500 ms, which was the total time allotted to view each photo and respond. The time duration was increased for this condition from the fast condition because pilot testing demonstrated that participants became frustrated when given only 1,000 ms and asked to remember a seven-digit number.
We felt it was important that participants complete the study with minimal emotional reaction to the task, given that this could influence their ability to accurately recognize emotions (e.g., participants in a frustrated or angry mood may show higher levels of recognition for anger expressions and lower levels of recognition for happy expressions; Blairy, Herrera, & Hess, 1999). Thus, we took precautions to ensure that the task was not overly taxing for participants; however, this meant increasing the time duration in the cognitive load condition from the no-load fast condition, thereby making it more difficult to interpret differences in response latencies between the two conditions. For this reason, we do not test whether response latencies differ between the fast and the load conditions, but do compare accuracy rates between the two conditions. However, given that any observed differences could reflect either the difference in load or the difference in the amount of exposure–response time, we focus primarily on comparisons between both of these conditions and the deliberated condition—comparisons that are most relevant to the goals of the research.

[3] The deliberated condition was considerably longer than the other two conditions because of the longer length of time each stimulus was presented, so we divided it into two separate conditions of equal length. This division meant that each participant in the deliberated condition viewed only half of the blocks of expressions (four blocks out of eight, with each block including all expressions), but their total time participating was much closer to the total time of participants in the fast and cognitive load conditions. One of the subconditions of the deliberated condition included 23 participants and showed blocks with target emotions of contempt, happiness, pride, and surprise; the other subcondition included 25 participants and blocks with target emotions of anger, disgust, fear, and sadness. In Study 2, participants in the first subcondition also viewed a fifth block with the target emotion of embarrassment, and participants in the second subcondition also viewed a fifth block with the target emotion of shame. In all cases, order of the blocks was randomized between participants, as was the presentation of expressions within each block. Results are combined across all participants in this condition.

[4] Recognition rates and mean response latencies in the cognitive load condition did not differ when participants who failed to correctly recall the target number in each emotion block (i.e., made more than three errors) were removed from that block. This held across both studies.

Stimuli. Each block consisted of 22 photos: 8 showing the target emotion expression for that block (e.g., anger if participants were asked in that block whether each expression represented anger) and 14 showing each of the seven other emotion expressions twice each (displayed once by each of two targets). All photos were taken from the waist up. Two targets (a male and a female Caucasian) wore identical white shirts and posed in front of a plain blue background. Posing instructions for anger, contempt, disgust, fear, happiness (i.e., the Duchenne smile that includes Action Units 6 and 12), sadness, and surprise were based on the directed facial action task (Ekman, Levenson, & Friesen, 1983). Erika Rosenberg, a leading expert in the Facial Action Coding System (FACS; Ekman & Friesen, 1978) and certified FACS coder, verified that each expression was correctly posed and signified the correct emotion, based on the EM-FACS (Ekman & Rosenberg, 1997). Pride expressions were posed on the basis of previous research (Tracy & Robins, 2004), and Rosenberg verified that these expressions included all facial and body actions relevant to pride.
All expressions were later FACS coded by a different certified FACS coder; Table 1 shows all action units that were posed for each expression.

For each emotion, an alternate expression that included the body and the face was also posed. These alternate expressions were theoretically derived on the basis of consensual ideas about each emotion, but in most cases have not been empirically demonstrated to generate reliable recognition for each emotion. Specifically, the additional movements for each emotion were (a) hands in fists for anger, in preparation to fight; (b) head tilted slightly forward for disgust, to suggest that the individual might become sick; (c) hands raised to protect the body with shoulders pulled in and held rigid for fear; (d) head tilted slightly forward and shoulders slumped for sadness (Boone & Cunningham, 1998); (e) head tilted slightly back for contempt (Rosenberg & Ekman, 1995); (f) a less intense (i.e., non-Duchenne) smile for happiness that included raised lip corners but no movement of the orbicularis oculi muscles surrounding the eyes (and no body movement for either version of happiness); and (g) arms raised with palms outstretched for surprise.

The alternate expressions were included to decrease the number of times each photo was repeated without making the blocks unduly short in length. However, given our goal of assessing participants’ ability to quickly recognize each established basic emotion expression, results were analyzed only for the EM-FACS–verified, original versions of each expression. In the case of pride, there are two reliably recognized versions of the expression, both of which include the body (in one, arms are raised above the head with hands in fists, and in the other, arms are akimbo with hands on the hips). Thus, analyses for pride were based on the mean of recognition rates across both versions.
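As a concrete illustration, the block structure described above can be sketched as follows. This is a minimal sketch, not the authors' actual stimulus-presentation software; the photo and poser labels are placeholders for the real stimulus files.

```python
import random

# The eight Study 1 emotions
EMOTIONS = ["anger", "contempt", "disgust", "fear",
            "happiness", "pride", "sadness", "surprise"]

def build_block(target, seed=None):
    """Assemble one 22-photo yes/no block: 8 photos of the block's target
    emotion, plus each of the 7 nontarget emotions twice (once per poser),
    in an order randomized per participant."""
    rng = random.Random(seed)
    # 8 target photos (in the actual design these were 4 distinct photos,
    # each shown twice; the labels here are placeholders)
    block = [(target, f"target_photo_{i + 1}") for i in range(8)]
    # each nontarget emotion shown twice, once by each poser
    for emotion in EMOTIONS:
        if emotion != target:
            block.append((emotion, "female_poser"))
            block.append((emotion, "male_poser"))
    rng.shuffle(block)
    return block

block = build_block("anger", seed=2008)
print(len(block))  # 22
```

Within each block the participant answers the same yes/no question for all 22 photos, which is what allows responses to be made without reading the prompt on every trial.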
For the nontarget emotions in each block (e.g., the anger expressions in a disgust target block), one of the photos portrayed the alternate expression and one portrayed the standard expression; we varied which human target showed each version (alternate or standard) between blocks. For the target emotion in each block, each of four photos was repeated twice: the female target posing the standard version of the emotion (no body), the male target posing the standard version of the emotion (no body), and each target posing the alternate version. Only the former two expressions (shown twice) were included in analyses; mean hit rates for these four photos were operationalized as accuracy rates.

Table 1
Action Units Portrayed in Each Emotion Expression Posed, Based on the Emotion-Facial Action Coding Scheme (Ekman & Rosenberg, 1997)

Emotion expression | Action units (female)      | Action units (male)
Anger              | 4 7 17 24                  | 4 7 17 23
Disgust            | 4 7 9 16 19 25 26          | 4 7 9 16 19 25 26
Fear               | 1 2 4 5 15 16 21 25 27 58  | 1 2 4 5 25 26 58
Happiness          | 6 7 12 25                  | 6 7 12 25
Sadness            | 1 4 7 15 16 17 21          | 1 4 15
Surprise           | 1 2 5 25 27 38             | 1 2 5 25 26 38
Contempt           | 7 R10                      | 7 R10 25
Pride              | 12 53                      | 12 53
Shame              | 15 43 54                   | 43 54
Embarrassment      | 6 14 24 43 51 54           | 14 43 52 54

Results

How quick is emotion recognition? In the fast condition, the mean recognition rate across all emotions, 78%, was significantly greater than chance (i.e., 50%; p < .05), based on the binomial test. Mean recognition rates for each emotion ranged from 47% (contempt) to 88% (happiness), and all were significantly greater than chance (ps < .05), except for contempt (see Figure 1 for means).

To determine how quickly perceivers can accurately recognize each expression, we next examined mean response latencies for each emotion in the fast condition, for accurate responders only.[5] The mean response latency across all emotions was 602 ms (range = 544 ms for happiness to 669 ms for contempt; see Figure 2 for means for each emotion).
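The comparison against chance used throughout these analyses can be sketched with a stdlib-only exact binomial computation. This is a minimal illustration, not the authors' analysis code, and the trial counts below are hypothetical, chosen only to show the computation.

```python
from math import comb

def binomial_p_at_least(k, n, p=0.5):
    """Exact one-tailed probability of observing k or more successes
    in n independent trials with success probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# e.g., 20 correct yes/no judgments out of 26 (hypothetical counts):
# is a 77% recognition rate significantly better than guessing (50%)?
p_value = binomial_p_at_least(20, 26)
print(f"recognition rate = {20 / 26:.0%}, one-tailed p = {p_value:.4f}")
```

With these hypothetical counts the exact p is well below .05, which is the logic behind declaring a mean recognition rate "significantly greater than chance."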
Mean latencies for the happy and surprise expressions were significantly faster than for the fear, t(29) = 2.85 and t(33) = 2.38, and for the contempt, t(15) = 2.47 and t(17) = 2.16, ps < .05, expressions, but no other differences among emotions emerged.

To determine whether accurate recognition can occur even more quickly, we next examined recognition rates among subsamples of participants who responded within 600 ms for each expression. On the basis of binomial tests, all emotions except fear (66%, ns) and contempt (37%, ns) were recognized significantly better than chance (50%) within the 600-ms latency (overall M = 80%, p < .05; see Figure 3 for mean frequencies for each emotion). These results suggest that participants could discriminate among similarly valenced emotions (e.g., could determine that the anger expression was anger and not disgust) within 600 ms.

However, it is also possible that accuracy rates were high when participants responded quickly because they were simply pressing the “yes” key for any negative emotion expression when asked about any negative emotion target. To address this issue, we examined false alarm rates (i.e., the proportion of participants who responded “yes” to an expression that did not represent the target emotion) for all participants in the fast condition. Results showed that mean false alarm rates (for each expression, averaged across all possible misidentifications) were fairly low (overall M = 15%) and relatively similar across emotions (range = 10%–25%; see Table 2).
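The hit-rate and false-alarm-rate bookkeeping for one yes/no block can be sketched as below; the responses are hypothetical. Because hits are computed over target photos and false alarms over nontarget photos, the two rates come from disjoint sets of trials, which is why they are independent of one another (both could in principle be 100%).

```python
# Hypothetical responses for part of a block whose target emotion is "anger":
# each tuple is (emotion actually shown, participant's yes/no response).
trials = [
    ("anger", "yes"), ("anger", "yes"), ("anger", "no"), ("anger", "yes"),
    ("disgust", "no"), ("disgust", "yes"),  # a "yes" here is a false alarm
    ("fear", "no"), ("sadness", "no"),
]
target = "anger"

hits = sum(1 for emo, resp in trials if emo == target and resp == "yes")
targets = sum(1 for emo, _ in trials if emo == target)
fas = sum(1 for emo, resp in trials if emo != target and resp == "yes")
nontargets = sum(1 for emo, _ in trials if emo != target)

print(f"hit rate = {hits / targets:.0%}, "
      f"false alarm rate = {fas / nontargets:.0%}")  # 75%, 25%
```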
In no case was a mean false alarm rate significantly greater than chance (50%; p < .05), and in only one case was a specific false alarm rate for a specific expression significantly greater than chance (p < .05): One of the two pride expressions (with arms akimbo) was labeled as happiness by 81% of participants (a result that is not particularly surprising given that the pride expression includes a smile).[6] In general, however, the overall low level of false alarms, especially for the negative emotions, suggests that participants did not tend to mislabel each expression as the target emotion (which the design of the study would allow them to do while still maintaining high hit rates).

[5] In all conditions of both studies, before examining response latencies we trimmed the data such that responses made in less than 200 ms (fewer than 1% of all responses in both studies) were excluded and treated as missing data. Bargh and Chartrand (2000) have argued that responses made within this brief latency should be treated as meaningless error.

[6] Because of the study’s design, false alarm rates and recognition rates were independent. That is, a false alarm rate of 81% does not mean that only 19% of participants accurately identified the emotion (in fact, the false alarm rate and the recognition rate could both be 100%).

[7] Throughout the article, percentage increases refer to the number of percentage points a particular frequency increased or decreased between conditions. That is, these are not percentages of a previously mentioned frequency.

Figure 1. Mean recognition rates in the fast, deliberated, and cognitive load conditions: Study 1. Differences between the fast and cognitive load conditions are difficult to interpret because the maximum response time allotted was 1,000 ms in the fast condition and 1,500 ms in the cognitive load condition; these differing time frames could have promoted differences in recognition levels. Thus, the only significant differences presented here are those that emerged between the deliberated condition and each of the other two conditions. N = 101. *p < .05.

Does deliberation improve recognition? To test whether accuracy is improved when observers deliberate over an expression’s meaning, we compared recognition rates in the fast and deliberated conditions. If discriminating among emotions is a cognitively taxing process, it should be improved when participants are encouraged to focus on each expression and deliberate about the correct response. For several expressions, the recognition rates were higher in the deliberated relative to the fast condition: anger, t(47) = 2.40, 11% increase, p < .05; fear, t(47) = 3.60, 25% increase, p < .05; sadness, t(47) = 3.58, 18% increase, p < .05; and pride, t(49) = 2.42, 7% increase, p < .05 (see Figure 1 for means).[7] In contrast, for contempt, disgust, happiness, and surprise, there was no difference in accuracy between the two conditions, t(49) = 0.84, t(47) = 1.68, t(49) = 1.21, t(49) = 0.71, respectively, ns (see Figure 1).

We next examined false alarm rates in the deliberated condition. Most of the rates were fairly low (overall M = 11%; see Table 2), and in no case was a mean false alarm rate significantly greater than chance (50%; p < .05). In three cases, the false alarm rate for a specific emotion was significantly greater than chance (p < .05): Anger was labeled as contempt by 76% of participants, and the two pride expressions were labeled as happiness by 88% and 84% of participants, respectively.
Given that participants did not make these mistakes because they were responding quickly or under cognitive load, similar errors would likely be found in typical recognition studies using nonspeeded responding, if these studies permitted participants to label expressions as more than a single emotion.

To test whether deliberation influenced participants’ ability to correctly determine that a particular expression was not a particular target emotion (i.e., make correct rejections), we compared false alarm rates in the deliberated versus fast conditions. No differences emerged for anger, disgust, happiness, sadness, surprise, contempt, and pride, all ns (see Table 2 for means). Thus, participants who deliberated were no more likely than participants who responded quickly to correctly reject the suggestion that these expressions actually represented other emotions, despite the fact that deliberators were somewhat better at accurately recognizing several of them (anger, fear, sadness, and pride).

Figure 2. Mean response latencies in the fast and cognitive load conditions, for accurate responders only: Study 1. Differences between conditions are difficult to interpret because the maximum response time allotted was 1,000 ms in the fast condition and 1,500 ms in the cognitive load condition. N = 53.

Figure 3. Mean recognition rates for participants who responded within 600 ms in the fast condition: Study 1. All rates are significantly greater than chance, except fear and contempt. Mean N = 14 (range = 9–18).

Is emotion recognition efficient? To determine whether recognition is an efficient process, we examined recognition rates in the cognitive load condition.
Mean accuracy across emotions was 80%, significantly greater than chance (p < .05) based on the binomial test, and this held for every specific emotion except contempt (51%, ns; see Figure 1 for all means).

We next examined mean response latencies for each emotion for accurate responders only, to determine how quickly observers can accurately recognize each expression under cognitive load. The mean response latency across emotions was 762 ms. Mean latencies for happy expressions differed significantly from mean latencies for fear, t(40) = 2.03, p < .05, but no other differences emerged (see Figure 2).

To determine whether accurate recognition can occur even more quickly under cognitive load, we next examined recognition rates among subsamples of participants who responded to each expression within 600 ms. On the basis of binomial tests, anger (M = 100%), disgust (M = 96%), happiness (M = 90%), and pride (M = 90%) were recognized significantly better than chance (p < .05) within this brief latency. In contrast, contempt (M = 48%, ns), fear (M = 80%, ns), sadness (M = 81%, ns), and surprise (M = 64%, ns) were not recognized significantly better than chance within the 600-ms latency, in most cases because mean ns were too low (for these four expressions, only 4–8 individuals responded within 600 ms). These results suggest that participants could discriminate among some of the similarly valenced emotions (e.g., happiness vs. pride and anger vs. disgust) in less than 600 ms while under cognitive load, but not all.

We next compared accuracy rates in the cognitive load and fast conditions and found no difference for any emotion, ts(51) = 0.47, 0.41, 0.70, 1.45, 0.63, 0.36, 0.04, and 0.80, for anger, contempt, disgust, fear, happiness, pride, sadness, and surprise, respectively, all ns.
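The chance comparisons reported here rest on the exact binomial test: with a yes/no response format, a participant guessing at random is correct 50% of the time, so the question is how improbable the observed number of correct responses would be under that 50% baseline. The following is a minimal sketch of such a test in Python; the cell sizes and success counts are hypothetical, chosen only to illustrate why the small 600-ms subsamples (4–8 responders for some expressions) could not reach significance even at fairly high accuracy rates.

```python
from math import comb

def binomial_p_greater(successes: int, n: int, chance: float = 0.5) -> float:
    """One-tailed exact binomial test: the probability of observing at least
    `successes` correct responses out of `n` if true accuracy were at chance."""
    return sum(comb(n, k) * chance**k * (1 - chance) ** (n - k)
               for k in range(successes, n + 1))

# Hypothetical subsample: 13 of 14 responders correct (about 93% recognition).
print(binomial_p_greater(13, 14))  # ~.0009, significant at p < .05

# Hypothetical underpowered subsample: 5 of 8 correct (about 63% recognition).
print(binomial_p_greater(5, 8))    # ~.36, not significant despite the above-chance rate
```

With only a handful of responders, even a rate well above 50% cannot be distinguished from chance, consistent with the interpretation offered above for the low-n 600-ms subsamples.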
However, a comparison of accuracy rates in the cognitive load and deliberated conditions revealed that accuracy was higher for fear, t(48) = 2.45, 14% increase, p < .05; sadness, t(48) = 5.26, 18% increase, p < .05; and surprise, t(50) = 2.41, 14% increase, p < .05, in the deliberated condition, suggesting that the recognition of these emotions was somewhat impaired under cognitive load. In contrast, recognition rates for anger, contempt, disgust, happiness, and pride were no worse under cognitive load than when participants deliberated, t(40) = 1.88, t(50) = 0.32, t(40) = 0.45, t(50) = 1.58, and t(50) = 1.22, respectively, all ns.

We next examined false alarm rates in the cognitive load condition. Most of the rates were fairly low (overall M = 14%; see Table 2 for means for each expression), and in no case was a mean false alarm rate significantly greater than chance (p > .05). In two cases, the false alarm rates for specific emotions were significantly greater than chance (p < .05): Surprise was labeled as fear by 74% of participants, and the pride expression (with arms raised) was labeled as happiness by 70% of participants. A comparison of the overall false alarm rates between the cognitive load, fast, and deliberated conditions showed that in all cases, rates were comparable; no significant differences in mean rates emerged.

Discussion

The findings from Study 1 suggest that with the exception of contempt, emotion expressions can be accurately recognized when participants are forced to respond quickly and under cognitive load.
Several findings support this conclusion: Accuracy rates for anger, disgust, fear, happiness, pride, sadness, and surprise were significantly greater than chance in the fast and cognitive load conditions; mean response latencies for accurate responders were below 700 ms for all of these emotions and below 650 ms for all except fear; and all emotions except fear were recognized at better-than-chance levels by those participants who responded within 600 ms.

However, in several cases emotions were better recognized when participants deliberated; accuracy rates were higher for anger, fear, pride, and sadness in the deliberated versus fast condition and higher for fear, sadness, and surprise in the deliberated versus cognitive load condition. Nevertheless, with the exception of fear, these differences were not particularly large, and combined with the finding that each of these emotions can be recognized accurately under speeded and distracted conditions, they do not indicate that recognition of these emotions requires complex cognitive processes. Rather, such processes seem to improve participants’ ability to recognize these four emotions. For disgust and happiness, the absence of a difference between conditions reduces the likelihood that recognition is a cognitively taxing process.

The fact that a difference emerged for anger and pride but not surprise in the deliberated versus fast condition, whereas the opposite pattern emerged in the deliberated versus cognitive load condition, raises questions about the level of attention required to recognize anger, surprise, and pride (e.g., why can individuals recognize anger equally well under cognitive load as when deliberating, but not when responding quickly?).
In contrast, recognition of fear and sadness seems to be more unambiguously impaired when cognitive resources are depleted.

Table 2
Mean False Alarm Rates in the Fast, Deliberated, and Cognitive Load Conditions

                   Fast              Deliberated       Cognitive load
                   condition (%)     condition (%)     condition (%)
Emotion
expression         Study 1  Study 2  Study 1  Study 2  Study 1  Study 2
Anger                25       18       17       13       19       18
Disgust              20       16       11        7       18        8
Fear                 18       24       16       25       16       17
Happiness            10       15        9       10       11        6
Sadness              18       15        7       16        8       13
Surprise             16       15       11       11       16       10
Contempt             12       17        4       11       11       10
Pride                17       16       16       13       17        9
Shame                –        13       –        20       –        19
Embarrassment        –        14       –        19       –        12

Note. Mean false alarm rates did not differ significantly (p > .05) across conditions for any emotion.

Overall, false alarms were fairly low, but there were several exceptions. In particular, pride expressions were fairly frequently mislabeled as happiness, but the fact that this mistake occurred across conditions suggests that it was not a result of speeded or distracted processing. Furthermore, overall false alarm rates for the pride expression were markedly low; excluding happiness false alarms, pride was incorrectly labeled as other emotions only 6% of the time participants had the opportunity to do so, across conditions. Given that the pride expression includes the major feature of the happiness expression (the smile) and that pride is typically accompanied by positive affect, which laypeople often define as “happiness,” it is not surprising that some participants labeled the pride expression as happiness when given the opportunity to do so and not forced to choose between pride and happiness. Another false alarm that occurred in the deliberated condition, the mislabeling of anger as contempt, may also have resulted from participants assuming that angry targets felt contemptuous. However, this result is equally likely to be part of a larger problem with the contempt expression: It was not recognized better than chance in any condition, even when participants deliberated.
The final significant false alarm was for surprise, which was frequently mislabeled as fear under cognitive load. Although this mistake did not occur to such a large extent in the other two conditions, it was a prominent error (56% in the deliberated condition and 61% in the fast condition, both ns) and is consistent with previous research showing that individuals across cultures tend to mistake surprise for fear (e.g., Ekman et al., 1969). This false alarm could, in fact, represent an adaptive advantage, given the similarity of the two expressions (eyes wide) and the relative costs of making this mistake versus missing an actual fear expression.

Despite the evolutionary importance of recognizing fear, fear was the emotion, other than contempt, whose recognition suffered most under speeded and loaded conditions. Fear was not recognized better than chance within 600 ms and was recognized more slowly than happiness in the fast and cognitive load conditions. Although these findings seem inconsistent with the evolutionary expectation that fear’s survival-relevant message should be most quickly recognized, they are consistent with previous research showing that responses to negatively valenced stimuli (e.g., words and expressions) tend to be slower than responses to positively valenced stimuli when tasks require observers to categorize stimuli (Ducci, 1981; Eastwood, Smilek, & Merikle, 2003; Hugenberg, 2005; Kirita & Endo, 1995; Leppanen, Tenhunen, & Hietanen, 2003; Stenberg et al., 1998). In contrast, when participants must simply perceive stimuli without making any cognitive judgments about them (i.e., press a button when the stimulus is perceived), responses to negative stimuli tend to be quicker than responses to positive stimuli, as one might expect (Hugenberg, 2005; Leppanen et al., 2003; Öhman, Lundqvist, & Esteves, 2001).
This discrepancy suggests that fear might be perceived quickly and automatically, but its urgent message (“danger!”) may be a distracting source of interference that inhibits categorization (Dijksterhuis & Aarts, 2003; Eastwood et al., 2003; Hugenberg, 2005; Leppanen et al., 2003).

However, it is not clear whether this distinction is restricted to fear versus happiness or is more likely to be a generalized negative versus positive valence effect. In previous research, this distinction has emerged between responses to happiness and a range of negative emotions, but these studies have not examined whether it holds for other positive emotions, such as pride. In the present research, happiness was recognized most quickly and accurately and had the lowest false alarm rate, whereas pride was not significantly more quickly recognized than fear or any other negative emotion. These findings thus suggest that there may be something unique about happiness, rather than a generalized valence effect that produces quicker recognition for positive emotions.8 We further examine this issue in Study 2.

One question not addressed by Study 1 is whether fast and efficient recognition applies only to the eight emotions we examined. We found no differences, on the whole, between pride and the less cognitively complex emotions, but contempt (another emotion that may be more cognitively complex) was clearly less well recognized than the more basic emotions. High recognition for pride may have been due, in part, to its positive valence, so it remains unclear whether negative emotions that are more cognitively complex and possibly less biologically based (i.e., more culturally constructed) than the original six, such as embarrassment and shame, can be quickly and efficiently recognized.
In fact, from an evolutionary perspective, it is not clear that the adaptive benefits of recognizing socially complex emotions like contempt or embarrassment are as crucial as the benefits of quickly and efficiently recognizing an emotion like fear or anger.

Study 2

In Study 2, we sought to replicate and extend the findings of Study 1 by including two additional cognitively complex social emotions: embarrassment and shame.

Method

Participants and procedure. One hundred thirty-two undergraduate students (71% women) participated in exchange for course credit (36 were assigned to the fast condition, 64 to the deliberated condition, and 32 to the cognitive load condition). The procedure was the same as in Study 1, except that two additional blocks of stimuli were added to each condition for the two newly included emotions, embarrassment and shame.

Stimuli. Embarrassment and shame expressions were posed by the same male and female targets who posed the emotion expressions for Study 1; all 10 emotion expressions were included in Study 2. Embarrassment and shame expressions were posed on the basis of previous research (Heerey, Keltner, & Capps, 2003), and Rosenberg verified that these expressions included the facial and head actions relevant to these two emotions. As in Study 1, alternate embarrassment and shame expressions were also posed; these included the body and the face. Specifically, the alternate embarrassment and shame expressions both included a slumped posture (i.e., shoulders pulled inward, chest relaxed, and body leaning forward) and the head tilt downward that is part of the previously verified version of each expression. As in Study 1, results were analyzed only for the version of each expression that has previously been reliably associated with each particular emotion.

8 It is also possible that the latencies for pride recognition were more similar to latencies for recognition of several negative emotions because pride is not an unambiguously positive emotion. Pride has two distinct facets, one of which is positive and one of which is more negative, and both of which are associated with the same nonverbal expression (Tracy & Robins, 2007b).

Results

How quick is emotion recognition? As in Study 1, overall recognition in the fast condition (M = 81%) was significantly greater than chance (50%, ps < .05), based on the binomial test, and this held for each specific emotion except contempt (M = 47%, ns; see Figure 4 for means for each emotion). An examination of mean response latencies for each emotion, for accurate responders only, showed that the overall mean latency was 593 ms (range = 534 ms for happiness to 664 ms for fear; see Figure 5), 9 ms below the mean latency in the fast condition in Study 1.

To further explore the difference between the response latency for happiness and fear that emerged in Study 1, we compared mean latencies for happiness expressions with mean latencies for each other emotion. Happiness latencies differed significantly from latencies for each of the negative basic emotions, t(53) = 2.66, t(53) = 2.44, t(34) = 3.42, t(56) = 2.79, and t(24) = 3.41, for anger, disgust, fear, sadness, and contempt, respectively, all ps < .05, but not from latencies for any of the self-conscious emotions or surprise. To test whether these differences were unique to happiness or could be attributed to a positive–negative valence effect, we compared the mean latency for pride (M = 562 ms) with each other latency. Pride was recognized more quickly than fear, t(39) = 2.35, and contempt, t(28) = 2.40, both ps < .05, but not any other emotion. Shame was also recognized more quickly than fear, t(40) = 2.02, and contempt, t(29) = 2.08, both ps < .05. The only other significant difference was that surprise was recognized more quickly than contempt, t(30) = 2.08, p < .05.
Thus, happiness was the only emotion that was recognized more quickly than all negative basic emotions, but it was not recognized more quickly than every emotion, nor was it the only positive emotion that was recognized more quickly than some other emotions. In contrast, fear and contempt were the only emotions that were recognized significantly more slowly than both positive emotions and shame.

We next examined recognition rates for the subsamples of participants who responded to each expression within 600 ms. All emotions except contempt (42%, ns) and fear (72%, ns) were recognized at rates significantly greater than chance within this latency (overall M = 81%, p < .05; see Figure 6 for means for each emotion), suggesting that all three complex self-conscious emotions can be recognized and discriminated very quickly. These results replicate the finding from Study 1 that fear and contempt are the only emotions that cannot be recognized within 600 ms.

An examination of false alarm rates in the fast condition showed that mean rates (for each expression, across all possible misidentifications) were fairly low (M = 16%) and relatively similar across emotions (see Table 2). In no case was a mean false alarm rate significantly greater than chance (p > .05), and in only one case was a false alarm for a particular emotion expression significantly greater than chance: The pride expression (arms akimbo) was labeled as happiness by 84% of participants (p < .05). For the remainder of emotions, false alarm rates were not significantly greater than chance, and in most cases they were below 50%.

Does deliberation improve recognition? We next compared recognition rates in the fast and deliberated conditions.
Accuracy rates were higher in the deliberated condition for anger, t(66) = 2.76, 11% increase; pride, t(66) = 3.61, 8% increase; and surprise, t(66) = 2.19, 8% increase, all ps < .05 (see Figure 4 for means). In contrast, accuracy rates for contempt, disgust, fear, sadness, embarrassment, and shame did not differ between the two conditions, ts(66) = 0.04, 1.64, 0.22, 1.50, 1.18, and 1.57, respectively, ns (see Figure 4).

Figure 4. Mean recognition rates in the fast, deliberated, and cognitive load conditions: Study 2. Differences between the fast and cognitive load conditions are difficult to interpret because the maximum response time allotted was 1,000 ms in the fast condition and 1,500 ms in the cognitive load condition. Thus, the only significant differences presented here are those that emerged between the deliberated condition and each of the other two conditions. N = 132. *p < .05.

To test whether deliberation influenced participants’ ability to correctly determine that each expression was not a particular target emotion (i.e., make correct rejections), we compared false alarm rates in the fast and deliberated conditions. No differences emerged, all ts < 1 except for disgust, t = 1.18, ns (see Table 2 for means), suggesting that deliberating did not improve participants’ ability to correctly determine that a particular expression does not represent a target emotion, and this held for the complex emotions.
As in Study 1, in no case was a mean false alarm rate in the deliberated condition significantly greater than chance (p > .05), but in several cases a particular expression was labeled as a particular incorrect emotion at a greater-than-chance frequency: Pride was mislabeled as happiness (84%); fear, as sadness (67%); surprise, as fear (70%); and shame, as both sadness (80%) and embarrassment (72%), all ps < .05.

Figure 5. Mean response latencies in the fast and cognitive load conditions, for accurate responders only: Study 2. Differences between conditions are difficult to interpret because the maximum response time allotted was 1,000 ms in the fast condition and 1,500 ms in the cognitive load condition. N = 64.

Figure 6. Mean recognition rates for participants who responded within 600 ms in the fast condition: Study 2. All rates are significantly greater than chance except fear and contempt. Mean N = 20 (range = 17–26).

Is emotion recognition efficient? As in Study 1, overall accuracy (M = 81%) in the cognitive load condition was significantly greater than chance (p < .05), and this held for each specific emotion except fear and contempt (see Figure 4 for means). The mean response latency for accurate responders was 720 ms, 42 ms lower than the mean latency in the cognitive load condition in Study 1. Happiness, pride, and surprise expressions were recognized more quickly than those of contempt, t(41) = 3.29, t(37) = 3.26, and t(39) = 2.90, respectively; fear, t(35) = 3.47, t(32) = 2.66, and t(34) = 2.31, respectively; and sadness, t(53) = 2.78, t(50) = 2.74, and t(49) = 2.33, respectively; all ps < .05.
Happiness was also more quickly recognized than anger, t(54) = 2.01, and both happiness and surprise were recognized more quickly than shame, t(54) = 2.08 and t(51) = 2.03, respectively, all ps < .05. No other differences in response latencies emerged.

We next examined recognition rates among the subsamples of participants who responded to each expression within 600 ms, under cognitive load. On the basis of binomial tests, disgust (M = 80%), happiness (M = 100%), pride (M = 90%), surprise (M = 90%), embarrassment (M = 86%), and shame (M = 88%) expressions were recognized significantly better than chance (p < .05). In contrast, anger (M = 74%), contempt (M = 45%), fear (M = 50%), and sadness (M = 63%) expressions were not accurately recognized within the 600-ms latency. These results suggest that participants could recognize the complex self-conscious emotions in very brief latencies, even when cognitively taxed.

We next compared accuracy rates in the cognitive load versus fast conditions and found no difference for any emotion, all ts < 1, except for contempt, t(66) = 1.81, and shame, t(66) = 1.01, both ns, suggesting that recognition was not impaired any further by the addition of a cognitive load, even for the complex emotions. However, several differences emerged between the cognitive load and deliberated conditions: Accuracy was higher in the deliberated condition for anger, t(62) = 2.26, 9% increase; pride, t(62) = 2.56, 8% increase; and surprise, t(62) = 2.71, 12% increase, all ps < .05, replicating the comparisons between the fast and deliberated conditions and suggesting that the recognition of these three emotions was slightly impaired under distracted and speeded responding. In contrast, recognition rates for contempt, disgust, fear, happiness, sadness, embarrassment, and shame were no worse under cognitive load than when participants deliberated.

False alarm rates in the cognitive load condition were fairly low (see Table 2), and in no case was a mean false alarm rate significantly greater than chance (p > .05).
However, as was found in the deliberated condition, the shame expression was mislabeled as sadness by 72% of participants (p < .05). Mean false alarm rates were comparable across conditions; no significant differences were found among the fast, deliberated, and cognitive load conditions for any emotion.

Discussion

The findings from Study 2 generally replicated the findings from Study 1. In both studies, all emotion expressions except contempt were recognized accurately, quickly, and efficiently (i.e., under cognitive load). With only a few exceptions, accuracy was not substantially improved by deliberation. There were, however, a few discrepancies between the two studies. First, Study 2 failed to replicate the finding that deliberation improves recognition of fear and sadness. However, in Study 2, fear recognition was not particularly high in any of the three conditions (Ms = 64% in the fast condition, 66% in the deliberated condition, and 59% in the cognitive load condition), although it was significantly greater than chance in all three. Second, in Study 2 but not Study 1, the recognition rates for anger, surprise, and pride were lower in the fast and cognitive load conditions than in the deliberated condition, suggesting that recognition of these three emotions may, in fact, benefit from directed cognitive resources. However, these differences were fairly small, especially for pride, which was recognized by 90% of participants in the fast and cognitive load conditions, but near ceiling (98%) in the deliberated condition. The fact that these differences did not replicate across studies also suggests that they are not robust effects.

Finally, the findings from Study 2 allow us to add two more emotions to those found in Study 1 to be accurately recognized under fast and cognitive load conditions. Embarrassment and shame were recognized equally well regardless of whether participants responded quickly, deliberated, or were distracted by a cognitive load.
Shame was somewhat frequently mislabeled, usually as sadness, but this false alarm may be due to participants viewing shame expressions as conveying both shame and sadness, given that the two emotions likely co-occur in most circumstances and share an important feature (eye gaze downward).

General Discussion

The findings from the present research address several questions about the process of emotion recognition. First, both studies suggest that overall, emotion expressions can be accurately recognized and discriminated from each other very quickly (i.e., within 600 ms), and under cognitive load. Results from the false alarm rate analyses suggest that participants did not identify emotion expressions under speeded or cognitive load conditions by simply responding “yes” to all expressions displayed. Rather, even while cognitively taxed, participants accurately recognized most emotions and accurately rejected most false suggestions. These findings held for the basic emotions and for the cognitively complex, self-conscious emotions of embarrassment, pride, and shame.

However, there are several caveats to this conclusion. First, contempt was not recognized better than chance in any condition, suggesting that its expression is difficult to recognize even when cognitive resources are directly allocated to the task. This finding is consistent with previous studies that have failed to find above-chance recognition of contempt (e.g., Elfenbein & Ambady, 2002; Russell, 1991; Tracy & Robins, 2007b; Wagner, 2000). One possible explanation is that contempt recognition may be hindered by college students’ unfamiliarity with the word contempt; studies have shown that recognition is improved by providing contextual information about the emotion (Matsumoto & Ekman, 2004, but see Wagner, 2000).
Other studies have found that contempt is recognized at higher rates when a head tilt is added to the facial movements in the standard expression (Izard & Haynes, 1988; Rosenberg & Ekman, 1995); this finding fits with the suggestion that all complex emotion expressions involve the head or body in addition to the facial musculature (Tracy & Robins, in press). For the other complex emotions, however, the results do not suggest that their recognition requires greater cognitive resources than the recognition of more basic emotions; in fact, pride and shame were recognized more quickly than fear, and in Study 2 under cognitive load, pride was recognized more quickly than sadness.

A second caveat is that for anger, pride, and surprise, accuracy was somewhat improved by deliberation, when compared with at least one of the two fast conditions. All three emotions were reliably recognized under conditions of minimal cognitive resources, at rates comparable to those typically found in nonspeeded conditions (Elfenbein & Ambady, 2002; Tracy & Robins, 2004), and pride was recognized accurately within 600 ms even under cognitive load, but participants who were given time and encouraged to deliberate about the meaning of these three expressions showed slightly improved recognition, although not consistently so (across both studies and speeded and load conditions, mean increases under deliberation for these three emotions ranged from 7% for pride to 11% for surprise). This suggests that although extensive cognitive resources are not necessary to recognize these emotions, the addition of such resources can improve accuracy for some perceivers.

Third, although fear was recognized better than chance in both studies, and in Study 2 was recognized equally well across all three conditions, it was not well recognized by those participants who responded within 600 ms in either study. Fear was also recognized significantly more slowly than several emotions.
This finding highlights a distinction between fear and the other emotions, which (except for contempt) were accurately recognized within 600 ms and were not consistently among the slowest recognized. This difference may be the result, as previously suggested, of interference created by the fear expression, which might inhibit the recognition process by orienting perceivers to potential danger. Consistent with this interpretation, fear was recognized more slowly than both positive emotions (happiness and pride) and the one emotion of ambiguous valence, surprise, in the cognitive load condition in Study 2. However, future studies must further address this issue before we can rule out other possible interpretations, such as unique features of the fear expression that might make it particularly difficult to recognize.

One possible interpretation is that fear is an intensely negative emotion, and the findings from both studies suggest that negative emotions, especially negative basic emotions, are in general recognized less accurately and quickly than positive emotions. By including the pride expression, the present research was able to take a new look at this distinction, previously found between happiness and a range of negative emotions. In Study 1, pride recognition appeared to be no different from the recognition of several negative emotions, and there was little reason to suspect that participants’ ability to more quickly and accurately recognize happiness was a valence effect. In Study 2, however, this was not as clearly the case. Happiness and pride were recognized more quickly than fear, sadness, and contempt in at least one of the two speeded conditions.
However, happiness was the only emotion that was also more quickly recognized than anger and disgust, suggesting that there is something unique about the happiness expression. Shame was also recognized more quickly than fear and contempt, so we cannot infer a generalized positive valence effect, but it is noteworthy that pride and surprise (the latter being the only emotion that is not clearly positive or negative) were consistently the most quickly recognized expressions, after happiness.

The fact that across studies happiness expressions were generally recognized more quickly than negative basic emotion expressions (the expressions that likely evolved to send urgent, survival-oriented messages) but not more quickly than the negative social emotions (embarrassment and shame), which likely evolved to send more social, less urgent messages, is consistent with the theory that negative basic emotions are distracting (Pratto & John, 1991). That is, when participants see anger, disgust, fear, and sadness expressions, rather than immediately reach a conscious understanding of the expression and press the correct key, their cognitive resources may be immediately allocated to a more important task: finding the source of the threat. This interpretation would not explain why happiness, in particular, is so quickly recognized, but it does provide a possible explanation for why the negative basic emotions were recognized less accurately and more slowly in the fast and cognitive load conditions.

However, the positive–negative valence distinction could also be due to the need to discriminate among four to six different negative emotions versus only two positive emotions, if emotion recognition occurs through two sequential steps: first determining the valence of the expression and then discriminating among similarly valenced expressions.
Future studies could address this issue by comparing recognition rates for only two negative emotions with two positive emotions, although participants may still use a mental process that requires them to discard false options they know exist, even if those options are not part of the experimental procedure. It is also possible that the difference between positive and negative emotions is due to the actual reliability of each signal, rather than its positivity or negativity, or the number of options available. Happiness, pride, and surprise are typically recognized at the highest rates of any emotions, across cultures, suggesting that their expressions may be particularly clear signals (Elfenbein & Ambady, 2002; Tracy & Robins, in press). This clarity would likely be reflected in quick, accurate, and efficient recognition.

Implications

The present studies add to our knowledge of emotion recognition in several ways. As the first studies to systematically examine recognition of all known emotion expressions under speeded and distracting conditions, our findings demonstrate that each distinct emotion expression, except contempt, can be recognized and discriminated quickly and efficiently. Even under cognitively taxing conditions, participants made clear distinctions among similarly valenced emotions, and they did so quickly (in fact, when responding quickly, participants seemed less likely to show a false alarm response bias than when deliberating, although overall differences were not significant). By demonstrating that emotion recognition can occur under constraints that are likely to be present in the real world, these findings support evolutionary accounts of emotion recognition.

A second novel contribution of the present research is the finding that all three self-conscious emotions known to have nonverbal expressions were recognized as quickly and efficiently as the previously established basic emotion expressions, such as anger and sadness (and more so than fear).
Despite the fact that the experience of self-conscious emotions requires greater cognitive complexity, the ability to recognize them seems as likely to be an evolved capacity of the mind as it is for the basic emotions.

Third, the greater speed in processing positive than negative expressions, found in several previous studies (e.g., Ducci, 1981; Hugenberg, 2005; Kestenbaum & Nelson, 1992; Kirouac & Dore, 1983; Leppanen et al., 2003; Stenberg et al., 1998), was demonstrated to extend to another positive emotion besides happiness: pride. Previous studies addressing this issue have included only a single positive emotion expression (happiness), so it has been unclear whether these effects are specific to happiness or reflect a broader distinction between positively and negatively valenced emotions. Our findings for the pride expression, particularly from Study 2, are somewhat consistent with the latter conclusion. However, our findings also suggest that two processes may be involved: one that promotes a general positive versus negative emotion distinction in speed and accuracy, and one that promotes the particularly quick and accurate recognition of happiness specifically.

Limitations and Future Directions

Several results were inconsistent across studies and thus require replication. For example, further research is needed to determine whether deliberation improves accuracy for fear and sadness. In addition, several specific limitations of our research design should be addressed in future work. Although we were able to examine recognition rates among subgroups of participants who responded to each expression within 600 ms, future studies should limit the duration of the stimulus presentation and time allotted for responses to this brief window to determine precisely how quickly individuals can recognize each emotion expression.
A related issue concerns the fact that in our deliberated condition, participants viewed expressions for a lengthy duration (8 s) and had unlimited time to respond. Thus, we cannot be sure whether differences that emerged between this condition and the other two conditions were due to the longer viewing period, the longer response time, or both. To fully tease apart these possibilities, future studies should separately and systematically manipulate the viewing and response times. However, given that the differences we found between maximally different conditions (restricted exposure and restricted response time vs. essentially unrestricted exposure and unrestricted response time) tended to be fairly small, it is unlikely that recognition rates in our fast condition would differ substantially from recognition rates in either intermediary condition (restricted exposure and unrestricted response time, or unrestricted exposure and restricted response time).

A second limitation is that although we included all 10 emotions for which there is at least some evidence of cross-cultural recognition, these expressions were portrayed by only two targets, both Caucasian, so questions remain about the extent to which our findings generalize to targets of other ethnicities. Hugenberg and Bodenhausen (2003) found that targets' race can influence perceivers' ability to recognize emotions, such that Black targets showing ambiguous expressions are more readily perceived as showing anger than are White targets showing the same expressions when perceivers are high in racism. Thus, the recognition rates found here in the fast and cognitive load conditions may vary depending on the race of the target and the racism level of the perceiver.
If the positive–negative valence distinction is due to interference from the threat signal associated with negative emotions, this distinction might be exacerbated when negative expressions are shown by targets who seem particularly threatening, either because their race is stereotypically associated with threat or because of other target-specific features (e.g., size, clothing, etc.; Hugenberg, 2005).

In a related vein, future research should replicate these findings for perceivers from other cultures. Previous research has suggested that for the most part, the emotion expressions we included are accurately recognized by individuals from different cultures, at least when these individuals are given unlimited time to do so. However, we do not know whether cross-cultural recognition would be as accurate when perceivers are forced to respond quickly or under cognitive load. Addressing this issue may be relevant to extant controversies about the universality of emotion expressions. If, in contrast to the Darwinian view, expressions are universal because they have spread from culture to culture through cross-cultural transmission, then the recognition process may require greater cognitive resources for individuals living in cultures in which expressions did not originate. Future studies that apply the present methods to cross-cultural research may help address questions about the universality of emotion expressions and the validity of recently proposed resolutions to this issue, such as dialect theory (Elfenbein, Beaupre, & Levesque, 2007).
If the in-group bias in emotion recognition is the result of culture-specific dialects (Elfenbein & Ambady, 2002), then the recognition of out-group emotions may be particularly impaired when cognitive resources are limited.

Finally, given that both studies used a forced-choice response format, we cannot be sure that emotion recognition occurs as quickly and efficiently in real-life conditions, in which participants are not presented with emotion word options when interpreting the expressions of others. On the other hand, real-world recognition may occur more quickly than was found here, given that in the real world there is no need for the motor cortex to generate a button press that demonstrates recognition, as was the case in the present research. Future studies could address both of these issues by combining the present methodology with a more open-ended, free-response format that does not require motor responses, perhaps one that uses voice recognition software and asks participants to say emotion labels aloud as they view expressions.

Conclusions

More broadly, these findings are informative about the cognitive processes that underlie emotion recognition. Some time ago, Bargh (1994) argued that automatic social processes are marked by "four horsemen": lack of awareness, lack of intention, lack of control, and efficiency. Although more recent work has suggested that not all automatic processes must share all of these features (Okon-Singer, Tzelgov, & Henik, 2007), it is nonetheless informative to use this perspective to help understand the ways in which a given mental process is, and is not, automatic. In the case of emotion recognition, previous studies demonstrating that subliminal expressions influence subsequent behaviors suggest that at some level, the content of expressions can be perceived without intention and without awareness (Dimberg et al., 2000; Niedenthal, 1990; Winkielman & Berridge, 2004).
Other studies demonstrating that expressions interfere with the processing of incongruent stimuli suggest that at some level, emotion recognition cannot be controlled (Stenberg et al., 1998). The findings from the present research suggest that conscious awareness of each expression's meaning can be reached without attentional focus and with only limited cognitive resources—suggesting that emotion recognition meets the requirement of the fourth horseman: It is efficient. In fact, the greater number of significant false alarms that occurred when participants deliberated than when they responded quickly or under cognitive load (in Study 2) suggests that an important part of recognition, the correct rejection of false suggestions, may in fact be impaired when attentional resources are directed toward the task. In this regard, emotion recognition may be one of the many social judgments that benefit from a lack of directed attention (Patterson & Stockbridge, 1998; Wilson & Schooler, 1991).

If this is the case, we are fortunate that in the everyday conditions under which emotions are typically displayed, perceivers do not have time to elaborately interpret each expression in such a way that false alarms become likely. For certain emotions, a small but significant proportion of perceivers may consequently be less likely to accurately identify their expressions than they would if they deliberated. However, the present findings suggest that the majority of individuals do succeed in accurately recognizing expressions under real-world constraints. Thus, a suggestion made by Darwin (1872) long before psychologists reached an understanding of automaticity seems to be correct: "So many shades of expression are instantly recognized without any conscious process of analysis on our parts" (p. 359).

References

Bargh, J. A. (1994). The Four Horsemen of automaticity: Awareness, efficiency, intention, and control in social cognition. In R. S. Wyer Jr. & T. K.
Srull (Eds.), Handbook of social cognition (2nd ed., pp. 1–40). Hillsdale, NJ: Erlbaum.
Bargh, J. A., & Chartrand, T. (2000). The mind in the middle: A practical guide to priming and automaticity research. In H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in social and personality psychology (pp. 253–285). New York: Cambridge University Press.
Bargh, J. A., & Tota, M. E. (1988). Context-dependent automatic processing in depression: Accessibility of negative constructs with regard to self but not others. Journal of Personality and Social Psychology, 54, 925–939.
Blairy, S., Herrera, P., & Hess, U. (1999). Mimicry and the judgment of emotional facial expressions. Journal of Nonverbal Behavior, 23, 5–41.
Boone, R. T., & Cunningham, J. G. (1998). Children's decoding of emotion in expressive body movement: The development of cue attunement. Developmental Psychology, 34, 1007–1016.
Darwin, C. (1872). The expression of the emotions in man and animals (3rd ed.). New York: Oxford University Press.
Dijksterhuis, A., & Aarts, H. (2003). On wildebeests and humans: The preferential detection of negative stimuli. Psychological Science, 14, 14–18.
Dimberg, U., & Thunberg, M. (1998). Rapid facial reactions to emotional facial expressions. Scandinavian Journal of Psychology, 39, 39–45.
Dimberg, U., Thunberg, M., & Elmehed, K. (2000). Unconscious facial reactions to emotional facial expressions. Psychological Science, 11, 86–89.
Ducci, L. (1981). Reaction times in the recognition of facial expressions of emotion. Italian Journal of Psychology, 8, 183–193.
Eastwood, J. D., Smilek, D., & Merikle, P. M. (2003). Negative facial expression captures attention and disrupts performance. Perception and Psychophysics, 65, 352–358.
Ekman, P. (1992). An argument for basic emotions. Cognition and Emotion, 6, 169–200.
Ekman, P. (2003). Emotions revealed. New York: Times Books.
Ekman, P., & Friesen, W. V. (1971). Constants across cultures in the face and emotion.
Journal of Personality and Social Psychology, 17, 124–129.
Ekman, P., & Friesen, W. V. (1978). Facial Action Coding System: Investigator's guide. Palo Alto, CA: Consulting Psychologists Press.
Ekman, P., & Friesen, W. V. (1986). A new pan-cultural facial expression of emotion. Motivation and Emotion, 10, 159–168.
Ekman, P., Levenson, R. W., & Friesen, W. V. (1983, September 16). Autonomic nervous system activity distinguishes among emotions. Science, 221, 1208–1210.
Ekman, P., & Rosenberg, E. L. (1997). What the face reveals: Basic and applied studies of spontaneous expression using the Facial Action Coding System (FACS). New York: Oxford University Press.
Ekman, P., Sorenson, E. R., & Friesen, W. V. (1969, April 4). Pan-cultural elements in facial displays of emotion. Science, 164, 86–88.
Elfenbein, H. A., & Ambady, N. (2002). On the universality and cultural specificity of emotion recognition: A meta-analysis. Psychological Bulletin, 128, 203–235.
Elfenbein, H. A., Beaupre, M., & Levesque, M. (2007). Toward a dialect theory: Cultural differences in the expression and recognition of posed facial expressions. Emotion, 7, 131–146.
Fridlund, A. J. (1994). Human facial expression: An evolutionary view. San Diego, CA: Academic Press.
Gilbert, D. T., & Osbourne, R. E. (1989). Thinking backward: Some curable and incurable consequences of cognitive busyness. Journal of Personality and Social Psychology, 57, 940–949.
Haidt, J., & Keltner, D. (1999). Culture and facial expression: Open-ended methods find more expressions and a gradient of recognition. Cognition & Emotion, 13, 225–266.
Heerey, E. A., Keltner, D., & Capps, L. M. (2003). Making sense of self-conscious emotion: Linking theory of mind and emotion in children with autism. Emotion, 3, 394–400.
Hugenberg, K. (2005). Social categorization and the perception of facial affect: Target race moderates the recognition advantage for happy faces. Emotion, 5, 267–276.
Hugenberg, K., & Bodenhausen, G. V. (2003).
Facing prejudice: Implicit prejudice and the perception of facial threat. Psychological Science, 14, 640–643.
Izard, C. E. (1971). The face of emotion. East Norwalk, CT: Appleton-Century-Crofts.
Izard, C. E., Ackerman, B. P., & Schultz, D. (1999). Independent emotions and consciousness: Self-consciousness and dependent emotions. In J. A. Singer & P. Singer (Eds.), At play in the fields of consciousness: Essays in honor of Jerome L. Singer (pp. 83–102). Mahwah, NJ: Erlbaum.
Izard, C. E., & Haynes, O. M. (1988). On the form and universality of the contempt expression: A challenge to Ekman and Friesen's claim of discovery. Motivation and Emotion, 12, 1–16.
Kappas, A. (1997). The fascination with faces: Are they windows to our soul? Journal of Nonverbal Behavior, 21, 157–161.
Keltner, D. (1995). Signs of appeasement: Evidence for the distinct displays of embarrassment, amusement, and shame. Journal of Personality and Social Psychology, 68, 441–454.
Kestenbaum, R., & Nelson, C. (1992). Neural and behavioral correlates of emotion recognition in children and adults. Journal of Experimental Child Psychology, 54, 1–18.
Kirita, T., & Endo, M. (1995). Happy face advantage in recognizing facial expressions. Acta Psychologica, 89, 149–163.
Kirouac, G., & Dore, F. Y. (1983). Accuracy and latency of judgment of facial expressions of emotions. Perceptual & Motor Skills, 57, 683–686.
Kirouac, G., & Dore, F. Y. (1984). Judgment of facial expressions of emotion as a function of exposure time. Perceptual & Motor Skills, 59, 147–150.
Lagattuta, K. H., & Thompson, R. A. (2007). The development of self-conscious emotions: Cognitive processes and social influences. In J. L. Tracy, R. W. Robins, & J. P. Tangney (Eds.), The self-conscious emotions: Theory and research (pp. 91–113). New York: Guilford Press.
Leppanen, J. M., Tenhunen, M., & Hietanen, J. K. (2003). Faster choice reaction times to positive than to negative facial expressions.
Journal of Psychophysiology, 17, 113–123.
Matsumoto, D., & Ekman, P. (2004). The relationship among expressions, labels, and descriptions of contempt. Journal of Personality and Social Psychology, 87, 529–540.
McAndrew, F. T. (1986). A cross-cultural study of recognition thresholds for facial expressions of emotions. Journal of Cross-Cultural Psychology, 17, 211–224.
Niedenthal, P. M. (1990). Implicit perception of affective information. Journal of Experimental Social Psychology, 26, 505–527.
Öhman, A. (2000). Fear and anxiety: Evolutionary, cognitive, and clinical perspectives. In M. Lewis & J. M. Haviland-Jones (Eds.), Handbook of emotions (2nd ed., pp. 573–691). New York: Guilford Press.
Öhman, A., Lundqvist, D., & Esteves, F. (2001). The face in the crowd revisited: A threat advantage with schematic stimuli. Journal of Personality and Social Psychology, 80, 381–396.
Okon-Singer, H., Tzelgov, J., & Henik, A. (2007). Distinguishing between automaticity and attention in the processing of emotionally significant stimuli. Emotion, 7, 147–157.
Parkinson, B. (2005). Do facial movements express emotions or communicate motives? Personality and Social Psychology Review, 9, 278–311.
Patterson, M. L., & Stockbridge, E. (1998). Effects of cognitive demand and judgment strategy on person perception accuracy. Journal of Nonverbal Behavior, 22, 253–263.
Pratto, F., & John, O. P. (1991). Automatic vigilance: The attention-grabbing power of negative social information. Journal of Personality and Social Psychology, 61, 380–391.
Rosenberg, E. L., & Ekman, P. (1995). Conceptual and methodological issues in the judgment of facial expressions of emotion. Motivation and Emotion, 19, 111–138.
Russell, J. (1991). Negative results on a reported facial expression of contempt. Motivation and Emotion, 15, 281–291.
Russell, J. A., Bachorowski, J., & Fernandez-Dols, J. (2003). Facial and vocal expressions of emotions. Annual Review of Psychology, 54, 349–359.
Scherer, K. (1994).
Emotion serves to decouple stimulus and response. In P. Ekman & R. J. Davidson (Eds.), The nature of emotion: Fundamental questions (pp. 127–130). New York: Oxford University Press.
Stanners, R. F., Byrd, D. M., & Gabriel, R. (1985). The time it takes to identify facial expressions: Effects of age, gender of subject, sex of sender, and type of expression. Journal of Nonverbal Behavior, 9, 201–213.
Stenberg, G., Wiking, S., & Dahl, M. (1998). Judging words at face value: Interference in a word processing task reveals automatic processing of affective facial expressions. Cognition & Emotion, 12, 755–782.
Tracy, J. L., & Robins, R. W. (2004). Show your pride: Evidence for a discrete emotion expression. Psychological Science, 15, 194–197.
Tracy, J. L., & Robins, R. W. (in press). The nonverbal expression of pride: Evidence for cross-cultural recognition. Journal of Personality and Social Psychology.
Tracy, J. L., & Robins, R. W. (2007a). The self in self-conscious emotions: A cognitive appraisal approach. In J. L. Tracy, R. W. Robins, & J. P. Tangney (Eds.), The self-conscious emotions: Theory and research (pp. 3–20). New York: Guilford Press.
Tracy, J. L., & Robins, R. W. (2007b). The prototypical pride expression: Development of a nonverbal behavioral coding scheme. Emotion, 7, 789–801.
Wagner, H. L. (2000). The accessibility of the term "contempt" and the meaning of the unilateral lip curl. Cognition & Emotion, 14, 689–710.
Whalen, P. J., Rauch, S. L., & Etcoff, N. L. (1998). Masked presentations of emotional facial expressions modulate amygdala activity without explicit knowledge. Journal of Neuroscience, 18, 411–418.
Wilson, T. D., & Schooler, J. W. (1991). Thinking too much: Introspection can reduce the quality of preferences and decisions. Journal of Personality and Social Psychology, 60, 181–192.
Winkielman, P., & Berridge, K. (2004). Unconscious emotion. Current Directions in Psychological Science, 13, 120–123.
Winkielman, P., Berridge, K., & Wilbarger, J. L. (2005).
Unconscious affective reactions to masked happy versus angry faces influence consumption behavior and judgments of value. Personality and Social Psychology Bulletin, 31, 121–135.

Received June 22, 2007
Revision received September 10, 2007
Accepted October 18, 2007