Emotions and facial processing of emotions
Emotions have played an important role in our evolution, and the ability to express them develops in response to environmental cues (Darwin, 1872). In 1999, Ekman provided criteria for emotions (see Table 1). He and Friesen (1975) also identified six emotions (happiness, disgust, surprise, sadness, anger and fear) that were universal; that is, these emotions were represented facially in the same way across different cultures of the world. As a result, these six facial expressions tend to be termed innate and hence support the "nature" side of the famous "Nature vs Nurture" debate in Psychology.
Amongst these emotions, of key interest to psychologists are anger and happiness. Hansen and Hansen (1988) concluded that it was easier for participants to detect an angry face within a set of happy faces. This is generally called the anger superiority effect. Further evidence for this effect (amongst others) was provided by Fox, Lester, Russo, Bowles, Pichler and Dutton (2000). Contrastingly, research conducted by Juth, Lundqvist, Karlsson and Ohman (2005) found that it was the facial expression of happiness that was detected faster than other emotions.
This occurrence is termed the happiness superiority effect. Conclusively, there is empirical evidence that some emotions expressed in the face can be detected faster than others, namely anger and happiness.

Table 1: Criteria for "basic emotions", Ekman (1999)

When studying facial emotions and their effect on attention, psychologists use reaction time (RT) and accuracy as measures of performance, the dependent variables. A typical experimental set-up comprises participants who have to look for a target stimulus amongst many other distractors.
A target stimulus is the one which the experimenter wants the participant to look for or identify. A distractor (in this case) is a stimulus that draws attention away from the target stimulus. In Hansen and Hansen's (1988) experiment, the target stimuli were the two facial expressions of anger and happiness. The third target stimulus was a neutral face. These target stimuli are also known as "discrepant faces". Each of these discrepant faces was shown with the other two corresponding stimuli, which acted as distractors (see Figure 1: Diagrammatic representation of Hansen and Hansen's (1988) experimental set-up on discrepant faces). The participants were told to detect the discrepant face, and their RT as well as accuracy was measured. To summarise the results of their first experiment, Hansen and Hansen found that it took participants a shorter time to detect angry faces when they were presented amongst happy or neutral distractors. The results also showed that it took a longer time to detect happy faces amongst angry or neutral distractors. In their second experiment, Hansen and Hansen used displays of four faces, each display containing one discrepant face.
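To make the performance measures concrete, here is a minimal sketch of how mean RT and accuracy might be computed per condition from raw trial records. This is an illustration only: the condition names and numbers are invented for the example and are not data from any study cited in this essay.

```python
# Hypothetical trial records: (condition, rt_ms, correct).
# All values below are invented for illustration.
trials = [
    ("angry_target", 520, True),
    ("angry_target", 560, True),
    ("happy_target", 690, True),
    ("happy_target", 740, False),
]

def summarise(trials):
    """Accuracy per condition, and mean RT computed over correct trials only
    (a common convention in visual-search analyses)."""
    out = {}
    for cond in {t[0] for t in trials}:
        rows = [t for t in trials if t[0] == cond]
        correct = [t for t in rows if t[2]]
        out[cond] = {
            "accuracy": len(correct) / len(rows),
            # Guard against conditions with no correct responses.
            "mean_rt_ms": (sum(t[1] for t in correct) / len(correct)
                           if correct else None),
        }
    return out

print(summarise(trials))
```

The design choice of averaging RT over correct trials only avoids contaminating the speed measure with guesses.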
There were two conditions in the experiment: either an angry face in a happy crowd, or vice versa. Identifying exactly where the discrepant face was located was the key to this experiment. Again, the results were consistent with the previous experiment: it was easier to detect an angry discrepant face in a happy crowd than to detect a happy face in an angry crowd. A third experiment on the anger superiority effect was also conducted by Hansen and Hansen. This experimental set-up differed in that the display size (the number of faces to search through) was varied this time around (4 or 9 faces).
The display size was increased to check whether a target "pop-out" occurred. A target pop-out occurs when there is no change (or very little change) in the time taken to detect a target even though the display size is increased (Treisman and Gelade, 1980; Treisman and Gormican, 1988). A target pop-out reflects the automatic, parallel-search perspective on attention. On the other hand, if the time taken to detect a target increases as display size increases, a serial search is indicated, as we have to inspect each element in turn and then make a decisive response (see Figure 2). Building on the previous two experiments, we could hypothesise that if anger superiority really exists, a face depicting anger will be detected automatically, and hence an increase in display size will not affect the time taken to detect angry expressions. The results of the experiment supported this idea and showed that participants were more efficient at detecting angry faces than any other. Furthermore, the results illustrated that for happy faces, detection time increased linearly as display size increased.
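The contrast between pop-out (parallel) and serial search can be sketched as two simple RT functions of display size. The baseline and per-item slope values below are illustrative assumptions chosen only to show the shape of the two predictions; they are not estimates from Hansen and Hansen (1988) or any other cited study.

```python
# Toy model of visual-search reaction times.
# BASE_RT_MS and SERIAL_COST_MS are assumed, illustrative values.
BASE_RT_MS = 400       # assumed baseline response time (ms)
SERIAL_COST_MS = 50    # assumed cost per inspected item (ms)

def parallel_search_rt(display_size):
    """Pop-out: the target is found pre-attentively, so display size
    adds (almost) nothing to reaction time -- a flat search slope."""
    return BASE_RT_MS

def serial_search_rt(display_size):
    """Serial search: items are inspected one by one; on average half
    the items are checked before the target is found, so RT grows
    linearly with display size."""
    return BASE_RT_MS + SERIAL_COST_MS * (display_size / 2)

# The display sizes used in Hansen and Hansen's third experiment:
for n in (4, 9):
    print(n, parallel_search_rt(n), serial_search_rt(n))
```

Under these assumptions, moving from 4 to 9 faces leaves the parallel RT unchanged but adds a linear cost to the serial RT, which is exactly the pattern the essay describes for angry versus happy targets.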
Figure 2: A generic graphical display of parallel search (RT flat across display size, n) and serial search (RT increasing with display size, n); no data or numbers are shown.

Fox et al. (2000) replicated the conditions of Hansen and Hansen's (1988) study, but used schematic faces instead (see Figure 3: Schematic faces used by Fox et al., 2000). They also varied the duration of the search (300 milliseconds and 800 milliseconds). The participants in this particular experiment had to pick the "odd" face out.
The researchers found that it was more difficult to pick the odd face out of a set of angry faces than out of a set of happy or neutral faces (at the 300-millisecond duration). Ohman, Lundqvist and Esteves (2001) reported similar results for threatening angry faces, thus supporting the anger superiority effect. Moving on from the anger superiority effect, psychologists have also gathered data to support the happiness superiority effect. One such team was that of Juth et al. (2005), who used eight coloured photographic images of faces of different individuals.
Moreover, they also varied the orientation of the faces in these images, so that in each image the face looked either directly at or away from the viewer. The results of this experiment indicated that happy faces stood out. Contrary to previous belief, the researchers did not find any evidence of an anger superiority effect when the face in the image was directed straight at the participants. In 2011, Becker, Anderson, Mortensen, Neufeld and Neel found that their experiments did not reveal any anger pop-out, and instead supported the happiness superiority effect.
Furthermore, evidence for the happiness superiority effect also comes from a study mentioned in "Science Daily". The website reported that researcher J. Antonio Aznar-Casanova and his team used the technique of cerebral asymmetry and concluded that happiness and surprise were processed faster than sadness and fear. From the above evidence, we can conclude that anger and happiness can be detected faster than the other emotions mentioned at the start of this essay, owing to their respective "superiority effects". However, their detection depends on several factors. One such factor is whether the faces are schematic drawings or images of real people.
Miyazawa and Iwasaki (2010) stated that Juth et al. (2005) found a happiness superiority effect with real faces (as stated above), but found an anger superiority effect with schematic faces. A possible reason for this could be that using schematic faces results in low ecological validity (the extent to which the experiment is true to everyday life): people in general detect facial expressions of others like themselves, not of schematic drawings. In addition, confounds have arisen within experiments attempting to detect superiority effects.
Purcell, Stewart and Skov (1996) found that there were dark patches on the angry faces used in Hansen and Hansen's (1988) experiment. In support of this idea, Mak-Fan, Thompson and Green (2011) questioned the validity of schematic faces and claimed that the curved lines of the mouths were confounds in such experiments. Keeping these claims in mind, it could be that the anger superiority effect arises from the line features themselves. This would point to a low-level visual explanation for the detection of angry faces, rather than high-level facial processing. However, further research needs to be carried out to test this.
Another confounding factor is individual and cultural differences. In 1999, Shiori, Someya, Helmeste and Tang conducted a cross-cultural experiment comparing Americans and Japanese. They found that while both groups performed equally well in judging positive emotions (for example, happiness), the Japanese were poorer at judging negative emotions (anger and fear). Shiori et al. explained this finding in terms of differences between the Japanese and American cultures: the Japanese culture (in broad terms) discourages outward expression of negative emotions, while the American culture promotes outward expression of emotions.
Hence, this study is an example of how culture affects the facial detection of emotions. Farran, Branson and King (2011) looked at individual differences between children and adolescents who had High Functioning Autism (HFA) or Asperger syndrome (AS) and those who were developing typically (the control group). The participants were matched for chronological age. The results of their study showed that participants with HFA and AS were slower at processing the emotions of fear, anger and sadness than the control group (see Figure 4: Results of Farran et al., 2011).
Thus, we have evidence that not all people detect emotions facially in the same way as others. An interesting finding was that participants (including the control group) in general took less time to detect the facial expressions of happiness and disgust than any other emotion, indicating a happiness superiority effect. In conclusion, there is ample evidence that some emotions expressed in the face can be detected faster than others, namely happiness and anger. While both the anger and happiness superiority effects have been found, we cannot be sure which emotion is detected the fastest.
However, all the studies mentioned above give an insight into how we as humans detect emotions facially. Moreover, the present literature on facial expression of emotions does not contain any studies indicating that all emotions are detected in the same amount of time as each other. Thus, we can conclude that some emotions are in reality detected faster than others. Future research could examine other emotions (surprise, disgust, sadness), as these are not as frequently investigated as anger and happiness.
A key area for research would be the field of cognitive neuroscience. Eye trackers could be used to detect the precise locations of people's gaze on a particular facial expression, and the depth cues these people rely on when looking at such expressions. The results of such experiments could be used to accept or reject the claim that some facial emotions are detected faster than others. Also, fMRI techniques would help to gain insight into how the brain detects and processes different facial emotions, following up on the research of Aznar-Casanova and his team (2009), as mentioned above.