Humans excel at interpreting facial cues, which allows them to anticipate others' actions and respond accordingly. However, it remains unclear whether an advantage in detecting these cues arises from the categorization of facial expressions, such as happy or angry, or from the magnitude of the deformation of the face, regardless of emotion category.
Here, we sought to clarify this issue by asking 52 participants to perform a visual search for emotional faces among neutral expressions. Pictures of happy and angry faces were selected from the Amsterdam Dynamic Facial Expression Set (ADFES) either at their respective, unequalized maximum intensity (unmatched-intensity condition) or at an intermediate intensity equalized across emotions (matched-intensity condition), as judged by 11 independent raters.
We examined the effect of emotion (happy vs. angry) on participants' accuracy and reaction times in both conditions using generalized linear mixed-effects models. Unsurprisingly (Juth et al., 2005), happy faces were detected faster and more accurately than angry ones in the unmatched-intensity condition (p < .001). However, the effect reversed when the intensity of the facial expressions was matched, with angry faces showing an advantage in both accuracy and reaction times (p < .001).
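For readers who want to set up a comparable analysis, the sketch below shows how such mixed-effects models could be specified in Python with statsmodels. It is not the authors' analysis code: the data file and the column names (participant, emotion, condition, correct, rt) are hypothetical, and because statsmodels provides only a variational-Bayes binomial GLMM, the reaction-time model is approximated here as a linear mixed model on log-transformed RTs rather than a full GLMM.

```python
# Minimal sketch of trial-level mixed-effects models, assuming a long-format
# dataset with one row per trial. All names below are illustrative, not from
# the original study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

trials = pd.read_csv("trials.csv")  # hypothetical trial-level data

# Accuracy: binomial GLMM with emotion x condition fixed effects and a
# random intercept per participant, fitted by variational Bayes.
acc_model = BinomialBayesMixedGLM.from_formula(
    "correct ~ emotion * condition",
    {"participant": "0 + C(participant)"},
    trials,
)
acc_fit = acc_model.fit_vb()
print(acc_fit.summary())

# Reaction times: linear mixed model on log RTs of correct trials,
# again with a participant-level random intercept.
correct = trials[trials["correct"] == 1].copy()
correct["log_rt"] = np.log(correct["rt"])
rt_model = smf.mixedlm(
    "log_rt ~ emotion * condition",
    correct,
    groups=correct["participant"],
)
rt_fit = rt_model.fit()
print(rt_fit.summary())
```

The crossed emotion-by-condition term is what carries the reversal reported above: a significant interaction would indicate that the direction of the happy/angry difference depends on whether expression intensity is matched.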
Our results call into question the existence of a happy-face advantage in visual search tasks. Instead, a decisive role is played by the magnitude of the deformation of the face and its relationship with perceptual features. Future studies could test whether the apparent advantage of specific facial expressions reported in other tasks is explained by low-level features rather than emotional categories.