
Face Masks Impair Basic Emotion Recognition

Group Effects and Individual Variability

Published Online: February 21, 2022
https://doi.org/10.1027/1864-9335/a000470

Abstract

With the widespread adoption of face masks, there is a need to understand how facial occlusion affects emotion recognition. We asked 120 participants to identify emotions from faces with and without masks. We also examined whether recognition performance was related to autistic traits and personality. Masks impacted recognition of expressions with diagnostic lower face features the most and those with diagnostic upper face features the least. Persons with higher autistic traits were worse at identifying unmasked expressions, while persons with lower extraversion and higher agreeableness were better at recognizing masked expressions. These results show that different features play different roles in emotion recognition and suggest that obscuring features affects social communication differently as a function of autistic traits and personality.
Human social environments underwent rapid change with the worldwide onset of the COVID-19 pandemic in early 2020. Social gatherings were exchanged for video calls; physical barriers were erected to protect those at work. More fundamentally, even the most basic forms of human nonverbal communication, like facial emotion recognition, have been altered by the widespread adoption of face masks. While face masks are one of the key measures for preventing virus spread (Leung et al., 2020; Eikenberry et al., 2020; Prather et al., 2020), they visually occlude the lower half of the face, including the chin, mouth, and nose, thereby hiding facial cues that humans rely on to read others' minds, intentions, and emotions (Hugenberg & Wilson, 2013). Indeed, theories of basic human emotional processing (e.g., Ekman, 1999; also see Barrett et al., 2011) maintain that visual facial features (and vocal cues) provide key signals for quick recognition of basic emotional expressions in others, with impairments in this emotional reading linked with decreased social (e.g., Addington et al., 2006; Leppänen & Hietanen, 2001) and cognitive function (Virtanen et al., 2017). In the present study, we examined how occlusion of facial features by face masks impacted emotion recognition, and whether this impact varied with individual autistic and personality traits, which may predispose different individuals to utilize information from face features differently.

Emotion Recognition and Facial Occlusion

Research shows that specific visual face features play different roles in facilitating recognition of each of the six basic emotions – happiness, disgust, fear, sadness, anger, and surprise (Ekman, 1999; e.g., Wegrzyn et al., 2017; Kret & De Gelder, 2012; Smith et al., 2005; Blais et al., 2012). Identifying happy and disgusted expressions is found to rely on information from the mouth (e.g., a smile; Smith et al., 2005) and the corners of the nose, respectively (Sullivan et al., 2007; Gosselin et al., 2010). In contrast, identifying fearful and sad expressions is found to rely on information from the upper face, like the eyes (e.g., Bombari et al., 2013; Smith et al., 2005; Sullivan et al., 2007). The data for anger are mixed, with some studies reporting decreased anger recognition when the lower face is occluded (Kotsia et al., 2008), and others reporting that the eyes are the most diagnostic for this emotion (Smith et al., 2005). Finally, recognition of surprise, with its characteristic wide eyes and open mouth, appears to depend on information from both the top and bottom of the face (Calder et al., 2000; Ekman et al., 1980; Gosselin et al., 2010; also see Smith et al., 2005).
It follows from this work that basic emotion recognition should be impacted by facial occlusion, such as face masks, with this impact particularly prominent for emotions for which the occluded nose and mouth have the largest diagnostic value. Two studies conducted so far have examined how recognizing emotions may be affected by mask wearing. Grundmann et al. (2020) asked two groups of participants to identify neutral, happy, fearful, angry, sad, and disgusted expressions from faces wearing masks and faces wearing no masks. The group identifying emotions from masked faces performed with overall 21% lower accuracy. While an important initial result, between-group comparisons present a challenge for inferring the magnitude of the underlying face occlusion effect due to known shortcomings of this design (e.g., possible preexisting group differences; e.g., see Charness et al., 2012). For example, if the participants assigned to the mask condition had worse average emotion recognition overall, the apparent mask impact would have been inflated; similarly, if they had better emotion recognition in general, the apparent mask impact would have been reduced. Furthermore, a comparison of two effects requires a direct statistical test of their difference, and drawing such conclusions from separate tests is often not statistically sound (Nieuwenhuis et al., 2011). Resolving some of these issues by using a repeated measures design, Carbon (2020) asked the same group of participants to identify disgusted, angry, sad, happy, fearful, and neutral expressions from faces wearing masks and faces wearing no masks. Their data indicated that recognition of anger, disgust, happiness, and sadness was reduced for faces wearing masks. Recognition of fearful and neutral expressions was unimpacted, with the authors speculating that an impact of masks for recognizing these emotions may have been obscured by a performance ceiling of near 100% accuracy. Indeed, recognition accuracy for fear in this study was inflated relative to typical findings (Calvo & Nummenmaa, 2016) and to real-life scenarios, in which surprise is always available as a potential alternative. Both expressions rely on similar upper face features (e.g., Roy-Charland et al., 2014) and are commonly confused, leading to a reduction in recognition accuracy for both fear and surprise. Ceiling effects may have also been exacerbated by unlimited response time and by the analysis of raw accuracy, which did not account for possible biases in responding. For example, if a participant responds with the answer fear on every trial, they will have 100% accuracy for fear, despite having a poor ability to distinguish that emotion from others (see Wagner, 1993 for a review). Finally, the authors showed participants the same face identities with and without masks, which may have allowed participants to utilize familiarity rather than face features to recognize emotions.
Parallel lines of work in the social domain, examining how emotion recognition may be affected by cultural face coverings such as a niqab (Fischer et al., 2012; Hareli et al., 2013; Kret & De Gelder, 2012; Kret & Fischer, 2018; Wang et al., 2015), report similar results. There was a reduction in the accuracy of perceiving happiness in faces wearing a niqab but no impact on recognizing anger, which depends on perceiving upper parts of the face (i.e., the eyes). However, perception of the more complex emotion of shame was heightened for niqab wearers, implicating contextual variables when cultural types of face covering are used. In line with this, Fischer et al. (2012) found that covering the face with a niqab led to a negativity bias, with emotions perceived as more negative, while Kret and Fischer (2018) reported key differences in how emotions were recognized when faces were covered by a Western winter scarf relative to a niqab. Wang et al. (2015) also found that recognition of emotions from niqab-wearing faces was modulated by an individual's cultural experience. Thus, overall, the available literature from the cognitive and social domains suggests that basic emotion recognition is likely altered by face occlusion, with the magnitude of this impact for specific emotions remaining ambiguous due to design and task constraints.

Individual Differences in Emotion Recognition

The role of social and personality differences in emotion recognition from occluded faces remains relatively unexplored. Individual factors, such as age (Mill et al., 2009) and the number of autistic-like traits (McKenzie et al., 2018), have been shown to impact basic emotion recognition. For example, McKenzie et al. (2018) examined how autistic-like traits and visual processing styles are related to emotion recognition. Forty individuals with self-reported autism and 216 typically developing peers completed the Autism Spectrum Quotient (AQ; Baron-Cohen et al., 2001), an emotion recognition task, and a visual processing task (i.e., the Navon task; Navon, 1977). The authors found that a higher number of autistic-like traits was associated with lower performance on the emotion recognition task, with no effects on visual processing style.
Given the open nature of the question concerning individual differences in basic emotional processing, here we performed an exploratory examination of the role that individual differences in autistic and personality traits may play in basic emotion recognition. There is a possibility that different social and personality traits predispose different individuals to seek and process visual information from faces differently (e.g., the trait congruency hypothesis; Bargh et al., 1988). Thus, if certain social and personality traits are associated with a preference for information from specific (e.g., upper or lower) face features, this could affect recognition of emotional expressions under conditions in which those parts of the face are visually occluded. Some studies suggest that the link between poor emotion recognition and autistic traits is driven by avoidance of looking at the eyes in autism and an over-reliance on lower face cues (e.g., Neumann et al., 2006; Spezio et al., 2007a, 2007b; Madipakkam et al., 2017). Personality traits have also been found to predict the frequency of looking at the eyes of faces (e.g., Hoppe et al., 2018; Rauthmann et al., 2012), with preliminary evidence suggesting that extraverts (Ellingsen et al., 2019) and those high in anxiety and neuroticism look more at the mouth (Corden et al., 2008), while those high in agreeableness look more at the eyes and make more eye contact (Broz et al., 2012). There have also been suggestions that the AQ captures a construct distinct from the big five personality traits and could even be considered a sixth dimension (Wakabayashi et al., 2006). General perceptiveness and attentiveness to facial cues may also differ between personality traits. Openness to experience and conscientiousness have been found to relate positively to emotion recognition (Matsumoto et al., 2000), with openness hypothesized to involve a general interest in and curiosity about social cues and conscientiousness facilitating the detailed perception of small facial cues.
The second reason why we examined these individual variables is that autistic and personality traits are generally associated with the broader concepts of social competence and social exposure (e.g., Black et al., 2017; Graziano & Tobin, 2009; McKenzie et al., 2018). It is also possible that increased social exposure may be associated with better emotion recognition in general, which could provide benefits in situations in which key parts of the face are occluded. Indeed, individuals with more autistic-like traits tend to have lower social competence, less social exposure, and reduced emotion recognition, regardless of clinical (Black et al., 2017; McKenzie et al., 2018) or neurotypical status (McKenzie et al., 2018). While there are only a few studies on the relationship between personality traits and emotion recognition (see Furnes et al., 2019 for a review), the available ones show that individuals high in trait agreeableness engage in frequent social interactions and have better emotion recognition (Graziano & Tobin, 2009). Extraverts have also been found to seek increased social interactions and as a result may have better emotion recognition (Matsumoto et al., 2000, Experiment 4; Scherer & Scherer, 2011), although findings on this have been mixed, with some researchers failing to replicate this effect (Cunningham, 1977; Matsumoto et al., 2000).

The Present Study

The present study examined the ability to recognize basic facial emotions – fearful, happy, angry, sad, disgusted, surprised, and neutral – from faces wearing face masks and faces wearing no masks. We also assessed whether average group performance varied with individual participants' autistic traits and personality. The first aim was to assess how lower face occlusion by masks impacted emotion recognition for each of the six basic emotions and the neutral expression (Ekman, 1999). Extending past work on this topic, we included surprised expressions, examined the unbiased hit rate (Hu), controlled for face identity, and used a repeated measures design. The second aim was to assess how average emotion recognition varied as a function of individual participants' number of autistic traits and their personality. To measure those traits, participants completed the AQ (Baron-Cohen et al., 2001), which measures autistic-like traits in the typical population, and the Big Five Inventory (BFI; Human et al., 2014), which measures the big five personality traits – openness, conscientiousness, extraversion, agreeableness, and neuroticism.
We reasoned that emotion recognition would be lower overall for faces wearing masks, with the greatest impact of face occlusion on the recognition of emotional expressions that depend on the visibility of lower face features, that is, disgust and happiness, and the smallest impact on the recognition of emotional expressions that depend on the visibility of upper face features, that is, fear and anger. Our predictions for individual variability in performance were less precise. Based on past work, individuals with a higher number of autistic-like traits should have an overall lowered ability to recognize emotions from unoccluded faces, while those with personality traits that promote social exposure (e.g., extraversion) or the utilization of upper face cues should be better able to recognize emotions under such unfavorable conditions.

Methods

Participants

An a priori power analysis determined that data from 112 participants were needed to obtain .90 power (α = .05) assuming a medium correlation effect size (r = 0.3) between self-report measures and behavioral data. One hundred twenty-six undergraduate students participated in the study for course credit. The data from 120 (103 female; M = 20.68 years, SD = 2.67 years) participants were analyzed.1 The study was approved by the University's research ethics board.
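As a rough check of this estimate, the required sample size for a two-tailed correlation test can be approximated with the standard Fisher r-to-z method. The sketch below is ours, not the authors' computation (the software used for their power analysis is not specified here), and it yields a value close to the reported 112.

```python
import math
from scipy.stats import norm

def n_for_correlation(r: float, power: float = 0.90, alpha: float = 0.05) -> int:
    """Approximate N needed to detect a correlation r with a two-tailed test,
    using the Fisher r-to-z transformation."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical z for the two-tailed test
    z_power = norm.ppf(power)          # z corresponding to the desired power
    effect = math.atanh(r)             # Fisher z of the target correlation
    return math.ceil(((z_alpha + z_power) / effect) ** 2 + 3)

print(n_for_correlation(0.3))  # 113 with this approximation, close to the reported 112
```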

Apparatus and Stimuli

Stimuli were color photographs of faces showing happy, sad, fearful, surprised, disgusted, angry, and neutral expressions. They were obtained from the Karolinska Directed Emotional Faces database (Lundqvist et al., 1998).2 This database has been validated to produce an average biased hit rate of 0.72 and an average Hu of 0.56 per emotion (Wagner, 1993; Goeleven et al., 2008). In the present study, 20 male and 20 female identities were used for each emotion, for a total of 280 images (140 masked, 140 unmasked; half male, half female). Examples are illustrated in Figure 1A. Only images with a normed Hu of 0.50 or higher were used. Adobe Photoshop CS6 was used to create the masked versions by placing an image of a surgical mask over each face's nose and mouth (Figure 1A), using the chin, the bridge of the nose, and the face edge as reference points.
Figure 1 (A) Example stimuli, showing a protagonist with each of the six basic facial emotions with and without a mask. Participants' ratings of how realistic the mask stimuli looked (from 1 = very unrealistic to 10 = very realistic with 5 as neutral) indicated that they found the stimuli to be realistic representations (M = 6.77, SD = 2.03). (B) Trial progression. Trials started with a 600 ms presentation of a blank screen. Then, a preparation screen was shown for 1000 ms. A response screen appeared next showing a picture of a face with a response question ("Which emotion does this person primarily show?") above the image and response options (Anger, Disgust, Fear, Neutral, Happiness, Sadness, Surprise) below the image. Participants were asked to indicate their responses using a mouse. The display remained visible until participants responded or 5000 ms had elapsed, whichever came first. Face stimuli (without masks) reprinted with permission from the FACES (https://faces.mpdl.mpg.de/imeji/) and KDEF databases (https://www.kdef.se/). The authors are willing to share the edited face stimuli upon request, provided expressed permission from the KDEF and FACES creators.
The experiment was administered online. A custom jsPsych script (https://www.jspsych.org/) was used to launch the task on participants' local machines and to collect the data. The images were presented in color against a white background. Screen resolution was detected for each participant's computer, and images were scaled to take up 75% of the vertical screen space.

Design and Procedure

The study used a repeated measures design with two factors: Mask (masked, unmasked) and Emotion (disgust, anger, sad, surprise, happy, fear, neutral). Thus, all six basic emotions and the neutral expression were included to measure differences in emotion recognition for faces wearing masks and faces wearing no masks, covering emotions that depend on the perception of lower (happiness and disgust) versus upper (fear and sadness) face parts.
Face identity was controlled such that half of the participants viewed half of the identities with masks, while the other half of the participants viewed those same identities without masks. Each set contained an equal number of male and female images. Thus, no participant viewed the same identity portraying the same emotion both with and without a mask. This manipulation ensured that participants identified emotions based on facial features and not based on person familiarity.
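As an illustration, this counterbalancing could be implemented along the following lines (a hypothetical sketch, not the authors' code; balancing male and female identities within each half is omitted for brevity):

```python
import random

def assign_stimulus_sets(identities: list[str], seed: int = 0) -> dict:
    """Split face identities into two halves; participant group A sees the
    first half masked and the second half unmasked, and group B the reverse."""
    rng = random.Random(seed)
    ids = identities[:]
    rng.shuffle(ids)
    half = len(ids) // 2
    set1, set2 = ids[:half], ids[half:]
    return {"group_A": {"masked": set1, "unmasked": set2},
            "group_B": {"masked": set2, "unmasked": set1}}

# No participant sees the same identity both masked and unmasked.
sets = assign_stimulus_sets([f"F{i:02d}" for i in range(1, 21)])
assert not set(sets["group_A"]["masked"]) & set(sets["group_A"]["unmasked"])
```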
The procedure is illustrated in Figure 1B, with the task parameters generally consistent with the emotion recognition literature (e.g., Carbon, 2020; Mill et al., 2009). Trials started with a blank screen for 600 ms, followed by a 1,000 ms presentation of a preparation screen informing participants that a picture was coming, so they would know to prepare for the face presentation. Then, a face was shown at the center of the screen and remained visible until 5,000 ms had elapsed or until the participant responded. This response time-out was implemented to guard against a performance ceiling at 100% accuracy. On each trial, using a mouse click, participants selected one of seven options displayed below the image to report the primary emotion shown: Anger, Disgust, Fear, Neutral, Happiness, Sadness, or Surprise, always presented in this order. They were told that they would have a short period of time to respond on each trial, although response times were not analyzed because they would be contaminated by the amount of time it takes to select each response option with the mouse (see also Carbon, 2020). Eight practice trials showing face identities not part of the test set preceded the experiment. The experiment contained a total of 280 trials divided over five blocks. All conditions were presented in random order throughout the task.
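For concreteness, the trial timeline in Figure 1B can be summarized as a simple parameter specification (a descriptive sketch in Python, not the authors' jsPsych script; field names are ours, while the timings and options come from the text above):

```python
# Hypothetical summary of the trial parameters shown in Figure 1B.
TRIAL_SPEC = [
    {"screen": "blank", "duration_ms": 600},
    {"screen": "preparation", "duration_ms": 1000},
    {"screen": "face_with_prompt",
     "prompt": "Which emotion does this person primarily show?",
     "options": ["Anger", "Disgust", "Fear", "Neutral",
                 "Happiness", "Sadness", "Surprise"],
     "max_duration_ms": 5000},  # timing out is counted as an error
]
N_TRIALS, N_BLOCKS, N_PRACTICE = 280, 5, 8
```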
Following the experimental task, participants provided basic demographic information and completed the AQ (Baron-Cohen et al., 2001) and the BFI (Benet-Martínez & John, 1998; John & Srivastava, 1999). The AQ contains 50 items and asks participants to self-report on items measuring cognitive and behavioral characteristics of autistic traits. Higher AQ scores denote the presence of more autistic-like traits and lower social functioning. We used a 24-item short version of the BFI (Benet-Martínez & John, 1998; John & Srivastava, 1999) to assess the levels of extraversion, openness, neuroticism, conscientiousness, and agreeableness (as used by Human et al., 2014). Participants indicated their endorsement of each BFI item on a scale from 1 (strongly disagree) to 7 (strongly agree). Higher scores indicated higher levels of each trait.

Data Handling

Trials containing response anticipations (RTs < 600 ms; 0.275% of the data) and timeouts (RTs > 5,000 ms; 1.63% of the data) were counted as errors and were excluded from analyses. The unbiased hit rate (Hu; Wagner, 1993) was then computed for each participant for each mask and emotion condition.3 Hu reflects the joint probability that (a) a stimulus in a response category was correctly identified (e.g., correctly identifying sadness when sadness was shown) and (b) the response option was correctly deployed (e.g., selecting the sadness response only when sadness was shown). The product of these probabilities yields a number from 0 to 1, which indexes emotion recognition performance controlled for possible response selection bias. Figure 2 plots Hu values as a function of emotion and mask.
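As a concrete illustration, the following minimal sketch (ours, not the authors' code) computes Hu from a stimulus-by-response confusion matrix, following Wagner (1993) and the formula given in Footnote 3:

```python
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "neutral", "happiness", "sadness", "surprise"]

def unbiased_hit_rate(confusion: np.ndarray) -> np.ndarray:
    """Wagner's (1993) Hu per emotion.

    confusion[i, j] = number of trials on which emotion i was shown and
    response j was given (anticipations and timeouts already coded as
    errors or removed upstream)."""
    hits = np.diag(confusion).astype(float)
    presented = confusion.sum(axis=1)  # times each expression was shown
    deployed = confusion.sum(axis=0)   # times each response was chosen
    with np.errstate(divide="ignore", invalid="ignore"):
        hu = np.where((presented > 0) & (deployed > 0),
                      (hits / presented) * (hits / deployed), 0.0)
    return hu

# A participant who answers "fear" on every trial has a raw hit rate of 1.0
# for fear, but Hu stays low because the fear response is deployed incorrectly:
always_fear = np.zeros((7, 7), dtype=int)
always_fear[:, EMOTIONS.index("fear")] = 20
print(unbiased_hit_rate(always_fear)[EMOTIONS.index("fear")])  # (20/20) * (20/140) ≈ 0.14
```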
Figure 2 (A) Unbiased hit rate (Hu) for each emotional expression for faces wearing masks and faces wearing no masks. Dotted lines indicate the mean for each emotion collapsed across mask condition. Replicating typical findings in the literature (e.g., superiority of the happy expression, e.g., Neath, 2012, and lowest performance for fearful expressions, e.g., Calvo & Nummenmaa, 2016), overall recognition performance (Hu) for happy expressions was higher than for all other expressions (ps < .001). The next highest performance was for the neutral expression, which was higher than for anger, disgust, fear, sadness, and surprise (ps < .001). Then came performance for surprised and sad expressions, which did not differ (p = 1.0). Both were reliably higher than for disgust and fear (ps < .001), and surprise was also higher than anger (p = .003). Finally, recognition performance for anger was greater than for disgust and fear (ps < .001), and Hu for disgust was greater than for fear (p < .001). (B) Difference scores (unmasked Hu – masked Hu) as a function of emotional expression. Error bars indicate the standard error of the mean.

Results

Overall Effect of Masks on Emotion Recognition

Our first aim was to assess how occluding face features impacted recognition of emotions. The overall Hu for emotion recognition from unmasked faces was 0.75 (SD = 0.046), indicating that the unoccluded facial expressions were identified well above the chance level of 1/7, or 0.143 (Disgust = 0.71, Anger = 0.76, Sad = 0.75, Surprise = 0.73, Happy = 0.97, Fear = 0.42, Neutral = 0.92).
A repeated measures ANOVA with Mask (masked, unmasked) and Emotion (Disgust, Anger, Sad, Surprise, Happy, Fear, Neutral) as factors was run on these Hu values. Corrected degrees of freedom are reported when Mauchly's test was significant. All follow-up t-tests were two-tailed, paired, and Bonferroni corrected. As illustrated in Figure 2, there were main effects of Mask (F(1, 119) = 1,401.38, MSE = 0.016, p < .001, ηp2 = .92) and Emotion (F(4.08, 485.90) = 548.76, MSE = 0.021, p < .001, ηp2 = .82). These main effects indicated that emotion recognition was significantly reduced overall when faces wore masks (M = 0.52, SE = 0.007) relative to when they did not (M = 0.75, SE = 0.007), with this reduction evident across all emotions (all ts > 7.81, ps < .01; Figure 2A).
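For readers wishing to reproduce this style of analysis, a minimal sketch follows (not the authors' code; the long-format column names are assumed, and statsmodels' AnovaRM does not apply the Greenhouse-Geisser correction reported above, which would require a separate epsilon computation):

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format data assumed: one Hu value per participant x mask x emotion cell,
# with columns 'subject', 'mask' (masked/unmasked), 'emotion', and 'hu'.
df = pd.read_csv("hu_long.csv")  # hypothetical file name

aov = AnovaRM(data=df, depvar="hu", subject="subject",
              within=["mask", "emotion"]).fit()
print(aov)  # F tests for Mask, Emotion, and the Mask x Emotion interaction
```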
Importantly, there was a significant Mask × Emotion interaction, indicating that facial occlusion impacted recognition of some emotions more than others (F(3.88, 462.24) = 90.64, MSE = 0.015, p < .001, ηp2 = .43). We hypothesized that the recognition of emotional expressions that rely more strongly on information conveyed by lower face parts (e.g., disgust, happiness) would be more impacted by lower face occlusion. To examine this, we computed difference scores between unmasked and masked Hu for each emotion.
These data are illustrated in Figure 2B, with larger difference magnitudes denoting a larger impact of facial occlusion. Indeed, Hu was reduced by masks more for disgust than for any other emotion (all ps < .001). Recognizing fear, an emotion that typically depends on perceiving upper face parts, was impacted the least. Recognition of angry expressions was impacted more than recognition of sad, neutral, surprised, happy, and fearful expressions (ps < .003). Recognition of neutral and sad expressions was impacted more than that of fearful, happy, and surprised expressions (ps < .001) and did not differ (p = 1.0). Recognition of surprised and happy expressions was impacted more than performance for fearful expressions and did not differ (p = 1.0). Thus, occluding face parts strongly impacts emotion recognition, especially for emotions for which lower face parts are diagnostic.

Individual Differences in Unmasked and Masked Emotion Recognition

Our second aim was to examine whether these overall effects in emotion recognition varied with individual differences in autistic and personality traits. We performed two multiple regressions (using the backward stepwise method) in which the individual participants' overall AQ4 scores and the scores for each of the Big Five traits (Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism) were entered as predictors of the average Hu for unmasked and masked faces.5
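The backward stepwise method can be sketched as follows (our illustration, not the authors' code; predictor and column names are assumed, and the removal threshold of p > .10 mirrors a common default rather than a value reported here):

```python
import pandas as pd
import statsmodels.api as sm

def backward_stepwise(X: pd.DataFrame, y: pd.Series, p_remove: float = 0.10):
    """Fit an OLS model, then repeatedly drop the predictor with the largest
    p-value until every remaining predictor falls below p_remove."""
    predictors = list(X.columns)
    while predictors:
        model = sm.OLS(y, sm.add_constant(X[predictors])).fit()
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < p_remove:
            return model, predictors
        predictors.remove(worst)
    return None, []

# Hypothetical usage with AQ plus the five BFI trait scores as predictors:
# model, kept = backward_stepwise(df[["AQ", "O", "C", "E", "A", "N"]], df["hu_masked"])
```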
The first model indicated that AQ and personality traits accounted for 4.6% of the variability in emotion recognition performance for unmasked faces; however, the full model did not reach significance (adjusted R2 = .046, F(6, 110) = 1.932, p = .082; AQ: β = −0.237, p = .031; Extraversion: β = −0.141, p = .185; Neuroticism: β = 0.090, p = .393; Openness: β = −0.048, p = .608; Conscientiousness: β = 0.171, p = .089; Agreeableness: β = 0.079, p = .457). The model became reliable with the removal of Openness and Agreeableness, accounting for 5.6% of the variability (adjusted R2 = .056; F(4, 112) = 2.730, p = .033), with the AQ remaining the only reliable predictor in all model iterations (β = −0.258, p = .014). Figure 3A shows the relationship between individual AQ scores and emotion recognition for unmasked faces, indicating that those with fewer autistic traits performed better at emotion recognition than those with more autistic traits.
Figure 3 Scatter plots showing the relationship between individual participants' autistic and personality traits and emotion recognition performance. (A) Individual participants' unbiased hit rate (Hu; y-axis) for emotion recognition from faces wearing no masks as a function of their AQ score (β = −0.258, p = .014). Note that higher values of autistic traits reflect lower overall social competence. (B) Individual participants' Hu performance for emotion recognition from faces wearing masks (y-axis) as a function of their trait agreeableness (β = 0.22, p = .040). (C) Individual participants' Hu performance for emotion recognition from faces wearing masks as a function of trait extraversion (β = −0.22, p = .035).
The predictors in the second model accounted for about 6.7% of variance in the emotion recognition performance for masked faces. Here, the full model was reliable (adjusted R2 = 0.067; F(6,110) = 2.37, p = .033) with the traits of Extraversion and Agreeableness remaining as reliable predictors of emotion recognition (AQ: β = −0.069, p = .523; Extraversion: β = −0.22, p = .035; Neuroticism: β = 0.12, p = .27; Openness: β = 0.11, p = .26; Conscientiousness: β = 0.12, p = .22; Agreeableness: β = 0.22, p = .040). Figure 3B and Figure 3C illustrate the relationship between emotion recognition performance and traits of agreeableness and extraversion, indicating that individuals with higher levels of agreeableness had better emotion recognition performance from faces wearing masks while those higher in extraversion showed worse performance in emotion recognition.6

Discussion

Basic recognition of facial emotions is one of the fundamental ways in which humans understand one another (Addington et al., 2006; Leppänen & Hietanen, 2001). Different face features carry key information for recognizing basic emotions (e.g., the mouth for happiness, the eyes for fear; Ekman, 1999; Smith et al., 2005; Blais et al., 2012; Kret & De Gelder, 2012; Wegrzyn et al., 2017). This suggests that occluding lower face features may negatively impact emotion recognition, particularly for emotional expressions for which these features carry the largest diagnostic value. As such, the COVID-19 crisis, which has seen a wide adoption of mask wearing by the general public, may be impacting not only our physical health but also our capacity for basic social communication. In the present study, we assessed how lower face occlusion by face masks impacted recognition of basic emotions and whether individual differences in autistic- and personality-related traits modulated these overall effects.
Our data indicated a significant overall reduction in emotion recognition from masked faces (an overall 23.1% Hu reduction), ranging from 10.2% for fear to 45.7% for disgust. These results conceptually replicate past reports and extend them by showing reduced emotion recognition from masked faces displaying neutral, fearful, and surprised expressions. Thus, our improved design indicated widespread deficits in emotion recognition from faces wearing masks that span all six basic emotions as well as the neutral expression.
This finding supports the notion that emotion recognition is intimately tied to the perception of facial features and that occluding those features impacts emotion recognition overall and, by extension, the resulting social communication. One potential mechanism by which facial occlusion may affect overall emotion recognition is by preventing typical holistic face perception. If so, similar overall detriments in emotion recognition would be expected to arise when other face parts (e.g., the eyes) are visually occluded. Future studies are needed to examine this question.
Supporting the role of specific individual facial features in the recognition of basic emotions, our data also indicated that the impact of masks varied with emotion. Similar to Carbon (2020), we found that recognition of disgust was impacted the most by face masks, as its diagnostic face features, the corners of the nose and the mouth, were occluded (e.g., Blais et al., 2012; Smith et al., 2005). Anger was the next most impacted, with a performance reduction of 30%, providing further support for the idea that lower rather than upper face features may be more diagnostic for recognizing this emotion (Kotsia et al., 2008; Blais et al., 2012). Recognition of neutral and sad expressions was impacted by masks with a 23% reduction in performance, suggesting that both the lower and upper face contribute significant information for recognizing these expressions (Smith et al., 2005; Sullivan et al., 2007). Finally, recognition of surprise (15% reduction) and fear (10% reduction) was the least impacted by mask wearing, dovetailing with findings that information from the eye region is heavily diagnostic for these expressions (Kret & De Gelder, 2012; Wegrzyn et al., 2017). Unexpectedly, recognition of happy expressions was not much impacted by mask occlusion (14.7% reduction in Hu), despite the proposed diagnostic value of the smile (e.g., Ekman et al., 1980; Gosselin et al., 2010). This suggests that the wrinkles in the corners of the eyes (the Duchenne smile; Ekman et al., 1990) may be a strong cue to happiness and sufficient to facilitate correct recognition of happy expressions. While an argument could be made that happiness was the one expression at ceiling (0.97 Hu for unmasked faces), high Hu for happiness is commonly found (the happy superiority effect; see Neath, 2012). Interestingly, Carbon (2020) reported a larger impact of lower face occlusion on recognition of happy expressions (about 25%), despite also reporting a very high hit rate for unmasked happy faces (about 98%). However, this may have been due to the use of older face actors, whose wrinkles potentially prevented identification of the characteristic happy eye wrinkle. Together, while showing an overall impact of lower face occlusion by masks on emotion recognition, our data also showed larger impairments in the recognition of expressions that depend on diagnostic lower face features (i.e., disgust) than of those that depend on diagnostic upper face features (i.e., fear).
Our second aim was to investigate whether this detriment in performance varied as a function of autistic and personality traits. In line with previous research (McKenzie et al., 2018), our results revealed that participants with more autistic traits displayed lower emotion recognition from unoccluded faces (Baron-Cohen et al., 2001). It has been suggested that such poor emotion recognition could be driven by a general tendency in autism to avoid looking at the eye region of the face and to over-rely on lower face cues (e.g., Neumann et al., 2006; Spezio et al., 2007a, 2007b; Madipakkam et al., 2017). If so, typically developing individuals with a high number of autistic traits should also be particularly impaired in emotion recognition when lower face cues are occluded.
This was not supported by our data, as autistic traits were not reliably related to emotion recognition from faces wearing masks. This suggests that autistic traits in the typically developing population may be linked to a more general impairment in emotion recognition, possibly due to an overall increased difficulty in reading social cues (e.g., Clements et al., 2018; White et al., 2007), rather than to specific issues in reading particular facial cues. Alternatively, it is also possible that autistic traits in the typically developing population do not resemble autistic behavior in clinical samples, and thus a more detailed study of their similarities and differences may be warranted (e.g., Hayward et al., 2018).
We also explored the relationship between personality traits and emotion recognition from faces wearing masks, theorizing that some personality traits may be linked to different preferences for fixating face features or to differences in social exposure. The data showed that trait agreeableness was associated with better recognition of emotions from masked faces, while extraversion was associated with worse recognition. Thus, the increased social exposure hypothesis does not appear well supported by these results, since both trait agreeableness and extraversion are linked with increased social exposure. Alternatively, it is possible that these results reflect trait differences in visual processing patterns. Trait-congruency theories (e.g., Bargh et al., 1988) propose that specific personality traits predispose individuals to seek out particular types of information and to process it in a personality-congruent manner.
In this context, the divergence in the data between those high in agreeableness and those high in extraversion would reflect different preferences for the facial features used during emotion recognition. There is some evidence to support this trait-congruency explanation, with multiple investigations showing associations between personality traits and fixations on the eye region of the face (e.g., Hoppe et al., 2018; Rauthmann et al., 2012). Extraversion has typically been associated with better emotion recognition overall (Matsumoto et al., 2000, Experiment 4; Scherer & Scherer, 2011; also see Cunningham, 1977; Matsumoto et al., 2000, Experiments 1 and 2), so the negative relation in the present study appears specific to lower face occlusion. Indeed, extraverts are theorized to utilize the mouth region preferentially for emotion recognition, displaying longer dwell times, longer average fixations, and faster first fixations to the mouth (Ellingsen et al., 2019).
They also find smiles more rewarding (Smillie et al., 2012) and thus may be more impaired in emotion recognition when this facial region is occluded. The data suggest similar conclusions for trait agreeableness, which has been found to be positively associated with increased mutual eye contact (Broz et al., 2012), suggesting a preference for using information from the eyes during the recognition of emotions. This preference may allow people high in agreeableness to perform better when the lower face is covered, as found in the present study.
Future studies are needed to better understand the nuanced links and mechanisms of influence between autistic traits, personality traits, and emotion recognition.
In sum, the present study provides strong evidence that occluding lower face parts with face masks impacts the recognition of each of the six basic emotions, firmly implicating facial features as one of the key communicative tools for emotional states. Our data also show that this overall impairment in emotion recognition varies across emotions and with individual characteristics of observers, such that some individuals may be more impacted by lower face occlusion. These detriments in such basic aspects of social communication highlight the impact of the COVID-19 pandemic on social function, showing that our social communication is impaired when we interact with others wearing face masks. Beyond the pandemic, these results have importance for health settings in which masks are commonplace. For example, doctor–patient relations require easy interpretation of emotional states for better patient outcomes (Finset & Mjaaland, 2009; Decety & Fotopoulou, 2015) and increased compliance (Finset & Mjaaland, 2009). Anger and sadness are two emotions that commonly arise when dealing with difficult medical problems, and our data suggest that these are among the emotions most impacted by masks.
Future studies are needed to examine the effectiveness of potential mitigation strategies. One such strategy may be to provide additional emotional context when interacting while wearing masks, such as verbal statements or descriptions of emotions; this could be especially helpful for expressing the emotions most impacted by masks. Training to recognize expressions from upper face cues more effectively might be another useful option, particularly for medical professionals. Finally, transparent masks (see Mheidly et al., 2020) would likely allow for normal transmission of facial cues while still providing protection from disease, though it remains to be seen whether such masks can indeed restore normal social interactions in pandemic scenarios and in health settings that use masks daily.

Electronic Supplementary Materials

The electronic supplementary material is available with the online version of the article at https://doi.org/10.1027/1864-9335/a000470
  • ESM 1. Intercorrelations between predictor and dependent variables for the sample (n = 117) included in regression analyses.
Many thanks to Ethan Mendell, Sabrina Provencher, and Zhouran Wang for their assistance with this project.
1Data were excluded from the initial sample of 126 when participants failed to complete the emotion recognition task (n = 4) and when their unmasked–masked unbiased hit rate difference exceeded 2.5 SD of the group mean (n = 2). Prior to the regressions, the AQ and Big Five scores were further examined for the presence of outliers, defined as extreme scores falling more than 2.5 SD from the group mean. Data from three participants were labeled as outliers and were thus excluded from the regression analyses (N = 117; 101 female; M = 20.63 years, SD = 2.68 years).
2Female identities: F01, F02, F02, F05, F06, F07, F09, F13, F16, F19, F20, F21, F22, F23, F24, F26, F27, F32, F34, F35. Male identities: M01, M05, M06, M07, M08, M09, M10, M11, M12, M13, M14, M16, M17, M18, M22, M24, M25, M28, M31, M35.
3Hu = (number of times the expression is correctly identified/number of times the expression is presented) × (number of times the response for that expression is correctly deployed/number of times the response for that expression is deployed).
4The original binary scoring system (Baron-Cohen et al., 2001) assigns a 0 for any disagreement with an item (slightly disagree or definitely disagree) and a 1 for any agreement (slightly agree or definitely agree). However, the alternative scoring method of assigning 1–4 to each response (Austin, 2005) is becoming increasingly popular (e.g., McCrackin & Itier, 2019; Murray et al., 2016), as it allows for examination of the full variation in responses (with scores ranging from 50 to 200) and shows better internal consistency and test–retest reliability (Stevenson & Hart, 2017). Here, we used the four-point AQ scoring. For the Big Five, participants indicated their endorsement of each item on a scale from 1 (strongly disagree) to 7 (strongly agree). The scores from each subscale's questions were summed to obtain an overall score for each personality trait, with higher scores indicating higher levels of each trait. All variables were normally distributed. The correlation matrix is presented in Table E1 (Electronic Supplementary Material, ESM 1).
5We considered using a multilevel model (e.g., LMER) to analyze these data. However, we decided to use traditional ANOVAs instead, as we were concerned that examining the impact of autistic traits and the five personality traits on Hu performance for the seven emotional expressions (happy, fear, anger, sad, surprise, neutral, disgust) and two mask conditions, as well as their interactions, would produce overly liberal model fits given our sample size.

6We also examined whether the AQ and the Big Five predicted overall differences in Hu scores between unmasked and masked faces (i.e., unmasked Hu – masked Hu). This regression model indicated no significant steps (all ps > .05). The step including AQ, Openness, Agreeableness, and Extraversion approached significance (adjusted R2 = .041; F(3,113) = 2.37, p = .063), with the effects of individual predictors consistent with those reported in the main text (AQ: β = −0.176, p = .107; Extraversion: β = 0.118, p = .257; Openness: β = −0.176, p = .07; Agreeableness: β = −0.16, p = .122). Please note that the results of this model should be interpreted with caution due to the potentially restricted range of the Hu difference scores.