Search Results

Now showing 1 - 2 of 2
  • Perceiving emotions in visual stimuli: social verbal context facilitates emotion detection of words but not of faces
    Publication . Blom, Stephanie; Aarts, Henk; Semin, Gün R.
    Building on the notion that the processing of emotional stimuli is sensitive to context, we explored in two experimental tasks whether the detection of emotion in emotional words (task 1) and facial expressions (task 2) is facilitated by social verbal context. Three levels of contextual supporting information were compared: (1) no information, (2) the verbal expression of an emotionally matched word pronounced with a neutral intonation, and (3) the verbal expression of an emotionally matched word pronounced with an emotionally matched intonation. We found that increasing levels of supporting contextual information enhanced emotion detection for words, but not for facial expressions. We also measured activity of the corrugator and zygomaticus muscles to assess facial simulation, as the processing of emotional stimuli can be facilitated by facial simulation. While facial simulation emerged for facial expressions, the level of contextual supporting information did not qualify this effect. All in all, our findings suggest that adding emotionally relevant voice elements positively influences emotion detection.
  • Facial emotion detection in Vestibular Schwannoma patients with and without facial paresis
    Publication . Blom, Stephanie; Aarts, Henk; Kunst, H.P.M.; Wever, Capi C.; Semin, Gün R.
    This study investigates whether facial paresis affects facial emotion detection accuracy in patients suffering from Vestibular Schwannoma (VS). Forty-four VS patients, half with and half without a facial paresis, classified pictures of facial expressions as emotional or non-emotional. The visual information in the images was systematically manipulated by adding different levels of visual noise. The study had a mixed design with emotional expression (happy vs. angry) and visual noise level (10% to 80%) as repeated measures, and facial paresis (present vs. absent) and degree of facial dysfunction as between-subjects factors. Emotion detection accuracy declined as visual information declined, an effect that was stronger for angry than for happy expressions. Overall, emotion detection accuracy for happy and angry faces did not differ between VS patients with and without a facial paresis, although exploratory analyses suggest that the ability to recognize emotions in angry facial expressions was slightly more impaired in patients with facial paresis. The findings are discussed in the context of the effects of facial paresis on emotion detection and, in particular, the role of facial mimicry as an important mechanism for facial emotion processing and understanding.