Cultural Differences in Emotion Recognition and Expression Assignment Paper

The breadth of emotions that our eyes are able to express is truly far-reaching. From joy to longing, from anger to fear, from sadness to disgust, eyes can become powerful windows to our internal states. We use our eyes to take in the world around us and to reflect the world within us. Revealing our inner emotional states through our facial expressions, and interpreting them accurately, is one of the foundations of social interaction.

Whether emotion is universal or social is a recurrent issue in the history of emotion research among psychologists. Some researchers view emotion as a universal construct and hold that a large part of emotional experience is biologically based. However, emotion is not only biologically determined but is also influenced by the environment. Cultural differences therefore exist in some aspects of emotion, one important aspect being emotional arousal level. Affective states can be systematically represented along two bipolar dimensions, valence and arousal, and the arousal level of actual and ideal emotions has consistently been found to differ across cultures. In Western, or individualist, cultures, high-arousal emotions are valued and promoted more than low-arousal emotions, and Westerners also experience high-arousal emotions more often. By contrast, in Eastern, or collectivist, cultures, low-arousal emotions are valued more than high-arousal emotions, and people in the East actually experience, and prefer to experience, low-arousal emotions more often. The mechanisms underlying these cross-cultural differences, and their implications, are also discussed.
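The two bipolar dimensions of valence and arousal described above can be sketched in code. This is a minimal illustrative sketch, not taken from any cited study: the example emotions and their (valence, arousal) coordinates are invented for illustration, on a scale from -1 to 1.

```python
# Toy circumplex-style representation: each affective state is a point on
# two bipolar dimensions, valence (unpleasant..pleasant) and arousal
# (deactivated..activated). All coordinates below are illustrative only.
EMOTIONS = {
    "excitement": (0.8, 0.8),    # high-arousal positive (promoted in Western cultures)
    "calm":       (0.7, -0.6),   # low-arousal positive (valued in Eastern cultures)
    "anger":      (-0.7, 0.7),   # high-arousal negative
    "sadness":    (-0.6, -0.5),  # low-arousal negative
}

def arousal_level(emotion: str) -> str:
    """Classify an emotion as 'high' or 'low' arousal by its second coordinate."""
    _valence, arousal = EMOTIONS[emotion]
    return "high" if arousal > 0 else "low"

print(arousal_level("excitement"))  # high
print(arousal_level("calm"))        # low
```

On this toy scheme, the cross-cultural claim amounts to saying that Western samples report and idealize more high-arousal positive states, while Eastern samples report and idealize more low-arousal positive states.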

We investigated the influence of contextual expressions on emotion recognition accuracy and gaze patterns among American and Chinese participants. We expected Chinese participants would be more influenced by, and attend more to, contextual information than Americans. Consistent with our hypothesis, Americans were more accurate than Chinese participants at recognizing emotions embedded in the context of other emotional expressions. Eye tracking data suggest that, for some emotions, Americans attended more to the target faces and made more gaze transitions to the target face than Chinese participants. For all emotions except anger and disgust, Americans appeared to rely more on a contrasting strategy, in which each contextual face was individually contrasted with the target face, than Chinese participants did. Both cultures were influenced by contextual information, although the benefit of contextual information depended upon the perceptual dissimilarity of the contextual emotions to the target emotion and the gaze pattern employed during the recognition task.

Culture is a huge factor in determining whether we look someone in the eye or the mouth to interpret facial expressions, according to a new study.

For instance, in Japan, people tend to look to the eyes for emotional cues, whereas Americans tend to look to the mouth, says researcher Masaki Yuki, a behavioral scientist at Hokkaido University in Japan.

This could be because the Japanese, when in the presence of others, try to suppress their emotions more than Americans do, he said.

In any case, the eyes are more difficult to control than the mouth, he said, so they probably provide better clues about a person’s emotional state even if he or she is trying to hide it.

Clues from emoticons

As a child growing up in Japan, Yuki was fascinated by pictures of American celebrities.

“Their smiles looked strange to me,” Yuki told LiveScience. “They opened their mouths too widely, and raised the corners of their mouths in an exaggerated way.”

Japanese people tend to shy away from overt displays of emotion, and rarely smile or frown with their mouths, Yuki explained, because the Japanese culture tends to emphasize conformity, humbleness and emotional suppression, traits that are thought to promote better relationships.

So when Yuki entered graduate school and began communicating with American scholars over e-mail, he was often confused by their use of emoticons such as smiley faces 🙂 and sad faces, or :(.

“It took some time before I finally understood that they were faces,” he wrote in an e-mail. In Japan, emoticons tend to emphasize the eyes, such as the happy face (^_^) and the sad face (;_;). “After seeing the difference between American and Japanese emoticons, it dawned on me that the faces looked exactly like typical American and Japanese smiles,” he said.

Photo research

Intrigued, Yuki decided to study this phenomenon. First, he and his colleagues asked groups of American and Japanese students to rate how happy or sad various computer-generated emoticons seemed to them. As Yuki predicted, the Japanese gave more weight to the emoticons’ eyes when gauging emotions, whereas Americans gave more weight to the mouth. For example, the American subjects rated smiling emoticons with sad-looking eyes as happier than the Japanese subjects did.

It is important to understand the differences between young and older adults in emotional states and reaction. Many of the theoretical models studying emotional experience across adulthood predict changes throughout this life stage. A growing number of studies find that, as we age, the way we understand, manage, and react to positive and negative events changes. Different theoretical models have been proposed to explain this phenomenon: (a) Socioemotional Selectivity Theory; (b) Strength And Vulnerability Integration; and (c) Dynamic integration theory.

One of the most widely espoused theories in recent years is Socioemotional Selectivity Theory (SST). SST maintains that time horizons play a key role in motivation (Carstensen, 2006). The future time perspective holds that when the subjective sense of time and its limits changes, our motivational priorities also shift. The theory differentiates two broad categories of goals: those that help us acquire knowledge of the world, and those that help us achieve emotional well-being. As people age, they increasingly perceive time as finite. This perception leads older people to prioritize behaviors or goals from which they derive emotional meaning, while younger people prioritize goals related to knowledge acquisition. For example, Hess and his colleagues have shown that older adults, compared to young adults, weighted negative information related to morality more heavily than information regarding competence when judging strangers and rating their likability (Hess, 2005; Leclerc and Hess, 2007). SST holds that this tendency is even more striking when the categories of goals compete. Moreover, the differences in emotional reactivity do not manifest only in negative emotional states. A recent meta-analysis of 100 independent studies found a reliable positivity effect, with older adults showing a positive bias overall and the younger age group showing a negative bias overall (Reed et al., 2014). The “positivity effect” refers to the tendency of older people to prioritize achieving emotional gratification. SST directly connects thinking about a limited future with the emergence of the positivity effect. In short, young adults focus their attention on, and better remember, negative information, while older adults attend to and better remember positive information (Kennedy et al., 2004). Clearly, individual differences exist.
Life events, and how individuals manage them, may positively or negatively affect the emergence of the positivity effect (Scheibe and Carstensen, 2010).

While emotions and feelings are quite different, we all use the words interchangeably to more or less explain the same thing – how something or someone makes us feel.

However, it’s better to think of emotions and feelings as closely related, but distinct instances – basically, they’re two sides of the same coin.

It’s no secret that boys and girls are different—very different. The differences between genders, however, extend beyond what the eye can see. Research reveals major distinguishers between male and female brains.

Scientists generally study four primary areas of difference in male and female brains: processing, chemistry, structure, and activity. The differences between male and female brains in these areas show up all over the world, but scientists also have discovered exceptions to every so-called gender rule. You may know some boys who are very sensitive, immensely talkative about feelings, and just generally don’t seem to fit the “boy” way of doing things. As with all gender differences, no one way of doing things is better or worse. The differences listed below are simply generalized differences in typical brain functioning, and it is important to remember that all differences have advantages and disadvantages.

Processing

Male brains utilize nearly seven times more gray matter for activity while female brains utilize nearly ten times more white matter. What does this mean?

Gray matter areas of the brain are localized; they are information- and action-processing centers concentrated in specific regions of the brain. For males, this can translate into a kind of tunnel vision: once they are deeply engaged in a task or game, they may not demonstrate much sensitivity to other people or their surroundings.

White matter is the networking grid that connects the brain’s gray matter and other processing centers with one another. This profound brain-processing difference is probably one reason you may have noticed that girls tend to transition between tasks more quickly than boys do. The gray-white matter difference may explain why, in adulthood, women are often strong multi-taskers, while men excel in highly task-focused projects.

Chemistry

Male and female brains process the same neurochemicals but to different degrees and through gender-specific body-brain connections. Some dominant neurochemicals are serotonin, which, among other things, helps us sit still; testosterone, our sex and aggression chemical; estrogen, a female growth and reproductive chemical; and oxytocin, a bonding-relationship chemical.

In part, because of differences in processing these chemicals, males on average tend to be less inclined to sit still for as long as females and tend to be more physically impulsive and aggressive. Additionally, males process less of the bonding chemical oxytocin than females. Overall, a major takeaway of chemistry differences is to realize that our boys at times need different strategies for stress release than our girls.

The Basel researchers designed an experiment to determine whether women perform better on memory tests than men because of the way that they process emotional information. The researchers exposed 3,400 test participants to images of emotional content, finding that women rated these images as more emotionally stimulating than men, particularly in the case of negative images. When presented with emotionally neutral imagery, however, the men and women responded similarly.

After being exposed to the images, the participants completed a memory test. The female participants were able to recall significantly more of the images than their male counterparts. The women had a particularly enhanced ability to recall the positive images. The study’s lead author, Dr. Annette Milnik, explained, “This would suggest that gender-dependent differences in emotional processing and memory are due to different mechanisms.”

Then, fMRI data from 700 participants suggested that women’s stronger reactivity to negative emotional images is linked with increased activity in motor regions of the brain.

Previous studies have suggested that women display heightened facial and motor reactions to negative emotional stimuli.

“In our study, we see a similar pattern with the fMRI data,” Milnik said in an email to The Huffington Post. “One possible explanation would be that women might be better prepared to physically react to negative stimuli than males. Another explanation would be from normative expectations, with women being expected to be more emotional, and also to express more emotions.”

Here is how they differ.

What are emotions?
Imagine this: You sprint through the airport, on the run to catch your flight. While you try to make your way through the crowd of people waiting in line at the security check, you spot an old friend you haven’t seen in ages.

Before you can say anything, you tear up, overwhelmed with excitement (and forget about the rush), while you give your friend a firm hug.

Emotions are lower-level responses occurring in subcortical regions of the brain (the amygdala, which is part of the limbic system) and in the neocortex (the ventromedial prefrontal cortices, which deal with conscious thoughts, reasoning, and decision making).

Those responses create biochemical and electrical reactions in the body that alter its physical state – technically speaking, emotions are neurological reactions to an emotional stimulus.

Strength and Vulnerability Integration (SAVI) is a model associating age-related declines, or physiological vulnerabilities, with an increase in emotion-regulation strategies (Charles and Luong, 2013). SAVI suggests that in adulthood the functioning of the hypothalamic-pituitary-adrenal (HPA) axis and the cardiovascular system diminishes. Activation of these two systems correlates highly with the perception of threat in humans and other species, and thus impaired functioning might contribute to a subjective decline in negative emotional states. SAVI posits that older adults have self-knowledge about their limited time horizon. They are therefore motivated toward positive experiences, and their accumulated emotional experience may help them regulate their emotions. This theory also differentiates between avoidable and unavoidable negative experiences (Charles, 2010). Although older adults are usually oriented and motivated to quickly extricate themselves from negative situations, when negative experiences are highly stressful and inevitable, older adults’ recovery is poorer and presents more serious consequences (Charles and Luong, 2013; Piazza et al., 2013).

Dynamic Integration Theory (DIT) relates the decline in cognitive resources to increased vulnerability in situations involving high arousal (Labouvie-Vief, 2003) and a number of studies defend this view. Keil and Freund (2009) showed that in young adults both pleasantness and unpleasantness increased with high emotional arousal, whereas in older adults, low-arousing stimuli were those experienced as most pleasant.

Advances in research and the continued interest in understanding how the emotional system functions in both aging adults and other life stages or life circumstances have generated the development of different Mood Induction Procedures (MIPs). These MIPs can be used to induce positive and negative emotions in a laboratory. Of all the methods implemented thus far, the presentation of film clips with affective content is currently one of the most effective and widely used MIPs (Gerrard-Hesse et al., 1994; Westermann et al., 1996). Film emotion induction is popular for various reasons: (a) simple standardization; (b) high ecological validity; (c) effectiveness in generating responses in the psychophysiological, motor and cognitive systems; (d) capacity to sustain an emotion at both subjective and physiological level for a reasonable time (Carvalho et al., 2012; Jenkins and Andrewes, 2012); and (e) facility to generate discrete emotions (Schaefer et al., 2010). Emotion induction by film clips is especially effective in eliciting negative emotions (Gerrard-Hesse et al., 1994; Westermann et al., 1996; Fernández-Aguilar et al., unpublished). In the literature, there are various published catalogs of film clips for use in research requiring elicitation of different emotions. As emotional targets, these catalogs have examined basic emotions such as anger, fear, disgust, sadness and amusement (Philippot, 1993). Some sets of clips have also included emotions such as surprise and satisfaction (Gross and Levenson, 1995; Rottenberg et al., 2007); tenderness (Schaefer et al., 2010); happiness and mixed emotions (Jenkins and Andrewes, 2012; Samson et al., 2016; Gilman et al., 2017).

Other mood induction procedures have worked successfully to assess emotional reactivity in older adults. For example, the Italian version of the Affective Norms for English Words (ANEW) worked successfully in both healthy aging individuals and Alzheimer’s dementia patients (Mammarella et al., 2017; Di Domenico et al., 2016). However, given the large body of work on film clips as an emotion induction procedure, it is striking that only a few studies have examined the effect of this technique in aging research, and with inconsistent results. Beaudreau et al. (2009) studied emotional reactions in older adults using the set compiled by Gross and Levenson (1995). They found that older adults reported more anger and less amusement compared to younger adults. The findings of Jenkins and Andrewes (2012) were more generalized: they found that older adults reported higher emotional intensity in response to positive and negative stimuli, especially for clips eliciting fear and amusement. The study by Fajula et al. (2013) revealed similar data, but only in the case of negative emotions. Using the set compiled by Philippot (1993), they found that older adults reported higher intensity in the four primary negative emotions (fear, anger, disgust, and sadness) and that young adults reported higher intensity for joy and happiness.

Furthermore, there is a surprising lack of studies on emotion induction addressing other positive emotions apart from the global category of happiness. Attachment-related emotions such as love or tenderness are not usually included. In fact, to date, they have been included in only one database of film clips (Schaefer et al., 2010). Attachment emotions play a significant role in biological, emotional and social development and thus stimuli related to these emotions should be utilized in research on aging. Moreover, different aging models propose a positivity effect whereby older adults are motivated by emotion regulation strategies that maintain positive affective states and by enhanced emotional regulation to recover from negative affect states (Reed et al., 2014). Older adults have been found to favor positive information over negative information in memory and attention (Mather and Carstensen, 2005).

The ambiguity of the previous results motivated us to examine differences in young and older adults as regards their emotional responses when using film clips as the mood induction procedure. This may broaden our knowledge of the characteristics of emotional responses in older adults and how these are explained by models of aging. It also provides the possibility to identify differences between young and older adults in both baseline state and processes of emotional recovery.

Our focus on the baseline state draws on the use of neutral stimuli in a wide range of studies on MIPs. As well as using emotional target stimuli, these studies also include neutral stimuli in their film sets. Neutral stimuli are used because they enable each participant’s baseline data to be obtained before starting the experiment, and because they facilitate emotional recovery following the induction of intense emotions. The literature recommends using stimuli free of any type of emotional content and with idiosyncratic characteristics similar to those of the stimuli to be used in the selected MIP (Hewig et al., 2005; Rottenberg et al., 2007). Furthermore, the use of neutral stimuli may help obtain a precise measure of the induction capacity of a specific MIP, by considering, intra-individually, the differences between participants’ states during exposure to the neutral stimuli and to the emotional target stimuli.

The main purpose of this work is to expand our knowledge about fluctuations in positive and negative emotions in older adults when using film clips as a MIP. We compare emotional responses between young and older adults and study the differences between positive and negative induction. To this end, we used clips previously validated in a population of young Spanish adults (see Fernández et al., 2011), most of which were developed by Schaefer et al. (2010). The following hypotheses were considered: (1) negative mood induction will be more effective than positive mood induction in both young and older adults; (2) young and older adults will respond differently to the different negative emotional states induced; (3) young and older adults will respond differently to the different positive emotional states induced; (4) arousal levels will be higher in young adults than in older adults; (5) baseline state differs between young and older adults and will determine the strength of negative and positive mood induction; and (6) emotion regulation after mood induction will be easier for older adults than for young adults.

Participants

The final sample comprised 140 volunteers aged between 18 and 84 years (M = 39.02, SD = 25.32, 68.83% women). From the initial sample, 4 older adults and 7 young adults were excluded due to depressive symptoms. The participants were recruited from a research volunteer pool at the Department of Psychology at the University of Castilla-La Mancha (UCLM) Medical School, from an association at the Universidad de Mayores (a university program for older adults), and from two socio-cultural centers in the city of Albacete. Participants were divided into age groups to form a younger group of 83 participants aged 18–26 years (M = 18.87, SD = 1.63, 69.9% women) and an older group of 57 participants aged 60–84 years (M = 69.74, SD = 6.56, 68.4% women). Participants were not receiving psychotropic treatment, reported no drug use, and had no previous history of psychological, psychiatric or neurological disorder, according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5). They presented no auditory or visual impairments other than requiring corrective lenses. All were of Caucasian ethnicity and native Spanish speakers. They gave voluntary consent to take part in the study without receiving any type of remuneration, in accordance with the requirements of the approved ethics procedure of the Clinical Research Ethics Committee of the Albacete University Hospital.

Measures

Diagnostic Evaluation

As depressive symptomatology may affect emotional response, we administered the Beck Depression Inventory II (BDI-II) (Beck et al., 1961) prior to the experiment. The BDI-II is a self-report questionnaire that assesses symptoms of depression including anhedonia, sadness, loss of interest or energy, disturbances in eating and sleeping, loss of concentration, and suicidal ideation. On the BDI, scores between 10 and 15 are considered in the dysphoric range and scores of 16 or above represent the depressed range (Kendall et al., 1987). Subjects scoring over 16 were excluded from our study. In the case of the older adults, the Mini Mental State Examination (MMSE) (Folstein et al., 1975) was used to rule out cognitive impairment. The MMSE is a screening tool measuring symptoms of dementia such as disorientation, alterations in memory, and alterations in the capacity for abstraction or in language. On the MMSE, scores between 9 and 11 are considered in the dementia range, scores between 12 and 24 indicate cognitive impairment, and scores between 24 and 26 suggest suspicion of pathology. Subjects scoring lower than 27 were excluded from our study. Both the BDI-II and the MMSE were administered in a paper-and-pencil version.

The Positive and Negative Affect Schedule, state version (PANAS; Watson et al., 1988), was used to assess positive affect (e.g., interested, excited, proud) and negative affect (e.g., distressed, ashamed, upset) through 20 items with answers ranging from 0 (“not at all”) to 4 (“extremely”). This questionnaire was administered electronically just before the start of the experimental session, to assess mood prior to the emotion elicitation procedure.
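PANAS-style scoring is a simple sum of item ratings within each subscale. The sketch below is a hedged illustration only: the actual PANAS assigns ten specific adjectives to each subscale, whereas here the item lists are truncated to the six example adjectives named in the text.

```python
# Illustrative PANAS-style scoring: sum the 0-4 ratings of positive-affect
# items and negative-affect items separately. Item lists are truncated to
# the examples mentioned in the text, not the full 20-item schedule.
POSITIVE_ITEMS = {"interested", "excited", "proud"}
NEGATIVE_ITEMS = {"distressed", "ashamed", "upset"}

def panas_scores(ratings: dict) -> tuple:
    """Return (positive affect, negative affect) sums from item ratings."""
    pa = sum(v for k, v in ratings.items() if k in POSITIVE_ITEMS)
    na = sum(v for k, v in ratings.items() if k in NEGATIVE_ITEMS)
    return pa, na

pa, na = panas_scores({"interested": 3, "excited": 2, "proud": 1,
                       "distressed": 4, "ashamed": 0, "upset": 2})
print(pa, na)  # 6 6
```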

Measurement of Emotional Response

The subjective emotional response was evaluated using dimensional measures. The Self-Assessment Manikin (SAM) (Bradley and Lang, 1994) is a self-report instrument that assesses emotional response by measuring affective valence, arousal, and dominance or emotional control. Considering the dimensional structure of affect (Russell and Barrett, 1999), we administered the items measuring valence and arousal. These two dimensions are the most commonly used in the literature (Russell, 1980; Watson et al., 1988) and, furthermore, permit comparison with somato-physiological measures. Thus, participants rated, on a 9-point Likert-type scale, how pleasant/happy/amused (9) or unpleasant/unhappy/sad (1) and how aroused (9) or relaxed (1) they felt while watching the emotional video clips. The questionnaire uses graphic figures to represent the different emotional states and is therefore rapid and simple to administer in both age groups, regardless of participants’ educational level.

Procedure

We selected 54 scenes from HD films dubbed in Spanish with an average length of 2′38″ (see Table 1). These fragments were among those in a battery of audiovisual stimuli validated in a population of young Spanish adults (see Fernández et al., 2011). The selected excerpts maintained the same features used in previous studies (Rottenberg et al., 2007; Schaefer et al., 2010). Furthermore, we added a scene from the film 127 Hours (Colson et al., 2010) to the disgust category, which presented the characteristics of stimuli used for disgust in previous studies. In accordance with the previously published film clip batteries, each segment was expected to induce an emotion from a specific category: amusement, tenderness, anger, sadness, disgust, fear, or a neutral state.

Philosophical and psychological theory has traditionally focused on intra-individual processes that are entailed in emotions. Recently sociologists, cultural anthropologists, and also social psychologists have drawn attention to the interpersonal nature of emotions. In this chapter we focus on the influence of others on emotional experiences and expressions. We summarise research on social context effects which shows that both emotional expression and experience are affected by the presence and expressiveness of other people. These effects are most straightforward for positive emotions, which are enhanced in the company of others. In the case of negative emotions, the effects of social context depend on the circumstances in which the emotion is elicited, and on the role of other persons in this situation. We discuss these social context effects in the light of a more general theoretical framework of social appraisal processes.

In the last post, we focused on the idea that a thought comes before an emotion. So once we’ve had that all important thought, and we end up feeling something, what are the forces out there that control how we express those feelings?

Culture

Expressions of emotion can differ and mean different things depending on the cultural context. Stereotype alert here: the British stiff upper lip might seem a bit cold here in North America; the way Canadians like to point out their own faults could be seen as a sign of weakness in the US; the lavish outpouring of emotion at an Italian family gathering might seem overwhelming to a Japanese family.

Gender

Women are more likely to show vulnerability than men. Men are generally less shy about revealing their strengths than women. Women often score higher than men on tests measuring how well a person can identify and name the emotions of others. Naturally, all of these statements refer to men and women as groups. No one is trying to say every woman or every man is like this, but overall group statistics based on gender can tell us some useful things.

Social conventions – at least in North America

Sometimes society tells us: hold it, those emotions are not acceptable, none of that, thank you kindly. Men shouldn’t cry in public (unless they are athletes being traded from their team or retiring), women shouldn’t be angry, and you don’t tell your life story to the barista at Starbucks when he asks how you’re doing. Society also gives us the message that only positive feelings are acceptable, and not even too much of that, please. If you’ve lost a loved one you do get a period of grief, but life is for the living; you’re meant to get over it, or barring that, don’t talk about it.

Social roles

Your social role can determine how and what types of emotion you can express, where you can do that expressing, and with whom. The boss doesn’t take an employee aside and talk about a nagging spouse (or at least he or she shouldn’t). The leader of a country doesn’t get on TV and collapse in tears due to feeling overwhelmed with the roles of the office.

Emotional contagion

Have you ever been to a funeral where you felt in control of your emotions and then you see another person start sobbing and you fall apart? The transfer of emotion from one person to another can affect emotional expression. We can also find that certain people wind up our emotions and others make us feel all mellow yellow.

Fear of self-disclosure

We often limit our emotional expression because giving away too much to others can be risky. It makes us vulnerable. We might be misunderstood, or maybe we’ll make people uncomfortable, or maybe our emotional honesty will be used against us.

So – what’s the point, Saying What Matters lady?

We’re working at getting to know more about ourselves and our emotional expression so we can get out in the world and say what matters. Being aware of some of the forces that operate behind the scenes when it comes to expressing our emotions is helpful as we pursue this goal.

Then he and his colleagues manipulated photographs of real faces to control the degree to which the eyes and the mouth were happy, sad or neutral. Again, the researchers found that Japanese subjects judged expressions based more on the eyes than the Americans, who looked to the mouth.

Interestingly, however, both the Americans and Japanese tended to rate faces with so-called “happy” eyes as neutral or sad. This could be because the muscles that are flexed around the eyes in genuine smiles are also quite active in sadness, said James Coan, a psychologist at the University of Virginia who was not involved in the research.

Japanese Communication

Is the person in front of me right now angry or happy? This may sound like an obvious question, but in fact it is not always as easy to judge as it may seem. It is very likely that the smiling face of an innocent child really does show that they are happy, but your subordinate at work who approaches you with a smile may actually be feeling very angry.

Japan has long been regarded as a society where people read the atmosphere. As social animals, we humans live together for better or worse by reading the atmosphere as well as each other’s feelings, to a greater or lesser extent, in order to maintain good relations with each other. The act of guessing how another person is feeling is one part of reading the atmosphere.

How then, do we read people and understand how they are feeling? One source of information for doing so is language. However, most of us have had the experience of someone responding to an email by saying, “fine, understood,” which causes you to wonder whether the person was really happy with the arrangement or not. In face-to-face communication, one can use information from various aspects of non-verbal communication, such as the expression on the person’s face, the tone of their voice, and so on, but with email, where it is very difficult to put across non-verbal communication information, misunderstandings can easily arise.

An effective source of information in reading how someone is feeling is their facial expression. Of course, people can put on a brave face to cover up their true emotions, so we cannot always rely on facial expressions. Most people reading this article, however, have also seen firsthand that it is harder to control one’s facial expressions to hide emotions than it is to control one’s language.

What makes this all the more difficult is that the meanings of facial expressions in a given situation depend on the culture. The common expressions “laugh with your face and cry in your heart” and “apologize with a smile” are very well known to Japanese people, but to many people from other countries they would seem peculiar and difficult to understand. These kinds of differences in expressing emotions and reading expressions can create cultural barriers, and often place obstacles in the path of cross-cultural communication that are an even greater impediment than language barriers. This is a serious issue, which can cause great misunderstandings with people from other countries in situations such as international negotiations.

Japanese People are Sensitive to Tone of Voice

Surprisingly, in the fields of psychology and neuroscience, research on how people read the emotions of others has focused almost exclusively on facial expressions. Many puzzles therefore remain regarding the cultural differences in the human capacity to judge the emotions of others based on information from multiple sensory channels, such as the other person’s face and voice, as people do every day in ordinary situations.

Therefore, the author decided to address this issue in collaboration with Professor Beatrice de Gelder from Tilburg University, Holland, and in the following I would like to present the results of our joint Japanese-Dutch international research. We conducted a cross-cultural examination of how people judge the emotions of others by connecting the information they read from the other person’s face with the information they read from the other person’s voice, using Japanese and Dutch students as participants. In the experiments, we made videos in which the subjects’ vocal and facial expressions were congruent and videos in which they were incongruent, and had the participants watch both (see Figure 1). We then asked the participants to focus only on either the face or the voice and determine the emotion of the person in the video. The results showed that, compared to Dutch people, Japanese people who were asked to focus only on the face still remained strongly influenced by the tone of voice, which they were supposed to ignore. In contrast, when asked to focus on the voice, they were less strongly influenced than the Dutch people were by the facial expressions they were supposed to ignore (see Figure 2). In other words, this research showed that when Japanese people judge the emotions of others, they have a strong tendency to automatically pay attention to the tone of voice.

Figure 1: An example of a video that was shown to participants in our research experiment. We created an edited video of a person who was saying words with neutral meaning but saying them either happily or angrily, and we set conditions where the facial expression and voice were congruent (for example, both the face and voice expressed happiness) or incongruent (for example, the face looked happy but the voice sounded angry). The participants were asked to focus on either the voice or face according to the conditions involved, and while ignoring the other aspect, to determine the emotion of the person in the video. We adjusted the level of difficulty to make it the same for judging both the voice and the face by adding noise to the faces, etc.

Figure 2: If one compares how strongly Japanese and Dutch students are influenced automatically by information they are supposed to be ignoring when judging the emotion of another person, when focusing on facial expression, Japanese people were more influenced by the voice which they were supposed to be ignoring (21.0%) than Dutch people (12.3%) were. In contrast, when focusing on the tone of voice, Japanese people were hardly influenced at all by the facial expression which they were supposed to be ignoring (2.6%), as compared to Dutch people (10.0%).
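The group comparison in Figure 2 reduces to simple arithmetic. The sketch below uses the percentages quoted above; the "gap" measure itself is an illustrative assumption for summarizing the comparison, not a statistic reported by the researchers.

```python
# Summarize the Figure 2 interference percentages: how much more (or less)
# one group is swayed by the to-be-ignored channel than the other group.

def interference_gap(influenced_pct_a, influenced_pct_b):
    """Difference (in percentage points) between two groups' susceptibility
    to information they were instructed to ignore."""
    return influenced_pct_a - influenced_pct_b

# Focusing on the face: Japanese participants were influenced by the ignored
# voice on 21.0% of trials, Dutch participants on 12.3%.
face_focus_gap = interference_gap(21.0, 12.3)   # Japanese minus Dutch

# Focusing on the voice: Japanese 2.6% vs. Dutch 10.0% influenced by the face.
voice_focus_gap = interference_gap(2.6, 10.0)

print(round(face_focus_gap, 1))   # 8.7  -> Japanese more voice-driven
print(round(voice_focus_gap, 1))  # -7.4 -> Japanese less face-driven
```

A positive gap when focusing on the face and a negative gap when focusing on the voice is exactly the crossover pattern the text describes.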

These results show that culture influences the mechanism in the brain that combines information received from different sensory organs, such as things that have been seen and heard. This means that even the wiring of the brain differs according to culture. In all probability, in the process of adapting to their cultural and linguistic environment, Japanese people learn to weight what they see and hear in the way most appropriate to that environment. What aspects of culture and language, then, are linked to this tendency to emphasize the voice? This is something that should become clear with further research.

Emotional Communication among Different Cultures

From the results of the research described above, one can partly explain why misunderstandings regarding emotions occur between different cultures. If the speaker is a Japanese person and the listener is a non-Japanese person, then the two parties’ reliance on facial expression and tone of voice respectively is likely to differ. If the speaker is smiling but their voice contains anger, a listener who relies on the speaker’s facial expression may not notice the anger, and mistake the smile for satisfaction. In such a case, any conversation that follows will not proceed smoothly. In this way, intercultural misunderstandings may in part be caused by the methods of expressing emotion and of reading another person’s emotions differing according to the culture of the speaker and listener.

In situations where the other person may not readily display their real emotions, such as in negotiations for business and the like, the ability to read the emotions of the other party could be extremely useful. Based on the results of the above research, it can be said that if the other party is Japanese, there is a higher probability that you will be able to judge their true emotion by focusing on their voice. In this experiment too, for both Japanese and Dutch participants, when they focused on the other person’s face they were more likely to read the emotion by their facial expression, and when they focused on the other person’s voice, they were more likely to read the emotion by their voice. This means that depending on whether one focuses on the face or the voice, the emotion that can be read may be different.

It is often said that Japanese people do not express their emotions much. But is that really true? If this is a discourse created from the impressions of Europeans and Americans looking at Japanese people, then it is only a matter of cultural relativity. It may well be an impression given because Japanese people tend to express their emotions less when compared to the standards of facial expressions and gestures displayed by Europeans and Americans; in fact, Japanese people may be expressing their emotions through their voices, and therefore by a different means. Moreover, in conversations between Japanese people, is it not fair to say that a highly developed mastery of reading the atmosphere is being used to the full, with slight changes in tone of voice both conveying emotion and being used to read the emotions of the other person?

Toward Technologies that Break Cultural Barriers

In recent years, dramatic leaps have been made in speech recognition and language translation technology by computers, and it is likely that language barriers will rapidly be eliminated going forward. But what about cultural barriers, in terms of emotions and the like? We often misinterpret the emotions of even our close friends and family. When it comes to translating and properly understanding the emotions of foreigners, it seems the walls between cultures will still exist long into the future. But the author intends to develop the research described above step by step, toward an engineering application for emotion translation technology that combines emotions expressed through both facial and vocal expressions. This would be a new communications technology to “translate” the emotions expressed by a speaker from a different culture. With the assistance of the Ministry of Internal Affairs and Communications’ Strategic Information and Communications R&D Promotion Programme (SCOPE), I would like to go on to perform basic research into this area.

Note 1: This research has been made possible thanks to support from a Post-Doctoral Fellowship for Research Abroad from the Japan Society for the Promotion of Science (2008-2009), a Grant-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science and Technology (19001004), and the European Commission (COBOL FP6-NEST-043403).

The author was born in Tokyo in 1975. He graduated from Saitama Prefectural Kawagoe High School and then from a special course in Psychology at the School of Letters, Arts and Sciences I at Waseda University. He completed the PhD course at the University of Tokyo Graduate School of Humanities and Sociology, and holds a PhD in psychology. He served as a Researcher at the National Rehabilitation Center for Persons with Disabilities, a Researcher at the Research Institute of Electrical Communication, Tohoku University, a Research Associate at the University of Tokyo Graduate School of Humanities and Sociology, a Visiting Researcher at Tilburg University, Holland (on a Post-Doctoral Fellowship for Research Abroad from the Japan Society for the Promotion of Science), and more, before attaining his current position as Assistant Professor at the Waseda University Institute for Advanced Study. His areas of specialization are psychology and cognitive science. His main published works include “Workshop on Cognitive Psychology (Ninchi shinrigaku waakshoppu)” (co-authored; Waseda University Press), etc.

Research has shown that the expressive muscles around the eyes provide key clues about a person’s genuine emotions, said Coan. Because Japanese people tend to focus on the eyes, they could be better, overall, than Americans at perceiving people’s true feelings.

Although this might be a very useful skill, it could also have potential drawbacks, Yuki pointed out: “Would you really want to know if your friend’s, lover’s, or boss’s smile was not genuine? In some contexts, especially in the United States, maybe it is better not to know.”

Examined the impact of individualism–collectivism at the cultural and individual level on the expression of emotion in Japan and the US. 100 students at an American university and 100 students at a Japanese university rated their anticipated degree of comfort in expressing a variety of emotions to in-group and out-group members and completed a 29-item individual-level individualism–collectivism scale. Results show that individualism–collectivism expectations at the cultural level were partially supported, and only weak effects of individualism–collectivism at the individual level were found. The data are consistent with socialization into individualistic and collectivistic values as well as the lessening of these influences in US and Japanese society. They support the idea that individualism–collectivism is not a comprehensive and precise dimension but rather a loose collection of many different cultural characteristics.

Want to know how a Japanese person is feeling? Pay attention to the tone of his voice, not his face. That’s what other Japanese people would do, anyway. A new study examines how Dutch and Japanese people assess others’ emotions and finds that Dutch people pay attention to the facial expression more than Japanese people do.

“As humans are social animals, it’s important for humans to understand the emotional state of other people to maintain good relationships,” says Akihiro Tanaka of Waseda Institute for Advanced Study in Japan. “When a man is smiling, probably he is happy, and when he is crying, probably he’s sad.” Most of the research on understanding the emotional state of others has been done on facial expression; Tanaka and his colleagues in Japan and the Netherlands wanted to know how vocal tone and facial expressions work together to give you a sense of someone else’s emotion.

For the study, Tanaka and colleagues made a video of actors saying a phrase with a neutral meaning—“Is that so?”—two ways: angrily and happily. This was done in both Japanese and Dutch. Then they edited the videos so that they also had recordings of someone saying the phrase angrily but with a happy face, and happily with an angry face. Volunteers watched the videos in their native language and in the other language and were asked whether the person was happy or angry. They found that Japanese participants paid attention to the voice more than Dutch people did—even when they were instructed to judge the emotion by the faces and to ignore the voice. The results are published in Psychological Science, a journal of the Association for Psychological Science.

This makes sense if you look at the differences between the way Dutch and Japanese people communicate, Tanaka speculates. “I think Japanese people tend to hide their negative emotions by smiling, but it’s more difficult to hide negative emotions in the voice.” Therefore, Japanese people may be used to listening for emotional cues. This could lead to confusion when a Dutch person, who is used to the voice and the face matching, talks with a Japanese person; they may see a smiling face and think everything is fine, while failing to notice the upset tone in the voice. “Our findings can contribute to better communication between different cultures,” Tanaka says.

This study investigated the ability of non‐Hispanic White U.S. counseling psychology trainees and Japanese clinical psychology trainees to recognize facially expressed emotions. Researchers proposed that an in‐group advantage for emotion recognition would occur, women would have higher emotion‐recognition accuracy than men, and participants would vary in their emotion‐intensity ratings. Sixty White U.S. students and 60 Japanese students viewed photographs of non‐Hispanic White U.S. and Japanese individuals expressing emotions and completed a survey assessing emotion‐recognition ability and emotion‐intensity ratings. Two four‐way mixed‐factor analyses of variance were performed, examining effects of participant nationality/race, participant gender, poser nationality/race, and poser gender on emotion‐recognition accuracy scores and intensity ratings. Results did not support the in‐group advantage hypothesis; rather, U.S. participants had higher accuracy rates than Japanese trainees overall. No gender differences in accuracy were found. However, respondents varied in their intensity ratings across gender and nationality. Implications for training applied psychology students and for future research are presented.

Emotion‐recognition universality suggests that all humans recognize six or seven basic emotions (anger, contempt, disgust, fear, happiness, surprise, and sadness; Ekman, 1971; Matsumoto, 2002). Conversely, the cultural relativist or specific position argues that basic emotion recognition varies widely across cultures (Elfenbein & Ambady, 2002). Often, these positions conflate emotion recognition and expression as similar concepts, yet it is important to remember that these concepts are two different processes. This paper focuses on emotion recognition.

Emotion recognition is relevant to training applied psychologists since they often incorporate clients’ experiences of emotion in practice, regardless of theoretical orientation, work setting (e.g., schools, mental health clinics), or clients (adolescents, adults; Hutchison & Gerstein, 2017; Wester, Vogel, Pressly, & Heesacker, 2002). Emotion recognition, through facial expressions, is a typical first step in the perception and eventual reflection or interpretation of emotion, which are important to developing interpersonal sensitivity and nonverbal communication skills (Hall & Bernieri, 2001). Further, processes involved in feeling or expressing emotions often vary across cultures, for example between persons in the United States and Japan (Sue & Sue, 2008). Gerstein and Ægisdóttir (2012) reported that mental health professionals1 tend to apply Western or U.S.‐centric concepts of emotion to clients from non‐Western cultures without considering cultural differences in emotional expression. Given the differences in emotional experiences and expression between persons in the United States and Japan (Mesquita & Walker, 2003), which impact the ability to recognize emotion, it seemed relevant to study emotion recognition among mental health trainees affiliated with these two cultural groups to better inform education practices. As discussed later, this study focused on emotion recognition of non‐Hispanic White U.S.2 and Japanese psychology graduate students.

Emotion Recognition and Intensity Research in Psychology

Early researchers found initial support for the universality of emotion recognition of basic emotions among participants in New Guinea who had not been exposed to outsiders’ understanding of facially expressed emotions (Ekman & Friesen, 1971). This study was replicated with many samples and the emotions of fear, anger, surprise, and happiness (Ekman, 1971). Further, Matsumoto (1992) found that for Japanese and U.S. individuals, happiness was the easiest emotion to recognize and fear the most difficult. A review by Russell (1994) also concluded that there was enough evidence to support the universality hypothesis of emotion recognition.

While ample research on emotion recognition and intensity exists in other psychological specialties (e.g., social psychology), little exists in applied psychology. However, Machado, Beutler, and Greenberg (1999) compared U.S.‐based therapists to non‐therapists in their ability to accurately identify emotions presented by a client in a videotaped counseling session. The researchers found that experienced therapists were more accurate than non‐therapists in identifying the target emotion. The design of that study, however, was weakened by using non‐standardized stimulus instruments (e.g., videos) and a broad definition of the therapists’ level and type of clinical experiences. In another project, Hutchison and Gerstein (2012) studied the ability of counseling psychology and counseling trainees and undergraduate non‐psychology majors in the United States to accurately identify seven facially expressed emotions. They found no differences between undergraduates and graduate counseling trainees in their accuracy rates of emotions, contradicting Machado et al.’s results that experienced therapists were better at identifying emotions than non‐therapists. The findings of these two studies, however, are difficult to compare due to their distinct methods.

In addition to emotion recognition accuracy, there is evidence that people differ in their perceptions of the intensity of facially expressed emotions (e.g., perceived strength of an emotion) based on their ethnic identity, the emotion, and the poser’s ethnic background (e.g., Hutchison & Gerstein, 2012; Matsumoto, 1993). Placed in an applied context, like therapy or business negotiations, accurately identifying emotional intensity is an important aspect of emotion recognition (Hutchison & Gerstein, 2017). Accurate assessment is necessary since researchers have found that inaccurate assessment of emotion intensity may impact relationships negatively, as a result of miscommunication (Iwakabe, Rogan, & Stalikas, 2000). Further, when comparing therapists to non‐therapists in perceptions of clients’ emotions, Machado et al. (1999) found variation in the range of intensity ratings. Machado et al. suggested that practicing therapists and trainees did not consistently assess clients’ expressed emotional intensity. Hutchison and Gerstein (2012) also found that participants rated female posers’ expressions as more intense than male posers’, and rated Japanese posers’ facial expressions as more intense than Caucasian American posers’.

Cultural Differences and In‐Group Advantage in Emotion Recognition

Cultural differences are also important when discussing emotion recognition. A comparison of U.S. and Japanese individuals’ beliefs about emotional expressivity, an antecedent to emotion recognition, illustrates some differences in U.S. and Japanese emotional expression and their associated values or expectations. U.S. applied psychologists often emphasize and value high levels of emotional expressiveness from clients (Sue & Sue, 2008). Such expressivity is a component of emotion recognition because expressing emotions on the face is a way clients communicate their emotions to others. In many Japanese cultural contexts, however, maintaining silence or modifying/restricting one’s emotions is a form of respect and reverence. Thus, a common Japanese social norm valuing emotional restraint or modification (Hwang & Matsumoto, 2012) may conflict with many U.S.‐based therapists’ focus on high emotional expressivity in sessions (Sue & Sue, 2008). When working with people whose emotional norms (often called cultural display rules) differ from those commonly held by many in the United States, therapists may make assumptions about emotional expressivity that may or may not hold. These assumptions may create misinterpretations and potential harm to clients (Sue & Sue, 2008). Further, considering that a primary goal of applied psychology training programs is to produce culturally competent professionals (Kaslow et al., 2009), trainees must gain knowledge of non‐Eurocentric models of emotion, specifically emotion expressivity. This can be achieved partly through training on the accurate recognition of facial expressions and factors influencing emotional expression.

Research on the in‐group advantage provides additional insight into one way to address cultural differences in emotion recognition and intensity ratings. When describing this advantage, Elfenbein and Ambady (2002) stated that individuals are more accurate in judging facially expressed emotions of persons in their own versus another culture. They conducted a meta‐analysis of 97 studies of emotion recognition and found a moderate effect size (r = 0.55). They concluded that their results supported the in‐group advantage hypothesis. Prior research involving posed and spontaneous expressions of emotion, however, has yielded mixed results (Matsumoto, 2002; Matsumoto, Olide, & Willingham, 2009).

Gender also impacts emotion recognition. Women have been found to perform better than men in accurately decoding facially expressed emotions across thousands of participants in various countries (Biehl et al., 1997; Hall, Carter, & Horgan, 2000; Merten, 2005; Thompson & Voyer, 2014). Further, researchers have discovered evidence that women were faster than men in recognizing positive and negative emotions and better decoders of emotions. Additionally, judges of emotion more easily identified an emotion that a woman expressed (Matsumoto, 1992; Wester et al., 2002). In contrast, Hutchison and Gerstein (2012) found no differences between male and female counseling trainees and undergraduates in their ability to recognize emotions.

The magnitude of gender differences appears to be small to medium and this area lacks investigations of moderating factors that likely impact such differences (Thompson & Voyer, 2014). Researchers have suggested that rather than gender per se as the source of these differences, child‐rearing practices and gender socialization roles likely influence emotion recognition. Others propose biological explanations or underlying neurobiological mechanisms, although this research is often contradictory. Further, Thompson and Voyer’s (2014) meta‐analysis discovered that gender differences were frequently moderated by other variables, including type of emotion (anger and negative emotions), sensory modality, the posers’ gender in the stimuli, and age. Because the findings on gender and emotion recognition, specifically within applied psychology, continue to evolve (Wester et al., 2002), it is important for psychologists to avoid perpetuating stereotypes associated with gender and emotion.

Given the previously reviewed research, this study tested the following hypotheses:

Hypothesis 1: An in‐group advantage for emotion recognition would emerge. White U.S. participants would have higher accuracy scores of emotions when judging White U.S. compared to Japanese photographs. Also, Japanese participants would have higher accuracy scores of emotions when judging Japanese compared to White U.S. photographs.

Hypothesis 2: Female compared to male participants would have higher accuracy rates when judging facially expressed emotions.

An exploratory examination of significant effects on participants’ intensity ratings was also carried out. We predicted differences in the endorsement of the strength of emotion intensity ratings (e.g., rating an emotion “moderate” or “strong”) among White U.S. and Japanese mental health trainees. We purposefully framed this exploration as non‐directional, as prior research has found differences in intensity ratings, but these differences were not consistent across independent variables (e.g., Hutchison & Gerstein, 2012; Matsumoto, 1993). It was possible, therefore, that participants would rate female posers as expressing more intense emotions (Hutchison & Gerstein, 2012), that Japanese posers would be less intense than U.S. posers (Hutchison & Gerstein, 2012), or that Japanese participants would rate emotions as less intense than U.S. participants (Matsumoto & Ekman, 1989).

Participants

Participants (n = 324) were graduate students in APA‐accredited counseling psychology doctoral programs in the United States (n = 218) and masters and doctoral level students in clinical psychology programs in Japan (n = 106). U.S. participants were recruited through Listserv announcements and requests sent to program training directors. The third author collected data from students at three universities in Japan by distributing paper versions of the survey in classes and asking trainees to return the surveys, if they agreed to participate. One person was excluded due to excessive amounts of missing data (five or more missing responses). These procedures resulted in a total of 166 U.S. (all racial identities) and 106 Japanese participants.

Additionally, to rigorously test the in‐group advantage hypothesis (Matsumoto, 2002), we included only students who shared identities with the posers in the Japanese and Caucasian Facial Expressions of Emotion (JACFEE) photo set—30 persons who self‐identified as non‐Hispanic White. Thirty people were selected since it was the lowest number of students available in one group (non‐Hispanic White U.S. men) of the design. This sample was not meant to imply adequate representation of all U.S. trainees. Rather, this was a restriction due to the best available stimulus materials to study emotion recognition. As a result, 120 participants were used in the analyses: 30 each of Japanese and U.S. men and women. The remaining students were assigned a number and a random‐number generator was used to identify 30 numbers for each subsample. Then, students whose number was selected comprised the final sample (Heppner, Wampold, Owen, Wang, & Thompson, 2016). This reduction to equal numbers of participants per group is also recommended for analyses of variance (ANOVAs; Keppel & Wickens, 2004). Levene’s test revealed that the assumption of homogeneity was met across groups (accuracy range p = .356 to .777; intensity range p = .101 to .385). The mean age of the total sample was 28.7 years (SD = 7.01), 29.6 years (SD = 5.27) for the White U.S. sample, and 27.6 years (SD = 8.39) for the Japanese sample. All Japanese participants identified as Japanese and all U.S. participants identified as non‐Hispanic White/Caucasian American.
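The equal-n subsampling step described above can be sketched as follows. This is a hypothetical illustration: the participant IDs and the fixed seed are invented, not taken from the study.

```python
import random

# Sketch of the subsampling described in the text: each eligible student is
# assigned a number, and a random-number generator selects 30 per subgroup so
# that the ANOVA cells are balanced. IDs and seed below are hypothetical.

def draw_subsample(participant_ids, n=30, seed=0):
    """Randomly select n participant IDs without replacement."""
    rng = random.Random(seed)   # fixed seed only so this sketch is reproducible
    return rng.sample(list(participant_ids), n)

# Suppose 60 non-Hispanic White U.S. women were eligible; keep 30 of them.
us_women = [f"US-F-{i:03d}" for i in range(1, 61)]
selected = draw_subsample(us_women, n=30)
print(len(selected))  # 30
```

Repeating the same draw for each of the four subgroups yields the balanced 120-participant sample used in the analyses.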

Materials

The JACFEE (Matsumoto & Ekman, 1988) was used to present the emotions. In brief, the JACFEE includes 56 photographs displaying anger, contempt, disgust, fear, happiness, sadness, and surprise. White U.S. and Japanese nationals and Japanese American (referred to as “Japanese”) men and women depict these emotions. The photographs are balanced by racial identity and nationality (28 White U.S. and 28 Japanese faces), gender (28 men and 28 women), and emotion (eight photos per emotion). Since the JACFEE is a mix of Japanese and Japanese Americans, this study is not an exhaustive test of the in‐group advantage hypothesis, but the closest approximation given the availability of stimulus materials. The JACFEE was developed by gathering over a thousand photos where two independent raters used the Facial Action Coding System to code their perception of the emotions. Reliability between the two coders was .91 and intensity ratings of the emotions were consistent (“moderate to high”) across all emotions (Matsumoto & Ekman, 1988). For more information on the JACFEE’s development, read Ekman and Friesen (1978) and Fasel and Luettin (2003).

Translation process

In this study, the English survey was translated into Japanese following established translation and back‐translation procedures (van de Vijver & Hambleton, 1996). Five bilingual persons who spoke Japanese and English were employed. Along with being bilingual, all translators met other criteria that strengthened the quality of the translation, including possessing knowledge of the target language, familiarity with the target culture, and knowledge of the construct under study (van de Vijver & Hambleton, 1996).

Procedures

Researchers contacted U.S. trainees through professional Listservs or the training directors of graduate psychology programs by providing a web link to the informed consent. If trainees agreed to participate, they subsequently completed the emotion‐recognition and ‐intensity survey, a demographic tool, and a qualitative measure (data collected for another study), with all measures completed online and presented in random order. After this, students were shown a debriefing document and the researchers’ contact information. The procedures for Japanese students were similar, with two notable differences. First, they were recruited through the third author in Japan. Further, they answered all materials in paper format, rather than online (stimulus pictures remained randomized). When designing the study, the third author noted that scholars in Japan typically use a paper format when collecting data. Based on this reality, we decided to employ this format to reduce method bias (van de Vijver & Poortinga, 1997).

As an incentive, we indicated that we would donate $1 to charities for each trainee who participated. After agreeing to participate, students viewed, on a computer screen or paper, one photograph and the question, “Which emotion is the person in the photograph expressing?” The response options were forced choice: anger, contempt, disgust, fear, happiness, sadness, surprise, a “none of these are correct” option, and an open‐ended “other” category. Students were also asked to indicate the intensity of each emotion on a 9‐point (0–8) Likert scale with anchors of not at all (0), a little (1), a moderate amount (4), and a lot (8). All 56 pictures were presented in a randomized order. This method has been used consistently by others (e.g., Biehl et al., 1997; Hutchison & Gerstein, 2012; Matsumoto, 1992, 1993).

Data Analysis

To test Hypotheses 1 and 2, accuracy scores were summed (correct responses scored 1, incorrect responses scored 0) across all seven emotions and analyzed in a 2 (participant nationality/race: White U.S. or Japanese) × 2 (participant gender: male or female) × 2 (poser nationality/race: White U.S. or Japanese) × 2 (poser gender: male or female) mixed‐factors ANOVA. To explore differences in intensity ratings, ratings were averaged across emotions and analyzed in an identically structured 2 × 2 × 2 × 2 mixed‐factors ANOVA. Between‐subjects factors were participant nationality/race and gender; within‐subjects factors were poser nationality/race and gender.
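As a sketch of how the 2 × 2 × 2 × 2 mixed design is laid out, the cell structure can be enumerated directly. The factor names below paraphrase the design description and are not variables from the authors' materials.

```python
from itertools import product

# Between-subjects factors vary across participants; within-subjects factors
# vary across the photographs each participant rates.
BETWEEN = {
    "participant_nationality": ["White U.S.", "Japanese"],
    "participant_gender": ["male", "female"],
}
WITHIN = {
    "poser_nationality": ["White U.S.", "Japanese"],
    "poser_gender": ["male", "female"],
}

between_cells = list(product(*BETWEEN.values()))  # 4 participant groups
within_cells = list(product(*WITHIN.values()))    # 4 repeated conditions each

# 30 participants per between-subjects group, each contributing all 4 within cells.
n_observations = 30 * len(between_cells) * len(within_cells)
print(len(between_cells), len(within_cells), n_observations)  # 4 4 480
```

Each participant thus supplies four repeated measurements, which is why the poser factors are treated as within-subjects in the ANOVA.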

Results

Correlations were computed to explore relationships between accuracy and intensity ratings. Correlations by poser gender and poser nationality were collapsed across participants’ nationality and gender. Of 16 possible correlations, no significant results were obtained (range −0.11 to 0.14). Although accuracy and intensity ratings previously had been treated as unique dependent variables since they were considered different constructs (e.g., Biehl et al., 1997; Hutchison & Gerstein, 2012; Matsumoto, 1992), the lack of significant correlations further supported the decision to analyze accuracy and intensity ratings in separate ANOVAs. Further, we only examined analyses related to our hypotheses: (a) the participant nationality × poser nationality interaction for accuracy scores to test Hypothesis 1; (b) the main effect for gender on accuracy scores to examine Hypothesis 2; and (c) an exploratory examination of significant effects on intensity ratings. Cronbach’s αs for the accuracy and intensity ratings were .89 and .95, respectively, indicating a high degree of internal consistency.
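The internal-consistency figures reported above come from Cronbach's alpha, which can be computed directly from item-level scores. The sketch below is a textbook implementation on toy data, not the authors' analysis code.

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    `items` is a list of per-item score lists; respondents appear in the
    same order within every item list.
    """
    k = len(items)
    item_variance_sum = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    return k / (k - 1) * (1 - item_variance_sum / pvariance(totals))

# Two perfectly consistent hypothetical items yield alpha = 1.
print(round(cronbach_alpha([[1, 2, 3], [1, 2, 3]]), 3))  # 1.0
```

Values such as the reported .89 and .95 indicate that the 56 individual photo ratings behave consistently as a single scale.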

Emotion Accuracy

The four‐way interaction for participant nationality/race × participant gender × poser nationality/race × poser gender was significant (see Table 1), F(1, 116) = 9.403, p = .003, ηp² = .075. The four possible three‐way interactions were not significant. Among the six possible two‐way interactions, one was significant: participant nationality/race × poser nationality/race, F(1, 116) = 5.99, p = .016, ηp² = .05. This finding will be discussed in more detail later. A significant main effect for participant nationality/race also emerged, F(1, 116) = 67.76, p < .001, ηp² = .37. We chose not to conduct follow‐up univariate analyses for the significant four‐way interaction, as it was not related to Hypothesis 1. Rather, we focused on the two‐way interaction of participant nationality/race × poser nationality/race to specifically assess the presence of an in‐group advantage effect for emotion recognition (Matsumoto, 2002). Recall that Hypothesis 1 stated that White U.S. trainees would be better at recognizing emotions expressed by White U.S. than by Japanese posers and, conversely, that Japanese trainees would be better at recognizing emotions expressed by Japanese than by White U.S. posers. To confirm the presence of an in‐group advantage effect, the following planned contrasts needed to be significant in the stated direction (Matsumoto, 2002): (a) contrast A, with White U.S. participants having higher accuracy scores for White U.S. posers than for Japanese posers; (b) contrast B, with Japanese participants having higher accuracy scores for Japanese than for White U.S. posers; (c) contrast C, with White U.S. participants scoring higher than Japanese participants on White U.S. posers; and (d) contrast D, with Japanese participants scoring higher than White U.S. participants on Japanese posers.
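The four planned contrasts can be sketched with ordinary t statistics: contrasts A and B compare two ratings made by the same participants (paired), while C and D compare the two participant samples (independent). This is a simplified, hypothetical illustration of the contrast logic, not the authors' code; it omits p-value computation and any correction for multiple contrasts.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(x, y):
    """Paired-samples t statistic (df = n - 1), as in contrasts A and B:
    the same participants' accuracy for in-group vs. out-group posers."""
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / sqrt(len(d)))

def independent_t(x, y):
    """Independent-samples t with pooled variance (df = n1 + n2 - 2), as in
    contrasts C and D: the two participant samples rating the same posers."""
    n1, n2 = len(x), len(y)
    pooled = ((n1 - 1) * stdev(x) ** 2 + (n2 - 1) * stdev(y) ** 2) / (n1 + n2 - 2)
    return (mean(x) - mean(y)) / sqrt(pooled * (1 / n1 + 1 / n2))

# Hypothetical accuracy scores: three participants rating two poser groups.
print(round(paired_t([5, 6, 9], [3, 4, 5]), 2))  # 4.0
```

An in-group advantage requires all four contrasts to be significant in the predicted direction, not merely a significant interaction.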

WASHINGTON—Facial expressions have been called the “universal language of emotion,” but people from different cultures perceive happy, sad or angry facial expressions in unique ways, according to new research published by the American Psychological Association.

“By conducting this study, we hoped to show that people from different cultures think about facial expressions in different ways,” said lead researcher Rachael E. Jack, PhD, of the University of Glasgow. “East Asians and Western Caucasians differ in terms of the features they think constitute an angry face or a happy face.”

The study, which was part of Jack’s doctoral thesis, was published online in APA’s Journal of Experimental Psychology: General®. Jack is a post-doctoral research assistant, and the study was co-authored by Philippe Schyns, PhD, director of the Institute of Neuroscience and Psychology at the University of Glasgow, and Roberto Caldara, PhD, a psychology professor at the University of Fribourg in Switzerland.

Some prior research has supported the notion that facial expressions are a hard-wired human behavior with evolutionary origins, so facial expressions wouldn’t differ across cultures. But this study challenges that theory and used statistical image processing techniques to examine how study participants perceived facial expressions through their own mental representations.

“A mental representation of a facial expression is the image we see in our ‘mind’s eye’ when we think about what a fearful or happy face looks like,” Jack said. “Mental representations are shaped by our past experiences and help us know what to expect when we are interpreting facial expressions.”

Fifteen Chinese people and 15 Caucasians living in Glasgow took part in the study. They viewed emotion-neutral faces that were randomly altered on a computer screen and then categorized the facial expressions as happy, sad, surprised, fearful, disgusted or angry. The responses allowed researchers to identify the expressive facial features that participants associated with each emotion.

The study found that the Chinese participants relied on the eyes more to represent facial expressions, while Western Caucasians relied on the eyebrows and mouth. Those cultural distinctions could lead to missed cues or misinterpreted signals about emotions during cross-cultural communications, the study reported.

“Our findings highlight the importance of understanding cultural differences in communication, which is particularly relevant in our increasingly connected world,” Jack said. “We hope that our work will facilitate clearer channels of communication between diverse cultures and help promote the understanding of cultural differences within society.”

What happened next traumatized me almost as much as the actual suicide sighting. Several people gathered around and were quietly laughing. Some took pictures and others were calling to their friends to come see what happened. I was so unnerved by the whole scene. Why were people laughing? Why wasn’t anyone covering his body? Moments later, the police showed up and I went on my way.

Times like these bring our humanity up close. How do we respond to the existential questions of life? And how do we face tragedy together? Yet these situations also highlight our profound differences.

Paul Ekman’s groundbreaking work sheds some light on the similarities and differences in how all 7.5 billion of us react emotionally to the same events. Ekman is a clinical psychologist who has spent the last several decades researching how to read emotions through facial expressions. Among the many seminal findings from his work, there are a couple critical points that are relevant to cultural intelligence:

Emotional triggers: what sets you off?

First, there’s a set of universal triggers that elicit the same emotion in nearly all of us.

The sight of something coming straight at you triggers fear, regardless of your personality or culture. From people in rural China to urbanites in London and Capetown, the sight of an oncoming car elicits the “flight” response (“Danger! I need to move out of the way!”).

A similar trigger occurs when experiencing unexpected, rough turbulence in flight. Even seasoned flight attendants admit that when they don’t expect it, a sudden jolt in the air frightens them.

Ekman claims that every human being has an auto-appraisal system that monitors when we’re in danger. With practice and experience, some overcome these universal fears. But a primal response to a certain set of triggers has evolved within all of us.

Second, there are unique triggers that result from how we’ve been socialized. Individuals from some cultures feel extremely annoyed when people cut in line, while it doesn’t even faze others. People from some cultures are irritated when people speak loudly; others couldn’t care less. Some cultures fear the ocean; others seek it out. These variances stem primarily from how we were brought up. In addition, there are other triggers rooted in our unique personalities and experiences (e.g., post-traumatic stress).

Emotional expressions: how do you display your emotions?

Next, Ekman contends that people across all cultures have a universal way of expressing seven emotions: anger, fear, sadness, disgust, contempt, surprise and happiness.

Initially this claim didn’t ring true to me. Surely we can’t say that Germans, Chinese, and Italians all express happiness the same way. Isn’t nonverbal communication culturally conditioned? Yes and no.

Through a series of renowned, peer-reviewed studies, Ekman makes a convincing case that people all over the world signal happiness with the corners of their mouths up and their eyes contracted. Anger is expressed with the corners of the mouth down and sadness is expressed with the eyelids drooping. Even individuals who have been blind from birth manifest the same nonverbal expressions.

Emotional display rules: how should you manage your emotions?

How, then, do we explain the fact that some cultures (e.g., African Americans and Italians) are usually far more affective in expressing their emotions, while other cultures (e.g., Japanese and Germans) are far more neutral?

Cultural differences come into play by promoting the rules for how to appropriately manage emotional expressions. Parents teach children the appropriate display rules for various occasions, which get reinforced at school, through the media, and with peers. When should you show emotion, when should you exaggerate it, and when should you mask it? Our cultures teach us how to manage our feelings and we learn which emotions are appropriate for which situations. We develop mechanisms for masking seemingly inappropriate expressions.

This brings me back to the horrific suicide I witnessed last week. It may well be that the giggling by my fellow bystanders was a disguise for their horror. Fake laughter and giggles are a very common response to nervousness and discomfort among many Asian cultures.

In all fairness, others looking at me in that moment would have had little idea that I felt a sense of grief and despair when encountering this event. I stood there for a moment with a very staid, neutral response given that my parents taught me that a neutral, stone face was the appropriate response to solemn occasions. Someone who learns how to read micro-expressions can discern when a facial expression is masking something else.

Cultural intelligence stems from the same body of research as emotional intelligence (EQ). Emotional intelligence is the first step in improving the way you work and relate with others. There’s little hope we can interact effectively in culturally diverse settings if we first can’t understand and regulate the emotions of ourselves and others like us. But cultural intelligence allows us to have those same social sensibilities when interacting with people who display their emotions in ways that are unfamiliar to us.

Giggles may mean laughter in one culture and embarrassment in another. Some individuals have been socialized to express anger by yelling while others simmer in silence.

When you’re irritated by a behavior that seems rude or awkward, consider alternative explanations for the behavior (e.g. giggling may not mean someone thinks a tragedy is funny). In addition, careful consideration in the midst of an emotional trigger can diminish the power of the trigger when used repeatedly.

If you consistently reflect on whether turbulence really puts you in danger or whether a spider is really going to harm you, you can begin to diminish the power of the emotional response. The same is true for behaviors that annoy you. If you reflect on the intent behind a loud talker or someone who spits in public, it can diminish how much it upsets you.

We’re remarkably different in how we go about our profound similarities. When your counterpart seems foreign, start with what you have in common. And perhaps our shared humanity is the starting point for providing one another with the hope each of us needs to get through one day after the next. After all, we’re all in this together.

Culture—i.e., the beliefs, values, behavior, and material objects that constitute a people’s way of life—can have a profound impact on how people display, perceive, and experience emotions. The culture in which we live provides structure, guidelines, expectations, and rules to help us understand and interpret various emotions.

Expressing Emotions

A cultural display rule dictates the types and frequencies of emotional displays considered acceptable within a certain culture (Malatesta & Haviland, 1982). These rules may also guide how people choose to regulate their emotions, ultimately influencing an individual’s emotional experience and leading to general cultural differences in the experience and display of emotion.

For example, in many Asian cultures, social harmony is prioritized over individual gain, whereas Westerners in much of Europe and the United States prioritize individual self-promotion. Research has shown that individuals from the United States are more likely to express negative emotions such as fear, anger, and disgust both alone and in the presence of others, while Japanese individuals are more likely to do so only while alone (Matsumoto, 1990). Furthermore, individuals from cultures that tend to emphasize social cohesion are more likely to suppress their own emotional reaction in order to first evaluate what response is most appropriate given the situation (Matsumoto, Yoo, & Nakagawa, 2008).

Cultures also differ in the social consequences that they assign to different emotions: in the United States, men are often directly or indirectly ostracized for crying; in the Utku Eskimo population, the expression of anger can result in social ostracism.

Within a particular culture, different rules may also be internalized as a function of an individual’s gender, class, family background, or other factors. For instance, there is some evidence that men and women may differ in the regulation of their emotions, perhaps due to culturally based gender norms and expectations (McRae, Ochsner, Mauss, Gabrieli, & Gross, 2008).

Interpreting Emotions

In everyday life, information from the environment influences our understanding of what facial expressions mean. In much the same way, cultural context also acts as a cue when people are trying to interpret facial expressions. People can attend to only a small number of the available cues in their complex and continuously changing environments, and increasing evidence suggests that people from different cultural backgrounds allocate their attention very differently. This means that people from different cultures may interpret the same social context in very different ways.

Are Emotions Universal?

Although conventions regarding the display of emotion differ from culture to culture, our ability to recognize and produce associated facial expressions appears to be universal. Research comparing facial expressions across different cultures has supported the theory that there are seven universal emotions, each associated with a distinct facial expression. That these emotions are “universal” means that they operate independently of culture and language. These seven emotions are happiness, surprise, sadness, fright, disgust, contempt, and anger (Ekman & Keltner, 1997). Even congenitally blind individuals (people who are born blind) produce the same facial expressions associated with these emotions, despite never having had the opportunity to observe them in other people. This further supports the theory that the patterns in facial muscle activity are universal for the facial expressions of these particular emotions.


Eye tracking work suggests that there are cultural differences in how Easterners and Westerners scan an emotional expression (Jack, Blais, Scheepers, Schyns, & Caldara, 2009). Recent work by Masuda and colleagues (Masuda et al., 2008) suggests that cultural differences in attentional patterns to contextual elements also influence emotion recognition: Japanese participants’ intensity ratings varied according to the emotional expression of surrounding faces, while Westerners’ intensity ratings did not. Attentional patterns consistent with these cultural differences were confirmed with eye tracking and recognition memory.

Whether emotion is universal or social is a recurrent issue in the history of emotion study among psychologists.1, 2, 3 Some researchers view emotion as a universal construct, holding that a large part of emotional experience is biologically based.4, 5 Ekman6 argued that emotion is fundamentally genetically determined, so that facial expressions of discrete emotions are interpreted in the same way across most cultures or nations. In addition, similar emotions are experienced in similar situations across cultures. In a study conducted by Matsumoto and colleagues,7 Japanese and American participants reported feeling happiness, pleasure, sadness, and anger in similar situations. In other words, regardless of culture, people experienced positive emotions in positive antecedent situations (e.g., meeting friends or achievements) and negative emotions in negative antecedent situations (e.g., traffic or injustice).

However, culture also influences emotion in various ways. Culture constrains how emotions are felt and expressed in a given cultural context, shaping the ways people should feel in certain situations and the ways people should express their emotions.52 In a large number of studies,3, 8, 9 some aspects of emotion have been shown to be culturally different, because emotion is not only biologically determined but also influenced by the environment and by social or cultural situations.10 The role of culture in emotion experience has also been stressed in sociological theories. For example, Shott53 argued that to experience emotion, people first experience physiological arousal and then label this arousal as emotion; in this process, culturally defined and provided emotion words are used. Other examples of emotional aspects that show cultural differences are ways of emotion expression,11 ways of facial expression and recognition of emotions,9 the nature of emotions commonly experienced,7, 12, 13 and affect valuation.

2. Individualist and collectivist cultures

Cultural differences in various aspects of emotion have been studied and reported. Now, what is culture and how is it defined? In cross-cultural psychology, culture is referred to as “shared elements that provide the standards for perceiving, believing, evaluating, communicating, and acting among those who share a language, a historic period, and a geographic location (p. 408).”15 Since Markus and Kitayama8 published a monumental paper on comparisons of the self between the West (e.g., America) and the East (e.g., Japan), most cross-cultural studies have compared Western versus Eastern cultures.16 Eastern culture commonly indicates culture of East Asian countries such as Korea, Japan, and China. Western culture includes the culture of North American and Western European countries.

Markus and Kitayama8 introduced the term “self-construal” for establishing the differences between the two cultures. Westerners construe self as independent and separate from other people. This is referred to as independent self-construal. Those who have independent self-construal consider that the basic unit of society is the individual, and groups exist to promote individual’s well-being.17 For this reason, Western culture is identified as individualist culture.16 In individualist culture, individual’s uniqueness is important. People are encouraged to express their inner states or feelings, and to influence other people.18

By contrast, Easterners construe the self as fundamentally connected to, and interdependent with, others. This is called interdependent self-construal. For those who have an interdependent self-construal, the core unit of society is the group, and individuals must adjust to the group so that society’s harmony is maintained.17 For this reason, Eastern culture is identified as collectivist culture.16 In a collectivist cultural atmosphere, individuals try to modify themselves, rather than influence others, to fit in with the groups they are in.18 Although individuals in both individualist and collectivist cultures have both independent and interdependent self-construals,8, 19, 20 each culture normally encourages its members to cultivate its promoted self-construal more strongly than the other.

3. Two-dimension structure of emotion: Valence and arousal

Myers58 argued that “physiological arousal, expressive behaviors, and conscious experience” are fundamental elements of emotion (p. 500); accordingly, emotional arousal is one of the most important research topics in the psychology literature. In line with this, one of the many researched aspects of emotion that shows cultural differences is emotional arousal level. Affective states (i.e., emotion, mood, and feeling) are structured along two fundamental dimensions: valence and arousal level.21, 22, 23 Russell24 proposed the circumplex model of affect, which holds that all emotions are the product of two independent neurophysiological systems.25 In other words, affective states are systematically organized and represented as two bipolar dimensions: pleasure–displeasure (or valence) and degree of arousal. The degree‐of‐arousal dimension, also called activation–deactivation26 or engagement–disengagement,24 refers to the perception of the physiological activation level during affective experience.21, 27 In other words, high affective arousal can be understood as activation of the autonomic nervous system.55 The literature shows that both emotional valence and arousal affect brain activity28, 29 and cognitive behaviors such as decision making and memory.56

Russell24 categorized verbal expressions of emotion in the English language along the two dimensions of valence and arousal. Since then, this two‐factor structure of emotion has been demonstrated repeatedly by many studies using different methods.30 The two‐dimensional structure of emotion has also been shown to appear across many different nations and cultures.24, 30, 31, 32 In other words, valence and arousal can account for all emotional states.33 Table 1 lists high and low arousal emotions as categorized in the previous literature.

Table 1. List of high and low arousal emotions

Russell (1980)24. High arousal: afraid, alarmed, angry, annoyed, aroused, astonished, delighted, distressed, excited, frustrated, glad, happy, tense. Low arousal: at ease, bored, calm, contented, depressed, droopy, gloomy, miserable, pleased, relaxed, sad, satisfied, serene, sleepy, tired.
Feldman (1993)51. High arousal: afraid, enthusiastic, nervous, peppy. Low arousal: calm, relaxed, sleepy, sluggish.
Tsai (2007)36. High arousal: elated, enthusiastic, excited, fearful, hostile, nervous. Low arousal: calm, dull, peaceful, relaxed, sleepy, sluggish.
Suh & Koo (2011)32. High arousal: irritated, joyful. Low arousal: helpless, peaceful.

Emotions with different arousal levels have different purposes or functions.34 Russell26 argued that high arousal emotions are energized states that prepare action; they correspond to situations where mobilization and energy are required. When a high arousal emotion is induced, decision making becomes focused and simplified.26 Moreover, high arousal emotions such as joy or anger are known to amplify the nervous system in various ways.35 By contrast, low arousal emotions are enervated states that prepare inaction or rest.
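The two-dimensional structure described above can be made concrete with a small lookup: each emotion word gets a (valence, arousal) coordinate and is classified by quadrant. The coordinates below are rough, hypothetical placements consistent with Table 1, not measured values from the circumplex literature.

```python
# Hypothetical (valence, arousal) coordinates in [-1, 1]; these are rough
# illustrative placements consistent with Table 1, not measured values.
CIRCUMPLEX = {
    "excited": (0.7, 0.8),
    "happy": (0.9, 0.4),
    "calm": (0.6, -0.7),
    "relaxed": (0.7, -0.6),
    "angry": (-0.7, 0.8),
    "afraid": (-0.6, 0.9),
    "sad": (-0.8, -0.4),
    "sleepy": (0.1, -0.9),
}

def describe(emotion):
    """Classify an emotion word by its quadrant on the two bipolar dimensions."""
    valence, arousal = CIRCUMPLEX[emotion]
    side = "positive" if valence >= 0 else "negative"
    level = "high" if arousal >= 0 else "low"
    return f"{side} valence, {level} arousal"

print(describe("calm"))  # positive valence, low arousal
```

The quadrant labels mirror the cultural contrast discussed next: "excited" and "calm" share positive valence but sit at opposite ends of the arousal dimension.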

4. Cultural differences in emotional arousal level

Cross-cultural differences in emotional arousal level have consistently been found. Western culture is related to high arousal emotions, whereas Eastern culture is related to low arousal emotions. These cultural differences are explained by the distinct characteristics of individualist and collectivist cultures. In Western culture, people try to influence others.8 For this purpose, high arousal emotions are ideal and effective.18 By contrast, in Eastern culture, adjusting and conforming to other people is considered desirable.8 To meet this goal, low arousal emotions work better than high arousal emotions.18

In fact, in terms of positively valenced emotions, the arousal level of ideal affect differs by culture. Ideal affect, or the “affective state that people ideally want to feel” (p. 243), is important because people are motivated to behave in ways that produce the emotions they want to experience.36 Therefore, people in a given culture tend to experience the emotional states that are considered ideal in that culture. Tsai36 argued that Westerners value high arousal emotions more than Easterners do, so they promote activities that elicit high arousal emotions. Indeed, Americans, compared with East Asians, are reported to prefer high arousal emotional states such as excitement37 or enthusiasm.38 Even children in the West learn through storybooks that high arousal emotions are ideal, and the opposite is true for children in the East.39 The conception of happiness also differs in arousal level by culture. Lu and Gilmour40 conducted a cross-cultural study on the conception of happiness; they found that the American conception of happiness emphasized being upbeat, whereas the Chinese conception of happiness focused on being solemn and reserved. That is, in America, high arousal positive emotional states are considered happiness, a desirable state, whereas in China low arousal positive emotional states are considered happiness. This was replicated in another study: Uchida and Kitayama57 showed that Japanese people conceptualized happiness as experiencing low arousal positive emotions more than high arousal positive emotions, and the reverse was true for American people.

Owing to the cultural difference in norms about emotional arousal level, differences in the actual arousal levels of emotional experience also emerge. Kacen and Lee41 conducted a cross-cultural study comparing Caucasians and Asians. The researchers used an arousal scale composed of four bipolar items consisting of emotion adjectives representing different arousal levels: stimulated–relaxed (reversed), calm–excited, frenzied–sluggish (reversed), and unaroused–aroused. The results showed that Caucasians were more likely to be in high arousal emotional states (i.e., stimulated, excited, frenzied, and aroused) than Asians, whereas Asians were more likely to be in low arousal emotional states (i.e., relaxed, calm, sluggish, and unaroused). In addition, Tsai and colleagues42 reported that the closer participants were to an American rather than a Chinese cultural orientation, the higher their cardiovascular arousal level during interpersonal tasks.
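Scoring a bipolar scale like the one described above requires flipping the reverse-keyed items before averaging. The sketch below is only an illustration of that procedure: the item names, the assumption of a 7-point response format, and the example responses are all hypothetical, not from the Kacen and Lee study.

```python
# Sketch of reverse-scoring a four-item bipolar arousal scale.
# Item names, 7-point format, and responses are hypothetical illustrations.

def arousal_score(responses):
    """Mean arousal from four bipolar items on an assumed 1-7 scale.

    'stimulated_relaxed' and 'frenzied_sluggish' are reverse-keyed
    (a high raw value marks the low-arousal pole), so they are flipped
    with 8 - x before averaging.
    """
    reverse_keyed = {"stimulated_relaxed", "frenzied_sluggish"}
    scored = [
        (8 - value) if item in reverse_keyed else value
        for item, value in responses.items()
    ]
    return sum(scored) / len(scored)

# Hypothetical participant giving maximally high-arousal answers:
participant = {
    "stimulated_relaxed": 1,  # 1 = stimulated (reverse-keyed)
    "calm_excited": 7,        # 7 = excited
    "frenzied_sluggish": 1,   # 1 = frenzied (reverse-keyed)
    "unaroused_aroused": 7,   # 7 = aroused
}
print(arousal_score(participant))  # -> 7.0
```

After reverse-scoring, higher composite values consistently mean higher arousal, which is what makes the Caucasian-Asian comparison in the study interpretable.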

Another example of the difference in actual arousal levels of emotional experience between individualist and collectivist cultures can be found in emotion scale research. Affect scales measuring positive and negative emotional experiences developed in America consist mostly of high arousal emotions. Because emotion scale items are selected based on the emotional experience of people from the developers' own cultural background, this suggests that American people experience high arousal emotions more than low arousal emotions.32 For example, one of the most widely used emotion scales, the Positive and Negative Affect Scale, was developed by American researchers.43 Its items are weighted toward high arousal emotions such as enthusiasm, activation, and excitement.

Furthermore, cultural differences are also found in physiological and behavioral aspects of emotion. Research conducted by Scherer et al54 showed that Japanese participants, compared with American and European participants, reported significantly fewer physiological symptoms. Mesquita and Frijda2 suggested that one possible explanation is that their physiological reactions during emotion are actually different. In addition, behaviors corresponding to emotional arousal level differ by culture. Westerners prefer to participate in more active sports than Easterners do, to elicit high arousal emotions.44 Moreover, parents lead their children to engage in activities that are likely to elicit the emotions valued in their culture. For example, Western mothers are reported to encourage their children to play games that increase emotional arousal.45 Therefore, cultural differences in emotional arousal level emerge at a relatively young age.

Support for cultural differences in the level of emotional arousal has also been found in value studies. According to Schwartz,46 individualism and independent self-construal are closely related to stimulation values. Individuals who hold strong stimulation values are motivated to live an exciting and varied life, and to seek novelty and challenges. Behaviors derived from these goals are likely to induce high arousal emotions. Therefore, Schwartz’s46 study indirectly supports the claim that high arousal emotions are more frequently experienced in Western culture than in Eastern culture. This is also in line with the fact that impulsiveness and sensation-seeking behavior, which are closely related to emotional arousal,47 are more pronounced in individualist countries than in collectivist countries.41

The fact that Asian cultural norms discourage experiencing or expressing high arousal emotions can also be explained from the perspective of traditional Asian medicine. Korean and Chinese medicine assume that humans experience seven emotions (七情): joy, anger, sadness, pleasure, love, greed, and hatred. From this standpoint, excessive emotional experience can be harmful and cause disease, no matter how positive the emotions are.48, 49 For example, Hwabyung, also known as “anger syndrome,” a disease frequently reported in Korean culture, is argued to result from the suppression of anger, a high arousal emotion.50

Emotional arousal is a fundamental and important dimension of affective experience, along with valence. Findings consistently support cultural differences in levels of emotional arousal between the West and the East: Westerners value, promote, and experience high arousal emotions more than low arousal emotions, whereas the reverse is true for Easterners. As discussed above, emotion has a biological base, and the two fundamental dimensions of emotion, valence and arousal, are related to physiological processes as well as brain activity. Therefore, cultural differences in emotion, especially in arousal level, can also have implications for adjacent areas such as neuroscience and medicine.

However, so far little research on this aspect of emotion has been conducted in Asian medicine. As mentioned above, findings about emotion in the psychology literature and in Asian medicine are in line with each other, in that Korean medicine cautions against excessive emotional activation, which can be translated as high emotional arousal in psychology. Yet, compared with studies on cultural differences in norms about emotional arousal level, fewer studies have examined cultural differences in emotional arousal level per se, especially with physiological measures. Therefore, additional research on cultural differences in emotional arousal level from the perspective of Asian medicine may become a stepping stone toward integrative research on Asian medicine and psychology.

These findings may be a reflection of cultural differences in worldview, termed the social orientation hypothesis (see Markus & Kitayama, 1991; Nisbett, Peng, Choi, & Norenzayan, 2001 for reviews; Varnum, Grossmann, Kitayama, & Nisbett, 2010). Differences in worldview may relate to attention: European Americans, coming from a more individualist culture, tend to exhibit an analytic pattern of attention and interpret objects in a scene by their defining attributes, while East Asians, coming from a more collectivist culture, tend toward a holistic pattern of attention and perceive objects in terms of their relationships to other objects (e.g., Masuda & Nisbett, 2001). In emotion recognition research this may be best reflected in differences in the purposes for which members of each culture attend to contextual information. Several studies suggest that Americans do use contextual information when making emotion judgments (Aviezer et al., 2008; Barrett & Kensinger, 2010). Instead of an “attend to context vs. ignore context” dichotomy, an investigation of cultural differences in how context is used in emotion perception may be more informative. In the present study we aimed to do just that by asking participants from different cultures to identify the emotion expressed by a target face surrounded by contextual faces of a different emotion. We sought to identify how context, defined as these surrounding facial expressions, was used to make emotion judgments by recording eye movements to the target and contextual faces during the emotion recognition task. Applying the social orientation hypothesis to emotion recognition in context adds texture to the simple idea that one group considers context while the other ignores it: Emotion judgments in both cultures may be influenced by context, but cultural differences in worldview may lead to differences in how the contextual information is used.

Cultural Differences in the Use of Context in Emotion Recognition

Given that context has been shown to influence emotion attributions in American samples, and East Asian cultures exhibit context-sensitivity in their visual attention (Masuda & Nisbett, 2001; Nisbett et al., 2001), it is clear that both individualist and collectivist cultures attend to contextual information. How such contextual information is used when making an emotion judgment is less clear, although the outcome of this process does appear to differ across cultures (Masuda et al., 2008). Thus, in the present study, we used eye tracking to specify cultural differences in attention patterns to contextual information in an emotion recognition task. In particular, we wondered whether members of an individualist culture would tend toward a more analytic (contrasting context) attentional style than members of a collectivist culture when judging the emotional expression of a target face surrounded by other facial expressions. We chose to use stimuli of real faces, rather than the cartoons used in prior cross-cultural emotion recognition research (Masuda et al., 2008), to enhance the ecological validity of the task.

Hypotheses

Hypothesis 1

We expected Chinese participants’ recognition accuracy1 to be more affected by the surrounding faces than Americans’ recognition. Thus, for our first hypothesis, we expected Americans to be more accurate than Chinese participants at identifying a target face surrounded by other facial expressions.

Hypothesis 2

We also expected cultural differences in accuracy as a function of the emotion of the contextual faces. We expected American participants’ accuracy (but not Chinese participants’ accuracy) to benefit more on trials where a perceptually similar emotion is the context, compared to other contextual emotions. Past work suggests that when a misidentification is made, both American and Chinese participants commonly confuse surprise with fear (or vice versa) and anger with disgust (or vice versa; e.g., Isaacowitz et al., 2007; Wang, K. et al., 2006), perhaps because of their perceptual similarity (Aviezer et al., 2008; Susskind, Littlewort, Bartlett, Movellan, & Anderson, 2007). If Chinese participants are more likely than Americans to integrate the expressions of the contextual faces with the target face, the combination of surprise and fear faces (or anger and disgust faces) in the display may lead to a perception of the contextual face rather than the target. On the other hand, if Americans are more likely to use a contrasting strategy, the presence of often-confused contextual faces may help differentiate the target face from the often-confused expression.

Hypothesis 3

For the eye tracking data, we expected American participants to fixate more on the target face than Chinese participants, because we thought Americans would focus on the target’s emotion in isolation whereas Chinese participants would also attend to the contextual faces. Attending more to the contextual faces would indicate a more holistic attentional style, because the contextual faces may be receiving more weight or consideration in the emotion judgment; less relative fixation duration on the contextual faces would be evidence of a more analytic attentional style (Masuda et al., 2008). Thus, for our first eye tracking variable of interest, relative fixation duration, we expected American participants to spend greater relative fixation duration on the target faces than Chinese participants.

Hypothesis 4

The second way eye tracking data could suggest differences in patterns of attention is in the gaze pattern of fixations from one area of interest (AOI) to another. An analytic pattern of attention may be executed by looking at the target face, then at one of the contextual faces, then back at the target face, and so on for each contextual face, suggesting that a contrasting strategy is being used to judge the emotion of the target face. This gaze pattern can be captured in eye tracking data by counting the transitions from one AOI to another that include the target face. If an individual has more transitions that include the target face, or target transitions, they are shifting their gaze back to the target face more often than an individual with fewer target transitions. A greater number of target transitions would therefore reflect an analytic, contrasting strategy. Thus, for our second eye tracking variable of interest, target transitions, we expected Americans to use a contrasting viewing strategy more than Chinese participants, as exhibited by a greater percentage of target transitions.

Emotional databases are important tools for studying emotion recognition and its effects on various cognitive processes. Since a well-standardized, large-scale emotional expression database is not available in India, we evaluated the Radboud Faces Database (RaFD), a freely available database of emotional facial expressions of adult Caucasian models, for an Indian sample. Using pictures from RaFD, we investigated the similarities and differences in self-reported ratings of emotion recognition accuracy as well as the parameters of valence, clarity, genuineness, intensity and arousal of emotional expression, following the same rating procedure used for the validation of RaFD. We also systematically evaluated the universality hypothesis of emotion perception by analyzing differences in accuracy and ratings for different emotional parameters across Indian and Dutch participants. As the original Radboud database lacked arousal ratings, we added arousal as an emotional parameter along with all the other parameters. The results show that the overall accuracy of emotional expression recognition by Indian participants was high and very similar to the ratings from Dutch participants. However, there were significant cross-cultural differences in the classification of emotion categories and their corresponding parameters. Indians rated certain expressions as comparatively more genuine, higher in valence, and less intense than the original Radboud ratings. The misclassifications/confusions for specific emotional categories differed across the two cultures, indicating subtle but significant differences between them. In addition to furthering understanding of facial emotion recognition, this study evaluates and enables the use of RaFD within the Indian population.

The expression of emotions in humans is achieved via a complex combination of eyes, eyebrows, lips and facial muscles. Two standard guidelines have been proposed to categorize different facial expressions using combinations of these facial features: Izard’s [1] maximally discriminative facial movement coding system (MAX) and Ekman and Friesen’s [2] Facial Action Coding System (FACS). FACS is currently the most widely used method to portray basic facial expressions using facial muscles as action units, namely: happy, sad, surprise, angry, fear, disgust, contempt, and neutral. Using these guidelines, various face databases have been developed; for example the NimStim database [3], the Karolinska faces database [4], and the Radboud faces database [5]. These databases consist of colour or gray-scale pictures of people of different age groups, genders, and races (Asian, African, and Caucasian) portraying different expressions. Such face databases are essential for investigating fundamental questions about how emotions are perceived and recognized, as well as for establishing reciprocal interactions between affective and cognitive processes.

Face databases can aid in understanding emotion recognition across cultures. However, before using a database in a particular culture, it needs to be evaluated within that culture. The evaluation should account for three important aspects: a) comparison of emotion recognition scores across cultures (is emotion recognition universal?); b) given that models differ in their portrayal of emotional expression, however well trained, models not recognized correctly should be eliminated from further studies within that culture; and c) when emotional stimuli are used across cultures, they need to be matched/standardized on the various parameters associated with emotional expressions, such as valence, intensity, genuineness, clarity and arousal. This is important because many of these parameters can act as confounds in controlled experimental designs. Validation of faces and expressions across cultures also helps reduce the ambiguity associated with the images available in a particular database.

One of the central arguments in the emotion literature, more specifically on facial emotion expression, has been about the universality of emotion identification and recognition [6–8]. Cross-cultural studies are a major contributing factor, either for or against universality. This issue, of whether emotional expressions are culture-specific or universal, has been debated for a long time [9–14]. Seminal research showed that basic emotions could be recognized above chance across cultures [12,15]. However, it has been argued that most cross-cultural studies are confounded with cross-cultural contact, education, language, and familiarity [16]. It has also been reported that variations in ethnicity, national and regional backgrounds, race, in-group versus out-group relations, facial display rules within a culture, and social attitudes can influence emotion recognition across cultures [17–22].

To the best of our knowledge, very few studies have investigated cross-cultural emotion recognition with an Indian population. Elfenbein et al. [18] studied Indian, American and Japanese participants with photographs of facial expressions from the three cultural groups. While the photographs from Indian and Japanese samples were generated by asking models to display an expression by imagining an emotional scene (not following the FACS system), American posers followed the FACS manual for displaying prototypical emotional expressions. They reported that the trend of errors in emotion recognition was similar across the three cultures, partially supporting the universality hypothesis, but they also highlighted emotion-specific cultural differences.

Cross-cultural emotional differences, especially in relation to display rules, have also been studied in the context of individualism and collectivism. In a broad sense, an individualistic culture endorses the independence of the individual in society, while a collectivistic culture supports group interactions and facilitates interdependence among its members [23,24]. These fundamental differences between the two types of cultures contribute to differences in general psycho-social attitudes [25–28] and may also contribute to differences in perception or ratings of emotional expressions as a function of whether they belong to in-group (one's own region) or out-group members of a culture (other than one's own region) [24,29]. For example, people in individualistic cultures (e.g., Americans) are more comfortable displaying negative expressions than those from collectivistic cultures (e.g., Costa Rica) [23,24]. A culture is not solely individualistic or collectivistic; nonetheless, Asians in general are considered more collectivistic than Western cultures in certain aspects [26]. India is not a purely collectivistic culture but rather shows features of both collectivism and individualism [24,25,27,28]. No measures of out-group facial emotion rating have been evaluated with Indian participants in the context of individualism or collectivism. Given this background, we also wanted to check whether rating and agreement rate data from Indian ratings of Radboud emotional faces (out-group) can be understood within the context of individualism and collectivism.

Considering the above arguments, the motivation for this study was threefold. First, we aimed to test the universality hypothesis of emotion recognition in an Indian sample from Allahabad with a full-fledged emotional database from another culture. At a broader level, we expected differences in subtle measures of emotion recognition, such as intensity and clarity, especially given that the faces belong to out-group members [16,30]. Second, we wanted to evaluate whether differences in emotion recognition between Indian (out-group) and Dutch (in-group) raters follow those already reported for individualistic and collectivistic cultures [21,22,24,29,31]. As Indians are reported to be a relatively more collectivistic culture than the Dutch [24], it could be expected that they would differ in agreement ratings for negative emotions, compared with positive emotions, for out-group members. Third, we aimed to validate the database for studies on emotions across cultures. To achieve this, we selected the freely available Radboud Faces Database (RaFD) [5]. It offers ready-to-use colour pictures of Caucasian face stimuli of adults and children in three gaze directions and eight expressions: neutral, happy, angry, disgust, contempt, fear, surprise, and sad. All images follow FACS guidelines and have been evaluated by taking ratings on the following parameters: valence, intensity, clarity, genuineness and correct identification of the expression [5]. For this study we selected only the adult facial expressions with frontal view and straight gaze direction.

The current research methodology is similar to that used by the developers of RaFD [5], in order to compare emotion rating and recognition differences between the two cultures (Indian and Dutch). In addition to the emotion categories and parameters originally used for RaFD, we also rated the database on an ‘Arousal’ parameter, which is not available for RaFD. Emotions can be understood in terms of two parameters, namely valence and arousal [32,33], where valence can be positive or negative (pleasant/unpleasant) and arousal represents the intensity of emotion felt by the participant (calm/intense). Multiple studies suggest that emotion-cognition interactions are more strongly influenced by the arousal value of emotions than by valence [34,35]. Given that the arousal of an expression plays a significant role in emotion processing [36–38], having arousal ratings for this database would facilitate various cross-cultural experimental studies in controlling for arousal values.

Forty naïve observers (age range: 18–35 years, 25 females) with normal or corrected-to-normal vision provided informed written consent and participated in the experiment. All experimental protocols were approved by the Institutional Ethics Review Board of University of Allahabad.

Apparatus

The stimuli were presented using E-Prime 2.0 Professional software [39] on a Samsung PC running Windows (1024 x 768, 85 Hz), and the data were analyzed in Matlab [40] and R [41].

Stimuli

Only the front-faced straight gaze adult models from RaFD were used in this experiment. We used only seven expressions, namely; happy, angry, sad, surprise, disgust, neutral, and fear. We did not include ‘contempt’ expression, as it was the least accurately rated expression in the Radboud ratings. Moreover, the low accuracy rates for ‘contempt’ expression have been attributed to variations in facial features representing contempt expression across different cultures and regions [42,43].

A total of 39 models (19 females), each depicting the seven above-mentioned expressions (273 images), were divided into two experimental sets (Set-1 and Set-2) of 19 and 20 models respectively (otherwise the experiment would have exceeded two hours, which was not feasible to run with a single participant). Set-1 had 133 images and Set-2 had 140, and the pictures were rated by two different groups of participants. The assignment of models to the two sets was random, and all expressions from each model were presented within one set only. Twenty participants rated each set, and each image was rated only once by a participant, giving us twenty unique ratings per image.

Each experimental set had two rating blocks presented sequentially to all participants, namely an attractiveness rating block and an emotion rating block. Each trial began with the presentation of an image at the center of the screen, with the task question above the image and the rating scale below it (Fig 1). The image remained on the screen until the participant rated it, and responses were entered using a keyboard. For each model, all emotional expressions were presented sequentially, followed by the next model and all its expressions. Model image order was randomized in both blocks.

An image was presented at the center of the monitor with the rating scale at the bottom and the corresponding question at the top. Participants were instructed to classify the emotion portrayed by the image and then rate the same image on five parameters, namely valence, clarity, genuineness, intensity and arousal, on a five-point Likert-type scale.

https://doi.org/10.1371/journal.pone.0203959.g001

Participants rated attractiveness on a 5-point Likert-type scale (1 –unattractive, 5 –attractive). Only the neutral expression of each model was used in this task, so that participants became familiar with the images of a given set. The same models were used in the emotion-rating task. The first question was an emotion categorization task, where participants were instructed to report the intended expression of the image on a 7-point nominal scale (1- happy; 2- surprise; 3- disgust; 4- neutral; 5- sad; 6- angry; 7- fearful), choosing the label that best described the expression. This emotion categorization task differed from the original [5] task in two ways. First, we did not include the contempt expression, for the reasons mentioned above. Second, we did not include the ‘others’ option used in their study [5]: most of the ‘others’ responses among Dutch raters in the original article [5] were for the contempt expression, and since we dropped the contempt expression, we dropped the ‘others’ option as well [44]. Apart from these two differences, all other rating scales were identical to the original task [5]. After the emotion categorization task, participants rated the valence of the expression (negative to positive), clarity of the expression (unclear to clear), genuineness (false to genuine), intensity (weak to strong), and arousal (calm to excited) on a 5-point Likert-type scale (1 to 5), sequentially for the same model. As mentioned previously, one of our objectives was to test the universality hypothesis of emotion recognition; to achieve that, we requested and obtained the original classification and rating data of the Dutch participants [5] from the authors.

Attractiveness rating

On a scale of 1–5, the mean attractiveness ratings for male and female adult models were not significantly different, t(37) = –1.158, p = .254, CI = [–0.81, 0.22]. Since we did not have individual values for attractiveness ratings from the Dutch participants, we were not able to perform a statistical analysis comparing the attractiveness ratings for the two populations. However, the mean ratings (mean ± standard deviation) of Indian and Dutch samples [5] for male (Indian = 2.13 ± 0.77, Dutch = 2.10 ± 0.58) and female (Indian = 2.42 ± 0.82, Dutch = 2.36 ± 0.53) adult models were similar across the two cultures (Fig 2).

Bar plot comparing the mean attractiveness ratings for male and female models of the Radboud database from Indian (white bars) and Dutch (grey bars) raters. Error bars represent the standard error of the mean.

https://doi.org/10.1371/journal.pone.0203959.g002

Expression agreement analysis

We evaluated the agreement rates, that is, the percentage of instances in which an emotion was correctly categorized as the intended expression (Fig 3). The overall (mean ± standard deviation) agreement rate across all emotion categories was 83.9% ± 15.7% (median = 85%). Agreement rates for individual pictures by Indian and Dutch raters are provided in the supporting information (S1 Table). A one-way repeated measures (RM) ANOVA on arcsine-transformed agreement rates for the seven expressions was performed. Mauchly’s test showed a significant deviation from sphericity, W(6) = 0.26; p < .001, for expression (εExpression = 0.68), so Greenhouse-Geisser corrected values were used. The analysis showed a significant effect of expression, F(4.08, 155.04) = 25.08; p < .0001, ηp2 = 0.39. Post-hoc Tukey-Kramer analysis showed that agreement rates were significantly higher (all p < .001, Cohen’s d between 1.17 and 1.93) for the happy expression (M = 97.9%, SD = 3.2%) compared with all other expressions. The agreement rates for neutral, sad, surprise and disgust were not significantly different from each other (Fig 3). The agreement rates for angry (M = 71.5%, SD = 9.8%) and fear (M = 71.9%, SD = 12.8%) were the lowest and differed significantly from all other expressions (all p < .01; all d ≈ 1.9). In contrast, for the Dutch ratings, the lowest agreement was observed for contempt (M = 50%, SD = 15%) and the second lowest for disgust (M = 77.3%, SD = 11.1%), while the highest agreement rate was for the happy expression (M = 98%, SD = 3%). Since we did not include the contempt expression, we could not compare the two datasets for that expression.
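The two quantities at the heart of this analysis, agreement rate and its arcsine transform, are simple to compute. The sketch below is a toy illustration with invented trial data, not the study's dataset; the arcsine-square-root transform shown is the standard variance-stabilizing transform commonly applied to proportions before ANOVA.

```python
import math

def agreement_rate(chosen, intended):
    """Percentage of trials on which the chosen label matched the intended one."""
    hits = sum(c == i for c, i in zip(chosen, intended))
    return 100.0 * hits / len(intended)

def arcsine_transform(rate_percent):
    """Arcsine-square-root transform of a percentage, a standard
    variance-stabilizing step before running ANOVA on proportions."""
    return math.asin(math.sqrt(rate_percent / 100.0))

# Invented example: 5 trials, one miss (fear read as surprise).
chosen   = ["happy", "fear", "sad", "happy", "angry"]
intended = ["happy", "surprise", "sad", "happy", "angry"]
rate = agreement_rate(chosen, intended)
print(rate)                               # -> 80.0
print(round(arcsine_transform(rate), 3))  # -> 1.107
```

The transform maps 0-100% onto 0 to π/2, pulling in the compressed variance near the ceiling (e.g., the 97.9% agreement for happy), which is why it is applied before the RM ANOVA.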

Bar plot comparing mean agreement rates for the seven expressions between Indian (white bars) and Dutch (grey bars) participants. Error bars represent the standard error of the mean.

https://doi.org/10.1371/journal.pone.0203959.g003

A two-way RM ANOVA on agreement rates comparing the ratings from the Dutch and Indian participants was performed with expression (7 expressions: happy, surprise, disgust, neutral, angry, sad, and fear) and culture (2 cultures: Indian and Dutch) as within-subjects factors. Mauchly’s test showed significant deviations from sphericity for the expression factor, W(6) = 0.13, p < .001, so Greenhouse-Geisser corrections were applied to the expression factor (εExpression = 0.60). There was a significant main effect of expression, F(3.6, 136.8) = 25.90, p < .001, ηp2 = 0.40, and of culture, F(1, 38) = 29.90, p < .001, ηp2 = 0.44. The interaction between expression and culture was also significant, F(6, 228) = 17.14, p < .001, ηp2 = 0.31. Post-hoc Tukey-Kramer analysis showed no significant difference (all p > .30) between agreement rates of Indian and Dutch raters for the happy, surprise, disgust and sad expressions. However, significantly lower agreement rates were found among Indian raters than among Dutch raters for the angry (p < .01, d = 0.79), neutral (p = .045, d = 0.75) and fearful (p < .01, d = 1.36) expressions. As mentioned previously, angry and fear were the expressions with the lowest agreement rates among Indian raters.

There were a few negative expressions (e.g. fear, angry) for which there was a lack of consensus among raters and the agreement rates were low (~ 70%). Fig 4 shows a three-dimensional plot of the mean percentage of expressions chosen by the participants, as a function of the expressions intended by the models. This plot also represents a confusion matrix, that is, how often an intended expression (of a model in RaFD) was confused for any other expression in this forced-choice paradigm. The confusion matrix shows that intended fear was confused with surprise (10%), intended surprise was confused with fear (9%), and intended disgust was confused with angry (8%). Such categorization errors were also reported by the Dutch raters (see Fig 4, [5]). Indian raters categorized intended angry as sad (14%) and disgust (8%), while intended neutral was classified as sad (8%). Visual inspection indicates that Indian raters misclassified angry and neutral expressions more often than Dutch raters did.
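A confusion matrix like the one in Fig 4 can be tallied directly from forced-choice responses. The sketch below uses hypothetical labels and ratings purely for illustration:

```python
import numpy as np

def confusion_matrix(intended, chosen, labels):
    """Row-normalized confusion matrix: the fraction of trials on which each
    intended expression was categorized as each chosen expression."""
    index = {lab: i for i, lab in enumerate(labels)}
    counts = np.zeros((len(labels), len(labels)))
    for i, c in zip(intended, chosen):
        counts[index[i], index[c]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1  # avoid division by zero for unseen labels
    return counts / row_sums

# Hypothetical ratings: intended 'fear' is sometimes read as 'surprise'.
labels = ["fear", "surprise", "angry"]
intended = ["fear", "fear", "fear", "surprise", "angry"]
chosen = ["fear", "fear", "surprise", "surprise", "angry"]
cm = confusion_matrix(intended, chosen, labels)
# cm[0, 1] is the fraction of intended-fear trials categorized as surprise.
```

Each row sums to one (for expressions that occur in the data), so off-diagonal entries read directly as confusion percentages.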

Some societal factors could influence our ability to process faces, and facial recognition could in turn predict social intelligence. In 2010, Johnston et al. conducted an experiment in which participants viewed photographs of subjects showing varying levels of smiles and were asked questions such as which of the subjects they “would talk about personal issues with” or “would ask where the bathroom is”. The results suggested that perceivers tended to point to those demonstrating a “smile” type emotion, particularly if issues of trust or cooperation were involved. Participants evaluated subjects who seemed to be genuinely enjoying themselves more positively than those who were smiling but were classified as “non-enjoyment” (either a grimace or a “fake” smile) (Johnston et al., 2010). While the existing literature does little to explore societal differences in the perception and expression of emotion, Chen’s contribution in 2014 along with Johnston’s in 2010 suggest that the emotions we display certainly impact how we interact with each other in our day-to-day responsibilities in society.

There is a much larger body of research exploring cultural differences in emotion recognition and expression. In 1989, David Matsumoto noted that in the 1972 Ekman study, U.S. participants outperformed the Japanese. He suggested that some cultures, such as Japan, follow social norms that might inhibit the understanding of emotion in cases where understanding it might be disruptive to social harmony. The literature suggests that Matsumoto collaborated with other researchers over the course of the next few years to explore these cultural differences in emotion recognition. One of the earliest significant contributions in this period came in 1992, when Matsumoto & Assar outlined requirements for experiments studying cultural differences in expression. They established that in these studies (1) participants from various cultures must view the same set of stimuli, (2) expressions must meet the criteria for validly and reliably portraying said emotion, (3) each poser must appear only once in the set of stimuli, and (4) expressions must include posers of more than one race (Matsumoto & Assar, 1992). In 1997, Michael Biehl, David Matsumoto, and Paul Ekman collaborated with a number of other colleagues to explore differences in the level of recognition and ratings of intensity across Hungarian, Japanese, Polish, Sumatran, US, and Vietnamese participants for the 7 core emotions established by Paul Ekman in 1972. The results revealed high agreement across countries in identifying the emotions portrayed in the photos, but cross-national differences in the exact level of agreement for anger, contempt, disgust, fear, sadness and surprise. In 2009, Matsumoto and colleagues tested whether the same cultural differences applied to recognizing spontaneous emotion by taking video frames of Olympic medal winners just after they had won or lost a medal, having them FACS-coded into emotions, and then presenting them to observers of different cultures.
They found that in the case of spontaneous emotion, observers of different cultures utilize the same facial cues when judging emotions, and the signal value of facial expressions is similar across cultures (Matsumoto et al., 2009). Hilary Elfenbein and Nalini Ambady began a new branch of the cultural research by suggesting that there is an “in-group advantage” in the understanding of emotion: participants were generally more accurate in recognizing emotions expressed by members of their own culture than in recognizing emotions expressed by members of another. The experiment was replicated across both positive and negative emotions and tested on non-facial nonverbal channels of emotion such as tone of voice and body language (Elfenbein & Ambady, 2002). Joshua Ackerman and his colleagues furthered this research by claiming they had found a “cross-race effect” in a study that asked participants to memorize emotional face stimuli and recall them later. Their results suggested White participants were more likely to remember angry Black faces than angry White faces, and explained this with a biological response: White participants found Black faces threatening, and remembering them was an evolutionary mechanism. This research was replicated by Eva Krumhuber and Antony Manstead in 2011 (Krumhuber & Manstead, 2011) and again by Steven Young and Kurt Hugenberg in 2012 (Young & Hugenberg, 2012) using the same stimuli set. JD Gwinn and Jamie Barden argued that in replicating the 2006 work by Ackerman, these two studies failed to validate the stimuli set. They noted that the stimuli contained only 4 Black subjects, whose facial expressions were all quite “unusual”.
They re-tested the effect of angry expressions on memory for White and Black faces with newly designed stimuli and found that angry expressions impaired memory for Black faces compared to neutral expressions, which was contrary to the previous findings. They tested both White and Black participant samples, finding similar results. They concluded that the cross-race effect was better explained by stereotype congruency.

All of the literature discussed thus far in exploring biological, societal, and cultural differences in the expression and recognition of emotion uses nearly identical research methods: collecting a set of facial expression stimuli founded in Ekman’s 1972 theory of emotion, or creating a new set that is then coded using the same FACS developed in 1978; presenting it to a panel of observers while controlling for the variable of interest; asking them a set of questions about the faces presented; and then analyzing the results for significant differences. There is an entirely separate branch of work founded in Ekman’s 1978 FACS research that has sought over time to automate the coding process using machine learning and computer vision. The review suggests that it began in 1992, when Susanne Kaiser and Thomas Wehrle demonstrated a method in which small dots were affixed to the faces of participants, themselves FACS experts, expressing various facial emotions. The dot patterns were captured and digitized from the videos using a special algorithm, and an artificial neural network was then used to automatically classify the distances and dot patterns into the separate emotions. In 1997, Curtis Padgett and Garrison Cottrell advanced the neural net classification method by testing three different representation schemes as input to the classifier to compare results (a full face projection, an eye-and-mouth projection, and an eye-and-mouth projection onto random 32×32 patches from the image) (Padgett & Cottrell, 1997). The results suggested that the last of the three systems achieved an 86% generalization rate on new face images.
During the same year, two other significant contributions were made testing alternative feature sets as input to machine learning classifiers: one by Lanitis, Taylor and Cootes which used measurements of the shapes of key facial features and spatial arrangements to achieve between 70% and 99% accuracy on a normal test set of 200 images (Lanitis, Taylor & Cootes, 1997) and one by Essa and Pentland which used estimates of facial motion called optical flow extracted from video slides to achieve similar results (Essa & Pentland, 1997). M.S. Barlett and colleagues advanced the research in 1999 by successfully feeding a hybrid feature set of facial features and optical flow estimations into a three-layer artificial neural network to automatically detect the presence of facial action units 1 through 7 in a facial image (out of Ekman’s total of 46 from the 1978 research) (Barlett et al., 1999).

Neural networks remained the method of choice for automatic facial emotion and facial action classification through the 1990s. In 2005, Meulders, De Boeck, Van Mechelen, and Gelman proposed a probabilistic feature analysis to extract the features most relevant to producing an expression, with the goal of identifying a minimal feature set that could more efficiently classify facial emotions. While neural networks remain a popular and effective method for classifying emotion even in recent research (Meng et al., 2016), the literature shows the emergence of other methods that can make more efficient classifications with smaller feature sets, such as support vector machines and hidden Markov models. These very same methods were used in 2012 by Jiang, Valstar and Pantic to create a fully automatic facial action recognition system (Jiang et al., 2012).

The methods that I will employ in my research are not focused on the automated recognition of emotion in facial expressions. Instead, we will use FACS-coded faces from the Cohn-Kanade database of tagged facial images (Lucey et al., 2010) to measure how one’s own emotions affect our ability to perceive emotion in others. I would like to contribute to the current body of literature around societal context by asking the key question: does how we feel impact how we perceive others?

Q6 by Dr. Gregg Vesonder

In Kahneman’s book, System 1 is the term used for the part of our brain that makes quick, automatic decisions based only on information from the past. In other words, it is a low-energy decision-making engine that does not bother to expend energy on decisions involving information that is not already known. System 2 is the part of our brain capable of making slow, well-thought-out decisions, and it often requires an extra expense of energy for critical thinking and for incorporating pieces of information that may not be fully known or understood. Ideally, the two systems work in harmony: when System 1 requires a little more thinking power to make a decision, it turns to System 2 for processing. The theory suggests that all illogical decision making comes from cases where this harmony does not exist (Kahneman, 2011).

Kahneman explains that System 1 is easily influenced, impatient, impulsive, and more driven by emotion than System 2. When System 1 is fired up or under load (i.e. from emotions), System 2 tends to fail to override it and performs poorly. In addition, every time we have an emotional experience, we provide System 1 with more information, which it will automatically use to make quick decisions in the future. So, even if System 1 is not under load at the time of decision making, emotional experiences in the past still influence the decision-making process of our ‘autopilot’ System 1 which, Kahneman writes, actually makes the majority of our decisions even when we believe we are making rational decisions with System 2. I believe that emotional content and emotional experiences heavily influence our decision making, even if we are not emotional in the moment.

My hypotheses do not presently take this into account, but perhaps asking the participants to think about which faces are exhibiting certain emotions might actually be considered a System 2 task as it requires some level of thinking and careful examination. Based on this, it would be interesting to test if priming a subject with an emotional stimulus (i.e. suppressing System 2) in advance of completing the questionnaire would significantly alter emotion perception and the results.

Q7 by Dr. Gregg Vesonder

Gestalt Principles state that a whole is greater than the sum of its parts, or in other words, that the whole picture tells a different story than any individual piece. The concept of figure and ground explains that we have a perceptual tendency to separate parts or “figures” out from their background based on traits such as shape, color, or size. The focus in any moment is on the figure. The ground is simply the backdrop. Sometimes this is a stable relationship, but sometimes (in an unstable relationship) our attention shifts such that what was formerly the figure is now the ground, and vice versa (Grais, 2017). In the example presented in the question text, a smile might be considered unstable. We may perceive it as “happy” when it is presented in a blank context, or if the individual is sitting on the beach with their family. But we may perceive it as an altogether different emotion if that same smile is on the face of a shooter holding a gun.

Similarly, the Gestalt concept of “Proximity” explains that objects that appear close together appear to form groups. A smile alone may require more thinking to decide whether it is actually a “happy” emotion being shown than a smile among 11 other smiles or a group of people who are smiling in a photo. Context in this way does not have to be environmental with a single figure, but can also include multiple figures that exhibit some similar features.

Gestalt theory also explains that we tend to group things together that share similarities (i.e. shape, color, size) in the concept of “Similarity”. We have grown to recognize a smile as a smile and a frown as a frown based on being exposed to hundreds if not thousands of past interactions with individuals who have exhibited those facial expressions. When confronted with a new expression, we compare features of that new expression to those from the past and classify it based on similarities. The features may not be as simple as “color” and “shape”, and in fact may be quite complex, comprising over a hundred unique features we cannot describe individually, but the concept holds. In this way, past context can affect present perception of emotion.

In all of these cases, the common theme is that context absolutely affects our perception of emotion. When we test our theories, we should consider the context not only of the stimuli we are asking participants to tag but perhaps even the context of the participant. Do I perceive emotion differently in the comfort of my own home versus just before leaving the office after a stressful day at work? Even when presented an identical image of a smile, my own context might alter my response.

Q8 by Dr. Gregg Vesonder

Yes. The primary common theme is that each of these items will invoke an emotional response in us. In 2017, Schindler et al. conducted a meta-review of the literature around extant measures of emotional response to stimuli from various domains, ranging from film, music and art to consumer products, architecture, and physical attractiveness, and developed a new assessment tool called the Aesthetic Emotions Scale (AESTHEMOS) designed to measure the stimuli’s perceived aesthetic appeal from any of these domains (Schindler et al., 2017). What they discerned from their literature review is that extant measures of emotion have become very domain-focused because the way we respond emotionally to, say, a landscape, is different than how we might respond to a piece of music (i.e. the collection of emotions invoked are typically different), but both responses are emotional ones. They call these responses “aesthetic emotions”.

While AESTHEMOS focuses on creating a domain-agnostic assessment tool for measuring emotional response to stimuli, one contribution by my research would be to measure how our own emotions affect our perception of stimuli from these various domains, just as we do with faces. For instance, if the literature review conducted by Schindler et al. suggests that different combinations of emotions are invoked by stimuli from different domains, then what does feeling angry do to our perception of the world around us? Are we less likely to enjoy art and music, or will we feel more enjoyment (happiness) from certain types of art and music in those cases because they provide an outlet?

I say: yes, there are common themes in the perception of all of these stimuli in that they all invoke an emotional response in us. And I hypothesize that our state of emotion affects our perception of them, and therefore their effect on us, in different ways depending on the domain they come from.

Q9 by Dr. Gregg Vesonder

Data Structure

The structure of the raw data collected has 4 main sections in a single flat table containing a total of 137 columns. First, there is a unique Session ID (to the user and device) for every submission, along with their Age, Gender, whether or not they identify as a Native English Speaker, and their baseline self-rated emotion response (Happy, Sad, Angry, Afraid, Surprised).

Following this there is a long series of columns containing the 8 images that the user was shown for each emotion (since they are presented from a random pool) and whether or not the user flagged that image (i.e. “Tap the faces that look ‘happy’”). We do this for each of the 5 emotions and twice more for “NOT Happy” and “NOT Sad”, resulting in a total of 112 columns containing this data.

The third component is the same user’s self-rated emotion responses on a scale of 1-5 after they have been asked to tag all of the faces, to see if playing the ‘game’ has had any impact on emotion.

And finally there is a series of time stamps indicating the time that the user submits each task, designed to see if there is variation in response time depending on the emotional responses.

Preprocessing

Before any quantitative analysis, data processing will be applied to calculate some additional features:

(1) For each face tagged (there are 56), we will compare them to the already-tagged Cohn-Kanade database from which they come (Lucey et al., 2010) to see if the user was “correct”. This will generate 56 new features indicating, for each face, whether the participant correctly identified the dominant emotion.

(2) For each emotion (there are 5) and each “non” emotion (there are 2) we will tally the total number of responses correct and incorrect, as well as the overall total correct and incorrect. This will generate 16 new features.

(3) For each time stamp, we will calculate the completion time (in seconds) it took each participant to complete the step as well as the total time to completion. This will generate 10 new features.

(4) Each participant will also be placed in an age group: (18 to 24), (25 to 44), (45 to 64), (65 and over), based on those used by the US Census Bureau.

(5) For each participant, we will create 5 new binary features, each representing a positive or negative flag for feeling each emotion. For example, if a participant responds 1-2 (low) on the ‘sad’ scale, they will be considered “not sad”. If they respond 3-5 (mid-high) on the ‘sad’ scale, they will be considered “sad”.
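The tallying and binarization steps above could be sketched as follows; the field names and values here are hypothetical (the real table has 56 face-tag columns and 5 self-rated emotion scales):

```python
# A minimal sketch of preprocessing steps (2) and (5), with hypothetical data.

def binarize_emotion(rating):
    """Step (5): ratings 1-2 -> False ('not' the emotion), 3-5 -> True."""
    return rating >= 3

def tally_correct(responses, answer_key):
    """Step (2): count correct and incorrect taps against the Cohn-Kanade tags."""
    correct = sum(r == a for r, a in zip(responses, answer_key))
    return correct, len(responses) - correct

# Hypothetical participant record:
sad_rating = 2
responses = [True, False, True, True]    # which faces the user tapped
answer_key = [True, False, False, True]  # Cohn-Kanade ground truth

is_sad = binarize_emotion(sad_rating)             # False -> "not sad" group
n_correct, n_wrong = tally_correct(responses, answer_key)
```

The same two helpers would be mapped over all 56 face columns and all 5 emotion scales to produce the derived features described above.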

Our dataset will now have a total of 225 columns. 26 of those features are of interest for statistical analysis (those generated in (2) and (3)) and the remainder will be treated as explanatory variables or preserved for exploratory hypotheses.

Qualitative Analysis

Before any quantitative statistical analysis is performed, a qualitative assessment of the data will be conducted. Histograms will be generated for age group, gender, and the native English speaker flag to look for any anomalies or outliers in the distributions that should be removed prior to formal analysis. The way the application is designed should not allow for any missing data. Rows with empty cells or empty responses will be removed before statistical analysis as they indicate a system error or abandonment of the questionnaire, and preliminary data collection suggests that such cases should be sparse (<5%).

The distributions of the features calculated in preprocessing will be checked for normality to assess the appropriate statistical test method for comparing between-group responses.

An exploratory data analysis will be conducted to produce summary visualizations of the responses. Visualizations that explain the average number of correct/incorrect responses and average response times by age group, gender, English speakers, and emotional baseline will be created to tell a data story and present the results in summary. The visualizations will also aid in identifying any outliers or particularly interesting patterns.

Quantitative Analysis

The Spearman and Pearson partial correlations and their statistical significance will be calculated between the participants’ emotional responses (i.e. Happiness level) and the number of correct/incorrect responses to each emotion and correct/incorrect responses overall. We will calculate these over all ages and genders, as well as within gender and age groups.
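A sketch of how these correlations might be computed with SciPy, on synthetic data; the partial correlation here is implemented by residualizing both variables on the covariate, which is one common approach (variable names are hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: happiness level (1-5) and number of correct 'happy' taps,
# with a positive trend built in for illustration.
happiness = rng.integers(1, 6, size=50).astype(float)
correct = 2.0 * happiness + rng.normal(0, 1, size=50)

r_pearson, p_pearson = stats.pearsonr(happiness, correct)
rho_spearman, p_spearman = stats.spearmanr(happiness, correct)

def partial_corr(x, y, covar):
    """Partial correlation of x and y controlling for covar:
    correlate the residuals after regressing each variable on the covariate."""
    def residuals(v):
        X = np.column_stack([np.ones_like(covar), covar])
        beta, *_ = np.linalg.lstsq(X, v, rcond=None)
        return v - X @ beta
    return stats.pearsonr(residuals(x), residuals(y))[0]

age = rng.normal(40, 12, size=50)  # hypothetical covariate
r_partial = partial_corr(happiness, correct, age)
```

With the trend built into the synthetic data, both correlation coefficients come out strongly positive; on the real data the coefficients and their p-values would be tabulated per emotion and per demographic group.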

A Student’s t-test will be conducted to test for statistically significant differences in the number of correct/incorrect responses to ALL emotions for each emotion group (i.e. does the number of correct “happy” responses differ between the “happy” and “not happy” groups?).
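This comparison might be run as follows; the sketch uses Welch's variant (which does not assume equal variances) on synthetic correct-response counts:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical correct-response counts (out of 8 'happy' faces) per group.
happy_group = rng.binomial(8, 0.85, size=40)      # self-rated 'happy' group
not_happy_group = rng.binomial(8, 0.70, size=40)  # 'not happy' group

# Welch's t-test: does not assume equal variances between groups.
t_stat, p_value = stats.ttest_ind(happy_group, not_happy_group, equal_var=False)
```

If the normality check above fails, a Mann-Whitney U test (`stats.mannwhitneyu`) would be the usual non-parametric substitute.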

A one-way Analysis of Covariance (ANCOVA) test will also be conducted to compare the dependent variable (number of correct responses to each emotion) between emotion groups (i.e. “happy” vs. “not happy”) while including (1) age, (2) gender, and (3) native English speaker as covariates.
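Since a one-way ANCOVA is equivalent to a linear model with a group indicator plus covariates, the adjusted group effect can be sketched with ordinary least squares; the data below are synthetic and use a single covariate for brevity:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 80

# Hypothetical data: group (1 = 'happy', 0 = 'not happy'), age as covariate,
# and correct-response counts with a built-in group effect of +1.5.
group = rng.integers(0, 2, size=n).astype(float)
age = rng.normal(40, 12, size=n)
correct = 5.0 + 1.5 * group - 0.02 * age + rng.normal(0, 1, size=n)

# ANCOVA as a linear model: outcome ~ intercept + group + covariate.
# The coefficient on 'group' is the group difference adjusted for age.
X = np.column_stack([np.ones(n), group, age])
beta, *_ = np.linalg.lstsq(X, correct, rcond=None)
adjusted_group_effect = beta[1]
```

In the full analysis the design matrix would also carry dummy-coded gender and the native English speaker flag as additional covariate columns.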

Finally, unsupervised learning techniques will be used to identify clusters of participants with similarities in their emotional responses that are more complex than obvious to the human eye. K-means clustering with varying levels of k will be employed on the participants’ responses to the emotional questionnaire (5 features) and the elbow method will be used to identify the optimal k. DBSCAN (density-based spatial clustering of applications with noise) will also be used to generate the same emotion clusters. Then, ANOVA and ANCOVA tests will be performed once more to compare the number of correct/incorrect responses between these new complex “emotion clusters” that were generated by k-means and DBSCAN, and the results compared.
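A sketch of the elbow method and DBSCAN on synthetic 5-feature emotion responses; the cluster structure is built in so the elbow at k = 2 is visible:

```python
import numpy as np
from sklearn.cluster import KMeans, DBSCAN

rng = np.random.default_rng(3)

# Hypothetical 5-feature emotion responses: two synthetic participant clusters.
cluster_a = rng.normal([4, 1, 1, 1, 2], 0.3, size=(30, 5))  # high 'happy'
cluster_b = rng.normal([1, 4, 3, 2, 1], 0.3, size=(30, 5))  # high 'sad'
X = np.vstack([cluster_a, cluster_b])

# Elbow method: inertia (within-cluster sum of squares) for varying k.
inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in range(1, 6)}
# The 'elbow' is where inertia stops dropping sharply (here, at k = 2).

# DBSCAN groups dense regions without fixing k in advance; -1 marks noise.
db_labels = DBSCAN(eps=1.5, min_samples=5).fit_predict(X)
```

On the real 5-feature questionnaire responses the cluster structure would not be known in advance, which is exactly why both the elbow criterion and a density-based method are compared.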

Each of these quantitative analyses will generate a large number of results, but will all be done algorithmically so that significance levels and correlations can be compared easily in the end.

Q10 by Dr. Babak Heydari

There are various machine learning methods that are popularly used for classification problems like the one in question. Deciding on the most effective approach is usually a function of things like the size of the sample set, the dimensionality of the feature space, whether or not we believe the data is linearly separable, and any underlying assumptions the method might make about the distribution of the data. A few of the more popular methods are discussed here as well as advantages and disadvantages of each and the reason for final selection.

Logistic Regression – one of the simpler and more traditional approaches, and often a good place to start. Logistic regression fits a linear model to the training data and makes predictions by computing the probability that the dependent variable falls into a specific category as a function of a linear combination of the independent variables. While one of its advantages is its simplicity, it assumes that the log-odds of the outcome are linear in the features, so it performs best when the feature space is (close to) linearly separable. There are few disadvantages to starting out with logistic regression in a new classification problem and then trying more advanced methods from there.

Naïve Bayes – based on Bayes’ theorem, which relates conditional probabilities: the probability that an event will happen given that another event has already occurred. Given this, we can calculate the probability of an event using prior knowledge. The Naïve Bayes classifier assumes this holds true for the data we are using to make our prediction. It also assumes that all of the features in the data set are independent of each other. This can be a disadvantage when learning the relationships between features would provide more accurate classification, since it is unable to do so. However, it is fast, simple, and highly scalable. It also works well with categorical data and when the data is not linearly separable.

K Nearest Neighbors (KNN) – The KNN algorithm predicts a class based on the feature similarity of the test data to the existing (training) data. The advantage to this is that it is a non-parametric method, meaning it makes no prior assumptions about the distribution of the data, and it is therefore very helpful when we have no prior knowledge and need to let the structure of the data speak for itself. It works very well in real-world cases, and because there is no (or very minimal) formal “training” period, it is generally very fast. However, because it makes predictions based on the “nearness” of similar items, it requires that we come up with a meaningful measure of distance, which can be a challenge depending on the type of data we are working with. While it can be relatively insensitive to outliers, it is very sensitive to irrelevant features inappropriately included in the measure of distance.

Support Vector Machines – SVMs separate the data into classes by maximizing the margin between classes using what are called “support vectors”. There are both linear SVMs and non-linear SVMs for when it is not possible to separate the training data using a hyperplane (in other words, the boundary the SVM creates doesn’t have to be a straight line). The benefit of non-linear SVMs is that we can capture much more complex relationships between classes, at the expense of being computationally expensive. Because they do not make any strong underlying assumptions about the data, and because of their ability to capture complex relationships, they often provide some of the best classification performance on real-world problems when simpler methods do not produce acceptable performance.

Decision Trees & Random Forests – Decision Trees use a branching methodology to make predictions, just as the name would suggest. Each “branch” of the tree represents a decision made based on a prior decision, and a “leaf” node at the end of a branch represents a predicted class. They help make decisions under uncertainty, and also provide a nice visual representation of a decision situation (like deciding between classes). They also work well on categorical or even mixed data since they do not make any assumptions about the data or linearity. However, accuracy generally goes down as the dimensionality of the features goes up, so they generally do not work well for high-dimensional data sets. Random Forests generate multiple decision trees from different random samples of the data and then use the “most popular” prediction as the final output.

Artificial Neural Network & Deep Nets – Finally, artificial neural networks represent an entire branch of research that uses simulations of biological neural networks to make decisions or predictions using data. The basic anatomy of an ANN consists of an input layer containing the feature set that is being used to make predictions, an output layer which contains one or more “nodes” representing an output of the network (this could be, for example, multiple classes), and a series of hidden layers which transform the input into the output. Nodes are connected by weights; a neuron “fires” to the nodes in the next layer when it reaches a threshold. We compare the output to that in the training set, adjust the weights to reduce the error, and then make another guess. An ANN keeps doing this until it can no longer decrease the error. “Deep Learning” networks are simply ANNs with a much higher number of hidden layers. ANNs are very computationally expensive, but work well when the feature space is complex and generalized decisions need to be made by detecting patterns that may or may not be detectable by humans. They have been shown to work very well in computer vision applications, and are popular in facial recognition and emotion detection as seen in the literature; however, due to their complexity and computational intensity I will not be using ANNs in this response.

Selected Model: For this problem, I have ruled out traditional methods like Logistic Regression and Naïve Bayes and the more advanced and computationally intense methods of ANNs and Deep Learning networks. Decision Trees will become too complex with the high dimensionality of continuous variables, and while the KNN approach may also provide good performance, coming up with a meaningful definition of distance may be difficult. For an implementation with relatively good performance and moderate complexity, I will be implementing a linear Support Vector Classifier for this problem.
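As a rough illustration of the trade-offs discussed above, the candidate classifiers can be compared side by side on a stand-in dataset (scikit-learn's bundled digits set here, since the emotion data is not reproduced; the scores are illustrative only, not benchmarks for the emotion task):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier

# Stand-in classification task with a continuous, moderately high-dimensional
# feature space, loosely analogous to landmark coordinates.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=5000),
    "naive_bayes": GaussianNB(),
    "knn": KNeighborsClassifier(n_neighbors=5),
    "linear_svc": LinearSVC(max_iter=10000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
scores = {name: model.fit(X_train, y_train).score(X_test, y_test)
          for name, model in models.items()}
```

Running a comparison like this on a held-out split is a cheap sanity check before committing to one family of models.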

Major Steps:

(processor.py)

1) First, we use the OpenCV (Open Computer Vision) library, which has a pre-trained model that detects the face in an image, to extract just the face from the images in the JAFFE database, and use the ‘glob’ package to sort those files into subdirectories labeled by emotion. We chose three emotions to focus on to reduce the problem space: happy, sad, and angry.

(classifier.py)

2) Next, we initialize a face detector and landmark predictor class using the open source ‘Dlib’ library, which contains a pre-trained model to extract landmark coordinates from a facial image. These will be the features we use to train the SVC to recognize emotion. The pre-trained model extracts 68 unique landmarks from the face. Rather than compute distances to a centroid or anything complex, we will use the raw coordinate values as input features to the SVC.

3) For 10 iterations, we:

a. Pull the images from the emotion directories (happy, sad, angry)

b. Split them into an 80/20 train & test set and append the emotion labels

c. Extract the facial landmarks and store the x and y coordinates in an array

d. Train an SVC classifier on the 80% training data

e. Test the performance of predictions on the 20% test data

f. Append the accuracy to an array

4) Calculate the mean of all of the iteration accuracies to produce a final result.
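The training loop in steps 3-4 can be sketched end to end. Since Dlib and the JAFFE images are not reproduced here, synthetic landmark coordinates stand in for the real features (the `synthetic_landmarks` helper and its per-emotion offsets are purely hypothetical); the loop itself mirrors the procedure above:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(4)
EMOTIONS = ["happy", "sad", "angry"]

def synthetic_landmarks(emotion, n):
    """Stand-in for Dlib's 68-point predictor: 68 (x, y) landmarks flattened
    to 136 features, shifted per emotion so the classes are separable."""
    center = {"happy": 0.0, "sad": 1.0, "angry": 2.0}[emotion]
    return rng.normal(center, 0.4, size=(n, 68 * 2))

# ~30 faces per emotion, as in the JAFFE subset described above.
X = np.vstack([synthetic_landmarks(e, 30) for e in EMOTIONS])
y = np.repeat(EMOTIONS, 30)

accuracies = []
for _ in range(10):  # step 3: ten random 80/20 splits
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
    clf = SVC(kernel="linear").fit(X_train, y_train)  # steps 3d-3e
    accuracies.append(clf.score(X_test, y_test))      # step 3f

mean_accuracy = np.mean(accuracies)  # step 4
```

Because the synthetic classes are cleanly separated, the sketch scores near ceiling; the real JAFFE features overlap far more, which is why the actual run averaged 0.878.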

The mean accuracy on a single run (10 iterations) was 0.878, which is quite good. However, since the sample size is so small, it is worth noting that the iteration accuracies bounce between the same few values… this is because for each emotion, the test set contains only ~6 faces. In some cases, we even see 100% accuracy, which is unlikely to hold at scale. In order to improve the model’s performance and ability to generalize, we might try:

– Including additional data. The sample size is quite small in this example (~30 faces per emotion), which gives the model less data to use to differentiate.

– Exploring different features. For this example we used the raw coordinates, but we might explore alternatives such as distance measures between the facial landmarks.

– Image transformations. For this example we leave the greyscale images as they are, but there are a number of techniques that apply transformations to the images to make differentiating features stand out more (for example, adjusting contrast, or applying filters that reduce the image to only the principal components of an NxN grid overlaid on it).
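The distance-feature idea mentioned above could look like the following sketch (the helper is hypothetical): pairwise Euclidean distances between landmarks are invariant to where the face sits in the frame, unlike raw coordinates:

```python
import numpy as np

def pairwise_distance_features(points):
    """Alternative features: Euclidean distances between every pair of
    landmarks. For 68 landmarks this yields 68*67/2 = 2278 values."""
    coords = np.asarray(points, dtype=float)
    diffs = coords[:, None, :] - coords[None, :, :]     # (n, n, 2) differences
    dists = np.sqrt((diffs ** 2).sum(axis=-1))          # (n, n) distance matrix
    iu = np.triu_indices(len(coords), k=1)              # upper triangle, no diagonal
    return dists[iu]
```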

SVC performance is typically visualized by projecting the feature space into two dimensions and then visualizing the linear separation. Because of the high dimensionality of the feature space in this problem (68 landmarks, i.e., 136 coordinate features), projection is quite difficult and the visualizations become meaningless. I am including an output of the model here and a link to the git repository below:
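One partial workaround, sketched here with synthetic stand-in data, is to project the features to two dimensions with PCA before plotting; the SVC’s separating hyperplane does not survive the projection exactly, but class clusters often remain visible:

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for the landmark feature matrix: 30 samples per
# emotion, 136 features each (three well-separated Gaussian clusters).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, size=(30, 136)) for c in (0, 4, 8)])

# Project to 2-D; X_2d can then be scatter-plotted, colored by emotion.
X_2d = PCA(n_components=2).fit_transform(X)
```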

runfile('/Users/jmanfre/dev/python/jaffe/classifier.py', wdir='/Users/jmanfre/dev/python/jaffe')

In the literature review conducted in response to Dr. Mansouri’s question, I explored the evolution of a taxonomy of research branching from a foundation in Paul Ekman’s work in 1972 on emotion expression (Ekman, 1972) and in 1978 on the coding of facial expression through facial action units in the Facial Action Coding System (FACS) (Ekman, 1978). I discussed that through the 1990s and 2000s there was a significant amount of research on societal and cultural effects on emotion recognition and expression, as well as an evolution of the computational methods used to code for those emotions when expressed. In this response, I will elaborate on other, more recent branches of the taxonomy that grew from the same roots in Ekman’s 1972 research.

While there was a heavy focus on the effects of cultural and societal context through the early 2000s, more recently Barrett and Kensinger explored whether context in general is routinely encoded during emotion perception. That study formally validated that people remember the context more often when asked to label an emotion in a facial expression than when asked to simply judge the expression itself. Their research suggested that facial action units viewed in isolation might be insufficient for perceiving emotion and that context plays a key role (Barrett & Kensinger, 2010). One year later, in 2011, Barrett, Mesquita & Gendron continued the research to test various context effects during emotion perception, such as visual scenes, voices, bodies, and other faces, and not just cultural orientation. Their findings suggested that, in general, context is automatically encoded in the perception of emotion and plays a key role in its understanding.

There is a branch of research that began to explore differences in biology and their effects on emotion perception. The first part of that branch focused on age. In 2010, Phills, Scott, Henry, Mowat, and Bell conducted a study comparing the ability to recognize emotion among healthy older adults, those with Alzheimer’s disease, and those with late-life mood disorder. Emotion detection was impaired, expectedly, in those with Alzheimer’s, and also slightly in the mood disorder group (Phills, Scott, Henry, Mowat & Bell, 2010). They also found that issues with emotion perception predicted quality of life in older adults, indicating that emotion decoding skills play an important role in the well-being of older adults and prompting further research on the age relationship. In 2011, Kellough and Knight conducted a study suggesting that there is a positivity bias in older adults, and explained it by proposing that these effects were related to “time perspective” rather than strictly to age per se (Kellough & Knight, 2011). This research was validated in a systematic meta-analysis in 2014 by Reed, Chan and Mikels, whose analyses indicated that older adults indeed show a processing bias toward positive versus negative information, and that younger adults show the opposite pattern (Reed, Chan & Mikels, 2014). In 2011, Riediger, Voelkle, Ebner & Lindenberger conducted a study that included not just older adults but also younger raters to assess the age effect more broadly. Their results suggested that the age of the poser might also affect the raters’ ability to correctly identify the emotion (Riediger, Voelkle, Ebner & Lindenberger, 2011). This was studied specifically by Folster, Hess & Werheid in 2014, who concluded that the age of the face does indeed play an important role in facial expression decoding, and that older faces were typically more difficult to decode than younger faces (Folster, Hess & Werheid, 2014).

The second part of the “biology” branch explored gender. Two studies conducted in 2010 highlighted gender differences. In the first, a multisensory study by Collignon et al., participants were asked to categorize fear and disgust expressions presented as facial expressions accompanied by audio. They found that women tended to process the multisensory emotions more efficiently than men (Collignon et al., 2010). The second study, by Hoffman et al., produced results suggesting women were more accurate than men in recognizing subtle facial displays of emotion, even though no significant differences were observed when the facial expressions being identified were labeled as “highly expressive” (Hoffman et al., 2010).

In the late 2000s and early 2010s, the large majority of new literature around facial expression perception focused on its relationship with an assortment of psychological disorders. In 2010, Bourke, Douglas and Porter found evidence in patients with clinical depression of a bias toward sad expressions and away from happy expressions (Bourke, Douglas & Porter, 2010). The same year, Schaefer et al. conducted a similar study for bipolar depressive raters specifically, and found evidence of emotional processing abnormalities (Schaefer et al., 2010). Kohler et al. tested various controls for a similar bipolar rater panel to explore whether there were other explanatory factors and found the same deficit regardless of task type, diagnosis, age of onset/duration of illness, sex, or hospitalization status, suggesting that difficulty with emotion perception is likely a stable deficit in depressive disorders (Kohler et al., 2011). In 2012, Penton-Voak et al. furthered the research by testing the effects of emotion perception training on depressive symptoms and mood in young adults. They found some evidence for increased positive mood at a 2-week follow-up compared to controls, suggesting that modification of emotion perception might lead to an increase in positive affect (Penton-Voak et al., 2012). This sort of finding has seeded further research into how emotion perception training or intervention might actually be used to aid those suffering from psychological disorders, namely depression. We will discuss this later.

Like depression, there is a large body of work focusing on schizophrenia. In 2010, Chan, Li, Cheung, and Gong noted that there was mixed evidence regarding whether patients with schizophrenia have a general facial emotion perception deficit or only a deficit in specific facial emotion recognition tasks (Chan, Li, Cheung & Gong, 2010). They conducted a meta-analysis of 28 facial emotion perception studies in patients with schizophrenia that included control tasks, and their findings demonstrated a general “moderate to severe” impairment in the ability to perceive facial emotion in schizophrenia. This seeded a chain of follow-up research. Brown and Cohen in 2010 studied which specific symptoms of schizophrenia seemed to contribute to the deficit. They found that impaired ability to label emotional faces did not correlate with symptoms, but was generally associated with lower quality of life and disorganization (Brown & Cohen, 2010). The same year, Linden et al. studied the same ability in raters but with a focus on working memory. Their results actually indicated preserved implicit emotion processing in schizophrenia patients, which contrasts with their impairment in explicit emotion classification (Linden et al., 2010). In 2011, Amminger et al. examined patients at risk for schizophrenia as well as those who were clinically stable with a first-episode diagnosis to test whether the emotion recognition deficit was apparent in people at risk before

I just read a paper this morning with the title “Facial expressions of emotion are not culturally universal,” which argues that 1) whereas Westerners represent each of the six basic emotions with a distinct set of facial movements common to the group, Easterners do not, and 2) Easterners represent emotional intensity with distinctive dynamic eye activity. While I find it interesting in terms of its methodology and findings, I am also concerned about its validity due to the small sample. According to my own research using faces in the MSCEIT emotion perception section, people across China and the US have consensus on what emotion a face displays; the differences lie only in their consensus level, meaning that people in China have a significantly lower level of consensus than people in the US. I think it is too rash to jump to the conclusion that emotion recognition/perception is culturally different with such a small sample. And I believe there is a meta-culture of emotion recognition/perception, while some cultural differences also exist.

The biological significance of the face as an instrument for communication starts in infancy. As early as 9 minutes after birth, infants prefer to look at faces rather than objects, and as young as 12 days old, babies have the ability to imitate facial gestures. This ability later contributes to the development of cognitive skills such as language and mentalizing (i.e., understanding others’ intentions).

Not all is straightforward when it comes to reading emotions, especially when reading emotions across cultures. Despite the universality of basic emotions, as well as the similar facial muscles and neural architecture responsible for emotional expression, people are usually more accurate when judging facial expressions from their own culture than those from others. This can be explained by the existence of idiosyncratic and culture-specific signatures of nonverbal communication. These cultural “accents” influence interactions between nature (biology) and nurture (cultural contexts), which, in turn, affect the perception and interpretation of emotions.

So, how does culture influence emotion perception?

One way is in the perception of the intensity of emotions. For example, Americans have been shown to rate the same expressions of happiness, sadness and surprise more intensely than the Japanese. Furthermore, differences have been found in the way we infer internal experiences from external displays of emotion. When asked to rate faces on how intensely they were portraying certain emotions and how intensely the posers were actually feeling those emotions, American participants, for instance, gave higher ratings to the external appearance of emotions. The Japanese participants, on the other hand, assigned higher ratings to the internal experiences of emotions. Therefore, depending on cultural contexts, internal turmoil might not necessarily be legible on the face, while an overly excited smile might be masking only lukewarm enthusiasm.

This cross-cultural discrepancy in interpreting emotion intensity has been attributed to display rules.

Display rules are “cultural norms that dictate the management and modification of emotional displays depending on social circumstances” (Matsumoto et al., 2008, p.58). Culture-specific display rules are learned during childhood. These rules can tell us whether it’s appropriate to amplify, de-amplify, mask or neutralize our emotional displays, as well as provide us with normative prescriptions for when and how to display our emotions.

A classic study from the 1970s that demonstrates cross-cultural differences in display rules involved American and Japanese participants watching stressful films under two conditions – once alone, and once with an experimenter in the room (Ekman, 1971). Participants from both cultures produced similar facial expressions when watching the films alone. However, in the presence of the experimenter, the Japanese masked their negative emotions with smiles. The Americans, on the other hand, continued to display their negative emotions in front of the experimenter. These differences were explained by differences in display rules in Japan and in the US: namely, the Japanese tendency to conceal negative emotions in social settings in order to maintain group harmony, and the tendency to endorse emotion expression in individualistic cultures such as the US.

Cross-cultural variations have also been found in the cues we look for when interpreting emotions. Research tracking eye movements to assess where people direct their attention during face perception has shown that across cultures, people may be sampling information differently from faces. For instance, when identifying faces, East Asian participants focused on the central region of the face around the nose, giving more importance to the eyes and gaze direction. Western Caucasian participants, on the other hand, expected signals of facial expressions of emotion from the eyebrows and the mouth region.

Attentional biases were also highlighted when participants were asked to look at faces with conflicting expressions (i.e., sad eyes with a happy mouth). The results showed that Japanese participants gave more weight to the emotion portrayed by the eyes, while American participants were relatively more influenced by the mouth region. One possible explanation for these differences is that display rules prescribing high levels of affect control prompt people to pay closer attention to features that are more difficult to manipulate and thus carry more information about true emotional states (i.e., the eyes), whereas in cultures with less strict display rules, people concentrate on the mouth, as it is the most expressive part of the face.

Examples of emoticons commonly used in different cultures. Note how in Asian cultures, eyes are typically used to express emotion, while in Western cultures the mouth reflects the emotion expression.

Source: Marianna Pogosyan

According to some researchers, these results can be reflected in the stylized emotion expressions depicted with emoticons: in Japan, emoticons convey emotion mostly through eyes, and in the West – mostly through the mouth.

While we all use facial expressions as indispensable tools for social communication, culture influences emotion perception in various subtle yet important ways. From cognitive styles shaped by display rules, to attentional biases brought forth by the weight we give to facial cues, an awareness of these cultural influences may improve the accuracy with which we decode emotions during our interactions with people from other cultures.
