A new study in the Proceedings of the National Academy of Sciences has been receiving an enormous amount of negative press: the study of 'emotional contagion' has been called 'secret mood manipulation,' 'unethical,' and a 'trampl[ing] of human ethics.' The researchers took 689,003 Facebook users and used the Linguistic Inquiry and Word Count (LIWC) software to manipulate the proportion and valence of 'positive' and 'negative' emotional terms that appeared in users' news feeds. They then argued that emotional contagion propagates across social networks. This study has a number of flaws, and the fact that it passed Institutional Review Board (IRB) review is the least of them.
Facebook claims to have demonstrated emotional contagion, but cannot show that they actually manipulated emotions AT ALL.
That's right, the reason I'm upset is that they didn't manipulate emotions; not because I wanted them to -- as that would potentially be an enormous violation of ethics -- but because they claimed they did and published it in a peer-reviewed journal, without actually proving anything of the sort.
There are so many flaws with the methodology that I'm going to limit myself to bullet points covering the most glaring problems:
- "Posts were determined to be positive or negative if they contained at least one positive or negative word, as defined by Linguistic Inquiry and Word Count software (LIWC2007) (9) word counting system, which correlates with self-reported and physiological measures of well-being, and has been used in prior research on emotional expression (7, 8, 10)." -- I'm friends with a ton of jazz musicians. When they call something 'bad,' that's high praise, not a negative assessment, but LIWC would count it as negative.
- More generally, depending on the social circle, terms like bad, dope, stupid, ill, sick, wicked, killing, ridiculous, retarded, and terrible should be grouped differently. There is absolutely no indication that the researchers took slang or dialect variation in English into account.
- This study does not -- and cannot -- demonstrate actual emotional contagion. They have a much better chance of demonstrating lexical priming than emotional contagion. Except, they can't demonstrate that either, because all of the terms are aggregated, so they only know that words with 'negative valence' are predictors of the use of other words with negative valence.
- "people's emotional expressions on Facebook predict friends' emotional expressions, even days later (7) (although some shared experiences may in fact last several days)" -- That is, there's no control for friends in social networks sharing a real-world experience and posting about it on Facebook using similar emotional terms.
- "there is no experimental evidence that emotions or moods are contagious in the absence of direct interaction between experiencer and target."
In other words, the Facebook study does not control for shared experiences being described in similar terms, does not control for different semantic and pragmatic contexts (e.g., "those guys were BAD, son. [Piano player] was STUPID NASTY on the gig last night!" is extremely positive, but would be interpreted by LIWC as extremely negative), and conflates emotional contagion with lexical priming (simply, the increased likelihood of using a given term if it is 'primed' by previous use or by previous use of a related term).
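To make the problem concrete, here is a minimal sketch of the word-counting approach the study describes. The word lists below are invented for illustration (the actual LIWC2007 dictionaries are proprietary and far larger), but the classification criterion is the one quoted from the paper: a post counts as positive or negative if it contains at least one word from the corresponding list.

```python
# Illustrative sketch of LIWC-style word counting. These word lists are
# invented for demonstration; they are NOT the actual LIWC2007 dictionaries.
POSITIVE = {"happy", "great", "love", "wonderful"}
NEGATIVE = {"sad", "bad", "stupid", "nasty", "terrible"}

def classify(post: str) -> str:
    """Label a post by the study's criterion: it is 'positive' or
    'negative' if it contains at least one word from that list."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    has_pos = bool(words & POSITIVE)
    has_neg = bool(words & NEGATIVE)
    if has_pos and has_neg:
        return "both"
    if has_pos:
        return "positive"
    if has_neg:
        return "negative"
    return "neutral"

# Extremely positive jazz slang, counted as negative by word matching:
print(classify("Those guys were BAD, son. He was STUPID NASTY on the gig!"))
# -> negative
```

Because the method only matches surface forms, it has no way to recover the pragmatics: the example above is glowing praise, yet every matched term lands in the negative list.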
In order for this study to say anything even remotely interesting, the researchers would first have to demonstrate that they can get at actual emotional state through social media posts. Then, they would have to demonstrate that they could reliably determine actual emotional state from social media posts (what is the probability that a Facebook user is experiencing sadness given that they have used descriptive terms about sadness in their posts?). Next, they'd have to separate out confounds (e.g., "nasty" for "good"). Then they'd have to demonstrate that there is in fact a 'contagion' effect. Finally, they'd have to demonstrate that the apparent contagion effect was not just lexical priming (that is, me repeating "sad" because I was primed by another person's use of the word "sad," while not actually feeling sadness). If this post is any indication, they'd also have to figure out a way to control for discussion of emotion -- this post is chock-full of negative terms, while being emotionally neutral, since I'm discussing emotional terms.
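The second step above is a straightforward Bayes calculation, and it's worth sketching why it matters. The numbers here are entirely hypothetical, chosen only to illustrate the arithmetic the study would need to do with real, validated rates; none of them come from the study or from any data.

```python
# Hypothetical rates, purely to illustrate the required Bayes step.
p_sad = 0.10                  # base rate: poster is actually sad
p_words_given_sad = 0.50      # sad posters who use 'sadness' vocabulary
p_words_given_not_sad = 0.15  # non-sad posters who use it anyway
                              # (lyrics, slang, discussing emotion...)

# P(words) by the law of total probability
p_words = (p_words_given_sad * p_sad
           + p_words_given_not_sad * (1 - p_sad))

# P(sad | words) by Bayes' theorem
p_sad_given_words = p_words_given_sad * p_sad / p_words
print(round(p_sad_given_words, 2))  # -> 0.27
```

Under these (made-up) assumptions, a post containing sadness vocabulary would indicate actual sadness barely a quarter of the time. Without estimating these probabilities empirically, counting 'sad' words tells you almost nothing about emotional state.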
The real travesty is not that the Facebook study passed IRB; it's that it passed peer review.
This is indicative of a larger problem in the sciences: there is a bias toward dramatic findings, even if they're not terribly well supported. As a linguist, I feel like linguistics suffers from this more than other fields, since there has been a slew of recent dramatic articles published about linguistic topics by non-linguist dabblers who employ terrible methodology (for instance, making claims about linguistic typology predicting economic behavior, but getting all the typologies wrong!). Whether linguistics as a field actually suffers from this more than others remains to be shown by a well-designed study. That said, when people decide to do research that relies heavily upon understanding linguistic behavior, it behooves them to, I don't know, maybe...consult a linguist.
Ultimately, the Facebook study was (just barely) within the realm of ethical study on human subjects, although its definition of informed consent was more than a little blurry. What's truly terrible about it is that the authors made very strong claims about emotional contagion in social networks that their research does not justify, and that those claims passed peer review.
©Taylor Jones 2014