“Let’s conduct an online experiment.” It sounds innocuous enough, unless you run one of the largest online social networks and you didn’t tell the participants.
The story centres on the publication of a paper in the Proceedings of the National Academy of Sciences of the United States of America (PNAS) under the rather Orwellian title “Experimental evidence of massive-scale emotional contagion through social networks”. So, what did Facebook do, and WTF (why the fury)?
Researchers from the Core Data Science Team at Facebook, the University of California, and Cornell University used almost three quarters of a million users (689,003 to be precise) in a study conducted in mid-January 2012. The nature of the academic review cycle meant that it took until June 2014 for the research to be published. The researchers set out to manipulate the emotional state of users over a one-week period by reducing either the number of emotionally positive or emotionally negative items in the participants’ Facebook News Feeds, and then examined their posted status updates for positive or negative emotion. The participants selected were all users who viewed Facebook in English. It isn’t clear from the paper whether that included “English (UK)” users – or indeed “English (Pirate)” users for that matter. There were also two control groups, in which status updates were omitted at random, since News Feeds that were not manipulated already contained a positive emotional bias (22.4% of posts contained negative words; 46.8% contained positive words).
Happy talking – The Findings
The findings are described as “controversial” in the paper itself, but for different reasons than those being discussed online. There are certainly a number of methodological questions about the research. Machine algorithms were used to identify the emotional content of the messages, a task that is fraught with problems in short bursts of text like status updates. The researchers were only assessing the emotional state of Facebook users from what they posted to the platform; their actual emotional state may have been quite different. Those two things aside, the findings suggest that reading emotionally positive updates (let’s call it “happy talk” for now) results in writing happy talk, and reading emotionally negative updates results in users posting emotionally negative updates. The study also indicated that people who were exposed to fewer emotional posts (ones with neither positive nor negative content) were less emotional in their own updates in the following days.
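To see why word-counting classifiers struggle with short status updates, here is a minimal sketch in the spirit of the dictionary-based approach the paper describes. The tiny word lists are illustrative stand-ins of my own, not the real research dictionaries:

```python
import re

# Illustrative stand-in word lists; the real study used much larger
# psycholinguistic dictionaries to count emotional words.
POSITIVE = {"happy", "great", "love", "good", "nice"}
NEGATIVE = {"sad", "awful", "hate", "bad", "terrible"}

def classify(status: str) -> str:
    """Label a status update by counting dictionary words it contains."""
    words = set(re.findall(r"[a-z']+", status.lower()))
    pos = len(words & POSITIVE)
    neg = len(words & NEGATIVE)
    if pos and not neg:
        return "positive"
    if neg and not pos:
        return "negative"
    return "neutral"

# Word counting has no notion of negation or sarcasm:
print(classify("I am so happy today"))     # positive
print(classify("I am not happy at all"))   # positive -- negation missed
print(classify("great, another Monday"))   # positive -- sarcasm missed
```

With only a handful of words per update, a single missed negation or sarcastic remark flips the label entirely, which is exactly the short-text fragility noted above.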
Subjects versus Participants
The bigger controversy is over the participants. This is a historically thorny issue for psychologists. In the 1970s it was common to write about “subjects” in research papers. That isn’t a term you will see in use today, at least not in a psychology or medical journal. At the start of the 1990s the BPS updated its code of conduct and ethical principles to refer to “participants”, reinforcing the strong belief that people should participate in, rather than be subjects of, research. After much self-reflection over many dubious studies that caused long-lasting psychological harm, researchers realised that their positions of power over participants required them to hold each other to account over how they carried out research. Telling someone that they are participating in an experiment is not enough to constitute consent. The BPS guidelines state that people should be “…given ample opportunity to understand the nature, purpose, and anticipated consequences of any professional services or research participation, so that they may give informed consent to the extent that their capabilities allow.”
There have been long-standing ethical differences between the APA in the US and the BPS in the UK, not least of which are their historic views about psychologists’ involvement in torture – the APA allowed its members the ‘Nuremberg defence’ while the BPS warned that members would be struck off for even “countenancing” cruel, inhuman or degrading procedures. It is a harsh example, and after half a decade that gap was closed, but it is a stark reminder that psychology and psychological research in the US and Europe (and the UK within that) are distinct in their histories, perspectives and practices.
Were the Facebook users participants, subjects, or just human guinea pigs? The paper states that research was carried out “…such that no text was seen by the researchers. As such, it was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research.” I’m not an ethicist, nor do I play one on TV, but I struggle to understand how that might be construed as “informed consent.” The key word there, in both the APA guidelines and the UK’s BPS ethics guidelines, is “informed”. I ran a quick, small survey (n=9) and none of my respondents realised that accepting the Facebook terms and conditions had enrolled them as participants in unlimited psychology experiments [see update: it actually hadn’t] – the text of the paper implies that Facebook has conducted a number of these studies.
There are instances where participants, because of the nature of the experiment, have to be misled or misinformed about their participation. These are the exception: that sort of experiment usually undergoes considerable additional ethical scrutiny, and even in the US it is prohibited if it puts participants at more than minimal risk.
There are also other forms of experimental design that undergo increased ethical oversight, specifically those involving children and those involving emotional manipulation. The paper appears silent on the age of the participants (are children honest and accurate when they fill in their ages on Facebook, even if the researchers screened by age?), and participants weren’t debriefed (which would be a normal experimental requirement). There are places where psychologists are allowed to run riot, as it were – for example, observing people in public spaces who could reasonably expect to be observed. But the Facebook experiment was not observation, it was wilful intervention; and it was not a public space, it was a privately run, commercial online platform.
The long and short of it is that I am very surprised that the academic institutions involved allowed the research to go ahead. If one tried to carry it out in the UK, it would be a very short conversation with a university ethics board. The imbalance of power between the participants and the researchers is quite staggering.
It gets worse
What I am going to say next in no way lets Facebook, the University of California or Cornell off the hook: there is worse. When asked for comment, the body that gave ethical approval essentially said that Facebook manipulates the News Feed all the time, so it is fine. Let’s call that the “stuff happens” defence. There will definitely be questions about the ethical approval in the US, especially if federal money was involved (Cornell has backtracked on an earlier statement implying that it was), as there is a strict approval process (IRB review).
In the UK in 2012 (the most recent figures from the ONS) there were 5,981 suicides in people aged 15 and over. That is 11.6 deaths per 100,000. Bending the statistics, and scaling that rate to the Facebook study, that is 80 people within the study population. That isn’t 80 suicide risks, that is 80 deaths. Now, that number may have little accuracy, but start to think about the number of people in that study who may have been suffering from depression or mental health issues. What effect might it have had on them? We are not talking about one or two people, we are talking about thousands. Remember, the impact of the study wasn’t known before it was conducted, and in reality, it still isn’t known. Were people permanently affected by their unwitting participation? This is the sort of thing that got psychologists into hot water in the 1970s, only then it was studies involving just a few dozen people. These new studies are at Internet scale. What happens if that negative sentiment had spiralled out of control? Who would have pulled the plug? Could it have impacted the stock market? The shadows cast by Milgram and Zimbardo hang heavy over psychologists, even today.
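For transparency, the back-of-the-envelope scaling above is just the UK rate applied to the study population:

```python
# Apply the UK 2012 suicide rate (11.6 per 100,000 people aged 15+)
# to the study population. Purely illustrative, as noted above: the
# study population was not a sample of the UK adult population.
study_population = 689_003
rate_per_100_000 = 11.6

expected = study_population * rate_per_100_000 / 100_000
print(round(expected))  # 80
```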
I said it gets worse, and it does. If I were a software engineer, with no qualifications in psychology, no membership of an appropriate professional body, and no experience in conducting experiments with human participants, but I worked for a large social network platform, I could carry out this sort of manipulation and experimentation on a daily basis with no ethical oversight and no external accountability. This happens every day, all the time. We call it behavioural marketing, marketing optimisation and a host of other terms. All effective marketing is manipulation. Traditionally the marketeers’ tools were blunt, and there was relatively little feedback to guide the precision of our blows. Much like the behavioural psychologists of the 1950s, we hacked our way through customer markets, gradually learning. The scale of the Internet, and the emotional transparency promoted by social networks, have changed that. Now marketeers and engineers alike are turning into emotional surgeons, with no responsibility after the body leaves the operating table.
Is it time for online marketeers to have a code of ethics, and to be subject to ethical oversight? What is and isn’t an appropriate form of emotional manipulation? Which discoveries have to be publicly disclosed and which do not? Facebook has rubbed the big data lamp, the social engineering genie is firmly out of the bottle, and now users want to know what it intends to do to them.
No Harm, No Foul?
The emotional effect found in the study is relatively small, and with such a short study period (7 days), it doesn’t allow for systemic change. Users might “mute” negative users over time (or even “positive” ones for that matter – friends constantly going on expensive holidays and raving about it, we are looking at you). It also doesn’t allow for the effects of normalisation – the experiment produced a shift in the emotional content of the News Feed; over time, it is likely that users would normalise to the increased level of “happy talk” and revert to their normal emotional balance. Perhaps. There are other secondary effects that make operationalising this sort of research very problematic too. If users realise that “happy talk” updates are more likely to appear in their friends’ News Feeds, will that influence how people post to Facebook? I can already imagine big brand agencies working hard to post the “happiest” updates, as if corporate Facebook pages weren’t awkward enough already.
In reality, the findings of the study tell us very little that we didn’t know already; the paper itself references two other studies, of many, with similar findings. What the study does tell us a lot about is Facebook itself, and the ways in which it conducts research. It raises questions about whether, and how, social platforms are accountable for the well-being of their users. Marketeers, and platform owners, aren’t likely to get away with being ethically “vague” for much longer. Just because you didn’t pay for a service, it doesn’t give the service provider the right to turn you into a guinea pig. Perhaps we need to stop using the term ‘users’, just as psychologists stopped using the word ‘subjects’. Language drives attitudes, and viewing people as ‘big data’ leads to a mindset that is dangerously abstracted from the human consequences of action, or inaction.
Update: It transpires that the “research” clause was not in the Facebook terms and conditions at the time the study was conducted. A Facebook spokeswoman told the Guardian that “when someone signs up for Facebook, we’ve always asked permission to use their information to provide and enhance the services we offer. To suggest we conducted any corporate research without permission is complete fiction.” Facebook clearly hasn’t yet grasped the difference between “permission” and “informed consent”, or between “scientific experimentation” and “corporate research”.