In 2014, Facebook (FB) published a paper in the Proceedings of the National Academy of Sciences (PNAS) called ‘Experimental evidence of massive-scale emotional contagion through social networks.’* This paper described the results of a research study, designed by Cornell University and carried out by FB ‘data scientists,’ in which FB intentionally manipulated the newsfeeds of nearly 700,000 users to show either overwhelmingly negative or overwhelmingly positive news items. The goal was to see whether it was possible to influence moods and perceptions, as measured by ‘likes,’ ‘shares,’ etc., and then to determine whether it was possible to measure the spread–the ‘contagion’–of that influence. The answer was ‘yes,’ it is possible to influence moods and perceptions, and ‘yes,’ it is possible to both spread and measure this influence–all through the magic of information manipulation.
There was an immediate and vocal outcry over what many considered to be ethical violations in this study, made worse by FB’s highly inappropriate and dismissive response to the nearly universal condemnation of their methods. Cornell ultimately distanced itself from the study, and PNAS issued an editorial ‘expression of concern’–one step below a retraction–and appended a correction to the publication.
US law requires that research involving human subjects–including (especially) social science and psychological research–be done under the supervision of an institutional review board (IRB) tasked with ensuring human subjects are adequately protected. This is the oversight role Cornell was supposed–but ultimately failed–to adequately provide for the FB study. The IRB system was put in place in response to egregious abuses of human research subjects in the 19th and 20th centuries. The major role of an IRB is to ensure that experiments involving people are well-designed, that risk is limited, that there is actual merit to the question being studied, and–most importantly–that people in research experiments have been properly informed of the goals, risks, and alternatives before they agree–and they must voluntarily agree–to participate. This is known as ‘informed consent’ and it is the law (45 CFR §46.116). At its very core, informed consent codifies individual choice about participation in research under the law. People can choose whether or not they wish to be involved in research, and they have the legal right to say no. Human beings cannot be forced to participate in research against their will, and if they do wish to participate, they have the right to know exactly what they are signing up for.
It is clear FB’s ‘emotional contagion’ study failed to provide anything close to adequate informed consent. In fact, no option to consent was offered at all to the 700,000 FB users selected for this study. None were even made aware their newsfeeds were being manipulated. FB had no way of knowing whether they were putting psychologically fragile users at risk, because they did not ask permission–or even inform users they were being targeted. When the recklessness of this approach was pointed out to them (and to Cornell) after the paper was published, FB’s flippant response was essentially ‘we think it’s highly unlikely we drove anyone to suicide.’ The fact is they do not know whether their psychological influence operation drove anyone to suicide. They do not know what level of completely unnecessary stress or distress they imposed on users, some of whom may have had underlying mental health issues that put them at significant risk. Worst of all, they did not, and still do not, appear to care.
FB claimed that by agreeing to their Terms of Service (ToS), users had, in effect, provided informed consent.** This is clearly not so, as FB’s ToS do not come close to satisfying the required elements of informed consent laid out in 45 CFR §46.116. However, no one stopped them. No one has ever attempted to curb, regulate, or in any way force FB to comply with the law. So essentially, if you use the FB platform, whether you know it or not, you have agreed to be their guinea pig, and have given up any right to be informed about psychological or other experimentation FB or its partners may decide to subject you to.
In the fallout from the Cambridge Analytica (CA)/FB revelations, FB has taken to the media in a ‘We Were Duped by Cambridge Analytica Pity Party and Innocent Victim Tour.’ As they continue this disingenuous effort to reframe their role, just remember that four years ago they proudly published their ‘research’ into manipulating and influencing users. They were more than happy to brag about it back then. CA was not just some app developer using the FB platform independently, like thousands of other app developers. On the contrary, FB and CA shared and exchanged, and continue to share and exchange, staff–an incestuous revolving door of talent between two companies perfectly aligned in their goals. FB showed it was possible to secretly influence users, then worked with CA to weaponize this ability for political gain. The fact that they may have engaged in psychological influence operations on behalf of a client in order to swing an election should not surprise anyone. They basically told us in 2014 they were interested in doing exactly this sort of thing.
It’s also good to remember that Mark Zuckerberg has very publicly expressed his interest in running for president. Could it be that FB’s unusually close working relationship with CA was because they supported CA’s goal to get Trump elected? Or could it be that Zuckerberg’s interest was not so much in electing Trump the candidate, per se, but in seeing how far his powerful platform, combined with psy-ops and influence campaigns, could go in altering the outcome of an election? Success here would serve as a de facto pilot study and dry run for his own ambitions. Powerful private interests manipulating the electorate for personal gain is the ultimate threat to democracy. It is clear that democratic governments lack the will and/or ability to protect their citizens from this threat, so our only option is to protect ourselves and just say no to these abusive platforms.
We were warned in 2014 that FB felt it had the right to abuse its users–in violation of the law–to its own ends. We’ve just had a massive second warning. Within minutes of FB announcing it had cut ties with CA and of CA CEO Alexander Nix being suspended, Nix and his cronies created a new corporation, Emerdata, with many of the same bad actors on board, offering essentially the same services as CA. FB has not agreed to cut ties with Emerdata. They will do it again, and FB will let them. Continuing to patronize this service as if there is nothing wrong is simply no longer an option. We have been fooled at least twice now. If we want to continue to be Mark Zuckerberg’s ‘dumbfucks,’ shame on us.
*Here is a link to the PNAS paper on ‘emotional contagion,’ along with the editorial expression of concern. Read it and see if you still believe FB is just a nice social engagement platform that somehow got out of hand. Their efforts to manipulate are systematic, aggressive, and deliberate–they are publishing studies on them. We ignore this at our peril.
**Informed consent is not a form you sign. It is an interactive process meant to ensure that the prospective subject of research is fully aware of what they are being asked to do, what the potential risks and benefits are, whether there are alternatives to participating, etc. Adequate informed consent also allows the prospective research subject to ask questions. There are times when informed consent requirements can be tailored to fit specific circumstances–such as when revealing study particulars can impact subject behavior. This was FB/Cornell’s justification for deceiving study participants. In general, for exceptions to be granted, the circumstances must warrant such deceit–for example, the information to be gained is critical enough to justify deception. In these cases, closer observation of the subjects during and after the study period is required as an additional safeguard to ensure safety. FB’s study did not meet these criteria. A study design that enrolled more participants than needed and informed them that they would be anonymously randomized to either a study group or a control group at some unspecified point would have allowed FB to get consent without impacting outcomes. There also was no urgency to this study–it was done for corporate profit, not to gather information on an emergent issue of significant importance. Lastly, they did no safety monitoring at all. They manipulated emotions and then let the chips fall where they may.
One way to gauge whether or not FB’s ToS constitutes adequate informed consent is to ask yourself if you understood, when agreeing to the ToS, that you had just agreed to be an unwitting subject in psychological experiments. If the answer is no, you were not afforded informed consent.
Additional Reading on this Topic