Facebook tramples human research ethics and gets published by PNAS for the effort

Facebook may have experimented with controlling your emotions without telling you

I start out an angry bastard on most days, but that’s just before coffee. After that, I actually lighten up and quite enjoy life and laughter. I’m really not the bitter old curmudgeon I tend to unleash when I write. Even much of my political ranting is spent more tongue-in-cheek and facepalming than actually risking a real aneurysm.

But this pisses me right the fuck off.

If you’re not familiar with human research, I urge you to brush up on something called an Institutional Review Board. Sure, it’s Wikipedia, but for our purposes it’s sufficient to get you up to speed here. At hospitals and research institutions, tons of paperwork outlining methods, protocols, etc. must be filed with the appropriate review board before a mouse so much as gets injected with saline as a control. In academic settings, psychologists must do much the same when testing their theories before they can proceed. Hell, I know a grad student in philosophy who had to jump through hoops before she could even pose thought experiments to human subjects.

To many, this might seem a bit absurd. How could asking someone questions hurt them? Or expose the institution to risk and/or litigation? It’s plausible that a question could pull the rug out from under someone, leading to what, an existential crisis? A crisis of faith? These might change the subject’s behaviors going forward, which behaviors, in retrospect, might appear in a poor light and be construed as damage. Hell, in this day and age, there’s plenty of litigious souls who would consider having a sad “damage.”

From an institutional point of view, IRBs have many functions, but at the end of the day it’s largely about mitigation of risk and liability.

More importantly, these boards are about ethics. There is a right way and a wrong way to conduct research, especially when it involves humans. What those right and wrong ways are forms the body of a great deal of research and debate, precisely because it is so very important. Ethicists and the professionals who rely on them have a vested interest in doing what’s right. Sometimes that’s because doing the right thing is simply the right thing to do. Sometimes it’s risk management. Sometimes it’s about building and protecting a brand. What company today really wants its brand associated with unethical human studies?

We have an answer to that question now. Facebook.

Here’s the kicker, as I see it:

None of the users who were part of the experiment have been notified. Anyone who uses the platform consents to be part of these types of studies when they check “yes” on the Data Use Policy that is necessary to use the service.

Facebook users consent to have their private information used “for internal operations, including troubleshooting, data analysis, testing, research and service improvement.” The company said no one’s privacy has been violated because researchers were never exposed to the content of the messages, which were rated in terms of positivity and negativity by a software algorithm programmed to read word choices and tone.

Seriously? This might pass muster for some legal beagle whose answer to the question, “what does this law mean?” is “it depends on who is paying me.” This does not pass my sniff test even remotely. Truly, when you signed up for Facebook, did you even bother to read this policy before you consented? For most of you, the answer is, “of course not. Who the hell reads these things? I just want to see pictures of kittehs.” For the rest of you, in your wildest dreams did you imagine that “research” as mentioned in the agreement meant you’d conceivably be used in…not just marketing research or computer systems testing of some kind, but actual psychological or sociological research?

Did you know that you were consenting to have your emotional state manipulated?

689,003 people in particular probably did not.

How many wives got black eyes after this experiment?

How many road rage episodes were triggered?

How many razor blades went from bad idea to suicide attempt?

We’ll never know. The risk of even one, especially in the garish context of corporate research for profit, is too great. Whether or not you think I’m being silly is of no importance. What matters is that Facebook made that decision for you, back when you probably didn’t bother reading the terms, or, like me, naively thought those terms meant something other than this.

Worse, the Proceedings of the National Academy of Sciences legitimized this travesty of human research ethics by publishing the paper. Granted, Facebook is no Mengele. Hell, like it or not, Mengele’s unethical, nay, barbaric methods have provided valuable medical data that we benefit from to this day, data that could never have been gained in any other manner. As a global society of civilized humans, we were supposed to have learned something from that and applied it.

Apparently we didn’t. I can only hope there is sufficient and legitimate outcry from tried-and-true ethicists to keep, if not the likes of Facebook from doing this again, then at least august journals like PNAS from aiding and abetting this kind of abrogation of ethics so basic that even first-year sociology students learn a thing or two about them by way of Laud Humphreys’ Tearoom Trade.

—-

Image credit: Tolbasiaigerim @ Wikimedia Commons. Licensed under Creative Commons.

Comment Policy:

Comments on this blog are disabled. If you find this article noteworthy for any reason, I encourage you to share it widely (with attribution, naturally) and help spread the conversation away from the narrow confines of one tiny blog. Thank you.

 
