There was another round of confused Facebook outrage this weekend when a story in The Atlantic revealed the social media giant had intentionally toyed with users’ moods — allegedly for science, but in reality, for money. Facebook turned up the sadness dial on some users’ news feeds and found that sadness was contagious (happiness was, too).
Facebook manipulated people, and boy are academics mad. I’d like to stand over here on my years-in-the-making soapbox and welcome all those folks to the world of Facebook outrage. We’ve been awaiting your arrival! Facebook manipulates users all the time, folks. That’s why users constantly do things they wouldn’t do if they understood the privacy consequences.
The study that’s produced the outrage did qualify as science. It was published in the prestigious journal Proceedings of the National Academy of Sciences. It has plenty of folks scrambling.
It’s a bad idea to harm people for science. In many cases it’s merely unethical, not illegal, but it’s a really bad idea. When harm is unavoidable — say you are trying an experimental drug that might cure a person with cancer, or might kill them faster — scientists must obtain “informed consent.” Now, informed consent is a very slippery idea. A patient who is desperate and ready to try anything might not really be in a position to give informed consent, for example. Doctors and researchers are supposed to go the extra mile to ensure study subjects *truly* understand what they are doing.
Meanwhile, in many social science studies, informed consent prior to the experiment would wreck the study. Telling Facebook users, “We are going to try to manipulate your mood” wouldn’t work. In those cases, researchers are supposed to tell subjects as soon as is feasible what they were up to. And the “harm” must be as minimal as possible.
Everyone who’s conducted research (including me!) has a grand temptation to bend these rules in the name of science — but my research has the power to change the world! — so science has a solution to that problem. Study designs must be approved by an Institutional Review Board, or IRB. This independent body decides, for example, “No, you may not intentionally make thousands of people depressed in order to see if they will buy more stuff. At least if you do that, you can’t call it science.”
Pesky folks, those IRB folks. My science friends complain all the time that they nix perfectly good research ideas. A friend who conducts privacy research, for example, can’t run studies that trick people into divulging sensitive personal information like credit card numbers, because that would actually cause them harm.
Facebook apparently decided it was above the pesky IRB. Well, the journal editor seemed to say the research was IRB-approved, and then later seemed to say only part of the research was IRB-approved, all of which suggests no IRB really said, “Sure, make thousands of people depressed for a week.”
And while a billion people on the planet have clicked “Yes” to Facebook’s terms of service, which apparently includes language that gives the firm the right to conduct research, it doesn’t appear Facebook did anything to get informed consent from the subjects. (If you argue that a TOS click means informed consent, send me $1 million. You consented to that by reading this story).
Back to Facebook researchgate. The problem isn’t some new discovery that Facebook manipulates people. Really, if you didn’t realize that was happening all the time, you are foolish. The problem is the incredible disconnect between Facebook and its data subjects (i.e., people). Our petty concerns with the way it operates keep getting in Facebook’s way. We should all just pipe down and stop fussing.
The researchers are starting to feel bad about this project. Somebody who calls himself “Danger Muffin” and says he designed the study made a sort of apology post last night. But even in that statement, the wordplay begins all over again, a depressing skirting of truth that reveals for the billionth time Facebook’s culture of manipulation for its own ends.
“Nobody’s posts were ‘hidden,’” writes Adam D. I. Kramer, aka Danger Muffin, in his not-really-an-apology. “They just didn’t show up on some loads of Feed.” And nobody had sex with Monica Lewinsky, either. Keeping posts off folks’ walls is virtually the same thing as hiding them, and Kramer knows that.
Those are the words of a company that is very good at justifying its own behavior.
Let’s review what’s happened here. Facebook:
1) Decided it was above the standard academic review process
2) Used a terms of service click, in some cases years old, to serve as “informed consent” to harm subjects
3) Even when called out, defended the research with mumbo-jumbo worthy of a presidential candidate, like “Nobody’s posts were hidden.”
Think carefully about this: What wouldn’t Facebook do? What line do you trust someone inside Facebook to draw?
If you’d like to read a blow-by-blow analysis of what went on here – including an honest debate about the tech world’s “so-what” reaction – visit Sebastian Deterding’s Tumblr page.
Here’s the basic counter-argument, made with the usual I’m-more-enlightened-than-you sarcasm of Silicon Valley:
“Run a web site, measure anything, make any changes based on measurements? Congratulations, you’re running a psychology experiment!” said Marc Andreessen, Web browser creator and Internet founding father of sorts. “Helpful hint: Whenever you watch TV, read a book, open a newspaper, or talk to another person, someone’s manipulating your emotions!”
In other words, all those silly rules about treating study subjects fairly that academic institutions have spent decades writing – they must be dumb. Why should Silicon Valley be subject to any such inconvenience?
My final point: When the best defense for doing something that many people find outrageous is you’ve been doing it for a long time, it’s time for some soul-searching.
*Reading this story means you’ve consented to give me $1 million. Unless you think “consent” granted in fine print is bogus.