Is Facebook Using Us As Guinea Pigs?
Facebook Emotion Experiment Draws Criticism
Well, it’s happened. The tiny words jumbled together in the terms-of-service section that nobody reads have given none other than the largest internet company in the world (in terms of data transfer and human reach) the right to conduct experiments on us. We are, of course, talking about our friendly social networking site, Facebook.
Facebook conducted a massive psychological experiment on nearly 700,000 unwitting users in 2012, and the activity has just come to light, causing a furor across the internet.
To determine whether it could alter the emotional state of its users and prompt them to post more positive or more negative content, the site’s data scientists ran an algorithm that, for one week, automatically omitted content containing words associated with either positive or negative emotions from the central news feeds of 689,003 users.
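The published paper describes this filtering only at a high level. As a rough, purely illustrative sketch in Python, a word-list filter of the kind the study describes might look like the following; the word lists, the condition names, and the omission probability here are placeholders rather than Facebook’s actual values (the study reportedly relied on standard sentiment dictionaries and varied the omission rate per user):

    import random

    # Placeholder word lists; the study used much larger standard sentiment
    # dictionaries to decide whether a post counted as positive or negative.
    POSITIVE_WORDS = {"happy", "great", "love", "wonderful"}
    NEGATIVE_WORDS = {"sad", "angry", "terrible", "awful"}

    def contains_any(text, words):
        tokens = {token.strip(".,!?").lower() for token in text.split()}
        return bool(tokens & words)

    def filter_feed(posts, condition, omit_probability=0.5):
        """Return the feed with some posts in the targeted category left out.

        condition is "reduce_positive" or "reduce_negative"; omit_probability
        is the chance that an eligible post is dropped for this one viewing.
        """
        target = POSITIVE_WORDS if condition == "reduce_positive" else NEGATIVE_WORDS
        kept = []
        for post in posts:
            if contains_any(post, target) and random.random() < omit_probability:
                continue  # silently omit this post from the rendered feed
            kept.append(post)
        return kept

Running the filter with one condition for one group of users and the other condition for another group is, in outline, the manipulation the study describes.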
The research, published in the Proceedings of the National Academy of Sciences, sparked a different emotion—outrage—among some people who say Facebook toyed with its users’ emotions and used members as guinea pigs.
“What many of us feared is already a reality: Facebook is using us as lab rats, and not just to figure out which ads we’ll respond to but actually change our emotions,” wrote Animalnewyork.com in a blog post that drew attention to the study Friday morning.
Facebook has long run social experiments. Its Data Science Team is tasked with turning the reams of information created by the more than 800 million people who log on every day into usable scientific research.
On Sunday, the Facebook data scientist who led the study in question, Adam Kramer, said he was having second thoughts about this particular project. “In hindsight, the research benefits of the paper may not have justified all of this anxiety,” he wrote on his Facebook page.
“While we’ve always considered what research we do carefully,” he wrote, Facebook’s internal review process has improved since the 2012 study was conducted. “We have come a long way since then.”
The impetus for the study was an age-old complaint of some Facebook users: that going on Facebook and seeing all the great and wonderful things other people are doing makes them feel bad about their own lives.
The study, Mr. Kramer wrote, was an attempt to either confirm or debunk that notion. Mr. Kramer said it was debunked.
According to an abstract of the study, “for people who had positive content reduced in their News Feed, a larger percentage of words in people’s status updates were negative and a smaller percentage were positive. When negativity was reduced, the opposite pattern occurred.”
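As an illustration of how such an outcome could be measured (again with placeholder word lists standing in for the much larger dictionaries the study used), the share of emotional words in a batch of status updates might be computed along these lines:

    def emotion_word_percentages(status_updates, positive_words, negative_words):
        """Return (positive_pct, negative_pct) across all words in the updates."""
        words = [token.strip(".,!?").lower()
                 for update in status_updates
                 for token in update.split()]
        if not words:
            return 0.0, 0.0
        positive = sum(word in positive_words for word in words)
        negative = sum(word in negative_words for word in words)
        return 100.0 * positive / len(words), 100.0 * negative / len(words)

Comparing these percentages between users whose feeds were filtered and users whose feeds were not is, in essence, the comparison the abstract reports.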
The controversy over the project highlights the delicate line in the social media industry between the privacy of users and the ambitions—both business and intellectual—of the corporations that control their data.
Companies like Facebook, Google Inc. and Twitter Inc. rely almost solely on data-driven advertising dollars. As a result, the companies collect and store massive amounts of personal information. Not all of that information can be used for advertising—at least not yet. In the case of Facebook, there is an abundance of information practically overflowing from its servers. What Facebook does with all that extra personal information—the data that isn’t currently allocated to the advertising product—is largely unknown to the public.
Facebook’s Data Science Team occasionally uses the information to highlight current events. Recently, it used the data to determine how many people were visiting Brazil for the World Cup. In February, The Wall Street Journal published a story on the best places to be single in the U.S., based on data gathered by the company’s Data Science Team.
Those studies have raised few eyebrows. The attempt to manipulate users’ emotions, however, struck a nerve.
“It’s completely unacceptable for the terms of service to force everybody on Facebook to participate in experiments,” said Kate Crawford, visiting professor at MIT’s Center for Civic Media and principal researcher at Microsoft Research.
Ms. Crawford said it points to a broader problem in the data science industry. Ethics are not “a major part of the education of data scientists and it clearly needs to be,” she said.
Asked a Forbes.com blogger: “Is it okay for Facebook to play mind games with us for science? It’s a cool finding, but manipulating unknowing users’ emotional states to get there puts Facebook’s big toe on that creepy line.”
Slate.com called the experiment “unethical” and said “Facebook intentionally made thousands upon thousands of people sad.”
Mr. Kramer defended the ethics of the project. He apologized for wording in the published study that he said might have made the experiment seem sinister. “And at the end of the day, the actual impact on people in the experiment was the minimal amount to statistically detect it,” he wrote on Facebook.
DOES FACEBOOK HAVE TOO MUCH POWER?
There may be no company in history with as much power to influence what we think and feel as Facebook.
Facebook is big, and has a larger reach than any medium in history. And Facebook could, if it chose to, infer countless things about you whether or not you choose to reveal them, including your sexual orientation, relationship status, propensity to use drugs, IQ, political orientation, etc.
The question is, what happens if and when Facebook decides to act on all this data? Not just sell it to marketers, but use it to influence your state of being in order to achieve a particular aim.
For example, what if there is an optimal mix of positive and negative content in your news feed that will keep you using Facebook for the greatest number of minutes a day? With this experiment, Facebook has already revealed it has the power to shape what you read in exactly this way; connecting that power to the number of minutes you spend on the site is a trivial exercise in statistics.
Facebook would be foolish not to use this insight to manipulate our emotions in order to keep us on the site as long as possible. And because the algorithms Facebook uses to determine what shows up in your news feed aren’t public, there’s no way any of us would ever know if it did. Nor are there any regulations forbidding the company from doing so. (I reached out to Facebook to ask whether anything like this is already part of its algorithm. No response yet.)
Here’s another example of Facebook’s power: In 2010, Facebook showed it could increase voter turnout in a U.S. election by pushing the right kind of message to users. Given that Facebook’s demographics skewed younger and more tech-savvy in 2010 than they do today, it’s worth asking whether, in doing so, Facebook managed to unwittingly influence congressional elections at the time.
The algorithms that shape Facebook’s news feed — and the search results we see in Google, and the posts that appear in the “discovery” tab on Twitter, and on and on — are all black boxes. We have almost no idea how Facebook has decided to influence the 1.2 billion people who use the site regularly.
If, in his dotage, Mark Zuckerberg decides to become a Hearst-type media mogul who actively shapes the news to further his own ends, there’s nothing to stop him. In some ways, this makes Facebook a media company like any other in history. The difference is that, with its infinite pools of data and its ability to micro-target changes in its algorithm to every single user, Facebook has more power than any media mogul of yore.
It’s also worth asking whether Facebook has a moral obligation to use its data for good. If Facebook can infer our mood from our posts, should it attempt to develop an algorithm to determine which of its users is most likely to commit a violent act, or to commit suicide? Does Facebook, like those who argue we should be putting anti-depressants in the water supply, have an obligation to show its saddest denizens only the sort of posts that might cheer them up?
Facebook has become less a social entertainment site and more a social utility. As its policies have changed and the site has become more intrusive in the pursuit of profits, it has become less popular but not necessarily less used. This is because the costs of switching away from a utility are high in terms of time and effort. Imagine you keep in touch with hundreds of people, or run a business that uses Facebook to stay in touch with a customer base, and then decide to move to another platform – how do you take your network with you? Switching would be costly, if not unworkable.
As Facebook becomes less loved — if not less used — the company has responded by buying up networks that truly are loved (for now), like Instagram and WhatsApp. Facebook remains the company’s core social utility, but increasingly the company is a collection of social technologies targeted at different needs and demographics.
In the final analysis, Facebook is not only large; it is also growing tentacles.