
We’re Smart People and We Mean Well

June 30th, 2014

By now, many of you will know about Facebook’s experimental study in which they attempted (successfully, they claim) to make their users sadder or happier by manipulating their News Feeds – without their informed consent. To call the study controversial would be an understatement. Unethical, arrogant, and bone-headed would be a little more accurate.

Beyond the critical question of ethics and the dubious scientific worth of the study lies the fascinating reaction from Facebook and from the wider technology community (by which I mean prominent venture capitalists, entrepreneurs, and developers). It’s best to look at this reaction in comparison with other tech-related uproars that have engulfed the internet. Take political issues like SOPA and net neutrality, for example. The whole tech community lent their voices and their wallets to the internet side of those movements, and they were all very happy to assume that their antagonists (’old media’ businesses, etc.) were flat-out evil and greedy reactionaries.

When the tables are turned and Facebook is under attack for running a psychological experiment on its users, we hear… nothing at all. Radio silence from VCs and leaders of big companies like Amazon, Google, Apple, Dropbox, and any other brand-name tech company. Those who are brave enough to defend Facebook usually come from a place of utter bafflement: “We technologists are smart people and we mean well – isn’t that enough for you? In fact, the problem isn’t with what Facebook has done, it’s with your foolish and imperfect understanding of it. Here, let me explain it to your irrational, inconsistent child-like mind so that you can see how we’re trying to help you, and after I’m done you will praise us!”

The most common defence of Facebook invokes A/B testing, routine experiments companies run all the time in order to optimise their websites and apps. There are two real responses to this assertion:

1. “A/B testing” is too vague a term for useful comparison in this case; one might as well say that “experiments” are either all good or all bad. Testing two versions of a website selling hats to see which one results in more clicks on the “Buy” button? No-one has a problem with this, because when you’re on a retail website, you have a reasonable expectation that the owners are optimising their commercial message (just as Starbucks might optimise the names of their drinks). But what if Google A/B tested its search ranking algorithms in order to make you feel more positive about the US government? I don’t think we’d be too happy about that. Which leads us to:

2. Facebook (and indeed, Google) is different to most websites. It is a place where users want to keep up to date with their friends and family; it’s no exaggeration to say that it’s a lens through which a billion people view the world. People using Facebook do not expect to be actively manipulated there; at least not in a way that directly acts against their interests (i.e. to make them sadder).
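For the hat-shop kind of A/B test described above, the mechanics are simple: split visitors randomly between two page variants, count clicks on each, and check whether the difference is bigger than chance would explain. The sketch below simulates this with hypothetical conversion rates and a pooled two-proportion z-test; all the numbers are made up for illustration.

```python
import math
import random

def ab_test(conv_a, conv_b, n=10_000, seed=42):
    """Simulate an A/B test of two page variants.

    conv_a, conv_b are the (hypothetical) true conversion rates of
    each variant; n visitors are randomly assigned to each. Returns
    the observed rates and a two-sided p-value for the difference.
    """
    rng = random.Random(seed)
    clicks_a = sum(rng.random() < conv_a for _ in range(n))
    clicks_b = sum(rng.random() < conv_b for _ in range(n))
    p_a, p_b = clicks_a / n, clicks_b / n

    # Pooled two-proportion z-test: how surprising is the observed
    # difference if the two variants actually perform identically?
    pooled = (clicks_a + clicks_b) / (2 * n)
    se = math.sqrt(2 * pooled * (1 - pooled) / n)
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided
    return p_a, p_b, p_value

p_a, p_b, p_value = ab_test(0.030, 0.036)
print(f"A: {p_a:.3f}  B: {p_b:.3f}  p-value: {p_value:.4f}")
```

The point of the essay stands regardless of the statistics: nothing in this machinery says anything about *what* is being optimised, which is exactly why “it’s just A/B testing” is not a defence.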

You might say that this attitude is naive and we should be suspicious of everyone and everything. And indeed we are, in certain places. When we watch TV or read a newspaper and see an advert, we know that the advert is trying to manipulate us and it’s something that most of us accept, even if we aren’t too happy about it. But that’s why it’s crucial that adverts are clearly identified – not just on TV, not just in newspapers, but also in tweets and Facebook posts. You need to know what is and what isn’t commercial speech.

I suspect that technologists would rather not think about the ethics of A/B testing, which is used so widely precisely because it’s a very powerful manipulative tool that can change users’ behaviour, earning you millions or even billions more. Perhaps they’re worried that Facebook’s study might unleash regulation on A/B testing in general, or any kind of user manipulation. And, frankly, they are right to be worried. I suspect that the blithe nature of Facebook’s experiment will make people very worried about the other kinds of studies going on in private.

Yet rather than directly engage with people who criticise Facebook’s practices, their defenders instead think that we are stupid. It almost feels like they have completely internalised the messianic mission of Silicon Valley, where every disruptive startup is out to “save the world”, a stance which conveniently requires any right-thinking and ethical entrepreneur to make shit-tons of money by manipulating their users as fast as they can (in order to reach scale, etc.).

Disagree with Facebook and you disagree with the mission. Disagree with the mission, and you are an evil person.

Of course, there is a bit more to this story than what I’ve already said. Many Silicon Valley people subscribe to a utilitarian calculus of ethics, which appeals to their rational, big-data personalities. Thus they might also defend Facebook by saying that the company is ultimately trying to increase the sum total of happiness in the world, and that it can only do this by conducting these experiments. Setting aside the fact that we could never definitively verify such a claim, because we can’t predict the other effects of these experiments (for example, knowing that Facebook has such broad control over people’s views of the world might itself make them sadder), the bigger problem is that there are different standards of ethics out there.

According to my standard of a good life, I would rather not have benevolent masters strapping rose-tinted spectacles onto my face. In other words, better to be Socrates dissatisfied than a pig satisfied. Others may disagree, but that’s the point. There is no single ethics out there. People can have rational disagreements and there is no use to saying that we’re foolish for not wanting to be helped by Facebook. Never has Upton Sinclair’s maxim been truer: “It is difficult to get a man to understand something, when his salary depends upon his not understanding it!”

What is to be done?

I am still on Facebook, although I feel very unhappy about it (hah). There is currently no alternative for me to keep up to date with many of my friends and family. You could say that this makes me a hypocrite, just like the Occupy Wall Street protestor who uses an iPhone – made by a capitalist company that seeks to minimise its tax burden by whatever means necessary.

I don’t think that’s the case. This is the world we live in. You cannot get a phone that isn’t made by a capitalist company, due to economies of scale. I cannot keep up to date with my friends and family in other ways, due to network effects. The goal is to strive towards something else, even if we can’t be perfect while we do it.

To that end, I pledge to donate $1000 to any non-profit or B-corp organisation seeking to replace Facebook’s core social network functions that is able to raise $10 million in total donations.

Tags: adrian · psych · tech
