This year, I’ve committed to reading more books, for reasons I discuss in this podcast. So far, I’ve read eight books, which is six ahead of my ‘25 books in 2016’ schedule:
- The Night Circus by Erin Morgenstern: Not sure what all the fuss was about. The worldbuilding and descriptions of magic were well done, but ultimately rendered empty by the flat characters, who were quite literally plot devices.
- Luna: New Moon by Ian McDonald: Game of Thrones meets The Moon is a Harsh Mistress, but in a good way.
- What Technology Wants by Kevin Kelly: Achieves that rare feat of being a book about technology that doesn’t feel instantly dated. Worth reading, and a new take on the techno-optimist slant.
- Hark! A Vagrant by Kate Beaton: Great fun, as expected from the webcomic.
- City of Stairs by Robert Jackson Bennett: Surprisingly enough, a novel with great worldbuilding and decent characters that isn’t part of a 7-book series.
- Sword of My Mouth: A Post-Rapture Graphic Novel by Jim Munroe
- Common Sense by Thomas Paine: Still stirring; decided to read this after the related In Our Time episode. Not exactly book-length, I know.
- Step Aside, Pops by Kate Beaton: Also great fun.
Currently reading Superforecasting by Philip Tetlock; so far, so good, except for the feeling that it would’ve made for a killer 20,000-word New Yorker piece rather than an entire book.
I’ve been a fan of Philip Reeve since reading his thrilling Mortal Engines quartet. Strictly speaking, Philip Reeve is a young adult SF/fantasy author, but I found this series to be more imaginative and darker than many other ‘adult’ novels. A lot of his other books have been for younger children, but when I heard that he’d written an out-and-out SF novel called Railhead, I had to check it out.
Railhead is an exciting amalgam of two of my favourite SF series: Dan Simmons’ Hyperion Cantos (well, the first two books, anyway), and Iain M. Banks’ Culture series. The Hyperion part stems from Railhead’s network of wormholes, connected by – of course – railways; plus the presence of godlike AIs with their own cryptic plans. The Culture part is represented by the slightly-smarter-than-human AI trains, with appropriately Banksian names, plus the well-written action, explosions, drones, and AI avatars. There’s a dash of Dune and The Hunger Games in there as well.
Perhaps the most Banksian thing – and the most surprising to see in a young adult SF novel – is Railhead’s refreshingly modern treatment of gender norms and sexuality. Some characters are gay, and some characters regularly switch sexes, leading to offhanded passages like this:
She was gendered female, with a long, wise face, a blue dress, silver hair in a neat chignon.
Malik got a promotion. He got himself a husband, a house on Grand Central, a cat.
And, to cut the story short, it fell in love with him. And he fell in love with it. In the years that followed, Anais came to him again and again. Sometimes its interface was female, sometimes male. Sometimes it was neither. Different bodies, different faces, but he always knew it.
An unexpected but pleasant surprise!
Tags: book · review · sf
December 9th, 2015 · 1 Comment
Our office manager Sophie passed me the phone. “It’s someone from Google,” she said. I raised an eyebrow. Perhaps this was an invitation to an event, or another chance to test prototype hardware, or something even more magical.
I unmuted the phone. “Hello?”
“Hi, I’m Tim, from Google Digital Development. I’d love to talk about how we can help you promote your apps on the Google Play Store better.”
How disappointing — they were just selling Google search ads. I quickly made my excuses and hung up.
Three months later: “Hi Adrian! My name is Mike, I’m from Google Digital Development -”
Seven months: “Hey Adrian! I’m from Google Digital -”
Twelve months: “I’m Sean, I’m from Google Digi -”
To this day, it keeps happening and I keep getting my hopes up, like a child. Why don’t I learn that ‘Google’ on the phone equals ‘Irish guy cold-calling with ad sales’?
Because I haven’t told you about the times Google contacts us about actually interesting projects. It’s usually by email, but sometimes they do call. Not on a regular schedule, of course — but at random, unpredictable times.
This pattern of frustration mixed with intermittent success is essentially a variable reinforcement schedule. If you’ve read any article about addiction in the last twenty years, you’ll know that a variable reinforcement schedule can be used to make rats compulsively press a lever in the hope of getting another pellet of food; and that the same schedule could explain how addictive behaviour develops in humans.
Some people in the tech community act as if variable reinforcement schedules were occult knowledge, magic words capable of enchanting muggles into loosening their wallets. If only we could learn the secrets of variable reinforcement schedules, we could make them addicted to our new app — all those microtransactions, all those ad views, oh my!
So when people learn that I studied experimental psychology and neuroscience at Cambridge and Oxford — and that I run a company that designs health and fitness games — they are taken aback. They are fascinated. And then… they are disappointed, but only after I tell them that the principles of variable reinforcement schedules and operant conditioning can be learned by a dedicated student in a few hours. Moreover, if experimental psychologists were all capable of making the next Candy Crush, they wouldn’t spend most of their time complaining about the quality of tea in the staff common room.
That doesn’t mean that variable reinforcement schedules are bunk, though.
Variable reinforcement schedules help explain why I spend an hour a day mindlessly checking Gmail, Metafilter, Reddit, Twitter, and Hacker News. Even when I know, with 99% certainty, that nothing interesting will have happened in the 15 minutes since I last checked them, I still type Command-R — because maybe this time I’ll get lucky.
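If it helps to see the mechanics laid bare, here is a toy simulation in Python (purely illustrative; the one-in-twenty payoff probability is invented, not measured). Each ‘check’ of a site has the same small, random chance of turning up something interesting, which is exactly why there is never an obvious point at which checking becomes futile.

```python
import random

def simulate_checking(reward_prob=0.05, checks=200, seed=42):
    """Crude sketch of a variable (random-ratio) reinforcement schedule.

    Every check has the same small chance of a 'reward' - an interesting
    email, a good thread. Because rewards arrive unpredictably, the next
    refresh always *might* pay off.
    """
    rng = random.Random(seed)
    rewards = 0
    dry_spell = 0
    longest_dry_spell = 0
    for _ in range(checks):
        if rng.random() < reward_prob:
            rewards += 1       # jackpot: something new and interesting
            dry_spell = 0
        else:
            dry_spell += 1     # nothing again, but maybe next time...
            longest_dry_spell = max(longest_dry_spell, dry_spell)
    return rewards, longest_dry_spell

rewards, longest = simulate_checking()
print(f"{rewards} rewarding checks out of 200; longest fruitless streak: {longest}")
```

The numbers themselves don’t matter; the point is that a handful of unpredictable payoffs is enough to sustain an awful lot of fruitless refreshing.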
More broadly, it’s why we pay attention to the constant interruptions that plague our screens — there’s no cost to the person sending the interruption, and occasionally, it’s of real interest to us.
This plague dates from the dawn of email, but this year it’s broken out into the mass consciousness, at least if you measure by rapidly proliferating NYT opinion pieces and TEDx talks. It’s most recently been discussed by Tristan Harris (here’s his TEDx talk); Harris is a design philosopher at Google, but he originally arrived there after they acquired his company, Apture, back in 2011. His particular interest right now is the Time Well Spent movement.
The purpose of the movement is to encourage the design of products and tools that allow users to make informed choices about how they spend their time. In other words, a user visiting a ‘good’ YouTube might be asked how long they want to watch videos for. After their time is up, the website would tell them to do something more useful and come back later.
I’m sympathetic to Time Well Spent, not least because their success would save me a lot of time. But on balance, I’m skeptical that companies can be convinced to engineer their products to make them less compulsive out of the goodness of their hearts, any more than advertisers and publishers can be convinced to reduce the number of obnoxious and unsafe ads out there.
I’m happy to be proven wrong, but let’s put it this way: Harris works at Google, and I don’t see any friendly ‘how long do you want to spend surfing the web?’ dialogs in Chrome. No, perhaps we should take matters into our own hands — like we did with third party ad blockers.
While it took ad blockers many years to gain traction, they’re now used by a significant percentage of browsers — at least 15% in the US, 20% in the UK, and 25% in Germany. The advent of Content Blocking in iOS may see those numbers continue to grow. So it’s tempting to think that a similar strategy, centred around browser extensions, could help disrupt the many variable reinforcement schedules that bind our attention.
In fact, many such apps and extensions exist, like Freedom, StayFocusd, and LeechBlock. Let’s call them ‘compulsion blockers’. Not all compulsion blockers are apps — at university, my friend Alex’s version of a compulsion blocker was giving me his network cable while he was trying to write an essay.
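To be concrete about what these tools actually do, here is a minimal sketch of the bookkeeping at the heart of one, written in Python. The site names and budgets are invented for illustration, and a real blocker would have to wire this logic into a browser extension or a system-level proxy rather than a standalone script.

```python
from collections import defaultdict

# Hypothetical daily budgets, in minutes - purely illustrative values.
BUDGETS = {"facebook.com": 20, "reddit.com": 15, "news.ycombinator.com": 10}

class CompulsionBlocker:
    """Tally time spent per site and refuse access once today's budget is gone."""

    def __init__(self, budgets):
        self.budgets = budgets
        self.spent = defaultdict(float)  # minutes used today, keyed by site

    def record_visit(self, site, minutes):
        self.spent[site] += minutes

    def allowed(self, site):
        budget = self.budgets.get(site)
        if budget is None:  # sites without a budget are never blocked
            return True
        return self.spent[site] < budget

blocker = CompulsionBlocker(BUDGETS)
blocker.record_visit("reddit.com", 16)
print(blocker.allowed("reddit.com"))    # False: budget spent, go do something else
print(blocker.allowed("facebook.com"))  # True: still within today's budget
```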
Compulsion blocker apps have not made much of an impact. You’d know if they had, because the wailing from app developers and games companies would be deafening. It’d make publishers’ complaints about ad blockers seem like a kitten’s meow — just imagine if 20% of people used compulsion blockers to reduce their Facebook or Tumblr or YouTube time. It’d be the bonfire of the unicorns!
Why haven’t they been more successful?
- Many people actually enjoy browsing Facebook and YouTube, thank you very much. And how dare you say that they’re wasting their time refreshing Reddit every five minutes!
- While some people (e.g. the readers and author of this article) may believe that compulsive browsing on computers is the main problem, the truth is that compulsive smartphone usage is much worse. And making compulsion blockers for smartphones is really, really tricky.
It’s technically possible to create a compulsion blocker for Android phones: some kind of custom launcher app that replaces the home screen and can monitor and block the usage of any app or website (just imagine the permissions list you’d need!). Unfortunately, custom home screens aren’t very popular beyond power users. Even the full might of Facebook wasn’t enough to make their custom Home launcher a success. People just don’t seem to care that much.
But it gets worse: it is literally impossible to make a compulsion blocker for the iPhone and iPad. Third-party developers simply cannot make apps that block or control the behaviour of other apps, and any attempts to make an end-run around Apple’s locked-down App Store distribution model have not been successful. I can’t imagine this will change any time soon, either.
If a technological solution can’t be found on smartphones, perhaps we need to go further up the stack. Maybe when augmented reality glasses finally arrive, we can use them to blank out our phones whenever we try to open up Candy Crush for the twentieth time!
But our technological masters — Apple, Facebook, Google, Microsoft — they aren’t dummies. They realise that augmented reality and virtual reality represent the ‘final compute platform’ that could subsume all other computing and display devices. They would do anything to control and monetise that future, including prohibiting developers from making apps that control other apps, just like Apple does. It’ll be the war to end all platform wars.
Let’s summarise: compulsion blockers aren’t popular on desktops, they’re neglected or prohibited on smartphones, and the same may be true on future platforms as well. All hope is lost.
Or is it?!!!
There are other things in this world that are highly addictive. They’re called drugs. We even have ‘drug blockers’ like naltrexone, which block the action of opioids on a molecular level. The slow-release injectable version of naltrexone is called Vivitrol, and can be used to control heavy opiate and alcohol addictions.
Naltrexone and Vivitrol aren’t household names because most people aren’t dangerously addicted to drugs or alcohol. They aren’t much used as a preventative measure either, because a lot of people enjoy taking drugs and drinking alcohol, thank you very much.
Likewise, most people aren’t dangerously addicted to Facebook, so they don’t feel they need a compulsion blocker. For my own part, I don’t use one because my behaviour doesn’t seem too bad, and I also quite enjoy browsing the web.
Let’s assume that it gets worse, though. Not a foolish assumption, given that there are thousands of people spending billions of dollars trying to make us compulsively use their apps and websites. Maybe the hour a day I spend checking websites goes up to two or three hours a day, in which case I will be highly motivated to get myself a compulsion blocker.
Unfortunately, compulsive experiences generate a lot of cash. The people behind those experiences will therefore be highly motivated to circumvent any blockers — consider the phenomenon of advertisers paying popular ad blockers to let their ‘acceptable ads’ through. Yes, there is no escaping capitalism.
For that reason, if we want to genuinely reduce compulsive behaviour, we can’t simply ask VC-backed or publicly-owned companies to play nice. We can’t even ask their employees to play nice; there are just too many smart people out there who are more than happy to take a $250,000-a-year salary from Facebook or Google or Supercell and turn a blind eye to questionable design practices.
Here’s what we can do: we can outcompete them. There’s a reason we don’t spend literally all of our time on computers or smartphones messing about on Facebook or Candy Crush: there are better things to do. It might be reading Station Eleven, or watching Mad Max: Fury Road, or playing Life is Strange.
We also need tools and devices and venues that allow us to experience these things without interruptions. Lately I’ve made a habit of going to the cinema to watch movies — it helps me focus on the movie rather than checking my phone, and I come out appreciating it more. Likewise, I bought a Kindle Paperwhite so I can more clearly delineate my time between browsing the web and reading a proper book.
You can make money with some of these things. Not unicorn money, perhaps, but certainly a lot. More importantly, a good book, a good movie, a good game — these things are all worthy of creation and consumption in and of themselves.
A good movie or book doesn’t compel us with a variable reinforcement schedule to visit it again and again and again, until we’re exhausted. No, it compels us to come back because it’s well-made, right from its beginning to its very satisfying, and very final, end.
Tags: neuro · psych · tech · web
December 6th, 2015 · 1 Comment
Two weeks ago, I was at the Six to Start offices discussing the cost of shipping packages internationally for our next Virtual Race. I bent over to pick up something on the floor and felt an intense stabbing pain in my lower right back. I attempted to straighten up, but it hurt so much that I dropped to my knees and, on the advice of Matt, lay down on the floor for a few minutes.
This alleviated the pain somewhat, but I was still barely able to walk. Even sitting down didn’t help. That morning, I’d packed my running gear to use on the way back, but it was obvious nothing of the sort was on the cards. Still, I was determined to hobble back home that night, which I successfully did.
Things hadn’t improved the next day, or the day after that. I’d evidently strained or pulled a muscle in my back, and it wasn’t going to clear up quickly.
What struck me in those days was how difficult it was to do anything. Getting up from a sofa or from bed, putting on trousers, tying shoelaces, even brushing my teeth – all these activities caused pain, to the extent that something which would normally take 10 seconds and no thought at all could instead take a few minutes. Everyone was very helpful during this time, particularly my girlfriend, but my back pain still caused real problems. I worried about how long it would last – would I need to figure out some new way of exercising other than running? How might this affect my work? If it lasted much longer, it would certainly have worsened my health in other ways.
Thankfully, after a week, I was back to 90% and able to start running again, and now I’m pretty much at 100%. Part of the reason for the quick recovery, I think, is that I was already very healthy and had a habit of walking a lot; I’m told that back pain is worsened by not moving, and in my experience, that’s definitely the case.
However briefly, I gained a new understanding of what it means to have back pain. More broadly, I realised the kind of difficulties people have when it’s just hard or tiring or painful to move in general. It’s not news to me that many, many people have these problems, and I never doubted that walking or stretching or so on was genuinely difficult – but it’s one thing to believe it, and another thing to experience it. It’s actually astonishing to me how hard it was to do everyday tasks.
I don’t have any bright ideas about how to treat or combat back pain; I’m not about to suggest that an app* would solve it, or that we should all get exoskeletons (although that would be pretty cool). It’s just clear to me that it’s a problem that, while seemingly invisible, is bound to seriously reduce a person’s quality of life and exacerbate or create new ailments.
*If you could measure posture in real time using wearable devices, you could create an app or chatbot or game that might gently encourage people to move and stretch in a sensible way. But that’s a) obvious and, more importantly, b) rather far off given the NHS’ (in)ability to deploy that kind of technology to patients.
Spoilers abound for the entire plot of Kim Stanley Robinson’s Aurora
I wouldn’t be exaggerating if I said that Kim Stanley Robinson’s Mars Trilogy changed my life. I was 14 and reading plenty of Arthur C. Clarke and Isaac Asimov when I idly flipped through our monthly book club brochure. They usually didn’t have any science fiction, so I was surprised to see an entire page devoted to a book called Red Mars. It was by some author I’d never heard of and therefore of questionable quality, but Arthur C. Clarke himself urged readers to give it their time. “The ultimate in science fiction,” or something similarly unambiguous.
We bought the book – we had to, that’s how book clubs worked – and I fell in love with the idea of colonising Mars. I felt as if Kim Stanley Robinson had demonstrated that not only was it possible, not only was it sublime, but it was absolutely necessary for the project of humanity becoming a fairer, more enlightened people. At an impressionable age, this book made the biggest impression, and was enough to spark my ambition to write an essay, win a competition, travel to a Mars conference in the US on my own, organise youth groups, speak at TED, and so on.
I am not active in the Mars exploration movement, or even the space exploration movement any more. I remain deeply interested, but it became clear to me that the road to Mars would be much longer and much harder than anyone had expected. Even now, even with SpaceX, it feels as if the decades keep ticking up. What once might have happened in 2020 will now happen in 2030, or 2040, or later. And when we get there, what then? Creating a world from scratch is hard, slow work.
Kim Stanley Robinson regrets the effect the Mars trilogy had on people like me. At least, that’s the impression I got from Aurora, a tale of the near-impossibility, and hence near-pointlessness, of creating an Earth-like environment outside of Earth. It’s not his fault; the science has changed since the 90s. We now know that Mars has much less nitrogen than we need for growing plants, and the vast amounts of perchlorates on the surface are a serious hazard to humans. These, and other new obstacles, could lengthen the time to terraform Mars from centuries to millennia, or tens of millennia. Perhaps our technology will advance to meet the challenge, but there’s no question the challenge is herculean.
Yet no-one seems dissuaded by this. In fact, I had never even heard of the nitrogen and perchlorates problem until reading Aurora. It’s as if merely asserting that colonising Mars is an imperative for the survival of humanity suddenly makes it possible. What must happen, will happen.
And why is colonising Mars an imperative? Because, in part, of Kim Stanley Robinson’s Mars trilogy.
So Aurora is a corrective. We follow an attempt to colonise a planet orbiting Tau Ceti, light years from Earth. In short, it fails. Everything fails. Not just the colonisation of Tau Ceti, but the very starship that took the colonists there as well. All the beautifully designed miniature Earth-like biomes on the starship fail, because that’s what happens to enclosed ecosystems with a wide variety of flora and fauna, all evolving at different rates.
Our colonists do try, though. An engineer/biologist is positively heroic in her efforts to keep the starship running, a rather unusual note in a science fiction novel (although not, to be fair, in The Martian); and some colonists are so determined to press on with the project in Tau Ceti that they choose to take the one in ten thousand chance of creating a new world. Those are, of course, terrible odds. Only in a certain kind of story do you win that gamble, and this is not that kind of story.
What kind of story is it, then? An anti-space exploration story? Not really. Robinson describes a solar system full of thriving outposts and colonies, all trading with one another, and most crucially, with Earth. He talks about the eventual colonisation of Mars – in a few thousand years’ time. This is not the imagination of someone who wants to smash rockets. In his world, space exploration is exciting, it’s laudable, it’s inevitable, but it’s not a solution to preserving the future of humanity. And while volunteers will line up to take the riskiest of gambles, it’s not so clear that their children and grandchildren, left on a fragile miniature ecosystem too far from Earth, should have to risk their lives as well. No, the future of humanity is best assured by preserving the future of Earth’s ecosystem.
This kind of talk used to sound like sedition to me, spread by shortsighted fools who’d say, “Why explore space when we have problems on Earth?” It still does, sort of. It may not seem like it, but humanity is wealthier than ever, and I still think we can well afford to explore and travel in space, and to Mars.
The problem is, it’s not just on Mars that the facts have changed, with its nitrogen and perchlorates – it’s Earth as well, with its warming air and rising seas and fraying ecosystem. So I don’t feel unjustified in changing my mind as well about our priorities and how we think about the future of humanity, not after reading Aurora.
It’s been almost twenty years since I first opened Red Mars, but I’m still impressionable – at least, by Kim Stanley Robinson.
Tags: book · future · sf · space
November 30th, 2015 · 2 Comments
In order to prevent yet more tragedies like the shooting at the Planned Parenthood centre in Colorado Springs, gun rights activists – and rightwingers in general – often suggest that we need to prevent the ‘mentally ill’ from gaining access to firearms. In fact, even Democrats and centrists agree. “I think as a state, but as a country, we have got a lot more thinking about this, of how to make sure we keep guns out of the hands of people that are unstable,” said Governor John Hickenlooper, a sentiment echoed by the Mayor of Colorado Springs, John Suthers.
This strikes me as one of those anodyne statements that is simultaneously impossible to disagree with and yet completely useless. Of course we should be tough on the causes of crime. Of course we should improve our children’s education. And of course we should prevent those who we think are likely to kill civilians with guns from possessing guns. The question is how we do that.
Robert Dear, the man suspected of the Planned Parenthood killings, was considered to be strange, not dangerous. He had no entry in any database that marked him as being mentally unstable – because no such database exists. It’s hard to imagine how one could ever exist; the notion stinks of Precrime-style profiling, an attempt to predict crimes that haven’t yet been committed. Mental illness is not a crime, and the great majority of people who are mentally ill (a woolly category if I’ve ever seen one) do not commit violent crimes. And even if that were not the case, science is yet to produce a foolproof ‘mental illness’ detector.
I realise that the whole ‘guns don’t kill people, stop mentally ill people from getting guns’ line is essentially a smokescreen. My point is that it’s a terrible, incoherent smokescreen. Gun rights activists love to cite the US Constitution, but I can’t think of a worse violation of it than a law prohibiting individuals who’ve been designated as ‘mentally ill’ from ever possessing arms. What if Obama designates all gun advocates as being mentally ill?!!?!!11!
With the advent of ‘content-blocking’ in iOS 9, I run an adblocker on all my devices* – desktop, laptop, phone, and tablet. Like several hundred million other people, I see next-to-no display adverts on the web. After a few days it becomes so normal to see the online world without ads that it’s a genuine shock when you have to turn your adblocker off.
Assuming that adblocker usage grows and isn’t negated by in-app advertising (e.g. in Facebook and Twitter) or native ads (e.g. in Buzzfeed), who does this favour? Could it be the biggest ‘legacy’ brands, like Procter & Gamble and Heinz, which can afford to run expensive, unblockable ad campaigns during live TV events, along with outdoor display advertising? After all, I can’t run an adblocker on my eyes quite yet, so I still see billboards and posters and store promotions – most of which seem to be for the biggest and oldest brands.
Perhaps their ultimate advantage will be small. Adblocker uptake on mobile devices won’t be significant for a few years, which is plenty of time for big and small companies to find alternatives – although I’m not sure what the alternative will be when we have heads-up displays that block ads too.
*I offset my guilt about this by spending quite a bit of money on subscriptions and memberships.
Tags: ad · future
October 9th, 2015 · 1 Comment
I’m confident that in a hundred years, eating meat will be regarded in the negative way we now view racism or sexism – an ugly, demeaning, and unnecessary act. Like smoking, it will simply fall out of fashion because we’ll find better and healthier alternatives, although we’ll still occasionally eat humanely reared-and-killed animals. Note that I still eat meat even though I should know better.
The interesting thing about eating meat is that it encapsulates a multitude of sins. You might worry about its impact on your own health; or perhaps on the environment, given the amount of water and land that a cow requires and the methane – a potent greenhouse gas – that it produces; or of course, on the life and suffering of the animal itself.
From an environmental standpoint, we should be eating far fewer cows and far more chickens, since the latter require less energy input per calorie of food produced, and therefore (all things being roughly equal) have less of a negative impact. Or we should forget about the chickens and eat sustainably caught-or-farmed fish, which are even more energy efficient and have the smallest carbon footprint.
But what about from a suffering standpoint? You can feed far more people with a single cow than with a single chicken, so if we want to reduce the number of animals that suffer, maybe we should be eating cows. But are cows more sentient than chickens? I don’t know how you would measure that. And maybe the environmental impact of a single cow inflicts more suffering on other sentient beings than a chicken does.
I feel like I’m taking utilitarianism to a place far beyond its ability to survive. I should probably read more Peter Singer.
Tags: food · future · science
September 22nd, 2015 · 1 Comment
A vast swathe of people now believe that it’s impossible to have intelligent debate online. This is not an unreasonable belief; scroll down on any newspaper website, let alone YouTube, and you’ll discover the shouting matches that inhabit most comments sections. Jessica Valenti recently wondered whether we shouldn’t simply shut down all comments, like Popular Science and, in part, The Verge, have done. Of the Guardian, she said:
My own exhaustion with comments these days has less to do with explicit harassment – which, at places like the Guardian, is swiftly taken care of. (Thank you, moderators!) Rather, it’s the never-ending stream of derision that women, people of color and other marginalized communities endure; the constant insistence that you or what you write is stupid or that your platform is undeserved. Yes, I’m sure straight, white, male writers get this kind of response too – but it’s not nearly as often and not nearly as nasty.
It is strange that she praises the Guardian’s moderators for taking down explicit harassment, but doesn’t consider that they could also remove the ‘never-ending stream of derision’. When The Times or The Telegraph choose which letters to publish in their printed editions, we don’t consider the letters that didn’t fit as having been censored. And just because web pages can be infinitely long doesn’t mean that newspapers suddenly have an obligation to publish everything.
It’s clear the writers and editors at the Guardian care deeply about combatting sexism and racism; that much is evident throughout the paper. That’s why I regard their refusal to properly moderate their comments as an astonishing abandonment of principle. It is not a question of free speech or censorship – people may take their hateful speech elsewhere online, and rage at authors to their heart’s content. It just doesn’t have to happen on the article itself. And if it is a question of cost, then remove the comments entirely.
Unmoderated comments sections like the Guardian’s may start out well, but they inevitably succumb to entropy, giving prominence to those who have effectively unlimited time to shout and argue. I’m sure I’m not the only person who has considered trying to reason with ignorant commenters, only to conclude that they have far more time than me, and far less inclination to listen. The transient nature of articles and comments doesn’t help; why bother arguing that sexism is real for the tenth time on the tenth article?
But the real reason I detest what the Guardian is doing is that their comments sections are, bit by bit, contributing to defeatism and pessimism. The unrepentant toxicity held within them makes it seem as if there’s no point trying to improve the world or change people’s minds. How many times do we hear “I’ve lost my faith in humanity after reading the comments”? In reality, the comments we see come from a tiny, unrepresentative sample of the population – but because they are supposedly open to all, and because they represent some of the little free conversation we see amongst strangers, we conclude that they are representative.
Well, they are not. And the Guardian’s comments are not representative of what could be possible in a well-moderated community. I’ve often praised Metafilter for its excellent moderation, and I was reminded of that by a thread in which someone complained their ‘completely harmless’ comments had been deleted for no reason. A moderator explained:
For context, this is about a couple of comments deleted from the thread about how pop songs are all written by the same guy (link goes to my note in the thread). The comments were about Taylor Swift’s short-shorts and her legs. My prediction was, this would cause a pointless derailing fight in the thread, so I deleted them. These were the comments:
“These kids today probably don’t have time to write. The energy they put into these elaborate stage shows. Plus TayTay walking around New York in her short shorts avoiding the paparazzi…”
“I got a kick out of one pic of Taylor and her legs sitting on the floor of a fabulous all white garret jotting down tablature.”
You may look at those comments and think, but there’s no outright harassment, how could they be moderated? Well, as a few people pointed out, they are sexist and gross. People are free to be sexist and gross in their own homes or with their friends – but not on Metafilter. When you read Metafilter, you do not conclude that the world is composed of sweetness and light; people often have strong disagreements there (but not violent disagreements). You would conclude, however, that it is possible for people to change and learn and be reasonable; that you can have faith in humanity.
And if you criticise Metafilter for not being representative either, because it has full-time moderators, then you would be criticising the entire project of civilisation; the idea that we can organise ourselves and improve our culture in a way that makes the world better, not worse.
Tags: newspaper · web
September 20th, 2015 · No Comments
Two years ago, A History of the Future in 100 Objects was published. The book describes a hundred slices of the future of everything, spanning politics, technology, art, religion, and entertainment. Some of the objects are described by future historians; others through found materials, short stories, or dialogues.
Today, I’m making all 100 chapters available online, for free.
The book has sold a few thousand copies – reasonably well for a first-time author. More importantly, it was well received by the people whose opinions I value; I was invited to speak at the Long Now Foundation last summer by Stewart Brand, and it was praised by the BBC’s Stephanie Flanders and by Grantland’s Kevin Nguyen, who called it one of the ‘overlooked books of 2013’. Next month, I’ll be speaking about the same ideas at the Serpentine Gallery’s Transformation Marathon.
So, at this point I’m much more interested in spreading the ideas far and wide. Of course, you can still buy the book via Amazon or directly from me (it’s very nicely formatted), but I’m just as happy if you read it on the web.
I wrote A History of the Future in 100 Objects because I’ve always been deeply fascinated by what’s coming next. I’m a neuroscientist and experimental psychologist by training, and a games designer and CEO by trade. It’s my job to think up new ideas and ways to improve people’s lives, and perhaps because of that, I’m optimistic – cautiously, skeptically optimistic – about the future.
The future that I want to realise is the hard-fought utopia of Kim Stanley Robinson and Iain Banks and Vernor Vinge, not the dystopia that dominates fiction nowadays. But I’m not naive, and technoutopianism brings me out in hives, so don’t expect me to tell you that technology will make everything better.
This book is my small contribution to the exploration of the future. It turns out that writing a hundred short stories was far, far more difficult than I had ever imagined, and in truth only some of the chapters hit the mark perfectly. But even so, I think there are plenty of fun ideas there.
Tags: adrian · book · future