I’ve been struggling to get started writing a new book. I find it all too easy for my time out of work to be nibbled away, seconds and minutes and hours, by genuinely intriguing articles, blog posts, videos, comments, TV shows, work, and games. Like a lot of people, I have the urge to complete tasks and fill up progress bars, but with the internet and media, the progress bar can never be filled. And so I never end up starting that book, even though I have plenty of notes and (I think) good ideas.
But maybe that’s not the real reason. I did write a book a few years ago, after all, and I don’t recall being any less busy or distracted back then. Perhaps it’s because the media environment has become even more distracting – who knows?
Coincidentally, I heard Elizabeth Gilbert talk about this very subject on the Longform podcast. I’ll first admit that I only knew one thing about Gilbert beforehand, which is that she wrote the highly successful Eat, Pray, Love; a book that turned into a movie starring Julia Roberts, which a lot of people whose opinions I trust found very shallow. So I was skeptical when I saw the episode’s guest, but not so skeptical that I deleted the episode out of hand; the Longform people have earned that much trust from me over the years.
Here are a couple of good bits from the episode, firstly on being multitalented:
…When it comes to deciding what you’re going to be, it helps if there’s only one thing you’re good at… I know a lot of multitalented people… but I do think it’s hard for them sometimes to know where to put their energy. And it’s easier if you’re not so great at a bunch of stuff.
I confess that I think of myself as multitalented. I like to think that, given sufficient effort, I could become pretty good at making videos or games or writing or whatever. I like learning new things. And for me, that makes it hard to decide whether my next big personal project should be a game or a book or something else.
Another good bit is about inspiration, and why it’s valuable to identify the things that you really care about when it comes to taking on a big personal project:
The calculus has to be, what’s the thing that makes me want to get up in the morning, what’s the thing that I’m psyched that I get to do this…. It’s about being very awake, about being very alert. The work is clearing your life of distractions enough so you are actually capable of feeling that excitement when it arrives. That you haven’t overbooked yourself in ten different directions so that you are so exhausted that you wouldn’t know inspiration if it punched you in the face. You can’t do that to yourself. It’s about being sober. It’s about being hopeful. It’s about a certain faith, it’s a way of being, which is about being ready.
And it’s about trusting your own curiosity enough to follow it, even if it doesn’t make sense. Even if the inspiration that you had doesn’t align with anything you’ve done before, even if it doesn’t seem like it would be marketable, even if it’s something that you can’t even believe you’re interested in, but you sort of have to have full faith that if you’re curious about something, it’s for a reason, that it’s a clue on the great scavenger hunt, and that you follow that clue and then the next and then next.
The tricky bit is that you have to start from a place of ‘this is what I’m most excited about, this is what I’m most curious about’, and then you have to recognise and know what will happen, which is that six months into it, it’s going to feel very boring and tedious because making things is often boring and tedious.
Another idea is going to come along very seductively, and do the dance of the seven veils in the corner of your studio, and say, I’m a much more interesting, much more exciting idea, why don’t you abandon this project that you’ve been working on for six months and come and run away with me to paradise. And you have to be smart enough to know not to do that, because six months from now that project will also be dull and boring and another idea will come and seduce you. You have to be able to stay with it through the boring part to get to the end, so when those other seductive new ideas come along, you have to tell them to take a number, that we’re doing this now. And until this thing is finished, I’m not going to run away with you.
First it’s the excitement, then it’s the discipline… I have this theory that everything that’s interesting is mostly boring. So, life is filled with all these really interesting things and we chase the high and the buzz of the excitement of that thing, but 90% of that thing is boring.
None of this is new to me. In fact I’ve given similar advice to other people. But sometimes you need someone else to tell you what you already know, and Gilbert did that pretty damn well in this podcast.
On Ep 226 of the Core Intuition podcast, Manton Reece discussed his 30 Coffee Shops in 30 Days challenge, which he promptly followed up with a 30 Libraries in 30 Days challenge. They also jokingly talked about a ‘30 Kickstarters in 30 Days’ challenge, which immediately made me wonder, as a Kickstarter veteran and aficionado, whether it could be done well.
Of course it could be done, given low enough pledge goals. But I wonder what the bounds of this idea are. Could one person really launch 30 satisfying projects in 30 days, and deliver them in a reasonable amount of time – say, two years? Would you need more than one person to do this? What counts as ‘satisfying’? If it was, say, writing 30 100-word stories or drawing 30 single-frame cartoons, that seems a little too easy. But 30 completely unique projects is probably too much to expect.
And how could you promote this? Practically speaking, most Kickstarters are powered by friends and family, and even then it’s hard enough to get them to back you a single time, let alone 30 times. Sure, you can make the standard pledge level $1 for each project, but they’d still need to remember to visit Kickstarter once a day.
Realistically, working in a team would make this much easier – it’d give you access to a much broader pool of backers. Or if you insisted on doing it as an individual, you’d need Batman-levels of preparation.
I quite like these kinds of creative constraints (see Perplex City, A History of the Future in 100 Objects, etc.) but perhaps this is a bridge too far.
There are about 20 plug socket types being used around the world today, but only one really matters for modern devices: USB-A. And USB is truly a worldwide standard. Practically all the devices I might carry around – phone, tablet, watch, camera — can be powered directly via USB cable. My next laptop will be powered by USB. Even my Philips electric toothbrush can plug into a USB socket.
It’ll be several years until we can expect to see USB-A and USB-C sockets in the same places that we see plug sockets, which means I’ll still have to carry around charger bricks and plug adaptors when I travel abroad, but if you’ve flown on a plane or stayed in a modern(ish) hotel in the last couple of years, you’ll have spotted USB sockets.
This is a wonderful thing, the peace dividend of the smartphone wars. If I was staying in a hotel or friend’s house in practically any country, I could be sure of borrowing a charger cable or adaptor. Just think of all the waste and pointless peripherals avoided. Other dividends include the widespread usage of 4G/LTE and wifi standards, and soon enough we’ll be able to add wireless charging.
I’m curious to see if and when USB-C replaces USB-A as the socket type of choice. There’s a lot to like about USB-C in terms of reversibility (no getting the plug upside-down), increased power output, and size. But given the typical cycles of replacing infrastructure in hotels, airports, cars, planes, etc., I imagine it’ll be another decade before that really happens.
This week, I bought a new iPad Pro 9.7″ to replace my iPad Mini 2. I use my iPad at home for at least two hours every day, mostly for web browsing and reading magazines, so it didn’t feel like a stretch to spend the not-inconsiderable £619 to get an upgrade. I was particularly interested in the iPad Pro’s new screen (40% lower reflectance than the Air 2, maybe 70+% over the Mini 2; laminated display; etc.), the Apple Pencil support, and most importantly, a 3x speed increase compared to what I have now.
Has my Mini 2 gotten slower since I bought it two and a half years ago? It feels like it, but according to benchmarks, iOS 9 actually increased the speed of the Mini 2 for my most common activity, web browsing. Perhaps the benchmarks are wrong, but it’s also likely that I just expect much more from my devices every year – not just because web pages and apps are becoming more complex, but due to the ratcheting-up of performance on my other devices. When I first got my iPad Mini 2, I’m sure it made my iPhone 5 feel slow in comparison, but my iPhone 6 now makes the Mini 2 feel slow.
And now the iPad Pro makes my iPhone 6 feel slow(ish). That’s to be expected, but more surprisingly, in my tests it loads webpages just as fast as my 27″ iMac from late 2012, which has 24GB of RAM; the iPad Pro has ‘only’ 2GB. Last night I used FaceTime while browsing the web and scrolling in Twitter, and there was nary a hiccup. I’m sure I could make it slow down with, say, a dozen Safari tabs and Grand Theft Auto, but that’s not a common use-case for me.
The display is just as good. Yes, it has lower reflectance, which makes for a more pleasant reading experience (no getting distracted by subtle reflections in front of the text); yes, it can go brighter. But the real MVP is the True Tone feature, which basically white-balances the display by sensing the colour temperature of your surroundings. It’s not headline-grabbing but as soon as you turn it off, you realise just how blue the display would be without it. The ultimate effect is less eye strain because it makes the iPad feel more like a piece of paper rather than some artificial glowing rectangle. I wouldn’t be surprised if True Tone was introduced to all new Apple displays in the next couple of years.
Naturally, the world wouldn’t be complete without Apple fanatics who are deeply, personally offended by the iPad Pro not having, say, USB 3 support or 4GB of RAM or a faster Touch ID sensor. Without them, it’s apparently not a sufficiently impressive upgrade over the iPad Air 2 from 18 months ago. I think that’s arguable, but what’s more interesting to me is that there are people who really want to upgrade a 1.5 year old tablet.
Now, we all know people who upgrade their phones every year, and while I don’t care enough to do that, I can understand the impulse because it still feels like there’s a rapid pace of improvements in smartphones. But I don’t know anyone who upgrades their computer every year. In fact, it wouldn’t even be possible to do such a thing on many Macs, because they don’t get updated that often – and in any case, the upgrades would get you a scant 10-20% speed increase.
Tablets occupy a middle ground. Since they share the same core processors as phones, they share the tremendous speed improvements. But their other features are changing less rapidly; people just don’t care as much about the camera or touch sensor on tablets as they do on their phones, because they use their tablets less frequently and for a narrower range of tasks. So I find it baffling that anyone would even want to upgrade their iPad every release.
I suppose people are upset because it’s called the iPad Pro and because Apple are marketing it as a replacement for your computer. If so, that’s unfortunate. ‘Pro’ is a marketing term; the iPad Pro is no more meant for ‘professionals’ than the Lenovo Yoga 3 Pro laptop is. The iPad will never be a true replacement for a traditional computer until it’s much more flexible and runs a windowed operating system… but… who cares? Many people don’t need a traditional computer any more, and most people are using traditional computers far less – I know I am. For the rest of the time, I’m happy using my tablet.
Tags: adrian · apple · tech
I’m intrigued by the proliferation of explicitly time-based self-care plans, like the 7 Minute Workout. They aren’t a new phenomenon – we’ve had 30 day diets and things like NaNoWriMo for decades. But it feels like the durations of these plans are getting shorter and shorter.
Part of the change is surely due to science. We know now that high-intensity interval training can produce better results in terms of fitness than longer but less intense exercise, by putting our heart and muscles under shorter, sharper periods of stress. Crucially, we know the mechanisms of why this works – it’s not just an observation, we can really see how our body’s cells and organs respond to stress.
But there are different degrees of rigour and certainty in science. A lot of the self-care plans based on psychology and neuroscience are, to my mind, based on much fuzzier research. I don’t mean to say that the researchers in question are incompetent or lying; it’s that their research is taken lightyears too far by companies marketing products.
Let’s imagine researchers conduct a study where they place university students in an MRI scanner and observe their brains while they’re listening to different sounds for ten minutes; maybe some students hear music, some hear white noise, some hear speech, and so on. They find that the students who hear the music have a different kind of brain activity in regions associated with focus or relaxation, or whatever, and the students also report that they feel more relaxed afterwards. So perhaps something is going on with the music, or that type of music, and it’s worthy of more study.
But then let’s say a company sees this research and makes an app – 10 Minute Relaxation (I’m making this up) – which plays calming music to you. They say their app is proven ‘by science’ to make you more relaxed in just ten minutes. Well, clearly not; what ‘works’ on university students sitting in an MRI may not work at all on a 50-year-old sitting on a bus.
In any case, it doesn’t matter whether it works or not: it sounds good, and people want a fast solution proven by science. The app makers can point at the study, and the app’s users get a nice placebo effect.
Not long ago, the time in London was different from the time in Edinburgh. Not that it mattered – it took so long to travel between the two cities, and the journey was so unreliable, that knowing the time down to the minute would have been pointlessly expensive (clocks and watches being pretty high tech a century or two ago).
But now we have smartphones, which means that we agree on the time down to the second, and we can know our ETA via Google Maps and Uber down to the minute. We can be more efficient – no more idly waiting for ten minutes at the coffee shop for a friend, because they can let us know they’re running late; we can spend that ten minutes on something else. Maybe it’s playing a game or reading Facebook – or maybe it’s something productive, like a 10 Minute Relaxation session.
The gaps in our busy lives are shrinking, which means that self-care solutions must also shrink.
Any one of us can become an exceptional artist or writer or games designer or YouTuber or actor. Any one of us can lose our jobs in an instant. Any one of us can have our entire field of work vanish in just a few years, thanks to automation and globalisation. So we are in competition with everyone else, which is a recipe for serious anxiety. It means you always need to be improving yourself; and it’s easy to see why shorter solutions can feel more manageable and rewarding than, say, the 7 Month Workout, or the 10 Year Relaxation session.
Tags: neuro · psych · science
February 14th, 2016 · 1 Comment
Stop the presses: storytelling has just entered the digital age! Every month, daring authors are creating new kinds of interactive experiences that push the boundary of what’s possible, featuring such innovations as ‘branching storylines’, ‘non-linear narratives’, and ‘illustrations’ – none of which would be possible in printed books. These authors are being aided by risk-taking, forward-thinking publishers, and together they are trailblazing paths into imaginative new territories.
You too can be part of this revolution! But it’s not enough just to write a good digital story – the true mark of success is not critical praise, popular acclaim, or financial success, but rather, it’s being covered in mass media.
That’s why I conducted an exhaustive survey of digital storytelling coverage on traditional media such as newspapers, trade publications, and general interest websites. By means of a proprietary deep learning algorithm I developed last night, I extracted the precise elements that will help – or hinder – your quest to get coverage, and assigned each one a point value. Naturally, nothing is guaranteed, but if your digital story ends up with a high point score, you can be confident you’ll be lauded by the likes of the New York Times and BBC.
Without further ado, the guide!
+10 points if you’ve been engaged by a traditional publisher (bonus 20 points if it’s by a well-known one such as Penguin Random House or HarperCollins)
+10 if you’re an established novelist (bonus 20 if you hate apps and have never used a smartphone before)
+10 if it comes out at the same time as the traditional novel it was so clearly originally written as
-10 if your digital stories have sold more than 10,000 copies (-20 if they’ve sold more than 100,000; no-one likes that populist stuff)
-50 if anyone has ever called it, or compared it to, ‘a game’
+20 if it’s episodic
+20 if its chapters can be read in any order
+20 if it has pretty illustrations that’ll look great in an article (bonus 20 if it has animations)
+20 if you hate Twitter, would never use it, and are prepared to write a piece saying so
+30 if you claim you have never played games or interactive fiction, yet are confident that your story is superior and more innovative
+5 if it does stupid-ass locational bullshit that means the journalist can get a day out of the office to try it out
+10 if the author is willing to say that “this kind of thing is just a bit of fun and will never replace real books”
-20 if it’s science fiction, fantasy, or romance
+10 if it’s based on Shakespeare, Dickens, or similarly out-of-copyright classic authors
+10 if it’s for kids (bonus 5 points if it’s ‘educational’)
+20 if your story involves Google, Facebook, Amazon, or Apple (bonus 10 points if it’s actually made by them)
+20 if your publisher has raised $1 million+ in VC
-20 if your publisher is profitable
-30 if your publisher has existed for more than 5 years
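For the spreadsheet-inclined, the guide above boils down to a simple additive score. Here’s a tongue-in-cheek sketch in Python – the criterion names are my own shorthand for a handful of the entries above, and this is obviously no more a ‘deep learning algorithm’ than the survey was exhaustive:

```python
# A toy scorer for the (satirical) coverage guide above. The keys are
# shorthand for a subset of the criteria in the list; the values are
# their stated point scores.
SCORES = {
    "traditional_publisher": +10,
    "well_known_publisher_bonus": +20,
    "established_novelist": +10,
    "hates_smartphones_bonus": +20,
    "compared_to_a_game": -50,
    "episodic": +20,
    "chapters_any_order": +20,
    "pretty_illustrations": +20,
    "publisher_raised_vc": +20,
    "publisher_profitable": -20,
    "publisher_older_than_5_years": -30,
}

def coverage_score(criteria):
    """Sum the point values for each criterion the story meets."""
    return sum(SCORES[c] for c in criteria)

# A hypothetical app-hating novelist with a Big Five deal:
print(coverage_score([
    "traditional_publisher",
    "well_known_publisher_bonus",
    "established_novelist",
    "hates_smartphones_bonus",
]))  # → 60
```

Sixty points: expect a glowing feature in the weekend supplements.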
With thanks to Naomi Alderman, who provided essential help on the survey
Tags: book · tech · writing
This year, I’ve committed to reading more books, for reasons I discuss in this podcast. So far, I’ve read eight books, which is six ahead of my ‘25 books in 2016’ schedule:
- The Night Circus by Erin Morgenstern: Not sure what all the fuss was about. The worldbuilding and descriptions of magic were well done, but ultimately rendered empty by the flat characters, who were quite literally plot devices.
- Luna: New Moon by Ian McDonald: Game of Thrones meets The Moon is a Harsh Mistress, but in a good way.
- What Technology Wants by Kevin Kelly: Achieves that rare feat of being a book about technology that doesn’t feel instantly dated. Worth reading, and a new take on the techno-optimist slant.
- Hark! A Vagrant by Kate Beaton: Great fun, as expected from the webcomic.
- City of Stairs by Robert Jackson Bennett: Surprisingly enough, a novel with great worldbuilding and decent characters that isn’t part of a 7-book series.
- Sword of My Mouth: A Post-Rapture Graphic Novel by Jim Monroe
- Common Sense by Thomas Paine: Still stirring; decided to read this after the related In Our Time. Not exactly book-length, I know.
- Step Aside, Pops by Kate Beaton: Also great fun.
Currently reading Superforecasting by Philip Tetlock; so far, so good, except for the feeling that it would’ve made for a killer 20,000 word New Yorker piece rather than an entire book.
I’ve been a fan of Philip Reeve since reading his thrilling Mortal Engines quartet. Strictly speaking, Philip Reeve is a young adult SF/fantasy author, but I found this series to be more imaginative and darker than many other ‘adult’ novels. A lot of his other books have been for younger children, but when I heard that he’d written an out-and-out SF novel called Railhead, I had to check it out.
Railhead is an exciting amalgam of two of my favourite SF series: Dan Simmons’ Hyperion Cantos (well, the first two books, anyway), and Iain M. Banks’ Culture series. The Hyperion part stems from Railhead’s network of wormholes, connected by – of course – railways; plus the presence of godlike AIs with their own cryptic plans. The Culture part is represented by the slightly-smarter-than-human AI trains, with appropriately Banksian names, plus the well-written action, explosions, drones, and AI avatars. There’s also a dash of Dune and Hunger Games in there, as well.
Perhaps the most Banksian thing – and the most surprising to see in a young adult SF novel – is Railhead’s refreshingly modern treatment of gender norms and sexuality. Some characters are gay, and some characters regularly switch sexes, leading to offhanded passages like this:
She was gendered female, with a long, wise face, a blue dress, silver hair in a neat chignon.
Malik got a promotion. He got himself a husband, a house on Grand Central, a cat.
And, to cut the story short, it fell in love with him. And he fell in love with it. In the years that followed, Anais came to him again and again. Sometimes its interface was female, sometimes male. Sometimes it was neither. Different bodies, different faces, but he always knew it.
An unexpected but pleasant surprise!
Tags: book · review · sf
December 9th, 2015 · 1 Comment
Our office manager Sophie passed me the phone. “It’s someone from Google,” she said. I raised an eyebrow. Perhaps this was an invitation to an event, or another chance to test prototype hardware, or something even more magical.
I unmute the phone. “Hello?”
“Hi, I’m Tim, from Google Digital Development. I’d love to talk about how we can help you promote your apps on the Google Play Store better.”
How disappointing — they were just selling Google search ads. I quickly made my excuses and hung up.
Three months later: “Hi Adrian! My name is Mike, I’m from Google Digital Development -”
Seven months: “Hey Adrian! I’m from Google Digital -”
Twelve months: “I’m Sean, I’m from Google Digi -”
To this day, it keeps happening and I keep getting my hopes up, like a child. Why don’t I learn that ‘Google’ on the phone equals ‘Irish guy cold-calling with ad sales’?
Because I haven’t told you about the times Google contacts us about actual interesting projects. It’s usually by email, but sometimes they do call. Not on a regular schedule, of course — but at random, unpredictable times.
This pattern of frustration mixed with intermittent success is essentially a variable reinforcement schedule. If you’ve read any article about addiction in the last twenty years, you’ll know that a variable reinforcement schedule can be used to make rats compulsively press a lever in the hope of getting another pellet of food; and that the same schedule could explain how addictive behaviour develops in humans.
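The mechanics really are simple enough to sketch in a few lines. In a variable-ratio schedule, each response pays off with some fixed probability, so rewards arrive after an unpredictable number of presses – which is exactly what makes the lever (or the refresh key) so hard to leave alone. This is just a toy simulation of the idea, not anything drawn from the actual studies:

```python
import random

def variable_ratio_schedule(mean_ratio, presses, seed=0):
    """Simulate a variable-ratio reinforcement schedule: each lever
    press pays off with probability 1/mean_ratio, so rewards arrive
    after an unpredictable number of presses (averaging mean_ratio).
    Returns the gap (in presses) before each reward."""
    rng = random.Random(seed)
    gaps = []
    since_last = 0
    for _ in range(presses):
        since_last += 1
        if rng.random() < 1.0 / mean_ratio:
            gaps.append(since_last)  # presses it took to earn this pellet
            since_last = 0
    return gaps

gaps = variable_ratio_schedule(mean_ratio=10, presses=1000)
print(len(gaps))             # roughly 100 rewards over 1000 presses...
print(min(gaps), max(gaps))  # ...but each one arrives unpredictably
```

The rat (or the Gmail user) averages one reward every ten presses, but can never predict which press will be the lucky one – so the most reliable strategy, from their point of view, is to just keep pressing.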
Some people in the tech community act as if variable reinforcement schedules were occult knowledge, magic words capable of enchanting muggles into loosening their wallets. If only we could learn the secrets of variable reinforcement schedules, we could make them addicted to our new app — all those microtransactions, all those ad views, oh my!
So when people learn that I studied experimental psychology and neuroscience at Cambridge and Oxford — and that I run a company that designs health and fitness games — they are taken aback. They are fascinated. And then… they are disappointed, but only after I tell them that the principles of variable reinforcement schedules and operant conditioning can be learned by a dedicated student in a few hours. Moreover, if experimental psychologists were all capable of making the next Candy Crush, they wouldn’t spend most of their time complaining about the quality of tea in the staff common room.
That doesn’t mean that variable reinforcement schedules are bunk, though.
Variable reinforcement schedules help explain why I spend an hour a day mindlessly checking Gmail, Metafilter, Reddit, Twitter, and Hacker News. Even when I know, with 99% certainty, that nothing interesting will have happened in the 15 minutes since I last checked them, I still type Command-R — because maybe this time I’ll get lucky.
More broadly, it’s why we pay attention to the constant interruptions that plague our screens — there’s no cost to the person sending the interruption, and occasionally, it’s of real interest to us.
This plague has its origins in the dawn of email, but this year it’s broken out into the mass consciousness, at least if you measure by rapidly proliferating NYT opinion pieces and TEDx talks. It’s most recently been discussed by Tristan Harris (here’s his TEDx talk); Harris is a design philosopher at Google, but he originally arrived there after they acquired his company, Apture, back in 2011. His particular interest right now is the Time Well Spent movement.
The purpose of the movement is to encourage the design of products and tools that allow users to make informed choices about how they spend their time. In other words, a user visiting a ‘good’ YouTube might be asked how long they want to watch videos for. After their time is up, the website would tell them to do something more useful and come back later.
I’m sympathetic to Time Well Spent, not least because their success would save me a lot of time. But on balance, I’m skeptical that companies can be convinced to engineer their products to make them less compulsive out of the goodness of their hearts, any more than advertisers and publishers can be convinced to reduce the number of obnoxious and unsafe ads out there.
I’m happy to be proven wrong, but let’s put it this way: Harris works at Google, and I don’t see any friendly ‘how long do you want to spend surfing the web?’ dialogs in Chrome. No, perhaps we should take matters into our own hands — like we did with third party ad blockers.
While it took ad blockers many years to gain traction, they’re now used by a significant percentage of browsers — at least 15% in the US, 20% in the UK, and 25% in Germany. The advent of Content Blocking in iOS may see those numbers continue to grow. So it’s tempting to think that a similar strategy, centred around browser extensions, could help disrupt the many variable reinforcement schedules that bind our attention.
In fact, many such apps and extensions exist, like Freedom, StayFocusd, and LeechBlock. Let’s call them ‘compulsion blockers’. Not all compulsion blockers are apps — at university, my friend Alex’s version of a compulsion blocker was giving me his network cable while he was trying to write an essay.
Compulsion blocker apps have not made much of an impact. You’d know if they had, because the wailing from app developers and games companies would be deafening. It’d make publishers’ complaints about ad blockers seem like a kitten’s meow — just imagine if 20% of people used compulsion blockers to reduce their Facebook or Tumblr or YouTube time. It’d be the bonfire of the unicorns!
Why haven’t they been more successful?
- Many people actually enjoy browsing Facebook and YouTube, thank you very much. And how dare you say that they’re wasting their time refreshing Reddit every five minutes!
- While some people (e.g. the readers and author of this article) may believe that compulsive browsing on computers is the main problem, the truth is that compulsive smartphone usage is much worse. And making compulsion blockers for smartphones is really, really tricky.
It’s technically possible to create a compulsion blocker for Android phones: some kind of custom launcher app that replaces the home screen and can monitor and block the usage of any app or website (just imagine the permissions list you’d need!). Unfortunately, custom home screens aren’t very popular beyond power users. Even the full might of Facebook wasn’t enough to make their custom Home launcher a success. People just don’t seem to care that much.
But it gets worse: it is literally impossible to make a compulsion blocker for the iPhone and iPad. Third-party developers simply cannot make apps that block or control the behaviour of other apps, and any attempts to make an end-run around Apple’s locked-down App Store distribution model have not been successful. I can’t imagine this will change any time soon, either.
If a technological solution can’t be found on smartphones, perhaps we need to go further up the stack. Maybe when augmented reality glasses finally arrive, we can use them to blank out our phones whenever we try to open up Candy Crush for the twentieth time!
But our technological masters — Apple, Facebook, Google, Microsoft — they aren’t dummies. They realise that augmented reality and virtual reality represent the ‘final compute platform’ that could subsume all other computing and display devices. They would do anything to control and monetise that future, including prohibiting developers from making apps that control other apps, just like Apple does. It’ll be the war to end all platform wars.
Let’s summarise: compulsion blockers aren’t popular on desktops, they’re neglected or prohibited on smartphones, and the same may be true on future platforms as well. All hope is lost.
Or is it?!!!
There are other things in this world that are highly addictive. They’re called drugs. We even have ‘drug blockers’ like naltrexone, which block the action of opioids on a molecular level. The slow-release injectable version of naltrexone is called Vivitrol, and can be used to control heavy opiate and alcohol addictions.
Naltrexone and Vivitrol aren’t household names because most people aren’t dangerously addicted to drugs or alcohol. They aren’t much used as a preventative measure either, because a lot of people enjoy taking drugs and drinking alcohol, thank you very much.
Likewise, most people aren’t dangerously addicted to Facebook, so they don’t feel they need a compulsion blocker. For my own part, I don’t use one because my behaviour doesn’t seem too bad, and I also quite enjoy browsing the web.
Let’s assume that it gets worse, though. Not a foolish assumption, given that there are thousands of people spending billions of dollars trying to make us compulsively use their apps and websites. Maybe the hour a day I spend checking websites goes up to two or three hours a day, in which case I will be highly motivated to get myself a compulsion blocker.
Unfortunately, compulsive experiences generate a lot of cash. The people behind those experiences will therefore be highly motivated to circumvent any blockers — consider the phenomenon of advertisers paying popular ad blockers to let their ‘acceptable ads’ through. Yes, there is no escaping capitalism.
For that reason, if we want to genuinely reduce compulsive behaviour, we can’t simply ask VC-backed or publicly-owned companies to play nice. We can’t even ask their employees to play nice; there are just too many smart people out there who are more than happy to take Facebook or Google or Supercell’s $250,000-a-year salaries and turn a blind eye to questionable design practices.
Here’s what we can do: we can outcompete them. There’s a reason why we don’t spend literally all of our time on computers or smartphones messing about on Facebook or Candy Crush, and that is because there are better things to do. It might be reading Station Eleven, or watching Mad Max: Fury Road, or playing Life is Strange.
We also need tools and devices and venues that allow us to experience these things without interruptions. Lately I’ve made a habit of going to the cinema to watch movies — it helps me focus on the movie rather than checking my phone, and I come out appreciating it more. Likewise, I bought a Kindle Paperwhite so I can more clearly delineate my time between browsing the web and reading a proper book.
You can make money with some of these things. Not unicorn money, perhaps, but certainly a lot. More importantly, a good book, a good movie, a good game — these things are all worthy of creation and consumption in and of themselves.
A good movie or book doesn’t compel us with a variable reinforcement schedule to visit it again and again and again, until we’re exhausted. No, it compels us to come back because it’s well-made, right from its beginning to its very satisfying, and very final, end.
Tags: neuro · psych · tech · web
December 6th, 2015 · 1 Comment
Two weeks ago, I was at the Six to Start offices discussing the cost of shipping packages internationally for our next Virtual Race. I bent over to pick up something on the floor and felt an intense stabbing pain in my lower right back. I attempted to straighten up, but it hurt so much that I dropped to my knees and, on the advice of Matt, lay down on the floor for a few minutes.
This alleviated the pain somewhat, but I was still barely able to walk. Even sitting down didn’t help. That morning, I’d packed my running gear to use on the way back, but it was obvious nothing of the sort was on the cards. Still, I was determined to hobble back home that night, which I successfully did.
Things hadn’t improved the next day, or the day after that. I’d evidently strained or pulled a muscle in my back, and it wasn’t going to clear up quickly.
What struck me in those days was how difficult it was to do anything. Getting up from a sofa or from bed, putting on trousers, tying shoelaces, even brushing my teeth – all these activities caused pain, to the extent that something that would normally take 10 seconds and no thought at all could instead take a few minutes. Everyone was very helpful during this time, particularly my girlfriend, but my back pain still caused real problems. I worried about how long it would last – would I need to figure out some new way of exercising other than running? How might this affect my work? If it had lasted much longer, it would certainly have worsened my health in other ways.
Thankfully, after a week, I was back to 90% and able to start running again, and now I’m pretty much at 100%. Part of the reason for the quick recovery, I think, is that I was already very healthy and had a habit of walking a lot; I’m told that back pain is worsened by not moving, and in my experience, that’s definitely the case.
However briefly, I gained a new understanding of what it means to have back pain. More broadly, I realised the kind of difficulties people have when it’s just hard or tiring or painful to move in general. It’s not news to me that many, many people have these problems, and I never doubted that walking or stretching or so on was genuinely difficult – but it’s one thing to believe it, and another thing to experience it. It’s actually astonishing to me how hard it was to do everyday tasks.
I don’t have any bright ideas about how to treat or combat back pain; I’m not about to suggest that an app* would solve it, or that we should all get exoskeletons (although that would be pretty cool). It’s just clear to me that it’s a problem that, while seemingly invisible, is bound to seriously reduce a person’s quality of life and exacerbate or create new ailments.
*If you could measure posture in real time using wearable devices, you could create an app or chatbot or game that might gently encourage people to move and stretch in a sensible way. But that’s a) obvious and, more importantly, b) rather far off given the NHS’ (in)ability to deploy that kind of technology to patients.
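To make the footnote’s idea a little more concrete, here is a minimal sketch of the nudging logic such an app might use, assuming a wearable that streams timestamped forward-lean angles. Every name and threshold here is invented for illustration — no real wearable API, and certainly no NHS deployment, is assumed:

```python
# Hypothetical sketch: nudge the wearer after a sustained slouch.
# The threshold and duration values are made up for illustration.

SLOUCH_THRESHOLD_DEG = 25   # forward lean at or above this counts as slouching
SLOUCH_DURATION_SEC = 120   # slouch must persist this long before we nudge

def should_nudge(samples, threshold=SLOUCH_THRESHOLD_DEG,
                 duration=SLOUCH_DURATION_SEC):
    """samples: list of (timestamp_sec, lean_angle_deg) tuples, oldest first.

    Returns True if the wearer has been slouching continuously for at
    least `duration` seconds, i.e. it's time for a gentle prompt to
    move and stretch.
    """
    slouch_start = None
    for ts, angle in samples:
        if angle >= threshold:
            if slouch_start is None:
                slouch_start = ts          # slouch episode begins here
            if ts - slouch_start >= duration:
                return True                # slouched long enough: nudge
        else:
            slouch_start = None            # posture recovered: reset
    return False
```

A real version would of course need debouncing for noisy sensor readings and some care not to nag, but the core loop really is this simple — which is part of why the obstacle is deployment, not the software.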