Film critics were not kind when A.I. Artificial Intelligence was released in 2001. A.I. was directed by Steven Spielberg but originated from, and was made with, Stanley Kubrick, up until his death in 1999. Many reviewers accordingly blamed Spielberg for almost everything they disliked about the film, notably its final 30 minutes, which struck them as overly sentimental.
I enjoyed the movie when it was released. Admittedly, a lot of that was because I’d played the associated ARG, which also provided more context for the final 30 minutes. But it was hard to convince my friends that it was a good movie, especially in the face of critics.
In the past five years, prominent critics have begun reappraising A.I., to its benefit. A better understanding of the ending, and the relationship between Spielberg and Kubrick, sheds much light on the intention and message of the movie. In short, Spielberg didn’t write the ending, Kubrick put the teddy bear in, the aliens are actually machines, and the ending isn’t happy:
Watching the film again, I asked myself why I wrote that the final scenes are “problematical,” go over the top, and raise questions they aren’t prepared to answer. This time they worked for me, and had a greater impact. I began with the assumption that the skeletal silver figures are indeed androids, of a much advanced generation from David’s. They too must be programmed to know, love, and serve Man. Let’s assume such instructions would be embedded in their programming DNA. They now find themselves in a position analogous to David in his search for his Mommy. They are missing an element crucial to their function.
When the epilogue begins, it’s Kingsley’s voice that explains the ice age and the passage of time. Does that mean David’s story – ie AI – is itself a creation myth, told by these futuristic mechas about the making of their kind, as an attempt to understand the elder beings that made them?
“Human beings must be the key to the meaning of existence,” the Kingsley mecha tells David, and the line sounds odd until you realise these creatures hold humans in the same awed regard as humanity holds its gods. Dr Hobby’s son died so that David might live, and these new mecha are descended from David’s line.
In that light, AI’s ending isn’t twee, but wrenchingly sad. The love we’re seeing, between a mecha and a clone, is a simulacrum, as manufactured as a movie. But if it feels like the real thing to us, what does that tell us about the real thing? In that moment, Spielberg shows us real fear and real wonder, knotted together so tightly it becomes impossible to tell the two apart.
Unpredictability, though, is not necessarily what audiences want, which brings us to the focal point of controversy over A.I., and a major reason the movie is more of a cult item than a confirmed modern classic: the film’s ending. Initially, David’s drive leads him to the bottom of the ocean, staring at a statue of the Blue Fairy, convinced that if he waits long enough, she will work her magic. You may have heard, or even subscribed to, the belief that this moment, with David waiting underwater indefinitely, is the “correct” end to the film. But the movie presses on past this neatness, jumping forward thousands of years. The Earth has frozen over, and an advanced race of mecha-beings (not aliens!) uncovers David. Through a process that is, admittedly, a little drawn out with explanations (including, essentially, two different types of narration), the mecha-beings, eager to learn from a robot who knew humans, agree to revive Monica for David. In this form, though, she’s more of a ghost; she can only stay revived for a single day. She and David spend a perfect day together before she drifts off to sleep, accompanied by her mecha son, essentially a dying ember of human life.
It’s understandable, then, that so many backseat directors would dutifully follow that program. This is not, however, Spielberg’s obligation. The film frequently adopts a robot’s point of view, but was not made by one. By sticking with David after thousands of years’ worth of waiting, Spielberg stays true to a robot perspective while also deepening David’s sadly close connection to human experience, a far trickier balancing act than having David dead-end at the bottom of the ocean. The actual and vastly superior ending of A.I. is more than a bleak kiss-off; it imagines humanity’s final moments of existence (if not literally, certainly metaphorically) as a dreamy day of wish fulfillment. David wants to be a “real boy,” and the scenes with the ghostly Monica turn his desperation and sadness from an imitation-human abstraction to a desire with an endpoint, which in this case coincides with, more or less, the end of humanity as we know it. As such, the sequence also turns the comforting idea of dying happily into something pretty fucking sad. Spielberg hasn’t grafted a happy ending onto a dark movie; he’s teased the darkness out of what his main character wants. David’s artificial intelligence has given him the very human ability to obsess, and then to take solace in his own happiness above anything else.
Mark Kermode at the BBC: AI Apology (published 2013)
In 2002, Spielberg told film critic Joe Leydon that “People pretend to think they know Stanley Kubrick, and think they know me, when most of them don’t know either of us”. “And what’s really funny about that is, all the parts of A.I. that people assume were Stanley’s were mine. And all the parts of A.I. that people accuse me of sweetening and softening and sentimentalizing were all Stanley’s. The teddy bear was Stanley’s. The whole last 20 minutes of the movie was completely Stanley’s. The whole first 35, 40 minutes of the film – all the stuff in the house – was word for word, from Stanley’s screenplay. This was Stanley’s vision.” “Eighty percent of the critics got it all mixed up. But I could see why. Because, obviously, I’ve done a lot of movies where people have cried and have been sentimental. And I’ve been accused of sentimentalizing hard-core material. But in fact it was Stanley who did the sweetest parts of A.I., not me. I’m the guy who did the dark center of the movie, with the Flesh Fair and everything else. That’s why he wanted me to make the movie in the first place. He said, ‘This is much closer to your sensibilities than my own.’”
Snap Judgment is the novel of podcasts for me – each episode is hard to get into, and each story can be intimidatingly unpredictable, as personal tales inevitably are. But overall, the podcast is surprisingly rewarding and consistent. That’s a real achievement compared to more highly-produced podcasts that are like crystals, almost too perfect and artificial in their construction – as Radiolab and Gimlet Media can be, for example.
So consider this a short note of appreciation for Snap Judgment. It’s not my favorite podcast, but it does good work.
I’ve been struggling to get started writing a new book. I find it all too easy for my time out of work to be nibbled away, seconds and minutes and hours, by genuinely intriguing articles, blog posts, videos, comments, TV shows, work, and games. Like a lot of people, I have the urge to complete tasks and fill up progress bars, but with the internet and media, the progress bar can never be filled. And so I never end up starting that book, even though I have plenty of notes and (I think) good ideas.
But maybe that’s not the real reason. I did write a book a few years ago, after all, and I don’t recall being any less busy or distracted back then. Perhaps it’s because the media environment has become even more distracting – who knows?
Coincidentally, I heard Elizabeth Gilbert talk about this very subject on the Longform podcast. I’ll first admit that I only knew one thing about Gilbert beforehand, which is that she wrote the highly successful Eat, Pray, Love; a book that turned into a movie starring Julia Roberts, which a lot of people whose opinions I trust found very shallow. So I was skeptical when I saw the episode’s guest, but not so skeptical that I deleted the episode out of hand; the Longform people have earned that much trust from me over the years.
Here are a couple of good bits from the episode, firstly on being multitalented:
…When it comes to deciding what you’re going to be, it helps if there’s only one thing you’re good at… I know a lot of multitalented people… but I do think it’s hard for them sometimes to know where to put their energy. And it’s easier if you’re not so great at a bunch of stuff.
I confess that I think of myself as multitalented. I like to think that, given sufficient effort, I could become pretty good at making videos or games or writing or whatever. I like learning new things. And for me, that makes it hard to decide whether my next big personal project should be a game or a book or something else.
Another good bit is about inspiration, and why it’s valuable to identify the things that you really care about when it comes to taking on a big personal project:
The calculus has to be, what’s the thing that makes me want to get up in the morning, what’s the thing that I’m psyched that I get to do this…. It’s about being very awake, about being very alert. The work is clearing your life of distractions enough so you are actually capable of feeling that excitement when it arrives. That you haven’t overbooked yourself in ten different directions so that you are so exhausted that you wouldn’t know inspiration if it punched you in the face. You can’t do that to yourself. It’s about being sober. It’s about being hopeful. It’s about a certain faith, it’s a way of being, which is about being ready.
And it’s about trusting your own curiosity enough to follow it, even if it doesn’t make sense. Even if the inspiration that you had doesn’t align with anything you’ve done before, even if it doesn’t seem like it would be marketable, even if it’s something that you can’t even believe you’re interested in, but you sort of have to have full faith that if you’re curious about something, it’s for a reason, that it’s a clue on the great scavenger hunt, and that you follow that clue and then the next and then next.
The tricky bit is that you have to start from a place of ‘this is what I’m most excited about, this is what I’m most curious about’, and then you have to recognise and know what will happen, which is that six months into it, it’s going to feel very boring and tedious because making things is often boring and tedious.
Another idea is going to come along very seductively, and do the dance of the seven veils in the corner of your studio, and say, I’m a much more interesting, much more exciting idea, why don’t you abandon this project that you’ve been working on for six months and come and run away with me to paradise. And you have to be smart enough to know not to do that, because six months from now that project will also be dull and boring and another idea will come and seduce you. You have to be able to stay with it through the boring part to get to the end, so when those other seductive new ideas come along, you have to tell them to take a number, that we’re doing this now. And until this thing is finished, I’m not going to run away with you.
First it’s the excitement, then it’s the discipline… I have this theory that everything that’s interesting is mostly boring. So, life is filled with all these really interesting things and we chase the high and the buzz of the excitement of that thing, but 90% of that thing is boring.
None of this is new to me. In fact I’ve given similar advice to other people. But sometimes you need someone else to tell you what you already know, and Gilbert did that pretty damn well in this podcast.
Of course it could be done, given low enough pledge goals. But I wonder what the bounds of this idea are. Could one person really launch 30 satisfying projects in 30 days, and deliver them in a reasonable amount of time – say, two years? Would you need more than one person to do this? What counts as ‘satisfying’? If it was, say, writing 30 100-word stories or drawing 30 single-frame cartoons, that seems a little too easy. But 30 completely unique projects is probably too much to expect.
And how could you promote this? Practically speaking, most Kickstarters are powered by friends and family, and even then it’s hard enough to get them to back you a single time, let alone 30 times. Sure, you can make the standard pledge level $1 for each project, but they’d still need to remember to visit Kickstarter once a day.
Realistically, working in a team would make this much easier – it’d give you access to a much broader pool of backers. Or if you insisted on doing it as an individual, you’d need Batman-levels of preparation.
I quite like these kinds of creative constraints (see Perplex City, A History of the Future in 100 Objects, etc.) but perhaps this is a bridge too far.
There are about 20 plug socket types being used around the world today, but only one really matters for modern devices: USB-A. And USB is truly a worldwide standard. Practically all the devices I might carry around – phone, tablet, watch, camera — can be powered directly via USB cable. My next laptop will be powered by USB. Even my Philips electric toothbrush can plug into a USB socket.
It’ll be several years until we can expect to see USB-A and USB-C sockets in the same places that we see plug sockets, which means I’ll still have to carry around charger bricks and plug adaptors when I travel abroad, but if you’ve flown on a plane or stayed in a modern(ish) hotel in the last couple of years, you’ll have spotted USB sockets.
This is a wonderful thing, the peace dividend of the smartphone wars. If I was staying in a hotel or friend’s house in practically any country, I could be sure of borrowing a charger cable or adaptor. Just think of all the waste and pointless peripherals avoided. Other dividends include the widespread usage of 4G/LTE and wifi standards, and soon enough we’ll be able to add wireless charging.
I’m curious to see if and when USB-C replaces USB-A as the socket type of choice. There’s a lot to like about USB-C in terms of reversibility (no getting the plug upside-down), increased power output, and size. But given the typical cycles of replacing infrastructure in hotels, airports, cars, planes, etc., I imagine it’ll be another decade before that really happens.
This week, I bought a new iPad Pro 9.7″ to replace my iPad Mini 2. I use my iPad at home for at least two hours every day, mostly for web browsing and reading magazines, so it didn’t feel like a stretch to spend the not-inconsiderable £619 to get an upgrade. I was particularly interested in the iPad Pro’s new screen (40% lower reflectance than the Air 2, maybe 70+% over the Mini 2; laminated display; etc.), the Apple Pencil support, and most importantly, a 3x speed increase compared to what I have now.
Has my Mini 2 gotten slower since I bought it two and a half years ago? It feels like it, but according to benchmarks, iOS 9 actually increased the speed of the Mini 2 for my most common activity, web browsing. Perhaps the benchmarks are wrong, but it’s also likely that I just expect much more from my devices every year – not just because web pages and apps are becoming more complex, but due to the ratcheting-up of performance on my other devices. When I first got my iPad Mini 2, I’m sure it made my iPhone 5 feel slow in comparison, but my iPhone 6 now makes the Mini 2 feel slow.
And now the iPad Pro makes my iPhone 6 feel slow(ish). That’s to be expected, but more surprisingly, in my tests it loads webpages just as fast as my 27″ iMac from late 2012, which has 24GB of RAM; the iPad Pro has ‘only’ 2GB. Last night I used FaceTime while browsing the web and scrolling in Twitter, and there was nary a hiccup. I’m sure I could make it slow down with, say, a dozen Safari tabs and Grand Theft Auto, but that’s not a common use-case for me.
The display is just as good. Yes, it has lower reflectance, which makes for a more pleasant reading experience (no getting distracted by subtle reflections in front of the text); yes, it can go brighter. But the real MVP is the True Tone feature, which basically white-balances the display by sensing the colour temperature of your surroundings. It’s not headline-grabbing but as soon as you turn it off, you realise just how blue the display would be without it. The ultimate effect is less eye strain because it makes the iPad feel more like a piece of paper rather than some artificial glowing rectangle. I wouldn’t be surprised if True Tone was introduced to all new Apple displays in the next couple of years.
Naturally, the world wouldn’t be complete without Apple fanatics who are deeply, personally offended by the iPad Pro not having, say, USB 3 support or 4GB of RAM or a faster Touch ID sensor. Without them, it’s apparently not a sufficiently impressive upgrade over the iPad Air 2 from 18 months ago. I think that’s arguable, but what’s more interesting to me is that there are people who really want to upgrade a 1.5 year old tablet.
Now, we all know people who upgrade their phones every year, and while I don’t care enough to do that, I can understand the impulse because it still feels like there’s a rapid pace of improvements in smartphones. But I don’t know anyone who upgrades their computer every year. In fact, it wouldn’t even be possible to do such a thing on many Macs, because they don’t get updated that often – and in any case, the upgrades would get you a scant 10-20% speed increase.
Tablets occupy a middle ground. Since they share the same core processors as phones, they share the tremendous speed improvements. But their other features are changing less rapidly; people just don’t care as much about the camera or touch sensor on tablets as they do on their phones, because they use their tablets less frequently and for a narrower range of tasks. So I find it baffling that anyone would even want to upgrade their iPad every release.
I suppose people are upset because it’s called the iPad Pro and that Apple are marketing it as a replacement for your computer. If so, that’s unfortunate. ‘Pro’ is a marketing term; the iPad Pro is no more meant for ‘professionals’ than the Lenovo Yoga 3 Pro laptop is meant for professionals. The iPad will never be a true replacement for a traditional computer until it’s much more flexible and runs a windowed operating system… but… who cares? Many people don’t need a traditional computer any more, and most people are using traditional computers far less – I know I am. For the rest of the time, I’m happy using my tablet.
I’m intrigued by the proliferation of explicitly time-based self-care plans, like the 7 Minute Workout. They aren’t a new phenomenon – we’ve had 30 day diets and things like NaNoWriMo for decades. But it feels like the duration of these plans are getting shorter and shorter.
Part of the change is surely due to science. We know now that high-intensity interval training can produce better results in terms of fitness than longer but less intense exercise, by putting our heart and muscles under shorter, sharper periods of stress. Crucially, we know the mechanisms of why this works – it’s not just an observation, we can really see how our body’s cells and organs respond to stress.
But there are different degrees of rigour and certainty in science. A lot of the self-care plans based on psychology and neuroscience are, to my mind, based on much fuzzier research. I don’t mean to say that the researchers in question are incompetent or lying; it’s that their research is taken lightyears too far by companies marketing products.
Let’s imagine researchers conduct a study where they place university students in an MRI scanner and observe their brains while they’re listening to different sounds for ten minutes; maybe some students hear music, some hear white noise, some hear speech, and so on. They find that the students who hear the music have a different kind of brain activity in regions associated with focus or relaxation, or whatever, and the students also report that they feel more relaxed afterwards. So perhaps something is going on with the music, or that type of music, and it’s worthy of more study.
But then let’s say a company sees this research and makes an app – 10 Minute Relaxation (I’m making this up) – which plays calming music to you. They say their app is proven ‘by science’ to make you more relaxed in just ten minutes. Well, clearly not; what ‘works’ on university students sitting in an MRI may not work at all on a 50 year old sitting on a bus.
In any case, it doesn’t matter whether it works or not, it sounds good and people want a fast solution proven by science. The app makers can point at the study and the apps’ users get a nice placebo effect.
Not long ago, the time in London was different from the time in Edinburgh. Not that it mattered – it took so long to travel between the two cities, and the journey was so unreliable, that knowing the time down to the minute would have been pointlessly expensive (clocks and watches being pretty high tech back a century or two ago).
But now we have smartphones, which means that we agree on the time down to the second, and we can know our ETA via Google Maps and Uber down to the minute. We can be more efficient – no more idly waiting for ten minutes at the coffee shop for a friend, because they can let us know they’re running late; we can spend that ten minutes on something else. Maybe it’s playing a game or reading Facebook – or maybe it’s something productive, like a 10 Minute Relaxation session.
The gaps in our busy lives are shrinking, which means that self-care solutions must also shrink.
Any one of us can become an exceptional artist or writer or games designer or YouTuber or actor. Any one of us can lose our jobs in an instant. Any one of us can see our entire field of work vanish in just a few years, thanks to automation and globalisation. So we are in competition with everyone else, which is a recipe for serious anxiety. It means you always need to be improving yourself; and it’s easy to see why shorter solutions can feel more manageable and rewarding than, say, the 7 Month Workout, or the 10 Year Relaxation session.
Stop the presses: storytelling has just entered the digital age! Every month, daring authors are creating new kinds of interactive experiences that push the boundary of what’s possible, featuring such innovations as ‘branching storylines’, ‘non-linear narratives’, and ‘illustrations’ – none of which would be possible in printed books. These authors are being aided by risk-taking, forward-thinking publishers, and together they are trailblazing paths into imaginative new territories.
You too can be part of this revolution! But it’s not enough just to write a good digital story – the true mark of success is not critical praise, popular acclaim, or financial success, but rather, it’s being covered in mass media.
That’s why I conducted an exhaustive survey of digital storytelling coverage on traditional media such as newspapers, trade publications, and general interest websites. By means of a proprietary deep learning algorithm I developed last night, I extracted the precise elements that will help – or hinder – your quest to get coverage, and assigned each one a point value. Naturally, nothing is guaranteed, but if your digital story ends up with a high point score, you can be confident you’ll be lauded by the likes of the New York Times and BBC.
Without further ado, the guide:
+10 points if you’ve been engaged by a traditional publisher (bonus 20 points if it’s by a well-known one such as Penguin Random House or HarperCollins)
+10 if you’re an established novelist (bonus 20 if you hate apps and have never used a smartphone before)
+10 if it comes out at the same time as the traditional novel it was so clearly originally written as
-10 if your digital stories have sold more than 10,000 copies (-20 if they’ve sold more than 100,000; no-one likes that populist stuff)
-50 if anyone has ever called it, or compared it to, ‘a game’
+20 if it’s episodic
+20 if its chapters can be read in any order
+20 if it has pretty illustrations that’ll look great in an article (bonus 20 if it has animations)
+20 if you hate Twitter, would never use it, and are prepared to write a piece saying so
+30 if you claim you have never played games or interactive fiction, yet are confident that your story is superior and more innovative
+5 if it does stupid-ass locational bullshit that means the journalist can get a day out of the office to try it out
+10 if the author is willing to say that “this kind of thing is just a bit of fun and will never replace real books”
-20 if it’s science fiction, fantasy, or romance
+10 if it’s based on Shakespeare, Dickens, or similarly out-of-copyright classic authors
+10 if it’s for kids (bonus 5 points if it’s ‘educational’)
+20 if your story involves Google, Facebook, Amazon, or Apple (bonus 10 points if it’s actually made by them)
+20 if your publisher has raised $1 million+ in VC
-20 if your publisher is profitable
-30 if your publisher has existed for more than 5 years
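If you’d rather not tally your score by hand, the guide above boils down to a weighted checklist. Here’s a minimal sketch in Python – the point values are lifted straight from the list, the attribute names and the example project are entirely made up:

```python
# Tongue-in-cheek scorer for the (satirical) coverage guide above.
# Point values come from the guide; attribute names are hypothetical labels.
WEIGHTS = {
    "traditional_publisher": 10,
    "well_known_publisher": 20,   # bonus on top of traditional_publisher
    "established_novelist": 10,
    "episodic": 20,
    "chapters_any_order": 20,
    "pretty_illustrations": 20,
    "called_a_game": -50,
    "sold_over_10k": -10,
    "publisher_profitable": -20,
    "publisher_over_5_years": -30,
}

def coverage_score(attributes):
    """Sum the point values for every attribute the project has."""
    return sum(WEIGHTS[a] for a in attributes if a in WEIGHTS)

# A hypothetical digital story: big-name publisher, episodic...
# but alas, someone once called it 'a game'.
project = {"traditional_publisher", "well_known_publisher",
           "episodic", "called_a_game"}
print(coverage_score(project))  # 10 + 20 + 20 - 50 = 0
```

Naturally, no score here guarantees anything – that’s rather the point.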
With thanks to Naomi Alderman, who provided essential help on the survey
This year, I’ve committed to reading more books, for reasons I discuss in this podcast. So far, I’ve read eight books, which is six ahead of my ‘25 books in 2016’ schedule:
The Night Circus by Erin Morgenstern: Not sure what all the fuss was about. The worldbuilding and descriptions of magic were well done, but ultimately rendered empty by the flat characters, who were quite literally plot devices.
Luna: New Moon by Ian McDonald: Game of Thrones meets The Moon is a Harsh Mistress, but in a good way.
What Technology Wants by Kevin Kelly: Achieves that rare feat of being a book about technology that doesn’t feel instantly dated. Worth reading, and a new take on the techno-optimist slant.
I’ve been a fan of Philip Reeve since reading his thrilling Mortal Engines quartet. Strictly speaking, Reeve is a young adult SF/fantasy author, but I found that series to be more imaginative and darker than many ‘adult’ novels. A lot of his other books have been for younger children, so when I heard that he’d written an out-and-out SF novel called Railhead, I had to check it out.
Railhead is an exciting amalgam of two of my favourite SF series: Dan Simmons’ Hyperion Cantos (well, the first two books, anyway), and Iain M. Banks’ Culture series. The Hyperion part stems from Railhead’s network of wormholes, connected by – of course – railways; plus the presence of godlike AIs with their own cryptic plans. The Culture part is represented by the slightly-smarter-than-human AI trains, with appropriately Banksian names, plus the well-written action, explosions, drones, and AI avatars. There’s also a dash of Dune and Hunger Games in there, as well.
Perhaps the most Banksian thing – and the most surprising to see in a young adult SF novel – is Railhead’s refreshingly modern treatment of gender norms and sexuality. Some characters are gay, and some characters regularly switch sexes, leading to offhanded passages like this:
She was gendered female, with a long, wise face, a blue dress, silver hair in a neat chignon.
Malik got a promotion. He got himself a husband, a house on Grand Central, a cat.
And, to cut the story short, it fell in love with him. And he fell in love with it. In the years that followed, Anais came to him again and again. Sometimes its interface was female, sometimes male. Sometimes it was neither. Different bodies, different faces, but he always knew it.
What are the 100 objects that future historians will pick to define our 21st century? A javelin thrown by an 'enhanced' Paralympian, far further than any normal human? Virtual reality interrogation equipment used by police forces? The world's most expensive glass of water, mined from the moons of Mars? Or desire modification drugs that fuel a brand new religion?
A History of the Future in 100 Objects describes a hundred slices of the future of everything, spanning politics, technology, art, religion, and entertainment. Some of the objects are described by future historians; others through found materials, short stories, or dialogues. All come from a very real future.