As I approach my third decade in the games industry, my natural curiosity about new technologies is now mixed with worry: that if I don’t learn them quickly enough, they might bring about my obsolescence. I don’t want to be that person who refuses to use a new tool and declares that it’s the children who are wrong, so I keep wondering if now’s the time to pick up Godot or start getting serious about VR.
Specific technologies aren’t what keep me up at night, though. You can always adapt to them pretty quickly. It’s entirely new ways of working that are the most challenging to adopt, whether that’s going from working in an office to working remotely, or going from a single game release to making “games as a service”. And I think AI may be one of the biggest changes to game development in a long while.
AI isn’t new to games, of course – it’s been used for decades to govern the behaviour of NPCs and generate graphics. But in the last year, it’s been impossible to miss the tidal wave of AI-generated art from tools like DALL-E 2 and Stable Diffusion, which turn text prompts into increasingly decent-looking graphics; more than a few developers are using these tools to prototype art. Meanwhile, Nvidia’s GET3D AI tool can generate 3D meshes and textures, and various companies are commercialising “neural radiance fields” that can generate 3D views of complex scenes from just a few 2D photos – photogrammetry on the cheap, in other words, provided that you don’t mind the AI dreaming up the details.
Perhaps we don’t need 3D models for computers to create convincing moving images on 2D screens, though. That’s what Google’s Imagen Video and Meta’s Make-A-Video AI systems are doing, by converting prompts into short movies. They’re quite basic, but improving fearsomely quickly. And there are AI writing tools based on OpenAI’s GPT-3 language model, like Sudowrite, which are already being used to help write novels. They aren’t limited to generating prose, either – you can use them to brainstorm ideas for plot and dialogue.
These AI tools might transform how companies create assets for games, like speedily generating the 3D models and dialogue for hundreds of NPCs in the next Call of Duty, but from an operational perspective, they’re comparatively minor changes to the overall development workflow. What’s truly transformational are games like AI Dungeon, where AI-powered conversational interaction is the central component of the game, and the Bureau of Multiversal Arbitration, a multiplayer game on Discord built around AI art generation.
There are important ethical questions here. Most generative AI tools were trained by scooping up massive amounts of text and graphics from the internet, often without their creators’ consent; some AI proponents claim this is perfectly fine, making a distinction between inspiration and copying, though the legal situation is far from settled. The problem becomes acute when people use AIs to create art “in the style of” specific living artists, who understandably believe that they’re being exploited and their livelihoods threatened.
And that leads to a broader question of whether we’re comfortable with AI tools disrupting entire professional fields overnight. It’s likely that a lot of the work that today goes to illustrators and 3D modellers and writers might soon be performed by AI systems. Claims that AI will simply increase the demand for skilled human professionals, or that professionals can easily retrain, display a callous disregard for very real economic pain. What responsibility do the people benefitting from these tools – gamers who might get cheaper and more interactive games, game developers whose profits increase – have toward the wellbeing of those whose livelihoods might disappear?
Even if we solved these problems with new laws and income transfers, there’s yet another question: how will game developers use AI tools? As a writer, I instinctively bristle at the idea that an AI could write better than me. But I can understand a developer who was a less confident writer using AI tools to write the script for a game where the story wasn’t the main attraction. And I think there will be some games that don’t benefit at all from AI assistance.
Back in 2012, I saw Jason Roberts presenting a demo of his beautiful puzzle game, Gorogoa. It felt like a game out of time, one that could’ve been made in the 80s, or the 2000s, or even the 2040s, and indeed, the version he released five years later looked exactly the same as the demo. Maybe AI could’ve helped Jason with some of the art, but the puzzles and animations and story were so idiosyncratic that I can’t help but suspect that no matter the instructions you gave an AI, it wouldn’t have produced what he made.
Originally published in EDGE magazine issue 380 (February 2023). Photo by DeepMind on Unsplash.