Technology and the Virtues: Change Yourself, Change the Future


Why write about the future? I’ve never seriously tried to predict the future, a fool’s game if there ever was one. Most science fiction writers are perfectly aware of the contingent nature of the future, and prefer to think about how new technology, and the new abilities it affords us, might alter our lives and habits and culture and institutions.

Today, 24/7 technology reporting offers us constant, hazy glimpses of possible futures. In one, we might downvote an obnoxious stranger at a glance with augmented reality glasses. In another, we can live, work, and sleep in an autonomous pod on wheels. The details don’t matter, like whether the pod is made by Google or VW or Ford – what matters is whether this vision provokes desire or distaste in us. And by ‘us’, I don’t mean humanity as a whole, but individuals, all of whom have some degree of choice about how they approach that future.

Some degree. One of the depressing realities of the 21st century is how we’ve become ensnared by global capitalism such that if you want to live, work, and socialise with your friends and family, you don’t have any choice about the technology you use. Sure, you can choose between Apple and Google, and Instagram and Snapchat, and Gmail and Outlook, but if you want a job, if you want to stay in touch with your friends and family, if you want to get invitations to birthday parties and weddings, you will use a smartphone, an instant messaging app, an email provider, all of which are made by the same three or four corporations.

Our seeming powerlessness runs head-on into the abuses of power by those very same corporations. Even if you are concerned about Facebook’s policies, what difference would it make if you deleted your account? Should you stop using Uber and use Lyft? Or not use ridesharing at all? Just how bad are we meant to feel about joining Amazon Prime and exploiting warehouse workers? If we have no choice over what technologies we adopt, and if those technologies exert more and more power over our lives, how can we hope our lives will be better tomorrow than they are today, other than hoping that corporations won’t “be evil”?

I don’t know why Prof. Shannon Vallor’s book, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting, attracted so little notice when it was published in 2016. Perhaps it’s because she counsels a middle path between starry-eyed Silicon Valley techno-utopianism and deeply conservative techno-pessimism. Perhaps her formidable academic credentials are seen by journalists as inferior to working at Google as a design ethicist for a few years. I really couldn’t say.

Regardless, Technology and the Virtues is the most useful, thorough, realistic, and hopeful book I’ve read on how we, as individuals and as a global species, should evaluate, choose, and use technology today and in the future. Vallor, a philosopher of technology at Santa Clara University, claims that today’s technologies are so powerful and pervasive that our decisions about how to live well in the 21st century are not simply moral choices, but that:

they are technomoral choices, for they depend on the evolving affordances [abilities] of the technological systems that we rely upon to support and mediate our lives in ways and to degrees never before witnessed.

which means:

a theory of what counts as a good life for human beings must include an explicit conception of how to live well with technologies, especially those which are still emerging and have yet to become settled, seamlessly embedded features of the human environment. Robotics and artificial intelligence, new social media and communications technologies, digital surveillance, and biomedical enhancement technologies are among those emerging innovations that will radically change the kinds of lives from which humans are able to choose in the 21st century and beyond. How can we choose wisely from the apparently endless options that emerging technologies offer? The choices we make will shape the future for our children, our societies, our species, and others who share our planet, in ways never before possible. Are we prepared to choose well?

This question involves the future, but what it really asks about is our readiness to make choices in the present.

Upon which principles should we make those choices?


The Fable of the Anti-Dragonist Thought Leadership

A riposte by Zarkonnen to Nick Bostrom’s The Fable of the Dragon-Tyrant, a tedious story that spends 5000 words telling us that death from ageing is bad and we should try to prevent it:

One day, an anti-dragonist on a speaking tour visited a town. When he arrived, most of the town’s inns were already full, and he had to make do with a small room in a small inn in a run-down part of the town. The next morning, he stood outside the inn on his soap box and told people about how the dragon could be defeated. A small crowd gathered around him. When he had finished speaking, a woman asked: “My children are hungry. My husband went off to war against the tigers and never came back. How does killing the dragon help them?”

“Well, they too will one day be fed to the dragon!”

“But they are hungry now. My baby is very weak. She cries all the time. Even if she doesn’t die, she’s going to grow up stunted.”

“I’m sure you can find a way. Anyway, I’m here to talk about the dragon, it’s…”

Another interrupted him: “My son was killed by the king’s men three weeks ago. They laughed as they cut him down. No one will hear my case.”

“Well, I’m sure they had a good reason. Your son was probably a criminal.”

Another said: “My family beats me because I don’t want to marry the man they chose for me. Right now, I wouldn’t mind being eaten.”

“Listen. I’m not interested in the problems of you little people. They’re not my problems, and anyway, you’re probably lying, or exaggerating, or just not trying hard enough. But I’m scared of the dragon, because the dragon’s going to eat everyone, including me. So we should concentrate on that, don’t you agree?”

And the people rolled their eyes and walked away.

When Surveillance Goes Private: A 2027 Retrospective

I’d like to begin with a story.

I was born in the UK — in Birmingham — although obviously I don’t have the accent! My parents came from Hong Kong, but we didn’t visit it until I was a few years old, since it’s quite the trip for any family.

The approach to the old Hong Kong airport in Kowloon Bay is hair-raising. You descend between skyscrapers, so close that you can practically see inside their windows. We were staying with relatives near the airport, which was fun, if noisy.

Me and my brother did the rounds of our aunts and uncles and grandparents, but eventually it was time for my parents to see their own friends. We were left with our cousins and the world’s greatest collection of pirated Famicom and Sega Megadrive videogames.

Now, these cousins. Their great aunt Agatha lived with them. As I was told it, she’d travelled the world, sailed the seas, fallen in love with all sorts of people, and made her fortune. Now in her eighties, she was still as sharp as a tack, with a photographic memory and a wickedly funny tongue.

Agatha couldn’t easily walk any more, so more often than not, she’d sit in her armchair in the corner, situated just so she could see the whole living room and kitchen and hallway, and watch everyone coming and going. She wanted to know what was going on in the home, but more importantly, she wanted to be useful — and she was.

If you were on your way out but you’d forgotten to pick up your keys, auntie Agatha would remind you (very loudly). If you were looking around for a letter or book you’d misplaced, she’d know precisely where you’d left it. She’d even watch you while you were doing your chores and tell you just which spots you’d forgotten to dust. Her job, as she saw it, was to help the household flourish, and keep them safe.

I’m sure some of you have figured out where I’m going with this. Almost forty years later, we all have auntie Agathas, watching over us in every room of our homes.

Today, in 2027

8 out of 10 households in the UK and US now have multiple home cameras. It’s one of the most astonishing success stories in the history of technology, with an adoption curve almost as impressive as smartphones in the previous decade. But unlike smartphones, we’ve bought many more than one per person.


What fuelled the rise of home cameras? Let’s start with the devices themselves.

Technology

Why did the home camera revolution only begin in 2018 and not earlier? Fast and cheap internet was an essential condition, allowing owners to monitor their homes on the move and abroad. Another boost came from the ‘smartphone dividend’, which reduced the price of camera components.

But beyond 2018, two technological revolutions fuelled the rise of home cameras: charging and sensors.

Early Home Cameras

Nowadays, it’s hard to believe that almost all home cameras in the mid-teens were wired. These cameras had no batteries and had to be tethered to a power outlet at all times, constraining their placement within homes and generally causing an unsightly mess.

From 2018 to 2023, home cameras adopted batteries lasting one week to one month — a massive improvement over tethering, as they could be mounted anywhere, including outdoors and in bathrooms — but arguably more irritating than wires, as their “low-power” chirping became a frequent sound in many homes.

It wasn’t until the full rollout of resonance charging, or more broadly speaking, ‘charging at a distance’, that cameras truly permeated every room and corner of our homes. Freed from the need to be wired or retrieved every month, and completely weatherproofed, they were stuck in the corners of ceilings, thrown onto roofs, hung on walls, mounted on gates, and balanced precariously on shelves. Providing they remained within range of a resonance station, they could be placed and forgotten for years.

The improvement in the sensor capabilities of home cameras has been even more extraordinary. In 2018, most cameras had a laughably named ‘high-definition’ resolution of 1920 x 1080 — barely enough to distinguish small objects across a room. Matters were soon improved with the introduction of ‘High Speed 4K’ sensors that could examine minute changes in skin blood flow to monitor people’s heart rate and emotional state. Soon after, cameras reached beyond the visible spectrum to infrared and ultraviolet, essential for home security and health applications.

It wasn’t until the introduction of multipath LIDAR in 2024 that the supremacy of cameras in our hearts and homes was assured. Various primitive forms of LIDAR had been present in earlier cameras, as an aid to home VR and augmented reality through precision depth mapping and 3D positioning. Multipath LIDAR, however, multiplied the reach of our cameras by using reflections to see around corners into other rooms; to interpolate new camera angles; and to even see inside objects. It finally provided total awareness of all objects within a home, without the need for excessive numbers of cameras.

In fact, the most advanced multipath systems now pose a threat to the business model of the camera manufacturers who’ve emphasised quantity over quality. Now that a single camera can take the place of many, it’s likely that overall camera shipments could begin falling.

Enough about technology — why did people invite cameras into their homes, and what did they use them for? I’ve identified five broad applications, in rough chronological order.

Artificial Intelligence: Another Inspection

Film critics were not kind when A.I. Artificial Intelligence was released in 2001. A.I. was directed by Steven Spielberg but originated from, and was made with, Stanley Kubrick, up until his death in 1999. A lot of reviewers accordingly blamed Spielberg for pretty much everything they disliked about the film, notably its final 30 minutes, which appeared to be overly sentimental.

I enjoyed the movie when it was released. Admittedly, a lot of that was because I’d played the associated ARG, which also provided more context for the final 30 minutes. But it was hard to convince my friends that it was a good movie, especially in the face of critics.

In the past five years, prominent critics have begun reappraising A.I., to its benefit. A better understanding of the ending, and the relationship between Spielberg and Kubrick, sheds much light on the intention and message of the movie. In short, Spielberg didn’t write the ending, Kubrick put the teddy bear in, the aliens are actually machines, and the ending isn’t happy:

Roger Ebert: Great Movie: A.I. Artificial Intelligence Movie Review (published 2011). See his original review for comparison.

Watching the film again, I asked myself why I wrote that the final scenes are “problematical,” go over the top, and raise questions they aren’t prepared to answer. This time they worked for me, and had a greater impact. I began with the assumption that the skeletal silver figures are indeed androids, of a much advanced generation from David’s. They too must be programmed to know, love, and serve Man. Let’s assume such instructions would be embedded in their programming DNA. They now find themselves in a position analogous to David in his search for his Mommy. They are missing an element crucial to their function.

Robbie Collin at The Telegraph: AI revisited: a misunderstood classic (published 2014)

When the epilogue begins, it’s Kingsley’s voice that explains the ice age and the passage of time. Does that mean David’s story – ie AI – is itself a creation myth, told by these futuristic mechas about the making of their kind, as an attempt to understand the elder beings that made them?

“Human beings must be the key to the meaning of existence,” the Kingsley mecha tells David, and the line sounds odd until you realise these creatures hold humans in the same awed regard as humanity holds its gods. Dr Hobby’s son died so that David might live, and these new mecha are descended from David’s line.

In that light, AI’s ending isn’t twee, but wrenchingly sad. The love we’re seeing, between a mecha and a clone, is a simulacrum, as manufactured as a movie. But if it feels like the real thing to us, what does that tell us about the real thing? In that moment, Spielberg shows us real fear and real wonder, knotted together so tightly it becomes impossible to tell the two apart.

Jesse Hassenger at the AV Club: Contrary to popular opinion, Spielberg found the perfect ending for A.I.

Unpredictability, though, is not necessarily what audiences want, which brings us to the focal point of controversy over A.I., and a major reason the movie is more of a cult item than a confirmed modern classic: the film’s ending. Initially, David’s drive leads him to the bottom of the ocean, staring at a statue of the Blue Fairy, convinced that if he waits long enough, she will work her magic. You may have heard, or even subscribed to, the belief that this moment, with David waiting underwater indefinitely, is the “correct” end to the film. But the movie presses on past this neatness, jumping forward thousands of years. The Earth has frozen over, and an advanced race of mecha-beings (not aliens!) uncovers David. Through a process that is, admittedly, a little drawn out with explanations (including, essentially, two different types of narration), the mecha-beings, eager to learn from a robot who knew humans, agree to revive Monica for David. In this form, though, she’s more of a ghost; she can only stay revived for a single day. She and David spend a perfect day together before she drifts off to sleep, accompanied by her mecha son, essentially a dying ember of human life.

It’s understandable, then, that so many backseat directors would dutifully follow that program. This is not, however, Spielberg’s obligation. The film frequently adopts a robot’s point of view, but was not made by one. By sticking with David after thousands of years’ worth of waiting, Spielberg stays true to a robot perspective while also deepening David’s sadly close connection to human experience, a far trickier balancing act than having David dead-end at the bottom of the ocean. The actual and vastly superior ending of A.I. is more than a bleak kiss-off; it imagines humanity’s final moments of existence (if not literally, certainly metaphorically) as a dreamy day of wish fulfillment. David wants to be a “real boy,” and the scenes with the ghostly Monica turn his desperation and sadness from an imitation-human abstraction to a desire with an endpoint, which in this case coincides with, more or less, the end of humanity as we know it. As such, the sequence also turns the comforting idea of dying happily into something pretty fucking sad. Spielberg hasn’t grafted a happy ending onto a dark movie; he’s teased the darkness out of what his main character wants. David’s artificial intelligence has given him the very human ability to obsess, and then to take solace in his own happiness above anything else.

Mark Kermode at the BBC: AI Apology (published 2013)

And finally, Steven Spielberg in conversation with Joe Leydon:

In 2002, Spielberg told film critic Joe Leydon that “People pretend to think they know Stanley Kubrick, and think they know me, when most of them don’t know either of us”. “And what’s really funny about that is, all the parts of A.I. that people assume were Stanley’s were mine. And all the parts of A.I. that people accuse me of sweetening and softening and sentimentalizing were all Stanley’s. The teddy bear was Stanley’s. The whole last 20 minutes of the movie was completely Stanley’s. The whole first 35, 40 minutes of the film – all the stuff in the house – was word for word, from Stanley’s screenplay. This was Stanley’s vision.” “Eighty percent of the critics got it all mixed up. But I could see why. Because, obviously, I’ve done a lot of movies where people have cried and have been sentimental. And I’ve been accused of sentimentalizing hard-core material. But in fact it was Stanley who did the sweetest parts of A.I., not me. I’m the guy who did the dark center of the movie, with the Flesh Fair and everything else. That’s why he wanted me to make the movie in the first place. He said, ‘This is much closer to your sensibilities than my own.'”

Initial Thoughts on KSR's Aurora

Spoilers abound for the entire plot of Kim Stanley Robinson’s Aurora

I wouldn’t be exaggerating if I said that Kim Stanley Robinson’s Mars Trilogy changed my life. I was 14 and reading plenty of Arthur C. Clarke and Isaac Asimov when I idly flipped through our monthly book club brochure. They usually didn’t have any science fiction, so I was surprised to see an entire page devoted to a book called Red Mars. It was by some author I’d never heard of and therefore of questionable quality, but Arthur C. Clarke himself urged readers to give it their time. “The ultimate in science fiction,” or something similarly unambiguous.

We bought the book – we had to, that’s how book clubs worked – and I fell in love with the idea of colonising Mars. I felt as if Kim Stanley Robinson had demonstrated that not only was it possible, not only was it sublime, but it was absolutely necessary for the project of humanity becoming a fairer, more enlightened people. At an impressionable age, this book made the biggest impression, and was enough to spark my ambition to write an essay, win a competition, travel to a Mars conference in the US on my own, organise youth groups, speak at TED, and so on.

I am not active in the Mars exploration movement, or even the space exploration movement any more. I remain deeply interested, but it became clear to me that the road to Mars would be much longer and much harder than anyone had expected. Even now, even with SpaceX, it feels as if the decades keep ticking up. What once might have happened in 2020 will now happen in 2030, or 2040, or later. And when we get there, what then? Creating a world from scratch is hard, slow work.

Kim Stanley Robinson regrets the effect the Mars trilogy had on people like me. At least, that’s the impression I got from Aurora, a tale of the near-impossibility, and hence near-pointlessness, of creating an Earth-like environment outside of Earth. It’s not his fault; the science has changed since the 90s. We now know that Mars has much less nitrogen than we need for growing plants, and the vast amounts of perchlorates on the surface are a serious hazard to humans. These, and other new obstacles, could lengthen the time to terraform Mars from centuries to millennia, or tens of millennia. Perhaps our technology will advance to meet the challenge, but there’s no question the challenge is herculean.

Yet no-one seems dissuaded by this. In fact, I had never even heard of the nitrogen and perchlorates problem until reading Aurora. It’s as if merely asserting that colonising Mars is an imperative for the survival of humanity suddenly makes it possible. What must happen, will happen.

And why is colonising Mars an imperative? Because, in part, of Kim Stanley Robinson’s Mars trilogy.

So Aurora is a corrective. We follow an attempt to colonise a planet orbiting Tau Ceti, light years from Earth. In short, it fails. Everything fails. Not just the colonisation of Tau Ceti, but the very starship that took the colonists there as well. All the beautifully designed miniature Earth-like biomes on the starship fail, because that’s what happens to enclosed ecosystems with a wide variety of flora and fauna, all evolving at different rates.

Our colonists do try, though. An engineer/biologist is positively heroic in her efforts to keep the starship running, a rather unusual note in a science fiction novel (although not, to be fair, in The Martian); and some colonists are so determined to press on with the project in Tau Ceti that they choose to take the one in ten thousand chance of creating a new world. Those are, of course, terrible odds. Only in a certain kind of story do you win that gamble, and this is not that kind of story.

What kind of story is it, then? An anti-space exploration story? Not really. Robinson describes a solar system full of thriving outposts and colonies, all trading with one another, and most crucially, with Earth. He talks about the eventual colonisation of Mars – in a few thousand years’ time. This is not the imagination of someone who wants to smash rockets. In his world, space exploration is exciting, it’s laudable, it’s inevitable, but it’s not a solution to preserving the future of humanity. And while volunteers will line up to take the riskiest of gambles, it’s not so clear that their children and grandchildren, left on a fragile miniature ecosystem too far from Earth, should have to risk their lives as well. No, the future of humanity is best assured by preserving the future of Earth’s ecosystem.

This kind of talk used to sound like sedition to me, spread by shortsighted fools who’d say, “Why explore space when we have problems on Earth?” It still does, sort of. It may not seem like it, but humanity is wealthier than ever, and I still think we can well afford to explore and travel in space, and to Mars.

The problem is, it’s not just on Mars that the facts have changed, with its nitrogen and perchlorates – it’s Earth as well, with its warming air and rising seas and fraying ecosystem. So I don’t feel unjustified in changing my mind as well about our priorities and how we think about the future of humanity, not after reading Aurora.

It’s been almost twenty years since I first opened Red Mars, but I’m still impressionable – at least, by Kim Stanley Robinson.

Do adblockers favour legacy brands?

With the advent of ‘content-blocking’ in iOS 9, I run an adblocker on all my devices* – desktop, laptop, phone, and tablet. Like several hundred million other people, I see next-to-no display adverts on the web. After a few days it becomes so normal to see the online world without ads that it’s a genuine shock when you have to turn your adblocker off.

Assuming that adblocker usage grows and isn’t negated by in-app advertising (e.g. in Facebook and Twitter) or native ads (e.g. in Buzzfeed), who does this favour? Could it be the biggest ‘legacy’ brands, like Procter and Gamble and Heinz, which can afford to run expensive, unblockable ad campaigns during live TV events, along with outdoor display advertising? After all, I can’t run an adblocker on my eyes quite yet, so I still see billboards and posters and store promotions – most of which seem to be for the biggest and oldest brands.

Perhaps their ultimate advantage will be small. Adblocker uptake on mobile devices won’t be significant for a few years, which is plenty of time for big and small companies to find alternatives. Although I’m not sure what the alternative will be when we have heads-up displays that do block ads.

*I offset my guilt about this by spending quite a bit of money on subscriptions and memberships

Sentience Footprint

I’m confident that in a hundred years, eating meat will be regarded in the negative way we now view racism or sexism – an ugly, demeaning, and unnecessary act. Like smoking, it will simply fall out of fashion because we’ll find better and healthier alternatives, although we’ll still occasionally eat humanely reared-and-killed animals. Note that I still eat meat even though I should know better.

The interesting thing about eating meat is that it encapsulates a multitude of sins. You might worry about its impact on your own health; or perhaps on the environment, given the amount of water and land that a cow requires and the methane greenhouse gases it produces; or of course, on the life and suffering of the animal itself.

From an environmental standpoint, we should be eating far fewer cows and far more chickens, since the latter require less energy input to grow for a given calorie, and therefore (all things being roughly equal) produce less of a negative impact. Or we should forget about the chickens and eat sustainably caught-or-farmed fish, which are even more energy efficient and have the smallest carbon footprint.

But what about from a suffering standpoint? You can feed far more people with a single cow than with a single chicken, so if we want to reduce the suffering of animals, maybe we should be eating cows. But are cows more sentient than chickens? I don’t know how you measure that. And maybe the environmental impact of a single cow inflicts more suffering on other sentient beings than that of a chicken.

I feel like I’m taking utilitarianism to a place far beyond its ability to survive. I should probably read more Peter Singer.

A History of the Future, Now Free

Two years ago, A History of the Future in 100 Objects was published. The book describes a hundred slices of the future of everything, spanning politics, technology, art, religion, and entertainment. Some of the objects are described by future historians; others through found materials, short stories, or dialogues.

Today, I’m making all 100 chapters available online, for free.

The book has sold a few thousand copies – reasonably well for a first author. More importantly, it was received well by the people whose opinions I value; I was invited to speak at the Long Now Foundation last summer by Stewart Brand, and it was praised by the BBC’s Stephanie Flanders and by Grantland’s Kevin Nguyen, who called it one of the ‘overlooked books of 2013‘. Next month, I’ll be speaking about the same ideas at the Serpentine Gallery’s Transformation Marathon.

So, at this point I’m much more interested in spreading the ideas far and wide. Of course, you can still buy the book via Amazon or directly from me (it’s very nicely formatted), but I’m just as happy if you read it on the web.

I wrote A History of the Future in 100 Objects because I’ve always been deeply fascinated by what’s coming next. I’m a neuroscientist and experimental psychologist by training, and a games designer and CEO by trade. It’s my job to think up new ideas and ways to improve people’s lives, and perhaps because of that, I’m optimistic – cautiously, skeptically optimistic – about the future.

The future that I want to realise is the hard-fought utopia of Kim Stanley Robinson and Iain Banks and Vernor Vinge, not the dystopia that dominates fiction nowadays. But I’m not naive, and techno-utopianism brings me out in hives, so don’t expect me to tell you that technology will make everything better.

This book is my small contribution to the exploration of the future. It turns out that writing a hundred short stories was far, far more difficult than I had ever imagined, and in truth only some of the chapters hit the mark perfectly. But even so, I think there are plenty of fun ideas there.

200 Years of Change

A game I like to play at history museums is imagining the present-day equivalents of past behaviour and objects. So at The Geffrye Museum of the Home in Hoxton, London, it’s fun to look at their Period Rooms and link up past and present behaviours.


Take the 1935 Living Room; the armchairs are pointed at the fireplace (which obviously would be a TV today), and there’s a record player and radio in the corner (also TV/hifi). Or the 1695 Parlour, in which the woman of the house would spend her day noting down the household receipts on the writing cabinet (i.e. iMac) before joining her husband for dinner and listening to him read out the day’s newspaper (watching Netflix).


Then there’s the 1790 Parlour, with a set of playing cards laid out on the table. Just imagine what present-day families might do when entertaining friends – why, they’d… play cards! Or maybe boardgames. Yes, it turns out that we still all want reasons to talk and gossip in a formalised way, and the things we did 200 years ago are still pretty much exactly the same now.


The Period Rooms go all the way up to 1998.

As you might expect from me, another fun thought experiment is imagining what the Period Room and gallery notes for 2014 would be; probably a room dominated by a big Samsung TV with a Playstation, some bluetooth speakers, Ikea bookshelves, a corner sofa, surround sound speakers, and a coffee table. “Here, the co-habiting couple would gather in the evening to watch ‘television serials’ and ‘YouTube cat videos’, while perusing social media on Twitter and Facebook on their tablet computers.”

The Period Room for 2034, of course, would just be an empty room with a near-invisible projector, an easy chair, and a virtual reality headset.

Eternal Fundraising, Luxuries as Resiliency, Isometric Buildings

Mr. Miller Doesn’t Go to Washington, a bracingly honest story about running for Congress. It just astonishes me quite how much time candidates – and elected politicians – have to spend on fundraising. Hours. A. Day.

I had written before about how crazy it is that we expect politicians to spend four hours a day (or more) on the money chase. But nothing prepares you for what it’s like to be in the candidate’s chair.

First order of business is introducing you to the bizarre rites and rituals associated with reaching out to the 1/20th of 1 percent of Americans who fund campaigns, and I soon learned consultants have studied dialing for dollars with anthropological precision. One consultant’s motto is, “Shorter calls means more calls!”—i.e., more money. So stop all the chitchat. When you make the “ask,” another told me—and that’s typically for the max of $2,600 per person, $5,200 per couple—just say the number and pause: Don’t keep talking. And above all, don’t leave L.A. for an out-of-town fundraiser unless you’re guaranteed to rake in at least $50,000, and preferably 100 large. Anything less and it’s not worth the hassle.

Blessed are the wastrels, for their surplus could save the Earth, a fascinating argument that luxury industries represent a massive pool of ‘unplanned’ resiliency in the face of disasters (as opposed to planned resiliency, which can easily be defunded):

Organic farms are an example [of a less excessive ‘luxury’]: they use their inputs (land, grain, animals) to produce food at higher cost and lower quantity than conventional farming. The advantages of organic food appeal to richer, western consumers. But if the situation were desperate, organic farms could be retooled for mass production of lower-quality but still edible foods. The same goes for factories making super-plasma, hyper-surround cinema-experience televisions (or similar toys for the wealthy). This rich demand maintains a manufacturing base for extreme luxury products, but one that could be repurposed for mass production of less extravagant but more useful products if needed.

Concrete Jungle – Building the Buildings: I had always assumed that the lovely 2D isometric buildings you see in games like SimCity must be the product of superbly trained artists. While I don’t doubt the skill involved, this step-by-step guide on drawing pixel-perfect isometric buildings (using 3D intermediates) is fascinating:


Once everything is arranged pleasingly, it’s time to render. I’m using Blender to generate my renders – it’s completely free and its rendering engine is delicious. The scene I’m using has the render camera set up to render isometrically (is that a word?). What’s outputted is something that looks like this, but bigger.

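For the curious, the ‘isometric’ look that the guide achieves with an orthographic camera boils down to a simple linear mapping from 3D grid coordinates to 2D screen pixels. Here’s a minimal sketch of the classic 2:1 projection used in pixel-art games – my own illustration, not code from the Concrete Jungle guide; the function name and the 32×16 tile footprint are assumptions:

```python
def iso_project(x, y, z, tile_w=32, tile_h=16):
    """Map a 3D grid coordinate to 2D screen pixels (classic 2:1 isometric).

    tile_w and tile_h are the pixel footprint of one floor tile;
    z is height in tile units and simply shifts the point upward.
    """
    screen_x = (x - y) * (tile_w // 2)
    screen_y = (x + y) * (tile_h // 2) - z * tile_h
    return screen_x, screen_y

# Moving one tile along +x shifts the point right and down;
# along +y, left and down - together they trace the diamond grid.
print(iso_project(1, 0, 0))  # (16, 8)
print(iso_project(0, 1, 0))  # (-16, 8)
```

A true orthographic render, as in Blender, does the same thing with a rotation matrix and no perspective divide; the 2:1 ratio is just the rotation angle that keeps pixel edges clean.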