When Surveillance Goes Private: A 2027 Retrospective

I’d like to begin with a story.

I was born in the UK — in Birmingham — although obviously I don’t have the accent! My parents came from Hong Kong, but we didn’t visit it until I was a few years old, since it’s quite the trip for any family.

The approach to the old Hong Kong airport in Kowloon Bay is hair-raising. You descend between skyscrapers, so close that you can practically see inside their windows. We were staying with relatives near the airport, which was fun, if noisy.

Me and my brother did the rounds of our aunts and uncles and grandparents, but eventually it was time for my parents to see their own friends. We were left with our cousins and the world’s greatest collection of pirated Famicom and Sega Megadrive videogames.

Now, these cousins. Their great-aunt Agatha lived with them. As I was told it, she’d travelled the world, sailed the seas, fallen in love with all sorts of people, and made her fortune. Now in her eighties, she was still sharp as a tack, with a photographic memory and a wickedly funny tongue.

Agatha couldn’t easily walk any more, so more often than not, she’d sit in her armchair in the corner, situated just so she could see the whole living room and kitchen and hallway, and watch everyone coming and going. She wanted to know what was going on in the home, but more importantly, she wanted to be useful — and she was.

If you were on your way out but you’d forgotten to pick up your keys, auntie Agatha would remind you (very loudly). If you were looking around for a letter or book you’d misplaced, she’d know precisely where you’d left it. She’d even watch you while you were doing your chores and tell you just which spots you’d forgotten to dust. Her job, as she saw it, was to help the household flourish, and to keep them safe.

I’m sure some of you have figured out where I’m going with this. Almost forty years later, we all have auntie Agathas, watching over us in every room of our homes.

Today, in 2027

Eight out of ten households in the UK and US now have multiple home cameras. It’s one of the most astonishing success stories in the history of technology, with an adoption curve almost as impressive as that of smartphones in the previous decade. But unlike smartphones, we’ve bought many more than one per person.

Worldwide figures

What fuelled the rise of home cameras? Let’s start with the devices themselves.


Why did the home camera revolution only begin in 2018 and not earlier? Fast and cheap internet was an essential condition, allowing owners to monitor their homes on the move and abroad. Another boost came from the ‘smartphone dividend’, which reduced the price of camera components.

But beyond 2018, two technological revolutions fuelled the rise of home cameras: charging and sensors.

Early Home Cameras

Nowadays, it’s hard to believe that almost all home cameras in the mid-teens were wired. These cameras had no batteries and had to be tethered to a power outlet at all times, constraining their placement within homes and generally causing an unsightly mess.

From 2018 to 2023, home cameras adopted batteries lasting one week to one month — a massive improvement over tethering, as they could be mounted anywhere, including outdoors and in bathrooms — but arguably more irritating than wires, as their “low-power” chirping became a frequent sound in many homes.

It wasn’t until the full rollout of resonance charging, or more broadly speaking, ‘charging at a distance’, that cameras truly permeated every room and corner of our homes. Freed from the need to be wired or retrieved every month, and completely weatherproofed, they were stuck in the corners of ceilings, thrown onto roofs, hung on walls, mounted on gates, and balanced precariously on shelves. Provided they remained within range of a resonance station, they could be placed and forgotten for years.

The improvement in the sensor capabilities of home cameras has been even more extraordinary. In 2018, most cameras had a laughably named ‘high-definition’ resolution of 1920 x 1080 — barely enough to distinguish small objects across a room. Matters soon improved with the introduction of ‘High Speed 4K’ sensors, which could examine minute changes in skin blood flow to monitor people’s heart rate and emotional state. Soon after, cameras reached beyond the visible spectrum into infrared and ultraviolet, essential for home security and health applications.

It wasn’t until the introduction of multipath LIDAR in 2024 that the supremacy of cameras in our hearts and homes was assured. Various primitive forms of LIDAR had been present in earlier cameras, as an aid to home VR and augmented reality through precision depth mapping and 3D positioning. Multipath LIDAR, however, multiplied the reach of our cameras by using reflections to see around corners into other rooms; to interpolate new camera angles; and to even see inside objects. It finally provided total awareness of all objects within a home, without the need for excessive numbers of cameras.

In fact, the most advanced multipath systems now pose a threat to the business model of camera manufacturers who’ve emphasised quantity over quality. Now that a single camera can take the place of many, overall camera shipments may soon begin to fall.

Enough about technology — why did people invite cameras into their homes, and what did they use them for? I’ve identified five broad applications, in rough chronological order:

A History of the Future in 100 Objects

Last year, I listened to a programme on Radio 4 called A History of the World in 100 Objects. It took 25 hours, or 1500 minutes.

In the show, the BBC and the British Museum attempted to describe the entire span of human history through 100 objects – from a two-million-year-old Olduvai stone cutting tool, to the Rosetta Stone, to a credit card from the present day. Instead of treating history in a tired, abstract way, the format of the show encouraged real energy and specificity; along with four million other listeners, I was riveted.

After the show ended, I immediately thought, “What are the next 100 objects going to be?”

Which 100 objects would future historians in 2100 use to sum up our century? A vat-grown steak? A Chinese flag from Mars? The first driverless car? Smart drugs that change the way we think? And beyond the science and technology, how would the next century change the way in which we live and work? What will families, countries, companies, religions, and nations look like, decades from now?

I couldn’t stop thinking about it – it was the perfect mix of speculation grounded in science fact and science fiction. So I’m creating a new blog called A History of the Future in 100 Objects. I’m going to try and answer those questions through a series of 100 posts, one for each object. Along the way, I want to create a podcast and a newspaper ‘from the future’, and when I’ve finished, I’ll put it all together as a book.


Before I begin, though, I’m raising money to help pay for the podcast and printing the newspapers and books, and I need your help.

If you visit my Kickstarter page, you can pledge money towards the project in return for all sorts of goodies, including getting copies of the newspaper and books.

(Kickstarter is a very neat way of funding projects through individual pledges. A creator – like me – sets up a project and a target amount, and only if the target is reached does any money get paid. So there’s no risk – if I don’t make the target, then you won’t get charged! Plus they take payments on credit cards from around the world, which is handy and much easier than messing about with PayPal).

I’m really excited about this project – it’s going to be the first book-length piece of writing I’ll have done, and it’s going to combine a lot of my experience from writing about science and technology and thinking about the future. It also touches on a big interest of mine, which is new modes of publishing: I toyed around with pitching the idea to a publisher first, but I want to see how far I can get with the community’s help (that’s you!).

So, if you’re interested in the project, please check out the Kickstarter page and support it – even just a single dollar is really helpful! And if you know anyone who might be interested, please pass the word on.

It’s a brave new world out there – let’s see what’s going to happen…

iPhone 4: The Last Mobile Phone

The iPhone 4 may be the last major advance in mobile phones we’ll ever see. There’ll still be plenty of incremental and useful improvements, but it’s hard to see what kind of attention-grabbing features are left:

  • The Retina screen, at 326 pixels per inch, approaches the limits of human vision; it’s the end of the line for these kinds of displays. The iPhone could do better outdoors, but that doesn’t seem to have been a particularly successful selling point for eInk; and they could go 3D, but I’m not convinced that consumers even want that (not that it’d be too hard anyway).
  • Battery life is now about 10 hours; we’d all be happy if it was longer, but most people have gotten used to recharging their phones every night, so improvements beyond a day or two are not big selling points.
  • Network speed and reliability absolutely could be better, but this is an issue for network operators, not manufacturers like Apple. No doubt when the next super-fast standard (LTE) is widespread, we’ll see a new chip dropped into every phone. So what? It’s still the same phone.
  • At last, the iPhone has two cameras, and the main camera performs very nicely, with good 5MP photos (I always laugh when I hear people boasting about their 8MP mobile phones, given that their photos always end up on Facebook) and 720p HD video. I don’t see many people clamouring for 1080p video.
  • GPS, digital compass and a gyroscope are all built-in now – what more do you need? Unless the Galileo navigation satellites offer significant improvements over GPS for consumer applications, I think we’re at the end of the line here as well.
  • While it’d be nice if you could roll up the iPhone, or it was as thin as a credit card, we’re approaching the point of diminishing returns here. It’s not as if it’s busting anyone’s pockets any more.
  • Speed: Again, diminishing returns – it takes me only a few seconds to load up most apps. With iOS 4’s ‘multitasking’, switching between commonly-used apps is almost instant. Having said that, I’m sure we’ll see dual-core processors at some point and people will get all excited about the battery life improvements and apps taking only two seconds to load instead of three.
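For what it’s worth, the headline pixel density is easy to sanity-check from the resolution and the diagonal. A back-of-envelope sketch (assuming the nominal 3.5-inch diagonal; Apple’s quoted 326 ppi implies a panel fractionally larger than that):

```python
import math

# The iPhone 4 display: 960 x 640 pixels across a ~3.5-inch diagonal.
diagonal_pixels = math.hypot(960, 640)  # pixels corner to corner, ~1154
ppi = diagonal_pixels / 3.5
print(round(ppi))  # ~330, in line with Apple's 326 ppi claim
```

The usual rule of thumb is that around 300 ppi at a typical 10–12 inch viewing distance is the limit of normal visual acuity, which is presumably where the ‘Retina’ branding comes from.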

You can quibble about the details – maybe I’m wrong about battery life or processor speed – but I don’t see any major technological advances over the horizon that Apple – or any other company – can use as a killer feature. The iPhone 5 and iPhone 6 will be faster, longer-lasting, thinner and lighter, but they’ll still be basically the same. They won’t have three cameras, or ultra-HD video, or a 6″ screen, or a month-long battery. There won’t be much at all that distinguishes the iPhone from the top-of-the-range Android phones*, which will be quick to catch up; and more importantly, there won’t be much that distinguishes the iPhone 6 from the iPhone 7 (certainly not the screen, unless it goes 3D).

In other words, we’ve reached the ultimate destination of hand-held communication devices with displays – that is, mobile phones. It’s not going to get any better than this.

*Other than iOS, of course, which will continue to improve, along with the apps. But how important will the new hardware be for achieving this?

What’s Next?

With the mobile phone reaching a plateau, Apple will have to look elsewhere in order to make the advances in user experience that consumers will pay enormous amounts of cash for. The iPad is one area, the Apple TV is another. But what of personal communication devices?

Apple patent application for 3D viewing glasses

There are some promising candidates, such as subvocalisation tech and the long-awaited augmented reality glasses (which Apple has been researching since at least 2008). Both would promise major improvements in communication, work, and entertainment, but I have yet to see good demonstrations of either tech in practice.


Without good demonstrations, it’s unlikely that either is ready for market – remember that the iPod was far from being the first personal MP3 player, and the iPhone was certainly not the first smartphone. Perhaps Apple has some ultra-secret tech up its sleeve, with Foxconn factories just waiting to spin into action – but that’s just a fantasy. Just look how difficult it is to make an iPad, let alone laser-based computer glasses.

It looks like we’re going to spend a few years in limbo between mobile phones and whatever comes next. Good job Apple has the iPad to tide it over.

The Long Decline of Reading

“It doesn’t matter how good or bad the product is, the fact is that people don’t read anymore. Forty percent of the people in the U.S. read one book or less last year. The whole conception is flawed at the top because people don’t read anymore.”

– Steve Jobs on eBook readers and the Amazon Kindle

Steve Jobs frequently makes disparaging remarks about markets that Apple later enters (MP3 players, mobile phones, games, etc.), so there’s little reason to believe that we won’t all have ‘iBooks’ in three years’ time. Still, the numbers don’t lie – 40% of people in the US (and 34% in the UK) do not read books any more. They may surf the web, or read the occasional newspaper, but they do not read more than one book (fiction or non-fiction) in a year.

The closer you look at the statistics, the more depressing it gets. In the US, only 47% of adults read a work of literature – and I don’t mean Shakespeare, I mean any novel, short story, play or poem – in 2006. If that doesn’t sound too bad, consider that it’s declined by 7% in only ten years. It doesn’t matter whether you look at men or women, kids, teenagers, young adults or the middle-aged; everyone is reading less literature, and fewer books.*

When I share this ray of sunshine, I encounter three different reactions, the first being acceptance: “Oh well, that’s too bad! What’s for dinner?” But it’s not just bad, it’s awful. Reading skills for all levels of educational attainment are declining, up to and including people with Masters and PhDs. Reading is strongly correlated with all sorts of good things, such as voting, volunteering, civic responsibility, and even exercise. Furthermore, reading skill at a young age is a very good predictor of future educational success and earnings. Correlation is not causation, but it’s a fact that employers are demanding people with better reading and writing skills.

* I suppose there is one piece of good news, in that those aged over 75 are reading slightly more than they used to…

The second is denial: “Are you really sure these statistics are accurate? And even if they are, most people never read books in the first place.” The statistics are as accurate as any that can be found. Most of the numbers quoted here are from the 2007 National Endowment for the Arts report To Read or Not To Read, which conducted its own surveys and collated others from the US government and universities; and all with large sample sizes. I’ve quoted from sections of the report here, but the whole thing is well worth reading.

In case the non-Americans think that none of this applies to them, and that they can stop reading now: the same trends are at work in their own countries. Where America goes culturally and technologically, the rest of the world tends to follow. I haven’t been able to find equally good statistics for the UK (and I have looked), although those at the Literacy Trust are not cause for celebration.

I am not talking about basic literacy here, which has been steadily rising for the last few centuries and has effectively reached 100% in most developed countries and many others besides. Basic literacy does not show any signs of slipping, but we are in dire straits if that’s the best we can do. It is true that book reading has never been anywhere close to universal, but it is also true that book reading, and the reading of literature, is gradually declining across all age ranges.

Finally, the third is defensive: “So what? People are reading more than ever on the web!” I am not aware of any research showing how much people – young people in particular – read on the web; it’s notoriously hard to measure, since the nature of the technology changes very quickly. In any case, I suspect that the total volume of words that people read on the web is really quite high, perhaps higher than what they would have otherwise read in books.

If we were only worried about the number of words people read, then we could take heart from a couple of game designers I met at a reading event. One said that his mobile phone game had 30,000 words in it. The other informed the audience that his quiz game not only required reading because the questions were written out – rather than spoken – but it actually had a traditional three-act structure (just like real literature) because it had a beginning, middle, and end. I could go on, but I think you get the idea: reading is not only about quantity, it is about quality and complexity. Reading 100 tabloid articles is not the same as reading ten essays or a single book.

The situation is undeniably bad. What’s going to happen next?

Meeting Room Yield Management

Six to Start is based in a large building containing dozens of managed and serviced offices. On the way to the shared kitchen at work, I noticed two empty meeting rooms. It occurred to me that, just like an empty seat on a plane, an empty meeting room is lost cash. Sure, there is a small cost to keeping the room clean and well-maintained, but the standard fees for meeting room use provide an enormous profit margin. Given that most of the cost of the room – building it and buying furniture – has already been paid, surely it would be wise to keep it in use as much as possible, even at a lower per-hour fee, in order to maximise profit?

I suspect that most building managers don’t bother doing this for one main reason – it would take too much work. To avoid losing money either through oversupply (meeting rooms sitting unused that could’ve been offices) or undersupply (fees lost when all the rooms are full), buildings usually settle on a fixed ratio of meeting rooms to offices.

Obviously the calculation isn’t perfect. Most rooms will be empty most of the time, and occasionally all the rooms will be full. To still make money, managers will set fees at a rate that will – over time – cover costs, even when the room is empty.

This is incredibly inefficient – as inefficient as an airline setting a single price for tickets within a class, and then letting the plane fly with any seats empty. In 1985, American Airlines began a yield management program in which otherwise empty seats were sold cheaply. Nowadays, we all know that there are certain days where tickets cost much more, and that we can also snap up bargains if we wait until the very last minute.

So, why not perform yield management on meeting rooms? Set up a simple tracking system for usage of all meeting rooms in a building and dynamically set prices based on both historic and live demand. Bump up the prices for rooms at peak times (late morning, early afternoon) and for those reserving in advance for important meetings, and reduce them for slower times (evening, weekends). Allow non-time sensitive customers to check prices so that they can snap up a bargain if the room is empty for an impromptu brainstorm.
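To make that concrete, here’s a toy sketch of such a pricing rule in Python. Every number in it – the weightings, the thresholds, the four-week history window – is invented for illustration:

```python
def room_price(base_rate, bookings_last_28_slots, hours_until_slot, occupancy_now):
    """Toy dynamic price for one meeting-room slot.

    base_rate: the standard hourly fee
    bookings_last_28_slots: how many of the last 28 equivalent slots were booked
    hours_until_slot: lead time; last-minute empty rooms get discounted
    occupancy_now: fraction of the building's rooms currently booked
    """
    historic_demand = bookings_last_28_slots / 28      # 0.0 .. 1.0
    price = base_rate * (0.5 + historic_demand)        # popular slots cost more
    if occupancy_now > 0.8:                            # live demand surge
        price *= 1.25
    if hours_until_slot < 2 and occupancy_now < 0.5:   # fire-sale an empty room
        price *= 0.5
    return round(price, 2)

# A peak slot in a busy building vs. the same slot going begging an hour before:
peak = room_price(50, 28, 48, 0.9)     # 93.75
bargain = room_price(50, 28, 1, 0.3)   # 37.5
```

The point isn’t this particular formula – real yield management systems forecast demand per slot – but that all the inputs (historic bookings, live occupancy, lead time) are trivially cheap for a building to track.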

The main reason I’m interested in this is not because Six to Start needs to use meeting rooms a lot, or because I see this as a brilliant business opportunity (then again, who knows…); it’s because my thoughts have lately turned to organising events like Barcamps and miniconferences. These sorts of events are relatively easy to set up, but you do still need a reasonably large amount of space, which can be tricky to find. I remember standing on the roof garden during GameCamp (kindly hosted by Sony 3Rooms by Brick Lane) and looking out at the large office buildings nearby, thinking of the dozens if not hundreds of meeting rooms that were going empty at that very moment. Rooms that could be used – and paid for – by any number of interest groups, clubs, conferences and reading groups.

If buildings plugged their data into a central website (say, RentAMeetingRoom.com) which aggregated and displayed all meeting room availability and prices in a city, you could make the system much more efficient. Perhaps in time you would even have people buying meeting room futures, or suchlike.

There must be any number of physical resources like airplane seats in which:

  • There is a fixed amount of resources available for sale.
  • The resources sold are perishable. This means that there is a time limit to selling the resources, after which they cease to be of value.
  • Different customers are willing to pay a different price for using the same amount of resources.

where yield management isn’t being used because the prices don’t yet justify the overhead (a flight, after all, costs far more than a meeting room). But as the price of the software comes down and administering the use of the resources becomes more streamlined, I think we’ll see yield management applied to all sorts of weird things: cars, bicycles, rarely-used power tools, pianos, gardens and so on. What a glorious future we have ahead of us!

Tip of the Tongue

A phenomenon well known to psychologists – and pretty much everyone else – is called ‘tip of the tongue’, and it’s described in this American Scientist article:

When we have something to say, we first retrieve the correct words from memory, then execute the steps for producing the word. When these cognitive processes don’t mesh smoothly, conversation stops.

Suppose you meet someone at a party. A coworker walks up, you turn to introduce your new acquaintance and suddenly you can’t remember your colleague’s name! My hunch is that almost all readers are nodding their heads, remembering a time that a similar event happened to them. These experiences are called tip-of-the-tongue (or TOT) states. A TOT state is a word-finding problem, a temporary and often frustrating inability to retrieve a known word at a given moment. TOT states are universal, occurring in many languages and at all ages.

The article goes on to explain that tip-of-the-tongue may be caused by weak connections between words and their phonology (their sound) in our brain; the weaker they are, the more likely it is that you will know a word, but you just can’t recall how to say it.

There’s also a general theory of memory which holds that we retrieve memories through their connections to other memories – the stronger the connections, the easier the recall. You can imagine a cascading chain of memories of a moment years ago, set off by a particular smell or piece of music from that day; or revising for an exam for months and months, baking those connections in.

What’s interesting is that these connections are now being externalised from our brain, and supplemented by computers and the internet. Here’s what I mean: earlier today, I needed to recall the name of someone who’d won a prize. I couldn’t remember what the prize was, what it was for, or even when this happened. I did, however, know that it would be in an email, and the email would contain the word ‘Jeremy’. So I did a search in my mail for ‘Jeremy’, and a quick scan of the search results later revealed the email.

I don’t relate this to show that I am some sort of search ace; far from it. Plenty of people use searches in their mail, their RSS feeds, their computers, or even the entire web, to supplement things that they already know but just can’t retrieve. These days, the searches are fast enough, and the information kept in databases broad enough, that this practice of laying down virtual connections is accelerating.

I expect that as we store increasing amounts of important information on computers, and we continually improve the speed and accessibility of searches (through, say, silent messaging), we will find it ever more difficult to see where our memory and recall processes end, and where those of our computers begin. We’ll be able to remember far more, far faster – and if we’re ever disconnected from our computers, it’ll be even more painful.

Puzzle Quest, and the USA alone

Unfortunately I’m going to have to disappoint you – I’m not actually going to write a review of Puzzle Quest here; there are plenty of good ones already out there. The one thing I will say is that the game ended far earlier than I imagined – it comes with a large, scrollable world map, and when I reached the final mission, at least half of it was unexplored. I was quite relieved though, as I’d already spent a good dozen hours playing it and was getting worried at the amount of time I was wasting (and yes, I call it wasting, because even though playing Bejeweled is sometimes fun, there are more interesting ways to have fun).

Up until the final mission, I’d sailed through the game, having discovered a strategy that would reliably defeat all opponents except in the unluckiest of games (wear the Firewalker’s Staff, then cast Hand of Power twice, then Fireball on the densest collection of skulls you can find, in case you were interested). I assumed that the final mission would be tricky and require a few tries, but I’ve found it so overwhelmingly difficult that I’ve just given up. Your opponent in the mission, Lord Bane, frankly has spells so powerful that they break the game; the only way to beat him is to be extremely lucky. On one try, I almost succeeded, but even then I knew that it was a complete fluke. A disappointing end to an otherwise entertaining and impressively addictive game.

(Incidentally, I don’t think that the computer cheats in Puzzle Quest – I often had incredibly good luck in battles. But I do consider the setup of the final battle to be cheating.)

On a completely different note, there’s an interesting discussion going on at the Apolyton forums. What would happen if:

…in the blink of an eye the United States of America as it exists right now is placed on a imaginary Earth where humans have been extinct since the late stone age. To the Americans it seems like every country in the world has instantly reverted to a pristine natural state without any infrastructure or population and with undepleted resources. They have no instant explanation, but assume that with a few months of research they could realize they were dropped off on a alternative Earth.

Of course, this is a completely fantastical scenario, but it’s educational to speculate on because it reveals a lot of assumptions about America’s economy, military, politics, religion and ethnic groups. What would the military-industrial complex do without any enemies to fight? Would religious groups go off to found new colonies? Would expatriates in the US want to re-establish their home countries? Could America retain high technology (e.g. computer chips) without its factories in Asia? Does America grow enough food for itself, or will it suffer from a lack of imports? And if the US can’t rely on cheap labour in Asia to produce its goods, where will it turn?


The acronym TTS is well known among those who develop call centre software, GPS car navigation devices and software for the blind. It means ‘Text To Speech’, and is more commonly known as voice synthesis, such as the conversion of written text (e.g. ‘Take the first turn on the left into Coronation Street’) into a computer-generated voice.

STT would therefore mean ‘Speech To Text’, and is usually called voice recognition. Voice recognition has been around for many years now, and is used in a simple form in call centres (‘Which size of pizza would you like – small, medium or large?’) and in a more sophisticated form in dictation software. Converting speech into text is, unsurprisingly, very difficult and quite computationally intensive.

The reason it shouldn’t be surprising is that we … don’t … speak … with … spaces … in … between … words, weactuallyspeakinacontinuousflow. Working out where one word begins and another ends is tricky enough, but there are even more difficult problems. Take accents, for example: I’m a native English speaker and I still find it difficult to follow what some die-hard Scousers say. Or take the inconvenient fact that many words share the same pronunciation: way, whey, weigh (these are known as homophones).
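To make the word-boundary problem concrete, here’s a minimal dictionary-based segmenter over a spaceless string, using dynamic programming. It’s only a sketch – the vocabulary is a toy, and real recognisers work on audio rather than text – but the ambiguity it wrestles with is the same:

```python
def segment(text, vocab):
    # best[i] holds some valid segmentation of text[:i], or None if there isn't one
    best = [None] * (len(text) + 1)
    best[0] = []
    for i in range(1, len(text) + 1):
        for j in range(i):
            if best[j] is not None and text[j:i] in vocab:
                best[i] = best[j] + [text[j:i]]
                break
    return best[len(text)]

vocab = {"we", "actually", "speak", "in", "a", "continuous", "flow"}
segment("weactuallyspeakinacontinuousflow", vocab)
# → ['we', 'actually', 'speak', 'in', 'a', 'continuous', 'flow']
```

With a realistic vocabulary there are usually many valid splits (‘anicecream’ could be ‘an ice cream’ or ‘a nice cream’), which is exactly why recognisers lean on context to pick the likely one.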

There are various clever tricks that programmers have used to make voice recognition possible, such as getting users to train dictation software to their accent, and using context to decide which words were most likely said. But it’s still tricky, and it’s not quite there yet.

Of course, I’m an unabashed optimist about technology, and I’m as interested in its societal effects as in the way it actually works. So let’s imagine it’s 2017 and voice recognition is not only extremely good, but extremely widespread. Your mobile phone can transcribe all your phone calls, and some techies even have jewellery that will transcribe everything within earshot.

What does this do? It dramatically reduces the portion of our life that is not digitised and searchable. Already, we can refer to emails, text messages, photos (pretty much all of which are now digital) and instant messages whenever we want to check what someone said or did. As with all technology, this has its upsides and downsides. I frequently search through old correspondence to find out someone’s favourite music or where an old friend works now. But it does mean that even private conversations online could eventually become public, and so I have to watch what I say. This is most apparent when it comes to legal proceedings – it’s perfectly possible for someone to demand to see your emails or IMs if you’re involved in a suit, and deleting them all can look very suspicious.
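Once conversations become text, ‘searchable’ just means an index from words to the conversations that contain them – the same trick mail search uses. A minimal sketch (the transcripts and their ids are invented):

```python
from collections import defaultdict

def build_index(transcripts):
    # Map each normalised word to the set of transcript ids that contain it.
    index = defaultdict(set)
    for doc_id, text in transcripts.items():
        for word in text.lower().split():
            index[word.strip(".,!?")].add(doc_id)
    return index

transcripts = {
    "call-001": "Jeremy won the prize last March.",
    "call-002": "Lunch with Sam on Friday?",
}
index = build_index(transcripts)
index["jeremy"]  # → {'call-001'}
```

From there, full-text search over your entire spoken life is a single lookup.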

With some effort, though, you can remain fairly discreet online. I’m not convinced you can do the same when it comes to talking out loud. If people begin transcribing all their conversations, all the time, it’ll be impossible to not slip up. I suppose you could go ‘off-the-record’ and turn it off, but who would know if you really did? After all, why not transcribe everything? Imagine how useful it would be during meetings – notetaking would be vastly simplified (although not eliminated). Imagine how tempting it would be to try and look at the conversations your friends had about you. Imagine how people might post conversations directly to Facebook. Live.

And for more mundane purposes, imagine having every spoken word on radio and TV transcribed. While I enjoy listening to some podcasts, there are only a few occasions (gym, coach, planes) where I can do that; I’d rather just read transcripts most of the time. This would suddenly free up vast amounts of high quality material, and Radio 4 would suddenly become one of the web’s most popular destinations.

This is not science fiction. It could be done quite easily now – I could wear a small lapel microphone, connect it to my iPod and set it recording all day. When I get home, I could upload the recording to my computer and run it through a voice recognition program. It’d pick up what I said pretty reliably, and it’d probably get a reasonable percentage of what other people said. There are probably some people who already do this.

In a couple of years, I can imagine this process happening even more smoothly, where the recording is automatically synchronised with my computer and uploaded to Google’s servers, which crunch through it with the power of a million PCs and return it to me, a few minutes later, with 99.9% reliability, with each speaker identified and each conversation handily logged and cross-referenced in Google Mail.

It’s coming. It’s not that difficult. The question is, how will we deal with it?

The Death of Publishers

Update: Virginie Clayssen has done a wonderful French translation of this post on her weblog teXtes

Adrian Buys an eBook Reader

A couple of weeks ago, I idly visited mobileread.com and discovered something incredible – Tiger Direct in the US were selling Sony eReaders for $100, a discount of $250. Thanks to the rampaging power of the British pound, that’s less than £50. I’d always been interested in getting an eBook reader, so this was a brilliant opportunity to try one on the cheap.

A few frantic instant messages to US friends, and it was ordered. A lot of people at Mobileread were worried the price was a mistake, but we later discovered that it was an experiment by Sony, presumably to see how fast 1000 units would sell. Answer: less than half a day, and that’s only because it began when the US was asleep (amusingly, many of the units consequently went to Europeans).

eBooks and the Future of Book Publishing

The impending arrival of my eReader has had me thinking, once again, about the future of the book publishing industry. Like most of the other early adopters, I intend to load my eBook up with a few hundred out-of-copyright classics from Project Gutenberg; all of Dickens, Austen, Bronte, Conan Doyle, Shakespeare and others would be a fine start (and there goes the classics market!).

What about more recent books that are still under copyright? Well, you can buy novels and short stories from places like the Sony Connect Store and smaller operations like Fictionwise. Unsurprisingly, these books have DRM (like most songs from the iTunes Store) and this can pose a problem for early adopters with eBook readers that aren’t compatible. Also, the prices of the eBooks are startlingly uncompetitive with traditional retailers: it’s almost always possible to buy physical copies cheaper from Amazon or its Marketplace sellers.

All of this means that eBook readers are left with only one advantage over physical books – the ability to carry hundreds of books in the space of an average hardback. That’s still pretty good, but it’s not worth $350.

But what if you could get copyrighted books for free? Now that would change things. Already, there’s a small but growing number of ‘ripped’ books floating around the web and on torrent sites. They’re mostly expensive textbooks or bestsellers; all of the Harry Potter novels are online, of course (that’s where I read the first two) and it’s well known that the final novel was ripped before it went on sale. Since people tend to read pirated books on their computers, which is uncomfortable, it’s not surprising that there’s a relatively limited number of ripped books so far. This will quickly change with the advent of good and affordable eBook readers.

Ripping Books and Swapping Them

Ripping a physical book is not as easy as ripping a TV show or CD. Ripping a CD into MP3s is a one-click operation, and recording a TV show is not much more difficult for those who are experienced. Physical books, however, require either transcription by hand, which is tedious (but an interestingly parallelisable task), or a scanner with autofeed (you slice off the spine, then run the pages through a scanner and OCR them). The results aren’t as good as music or videos, since errors creep in and you can lose the formatting, but it’s usually good enough.

So, for the moment, ripping books isn’t quite the industrial, casual operation that ripping music or video is – but it’s getting easier every day. I imagine enterprising rippers will buy eBooks online, take screenshots of all the pages and then OCR them – or simply crack the encryption. These rippers need not even be breaking the law by doing this – last year, Australia made it legal for people to carry out ‘format shifting’, in recognition of the fact that everyone was ripping their CDs into MP3s anyway. The law doesn’t just let you shift music between different formats – it’s also for photographs, videos, magazines – and books. In other words, if someone in Australia buys a book, they are perfectly entitled to rip it and create an unencrypted copy. Should that copy somehow find its way onto the Internet, well…

It could reach everyone in the world. It only has to be done once.

Ripped books do have one huge advantage over MP3s and videos; they are tiny. An uncompressed novel takes up about 100kb in plain text; even with formatting, you could compress it down to around 50kb. That means that a hundred novels would be 5MB – a wholly unremarkable size that could be emailed between friends easily. Ten thousand novels – say, the last 20 years of books worth reading – would take up 500MB. That’s about the same size as a ripped TV show that millions of people around the world routinely download every week.
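The arithmetic above is easy to check. Here’s a quick back-of-envelope sketch; the per-novel figures are the post’s own estimates, not measurements:

```python
# The post's estimates: ~100 KB of uncompressed plain text per novel,
# ~50 KB once compressed (even keeping formatting).
KB_PER_NOVEL_ZIPPED = 50

mb_100_novels = 100 * KB_PER_NOVEL_ZIPPED / 1000       # 100 novels
mb_10000_novels = 10_000 * KB_PER_NOVEL_ZIPPED / 1000  # 10,000 novels

print(mb_100_novels)    # → 5.0 (MB: easily emailed between friends)
print(mb_10000_novels)  # → 500.0 (MB: about one ripped TV episode)
```

Even at ten thousand novels, the whole archive is a single evening’s download – which is the crux of the argument.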

The point is that text is trivially easy to send around the internet. We do it every day when we surf the web. When you couple that reality with affordable eBook readers, you have a serious problem for publishers.

Arup’s Key Speech

Lately, I’ve been thinking about the values that companies hold, and how they influence what they do. Many companies have mission statements or tenets or core values; some of them adhere to those values, some ignore them, and some can literally be defined by them. But are they actually helpful, and how do you arrive at them?

An article in this week’s New Yorker about Arup showed how they handle it. I’d seen the name ‘Arup’ in many places over the last few years, in association with some very interesting major building and construction projects, from the Dongtan ecocity in China to the Marsyas art exhibit in the Tate Modern. They seemed a rather inscrutable company – while people like Norman Foster seem to be in the papers every day, I never saw anything about Arup.

Well, Arup are structural engineers who have branches all over the world. They work on a lot of interesting projects, obviously, but what really caught my interest from this article was what it said about the company’s philosophy.

Ove [the founder of Arup] died in 1988, at the age of ninety-two, but he is still a presence. A talk that he gave to his partners in 1970 is referred to at the firm as “the key speech” and is required reading for all new employees. In it, Ove explores themes that constitute a sort of mission statement: the importance of working noncompetitively with colleagues, of engaging in interesting, useful, and morally responsible work, and of pursuing “total architecture,” in which structural, aesthetic, human, and environmental considerations are treated as parts of a whole. In a related lecture, Ove said, “By creating a model fraternity, so to speak, we make a contribution to what is almost the central problem of our time: how to overcome the social friction and strife which threatens to overwhelm mankind. We could become a small-scale experiment in how to live and work happily together.”

Today, one conspicuous manifestation of the fraternal approach is the company’s internal computer network, known as Ovanet. Among its features is a large collection of technical forums, covering most of the firm’s many specialities and subspecialities. An engineer in the Arup office in Brunei Darussalam, say, can post a question in the appropriate Ovanet forum about the bending moment of a particular loaded beam, and be reasonably certain that, overnight, the problem will be taken up sequentially by colleagues in Arup offices around the world.

The firm has grown substantially since Ove’s death, but it has done so in the manner he prescribed, by expanding horizontally into related fields, and by following the passions of the engineers, who are encouraged to create absorbing projects for themselves. Arup is a privately held trust, operated for the benefit of its employees, and its leaders don’t brood about short-term financial results. (The firm had revenues of 826 million dollars in 2006, and profits of 61 million) […]

Ove’s concept of the “model fraternity” is really an engineering scheme – a way of routing gravity through a professional organisation, and through a life. The Arup co-operative model is less a business plan than a human structural paradigm; it’s a reciprocal network, in which the load paths are mutually supporting, and it’s the true basis of Ove’s “small-scale experiment in how to live and work happily together.” Because of the nature of their work, Balmond and his Arup colleagues have been able to achieve something professionally that no single architect, however distinguished, could ever come close to: they have helped design a significant sampling of the greatest buildings of their time.
