One Thousand Days Later

(or Fast Times at Cambridge High)

The predictive ability of science fiction is very much a hit-and-miss affair. For every astounding success by the Clarkes and the Gibsons of the world, there are a hundred others that miss completely; after all, we aren’t flying to work in helicopters or living on Mars (yet). As a result, it’s difficult to tease out the likely predictions from the unlikely ones before the fact – but it’s always fun to try.

Vernor Vinge is one of the better and more successful science fiction writers working right now, and the best known of his predictions is probably that of online pseudonyms in his novella, True Names. In his most recent story, Fast Times at Fairmont High, Vinge depicts a world barrelling towards a head-on collision with the Singularity, where schoolkids can learn skills in three days that would take us three years, and the Internet permeates every part of life. As is usual with Vinge, it’s a great story, and it also has some truly original ideas. [1]

Fast Times

For almost two months, the world had haunted Brazilian towns and Brazil-oriented websites, building up the evidence for their ‘Invasion from the Cretaceous’. The echoes of that were still floating around, a secondary reality that absorbed the creative attention of millions. Over the last twenty years, the worldwide net had come to be a midden of bogus sites and recursive fraudulence. Until the copyrights ran out, and often for years afterwards, a movie’s on-line presence would grow and grow, becoming more elaborate and consistent than serious databases. Telling truth from fantasy was often the hardest thing about using the web. The standard joke was that if real ‘space monsters’ should ever visit Earth, they would take one look at the nightmares.

In Fast Times at Fairmont High, movies aren’t movies as we understand them – they aren’t just unidirectional, non-interactive pieces of entertainment that last for 120 minutes. Instead, movie studios create alternate realities that are seeded in our own world. They foster these creations and encourage the ‘audience’ to participate and create the story themselves, through the use of careful set-pieces and supplementary material. [2]

This is a more important idea than you might first think. These movies would create the nucleus of ideas that the fertile imaginations of the audience can tap into and expand, thus solving the age-old problem of content being hard and expensive to develop. We can already see the primitive stages of this happening right now, with the huge proliferation of fan fiction on the Internet as well as massively multiplayer online role playing games, but to encourage fan fiction deliberately as the main goal of the movie is completely novel. And in this case, desirable. Movie studios tend to view fan fiction as detrimental, diluting their intellectual property. On the contrary, fan fiction can be used to increase the popularity and longevity of a story, increasing mindshare and thus making money off supplementary material that is suddenly very popular (e.g. related games, non-interactive movies, etc).

“They’ve actually started the initial sequence. You know, what will attract hardcore early participants. The last few weeks there have been little environment changes in the park, unusual animal movements.”

Likewise, we know that early adopters/hardcore players/trailblazers are often the ones who first discover newly created ideas and concepts, and develop them. Witness the fanatical player communities that grow up around new games – and often games that haven’t even been released.

This will be the ultimate destination for massively multiplayer games – games essentially without rules, created in real-time by the interactions of the players, and subtly guided by an outside organisation. The game as life, in other words.

So, I’d like to pursue two threads for the rest of this essay. Firstly, what technologies can we expect to help us get to the scenario depicted in Fast Times, and secondly, how will immersive fiction games evolve to reach that point. I’m going to look at both threads in the perspective of where we might be in a thousand days – just under three years.

I’m only going to consider the future of immersive fiction games in this essay. Doubtless the gaming genres we have now, such as first person shooters, puzzle games and simulation games, will live on with increasing complexity, finesse and graphical polish. However, I find the development of immersive fiction games of most interest, since the emerging technologies will affect it most, and simply because it’s a completely new genre.


The increasing penetration of the Internet into our everyday lives will be the driving force behind almost all of the innovations made in the immersive fiction genre. Raw computing power and bandwidth will be eclipsed in importance by the requirement for well constructed and organised paths of communication between players. Until we have AIs capable of creating and altering narratives, which will not happen within the next thousand days, the overriding concern of immersive fiction developers will be to make interaction between individual players and the game as easy and transparent as possible.

It’s hard to miss the general trend towards smaller and more portable devices that give you access to computing power and the Internet. Though at times it may seem that the introduction of affordable high-speed mobile Internet devices (3G mobile phones being the main player) is agonizingly slow, it’s worth noting that the majority of the United States and Western Europe have only had broadband Internet access for the last two years or so. 3G networks are set to become active across the world this year, and so even if we assume that uptake of high speed mobile Internet will actually be slower than that of fixed broadband, then in 1000 days we can still expect that there will be over 300 million worldwide subscribers. [3]

Coupled with GPS or other positioning systems, 3G phones and other mobile Internet devices will allow entire countries to become game ‘playgrounds’. Not that the potential isn’t already with us right now; geocaching could easily be turned into a large aspect of an immersive fiction game, and it doesn’t require any mobile Internet access. The difference, though, is that without some form of mobile Internet, any players leaving their home computers are effectively cut off from each other and cannot co-operate or interact with other players as easily. The ability to access the Internet on the move will widen the possibilities – for example, the spontaneous mass gathering of players (‘smart mobs’), interactive treasure hunts, etc.

A handy benefit of this will be the fostering of competition between different player groups within games (be it via geography or otherwise). In past immersive fiction games, players just didn’t split up into groups that competed in any real way; this is a problem because group competition often makes things more fun and allows the groups to entertain each other, saving the game developers some time. The simple reason is that there’s no incentive or basis for forming different groups; if a game is conducted entirely on the Internet with no differentiation within the game (e.g. separate game threads) and thus no geographic differentiation, how can you expect the players to split up?

However, once you put them into situations where they are interacting with a certain set of players for more time, they’ll naturally coalesce.

The nature of the competition can be determined by the game developers; anything from co-operating on major goals while competing to be the ‘best’ group at solving individual ones, to complete paranoid out-and-out competition, is possible. It should add something to the experience.

Some possible requirements for ‘world playground’ games may include pinpointing player locations, which poses some obvious technical and privacy problems. Also desirable would be methods for game developers to peer in on player communications; this might be accomplished by inserting stooges into player groups or offering to host web forums for players.

Other problems will be created by the freedom given to players if they engage in world playground games; they may be more liable to deviate from planned activities and storylines. To counter this, game developers will have to both allow for greater flexibility in the game plan, and also limit the effect they are prepared to allow the players on the storyline.


Current and past immersive fiction games have suffered from a poverty of high-quality content, organisation and funding. Furthermore, it has never become clear to the media (and thus the greater public) what immersive fiction games are. The positive publicity and momentum generated by the Microsoft AI game was quickly squandered by EA’s Majestic, and the failure of any similarly large organisation to field a successful immersive fiction game (ABC’s Push Nevada, BBC’s The Spooks) has further compounded the negative reputation the genre has gained.

As a result, the genre is stagnating. It will be difficult to raise funding for a serious effort at a new immersive fiction game; however, investors may be convinced if the game is significantly different from past games.

The immersive fiction genre is at an interesting point now. It’s not sink-or-swim, but I suspect there are still some companies out there willing to give immersive fiction one last, big shot. However, if these companies fail, the genre will probably go into hibernation for a fair few years, and we’ll only see amateur games.

Let’s talk about the next, and possibly last, batch of ‘big’ games. These will be released within the next twelve months or so, and there won’t be many of them. They’ll have funding on the level of that provided to EA’s Majestic; i.e. seven figures, and they will employ industry gaming veterans. Expect the Microsoft AI team to be snapped up pretty damn quickly, if they haven’t already gone. We’ll call these games the first generation.

High quality levels of production will feature in all of these games. It’s very unlikely that they will require paid subscriptions, instead being funded by some sort of sponsorship or advertising. If they do require players to pay, it will be in some qualitatively different way than what we’re used to. All evidence up to this point (Majestic, TerraQuest) indicates that players are not prepared to pay for immersive fiction games, as it stands now.

Mobile Internet gaming will form a large part of their pitch, although to be honest, it just isn’t feasible for a first generation game coming out in 2003, unless it happened in Japan. But they’ll experiment with the early adopters, and they’ll probably have a few media-worthy successes. The general format of the game will not deviate hugely from the Microsoft AI game; it’s the only format that’s worked so far. In other words, the player community will remain homogeneous and the play will be centred mostly online.

The second generation of games will include cross-promotion and may interact with other forms of media, e.g. television. The mobile Internet will play a larger role, moving a significant proportion of the play ‘offline’ (insofar as it is away from a fixed computer. Note to self: I need a new term for this). Game developers will start to experiment with dividing up the player community and creating separate plot strands. There will be some real effort to open the genre up to a wider audience; you’ll probably be seeing fewer science fiction games, for example, but you will be seeing aggressive game promotion on TV, in newspapers and in other media outlets.

Smart mobs and mobile positioning devices will play a pioneering role in these games. It is very likely that mobile phone companies, and other makers of mobile Internet devices, will choose to sponsor immersive fiction games to showcase their products. However, game developers will also be hoping for sources of revenue other than from major advertisers. Possibilities include targeted microadvertising, for example on the geographic level using mobile positioning, and automated product placement. Direct income from the players themselves is still unlikely, although it’s possible that some games developers will pursue television or movie tie-ins. Finally, if any game is lucky enough to become a ‘phenomenon’, then it could pay its own way through sale of branded items.

Particularly adventurous second generation games will aim to recruit player-generated content (using the ‘fan fiction’ model). Ideas on how to implement this successfully will have been tested out in first generation games, so it should be possible to have a decent amount of content added this way. Other second generation games will decide that it’s not worth the trouble.

Many third generation games, arriving around 2005–7, will offer significant opportunities for players wishing to ‘go mobile’, but all games will still probably cater for the large number of players who don’t. A new profession of interactive storyteller will emerge: people who carefully weave together player-generated stories and guide them along an overall arc. The starting phase of the game, which introduces the scenario and offers players ‘something new’, will become paramount. At this point, one thousand days from now, I fully expect that all media outlets will be watching immersive fiction like hawks, and movie studios will be sitting upright. Rather fitting, since it was the AI game that started the genre off.


The immersive fiction genre has to evolve if it is to survive in the increasingly competitive world of massively multiplayer games. Ironically, immersive fiction games also happen to be the best placed games to take advantage of the burgeoning mobile Internet technologies and the explosive growth in online communities. By emphasising their narrative-based nature as well as the ‘no rules’ aspect, immersive fiction games should be able to attract large numbers of players. In time, they have the potential to create an entirely new form of entertainment, one that is about the connections between individuals, both online and offline, both through geography and shared interests.

Rewarding Behaviour

While browsing through hot-shot Cambridge lecturer and security expert Markus Kuhn’s homepage, I came across these two articles about the detrimental effects rewards can have on performance: For Best Results, Forget the Bonus and Studies Find Reward Often No Motivator.

While some may view these articles as part of the backlash against behaviourism, I do think they raise interesting points, especially in the context of massively multiplayer online games.

For example: ‘rewards rupture relations’. When a multiplayer game offers only a few high value prizes, there is little reason for players to co-operate with one another. This can prevent communities from forming, which in themselves can provide a great deal of entertainment and value to players.

‘Creativity and intrinsic interest diminish if task is done for gain’. If you are taking part in an entertaining game that is offering a prize of (say) a million dollars, it will be difficult not to view the game as a means to an end rather than an end in itself.

Studies have shown that creativity, risk taking and enjoyment decrease when a reward is offered for a task, even if the task was originally interesting. Perhaps this is because when a reward is introduced, people focus narrowly on it and try to achieve it quickly and safely. After all, there’s no point taking chances if there’s a million dollars at stake, is there? And if you want to be sure of a real shot at winning, you might feel you have to ignore the ‘frivolous’ aspects of your task and make sure you are completing it as prescribed. The game becomes a job to accomplish.

I don’t believe that high value prizes in massively multiplayer online games have any use other than briefly attracting someone’s interest. Offering such prizes betrays an ignorance of their delayed effects. All the successful MMORPGs – Everquest, Lineage, Ultima Online, and now The Sims Online – do not offer players any kind of significant prize, and they appear none the worse for it.

Not all rewards are detrimental, of course. It really does depend on what you want to accomplish. If you want to attract attention to your new product – if all you want is publicity – then it’s perfectly fine to stage a competition with a big prize to promote it. But if you want people to subscribe to your game and play it for an extended period of time, it may not be such a good idea.

Still, sometimes it may be useful to reward players in a massively multiplayer online game, simply to give them recognition. However, it is clearly vital to ensure that players do not perceive the rewards as the ‘point’ of playing the game – or else you start to head down a path that could see them leaving as they find they aren’t enjoying themselves any more.

Pattern Recognition

(Warning: This entry has absolutely nothing to do with massively multiuser online entertainment, if that’s what you’re here for)

In my research project at the moment, I’m using a nifty little program to aid my pattern recognition.

A major part of my project involves taking recordings of a signal (in this case, electrochemical spikes from a neurone) and discriminating them from the noise inherent in the system. Sometimes the noise is loud, and sometimes there is more than one signal (i.e. multiple neurones). In a recent case, I had eight different signals and a significant amount of noise.

Now, the way most people would go about discriminating the signal in the case I described is through hardware; they’d hook their recording apparatus up to a black box and set a value X on that black box. Anything in the recording that went above value X would be recorded (on a separate channel) as a spike. This seems reasonable enough, since spikes are just that – spikes in voltage – and if you have a good recording with only one signal and little noise, you can be 100% confident of getting all of the spikes and no false positives.

But if you have lots of noise and the signal is weak, you will have to set value X such that you may miss some of the spikes and get some false positives (because the spikes are only a little above the level of the noise). You might not care about this if you’re just doing a simple analysis of the spike rate, but I’m not – I’m doing something a bit more complicated that involves information theory, and it really is important for me to try and get all the spikes and none of the noise. Thus, a simple hardware discrimination of the spikes just ain’t good enough*.

(*Hardware discrimination can actually be a bit more complicated than this, but essentially it all boils down to seeing if the voltage goes above X and/or below Y or Z or whatever)
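To make the limitation concrete, here’s a minimal Python sketch of that kind of threshold discriminator (the threshold and refractory values are invented for illustration, not taken from any real rig):

```python
def threshold_spikes(trace, threshold, refractory=30):
    """Return the sample indices where the voltage crosses above `threshold`.

    After each detection, `refractory` samples are skipped so a single
    spike isn't counted more than once. Illustrative values only.
    """
    spikes = []
    i = 0
    while i < len(trace):
        if trace[i] > threshold:
            spikes.append(i)
            i += refractory  # skip past the rest of this spike
        else:
            i += 1
    return spikes

# A weak signal in noise: raise the threshold and you miss spikes,
# lower it and you admit false positives.
trace = [0.1, 0.2, 1.5, 0.3, 0.1, 0.9, 1.6, 0.2]
print(threshold_spikes(trace, threshold=1.0, refractory=2))  # [2, 6]
```

Notice that the sample at 0.9 is rejected even though it might be a real (weak) spike – exactly the problem with judging everything by a single value X.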

So what you really have to do is to look at the shape of a spike. A neural spike is quite distinctive – it generally has a slight bump, then a sharp peak, then a little trough. In other words, it doesn’t look like random noise. This means that you can do some software analysis of the shape.

The more computer-savvy of you readers are probably thinking – aha, no problem, we’ll just get some spike recognition neural network kerjigger in, and then that’s it. Well, you know, it’s not as easy as that, because spike shape can change over time and sometimes noise looks like a spike, and vice versa. It turns out that the best way to check whether a spike is really a spike is by looking at it – after all, the human brain is a pretty powerful neural net. Unfortunately, if you’re looking at a spike train with 50,000 spikes, this isn’t really feasible.

So a guy in my lab has made a nifty piece of software that will analyse each of the putative spikes in a recording (putative because they pass a trigger level – just like how a hardware discriminator works). Using a mathematical method of your choice (FFT, PCA, wavelet, cursor values, etc) it will assign a numerical value to each spike. You can then plot these values against each other to get a 2D scattergram. You do this three times, and hopefully you get three scattergrams that graphically isolate your chosen signal from the noise (or from other signals) on the basis of the analysis method you chose.

Next, you go and mark out which spikes you want (each spike is represented by a scatter point) by drawing ellipses, and finally you use Boolean algebra to say, ‘OK, I want all the points I circled in plot A, but not those that are shared with plot B or plot C’. At any point, you can check out what a particular spike or group of spikes looks like on a graph. And then you can export your freshly discriminated spikes.
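The ellipse-and-Boolean step can be sketched in a few lines of Python. The two features, the waveforms and the ellipse positions below are toy values I’ve made up for illustration – the real software offers FFT, PCA and wavelet measures, and you draw the ellipses by eye:

```python
def in_ellipse(pt, centre, radii):
    """Is a 2-D feature point inside an axis-aligned ellipse?"""
    return ((pt[0] - centre[0]) / radii[0]) ** 2 + \
           ((pt[1] - centre[1]) / radii[1]) ** 2 <= 1.0

def features(spike):
    """Two toy features per waveform: peak height and trough depth."""
    return (max(spike), min(spike))

# Three putative spikes that crossed the trigger level.
waveforms = [
    [0.1, 1.8, -0.6, 0.0],   # clean spike
    [0.2, 1.7, -0.5, 0.1],   # clean spike
    [0.9, 1.1, 0.8, 0.7],    # noise that crossed the trigger
]
pts = [features(w) for w in waveforms]

# Circle the cluster of real spikes in plot A, the noise in plot B...
in_a = [in_ellipse(p, centre=(1.75, -0.55), radii=(0.3, 0.3)) for p in pts]
in_b = [in_ellipse(p, centre=(1.1, 0.75), radii=(0.3, 0.3)) for p in pts]

# ...then combine with Boolean algebra: keep points in A but not in B.
accepted = [i for i, (a, b) in enumerate(zip(in_a, in_b)) if a and not b]
print(accepted)  # [0, 1]
```

The human does the hard part – spotting the clusters and drawing the ellipses – and the computer does the bookkeeping across tens of thousands of spikes.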

It works surprisingly well, and I think this is because it is a marriage of the supreme pattern recognition abilities of humans with the brute force processing power of computers. I’m fairly sure it’s one of the best methods in current use for discriminating spikes from a recording, and it’s a shame that people don’t think that this is a worthwhile thing to do (but that’s a story for another time).

Hold on, though: this wouldn’t be a proper post if it didn’t have any wild speculation. So, humans are good at pattern recognition in general. But we’re incredibly, uncannily good at facial recognition. We can distinguish two near identical faces and recognise someone we’ve only seen for a second out of thousands of faces. Pretty damn good.

It turns out that facial recognition and plain old pattern/object recognition are governed by different systems in the brain; we know this because there is something called a double dissociation between them. In other words, there are people who, for some reason, cannot recognise faces but can recognise objects fine, and vice versa. This strongly suggests that they run on different systems.

So how about we leverage our skills at facial recognition by converting other forms of information (say, spike trains, weather patterns, stockmarket data) into facial features? How might that work, eh? It could allow us to sense subtle differences in information and aid our recognition no end.

Of course, I have no real idea whether this would work, or exactly how to do it – maybe you could take a recording of data (or real-time data, I don’t know), use different methods to analyse it, and use the output values to describe different facial parameters. Hmm…
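As it happens, statisticians have tried something in this spirit – Chernoff faces, which map multivariate data onto cartoon facial features. The mapping itself is trivial; here’s a toy sketch (the facial parameter names and the ‘stockmarket’ numbers are entirely invented):

```python
def data_to_face(values, lo, hi):
    """Map a vector of measurements onto facial parameters,
    each normalised to the range [0, 1]."""
    norm = [(v - lo) / (hi - lo) for v in values]
    params = ["eye_spacing", "mouth_curve", "nose_length",
              "face_width", "brow_angle"]
    return dict(zip(params, norm))

# Two days of made-up market indicators rendered as faces:
# similar faces would mean similar market conditions.
monday = data_to_face([3.0, 7.5, 2.0, 9.0, 5.0], lo=0.0, hi=10.0)
tuesday = data_to_face([3.1, 7.4, 2.2, 8.8, 5.1], lo=0.0, hi=10.0)
print(monday["mouth_curve"])  # 0.75
```

The rendering step – actually drawing a face from those parameters – is where all the real work would be, and where my facial-recognition speculation goes beyond what Chernoff faces already do.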


Saw Donnie Darko a second time today, with a friend from Leeds; it survived rewatching quite well.

Afterwards, I described my ‘Dance Dance Revolution’ theory of cognitive development to her. It’s a little like Piaget’s controversial theory (although obviously much sillier). Jean Piaget was a psychologist who believed that children went through qualitatively different levels of cognitive maturity as they grew up. For example, he said that between the ages of six and twelve, children were in the ‘concrete operational stage’, in which they can perform cognitive ‘operations’ (like mental rotation, that sort of thing). However, only when they reached the age of twelve and graduated to the ‘formal operational stage’ could they reason about hypothetical situations and solve highly abstract, logical problems.

Piaget’s theory is a veritable piece of Swiss cheese now, what with all the holes that have been poked in it. Even so, it’s still interesting to discuss it, and I have based my Dance Dance Revolution theory upon it. Indeed, I propose that a new stage can be added to his progression of cognitive maturity, called ‘Dance Dance Revolution appreciation’.

There are those in the world who do not show an appreciation of Dance Dance Revolution; for some inexplicable reason, they have an urge to mock what is an inoffensive, entertaining and healthy game that promotes exercise and social skills. These people, I believe, have not achieved full advancement of their cognitive faculties. On the other hand, those who have passed through the ‘DDR stage’ will demonstrate an understanding of the true qualities that DDR holds; such people are at the zenith of cognitive development, I believe, and will in addition exhibit greater emotional development and what can be best described as ‘all round coolness’.

(If a DDR machine is not available nearby for testing purposes, a video of a DDR freestyler is an acceptable substitute)


It used to be that I’d reply to all personal email as soon as it arrived. Those days, alas, have been gone for some time now. While I do receive more personal emails, response time has not increased proportionally – more likely it’s increased logarithmically. I’m not entirely sure why this is so. It’s not as if I have to expend huge amounts of mental resources on my reply, although sometimes I do have to make important decisions. I’m not even sure why I’m bothered about all of this.

I do have an idea though. Email is supposed to be the ultimate in instantaneous communication – forget about mobile phones, email is the true medium for communicating ideas fast and cheaply. And as a result I think we’ve been seduced into thinking that all emails should be replied to within minutes, or at least hours.

But what difference does it make on response time if you send a letter via the post or email? The recipient still has to figure out a response and summon up the will to write it. So while the universal constant of procrastination hasn’t changed, speed of message delivery has, and consequently we think that we should get replies quicker no matter what. I certainly do, at least. Considering that many emails are not letter-like in depth or length and so have short response times, perhaps this blinds us to the longer response times required for other emails.

Take, for example, some emails I’ve sent out to people asking what they think the first words on Mars will be. I sent them out about four hours ago, and I expected replies three hours ago. I haven’t received any. Is this a surprise, if I think about it? No – if someone asked me what my first words on Mars would be, I’d probably flag the message, let the question rumble in the back of my brain for a day or two, and then reply. Unrealistic expectations, as ever.

Driving simulators

Something I’ve been idly wondering about on and off for a while is why there aren’t any decent driving simulator/trainers for PCs (or consoles). Surely there must be a market for this sort of thing? If you sold a package with force-feedback driving wheel, pedals and gearstick, together with a fairly up to date and realistic graphics engine (Gran Turismo 3 comes to mind immediately) for maybe £100, wouldn’t you get a fair number of sales?

Granted, it obviously wouldn’t replace the entire driving experience but it’d go a long way in teaching people the basics, and also clutch control, speed and so on. Add on a written driving test trainer and it’d be perfect. I’m a bit ambivalent about using a VR headset – I know that they’re cheaper these days, but I don’t know much about compatibility issues, or lag time and graphics.

The problem is that I can’t think of anyone who’d attempt this. Games publishers might view it as an unknown market, and the developers of simple ‘edutainment’ software simply don’t have the skills to pull something like this off. I wouldn’t imagine hardware being too much of a problem; you could just rebrand or bundle existing force-feedback peripherals.

The Decline of Metafilter

(This is a break from your regularly scheduled programming about massively multiuser online entertainment. Normal service will resume shortly).

Once again, Metafilter has me worried. Far be it from me to predict the imminent demise of one of the Internet’s most popular and well-known weblogs when it has confounded the predictions of countless others, but this time I think there’s a real problem.

Metafilter’s unique feature is that it has practically no moderation. If you’re a registered user, you can post a link to the front page of Metafilter once a day, every day, and you can post as many comments as you want. Chances are that if your link isn’t a duplicate or something completely useless and/or inflammatory, it won’t get taken down by the harried site administrator, Matt Haughey. Your links and comments cannot be rated and as such they are all presented on an equal footing; therefore, there is no quick and easy way of filtering out links or comments that other users believe are bad (which you can do on Kuro5hin and Slashdot).

“But,” I hear you cry, “how on Earth can this system work if Metafilter has thousands of users? Won’t there be hundreds of links per day and thousands of comments, making the front page impossible to navigate and allowing lots of substandard content to clutter the place up?”

The answer isn’t simple. A couple of months ago, Metafilter had about 14,000 registered users and perhaps ten times that number of unregistered readers. However, not all 14,000 users posted a link every day – if they did, the site would be unreadable. Instead, there were a mere 20 or so links posted per day, and on average they wouldn’t be too bad. There are many reasons why there weren’t more links posted per day; many of the 14,000 user accounts were defunct and of the active ones, there were many people who simply never wanted to post a link. The strongest limiter, though, was the fact that there is an ingrained culture in Metafilter that states, “Think long and hard before making a post which will go in front of tens of thousands of people. Don’t waste their time.”

And so things were fine; as with any large community, there were spats, feuds, arguments and flames on a day to day basis. Yet there wasn’t any apparent downward trend in thread quality, and the mythical ‘golden age’ of Metafilter continued on in the present. As for me, I visited the website on a daily basis and found many interesting articles from the links provided. Every so often I’d post my own link. Things were fine.

It couldn’t last forever, of course. Metafilter was in the midst of an artificial situation – for the past four months, it had closed down new user registrations due to insufficient server processing power. When a new server came online – two months ago – registrations were reopened cautiously, letting only 20 users in per day, and people who didn’t want to wait could pay $5 for immediate registration.

User numbers bloomed by over a thousand in less than two months. I shrugged my shoulders and thought, “What difference is 15,000 from 14,000? Little will change.” I’m not so sure now. The thousand users who’d joined had been waiting for months to get in – they weren’t a representative sample of Metafilter users, and clearly they wanted the privileges that came with being registered, namely the ability to post and comment.

The sky didn’t fall, and it still hasn’t – I think link post counts have gone up, and I think there are more comments than usual. Subjectively, I think quality is slowly inching down, but I know all about the rose-tinted spectacles effect. I can’t see Metafilter coming to a crashing halt. Instead, I can see a gradual decline.

I liken the current situation to an event in Blue Mars, by Kim Stanley Robinson (Mars, what a surprise, eh?). In this book, the Martian natives (humans) have enjoyed a period of relatively slow population growth after they destroyed the space elevator that allowed large numbers of immigrants in. Once a replacement elevator was installed, they were in danger of being swamped by Earth immigrants who were unfamiliar with their customs and formed their own little communities, sealed off from the natives. Conflicts arose, and the natives felt they were being overwhelmed.

The ‘solution’ (and I use quote marks because, realistically, it wasn’t perfect) was to greet the immigrants not with hostility but with open arms, and to try to accept them. After a fashion, the immigration crisis subsided.

This sort of solution is pretty damned hard to enact in reality, especially in a place like Metafilter. The ‘immigrants’ – the new users – are on average less familiar with the culture of Metafilter than older users, and they are more likely to slip up by not understanding what constitutes a good post and, quite simply, how not to piss other people off.

With increasing user numbers, retaining a high quality and manageable number of posts and comments will become more and more difficult. Solutions that were dismissed in the past are now being considered, such as rating posts and comments. Ultimately, I think Metafilter’s current model is not sustainable: either the site will fragment, or a rating system will be introduced. Unmoderated posting is only possible in a small community, and it’s a testament to the self-control of Metafilter’s users that it’s managed to work for so long. Note that I’m not criticising rating systems; personally I think they’ll be useful, if not without their faults. Ditto for unmoderated posting.

None of this will come as a surprise to people who regularly read the ubiquitous and frankly pap books about Online Communities (my dislike of them is well known; in brief, I think their authors have no experience in serious writing or analysis, and rely too much on anecdotal reports, case studies and vague, contradictory homilies – but enough of that). What I’m saying isn’t particularly original, but I felt it hadn’t been said openly for a while.

Sooner or later, Metafilter will have to change dramatically. It was a grand experiment while it lasted.


(Warning: Ramble ahead)

Earlier today, I was listening to a guy describe a project I might do next year for neurobiology, trying to figure out some of the characteristics of Golgi neurones in the cerebellum. The way you can identify these neurones, other than looking at them under a microscope, is to insert a super-thin electrode into them and look at their electrical activity. We’ve all seen what heartbeat readouts look like on TV – a series of sharp spikes. The electrical output from neurones tends to look like that as well. Different types of neurone exhibit distinctive spike properties, such as spike magnitude, duration and interspike intervals.

So you can identify Golgi neurones by looking at their electrical readouts, but this takes a while, with all the glancing back and forth. What many researchers do instead is hook the output signal from the electrode up to a loudspeaker, so that each spike makes a click. I’m told that in time you can become extremely proficient at identifying different types of neurone very quickly, simply by listening to their activity.
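To give a flavour of what the automated version might look like, here’s a minimal sketch that classifies a spike train by its mean interspike interval. The neurone names and ISI values are purely illustrative assumptions of mine, not real electrophysiology data, and a serious classifier would use far more than one feature:

```python
import statistics

# Hypothetical 'typical' interspike intervals in seconds -- illustrative
# values only, chosen to make the example work, not real measurements.
PROFILES = {
    "golgi": 0.1,      # slow, regular firing
    "purkinje": 0.02,  # much faster firing
}

def classify(spike_times):
    """Guess a neurone type from the mean interspike interval."""
    # Differences between consecutive spike times give the ISIs.
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    mean_isi = statistics.mean(isis)
    # Pick the profile whose typical ISI is closest to the observed mean.
    return min(PROFILES, key=lambda name: abs(PROFILES[name] - mean_isi))

spikes = [0.0, 0.11, 0.20, 0.31, 0.42]  # a slow, fairly regular train
print(classify(spikes))
```

The point is less the code than the contrast: a human listening to clicks learns this discrimination implicitly, while the program needs the distinguishing features spelled out explicitly.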

This kind of process is, of course, pattern recognition, and it struck me how skilled humans are at doing this and at recognising and distinguishing new types of patterns. To do a similar thing on a computer right now would require a fair bit of coding – it wouldn’t be impossible by any means, and it might not even be that difficult, but it would probably take longer than learning to do it yourself. That’s not to say that doing it on a computer is a waste of time; clearly, if you want to automate the neurone-finding procedure and link the electrode position controls to the computer, it’s worth it.

Even a computer couldn’t identify the type of a neurone with perfect accuracy – neurones aren’t perfect things – but it could give you probabilities. And this set me onto a completely different train of thought. Usually, probabilities of events or identifications are given as a number or percentage, e.g. it’s 80% likely that it will rain tomorrow. Unfortunately, it seems that humans aren’t all that good at assessing probabilities – for example, it’s been shown that we ignore Bayes’ theorem when estimating probabilities ourselves.
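The classic demonstration of this is base-rate neglect. Here’s a small worked example (the numbers are the standard textbook ones, not from any particular study): a test that is 99% accurate for a condition affecting 1 in 1,000 people. Intuition says a positive result means you almost certainly have the condition; Bayes’ theorem says otherwise, because the false positives from the healthy majority swamp the true positives:

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test), via Bayes' theorem."""
    # Total probability of testing positive: true positives + false positives.
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

p = posterior(prior=0.001, sensitivity=0.99, false_positive_rate=0.01)
print(round(p, 3))  # -> 0.09, i.e. only about a 9% chance
```

Most people, asked this question informally, guess something close to 99% rather than 9% – which is exactly the kind of systematic misjudgement I mean.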

We don’t say to each other, “I think there’s an 80% probability of it raining tomorrow.” We say, “There’s a fairly good chance it’ll rain tomorrow.” And I think that, in various circumstances, people would respond better to this verbal framing of probabilities than to numerical ones. It just makes them more familiar.

And then I realised that we aren’t too hot at judging probabilities that way either, since according to signal detection theory we can alter our criterion for reporting an event depending on, basically, how we’re feeling. And then I started writing this, and unfortunately I don’t have anything more to say at the moment.


Something popped into my head today as I was scribbling down some notes during a supervision: does the fact that I write with a pencil (as opposed to a pen or biro) affect my writing style, and on a higher level, my method of thinking?

Pencils provide a much less constrained and linear way of putting thoughts down onto paper, in that pencil marks can be easily and quickly erased. Thus, I’m not too bothered about making the occasional correction or altering what I’ve written so that it’s more accurate, whereas if I used some non-erasable implement, that option wouldn’t be open to me. Conversely, perhaps using a pencil is making me lazy, and those who write using pens have less cause to make corrections.

Taking this further, what about writing on the computer? Words, sentences and paragraphs can all be moved about at the click of a button, and rarely does a supervisor not warn us against getting into the habit of writing all our essays on the computer, as this won’t help us write essays in exams. I tried writing an essay on paper a couple of weeks ago, and it went down perfectly fine. In fact, I probably did it faster than I would’ve done on the computer, since I could draw diagrams more quickly. Score one for paper.

As others have said, probably the best solution would combine the qualities of paper and computers – I imagine some kind of smart paper which you can either write on (it has handwriting recognition, naturally) or hook up to a wireless keyboard would be ideal (many people can type faster than they can write). You’d be able to annotate the paper and move sentences and words about with ease, and it’d be intuitive for all users. It’ll probably be on the market in another ten years.