Putting your life online – recording and organising all of your emails, conversations and other life events on a computer to serve as a supplemental memory system. This isn’t a particularly new concept, but it is likely to be the first decent implementation. Very interesting stuff – I wonder how it’ll affect kids growing up with it.
Wiki-wiki-wah!
I’ve finally done something I’ve been meaning to do for months: set up a wiki. Or to be exact, three wikis. Wikis are webspaces in which any material can be edited or added by anyone; a bit like a community whiteboard with hyperlinking, if you will. It sounds like a recipe for disaster, but after only three days the Metafilter wiki is coming along quite nicely (and I didn’t even write everything on it!).
The New Mars wiki isn’t showing any real signs of life but I consider my Mars projects to be more long term than anything else, so I’m not bothered about the lack of activity there. Today, I set up an Immersive Fiction Gaming wiki.
I originally intended to use Twiki as the engine for all of my wikis, but after spending a fruitless hour trying to install it on Sunday, I decided that:
a) It wasn’t worth the effort.
b) It probably wasn’t even possible to install it anyway, given access rights and other technical issues.
So I used the UseMod engine instead, which proved to be a much more pleasant experience. Wikis aren’t perfect by any means, but they do offer interesting new possibilities for the creation of new ideas and content. I’m looking forward to seeing how these three wikis develop in their different ways.
Army of Penguins
Such is the power of Google: I go to a lecture by Steve Jones tonight, and five minutes after I get back I can find the exact same image of a bull elephant seal surrounded by its army of penguins that Steve used.
Eyes for you
This optical illusion has been making the rounds on the Internet recently, and most people are astonished to find that the A and B squares are the same shade, to the extent that they consult Photoshop to confirm that A is not darker than B. The explanation is simple – the eye is much better at distinguishing sharp boundaries than shallow gradients of shade or areas of even shade.
This satisfies most people. But why is this so? Why can’t the eye do that, and also be able to quantitatively compare the shades of two spatially separated areas?
It all comes down to space, or rather, lack thereof. There are roughly 130 million photoreceptor cells in the retina of the eye, each of which individually measures the amount of light falling on it. However, if you look at the optic nerve bundle that conveys information from the retina to the brain, you’ll find that it contains only 1 million nerve fibres. That’s a contention ratio of 130 to 1, and the physiology of the situation dictates that one neurone cannot possibly carry all the information produced by 130 photoreceptors. There is a good reason for this, to do with the wiring of the retina and neural bandwidth limits, but you’ll just have to take my word for it for now.
As a result, there’s a significant loss of information between the total amount gathered by the photoreceptors and that which is sent to the brain. The optimal solution would be to transmit the information that is most important to the survival of the organism, and that happens to be edges – sharp changes in shade – which is of course exactly what the eye does. Each photoreceptor is linked to adjacent photoreceptors via ‘higher’ cells (still in the retina), and these higher cells perform a bit of processing called lateral inhibition.
Lateral inhibition is a relatively simple process that enhances edges between areas of different shade, and it’s mediated by a mechanism called centre-surround antagonism, which you can see in this Mach Bands demonstration.
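Since lateral inhibition is such a simple process, it’s easy to play with. Here’s a minimal sketch of it in one dimension – entirely my own toy model with made-up kernel weights, not anything from the demonstration above. Each output cell is excited by its own photoreceptor and inhibited by its neighbours, and running a shallow ramp of shade through it produces the overshoot and undershoot that we perceive as Mach bands:

```python
import numpy as np

# A 1D strip of 'photoreceptor' outputs: two areas of even shade
# joined by a shallow gradient.
shades = np.concatenate([
    np.full(20, 0.2),            # darker area of even shade
    np.linspace(0.2, 0.8, 20),   # shallow ramp between them
    np.full(20, 0.8),            # lighter area of even shade
])

# Centre-surround kernel: each output cell is excited by the
# photoreceptor at its centre and inhibited by its neighbours.
kernel = np.array([-0.15, -0.15, 1.0, -0.15, -0.15])

response = np.convolve(shades, kernel, mode="same")

# The response is flat over the even areas, but overshoots and
# undershoots at the two ends of the ramp: the illusory bright and
# dark Mach bands.
for position, r in enumerate(response):
    print(f"{position:2d} {'#' * max(0, int(r * 40))}")
```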
Ultimately, a much better solution would be a whopping great optic nerve with 130 million nerve fibres, one for each photoreceptor in the retina. Alas, as I said earlier, it’s a question of space: there just isn’t enough room in our heads for such a big optic nerve, so we have to make do with a smaller one that causes us to have so much fun looking at visual illusions. Human vision isn’t perfect, but it is ‘good enough’, which is really the story of evolution.
Guardian best blog
The Guardian has released the results of their ‘Best UK Blog’ competition, in which I was pleasantly surprised to see mssv.net on the shortlist. Also check out the related Metafilter thread for more comments.
Salon
I just renewed my subscription to Salon.com. Salon is one of only three places on the web that I’m prepared to pay to read (the other two are Kuro5hin and Metafilter, both of which I’ve donated a few dollars to). I don’t visit Salon quite as much as I used to – I no longer read it daily – but I do read it often enough for it to be worth the $30/year subscription.
To be honest, I feel a little sorry for Salon. They’ve got a fair number of subscribers, but I heard that they’re losing money and it would be a shame to see them go bust; a bit like seeing a cute kitten die of a fatal, inherited illness.
Incidentally, if you want a cut-price subscription to Salon for $20/year (that’s a third off) then just contact me; current subscribers can give ‘gift’ subscriptions for less money (you’d have to send me the $20, of course). And no, I don’t see anything wrong with this, because I think that the only people likely to want this are those who are merely curious and wouldn’t have gone for the full rate.
A potential problem averted
There’s been a bit of a disturbance on the New Mars forums I administer. In the past two days someone has signed up and made 40 posts, all of which are hostile to space and Mars exploration. People got upset and complained (very politely, though). I decided to do a bit of research.
One peculiar thing about this newbie was that he managed to do a huge amount of writing in only two days – too huge. A quick search on Google Groups revealed that he was merely reposting old material that he’d sent to Usenet several years ago. Problem solved.
Here is a copy of my message to the newbie. Firm, but fair, I thought. I certainly don’t want to discourage debate on the forums, and I don’t mind if people disagree with what others say – we already have a couple of those types of people on the forums – but I do mind if people repost old material and don’t seem prepared to engage in useful discussion.
The BA Festival of Science
Thanks to a generous grant from Trinity College at Cambridge University, I was able to attend the full week-long British Association for the Advancement of Science Annual Festival of Science in Leicester this year, from September 9th to 13th. Curiously enough, no-one here uses the acronym BAAS, while in America they do use AAAS – instead we simply call it the ‘British Association’, which no doubt causes some confusion.
Anyway, the BA Festival of Science is a week-long event that can’t really be described as a conference, as it doesn’t have a particularly focused nature beyond being about ‘science’ – and even that isn’t accurate, since there were plenty of lectures outside the traditional remit of science, on subjects such as economics and philosophy. The lecture schedule consists of several parallel tracks, each lasting from half a day to a day and covering a distinct topic – for example, ‘Life and Space’ or ‘Radioactive waste – can we manage it?’ In addition to the lectures, there were debates and workshops.
This year there was quite a spread of topics such that on some days I had a very hard time trying to decide which to attend; in retrospect I think I managed a decent spread.
I originally intended to write up some of my notes made during the Festival as a series of pieces in the ‘Middling’ weblog, until I realised that I simply didn’t have the patience for that. So this article will attempt to string together my thoughts on some of the more interesting lectures I attended.
Visualisation using sound
Professor Stephen Brewster, University of Glasgow
This was a fairly interesting lecture summarising the work Brewster’s group has been doing on the MultiVis project. What they’re trying to do is give blind people access to data visualisations, such as tables, graphs, bar charts and so on. Current methods include screen readers, speech synthesis and braille; these have the (perhaps) obvious problem of presenting data serially, which is consequently slow and can overload short-term memory, thus preventing quick comparisons between different pieces of data.
A good example of this is how blind people would access a table.
10 | 10 | 10 | 10 | 10 | 10 |
10 | 10 | 10 | 10 | 10 | 10 |
10 | 10 | 10 | 10 | 20 | 20 |
10 | 10 | 10 | 10 | 20 | 30 |
To access the table, item-by-item speech browsing would probably be used, so you can imagine a computer voice reading from left to right: ‘Ten, ten, ten, ten, ten…’ and so on. This has the serious problem of being extremely slow, and currently there is no way for a blind person to get an overview of this table or, importantly, to be told that the interesting information is in the bottom right-hand corner.
The solution? Multimodal visualisation, and in this case, sonification – that is, the use of sound other than speech. Sonification offers fast and continuous access to data that can nicely complement speech. Prof. Brewster demonstrated a sound graph, on which the y-axis is pitch and the x-axis time, so for the line y=x you would hear a note rising in pitch linearly. This worked quite well for a sine wave as well.
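To make this concrete, here’s a minimal sound graph sketch – my own toy version in Python, not the MultiVis software, and the note length and frequency range are arbitrary choices. Each data point becomes a short tone whose pitch rises linearly with its value, so the line y=x plays as a steadily rising note:

```python
import math
import struct
import wave

RATE = 44100  # audio samples per second

def sonify(values, note_length=0.25, low_hz=220.0, high_hz=880.0):
    """Map each data point to a short tone; higher values sound higher."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    samples = []
    phase = 0.0
    for v in values:
        # Linear value-to-pitch mapping; a real system might quantise
        # to a musical scale to make trends easier to hear.
        freq = low_hz + (high_hz - low_hz) * (v - lo) / span
        for _ in range(int(RATE * note_length)):
            phase += 2 * math.pi * freq / RATE
            samples.append(int(20000 * math.sin(phase)))
    return samples

# The line y = x: a note rising steadily in pitch.
audio = sonify(list(range(16)))

with wave.open("sound_graph.wav", "wb") as f:
    f.setnchannels(1)   # mono; stereo would let you compare two graphs
    f.setsampwidth(2)   # 16-bit samples
    f.setframerate(RATE)
    f.writeframes(struct.pack(f"<{len(audio)}h", *audio))
```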
Multiple graphs can be compared using stereo, and an interesting result is that the intersection between graphs can be identified when the pitch of the two lines is identical. So, imagining that you are trying to examine multiple graphs, you might use parallel sonification of all graphs in order to find intersections and overall trends, and serial sonification in order to find, say, the maximum and minimum for a particular graph.
3D sound also offers possibilities for the presentation of multiple graphs; different graphs could be presented from different angles through headphones. Continuing this further, soundscapes would allow users to control access to graphs simply by moving the orientation of their head. Access by multiple users is possible, so you could have one person guiding another through the soundscape.
Such sonification aids can also be used together with tactile stimuli such as raised line graphs; by placing sensors on a user’s fingertips and connecting them to a computer, users could naturally explore a physical graph while a ‘touch melody’ would indicate (for example) the horizontal or vertical distance between their two fingers. External memory aids could be built in by allowing users to place ‘beacons’ on graphs, perhaps by tapping their fingers – as the user moves away from the beacon, the beacon sound diminishes.
Of course, sonification can also be used for sighted people.
I don’t doubt that these concepts have been explored before, but this presentation was the first I’ve encountered that has dealt with them in such a comprehensive manner and also produced practical demonstrations.
Information foraging and the ecology of the World Wide Web
Dr. Will Reader, Cardiff University
This was perhaps the most interesting Internet related lecture at the Festival of Science; I was impressed by the way Dr. Reader drew upon previous research, which is something that I think many web pundits forget to do. My notes:
Some background: information foraging occurs because people have a limited time budget in which to find answers. According to a recent survey, 31.6% of people would use the Internet to find the answer to any given question – this is the largest percentage held by any single information resource on the survey. However, if you collect together all the people who would use other people as an information resource in order to answer their question (i.e. not only friends and family, but also teachers, librarians, etc) then the humans still win.
H. A. Simon once said something along the lines of ‘Information requires attention, hence a wealth of information results in a poverty of attention. What is then needed is a way to allocate attention efficiently.’
To use a traditional metaphor, you could call humans ‘informavores’ (eaters of information). When humans read in search of an answer, we are trying to maximise the value of information we receive over the cost of the interaction.
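One standard way to formalise this – it comes from Pirolli and Card’s information foraging theory, which I’m borrowing here rather than quoting from the lecture – is that a forager tries to maximise the rate of gain

$$R = \frac{G}{T_B + T_W}$$

where $G$ is the value of the information gained, $T_B$ is the time spent finding and moving between sources, and $T_W$ is the time spent actually reading them. The practical upshot is that you should abandon a text when the rate at which you’re learning from it drops below the average rate you could get by going off and finding another.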
What is meant by the value of information? The value of a text relies principally on relevance, reliability and the difficulty of understanding. Examining the latter factor in detail, it’s theorised that the amount learned from a text (or any information resource) follows a bell curve when plotted against the overlap between the person’s own knowledge, and the information in the text. So – if there is a very small overlap (i.e. almost everything in the text is new) or a very large overlap (everything in the text is already known), little is learned. When the overlap is middling, the amount learned is high.
Dr. Reader carried out an experiment to test this theory in which subjects were given a limited amount of time to read four texts about the heart (something like 15 to 30 minutes). They then had to write a summary of what they’d learned. The texts varied in difficulty, from an encyclopaedia entry to a medical journal text.
The results of the experiment showed that people were indeed adaptive in choosing which texts to spend the most time reading according to their personal knowledge on the subject; in other words, they read the texts that contained a middling amount of information overlap the most. However, the subjects did act surprisingly in one way – they spent too long reading the easiest text.
Is this a maladaptive strategy? Maybe not – it could be sensible. Given the time pressure the subjects were under, they may have simply been trying to get the ‘easy marks’ by reading the easy text.
It turns out that there are two different access strategies when reading multiple texts on a single subject (or accessing multiple information sources). The first is ‘sampling’, in which subjects skim-read all of the texts quickly and then decide on the best one. It sounds easy enough, but it’s very demanding on memory if you have several texts to read; people spontaneously use the sampling strategy only 10% of the time.
The majority strategy is called ‘satisficing’ (yes, that’s the right spelling), the aim of which is to get a text that is ‘good enough’. Simply enough, a person will read the first text, and then move on if they aren’t learning enough.
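The two strategies are simple enough to caricature in code. Here’s a sketch of the difference – my own framing, with a hypothetical value() function standing in for ‘how much would I learn from this text?’:

```python
def sample(texts, value):
    """Sampling: skim every text first, then commit to the best one.
    Accurate, but holding an impression of each text taxes memory."""
    return max(texts, key=value)

def satisfice(texts, value, good_enough=0.5):
    """Satisficing: read texts in order and stop at the first one that
    clears a 'good enough' threshold. Cheap, but possibly suboptimal."""
    for text in texts:
        if value(text) >= good_enough:
            return text
    return texts[-1]  # nothing cleared the bar; settle for the last text
```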
All of this changes when people are presented with summaries of texts. Now, sampling is the majority strategy. These summaries, or outlines, are judged by people to be reliable clues to the content of the text – an information ‘scent’, if you will.
This raises the question: why don’t people use the first paragraph of a text as an impromptu outline? It’s because the first paragraph is not necessarily representative of the rest of the text; we all know how texts can change rapidly in difficulty, particularly in scientific journals.
Outlines can sometimes be misleading. In a study carried out by Salmoni and Payne (2002), people using Google were sometimes more successful at saying whether a fact was on a given page if they did not read the two-line summary/extract under each link on the results page. This suggests that the Google extract is not as useful as we might believe.
Another experiment by Dr. Reader confirms what many of us anecdotally know. Subjects were asked to research a subject using the Internet through Google. They were given 30 minutes, and then had to write a summary afterwards. The results:
Mean unique pages viewed: 20.8
Mean time per page visit: 47.6 seconds
Mean longest page visit: 6.43 minutes
This shows that some pages were visited for only a matter of seconds, whereas others were visited for several minutes.
Dr. Reader concluded with a few suggestions for improving search engines. They could index the difficulty and length (in words) of search results, and also the reliability of a page. Google already does something like the latter via PageRank (essentially calculated from the number and type of pages linking to the page in question), but Dr. Reader also suggests using annotation software (like the ill-fated Third Voice) and, interestingly, education: we should teach Internet users how to quickly and accurately evaluate the reliability of a page.
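As an aside, the idea behind PageRank is easy to sketch: a page’s rank is roughly the chance that a ‘random surfer’, who mostly follows links but occasionally jumps to a random page, ends up on it. Here’s a minimal power-iteration version over a toy three-page web (the 0.85 damping factor is the standard choice, and I’m ignoring complications like pages with no outgoing links):

```python
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            for target in outgoing:
                # Each page shares its rank equally among its outlinks.
                new[target] += damping * rank[page] / len(outgoing)
        rank = new
    return rank

# Toy web: C is linked to by both A and B, so it ranks highest.
print(pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]}))
```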
All in all, an interesting lecture.
The march of the marketeers: invasive advertising and the Internet
Dr. Ian Brown, University College London
I didn’t learn much from this lecture, but that’s only because I’m very interested in the subject anyway and keep abreast of all the latest developments. However, it was a very comprehensive and up-to-date lecture, unlike some of the reporting you see in the mass media. One thing I did find interesting was Dr. Brown’s claim that some digital TV channels have ‘unmeasurably small audiences’.
Since audiences are measured by sampling a few hundred or thousand people who have little monitors attached to their TVs, if no-one in the sample group watches a programme or channel, then as far as the survey company is concerned, no-one in the entire country watched it. Even for supposedly popular programmes such as the Nationwide League Football matches on ITV digital, there were zero viewers in the sample group. This is understandably causing problems with advertisers.
Dr. Brown went on to talk about Tivo and all the rest, but I’m not going to cover that.
And all the rest…
I’m giving a very skewed view of the Festival here because I only took notes on things that were completely new to me and that I felt would interest people here. Consequently, I didn’t take any notes in the space lectures I went to, even though some of them, such as ‘Living and working in space’ by Dr. Kevin Fong and the lecture given by Sir Martin Rees, were excellent. The former was a very entertaining and informative lecture about space medicine on long-duration space missions, and the latter was all about posthumans and the Fermi Paradox.
I was actually stunned by Sir Martin’s lecture; not because of its content (I read lots of SF, thank you very much) but because it was coming from him – the Astronomer Royal, no less! In the past, such respectable people wouldn’t touch esoteric subjects like posthumans with a bargepole.
Then there was the talk on DNA nanomachines by Dr. Turberfield from Oxford University; I hadn’t quite grasped the possibilities of DNA assembly before that lecture, and neither did I truly understand how DNA computing could be used to solve a variant of the travelling salesman problem, but afterwards I did (in other words, it was a good lecture). Dr. Turberfield also showed a model of his current work in trying to construct a DNA nanomachine motor, which he confesses probably doesn’t have much immediate practical use but certainly is fun.
Most of the lectures I attended were pretty good; some were excellent, of which I’ve only mentioned a few above. If you ever find that the BA Festival is taking place nearby one year (next year it’s in Salford) then it’s probably worth getting hold of a programme and attending for a day or two. You’ll learn a lot.
It used to be that I’d reply to all personal email as soon as it arrived. Those days, alas, have been gone for some time now. While I do receive more personal email than before, my response time hasn’t increased in proportion – more likely it’s increased logarithmically. I’m not entirely sure why this is so. It’s not as if I have to expend huge amounts of mental resources on a reply, although sometimes I do have to make important decisions. I’m not even sure why I’m bothered about all of this.
I do have an idea though. Email is supposed to be the ultimate in instantaneous communication – forget about mobile phones, email is the true medium for communicating ideas fast and cheaply. And as a result I think we’ve been seduced into thinking that all emails should be replied to within minutes, or at least hours.
But what difference does it make to response time whether you send a letter via the post or email? The recipient still has to figure out a response and summon up the will to write it. The universal constant of procrastination hasn’t changed, but the speed of message delivery has, and consequently we think we should get replies quicker no matter what – I certainly do. Many emails are not letter-like in depth or length and so get quick replies; perhaps this blinds us to the longer response times that other emails require.
Take, for example, some emails I’ve sent out to people asking what they think the first words on Mars will be. I sent them out about four hours ago, and I expected replies three hours ago. I haven’t received any. Is this a surprise, if I think about it? No – if someone asked me what my first words on Mars would be, I’d probably flag the message, let the question rumble in the back of my brain for a day or two, and then reply. Unrealistic expectations, as ever.
iainbanks.net
Iain Banks has a shiny new website completely devoid of content, apart from an extract from Dead Air (it’s not bad). Oh, and the covers of his non-SF novels have been updated and now look quite respectable.