The BA Festival of Science

Thanks to a generous grant from Trinity College at Cambridge University, I was able to attend the full week-long British Association for the Advancement of Science Annual Festival of Science in Leicester this year, from September 9th to 13th. Curiously enough, no-one uses the acronym BAAS, while in America they do use AAAS – instead we simply call it the ‘British Association’, which no doubt causes some confusion.

Anyway, the BA Festival of Science is a week-long event that can’t really be described as a conference, as it doesn’t have a particularly focused nature aside from being about ‘science’ – and even that isn’t accurate, since there were plenty of lectures given outside the traditional remit of science, such as economics and philosophy. The lecture schedule consists of several parallel tracks, each lasting from half a day to a day and covering a distinct topic – for example, ‘Life and Space’ or ‘Radioactive waste – can we manage it?’ In addition to the lectures, there were debates and workshops.

This year there was quite a spread of topics, such that on some days I had a very hard time deciding which lectures to attend; in retrospect I think I managed a decent mix.

I originally intended to write up some of my notes made during the Festival as a series of pieces in the ‘Middling’ weblog, until I realised that I simply didn’t have the patience for that. So this article will attempt to string together my thoughts on some of the more interesting lectures I attended.

Visualisation using sound
Professor Stephen Brewster, University of Glasgow

This was a fairly interesting lecture summarising the work Brewster’s group has been doing on the MultiVis project. What they’re trying to do is give blind people access to data visualisations, such as tables, graphs, bar charts and so on. Current methods include screen readers, speech synthesis and braille; these have the (perhaps) obvious problem of presenting data in a serial manner, which is consequently slow and can overload short-term memory, thus preventing quick comparisons between different pieces of data.

A good example of this is how blind people would access a table.

10 10 10 10 10 10
10 10 10 10 10 10
10 10 10 10 20 20
10 10 10 10 20 30

To access the table, item-by-item speech browsing would probably be used; you can imagine a computer voice reading from left to right, ‘Ten, ten, ten, ten, ten…’ and so on. This is extremely slow, and currently there is no way for a blind person to get an overview of this table – and, importantly, to be told that the interesting information is in the bottom right-hand corner.

The solution? Multimodal visualisation, and in this case, sonification – that is, the use of sound other than speech. Sonification offers fast and continuous access to data that can nicely complement speech. Prof. Brewster demonstrated a sound graph, on which the y-axis is pitch and the x-axis time, so for the line y=x you would hear a note rising in pitch linearly. This worked quite well for a sine wave as well.
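
To make the idea concrete, here’s a minimal sketch in Python of how such a sound graph might be synthesised. The linear pitch mapping and the frequency range are my own assumptions for illustration, not details taken from the MultiVis work:

import math
import struct
import wave

def sonify(values, filename="graph.wav", rate=44100,
           seconds_per_point=0.25, f_low=220.0, f_high=880.0):
    # Map each data point linearly onto a pitch between f_low and
    # f_high (an assumed mapping), then write a mono WAV file.
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    samples = []
    phase = 0.0
    for v in values:
        freq = f_low + (v - lo) / span * (f_high - f_low)
        for _ in range(int(rate * seconds_per_point)):
            phase += 2 * math.pi * freq / rate
            samples.append(int(32767 * 0.5 * math.sin(phase)))
    with wave.open(filename, "w") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))

# The line y = x comes out as a tone rising steadily in pitch.
sonify(list(range(20)))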

Multiple graphs can be compared using stereo, and an interesting result is that the intersection between graphs can be identified when the pitch of the two lines is identical. So, imagining that you are trying to examine multiple graphs, you might use parallel sonification of all graphs in order to find intersections and overall trends, and serial sonification in order to find, say, the maximum and minimum for a particular graph.
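
Numerically, the intersection trick is easy to mimic: since pitch is a monotonic function of the data value, two graphs sound the same pitch exactly where their values coincide. A toy sketch of my own (not the MultiVis code):

def crossings(ys1, ys2):
    # Indices where two equally-sampled series touch or cross,
    # i.e. where their sonified pitches would momentarily match.
    hits = []
    for i in range(1, len(ys1)):
        d0 = ys1[i - 1] - ys2[i - 1]
        d1 = ys1[i] - ys2[i]
        if d0 == 0 or d0 * d1 < 0:  # touch, or sign change between samples
            hits.append(i)
    return hits

xs = [x / 10 for x in range(100)]
print(crossings(xs, [x * x for x in xs]))  # y=x meets y=x^2 at x=0 and x=1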

3D sound also offers possibilities for the presentation of multiple graphs; different graphs could be presented from different angles through headphones. Continuing this further, soundscapes would allow users to control access to graphs simply by moving the orientation of their head. Access by multiple users is possible, so you could have one person guiding another through the soundscape.

Such sonification aids can also be used together with tactile stimuli such as raised line graphs; by placing sensors on a user’s fingertips and connecting them to a computer, users could naturally explore a physical graph while a ‘touch melody’ would indicate (for example) the horizontal or vertical distance between their two fingers. External memory aids could be built in by allowing users to place ‘beacons’ on graphs, perhaps by tapping their fingers – as the user moves away from the beacon, the beacon sound diminishes.

Of course, sonification can also be useful for sighted people.

I don’t doubt that these concepts have been explored before, but this presentation was the first I’ve encountered that has dealt with them in such a comprehensive manner and also produced practical demonstrations.

Information foraging and the ecology of the World Wide Web
Dr. Will Reader, Cardiff University

This was perhaps the most interesting Internet-related lecture at the Festival of Science; I was impressed by the way Dr. Reader drew upon previous research, which is something that I think many web pundits forget to do. My notes:

Some background: information foraging occurs because people have a limited time budget in which to find answers. According to a recent survey, 31.6% of people would use the Internet to find the answer to any given question – the largest percentage held by any single information resource on the survey. However, if you collect together all the people who would use other people as an information resource to answer their question (i.e. not only friends and family, but also teachers, librarians, etc.) then humans still beat the Internet.

H. A. Simon once said something along the lines of ‘Information requires attention, hence a wealth of information results in a poverty of attention. What is then needed is a way to utilise attention in the most optimal manner.’

To use a traditional metaphor, you could call humans ‘informavores’ (eaters of information). When we read in search of an answer, we are trying to maximise the value of the information we receive over the cost of the interaction.

What is meant by the value of information? The value of a text depends principally on its relevance, its reliability and its difficulty. Examining the last factor in detail: it’s theorised that the amount learned from a text (or any information resource) follows a bell curve when plotted against the overlap between the person’s own knowledge and the information in the text. So if there is a very small overlap (i.e. almost everything in the text is new) or a very large overlap (everything in the text is already known), little is learned. When the overlap is middling, the amount learned is high.
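
The lecture gave no equation, but the shape being described is a Gaussian; purely as an illustrative model (all parameters invented), you could write

L(o) = L_max · exp(−(o − o₀)² / (2σ²)), for overlap 0 ≤ o ≤ 1

where o is the fraction of the text’s content the reader already knows, o₀ ≈ 0.5 marks the ‘middling’ overlap at which learning peaks, and σ controls how sharply learning falls away towards the two extremes.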

Dr. Reader carried out an experiment to test this theory in which subjects were given a limited amount of time to read four texts about the heart (something like 15 to 30 minutes). They then had to write a summary of what they’d learned. The texts varied in difficulty, from an encyclopaedia entry to a medical journal text.

The results of the experiment showed that people were indeed adaptive in choosing which texts to spend the most time reading according to their personal knowledge on the subject; in other words, they read the texts that contained a middling amount of information overlap the most. However, the subjects did act surprisingly in one way – they spent too long reading the easiest text.

Is this a maladaptive strategy? Maybe not – it could be sensible. Given the time pressure the subjects were under, they may have simply been trying to get the ‘easy marks’ by reading the easy text.

It turns out that there are two different access strategies when reading multiple texts on a single subject (or accessing multiple information sources). The first is ‘sampling’, in which subjects skim-read all of the texts quickly and then commit to the best one. It sounds easy enough, but it’s very demanding on memory if you have several texts to read. People spontaneously use the sampling strategy only 10% of the time.

The majority strategy is called ‘satisficing’ (yes, that’s the right spelling), the aim of which is to find a text that is ‘good enough’. Simply put, a person will read the first text, and then move on if they aren’t learning enough.
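
In rough code terms, the two strategies might look like this – my own paraphrase, with an invented ‘learning rate’ measure standing in for the reader’s judgement:

def sample(texts, skim, evaluate):
    # Sampling: skim everything first, then commit to the best text.
    # Memory-hungry: every skimmed impression must be held in mind.
    impressions = [(evaluate(skim(t)), t) for t in texts]
    return max(impressions, key=lambda pair: pair[0])[1]

def satisfice(texts, learning_rate, good_enough):
    # Satisficing: read in the order given, and stay with the first
    # text that teaches at an acceptable rate.
    for t in texts:
        if learning_rate(t) >= good_enough:
            return t
    return texts[-1]  # nothing passed the bar; settle for the last

# Toy demo with made-up scores:
texts = ["encyclopaedia", "textbook", "journal paper"]
score = {"encyclopaedia": 0.4, "textbook": 0.8, "journal paper": 0.2}.get
print(sample(texts, skim=lambda t: t, evaluate=score))          # textbook
print(satisfice(texts, learning_rate=score, good_enough=0.3))   # encyclopaedia

Note how satisficing happily stops at the merely adequate encyclopaedia entry, which matches the ‘easy marks’ behaviour seen in the experiment above.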

All of this changes when people are presented with summaries of texts. Now, sampling is the majority strategy. These summaries, or outlines, are judged by people to be reliable clues to the content of the text – an information ‘scent’, if you will.

This raises the question: why don’t people use the first paragraph of a text as an impromptu outline? It’s because the first paragraph is not necessarily representative of the rest of the text; we all know how texts can change rapidly in difficulty, particularly in scientific journals.

Outlines can sometimes be misleading. In a study carried out by Salmoni and Payne (2002), people using Google were sometimes more successful at saying whether a fact was on a given page if they did not read the two-line summary/extract under each link on a search results page. This suggests that the Google extract is not as useful as we might believe.

Another experiment by Dr. Reader confirms what many of us anecdotally know. Subjects were asked to research a topic on the Internet using Google. They were given 30 minutes, and then had to write a summary afterwards. The results:

Mean unique pages viewed: 20.8
Mean time per page visit: 47.6 seconds
Mean longest page visit: 6.43 minutes

This shows that some pages were visited for only a matter of seconds, whereas others were visited for several minutes.

Dr. Reader concluded with a few suggestions for improvements to search engines. They could index the difficulty and the length (in words) of search results, and also the reliability of a page. The last of these is already done in Google via PageRank (essentially calculated from the number and type of pages linking to the page in question), but Dr. Reader also suggests using annotation software (like the ill-fated Third Voice) and, interestingly, education: we should teach Internet users how to quickly and accurately evaluate the reliability of a page.
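
For the curious, the idea behind PageRank fits in a few lines of Python. This is the textbook power-iteration formulation, not Google’s actual implementation:

def pagerank(links, damping=0.85, iterations=50):
    # links maps each page to the pages it links to. The score is
    # roughly the chance that a random surfer, who usually follows
    # links but occasionally jumps anywhere, is found on that page.
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:  # a dead-end page shares its rank with everyone
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))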

All in all, an interesting lecture.

The march of the marketeers: invasive advertising and the Internet
Dr. Ian Brown, University College London

I didn’t learn much from this lecture, but that’s only because I’m very interested in the subject anyway and keep abreast of all the latest developments. However, it was a very comprehensive and up-to-date lecture, unlike some of the reporting you see in the mass media. One thing that I did find interesting was Dr. Brown’s claim that some digital TV channels have ‘unmeasurably small audiences’.

Since audiences are measured by sampling a few hundred or thousand people who have little monitors attached to their TVs, if no-one in the sample group watches a programme or channel then, as far as the survey company is concerned, no-one in the entire country watched it. Even for supposedly popular programmes such as Nationwide League football matches on ITV Digital, there were zero viewers in the sample group. This is understandably causing problems with advertisers.
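
The arithmetic behind those zero-viewer figures is straightforward. With my own illustrative numbers for panel size and audience share:

# If a channel's true audience share is p and the panel has n homes,
# the chance the panel registers nobody watching is (1 - p) ** n.
for n in (300, 1000, 5000):        # hypothetical panel sizes
    for p in (0.001, 0.0001):      # 0.1% and 0.01% shares
        print(n, p, round((1 - p) ** n, 3))

On a panel of 300 homes, a channel with a genuine 0.1% share registers zero viewers about three-quarters of the time.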

Dr. Brown went on to talk about Tivo and all the rest, but I’m not going to cover that.

And all the rest…

I’m giving a very skewed view of the Festival here because I only took notes on things that were completely new to me and that I felt would interest people here. Consequently, I didn’t take any notes in the space lectures I went to, even though some of them, such as ‘Living and working in space’ by Dr. Kevin Fong and the lecture given by Sir Martin Rees, were excellent. The former was a very entertaining and informative lecture about space medicine on long-duration space missions, and the latter was all about posthumans and the Fermi Paradox.

I was actually stunned by Sir Martin’s lecture; not because of its content (I read lots of SF, thank you very much) but because it was coming from him – the Astronomer Royal, no less! In the past, such respectable people wouldn’t touch esoteric subjects like posthumans with a bargepole.

Then there was the talk on DNA nanomachines by Dr. Turberfield from Oxford University; before that lecture I hadn’t quite grasped the possibilities of DNA assembly, nor had I truly understood how DNA computing could be used to solve a variant of the travelling salesman problem – but afterwards I did (in other words, it was a good lecture). Dr. Turberfield also showed a model of his current work in trying to construct a DNA nanomachine motor, which he confesses probably doesn’t have much immediate practical use but certainly is fun.

Most of the lectures I attended were pretty good; some were excellent, of which I’ve only mentioned a few above. If you ever find that the BA Festival is taking place nearby one year (next year it’s in Salford) then it’s probably worth getting hold of a programme and attending for a day or two. You’ll learn a lot.
