I never knew that Goosnargh was actually a real word until I read about ‘Goosnargh chicken’ in the Sunday Times today. I blame Douglas Adams in So Long, And Thanks For All The Fish:

‘Goosnargh,’ said Ford Prefect, which was a special Betelgeusian word he used when he knew he should say something but didn’t know what it should be.

I suppose I should’ve suspected something, given that I’ve read The Meaning of Liff, which interestingly enough is completely online, although without the funny illustrations.

The Sparrow

Some excellent news – it seems that Mary Doria Russell’s novel The Sparrow is on track to being made into a movie. This is particularly good news because the studio concerned appears to actually understand the novel, as opposed to Universal, who originally optioned the screenplay and were going to hack it to pieces.


Tonight I saw the planet Mars with my own eyes.

We’ve all been hearing that Mars is closer to Earth than it has been for almost sixty thousand years. Unfortunately, since I live in the UK I haven’t really had the opportunity to look for Mars, as our skies have been swathed in cloud for the past week. This evening, though, I noticed that the skies were very clear and began to scan them occasionally. A few hours later, I pointed to a chip of bright light in the south that almost seemed as big as a disc and said to my friends, “That’s got to be Mars.” There wasn’t anything brighter in the sky.

When I got back about an hour ago, I decided that I wanted to look at Mars through my telescope. I’ve never been a particularly diligent astronomer – to be honest, I’m just not that interested in it. So I’m forced to say that the lovely Bausch and Lomb reflector telescope that I won five years ago from the Mars Society was a bit wasted on me. Nevertheless, when it arrived from America and I put it up on the first clear night that came, I managed to see the moons of Jupiter and the rings of Saturn, all in one night, by the simple (and simple-minded) expedient of pointing it at interesting-looking bright points of light.

Of course this is no way to use a telescope; what I should have done is lined it up on the pole star, gotten a star chart out and actually figured out what the telescope’s various lenses did.

I’m embarrassed to say that I haven’t gotten any better at all in the five years since, so tonight I just got the telescope out, lined it up on Mars using the little finderscope and spent about ten minutes fiddling about with the fine-grain controls, swapping lenses in and out and messing about with the focus. While I was futilely scanning around, I noticed a bright patch of light at the top of the viewfinder. I immediately looked towards it and brought it into focus.

It was Mars, and I could see it as a bright and clearly defined disc. It was a tiny disc, but it was still there. After pausing for a couple of minutes just to savour the moment and think that I was finally seeing it with my own eyes, I swapped in a 7mm lens to up the magnification – and yes, it became a slightly larger disc. Maybe I was just imagining it, but I convinced myself that I could just about see the poles; there was an almost imperceptibly subtle difference in the shading on the disc.

I stood there, stooped over the viewfinder, and I thought for a moment that I could reach out and touch the planet. I thought, What I would give to walk there for just a few minutes. And then eventually I carefully packed away the lenses, collapsed the tripod and carried the telescope back inside.

It’s easy to look at the photos of Mars on the Internet and in the newspapers and wonder what the point is of staying up at night in the cold and peering through a telescope to see an image that isn’t anywhere near as big or colourful or clear. I know, because that’s what I thought yesterday.

But the sensation of seeing another planet with your own eyes, a planet that could have someone walking on it within your lifetime, a planet that’s big enough to hold a million dreams – it’s not something that you can get by looking at a piece of paper or a computer monitor. It conjured up the same feeling I had when I first saw the Milky Way, that the universe is impossibly vast and beautiful and bursting with things to see, that we as a species have the wonderful opportunity to explore. There’s an awful lot to see out there.


DataGlyphs – a ‘robust and unobtrusive method of embedding computer-readable data on surfaces such as paper, labels, plastic, glass, or metal’ by Xerox PARC. DataGlyphs use a pattern of forward and backward slashes to represent ones and zeroes, and at 600 dpi you can get up to 1KB per square inch of printed material. Clever stuff.
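As a toy sketch of the idea (nothing like PARC’s real encoding, which adds synchronization marks and error correction), each bit can simply be drawn as a forward or backward slash:

```python
def bytes_to_glyphs(data: bytes, width: int = 32) -> str:
    """Toy DataGlyph-style encoder: '/' for a 1 bit, '\\' for a 0 bit."""
    bits = "".join(f"{byte:08b}" for byte in data)
    glyphs = "".join("/" if b == "1" else "\\" for b in bits)
    # Wrap the glyph stream into rows, like a printed patch
    return "\n".join(glyphs[i:i + width] for i in range(0, len(glyphs), width))


def glyphs_to_bytes(pattern: str) -> bytes:
    """Decode the toy pattern back to bytes, ignoring whitespace."""
    bits = "".join("1" if ch == "/" else "0" for ch in pattern if ch in "/\\")
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))


patch = bytes_to_glyphs(b"Mars")
print(patch)
print(glyphs_to_bytes(patch))  # b'Mars'
```

At a real 600 dpi the slashes would be tiny enough to read as a uniform grey texture, which is what makes the scheme unobtrusive.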

Scare Waves

A lot of people are getting angry and scared about new mobile phone masts being erected in their local areas; the vast majority of these masts are for 3G operations, which have a smaller coverage area than current mobile phone masts.

People don’t like the masts because they think they might be harmful to their health. It’s a perfectly understandable concern that happens to have no scientific basis. Of course, this has never stopped people in the past, and so residents try to avoid using the health argument when opposing 3G masts (because they know they’ll probably get beaten) and instead use things like planning grounds and so on.

This sort of thing irritates me because the central issue – whether or not 3G masts are dangerous – is being sidestepped. It’s a very seductive idea, that the invisible energy pouring out of these masts might cause, for example, cancer, but there’s no scientific reason to believe it.

The problem is, most people don’t understand the scientific process. Take, for example, North Cornwall MP Paul Tyler, who asked the Home Secretary David Blunkett to ‘reveal the scientific and medical data which proves they are safe.’ Mr. Tyler clearly doesn’t understand that you can’t prove that anything is safe. Sure, you can prove that something is dangerous, and you can say that you are almost certain that something is safe, but you can’t prove that it’s safe, at least in the classical sense.

And even if the data were released, I don’t think that the anti-mast campaigners would be satisfied; there’d be claims of bias and they’d point to the one or two studies (out of the dozens or hundreds) that agreed with their view. Have these guys heard of type 1 errors? Have they hell. The belief that all studies are equal and you can just pick the ones that you like is absurd.
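To put rough numbers on that cherry-picking: if the masts are harmless and each study tests at the conventional 5% significance level, a handful of ‘positive’ studies are expected by chance alone. A quick back-of-the-envelope sketch (the study count here is made up for illustration):

```python
# Assume the null hypothesis is true: masts are harmless. Each independent
# study still has a 5% chance of a false positive (a Type I error).
alpha = 0.05
n_studies = 40  # hypothetical number of independent studies

# Expected number of chance 'positives' among the studies
expected_false_positives = alpha * n_studies

# Probability that at least one study 'finds' an effect purely by chance
p_at_least_one = 1 - (1 - alpha) ** n_studies

print(f"Expected false positives: {expected_false_positives:.1f}")  # 2.0
print(f"P(at least one positive): {p_at_least_one:.0%}")            # 87%
```

So the ‘one or two studies’ that campaigners wave around are exactly what you would expect to see even if the masts were perfectly safe.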

All of this is a shame, because statistics isn’t a particularly difficult subject and there’s no reason why it couldn’t be taught compulsorily in schools. Neither is the idea of science being a process rather than a magical corpus of facts – but that isn’t taught well either.

This reminds me of an article in the Times today about opposition to RFID. The argument is the familiar one of privacy – if retailers don’t remove the RFID tags from items after you’ve purchased them, then it’s possible (although pretty difficult and unlikely) that someone could scan your tags remotely and figure out (for example) what you bought from the supermarket, what clothes you’re wearing, and so on.

I sympathise with this view; I wouldn’t like to be scanned to see what I have on me. But RFID tags do have other uses outside the store; for example, they could let microwave ovens and fridges identify items and so provide cooking or refrigeration instructions. One possibility is that your fridge could provide you with meal suggestions based on its current contents. Another is that the tags could aid with faster and more accurate recycling.

The answer seems simple to me: give shoppers the explicit choice to have the RFID tags removed at checkout, and also identify the tags prominently so that they can be removed later. But the argument doesn’t seem to be based on reason as much as ideology. For example:

Caspian [an anti-RFID group] claims to have almost 6,000 members in 15 countries, with Britain now a ‘core constituency’. ‘What makes us powerful is that 78 per cent of people oppose this technology on privacy grounds, and 61 per cent on health grounds,’ she said. No health risk has been identified.

Why is it good that 61% of people oppose RFID on non-existent health grounds? I cannot even conceive of a way that RFIDs could harm your health; they’re low-powered, short-range and practically never ‘on’. Privacy grounds are all very well and good, but is it something to be proud of, that the public oppose a thing for completely the wrong reason?

Another excerpt from the article:

Ms Albrecht says that her interest stems from her religious convictions. “When I was eight years old, my grandmother sat me down after a visit to a grocery store and told me that there will be a time when people will not be able to buy or sell food without a number, referring to the Mark of the Beast, Revelations xiii,” she said.

“I made a promise to myself at eight years old that if there was ever a number to buy or sell food, I would stop what I was doing and fight it.”

Well, what the hell is that supposed to mean? All items at supermarkets already have barcodes. RFID tags are just another piece of technology that helps you locate and identify things. You might as well campaign against mobile phones because they let the ‘authorities’ locate you any time, anywhere.

A Love of Memory

When I got back home from Australia I took the opportunity to reread some of my favourite books. These included Kim Stanley Robinson’s Mars Trilogy, which is probably up there in my top ten of rereadable books (Cryptonomicon sits in pole position, having withstood at least a couple of years of sustained rereading).

I’d forgotten how much I enjoy the Mars Trilogy. It gets bashed so often for apparently being too dry, political and sometimes boring, but consider this – the first two books won the Nebula and Hugo awards, and the trilogy as a whole was singlehandedly responsible for getting me interested in Mars. There are few (if any) events in my life that I can point to and say, this changed everything, but that fateful day when I spotted the trilogy in a book club brochure must be one of them.

The last book in the trilogy, Blue Mars, is not considered to be as good as the first two, which is a fair claim to make but it certainly doesn’t mean it’s not a good read. In Red Mars, the first book, a group of scientists develop a longevity treatment that results in people regularly living to over 200 years old by the time of Blue Mars. This is all very well and good, but said geriatrics are having a really hard time with their memory.

Accordingly, one of the main characters joins an effort to develop a memory boosting drug. At this point, most authors would be happy to say, ‘…and then after much work they made the drug,’ or if they were feeling particularly generous, they might throw in a few choice words like ‘dopamine’ or ‘serotonin’. If you were really lucky, they might take the time to look up a diagram of the brain and mention the hippocampus.

But this isn’t enough for KSR, and it’s part of the reason why I love his books. He spends over seven full pages on a monologue/stream of consciousness that dives right into the way that memory works and how you might enhance it. That’s over two thousand words of detailed information and informed speculation, none of which is wildly wrong. In fact, most of it is right, it’s only the speculation that I have a problem with and even then I have to give him a lot of respect for giving it a good try. I would say that to have written that monologue, KSR must have read at least a few reviews on the subject and perhaps a book.

Here is the bit which I love and hate (and yes, the first paragraph is that long):

The original Hebb hypothesis, first proposed by Donald Hebb in 1949, was still held to be true, because it was such a general principle; learning changed some physical feature in the brain, and after that the changed feature somehow encoded the event learned. In Hebb’s time the physical feature (the engram) was conceived of as occurring somewhere on the synaptic level, and as there could be hundreds of thousands of synapses for each of the ten billion neurones in the brain, this gave researchers the impression that the brain might be capable of holding some 10^14 data bits; at the time this seemed more than adequate to explain human consciousness. And as it was also within the realm of the possible for computers, it led to a brief vogue in the notion of strong artificial intelligence, as well as that era’s version of the ‘machine fallacy’, an inversion of the pathetic fallacy, in which the brain was thought of as being something like the most powerful machine of the time. The work of the twenty-first and twenty-second centuries, however, had made it clear that there were no specific ‘engram’ sites as such. Any number of experiments failed to locate these sites, including one in which various parts of rats’ brains were removed after they learned a task, with no part of the brain proving essential; the frustrated experimenters concluded that memory was ‘everywhere and nowhere’, leading to the analogy of brain to hologram, even sillier than all the other machine analogies; but they were stumped, they were flailing. Later experiments clarified things; it became obvious that all the actions of consciousness were taking place on a level far smaller even than that of neurons; this was associated in Sax’s mind with the general miniaturization of scientific attention through the twenty-second century. 
In that finer-grained appraisal they had begun investigating the cytoskeletons of neuron cells, which were internal arrays of microtubules, with protein bridges between the microtubules. The microtubules’ structure consisted of hollow tubes made of thirteen columns of tubulin dimers, peanut-shaped globular protein pairs, each about eight by four by four nanometres, existing in two different configurations, depending on their electrical polarization. So the dimers represented a possible on-off switch of the hoped-for engram; but they were so small that the electrical state of each dimer was influenced by the dimers around it, because of van der Waals interactions between them. So messages of all kinds could be propagated along each microtubule column, and along the protein bridges connecting them. Then most recently had come yet another step in miniaturization; each dimer contained about four hundred and fifty amino acids, which could retain information by changes in the sequences of amino acids. And contained inside the dimer columns were tiny threads of water in an ordered state, a state called vicinal water, and this vicinal water was capable of conveying quantum-coherent oscillations for the length of the tubule. A great number of experiments on living monkey brains, with miniaturized instrumentation of many different kinds, had established that while consciousness was thinking, amino acid sequences were shifting, tubulin dimers in many different places in the brain were changing configuration, in pulsed phases; microtubules were moving, sometimes growing; and on a much larger scale, dendrite spines then grew and made new connections, sometimes changing synapses permanently, sometimes not.

So now the best current model had it that memories were encoded as standing patterns of quantum-coherent oscillations, set up by changes in the microtubules and their constituent parts, all working in patterns inside the neurons. Although there were now researchers who speculated that there could be significant action at even finer ultramicroscopic levels, permanently beyond their ability to investigate (familiar refrain); some saw traces of signs that the oscillations were structured in the kind of spin networks that Bao’s work described, in knotted nodes and networks that Sax found eerily reminiscent of the palace of memory plan – rooms and hallways – as if the ancient Greeks by introspection alone had intuited the very geometry of timespace.

The reason why I hate it (and hate is too strong a word) is because I don’t happen to think that his explanation for memory and consciousness is true at all. It has a very Penrosian feel about it, and I’ve never really thought that it was possible for there to be a working quantum computer residing in our neuron microtubules; neither have I seen the necessity for it. Plus, the idea that you would use alterations in the tubulin dimer amino acid sequence is really not workable (although I suppose that enzyme-mediated residue methylation or ubiquitination wouldn’t be out of the question).

I love this passage because it almost makes sense. KSR clearly understands what he’s talking about, and I’m pretty sure that he realises it’s extreme speculation. The rest of the monologue is much like this, discussing terms that neuroscientists bandy about regularly but don’t actually understand fully, like LTP and glutamate receptor sensitizers.

In a way, to most readers it doesn’t matter if the science makes any sense. What matters is the flow of the words and the beautiful progression from one magical concept to the next that science seems to make effortlessly; in this passage, KSR has managed to convey some of the feeling that you experience when you understand (or think you understand) a horribly complicated system; the feeling when everything shifts, just so, and interlocks into place.

The fact that it also happens to largely make sense is something that I truly appreciate; it would have been simple enough for KSR to just make all of it up, but I think KSR must have actually enjoyed learning about how memory might work for him to have written this.

Lack of imagination

Once again we are at that special time of year when the GCSE and A-Level results are announced for secondary school students here in the UK. There’s almost no point reading the newspapers since they always run the same stories. If the results for an exam improve, that’s because it’s getting easier. If they get worse, it’s because of lowered standards. There’ll be a few people complaining that they didn’t get into Oxbridge with ten A’s at A-Level, and of course there are the stories about the child wonders.

This year it seems that an eight-year-old boy gained an A* at Maths GCSE. Funny that, how it’s always a Maths or Computer Science exam that people seem to pass first. (My take is that GCSE Maths and similar subjects are trivially easy for kids who have the right sorts of minds; there’s nothing inherently difficult about simultaneous equations or calculus, it’s just that they’re boring and most people can’t be bothered putting the effort in.)

There was also another story last week about a 13-year-old boy who was expected to get a bunch of A-Levels and had been refused entry to university because he was too young. I find this crazy. There is absolutely no way that a 13-year-old can get the best out of university; quite apart from not being able to drink, it’s just not legal for a 13-year-old to live on their own. So say you go with your parents; well, that kind of kills off any possibility of living a normal independent university life.

But that’s not the main problem I have with kids doing exams so early and wanting to go to university. My problem is that there are far more interesting and useful things to do than take exams at such an early age. This doesn’t mean that they should spend all their time playing football and mucking about; rather, it means that if they are interested in, say, computers or science, they could try their hand at programming a game or devising experiments. Just not exams!

When I was at San Diego, there was a 16-year-old schoolboy in the lab who had been there for a year designing and running his own psychological experiments. He was very sharp and a very nice guy, and I was happy to see that instead of taking a load of pointless exams (who needs ten A-Levels?) he was doing something interesting and productive. Plus, I’m willing to bet that university admissions officers will be more impressed with the three papers he’ll have published than a couple of high exam marks.

I agree that there is a point to doing exams, but I feel that it’s an unconscionable waste of time pushing kids to do a bunch of exams five years ahead of normal. There are so many better things to do.

Walken vs. Depp

On Johnny Depp becoming the front-runner for Willy Wonka in Tim Burton’s new Charlie and the Chocolate Factory movie (from TrekToday forum):

Tim Burton?
Roald Dahl?

C’mon folks, this is a match made in heaven. I’m looking forward to it. Tim Burton is probably my favorite director. Although I’d prefer Christopher Walken to Johnny Depp. Just imagine…

“You…fat boy…what did I tell you…about drinking…out of the chocolate river? And you…with the gum…shut it…”


Saw a great notice at the gym today:

Q: Why isn’t there a weight scale available here?

A: We did use to have a scale; however, customers didn’t believe it. We advise that you buy a set of scales for your personal use at home instead.


While reading a thread about PowerPoint on MetaFilter, I remembered something a friend said about me last weekend. She was a fellow student in my neuroscience course this year, and she said, “You could always tell whether a lecture or workshop was going to be any good by watching whether Adrian fell asleep in the first half hour.” As always, I played an invaluable role in that course…