Inarticulate

The Guardian’s Life section (science) has an article about the impenetrable writing favoured by scientists when writing in journals. This is hardly a new development, but it’s no less interesting or disappointing for that. What is disappointing is that the author, Chris McCabe, has reduced this interesting subject to a directionless and misguided article, which is rather amusing since he’s supposed to be a key member of the resistance against bad writing. The article consists largely of the trivial argument that no one understands scientific papers, supported by far too many anecdotes and examples of bad writing. It’s not until the last couple of paragraphs that he actually gets to the point, such as it is.

While I’ll be the first to admit that there’s far too much bad writing in journals, a lot of what people might consider to be impenetrable is always going to be impenetrable. McCabe uses the following excerpt as a prime example of bad writing:

“These findings support the hypothesis that spatial learning may depend on neuronal input from the entorhinal cortex to dentate granule cells via perforant path and LTP-induction at perforant path-dentate granule cell synapses in pathway-specific semi-interactive modes of operation.”

Now, this could have been written much more clearly, but the real reason it’s impenetrable is that over 99% of the population have no idea what these words mean: spatial learning, neuronal input, entorhinal cortex, dentate granule cells, perforant path, LTP-induction, perforant path-dentate granule cell synapses, pathway-specific semi-interactive modes of operation. Is it any surprise, then, that it reads badly?

Scientific papers aren’t aimed at a general audience. They’re aimed at the highly specialised scientists, numbering in the tens of thousands, who actually care about and understand the particular field. Before I’m accused of being elitist: there are articles – called reviews – in journals that do attempt to explain specialised subjects like the one above to the wider scientific community. Most reviews probably wouldn’t be suitable for the general public because, like papers, they assume a certain amount of background knowledge. If they didn’t, they’d be far too long. As it is, journals impose very strict word limits on papers, and scientists unfortunately have to be terse in order to meet them.

Yet even when scientists aren’t actively trying to be terse, their writing often reads that way. Take this sentence from the methods section of my dissertation:

“The slices were subsequently transferred to a recording chamber, also perfused with oxygenated aCSF at room temperature.”

Makes no sense at all, does it? But what’s the point of padding it out when all of my readers know exactly what I’m talking about and just want me to get to the good bits, namely the section where my methods differ from everyone else’s? In just the same way, when I try to read this analysis of the Superbowl, I have no idea what they’re talking about when they say ‘third down’ and ‘two point conversion’, but it doesn’t bother me because the article isn’t written for those completely unfamiliar with American football – it’s written for fans.

Even if writing in journals were improved (and it needs improving), it would still remain totally opaque to the general public. We only start nearing the general public at the level of newspaper articles, TV programmes on science and (perhaps) New Scientist.

The problem McCabe is trying to address isn’t really about bad science writing for scientists, it’s about bad science writing for the public. To pretend that scientists get their negative image from their writing in journals is like claiming that Christina Aguilera is popular because of her wonderful personality – in other words, it’s completely missing the point. It might be a tired cliche to say that we need to ‘build a bridge’ between the scientific community and the public, but it’s still true.

Our entire culture’s schizophrenic attitude to science, one of simultaneous awe and hatred, of hope and despair and of magic and boredom, desperately needs mending. It’s not just a job for the scientists, it’s a job for politicians, teachers, parents, children, journalists – everyone.

So, long live terse journal papers – but more importantly, long live Science!

Tutorials &c.

So many things have happened in the past week! A final success at badminton, boardgame tournaments, computational neuroscience, strange and wonderful things happening on the next planet out, lots of good new books, and tutorials. I will deal with them all in time, but first, tutorials.

One of the distinguishing features of Oxbridge is the tutorial system, in which each undergraduate attends a few one-hour tutorials every week. In most tutorials, the attention of the tutor (usually a fellow or postgrad) is divided between only three or at most four students, so they can spend a very intensive hour discussing the topics covered in that particular course. It’s thought that tutorials (they’re called supervisions at Cambridge) are one of the principal ways in which Oxbridge provides a ‘superior’ education to that found at other universities.

Whether or not tutorials are as good as they’re made out to be is a difficult question that depends on a number of factors, such as the skill of the tutor, the commitment of the students and so on. The reason I’ve brought the subject up is not to talk about their value – it’s because I’ve been asked to give a set of tutorials by my department.

It turns out that there are only two people in the department who know anything about phototransduction (the process by which photons hitting the retina are converted into information), and I am one of them. The other person, who knows vastly more about the subject than I ever will, is not able to give tutorials on it, so I’ve agreed to give it a go. Despite what many of my friends fear, I really do believe I can do a good job of helping and teaching people. I’ve spent a rather large part of my life doing things that involve helping people understand difficult concepts and retain new facts, so I hope I have something useful to offer undergraduates. Oh, and I do know how phototransduction works – thankfully, it’s a rather logical subject to explain, even if it isn’t fully understood.

The thought of giving tutorials to undergraduates in the very near future is a chilling yet simultaneously intriguing prospect. Chilling, because it means that these students will be partly relying on me to help them do well at their exams, which is no small responsibility. Intriguing, because they are finalist students (scientists and medics) and as such, there is a very distinct possibility that at least some of them will be older than me. Of course, it’s not unheard of for tutors to be younger than their students, and it certainly isn’t unusual for new graduates to be giving tutorials – I know a couple of friends in Cambridge who are in my year group and are already giving tutorials, albeit not to finalists.

Work and hair

My first chemicals arrived today! It may come as a surprise to many, but it isn’t the case (not entirely, anyway) that I just hang around in Oxford waiting for interesting things to happen – occasionally I do some real research. In preparation for an experiment on the mouse visual system, I’ve ordered a bunch of chemicals, radioactive tracers, film and nuclear emulsion over the past few days; it’s all very exciting, especially because they all cost ridiculous amounts of money. They might as well be hand-crafted for seven years by hundred-year-old monks on some remote Himalayan mountaintop – it’d probably turn out cheaper (although I suspect they wouldn’t work, unless the monks had a multimillion-dollar biochemicals facility. Then again, stranger things have happened).

The robot scientist developed at the University of Wales is an interesting little thing. I’ve been thinking about the logistics of programming something similar to investigate the properties of the visual system. Obviously it couldn’t be closed loop, but an awful lot of the experimental process can be automated, and could benefit from the increased analytical precision that computers offer. To be honest, I’m quite surprised that these kinds of experiments (and things like fMRI) aren’t much more automated than they currently are, especially since many of the researchers involved have extensive programming experience. Looks like I’ll be learning more Matlab then…
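To make the idea concrete, here’s a minimal sketch of the sort of open-loop automation I have in mind – in Python rather than Matlab, and with entirely hypothetical function names (present_stimulus, record_response) standing in for whatever a real rig would actually expose. It just enumerates a battery of stimulus conditions, presents each one, and logs the responses for later analysis:

```python
import csv
import itertools
import random


def present_stimulus(orientation, contrast):
    """Placeholder: drive the stimulus display (a hypothetical rig API)."""
    ...


def record_response():
    """Placeholder: read a measurement back from the rig."""
    return random.gauss(0.0, 1.0)  # dummy data, just for the sketch


def run_session(path="session.csv"):
    # Enumerate the full battery of stimulus conditions...
    orientations = range(0, 180, 15)  # degrees
    contrasts = (0.1, 0.5, 1.0)
    trials = list(itertools.product(orientations, contrasts))
    random.shuffle(trials)  # ...and run them in randomised order.

    # Present each condition and log the response for later analysis.
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["orientation", "contrast", "response"])
        for orientation, contrast in trials:
            present_stimulus(orientation, contrast)
            writer.writerow([orientation, contrast, record_response()])


if __name__ == "__main__":
    run_session()
```

Nothing clever, of course, but once the presentation and logging are scripted like this, the analysis can be scripted too, which is where the precision gains come from.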

An exchange on the Culture list about The Return of the King (not involving me):

1: “…Also, Aragorn finally sort of washes his hair…that was the plot thread I was most eager to see tied up.”

2: “I found that incredibly disconcerting when it happened. Almost scarier than the whole flaming eye unstoppable evil thing. If you know what I mean.”

1: “Agreed. Although it was still kinda flat and lank, to ease the transition as it were.”

Tsk. Girls, eh?

A Cure

I was out in Liverpool doing some shopping and sherpa-duty for a friend when I saw a wonderfully stupid sign for a herbalist in a shopping mall. I lamented to my friend that I didn’t have a camera with me, and then did a double-take and realised that I did – I’d just bought a new SE T610 phone, which has a small, low-quality yet working camera in it. So here is the picture:

The herb sign

Clearly these guys think that along with curing shingles, paralysis and migraines, they can also cure neurology. I didn’t know that the medical science of neurology was in fact an ailment or disease, or even that it could be cured, but I am glad that Herbal King has a way.

A Bright Picture

It’s not often that I see a piece of science writing that concisely explains a difficult concept in an accessible way, but this article at Wired on a pill that could prevent hearing loss had some well-written passages. The reporter, Noah Shachtman, used a nice turn of phrase to describe how a buildup of free radicals in the cochlear hair cells can damage and kill them:

When these hair cells are overstressed by loud noises, “free radicals” — unstable oxygen atoms that are short an electron — are produced, explains Southern Illinois University professor of audiology research Kathleen Campbell. The radicals start stealing electrons from nearby molecules, like the cell’s fatty walls. Enough of this thievery will kill the cell.

This can be stopped, however, if enough antioxidants — the body’s natural defense mechanisms — are supplied beforehand. The antioxidant molecules easily give up an electron. This supplies the free radical, and prevents its toxic larceny.

Hearing loss prevention is not a sexy subject, and some might wonder whether there’s any point making an effort to explain it, but no scientific concept is too small or apparently unimportant to explain well. Shachtman did a good job of injecting a bit of humour into the article while also providing a vivid image of what’s going on in the ear. More, please!

Dishonest science

BoingBoing linked to this interview about ‘brain technologies’ today, which I think will inevitably give people a completely wrong impression of the field. The interviewee, David Pescovitz (a science writer, not a scientist), touches on all the popular stuff of the moment, including the laughable ‘neuromarketing’:

Volunteers in one study completed a survey about their likes and dislikes in different product categories. Then, while under the fMRI scanner, they were shown items on the screen. The researchers, according to the company’s press release, “pinpointed the preference area of the brain. Using this data, the Thought Sciences team can now help their client to design better products and services and a more effective marketing campaign.” Like something from a Philip K. Dick novel, the technique is called “neuromarketing.”

If the Thought Sciences team really did manage to help their client design a better product and marketing campaign by looking at a few blobs on a screen, I will eat my shoes. fMRI and neuroscience are not at the stage where we can look at brain images and say, “Well, this person clearly prefers product A to product B because the ‘preference area’ is lit up more with A.”

Later on, he talks about the research done by Alan Snyder at Sydney on using transcranial magnetic stimulation (TMS) to improve cognitive function. This is very interesting stuff, but the fact that researchers at Adelaide were not able to reproduce the results is, of course, not mentioned. After all, why mention something like that when it would ruin such a good story?

There is a lot of irresponsible science journalism going on these days. Normally it is of the negative variety – genetically modified crops will eat your children – but increasingly you see a lot of cheerleading of biological and neuroscientific research. People like David Pescovitz are misrepresenting the true state of our knowledge and giving the impression that we have TMS and brain imaging all figured out, and that in a mere few years you’ll be able to zap your frontal lobes and gain 20 IQ points. Unless these journalists are totally incompetent, they can’t have failed to notice that these issues are far from resolved and, certainly in the case of the TMS experiment, very much disputed. Unfortunately the alternative to incompetence in this case is not much better – sheer dishonesty, fuelled by the need to file a sensational story at the end of the day.

Introspection

Lately, I’ve been thinking about why I go to sleep in lectures so often. It isn’t because I’m tired, or because I’m bored; there are plenty of times when I am both tired and bored and fail to fall asleep with the kind of dependability that I do in lectures. Nor is it because I’m sitting still for an hour; I often sit, tired and bored, for several hours and again, I don’t fall asleep. The process of sleeping is admittedly accelerated by the lecture being in a dark and warm room, but then those conditions are neither necessary nor sufficient, and of course they accelerate any form of sleeping.

And contrary to popular belief, I don’t actively try to fall asleep in lectures. In fact, for most lectures I’m engaged in a mental struggle to stay awake. It’s not as if I’m not making an effort here. So what is it that’s so unique about lectures that makes me fall asleep in them?

I think it’s divided attention. A lecture consists of auditory and visual stimuli, namely a lecturer talking and perhaps some slides, that reach my sense organs and are converted into information. During lectures, I try to attend to these outside stimuli, but for some reason I usually can’t. Traditional psychologists would say that this is because the stimuli aren’t salient enough to keep my attention from drifting off into introspection. Which basically means, I’m not paying attention because I find the lecture boring.

I don’t agree with that; I’ve been in many lectures whose topics I find highly interesting and important and I still manage to doze off, even if only for a few seconds. I think it has more to do with the presentation of the information; that is, the nature of the stimuli. I would venture that the distilled information bandwidth of most lectures is a constant low enough to be easily processed by most people, including me, consequently leaving a fair amount of spare processing power sloshing about doing nothing (I appreciate that it’s not particularly accurate to use a computer as a metaphor for the brain, especially in terms of the brain having a linear and generalised pool of processing power, but bear with me). This spare power might be used for any number of things, which could include further processing of the lecture information, processing of other non-lecture stimuli, or simple introspection.

In a lecture, I believe I use a significant portion of my brain to attend to the lecture itself. The rest of my brain attends to something else, such as what I’m going to cook for dinner tonight, or how to design a new kind of streetlamp cover that would reduce light pollution. Most of the time, these two attentive streams can coexist happily and independently without infringing on each other’s processing power. But when some event occurs that upsets this balance, my introspective stream can start gobbling up processing power from my lecture stream (without my conscious notice). At this point, I stop paying attention to the lecture, which means that I essentially can’t hear or see what’s in front of me, despite being awake*. From there, it’s an easy hop, skip and jump to falling completely asleep, which I would compare to a sort of cascading, spiralling experience in which my neurones progressively succumb to whatever signals cause me to lose consciousness.

*Obviously I can still hear and see. But I’m not paying attention to those senses, which means that if you asked me what the lecturer had just said, I wouldn’t be able to tell you.

Then I wake up a few seconds or at most a minute later.

It’s essential to remember that the reason this process happens in lectures and not, say, during a conversation is that the information bandwidth is constant, which means that my brain can (with reasonable confidence) allocate processing resources to something else. A conversation, on the other hand, has high fluctuations in information bandwidth that my brain would have to keep an eye on.

Another equally important point that I haven’t mentioned yet is that in a lecture, the only stimuli that are changing are those directly related to the lecture itself, i.e. the lecturer and his slides. The rest of the room is basically unchanging. So, to push the computer analogy even further, imagine that my brain encodes auditory and visual information via a compression scheme akin to MPEG; in other words, it only pays attention to things that change. If I stop paying attention to the lecturer and his slides, then I’m not paying attention to any external stimuli at all! This provides another compelling reason why I don’t just spontaneously fall asleep while walking around Oxford.
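To stretch the analogy into toy-code form – and this is nothing like real MPEG, which uses motion-compensated block encoding, but it captures the ‘only encode what changes’ idea – here’s a tiny Python sketch of a stream that keeps a ‘frame’ only when it differs from the last one:

```python
def delta_encode(frames):
    """Yield (changed, frame) pairs, keeping a frame only when it
    differs from the previous one (change-only encoding)."""
    previous = None
    for frame in frames:
        if frame == previous:
            yield False, None  # nothing new: safe to ignore
        else:
            yield True, frame  # something changed: worth attending to
            previous = frame


# A mostly static 'scene', like a lecture room where only the slides move:
scene = ["room", "room", "room", "slide 2", "slide 2", "slide 3"]
for changed, frame in delta_encode(scene):
    print("attend!" if changed else "ignore", frame)
```

If the only entries that ever change are the lecturer and his slides, then ignoring those leaves nothing external to attend to at all.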

Finally, I think the reason this happens to me rather than to everyone is to do with the balance between my two attentive streams. The possibilities are that lectures (for some reason) are unusually poor at holding my attention, or my imagination is overactive, or my attention-switching mechanism is kerjiggered.

It is, I believe, a very seductive and compelling hypothesis, and one more satisfactory than my previous ‘energy-conserving brain’ hypothesis – perhaps even worthy of more investigation…

Senses

Senses (Flash) – a nice little interactive quiz from the BBC that deals with psychological phenomena and illusions (and some other not quite as interesting stuff), via Bhisma.

On memory

My four-year DPhil here at Oxford is funded by a studentship from the Wellcome Trust. This is a great thing because it means I have enough money to, for example, live, and it also means that any research group I join will not have to pay for me. It’s even better than that, though, because I’ve just found out that my studentship comes with a research grant that goes to whichever group(s) I join. So in effect, any group that takes me on would be gaining money!

As a result, I’ve already had group leaders sidling up to me and nonchalantly informing me of the highly interesting and vital research work that they are doing. It is pretty cool.

I’m not actually expected to join an existing research project though; I’ve been told that I can basically think of any project (within reason) that involves ion channels. This might seem a little restrictive, but you have to realise that every cell in every organism has ion channels and they’re essential for, well, every biological process. So recently I’ve been giving some thought to memory and cognition, and how it might be improved by drugs.

Recently a drug called modafinil (aka Provigil) has been making headlines for how drastically it can boost concentration and wakefulness. All of this is true. Even better, modafinil doesn’t appear to have any real side-effects at all. Unsurprisingly, legions of Americans, very few of whom actually have sleep disorders (which is what the drug is supposed to be for), have been trying to get their hands on this wonder-drug.

Would they be worried, or at least surprised, if they knew that we have no idea how modafinil works? Probably not. But I’m quite interested. I’ve started reviewing the literature on modafinil and haven’t been able to find much addressing the molecular and cellular mechanisms of the drug’s action; sure, there have been plenty of clinical studies checking to see whether it works or not, and people have been looking quite hard to find any harmful side-effects – but this doesn’t tell us how it works.

Of course, there are some groups trying to figure out how it works, but most of the progress they’ve made is in finding out how it doesn’t work (it doesn’t seem to involve the dopamine system, for example). So I think this is an interesting area for research, and one that could shed some light on how normal memory and cognition work. My worry, however, is that since modafinil appears to have such global effects on consciousness, it might be very difficult to work out how individual systems are being affected.

Luckily, I have about a year before I have to start the serious research component of my DPhil so I have plenty of time to make my mind up on a project.

More neuroscience

The theme of today’s conference sessions was attention, of which William James famously said, “Everyone knows what attention is.” (I never want to hear that phrase again. Ever. I heard it enough today.)

I wasn’t too enamoured with the first three talks today, which were arguably given by the big-hitters of the conference. I didn’t think any of them were particularly compelling speakers, and they assumed quite a lot of knowledge on the part of the audience, which was mostly composed of graduates, many of whom hadn’t even specialised in neuroscience or experimental psychology.

Kia Nobre, from Oxford, gave an interesting talk about imaging the system that controls attention in the brain, but alas my brain isn’t working properly and I can’t recall what she said. Clearly some sort of cue is in order…

John Marshall, also of Oxford, talked about spatial cognition and whether it’s a right-hemisphere specialisation. Well, that was the title of the talk, at least – in actual fact, he ended up talking about perceptual neglect, which is an interesting enough subject, especially when considering Bisiach’s imaginal neglect, but I don’t think he said anything particularly new or interesting. Actually, that’s not quite true – he did say one interesting thing, though it didn’t have much to do with his talk.

He said that rather than spending our time trying to figure out the anatomical specialisation of different parts of the brain (e.g., insisting that Broca’s area of the brain is only about language), we should instead think that one region in the brain might have a number of specialised functions depending on what neural circuits are active at that time. A useful insight.

Next was Prof. Stephanie Clarke of CHUV, Switzerland. She has the interesting idea that, much like the dorsal and ventral processing streams of the visual system, there are ‘where’ and ‘what’ streams in the auditory system. She presented some histological evidence for this, which was refreshing to see in a conference overburdened with psychophysical experiments, although of course she had her own psychophysical experiments as well to prove the functional point. These basically showed a double dissociation between the recognition and the localisation of sounds. Very intriguing stuff.

The last talk was by Geraint Rees, of UCL, and it was quite controversial. Rees is an excellent speaker, and he talked about his experiments aiming to show that awareness (and thus consciousness) resides in the parietal cortex (or at least, in the cerebral cortex), using a clever and peculiar fMRI experiment in which subjects ‘merged’ two different images and involuntarily switched between them.

The general consensus was that while it was a very clever experiment, of the sort that Nature likes to publish, his conclusions were a bit too ambitious and rested on a few too many assumptions…

At the end of the day’s sessions, there was a brief discussion that involved how genes might determine brain function. One of the speakers speculated about how we might have to consider that genes statistically alter the growth of various neuronal regions and bias them towards being able to learn and specialise in certain areas, e.g. facial recognition.

It was at this time, about 5:20pm, that I came up with an idea. I was a bit tired and thinking that I might like to leave the lecture room but couldn’t really because I was sitting in the middle of a row and someone was talking. I then thought, wouldn’t it be wonderful if I could just transfer my consciousness into multiple locations so I could keep an eye on different events – a bit like being on the Internet and participating in several IRC chats at the same time. Not a new idea, I know, but it was quite vivid at the time.

Tomorrow is the last day of the conference, but it won’t give me much respite because I have a symposium to attend on Friday. Five straight days of having to wake up at 8am… I honestly don’t know how I’ll still be alive by this weekend.