Reith

Once again, it’s the wonderful time of year when the BBC’s Reith Lectures are being presented. I’ve followed the Reith Lectures on my weblog for quite a few years now, so when I discovered that this year’s lecturer is none other than my old San Diego research supervisor, Prof. Vilayanur Ramachandran, I was pretty damn surprised. The theme of his lectures is ‘The Emerging Mind’.

I actually first heard that Ramachandran was over in the UK when I was at an interview at Oxford University; I’d just been asked a question about multimodal sensory integration and the binding problem, and I responded by using synaesthesia as an example and mentioning my time in San Diego. One of the interviewers then said that Ramachandran would be over at Oxford next week to speak. “Really?” I said thoughtfully.

Anyway, on Tuesday I went to a Cambridge Science Society lecture on ‘The Phenomenal Brain’ by a visual neuroscientist called Richard Gregory. After the talk I had a brief chat with him, asking if he was familiar with the blind spot theory of qualia espoused by Ramachandran. He was – he collaborated with Rama on the original experiment! “I hear that Rama will be speaking at Trinity on Friday,” he told me. “Really?” I said thoughtfully.

The reason I didn’t know about this is that the organisers of the Trinity talk, the Trinity College Medical Society, had seen fit not to publicise it in any way other than an email to the University’s medical mailing list. Thus, poor saps like myself, a mere scientist, didn’t get wind of it without going to the Trinity porters and working out which rooms had been booked for Friday (that, and asking my medic friends about it).

So the upshot of all of this is that I went to a packed talk given by Rama yesterday evening. Rama was in top form, exuding a real energy and enthusiasm about his subject while gesticulating madly and delighting the audience. I fear that his ideas about synaesthesia and the development of language and metaphor may have been a bit too novel for some Cambridge students, but it seemed like most people really enjoyed the talk. I wasn’t sure whether he’d recognise me, but when I put my hand up to ask a question he remembered immediately. “Hello, how are you!” he boomed. “Uh, fine, thanks…” I said, taken a bit aback. “This guy worked in my lab last summer,” he explained to the audience.

It was all very cool, and I had another chat with him afterwards about my future plans; he suggested that I should apply to UC San Diego one of these days. And then I went out for a curry and watched an episode of 24 downloaded from the net, rounding off an ideal evening.

Front Page

I was feeling a little depressed and annoyed today when I was told that my entry for the college Science Essay Prize hadn’t won. So, to cheer myself up, I submitted it to the Kuro5hin community website and to my delight, my essay about synaesthesia has met with their approval and been posted on the front page.

The Drugs Don’t Work

Over the past two days I’ve had an excellent two-part workshop in my neuroscience course on addiction, covering what we know about the causes of drug addiction at the molecular, cellular and cognitive levels, reward pathways in the brain, and possible treatments, vaccines and cures. Definitely one of the most thought-provoking workshops I’ve attended, and it’s also made me appreciate the unique way we’re taught in our course.

I don’t have any lectures, not in the traditional sense. Every week, we have two three-hour long workshops that cover a specific topic; usually the teaching is a mixture of didactic and interactive, depending on the subject material and the organiser. Sometimes it’s more one than the other, but even the most didactic organisers try to get us to talk in discussion groups to figure out problems. The end result is that people feel far more comfortable about asking so-called ‘stupid’ questions and voicing their opinions than in traditional unidirectional lectures, which of course is a good thing.

After the two workshops, we split up into four groups, each of which reviews a paper or two and presents the review at another three-hour session. Apart from the useful variety of viewpoints this gives you, it also helps people develop their speaking skills tremendously – I’ve seen great improvements in my own and other people’s presentations over this year.

Given that the total course size is hovering around 13 or 14 these days, I’d say it’s pretty decent. Of course, it isn’t always good and we’ve had some boring workshops. Plus, no amount of good workshops could lift me out of the malaise I found myself in after being forced to study development of organisms.

Anyway, this last session was great; it helped that one of the organisers was Prof. Wolfram Schultz, the most recent recipient of the Golden Brain award.

We started off with a discussion of how you can become addicted to something psychologically.

Wolfram: The real problem is not drugs like cocaine or heroin, it’s tobacco and alcohol. Those two are the biggest health problems, and they cost the country the most. Part of the problem is the availability and the context-dependency of addiction and withdrawal – if you’re trying to abstain from drinking and there’s a wine bottle in front of you, you’re just going to start drinking again. And there’s a big problem with obesity these days as well – it’s all these supermarkets all over the place! You go into Sainsburys out of town and you just want to spend £2 but end up spending £50!

I find it great when people go off on bizarre tangents.

So, a lot of our talk today concentrated on how we’d treat drug addiction – something that isn’t going well at the moment, judging by a recent study that tracked a group of addicts over many years. Most of the addicts were either dead or in prison, and the best-case scenario was that they were back in rehab. Clearly not ideal from anyone’s point of view.

The problem with treating drug addiction is that the changes drugs make to your brain at a neurological level are so pervasive and long-lasting (the effects can persist for years or decades) that it really is not possible to create a magic bullet that will quickly and easily ‘cure’ a full addict. Drug addiction is a mixture of a lot of different and nasty things; it seriously upsets the balance of chemicals in your brain, and it creates a warped form of learning that is the basis of the addiction. To cure addiction, you’d basically have to erase something that you’ve learned; a bit like erasing your liking of chocolate, but much, much harder (since liking chocolate is far less intense than being addicted to cocaine).

So, appropriately, one of the best ways to overcome an addiction is simply to relearn your behaviour, slowly and over a long time, through cognitive therapy.

Current treatments for addiction address four areas:

1) Alleviate withdrawal symptoms to prevent craving and relapse. Also related to point 4.

2) Prevent drugs from reaching their targets in the brain and causing addiction. Unfortunately, this doesn’t work for addicts at all, since along with preventing addiction it also prevents the rush – so what’s the point, really? Apparently some addicts dedicated to rehab do take these drug antagonists, though.

3) Substitute the drug, e.g. methadone. Many people are opposed to this, seeing it as jumping out of the frying pan and into the fire, hence this exchange during the workshop:

“Over a hundred thousand people-”
“-are addicted to methadone.”
“No, I was going to say, use methadone as an alternative to heroin. Since it’s less dangerous, it’s an improvement.”

4) Alter the addiction process. Treatments such as Zyban and naltrexone help reduce addiction and craving. The only problem is, no-one knows exactly how they work, and they have some particularly nasty side-effects. Zyban, for example, gives 1% of people seizures.

A fair few people found my suggestion intriguing: if you want to both prevent addiction and help addicts, you need some kind of positive alternative – a drug that you can’t get addicted to! Or at least a situation in which people won’t want to take harmful drugs. Evidently not many people have read Brave New World.

Others suggested creating a new association for drug paraphernalia other than euphoria – pain, for example – so that addicts would be too scared to relapse. It all got a bit Clockwork Orange-ish after that…

It’s important to realise how context-dependent drug addiction is. A famous study found that most of the thousands of Vietnam veterans who had become addicted to heroin had no problems with it once back in the United States, because the environment in which they had taken drugs in Vietnam was so different from home.

There’s also a fair bit of work being done on genetic susceptibility to drug addiction. As yet, there haven’t been any genes or polymorphisms identified in humans, but there have been interesting studies done in fish, of all things. They basically took some zebrafish and conducted a place preference test – in other words, they addicted zebrafish to cocaine. It turns out that some mutant zebrafish don’t get addicted. Interesting stuff.

Mutant Intelligent Mice!

Now this is why I love neuroscience. In a recent weekly paper presentation, one of the groups in my class presented a paper called Genetic enhancement of learning and memory in mice. By altering a single gene, the authors of the paper managed to improve the mice’s learning and memory significantly, by up to 30%. Super intelligent mice, indeed.

What really amused me was the blithe PowerPoint presentation that accompanied the paper. For example, one slide read:

  • This study supports Hebb’s model of synaptic coincidence detection.
  • Suggests that 10-100 Hz activity in the forebrain is important for learning and memory.
  • Creating intelligent GM animals (humans?)
  • Future work: other species

And people have the gall to say that scientists aren’t socially responsible!

(I can go into greater detail about the paper if people want, but I didn’t think it was necessary for this post).

Adrian’s crazy day

Today I had to give two presentations: one summarising a paper about systems consolidation in memory, and another covering my research project this year. The research project presentation had been prepared well in advance, but as luck would have it, yesterday afternoon we hit upon a different way of statistically analysing my data which completely changed all of our conclusions. So last night I had to revise the best part of my project presentation.

As for the paper presentation, well, I had to prepare that from scratch last night too, because I’ve been really busy all week. I managed both perfectly well, and just before going to sleep I uploaded them to the Internet so I could download them in the room where I’d be giving the presentations.

Fast forward to this morning. It’s a grey and dreary day as usual in Cambridge, and when I get to the Anatomy department, where the presentation sessions are held, I think to myself, ‘Why hasn’t anyone bothered turning the lights on?’ Grumbling a bit, I walked up the stairs into the tea room and flicked the light switch. Nothing happened. It turned out that power had been lost to the entire site.

This didn’t prevent my workshop group from doing their presentations; a few people had brought laptops and others had theirs on disk. Alas, mine was too big to fit on a disk, and I don’t like the idea of burning a new CD every time I make a new presentation. And of course, without power, we had no Internet connection, so I couldn’t download it.

As luck would have it, the power came back on before I had to give my paper presentation – except the net connection was still down. Since I didn’t have any notes on me, I had to spend the tea break rapidly drawing diagrams on the whiteboard and trying to remember what I was supposed to be talking about; it didn’t help that the first presenter didn’t cover the study results, which meant I had to do some quick thinking.

So this was about 11:30am. I had to give my project presentation – the important one – in about an hour. I jumped on my bike, cycled back to college and burned a CD with my presentation on it. On a whim, I decided to test it on my normal CD drive to see if it worked. My computer hung for a few minutes while it mulled over whether it wanted to read it (during which time my urge to throw it out of the window reached startling proportions) and eventually I just manually ejected it. I then burned another CD, which produced the same results. At this point, I was feeling a bit hard done by.

Finally, I decided that given that my CD writer is professional grade and my CD reader is quite temperamental, it was probably the reader that was at fault. So I took the two CDs and zoomed back to the Anatomy department, where the ageing iMac put my own computer to shame and read them without a hitch. Which is how the presentation ultimately went – without a hitch – although I was positively battered with probing questions about my results, interpretation and conclusions. Interesting questions, all of them, and I was quite pleased to find that I could respond to them all.

And now I’ve just discovered that my Orange SPV phone has finally been repaired. So maybe things will calm down now.

Magnetic Attraction

Today I had an interesting and unique experience – I had my brain scanned by functional magnetic resonance imaging (fMRI). The point of this was to take part in a friend’s psychology research experiment, earn £27 and also (arguably most importantly) get a picture of my brain.

Having an fMRI scan is an unusual thing. Magnetic resonance imaging basically involves using big electromagnets positioned around your body to line up the nuclei of the hydrogen atoms inside you in one direction. Once they’re lined up, they’re given a nudge with a radio pulse, which makes them wobble and emit radio waves that can be picked up by detector coils. The salient point here is that MRI involves lots of big and powerful magnetic fields.

By big, I mean a very big and loud machine that (in this case) wrapped around my entire body with little room to spare. By powerful, I mean a 3 tesla magnetic field, which is roughly 50,000 times greater than the Earth’s magnetic field. This of course means that you have to be really careful not to take any metal into the MRI scanner room, or even worse, the MRI scanner itself, lest it fling itself out of your grip and slam against the magnet casing, cutting through anything that happens to be in its path along the way (e.g. clothes, flesh, etc).
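As a quick sanity check on that ‘50,000 times’ figure, here’s the back-of-the-envelope arithmetic (a sketch only; the exact ratio depends on which value you take for the Earth’s field, which varies between roughly 25 and 65 microtesla depending on where you are):

    # Rough check of the "50,000 times the Earth's field" figure.
    scanner_field_T = 3.0    # 3 tesla scanner
    earth_field_T = 60e-6    # ~60 microtesla, towards the upper end of typical surface values

    print(f"{scanner_field_T / earth_field_T:,.0f}x")   # -> 50,000x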

I’ve encountered MRI scanners before. When I was in San Diego, the lab I worked in did some research with MRI, so I had a chance to check them out. These things are very automated these days – you can run them with just one person, provided they know what they’re doing. The safety aspect was quite relaxed in San Diego; I was able to stroll into the scanner room while the machine was on after a quick and cursory check for offending metal objects.

In contrast, at Cambridge I had to sign all sorts of forms (because I was a test subject this time) and undergo a thorough metal check; my sweater was not acceptable because it had a couple of metal clips. Glasses, watch and belt also had to go – but not trousers, because apparently if the metal is ‘attached’ to clothes then it’s OK.

Next, I lay down on the scanning bed and put on some ear plugs and ear protectors. I was instructed not to move my head at all during the scanning, which was expected to take about an hour, and they helpfully gave me a leg rest so I could lie down more comfortably. I also got a blanket.

Thus equipped, I was carried by the scanning bed into the depths of the scanner. This MRI scanner was a full-body one, capable of imaging all parts of the body; for this experiment, though, they just wanted to look at my brain*. In order to reduce the power needed for the magnets, the scanner is as compact as possible, which means there’s only a few inches of clearance between your body and the walls. A bit like a coffin, I imagine. Apparently a lot of people have real claustrophobia problems when they start to slide into the scanner, but I felt fine (clearly I have an underdeveloped survival reflex). Likewise, it wasn’t too bad once I was fully inside.

(*Apparently the Cambridge University MRI group is trying to buy their own head-only scanner).

They had a little mirror set up just above my nose so that I could see a computer display projected onto the wall behind the scanner (no monitors in the room, because they contain metal). In my right hand was a little switch I used to send my responses for the experiment, and in my left was a panic button in case I had to get out quickly.

The whole thing took maybe 80 minutes; the experiment itself only lasted about 40, but they had to calibrate the scanner beforehand so that they could image the right slices of my brain, and afterwards they did a full structural scan. During all of this the scanner made lots of strange humming noises (not as loud or unpleasant as I expected). In fact, I didn’t even mind having to lie in the same position and keep my head still.

Granted, I did doze off a couple of times during the experiment (I’m told everyone does) and I really did need to go to the toilet towards the end, but it was all good. I can now rest easy in the knowledge that images of my brain will help the grand progress of science in some small way, and also that I’ll be getting a picture of my brain in a few weeks.

(Some notes. I gave a very simplified explanation of how MRI works – here’s a more detailed explanation. Also, when I say ‘functional MRI’, I simply mean that the scanning is producing a video of my brain’s activity (i.e. how it is functioning) rather than just a single static picture of its structure).

Nanosecond bats

While doing some research into neural coding, I came across a reference to a paper that claims bats have nanosecond acuity with echolocation.

Say what? Nanosecond? Apparently so. I can’t really tell from the abstract how they came to this conclusion, but it’s been reliably cited in another paper. I’m definitely going to check this out at the library soon. Exactly how a bat, or indeed any kind of animal, can distinguish signals to an accuracy of nanoseconds (billionths of a second) is beyond me; individual neurons can only transmit with millisecond accuracy. Just when you start getting blasé about biology and how eyes can detect single photons, another amazing thing pops up.

Misunderstandings

Yet again, people are being confused by Kevin ‘Captain Cyborg’ Warwick’s work. Wired has just published an article about Tech Predictions for the Decade, and here’s a quote:

Other futuristic technology poised for human consumption is the implanted sensor. Gantz pointed out that University of Reading professor Kevin Warwick, who has a sensor implanted in his left arm, has undergone experiments in which scientists have been able to cause a tingling sensation in his left index finger by sending information to his nervous system. This is good news for paraplegics who may someday regain feeling in their legs by having a similar chip implanted in their bodies.

Where has the media been for the last few decades? We’ve been able to do this for ages, and there are far better ways of going about it than putting a microchip in your arm. You’d think that Wired would know about transcranial magnetic stimulation (TMS), which basically fires a magnetic pulse at the brain and can make people move their limbs. Plus, the electrical stimulation of motor neurones is not exactly rocket science. If they want to know about this stuff, they should pay more attention to neuroscientists, not ‘cybernetics’ experts. Citing Kevin Warwick as the man to watch does the biology and neuroscience community a real disservice.

Pattern Recognition

(Warning: This entry has absolutely nothing to do with massively multiuser online entertainment, if that’s what you’re here for)

In my research project at the moment, I’m using a nifty little program to aid my pattern recognition.

A major part of my project involves me taking recordings of a signal (in this case, electrochemical spikes from a neuron) and discriminating them from the noise inherent in the system. Sometimes the noise is loud, and sometimes there is more than one signal (i.e. multiple neurones). In a recent case, I had eight different signals and a significant amount of noise.

Now, the way most people would go about discriminating the signal in the case I described is through hardware; they’d hook their recording apparatus up to a black box and set a value X on that box. Anything in the recording that went above value X would be recorded (on a separate channel) as a spike. This seems reasonable enough, since spikes are just that – spikes in voltage – and if you have a good recording with only one signal and little noise, you can be 100% confident of getting all of the spikes and no false positives.

But if you have lots of noise and the signal is weak, you will have to set value X such that you may miss some of the spikes and pick up some false positives (because the spikes are only a little above the level of the noise). You might not care about this if you’re just doing a simple analysis of the spike rate, but I’m not – I’m doing something a bit more complicated that involves information theory, and it really is important for me to get all the spikes and none of the noise. Thus, simple hardware discrimination of the spikes just ain’t good enough*.

(*Hardware discrimination can actually be a bit more complicated than this, but essentially it all boils down to seeing if the voltage goes above X and/or below Y or Z or whatever)
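To make the threshold idea concrete, here’s a minimal sketch of what a hardware discriminator effectively does (my own illustration in Python, not the lab’s actual setup; the function name and the numbers are made up):

    import numpy as np

    def threshold_spikes(trace, threshold, dead_time=30):
        """Return the sample indices where the trace first crosses the threshold.

        dead_time is a refractory window (in samples) so that one spike isn't
        counted several times while it hovers around the threshold.
        """
        above = trace > threshold
        crossings = np.flatnonzero(above[1:] & ~above[:-1]) + 1
        spikes, last = [], -dead_time
        for idx in crossings:
            if idx - last >= dead_time:
                spikes.append(idx)
                last = idx
        return np.array(spikes)

    # Toy example: a noisy trace with three obvious spikes.
    rng = np.random.default_rng(0)
    trace = rng.normal(0, 0.1, 3000)
    trace[[500, 1500, 2500]] += 1.0
    print(threshold_spikes(trace, threshold=0.5))   # should print [ 500 1500 2500]

Anything poking above X gets counted; anything below it is invisible, which is exactly the limitation described above.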

So what you really have to do is to look at the shape of a spike. A neural spike is quite distinctive – it generally has a slight bump, then a sharp peak, then a little trough. In other words, it doesn’t look like random noise. This means that you can do some software analysis of the shape.

The more computer-savvy of you readers are probably thinking – aha, no problem, we’ll just get some spike recognition neural network kerjigger in, and then that’s it. Well, you know, it’s not as easy as that, because spike shape can change over time and sometimes noise looks like a spike, and vice versa. It turns out that the best way to check whether a spike is really a spike is by looking at it – after all, the human brain is a pretty powerful neural net. Unfortunately, if you’re looking at a spike train with 50,000 spikes, this isn’t really feasible.

So a guy in my lab has made a nifty piece of software that will analyse each of the putative spikes in a recording (putative because they pass a trigger level – just as a hardware discriminator works). Using a mathematical method of your choice (FFT, PCA, wavelets, cursor values, etc.), it will assign a numerical value to each spike. You can then plot these values against each other to get a 2D scattergram. You do this three times, and hopefully you get three scattergrams that graphically isolate your chosen signal from the noise (or from other signals) on the basis of the analysis methods you chose.

Next, you go and mark out which spikes you want (each spike is represented by a scatter point) by drawing ellipses, and finally you use Boolean algebra to say, ‘OK, I want all the points I circled in plot A, but not those that are shared with plot B or plot C’. At any point, you can check out what a particular spike or group of spikes looks like on a graph. And then you can export your freshly discriminated spikes.
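Here’s a rough sketch of the PCA-plus-ellipses part of that workflow, assuming each putative spike has already been cut out as a fixed-length waveform (the function names and the axis-aligned-ellipse shortcut are my own, not the actual program’s):

    import numpy as np

    def pca_features(waveforms, n_components=2):
        """Project spike waveforms (n_spikes x n_samples) onto their first
        principal components, giving one scatter point per spike."""
        centred = waveforms - waveforms.mean(axis=0)
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        return centred @ vt[:n_components].T

    def inside_ellipse(points, centre, radii):
        """Boolean mask of points falling inside an axis-aligned ellipse -
        a stand-in for the ellipses you'd draw by hand on the scattergram."""
        return (((points - np.asarray(centre)) / np.asarray(radii)) ** 2).sum(axis=1) <= 1.0

    # waveforms: one row per putative (threshold-crossing) spike
    # feats_a, feats_b: two different 2D projections of the same spikes
    # keep = inside_ellipse(feats_a, (2, 1), (1, 0.5)) & ~inside_ellipse(feats_b, (0, 0), (1, 1))
    # 'keep' then selects the spikes circled in plot A but not those shared with plot B.

The Boolean combination at the end is the ‘circled in plot A but not in plot B or C’ step; the human does the looking and circling, the computer does the bookkeeping.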

It works surprisingly well, and I think this is because it is a marriage of the supreme pattern recognition abilities of humans with the brute force processing power of computers. I’m fairly sure it’s one of the best methods in current use for discriminating spikes from a recording, and it’s a shame that people don’t think that this is a worthwhile thing to do (but that’s a story for another time).

Hold on, though: this wouldn’t be a proper mssv.net post if it didn’t have any wild speculation. So, humans are good at pattern recognition in general, but we’re incredibly, uncannily good at facial recognition. We can distinguish two near-identical faces, and recognise someone we’ve only seen for a second out of thousands of faces. Pretty damn good.

It turns out that facial recognition and plain old pattern/object recognition are governed by different systems in the brain; we know this because there is something called a double dissociation between them. In other words, there are people who, for some reason, cannot recognise faces but can recognise objects fine, and vice versa. This strongly suggests that the two abilities run on different systems.

So how about we leverage our skill at facial recognition by converting other forms of information (say, spike trains, weather patterns, stock market data) into facial features? How might that work, eh? It could allow us to sense subtle differences in information and aid our recognition no end.

Of course, I have no real idea whether this would work, or exactly how to do it – maybe you can take a recording of data (or real time data, I don’t know) and use different methods to analyse it and use the output values to describe different facial parameters. Hmm…
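Purely as a toy illustration of what I mean (this is close in spirit to what statisticians call Chernoff faces; every parameter choice below is arbitrary and the code is my own sketch, not a real tool), a few numbers from a data record could drive a cartoon face like this:

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.patches import Ellipse

    def draw_face(ax, features):
        """Map a vector of four values in [0, 1] onto crude facial parameters:
        head width, eye size, eyebrow slant and mouth curvature."""
        head_w, eye_size, brow_slant, mouth_curve = features

        # Head: an ellipse whose width tracks the first feature.
        ax.add_patch(Ellipse((0, 0), 1.0 + head_w, 1.6, fill=False))

        # Eyes: circles whose radius tracks the second feature.
        for x in (-0.25, 0.25):
            ax.add_patch(Ellipse((x, 0.3), 0.1 + 0.2 * eye_size,
                                 0.1 + 0.2 * eye_size, fill=False))

        # Eyebrows: short lines whose slant tracks the third feature.
        for x in (-0.25, 0.25):
            slope = (brow_slant - 0.5) * 0.4 * np.sign(x)
            ax.plot([x - 0.1, x + 0.1], [0.5 - slope, 0.5 + slope], 'k-')

        # Mouth: a parabola whose curvature tracks the fourth feature.
        xs = np.linspace(-0.3, 0.3, 50)
        ax.plot(xs, -0.4 + (mouth_curve - 0.5) * 2 * xs ** 2, 'k-')

        ax.set_xlim(-1.2, 1.2); ax.set_ylim(-1.2, 1.2)
        ax.set_aspect('equal'); ax.axis('off')

    # Two hypothetical four-number data records rendered as faces.
    fig, axes = plt.subplots(1, 2)
    draw_face(axes[0], [0.2, 0.8, 0.3, 0.9])
    draw_face(axes[1], [0.9, 0.2, 0.7, 0.1])
    plt.show()

Whether faces really are a better ‘display format’ than scattergrams for something like a spike train is exactly the part I have no idea about.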

Digital TV

An interesting quotation from this week’s New Scientist confirms what I’ve suspected* for a while:

The latest 42 inch widescreen flat plasma panel screens cost around $7000, not counting a $250 wall mount and the digital tuner needed to receive broadcasts. Yet customers appear unconvinced of their quality. It turns out you cannot see the difference between a 500 and 1000 line display if you are more than six times as far from the screen as its height. A 42 inch display is less than 60 centimetres high, which means you don’t notice the difference across a 4 metre room.

This of course makes sense – the eye doesn’t have unlimited resolution. In fact, even in the fovea (the region of the retina with the densest packing of cone photoreceptors), it isn’t *that* high.

I remember seeing a high-definition TV at a museum in the US; I was duly impressed with the picture quality, but I did think that it didn’t look all that much better than a DVD-quality image on a PAL TV (625 lines, no less). Big TVs are perfectly good, and there’s a real justification for flat ones. But the vast majority of people certainly don’t need ultra-high-resolution TVs if they’re going to be used for ‘lean back’ viewing (as opposed to, say, computer work).

* I could have worked all of this out myself ages ago given the density of photoreceptors in the fovea and a bit of paper, but I never got around to doing it. Oh well.
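In fact, the sum is short enough to do right here (a sketch assuming the standard figure of roughly one arcminute for normal visual acuity; the conclusion is only as good as that assumption):

    import math

    ARCMIN_PER_RAD = 60 * 180 / math.pi   # ~3438 arcminutes per radian
    ACUITY_ARCMIN = 1.0                   # a normal eye resolves roughly 1 arcminute

    def line_pitch_arcmin(screen_height_m, n_lines, viewing_distance_m):
        """Angle subtended by a single scan line, in arcminutes."""
        pitch = screen_height_m / n_lines
        return math.atan2(pitch, viewing_distance_m) * ARCMIN_PER_RAD

    height, distance = 0.6, 4.0   # a 42" panel is a bit under 60 cm high, viewed across a 4 m room

    for lines in (500, 625, 1000):
        angle = line_pitch_arcmin(height, lines, distance)
        verdict = "resolvable" if angle > ACUITY_ARCMIN else "below acuity"
        print(f"{lines:5d} lines: {angle:.2f} arcmin ({verdict})")

At four metres, a 500-line picture sits right at the one-arcminute limit, while the lines of a 1000-line display subtend about half an arcminute – comfortably below what the eye can resolve, which is exactly the New Scientist’s point.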