Fun New Words

New words and terms I’ve heard at my lab:

Fiascotorial, adj.: combinations or permutations of fiasco-like situations. e.g., “And then the squirrel fell into the bowl! Just imagine the fiascotorial possibilities!”

Gene-jockey, n.: derogatory term for a geneticist or molecular biologist. e.g., “Those gene-jockeys working on the squirrel genome project, they don’t understand that the real discoveries are to be made in neuroscience.”

Swiss Cheese psychology: derogatory term describing the deductive methods of some psychologists. e.g., “Here’s how psychologists work – they’ll take a squirrel, scoop out the temporal lobe of its brain and then say that the temporal lobe is responsible for eating nuts, because the squirrel doesn’t eat nuts any more. Typical Swiss Cheese psychologists – it’s the law of the holes.”

Reprise

Saw Donnie Darko a second time today, with a friend from Leeds; it survived rewatching quite well.

Afterwards, I described my ‘Dance Dance Revolution’ theory of cognitive development to her. It’s a little like Piaget’s controversial theory (although obviously much sillier). Jean Piaget was a psychologist who believed that children went through qualitatively different levels of cognitive maturity as they grew up. For example, he said that between the ages of six and twelve, children were in the ‘concrete operational stage’, in which they could perform cognitive ‘operations’ (like mental rotation, that sort of thing). However, only when they reached the age of twelve and graduated to the ‘formal operational stage’ could they reason about hypothetical situations and solve highly abstract, logical problems.

Piaget’s theory is a veritable piece of Swiss cheese now, what with all the holes that have been poked in it. Even so, it’s still interesting to discuss it, and I have based my Dance Dance Revolution theory upon it. Indeed, I propose that a new stage can be added to his progression of cognitive maturity, called ‘Dance Dance Revolution appreciation’.

There are those in the world who do not show an appreciation of Dance Dance Revolution; for some inexplicable reason, they have an urge to mock what is an inoffensive, entertaining and healthy game that promotes exercise and social skills. These people, I believe, have not achieved full advancement of their cognitive faculties. On the other hand, those who have passed through the ‘DDR stage’ will demonstrate an understanding of the true qualities that DDR holds; such people are at the zenith of cognitive development, I believe, and will in addition exhibit greater emotional development and what can best be described as ‘all round coolness’.

(If a DDR machine is not available nearby for testing purposes, a video of a DDR freestyler is an acceptable substitute.)

Skwerls

During one of our classes today, we talked about the possible causes of Parkinson’s disease. One of the lecturers mentioned that in Kentucky, researchers thought they’d found a possible link between eating squirrel brains and Parkinson’s; 12 out of 42 people they surveyed with Parkinson’s ate squirrel brains, leading them to think that perhaps Parkinson’s was caused by a prion, similar to CJD.

However, when they did a control survey and looked at the general population in the area (rural Kentucky), they found that 27 out of 100 people also ate squirrel brains. So there’s probably no link, but eating squirrel brains? What the hell? Apparently the way it’s done is that they’ll run over a squirrel in the car, and then go and pick it up afterwards. Highly bizarre.
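(For the curious: here’s a quick sanity check of those figures in code. The chi-square test and the snippet below are mine, purely for illustration – I have no idea what statistics the Kentucky researchers actually used.)

```python
# Quick sanity check of the figures quoted above: 12/42 Parkinson's patients
# vs 27/100 controls ate squirrel brains. The choice of a chi-square test of
# independence is mine, purely for illustration. Requires scipy.
from scipy.stats import chi2_contingency

# Rows: Parkinson's patients, general-population controls
# Columns: ate squirrel brains, did not
table = [[12, 42 - 12],
         [27, 100 - 27]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")
# p comes out far above 0.05 - consistent with there being no real link.
```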

VOR

Are you short or long-sighted? Go and lower your glasses so that your visual field is split in half horizontally (in other words, perch your glasses further down on your nose). Now move your head from left to right, and look through your glasses. Then do the opposite, and look above your glasses.

You should have noticed that in the half of your visual field which you weren’t looking at, it seemed like things were out of sync – they weren’t moving exactly together with the half that you were looking at. Why is this so?

It’s all to do with the vestibulo-ocular reflex, or the VOR. This reflex, which is hooked up to your eyes and your balance centres in the brain, compensates for the movements you make with your head. This allows you to distinguish between movements that you’ve initiated (e.g. moving your head) and those that you have not (e.g. falling out of a tree). Clearly having a VOR is important, or else every time you moved your head it’d seem like the world was spinning around you. You don’t tend to notice you have a VOR, but the simple fact that the world doesn’t appear to move when you move your head shows it.

When you’re born, you don’t come with a VOR built in – you have to learn it, by comparing information coming from your eyes with information from your balance centres and other brain areas. Unfortunately you can’t remember what this was like when it happened because you were too young, but you can replicate the effect somewhat by becoming short or long-sighted and putting glasses on for the first time.

When you do this, the world seems sharper (obviously) but everything seems a bit out of kilter, like the world isn’t moving properly. You feel a little off-balance, and if you’re unlucky, you might fall down some stairs. This happens because your VOR isn’t calibrated for the new visual input it’s receiving – after all, a pair of glasses will significantly alter the way light reaches your retinas. After a while, though, you get used to it and flights of stairs don’t seem to be that much of a danger any more.
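(If you like toy models, here’s the idea in a few lines of Python – not real physiology, just a sketch: eye velocity is minus some gain times head velocity, and the gain gets nudged until the image stops sliding across the retina. The magnification and learning rate are numbers I’ve made up.)

```python
# Toy sketch of VOR recalibration: the eyes counter-rotate at (gain x head
# velocity), and the gain is slowly adjusted to cancel retinal slip after
# glasses change the apparent magnification of the world.
# All the numbers here are made up for illustration.

def retinal_slip(head_velocity, vor_gain, magnification):
    """How fast the image slides across the retina; zero means a stable world."""
    eye_velocity = -vor_gain * head_velocity
    return magnification * head_velocity + eye_velocity

def adapt(vor_gain, head_velocity, magnification, rate=0.1, steps=50):
    """Nudge the gain a little each step to reduce the remaining slip."""
    for _ in range(steps):
        slip = retinal_slip(head_velocity, vor_gain, magnification)
        vor_gain += rate * slip / head_velocity  # error-driven correction
    return vor_gain

gain = 1.0  # calibrated for the naked eye
print(adapt(gain, head_velocity=10.0, magnification=1.2))
# Converges towards 1.2: the reflex re-tunes itself to the new glasses.
```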

This is all very well and good. But consider the experiment that we did earlier – since the world seemed stable whether or not you looked through your glasses, you were able to effectively switch between two differently calibrated VORs at will. You do this every time you take off or put on your glasses, of course, but the effect is more salient when you are half-wearing them.

The fact that you can do this is really astounding, when you think about it. It’s not as if we had glasses when we evolved, so why should our brain be able to handle two differently calibrated VORs? And how is it that you can switch between them using solely an internal input (your decision to look through your glasses or not)? This is one of the more interesting questions in the visual sciences at the moment, if you ask me.

On Top Of The World

Forget about the Nobel Prizes and throw that article about the Space Shuttle into the bin, because at the time of writing, First Words is the top story on Discovery.com’s news homepage! The competition just broke through the 1000 entry barrier an hour ago, as well.

I should probably say something about starting my new Anatomy A: Research into Neuroscience course today, which is looking extremely promising. Basically, the course has no lectures – only workshops and seminars. There’s a great deal of emphasis on group work and research projects (which you’ve already heard about here). It might sound a little woolly, but I’m convinced that it’s superior to the traditional didactic method of having people talk at you for hours – maybe that’s because I always fall asleep in lectures, but aside from that, I really do believe that a more interactive mode of learning is better in the long run.

I’m slowly getting the hang of MatLab now; the last few days have seen me constructing ever more complicated graphs and programming increasingly silly things. My current task, which I’m two thirds of the way through, is analysing a set of neural recordings, visualising it in a number of ways and using it to generate simulated neural data, which will then be altered in a few more days.
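(Not my actual coursework, which is all in MatLab and rather longer, but to give a flavour of the ‘simulated neural data’ part, here’s a toy Poisson spike train in Python. The firing rate and duration are arbitrary numbers I’ve picked for the example.)

```python
# Toy example of simulated neural data: a Poisson spike train binned into
# 1 ms time steps. The firing rate and duration are arbitrary.
import numpy as np

rate_hz = 20.0     # assumed firing rate
duration_s = 2.0   # assumed recording length
dt = 0.001         # 1 ms bins

rng = np.random.default_rng(0)
# In each small bin, a spike occurs with probability rate * dt.
spikes = rng.random(int(duration_s / dt)) < rate_hz * dt
spike_times = np.nonzero(spikes)[0] * dt

print(f"{spikes.sum()} spikes in {duration_s} s "
      f"(about {spikes.sum() / duration_s:.0f} Hz)")
```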

Yes, it’s been a good day…

Eyes for you

This optical illusion has been making the rounds on the Internet recently, and most people are astonished to find that the A and B squares are the same shade, to the extent that they consult Photoshop to confirm that A is not darker than B. The explanation for this is simple – the eye is better at distinguishing sharp boundaries than shallow gradients of shade or areas of even shade.

This satisfies most people. But why is this so? Why can’t the eye do that, and also be able to quantitatively compare the shades of two spatially separated areas?

It all comes down to space, or rather, lack thereof. There are roughly 130 million photoreceptor cells in the retina of the eye, each of which individually measures the amount of light falling on it. However, if you look at the optic nerve bundle that conveys information from the retina to the brain, you’ll find that it contains only 1 million nerve fibres. That’s a contention ratio of 130 to 1, and the physiology of the situation dictates that one neurone cannot possibly carry all the information produced by 130 photoreceptors. There is a good reason for this, to do with the wiring of the retina and neural bandwidth limits, but you’ll just have to take my word for it for now.

As a result of all of this, there’s a significant loss of information between the total amount gathered by the photoreceptors and the amount that is sent to the brain. The optimal solution would be to transmit the information that is most important to the survival of the organism, which happens to be edges – sharp changes in shade – and of course that’s what the eye does. Each photoreceptor is linked to adjacent photoreceptors via ‘higher’ cells (still in the retina), and these higher cells perform a bit of processing called lateral inhibition.

Lateral inhibition is a relatively simple process that enhances edges between areas of different shade, and it’s mediated by a mechanism called centre-surround antagonism, which you can see in this Mach Bands demonstration.
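(If you’d like to play with the idea yourself, here’s a toy version of lateral inhibition in Python: a made-up centre-surround kernel applied to a step in brightness. It’s only a cartoon of what the retina does, but it shows the overshoot and undershoot either side of the edge.)

```python
# Toy lateral inhibition: each 'cell' is excited by its own input and
# inhibited by its neighbours. Applied to a step in brightness, the output
# overshoots on the bright side of the edge and dips on the dark side, which
# is roughly the Mach band effect. The kernel weights are made up.
import numpy as np

brightness = np.array([1.0] * 8 + [2.0] * 8)  # a simple step edge

# Centre-surround kernel: excitation in the middle, inhibition either side.
kernel = np.array([-0.25, -0.25, 1.0, -0.25, -0.25])
response = np.convolve(brightness, kernel, mode="same")

for x, r in zip(brightness, response):
    print(f"input {x:.1f}  ->  response {r:+.2f}")
# Away from the step the response is flat (~0); right at the step it dips on
# the dark side and peaks on the bright side (ignore the zero-padding
# artefacts at the very ends of the array).
```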

Ultimately, a much better solution would be to have a whopping great big optic nerve that had 130 million nerve fibres in it, one for each photoreceptor in the retina. Alas, as I said earlier, it’s a question of space and there just isn’t enough room in our heads for such a big optic nerve, so we have to make do with a smaller one that causes us to have so much fun looking at visual illusions. Human vision isn’t perfect, but it is ‘good enough’, which is really the story of evolution.

The important questions

Ned Beauman at Bullets has made a post on my comments about the research I’m doing at Cambridge. I agree with what he’s saying, in that it’s really only the information that matters when you’re talking about cognition or consciousness, but many other people wouldn’t; all of this is based on the assumption that the brain is working via a set of algorithms.

I happen to think that it is, but I don’t really want to get bogged down in this because I’m in the sticky situation of knowing enough to talk about the subject, but not enough to prevent myself from saying something stupid. Clearly more Dennett is in order for me.

On the research: I’m reading through a long report that forms the underpinnings of what I’m doing, and it’s quite amazing to see how the author of the report dismisses all of the current ‘important questions’ in neuroscience, stating that they are immaterial, and proposes instead a completely new set, all based around information theory (I’m going to write a proper ‘massive’ article on this eventually). I’m beginning to think that the research I’m doing here in Cambridge could be just as important as, if not more important than, the synaesthesia research I worked on in San Diego.

Aloha

Alas and alack, &c, I haven’t been able to update much recently. I’ve just returned to Cambridge, which has been having unusually glorious weather, and have been unpacking various things. I’ve also been busy getting up to speed with the research project I’m doing this year, on (essentially) information processing in neurones.

What this means is that I’ll be using various information theory methods to try and determine whether the pattern of spike impulses given off by neurones actually encodes information, and if so, how it does it and what kind of information it is. This is pretty interesting stuff that hasn’t been done before, and it’s also quite daunting to me because while I’ve had a very casual interest in cryptography and information theory, I’ve never become familiar with the equations involved.
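(To give a very rough idea of the flavour of the calculations involved – and this is a toy sketch in Python, nothing like the actual methods I’ll be using – here’s one way to estimate the entropy of short ‘words’ of spikes from a binned spike train. The spike train, bin size and word length are all made up.)

```python
# Toy entropy estimate for a spike train: chop a binned (0/1) spike train into
# short 'words' and measure the entropy of the word distribution. Everything
# here (the random spike train, the word length) is a made-up assumption.
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
binned = (rng.random(3000) < 0.2).astype(int)  # toy spike train of 1s and 0s

word_len = 3
words = [tuple(binned[i:i + word_len])
         for i in range(0, len(binned) - word_len + 1, word_len)]

counts = Counter(words)
probs = np.array(list(counts.values()), dtype=float)
probs /= probs.sum()

entropy_bits = -(probs * np.log2(probs)).sum()
print(f"~{entropy_bits:.2f} bits per {word_len}-bin word")
```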

I’ll be mixing traditional neuroscience and ‘wet biology’ practical methods with processor-intensive number crunching during my research, and I have to admit that this was not what I’d been expecting to do this year, not that I’m not looking forward to it.

Before all of that I’ve got to read a hundred-odd pages of background research and start learning a new programming language (MatLab)…

Dancing

In case you’re interested, it might be worth checking out the BBC2 documentary The Dancer’s Body, on Saturday nights; I’m told it’s pretty good. An added bonus is that you should see Prof. Ramachandran on it either this week or next week, since he was interviewed for the programme while I was in the US. Something to do with the science of art, I recall, and how humans appreciate art from a neuropsychological point of view.

It was pretty fun when the BBC crew came into our lab for a while, waiting for the Prof; we threw a baseball around, chatted with the cameraman about the TV business and so on, and then got told off by their presenter for being too loud while she was on the phone. Ah, great days.

Cerebroscope

Psychologists, neuroscientists and philosophers like to talk of a hypothetical instrument called the ‘cerebroscope’. The first time I heard about this, in San Diego, I expressed a bit of surprise, and then asked, “Why isn’t it called a ‘brainoscope’?”

I was expecting to be told that people used ‘cerebroscope’ because it sounded more impressive (always a good thing in specialised disciplines). But no, the reason was even better than that. Apparently ‘brainoscope’ is a barbarism – it combines words from two different ancient languages (Middle English/Latin and Greek, as far as I can tell), whereas ‘cerebroscope’ is fine because it’s all Greek.

Upon seeing the rather stunned expression on my face, my friend said, “No, I don’t see why it makes a difference either.”