Probabilities

(Warning: Ramble ahead)

Earlier today, I was listening to a guy describe a project I might do next year for neurobiology, trying to figure out some of the characteristics of Golgi neurones in the cerebellum. The way you can identify these neurones, other than looking at them under a microscope, is to insert a super-thin electrode into them and look at their electrical activity. We’ve all seen what heartbeat readouts look like on TV: a series of sharp spikes. Well, the electrical output from neurones tends to look like that as well. Different types of neurones exhibit their own distinctive spike properties, such as spike magnitude, duration, and interspike intervals.
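
To make that a bit more concrete, here’s a rough sketch of how you might pull those sorts of spike features out of a recorded voltage trace. The sampling rate, threshold and names are all made up for illustration; nothing here comes from the actual project.

```python
import numpy as np

def spike_features(trace, sampling_rate_hz=20000, threshold_mv=-30.0):
    """Return spike times (s), spike peak amplitudes, and interspike intervals."""
    above = trace > threshold_mv
    # A spike "starts" wherever the trace crosses the threshold upwards.
    crossings = np.where(above[1:] & ~above[:-1])[0] + 1
    spike_times = crossings / sampling_rate_hz
    # Take the peak within a short window after each crossing as the spike magnitude.
    window = int(0.002 * sampling_rate_hz)  # 2 ms, an arbitrary choice
    amplitudes = np.array([trace[i:i + window].max() for i in crossings])
    isis = np.diff(spike_times)  # interspike intervals, in seconds
    return spike_times, amplitudes, isis
```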

So you can identify Golgi neurones by looking at their electrical readouts. This can take a bit of time, though, with all the looking back and forth. What many researchers do is to hook up the output signal from the electrode to a loudspeaker, so each spike makes a click. I’m told that in time you can become extremely proficient at identifying different types of neurones very quickly simply by listening to their activity.

This kind of process is, of course, pattern recognition, and it struck me how skilled humans are at it – at recognising and distinguishing new types of patterns. To do a similar thing on a computer right now would require a fair bit of coding – it wouldn’t be impossible by any means, and it might not even be that difficult, but it would probably take longer than learning to do it yourself. That’s not to say that doing it on a computer is a waste of time; clearly, if you want to automate the neurone-finding procedure and link the electrode position controls to the computer, it’s worth it.
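
For what it’s worth, here’s the sort of thing I imagine that coding boiling down to in its very simplest form: compare a cell’s measured spike features against stored average profiles for each cell type and pick the closest one. The profiles and numbers below are entirely invented, and real code would at least normalise the features, since amplitudes and intervals live on very different scales.

```python
import numpy as np

# Invented reference profiles: (typical spike amplitude in mV, typical interspike interval in s).
profiles = {
    "Golgi": np.array([55.0, 0.12]),
    "Purkinje": np.array([70.0, 0.025]),
    "granule": np.array([40.0, 0.5]),
}

def guess_cell_type(amplitude_mv, mean_isi_s):
    """Return the cell type whose profile is nearest to the measured features."""
    measured = np.array([amplitude_mv, mean_isi_s])
    return min(profiles, key=lambda name: np.linalg.norm(measured - profiles[name]))

print(guess_cell_type(amplitude_mv=57.0, mean_isi_s=0.1))  # -> Golgi
```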

Even a computer wouldn’t be able to identify the type of a neurone with perfect accuracy, though – neurones aren’t perfect things. It could give you probabilities, however. And this set me onto a completely different train of thought. Usually the probability of an event or an identification is given as a number or a percentage, e.g. it’s 80% likely that it will rain tomorrow. Unfortunately, it seems that humans aren’t all that good at assessing probabilities – for example, it’s been shown that we ignore Bayes’ theorem when estimating probabilities ourselves.
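
To make the Bayes point concrete, here’s a toy calculation with completely made-up numbers. Suppose Golgi cells make up only 1% of the cells you actually bump into with the electrode, and the clicks “sound Golgi-like” for 90% of true Golgi cells but also for 10% of everything else. The gut feeling is that a Golgi-like sound means it’s probably a Golgi cell; Bayes’ theorem says otherwise.

```python
# Toy numbers, invented purely for illustration.
p_golgi = 0.01              # prior: 1% of cells encountered are Golgi
p_sound_given_golgi = 0.90  # a true Golgi cell usually sounds Golgi-like
p_sound_given_other = 0.10  # but other cells occasionally do too

p_sound = (p_sound_given_golgi * p_golgi
           + p_sound_given_other * (1 - p_golgi))
p_golgi_given_sound = p_sound_given_golgi * p_golgi / p_sound

print(f"P(Golgi | sounds Golgi-like) = {p_golgi_given_sound:.2f}")  # about 0.08
```

So even a fairly reliable-sounding cue only gets you to roughly an 8% chance, because the base rate is so low – which is exactly the sort of thing we tend to ignore.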

We don’t say to each other, “I think there’s an 80% probability of it raining tomorrow.” We say there’s a fairly good chance it’ll rain tomorrow. And I think that, in various circumstances, people would respond better to probabilities framed that way than to numbers. It just makes them feel more familiar.

And then I realised that we aren’t too hot at judging probabilities that way either, since according to signal detection theory we shift our criterion for deciding whether an event has occurred depending on, basically, how we’re feeling (the little sketch below shows the idea). And then I started writing this, and unfortunately I don’t have anything more to say at the moment.
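
For completeness, a quick made-up sketch of what I mean by shifting the criterion: the “evidence” for an event is noisy, and where you draw the line determines how often you say yes – both when the event is real (hits) and when it isn’t (false alarms). None of the numbers mean anything in particular.

```python
from math import erf, sqrt

def p_above(criterion, mean, sd=1.0):
    """P(evidence > criterion) when the evidence is normally distributed."""
    return 0.5 * (1 - erf((criterion - mean) / (sd * sqrt(2))))

noise_mean, signal_mean = 0.0, 1.0   # arbitrary "event absent" vs "event present" distributions
for criterion in (0.2, 0.5, 0.8):    # e.g. a criterion nudged about by mood
    hits = p_above(criterion, signal_mean)
    false_alarms = p_above(criterion, noise_mean)
    print(f"criterion {criterion}: hit rate {hits:.2f}, false alarm rate {false_alarms:.2f}")
```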