Accelerating Mass

The last couple of episodes of In Our Time on Radio 4 have been particularly good. The first was on Pragmatism, not a topic that I initially had much interest in until I discovered that the philosophy of pragmatism, especially that of Charles Peirce, is rather close to what I support – unsurprising, given that it has large similarities to the scientific method. Peirce’s pragmatism seems to be a very solid middle ground between complete relativism (an accusation leveled at some versions of pragmatism) and the idea that there is some immutable ‘truth’ out there. I feel like reading up more on this subject now…

Last week’s programme covered Gravitons. Again, not a terribly inspiring subject – I went off physics when I was about 13 or 14, and never seriously looked back. Still, the guests were unusually good at explaining the subject (much better than the guys who talked about Asteroids a month ago, at least). There was one explanation I liked in particular:

“There might be an analogy between the production of gravitational waves and the production of electromagnetic waves. Electromagnetic waves – light waves or radio waves – are produced by the acceleration of electrons. If you accelerate an electron, it radiates electromagnetic waves – light. But we know that light can either behave like a wave or like a particle, depending on how you look at it.

“By analogy, accelerating mass can radiate gravitational waves, or perhaps we can think of the gravitational waves as in some way having particle-like properties – gravitons. But the difficulty is, if you want to think of the graviton like a particle, if we can detect gravitational waves, really what we’re detecting is large numbers of gravitons. We won’t see individual gravitons, but it’s kind of equivalent to when you detect radio waves, you don’t detect individual photons, you detect a large number of incoming photons.”

This was by Sheila Rowan, Reader in Physics at Glasgow University, and I thought it was a wonderfully elegant analogy. I assume it’s accurate as well, given that neither of the other physicists there disagreed. It was amusing that throughout the programme, the host kept on referring to Sheila for these sorts of clear explanations of the difficult concepts that were being talked about; to be sure, the other physicists tried to be clear, but they weren’t always successful. I’m looking forward to hearing the second half of the programme on the walk to work tomorrow.

Superluminal

An excerpt from a BBC News story about a new Russian missile:

“Colonel Baluyevsky gave few details of the new missile which was tested on Wednesday, but said it was one that moved five times the speed of light.”

Wow, that’s some seriously good engineering they’ve got over there in Russia. If I lived in the US, I’d be afraid.

Multiverse

Just been to a very interesting talk by Prof. Hugh Mellor on the subject of the Multiverse. The idea* behind the Multiverse is that there are uncountable numbers of other universes out there that have slightly different properties to ours, owing to different initial conditions. We can never see any universe in the Multiverse aside from our own.

*There are several ideas, but I have outlined the general one used. I think I have gotten Prof. Mellor’s argument correct; my apologies if I’ve made any mistakes in writing it.

Why did people come up with the idea? Because it’s an attempt to explain why our universe has the particular properties it does – properties that allow complex phenomena like life and consciousness to exist. There’s a familiar argument that says that the fundamental constants of our universe, like the strong and weak nuclear forces, are such that if they were different by only a minuscule amount, life couldn’t exist. These fundamental constants were set by the initial conditions of our universe at the big bang. This argument is known as the anthropic principle.

So, this is a bit troubling for scientists. If the initial conditions of a universe that lead to life are so fragile, isn’t it a bit unlikely that we just happen to be living in such a universe, just by chance? Isn’t it more likely that these conditions were… designed? Well, maybe so, and it’s not as if no scientists believe in God. But they would like to think that there is some other explanation for why we live in this particular universe, why our universe had such a convenient start. Hence the Multiverse – if there are uncountable numbers of universes out there with different starting conditions, then, say some scientists, surely there’s no problem with the fact that we live in such a convenient one?

Prof. Mellor disagrees, and this was the central point of his talk. Let’s try a thought experiment. Imagine that you are about to be executed, and there’s a firing squad of fifty people all aiming at you. They all fire at the same time, and they all miss. I think that we’d all be surprised if that happened. But the only reason we’re surprised is that it is incredibly unlikely that they’d all miss, given the accuracy of the rifles, the willingness and training of the men, and so on. This is not a valid analogy for the start of our universe, and thus for the anthropic principle, because in the firing-squad case we know that it is almost certain we’re going to get hit.

Imagine instead that you pop into existence in a new universe surrounded by fifty bullets. All of these bullets have a particular trajectory. Let’s say that they all miss. Should you be surprised? No. Why not? Because you don’t know what the probability is that they appeared in that configuration. It’s not like being in front of a firing squad, where they’re all trying to hit you. You just don’t know why the bullets are going in their particular directions.

But say you are surprised that you didn’t get hit. Would it be any less surprising if you were told that there were uncountable numbers of other universes where people were hit? Prof. Mellor says no – the other universes don’t explain anything, because you still don’t know why you happen to be in this particular universe, and you have no reason to believe that things should have started differently.

I was quite taken by Prof. Mellor’s argument; it seems reasonable. It doesn’t claim to give a reason why our universe had its particular initial conditions (given a single universe model) – but neither does the Multiverse model. Maybe there is a reason, and maybe we can eventually find it. But we don’t have it yet, so the Multiverse model is unnecessary.

Gravity Assist

I finally understand how gravity assists for spacecraft work! (scroll to bottom of linked page)

Imagine a ball rolling down a hill. It gains speed rolling downhill, but then loses speed as it rolls up the next slope. It’s hard to see how speed can be permanently gained this way. But now imagine that the hill itself is being propelled forward as the ball rolls down it. Now the ball is gaining speed not only from the slope, but from the motion of the hill as well…
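To make that picture concrete, here’s a minimal Python sketch of the one-dimensional version: treat the flyby as an elastic bounce off a moving planet, so the spacecraft keeps its speed in the planet’s frame but picks up roughly twice the planet’s speed once you transform back to the Sun’s frame. The numbers and function name are mine, purely for illustration.

```python
# A minimal sketch (not from the post) of the "moving hill" picture in one
# dimension: treat the flyby as an elastic bounce off the planet, so the
# spacecraft keeps its speed in the planet's frame and gains roughly twice
# the planet's speed when you transform back to the Sun's frame.

def gravity_assist_1d(v_craft, u_planet):
    """Head-on flyby approximated as an elastic bounce.

    v_craft  -- spacecraft velocity in the Sun's frame (km/s, positive = towards planet)
    u_planet -- planet velocity in the Sun's frame (negative = towards the spacecraft)
    Returns the spacecraft's outgoing velocity in the Sun's frame.
    """
    v_rel_in = v_craft - u_planet    # velocity relative to the planet
    v_rel_out = -v_rel_in            # elastic bounce: same speed, reversed direction
    return v_rel_out + u_planet      # transform back to the Sun's frame

# A spacecraft heading in at 10 km/s meets a planet coming the other way at 13 km/s:
print(gravity_assist_1d(10.0, -13.0))   # -36.0, i.e. it leaves at 36 km/s (10 + 2 * 13)
```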

Pattern Recognition

(Warning: This entry has absolutely nothing to do with massively multiuser online entertainment, if that’s what you’re here for)

In my research project at the moment, I’m using a nifty little program to aid my pattern recognition.

A major part of my project involves me taking recordings of a signal (in this case, electrochemical spikes from a neuron) and discriminating them from the noise inherent in the system. Sometimes the noise is loud, and sometimes there is more than one signal (i.e. multiple neurons). In a recent case, I had eight different signals and a significant amount of noise.

Now, the way most people would go about discriminating the signal in the case I described is through hardware; they’d hook their recording apparatus up to a black box, and they would set a value X on that black box. Anything in their recording that went above value X would be recorded (on a separate channel) as a spike. This seems reasonable enough, since spikes are just that – they are spikes in voltage – and if you have a good recording with only one signal and little noise, you can be 100% confident in getting all of the spikes and no false positives.

But if you have lots of noise and the signal is weak, you will have to set value X such that you may miss some of the spikes and get some false positives (because the spikes are only a bit above the level of the noise). You might not care about this if you’re just doing a simple analysis of the spike rate, but I’m not – I’m doing something a bit more complicated that involves information theory, and it really is important for me to try and get all the spikes and no noise. Thus, a simple hardware discrimination of the spikes just ain’t good enough*.

(*Hardware discrimination can actually be a bit more complicated than this, but essentially it all boils down to seeing if the voltage goes above X and/or below Y or Z or whatever)
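For the curious, here’s a rough Python sketch of what that threshold test amounts to if you do it in software; the function name, the refractory trick and the toy trace are my own illustration, not how any particular black box actually works.

```python
# A rough sketch (my illustration, not the lab's actual setup) of what the
# "black box" does: flag a spike wherever the voltage trace rises above a
# threshold X, skipping a few samples afterwards so one spike isn't counted twice.
import numpy as np

def threshold_discriminate(voltage, x, refractory=30):
    """Return the sample indices where `voltage` crosses upward through `x`."""
    spikes = []
    i = 1
    while i < len(voltage):
        if voltage[i] >= x and voltage[i - 1] < x:   # upward threshold crossing
            spikes.append(i)
            i += refractory                          # ignore the rest of this spike
        else:
            i += 1
    return np.array(spikes)

# Toy trace: low-level noise plus two obvious spikes
trace = np.random.randn(1000) * 0.1
trace[200], trace[600] = 2.0, 2.5
print(threshold_discriminate(trace, x=1.0))          # roughly [200 600]
```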

So what you really have to do is to look at the shape of a spike. A neural spike is quite distinctive – it generally has a slight bump, then a sharp peak, then a little trough. In other words, it doesn’t look like random noise. This means that you can do some software analysis of the shape.

The more computer-savvy of you readers are probably thinking – aha, no problem, we’ll just get some spike recognition neural network kerjigger in, and then that’s it. Well, you know, it’s not as easy as that, because spike shape can change over time and sometimes noise looks like a spike, and vice versa. It turns out that the best way to check whether a spike is really a spike is by looking at it – after all, the human brain is a pretty powerful neural net. Unfortunately, if you’re looking at a spike train with 50,000 spikes, this isn’t really feasible.

So a guy in my lab has made a nifty piece of software that will analyse each of the putative spikes in a recording (putative because they pass a trigger level – just like how a hardware discriminator works). Using a mathematical method of your choice (FFT, PCA, wavelet, cursor values, etc) it will assign a numerical value to each spike. You can then plot these values against each other to get a 2D scattergram. You do this three times, and hopefully you get three scattergrams that graphically isolate your chosen signal from the noise (or from other signals) on the basis of the analysis method you chose.
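As a hedged illustration of that feature-extraction step (this is not the actual program – PCA stands in for whichever measure you pick, and it assumes the putative spikes have already been cut out as fixed-length snippets):

```python
# A hedged sketch of the feature-extraction step, assuming each putative spike
# has already been cut out as a fixed-length window of samples. PCA stands in
# for whichever measure you pick; the placeholder data and names are mine.
import numpy as np
import matplotlib.pyplot as plt

def pca_features(waveforms, n_components=2):
    """Project each spike waveform (one per row) onto its first principal components."""
    centred = waveforms - waveforms.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)   # rows of vt = principal directions
    return centred @ vt[:n_components].T

waveforms = np.random.randn(500, 40)          # placeholder: 500 triggered snippets, 40 samples each
scores = pca_features(waveforms)

plt.scatter(scores[:, 0], scores[:, 1], s=4)  # one of the three 2D scattergrams
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.show()
```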

Next, you go and mark out which spikes you want (each spike is represented by a scatter point) by drawing ellipses, and finally you use Boolean algebra to say, ‘OK, I want all the points I circled in plot A, but not those that are shared with plot B or plot C’. At any point, you can check out what a particular spike or group of spikes looks like on a graph. And then you can export your freshly discriminated spikes.
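And a similarly hypothetical sketch of the ellipse-and-Boolean step; the ellipse centres and radii here are invented purely for illustration:

```python
# A hypothetical sketch of the ellipse-and-Boolean step: test which scatter
# points fall inside a drawn ellipse, then combine the selections from the
# three plots. The ellipse centres and radii are made up for illustration.
import numpy as np

def inside_ellipse(points, centre, radii, angle=0.0):
    """Boolean mask over 2D `points`: True where a point lies inside the ellipse."""
    c, s = np.cos(angle), np.sin(angle)
    shifted = points - np.asarray(centre)
    u = shifted @ np.array([[c, -s], [s, c]])   # rotate into the ellipse's own axes
    return (u[:, 0] / radii[0]) ** 2 + (u[:, 1] / radii[1]) ** 2 <= 1.0

# Suppose scores_a, scores_b, scores_c hold the coordinates from the three scattergrams
scores_a, scores_b, scores_c = (np.random.randn(500, 2) for _ in range(3))

in_a = inside_ellipse(scores_a, centre=(0.5, 0.0), radii=(1.0, 0.5))
in_b = inside_ellipse(scores_b, centre=(-1.0, 1.0), radii=(0.8, 0.8))
in_c = inside_ellipse(scores_c, centre=(0.0, -1.5), radii=(0.6, 1.2))

# "All the points I circled in plot A, but not those shared with plot B or plot C"
keep = in_a & ~(in_b | in_c)
print(keep.sum(), "spikes selected for export")
```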

It works surprisingly well, and I think this is because it is a marriage of the supreme pattern recognition abilities of humans with the brute force processing power of computers. I’m fairly sure it’s one of the best methods in current use for discriminating spikes from a recording, and it’s a shame that people don’t think that this is a worthwhile thing to do (but that’s a story for another time).

Hold on, though: this wouldn’t be a proper mssv.net post if it didn’t have any wild speculation. So, humans are good at pattern recognition in general. But we’re incredibly, uncannily good at facial recognition. We can distinguish two near identical faces and recognise someone we’ve only seen for a second out of thousands of faces. Pretty damn good.

It turns out that facial recognition and plain old pattern/object recognition are governed by different systems in the brain; we know this because there is something called a double dissociation between them. In other words, there are people who, for some reason, cannot recognise faces but can recognise objects fine, and vice versa. This strongly suggests that they run on different systems.

So how about we leverage our skills at facial recognition by converting other forms of information (say, spike trains, weather patterns, stockmarket data) into facial features? How might that work, eh? It could allow us to sense subtle differences in information and aid our recognition no end.

Of course, I have no real idea whether this would work, or exactly how to do it – maybe you can take a recording of data (or real time data, I don’t know) and use different methods to analyse it and use the output values to describe different facial parameters. Hmm…
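Purely to make the speculation concrete, here’s a toy sketch that maps three arbitrary numbers (imagine summary statistics of a spike train) onto a cartoon face’s width, eye size and mouth curve. Both the chosen features and the mapping are entirely made up.

```python
# Pure speculation made concrete: a toy mapping from three arbitrary numbers
# (imagine summary statistics of a spike train) onto a cartoon face's width,
# eye size and mouth curve. Everything here is made up for illustration.
import numpy as np
import matplotlib.pyplot as plt

def draw_face(ax, features):
    """features: three values in [0, 1] controlling head width, eye size, mouth curve."""
    width, eye, smile = features
    t = np.linspace(0, 2 * np.pi, 100)
    ax.plot((0.6 + 0.4 * width) * np.cos(t), np.sin(t))          # head outline
    r = 0.05 + 0.1 * eye                                         # eye radius
    for x in (-0.35, 0.35):
        ax.plot(x + r * np.cos(t), 0.35 + r * np.sin(t))         # two eyes
    xs = np.linspace(-0.4, 0.4, 50)
    ax.plot(xs, -0.4 + (smile - 0.5) * (xs ** 2 - 0.16))         # mouth: smile or frown
    ax.set_aspect("equal")
    ax.axis("off")

fig, axes = plt.subplots(1, 3)
for ax, row in zip(axes, np.random.rand(3, 3)):   # three made-up data points, three faces
    draw_face(ax, row)
plt.show()
```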

Pretty girls and hot stoves

Ever heard the famous Einstein quote, “When a man sits with a pretty girl for an hour, it seems like a minute. But let him sit on a hot stove for a minute and it’s longer than any hour. That’s relativity.”? It was actually the abstract of a paper that’s now online, and if anything, the paper is even funnier.

Einstein – what a cad.

Trinity nuclear test

Anecdote of the day: Physicist Ted Taylor used a parabolic mirror to light his cigarette with the flash from the Trinity nuclear bomb test (from Rich).

Microwaves

While perusing the manual for the new microwave in the kitchen upstairs, I spotted a nifty way of checking whether utensils are microwave-friendly (i.e. won’t blow up or give off sparks if you put them inside). All you do is put the utensil in the microwave next to a bowl of water and turn it on for a minute.

If the utensil is microwave-friendly, the bowl of water will heat up and the utensil will remain cool. If it isn’t, the bowl of water will stay cold and the utensil will be hot. Very neat, I thought.