The Death of Cyber

I’m becoming increasingly irritated by the poor quality of the writing in newspapers these days, and specifically in newspaper supplements. I have no problem with the main reporting, but the ‘lifestyle’ sections are just awful. Maybe they’ve always been this dull, or maybe my standards have been raised by culling the best of the web from Arts and Letters Daily and MeFi. Either way, it’s disappointing.

Take, for example, this excerpt from Susan Greenfield’s new book in today’s Times. Prof. Greenfield is actually a pretty good neuroscientist, but this article is quite simply pap. It sounds like it was taken from some sixth form bull session. Early in the excerpt, she says:

The biggest question is whether, in a future cyberworld, there will still be a human need for these wider, gentler and more complex feelings. Or will we all end up as though autistic, unable to empathise with anybody else, locked into a remote and numbing isolation, or trapped in a speedy, giggly cycle of endless cyber-flirting, with deeper needs and pleasures lost for ever?

Has Susan ever been on the Internet and looked at online relationships? If she had, she’d know that online relationships are often just as subtle, gentle and complex as ‘real’ relationships. The people I know who have had online relationships have all met each other in real life several times, and one couple has even married. If she wants to concentrate on the teen phenomenon of IRC sex, then fine, but she shouldn’t pretend that it is representative of all online relationships, any more than teenage dating is representative of relationships in general. But of course, she does:

Consider that fixture of many households: the glassy-eyed, monosyllabic adolescent in deep dialogue with their screen and keyboard. They are living in a different world, where the inhabitants spend long hours surfing the net, sending text messages or playing computer games. The lives of future generations look set to revolve less around face-to-face relationships than around relationships conducted via the computer, or even with the machine itself as the direct recipient of people’s attentions.

A few things. What does Prof. Greenfield think these kids are doing online? They’re playing games and gossiping with friends. If they weren’t doing it online, they’d be doing it on the phone, except that the phone limits you to one-to-one conversation, whereas online you can talk to dozens of people simultaneously across the world. It’s not necessarily better, but then it’s not necessarily worse. Frankly, if she wants to make absurd claims that people will just ‘talk to their computer’ or stop seeing their friends face-to-face, then she’d better produce some evidence for them.

I wonder if Prof. Greenfield, in fact, knows that there are approximately zero people on the Internet who even use the word ‘cyber’ any more, except in an ironic and self-mocking sense? Clearly not. So how can she speak with any authority whatsoever on the matter of online relationships and, even worse, get published in a leading broadsheet newspaper?

Social commentators and others seem fixated on justifying their own preconceptions and prejudices about the Internet by applying tired stereotypes of ‘cybersex’ and ‘glassy-eyed, monosyllabic adolescents’, without taking the time to study the Internet properly (and when have adolescents ever not been glassy-eyed and monosyllabic?). The reason, I suspect, is that they are afraid of trying to master the (admittedly sometimes difficult) technology required to study the Internet, and also afraid that they could be missing something good.

Which they are.

A Love of Memory

When I got back home from Australia I took the opportunity to reread some of my favourite books. These included Kim Stanley Robinson’s Mars Trilogy, which is probably up there in my top ten of rereadable books (Cryptonomicon sits in pole position, having withstood at least a couple of years of sustained rereading).

I’d forgotten how much I enjoy the Mars Trilogy. It gets bashed so often for apparently being too dry, political and sometimes boring, but consider this – the first two books won the Nebula and Hugo awards, and the trilogy as a whole was single-handedly responsible for getting me interested in Mars. There are few (if any) events in my life that I can point to and say, this changed everything, but that fateful day when I spotted the trilogy in a book club brochure must be one of them.

The last book in the trilogy, Blue Mars, is not considered to be as good as the first two, which is a fair claim, but it certainly doesn’t mean it isn’t a good read. In Red Mars, the first book, a group of scientists develop a longevity treatment, with the result that by the time of Blue Mars people regularly live to over 200 years old. This is all very well and good, but said geriatrics are having a really hard time with their memory.

Accordingly, one of the main characters joins an effort to develop a memory boosting drug. At this point, most authors would be happy to say, ‘…and then after much work they made the drug,’ or if they were feeling particularly generous, they might throw in a few choice words like ‘dopamine’ or ‘serotonin’. If you were really lucky, they might take the time to look up a diagram of the brain and mention the hippocampus.

But this isn’t enough for KSR, and it’s part of the reason why I love his books. He spends over seven full pages on a monologue/stream of consciousness that dives right into the way that memory works and how you might enhance it. That’s over two thousand words of detailed information and informed speculation, none of which is wildly wrong. In fact, most of it is right; it’s only the speculation that I have a problem with, and even then I have to give him a lot of respect for giving it a good try. I would say that to have written that monologue, KSR must have read at least a few reviews on the subject, and perhaps a book.

Here is the bit which I love and hate (and yes, the first paragraph is that long):

The original Hebb hypothesis, first proposed by Donald Hebb in 1949, was still held to be true, because it was such a general principle; learning changed some physical feature in the brain, and after that the changed feature somehow encoded the event learned. In Hebb’s time the physical feature (the engram) was conceived of as occurring somewhere on the synaptic level, and as there could be hundreds of thousands of synapses for each of the ten billion neurones in the brain, this gave researchers the impression that the brain might be capable of holding some 10^14 data bits; at the time this seemed more than adequate to explain human consciousness. And as it was also within the realm of the possible for computers, it led to a brief vogue in the notion of strong artificial intelligence, as well as that era’s version of the ‘machine fallacy’, an inversion of the pathetic fallacy, in which the brain was thought of as being something like the most powerful machine of the time. The work of the twenty-first and twenty-second centuries, however, had made it clear that there were no specific ‘engram’ sites as such. Any number of experiments failed to locate these sites, including one in which various parts of rats’ brains were removed after they learned a task, with no part of the brain proving essential; the frustrated experimenters concluded that memory was ‘everywhere and nowhere’, leading to the analogy of brain to hologram, even sillier than all the other machine analogies; but they were stumped, they were flailing. Later experiments clarified things; it became obvious that all the actions of consciousness were taking place on a level far smaller even than that of neurons; this was associated in Sax’s mind with the general miniaturization of scientific attention through the twenty-second century. In that finer-grained appraisal they had begun investigating the cytoskeletons of neuron cells, which were internal arrays of microtubules, with protein bridges between the microtubules. The microtubules’ structure consisted of hollow tubes made of thirteen columns of tubulin dimers, peanut-shaped globular protein pairs, each about eight by four by four nanometres, existing in two different configurations, depending on their electrical polarization. So the dimers represented a possible on-off switch of the hoped-for engram; but they were so small that the electrical state of each dimer was influenced by the dimers around it, because of van der Waals interactions between them. So messages of all kinds could be propagated along each microtubule column, and along the protein bridges connecting them. Then most recently had come yet another step in miniaturization; each dimer contained about four hundred and fifty amino acids, which could retain information by changes in the sequences of amino acids. And contained inside the dimer columns were tiny threads of water in an ordered state, a state called vicinal water, and this vicinal water was capable of conveying quantum-coherent oscillations for the length of the tubule. A great number of experiments on living monkey brains, with miniaturized instrumentation of many different kinds, had established that while consciousness was thinking, amino acid sequences were shifting, tubulin dimers in many different places in the brain were changing configuration, in pulsed phases; microtubules were moving, sometimes growing; and on a much larger scale, dendrite spines then grew and made new connections, sometimes changing synapses permanently, sometimes not.

So now the best current model had it that memories were encoded as standing patterns of quantum-coherent oscillations, set up by changes in the microtubules and their constituent parts, all working in patterns inside the neurons. Although there were now researchers who speculated that there could be significant action at even finer ultramicroscopic levels, permanently beyond their ability to investigate (familiar refrain); some saw traces of signs that the oscillations were structured in the kind of spin networks that Bao’s work described, in knotted nodes and networks that Sax found eerily reminiscent of the palace of memory plan – rooms and hallways – as if the ancient Greeks by introspection alone had intuited the very geometry of timespace.
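(As an aside, the 10^14 figure in that first paragraph is easy to check on the back of an envelope. Here’s the arithmetic spelled out as a tiny Python snippet; the one-bit-per-synapse assumption is the passage’s own, not an established fact.)

```python
# Back-of-envelope version of the capacity estimate in the quoted passage:
# one bit per synapse, as the old engram idea assumed.
neurones = 10**10              # "ten billion neurones"
synapses_per_neurone = 10**4   # the value that yields the quoted 10^14
print(f"{neurones * synapses_per_neurone:.0e} bits")   # 1e+14

# Taking "hundreds of thousands" of synapses per neurone literally would
# push the estimate closer to 10^15, so 10^14 is the conservative end.
print(f"{neurones * 10**5:.0e} bits")                   # 1e+15
```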

The reason why I hate it (and hate is too strong a word) is that I don’t happen to think that his explanation for memory and consciousness is true at all. It has a very Penrosian feel about it, and I’ve never really thought it possible for there to be a working quantum computer residing in our neuronal microtubules; nor have I seen the necessity for it. Plus, the idea that you would store information through alterations in the tubulin dimer amino acid sequence is really not workable (although I suppose that enzyme-mediated residue methylation or ubiquitination wouldn’t be out of the question).

I love this passage because it almost makes sense. KSR clearly understands what he’s talking about, and I’m pretty sure that he realises it’s extreme speculation. The rest of the monologue is much like this, discussing terms that neuroscientists bandy about regularly but don’t actually understand fully, like LTP and glutamate receptor sensitizers.

In a way, to most readers it doesn’t matter if the science makes any sense. What matters is the flow of the words and the beautiful progression from one magical concept to the next that science seems to make effortlessly; in this passage, KSR has managed to convey some of the feeling that you experience when you understand (or think you understand) a horribly complicated system; the feeling when everything shifts, just so, and interlocks into place.

The fact that it also happens to largely make sense is something that I truly appreciate; it would have been simple enough for KSR to just make it all up, but I think he must have actually enjoyed learning about how memory might work to have written this.

Lucid

I arrived back in the UK yesterday morning after a 24 hour journey from Sydney. Predictably, it was raining.

What I tried to do during the flights back home was to time my eating and sleeping so as to reduce the jetlag caused by the ten-hour time difference. The easiest way to do this is to set your watch to your destination time zone as soon as you step on the plane and go to sleep at the appropriate time; there are other things you can do, but they’re more personalised.

Resetting your time zone would probably work really well except for the fact that you also have to spend a large amount of time in a plane, which is not really the best environment for sleeping. I suspect that if we used teleporters things would be much better in this respect.
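For what it’s worth, the watch-resetting trick is simple enough to write down. Below is a toy sketch of the idea in Python; the cities, the date and the 11pm bedtime are all illustrative assumptions, not a recommendation.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Toy sketch of the watch-resetting trick: express a sensible bedtime at
# the destination in the time zone you're leaving from, so you know when
# to try to sleep on the plane. Cities, date and bedtime are illustrative.
origin = ZoneInfo("Australia/Sydney")
destination = ZoneInfo("Europe/London")

# Target bedtime on the night you land, in destination local time
bedtime_there = datetime(2003, 3, 10, 23, 0, tzinfo=destination)

# The same instant according to your not-yet-reset watch
bedtime_by_old_watch = bedtime_there.astimezone(origin)

print("Sleep when your old watch says", bedtime_by_old_watch.strftime("%H:%M"))
print("which is", bedtime_there.strftime("%H:%M"), "at your destination")
```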

In any case, it worked reasonably well for me. I arrived back home at 7am GMT on Monday, having been awake for about 32 hours (OK, I managed to get a couple of hours of sleep on the plane, but those don’t count). I managed to stay up (some might say heroically) until about 3pm, when I decided to have a short nap; by then I had been up for about 40 hours.

Unsurprisingly, that nap went on for about eight hours. Surprisingly, it was only the second time in my life that I have (if briefly) had a lucid dream, a dream in which I knew I was dreaming.

The first time I had a lucid dream came after about a week or two of fairly diligent practice and preparation. There are a few strategies out there to help you have a lucid dream, and the majority boil down to experiencing and recognising a sign that you are dreaming, within the dream. My preparation involved checking the time on my watch a few times a day and thinking to myself, ‘My watch looks like it should, so I’m not dreaming.’ The point of this was to get into the habit of checking the time so that I would do it in my dreams as well.

Soon enough, during a dream I checked the time and noticed that the watch was doing something wacky, such as changing the time when I looked at it twice in succession, or maybe going backwards, or whatever. At that point, I thought to myself, ‘Hey, this is a dream!’ and it was a rather interesting experience, like waking up (but obviously not in the literal sense). Since the point of lucid dreaming is that you get to do whatever you want in the dream, I resolved to do a bit of flying, but for some reason I got caught up in the dream and lost self-consciousness. I was a bit disappointed by this and gave up the practice. This was probably over five years ago.

Last night’s lucid dream had a different beginning. I was chatting to someone who said something completely bizarre, and then I replied, ‘Hold on a second, that’s not possible, this must be a dream!’ and once again, I woke up and it was really a wonderful sensation. Alas, after only a few subjective minutes of lucidity, during which time I freaked out a bit because I thought I might make myself wake up properly by my antics in the dream, I lost self-consciousness again.

Anyway, this experience made me think about the physiological basis of the transition between normal and lucid dreaming. In normal dreaming, you are still conscious, in a sense – you are aware that you are yourself. However, you are not aware that you are actually in a dream; that additional level of awareness is what some call meta-awareness.

So why is it that it’s so difficult to gain meta-awareness while dreaming, and how does it occur? Is it possible to observe some kind of neural correlate of the transition, perhaps by fMRI? I have to confess that I have no good theories on the basis of lucid dreaming, but it certainly does seem to be a ripe area for investigation by cognitive neuroscientists, especially those looking at the nature of consciousness, awareness and theory of mind (some might say that this would involve all of them).

Recursive

During dinner yesterday, I mentioned to Andrew Paul and The Official Bear Of The Third Millennium that I’d recently had an MRI scan done of my brain. Someone then said how strange it must be to see the activity of your brain in real time. I was just in the middle of replying that the experiment didn’t involve subjects seeing their brain, and that in any case it wasn’t possible, when I realised that it in fact was possible.

Creating images from MRI and functional MRI scans is very computationally intensive, and analysing them is even more so. Until recently, this meant that it wasn’t really feasible to conduct an fMRI scan and see different areas of the brain ‘lighting up’ in real time. However, last year at the brain imaging centre at UC San Diego, one of the researchers there mentioned to me that some really cutting-edge work at another research institute had finally allowed scientists not only to see the workings of the brain in real time, but also to zoom in on specific sections and essentially fly through the brain – in 3D.

Needless to say, neuroscientists who’ve heard about this – and they are few, because the technique (as far as I know) is not in use yet – are positively wetting themselves with excitement about the possibilities. Instead of waiting days or weeks after conducting a test to see the results and then planning subsequent tests, you could alter the scan immediately to focus in on regions of interest. Perhaps even more promising is the possibility of creating dynamic tests that respond to detected activity.
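Just to make the idea concrete, here’s a toy sketch of what the ‘real-time’ part might look like in spirit. It has nothing to do with the actual system I was told about; it’s simply an incremental, voxel-by-voxel correlation with a task regressor, updated as each new volume arrives, and the array sizes and simulated data stream are assumptions for illustration.

```python
import numpy as np

# Toy sketch: update a voxel-wise correlation with a task regressor as each
# new fMRI volume "arrives", instead of waiting for the whole run. The data
# here are simulated noise; shapes and the threshold are illustrative only.
rng = np.random.default_rng(0)
n_voxels, n_volumes = 10_000, 120
task = np.sin(np.linspace(0, 8 * np.pi, n_volumes))  # stand-in task regressor

n = 0
sum_x = sum_xx = 0.0
sum_y = np.zeros(n_voxels)
sum_yy = np.zeros(n_voxels)
sum_xy = np.zeros(n_voxels)

for t in range(n_volumes):
    volume = rng.normal(size=n_voxels)   # pretend this just came off the scanner
    x = task[t]
    n += 1
    sum_x += x
    sum_xx += x * x
    sum_y += volume
    sum_yy += volume * volume
    sum_xy += x * volume

    if n > 10:  # wait for a few volumes before trusting the estimate
        cov = sum_xy / n - (sum_x / n) * (sum_y / n)
        var_x = sum_xx / n - (sum_x / n) ** 2
        var_y = sum_yy / n - (sum_y / n) ** 2
        r = cov / np.sqrt(var_x * var_y + 1e-12)
        active = np.flatnonzero(np.abs(r) > 0.5)  # crude 'lighting up' threshold
        # In a real system these voxel indices would be mapped back into 3D
        # and rendered, updating the display after every volume.
```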

On the artistic front though, this new development has given me an interesting idea – wouldn’t it be awfully cool to be able to look at a computer screen and see a 3D image of your brain working in real time? Wouldn’t it be amazing to be able to fly around the inside of your brain, and see your auditory centres light up as you listen to music? Forget about biofeedback using heartbeat or galvanic skin response – it doesn’t get any better than biofeedback using your brain activity. Now, all I have to do is get an Arts Council grant…

Biononsense

After reading this article about human genetic engineering, I have to comment on something that’s been bugging me for a while now. The article is inoffensive enough, but it uses the term ‘biogenetics’. I’m sorry, but there is no such field as biogenetics; it’s either genetics or nothing, and there’s no use in trying to make it more sexy by putting a bio- in front of it.

I suppose you could make an argument that not all genetics is necessarily biological, but it wouldn’t be a good one. If you go to any university in the world and check out their genetics department, you’ll find that they’re studying biological organisms. Kids: don’t use the word biogenetics. In fact, whenever you feel the urge to prepend a word with ‘bio’, think long and hard.

This reminds me of an email exchange I had with Brad DeLong about overuse of the word ‘cognitive’:

Me:

Off Topic: I note that Brad has made a post about Cognitive Economics on his blog. I am most disappointed at this; not at the post, but at the use of the word ‘cognitive’. It seems as if everyone and his dog is using ‘cognitive’ – there are cognitive radios, cognitive networks, cognitive economics… all you people should get your grubby hands off the word and leave it where it belongs, in cognitive (neuro)science. Grumble. Just because you all wished you were in the cool gang.

But seriously. I know that we’ve had buzzwords for centuries, but dammit, this time it’s personal. In times past, you wouldn’t call it a cognitive radio, you’d call it an ‘intelligent radio’ or adaptive radio or whatever. There’s nothing cognitive about it. Ditto for ‘cognitive economics’. Whatever happened to ‘psychology’ or ‘value judgements’ eh? Damn kids…

Brad:

Jeebus!!!! We economists make one little foray into buzzword-land to try to land some few small drops of water from the firehose of funding being directed by the NSF and others at the “cognitive sciences,” and what happens?

It’s not so much that we wish we were in the cool gang (actually, we do–but put that to one side: most economists felt in college that they didn’t have the mathematical firepower to do nat sci, and still feel ashamed and inferior), as WE WANT SMALL POOLS OF RESEARCH MONEY, DAMMIT!!!

They just can’t help it

They just can’t help it – an article about the differences between the male and female brain by my old university psychology supervisor, Prof. Simon Baron-Cohen. Interesting and controversial stuff; I saw a lecture that he gave about this topic a few months ago (not via Metafilter – I read it in the Guardian myself this time).

Artificial hippocampus

There’s a fair amount of excitement on the Internet about efforts to make an artificial rat hippocampus. This idea strikes me as, well, pretty weird. I am a little doubtful as to whether it could work (I can think of a lot of reasons why it wouldn’t), but to be honest, it doesn’t matter whether it works or not; the point here is that they have developed a way to interface a chip with a brain and do some decent data processing.

Some problems with this project:

1) I suspect there are millions of neurones in the rat hippocampus – at the very least a million. Thus the chip would need the same number of connections. I would be amazed if they managed to create a chip that could hook up all those connections. But if I’ve read the article properly, they aren’t even trying to preserve the exact connections between the neurones, and I don’t think that’s a good idea.

2) The lab that I work in at Cambridge does a lot of recordings from neurones in the rat. We only try to record from one neurone at a time, and we know what we’re doing. Despite that, we still have problems with the signal-to-noise ratio when listening to neurones. Often we have to do some serious waveform analysis to get the signal out properly, and that takes a lot of processing power (there’s a rough sketch of that kind of analysis after this list). Recording accurately from millions of neurones simultaneously, online – I’m not saying it’s impossible, but it is years, if not more than a decade, beyond our current capabilities.

3) “They had to devise a mathematical model of how the hippocampus performs under all possible conditions…” I’ll put it this way: the scientists themselves state that no-one knows how the hippocampus works. Even stimulating it with electrical signals ‘millions of times over’ is not enough to make a proper mathematical model. And never mind how the hippocampus works – we don’t even know how neurones work.

4) Neurones do more than just send electrical signals. There’s a very complex interplay of chemical signals, ion channels and lots of other stuff, most of which we don’t know about. Even if it were possible to mimic the electrical signals of a hippocampus on a chip, you’d still be missing everything else.
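To give a flavour of the waveform analysis mentioned in point 2, here’s a minimal sketch of the very first step: detecting threshold crossings on a (simulated) extracellular trace and cutting out candidate spike waveforms. The sampling rate, spike shape and threshold rule are illustrative assumptions, and real spike sorting involves a great deal more than this.

```python
import numpy as np

# Minimal sketch: threshold-based spike detection on a simulated
# extracellular recording, followed by cutting out candidate waveforms.
# Everything here (sampling rate, spike shape, threshold rule) is illustrative.
fs = 25_000                        # samples per second
rng = np.random.default_rng(1)
trace = rng.normal(0.0, 1.0, fs)   # one second of simulated noise

# Drop in a few fake spikes (a crude biphasic shape)
spike = np.concatenate([-6 * np.hanning(10), 2 * np.hanning(14)])
for start in (3_000, 9_500, 17_200):
    trace[start:start + spike.size] += spike

# Robust noise estimate and a negative-going threshold (a common rule of thumb)
sigma = np.median(np.abs(trace)) / 0.6745
threshold = -4 * sigma
crossings = np.flatnonzero((trace[1:] < threshold) & (trace[:-1] >= threshold)) + 1

# Cut a short window around each crossing; real spike sorting would then
# cluster these waveforms by shape to assign them to putative neurones.
pre, length = 8, 32
waveforms = [trace[c - pre:c - pre + length]
             for c in crossings if c >= pre and c - pre + length <= trace.size]
print(f"{len(waveforms)} candidate spikes detected")
```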

Still, like I say, it doesn’t matter if it doesn’t work; it’s always useful to do research on chips that can interface with neurones. However, I don’t think it’s realistic to expect this to work any time soon.