Beating the Hive Mind

“What’s this?” I asked, toying with a white cylinder with letters printed across it.

“It’s a cryptex,” explained Eric Harshbarger, one of Mind Candy’s in-house puzzle designers. “Like the one from The Da Vinci Code.”

In The Da Vinci Code, a cryptex is a cylinder with five wheels that can be rotated independently; each wheel has letters printed on it, and if you line up the wheels properly, it’ll open. The puzzle to open the cryptex in the movie was rather boring, but Eric had come up with a much more interesting multi-stage puzzle and then constructed it himself. He’d brought the cryptex, along with some other fun physical puzzles, to San Francisco for our live event there last year.

While we walked down to a nearby cafe for breakfast, Eric mentioned how he’d visited Google a couple of days earlier with the cryptex and shown it to some of the puzzle fans there, including Wei-Hwa Huang, the designer of Google’s Da Vinci Code puzzle quest. Immediately, Wei-Hwa and two other Googlers threw themselves at the task and, within two or three hours, had figured it out. Thus the challenge was set: could we beat Google?

Personally, I didn’t think so. Those guys not only live and breathe puzzles, they actually spend a lot of time solving them. So I passed (I didn’t have a few hours to spare), and instead played around with some of Eric’s wooden puzzles while David Varela, a writer at Mind Candy, busied himself with the cryptex.

Thirty minutes later, David had solved the cryptex. He had beaten Google. And he didn’t even have a piece of paper, let alone a computer.

Avoiding dress collisions

Last month, Laura Bush turned up to a party at the White House in an $8,500 Oscar de la Renta dress, which three other women also happened to be wearing. She quickly changed her dress, but the damage was done and the story of this amusing accident quickly spread across the Internet.

I was thinking about this while dawdling in a shopping mall last week, and wondered how this sort of thing might be prevented in future. Perhaps some sort of centralised dress list might work? If all attendees to a formal function submitted their dresses to the organisers, any embarrassing collisions could be quickly detected and avoided. Of course, the obvious problem with this (from a security standpoint) is that the list is rather valuable and if it was leaked, there would be hell to pay. Even if it wasn’t leaked, there’d still be the opportunity for the organisers to get up to all sorts of mischief by manipulating the list, for example, to fake collisions in order to prevent people from wearing particular dresses.

So, how about hashing the dresses? By this, I mean taking the name of someone’s dress (e.g. ‘oscardelarenta2006’) and applying a hash function to it. The function spits out a hash sum (e.g. ‘4FA043BD’), which can then be stored in a database. If anyone else submitted an identical dress name, it would produce the same hash sum and the collision could be detected. The interesting thing about hash functions (good ones, at least) is that they are one-way; in other words, if you’re an unscrupulous White House party organiser with access to the database who wants to sell the dress list to the paparazzi, you only have the hash sums – and you can’t reverse them to get the dress names back.
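
A minimal sketch of how the collision check might work, in Python (SHA-256 standing in for whatever hash function you’d actually choose; the normalisation rule and names are my own assumptions):

```python
import hashlib

def hash_dress(dress_name: str) -> str:
    """One-way hash of a normalised dress name (truncated to 8 hex chars for display)."""
    normalised = dress_name.lower().replace(" ", "")
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()[:8]

submitted = {}  # hash sum -> guest who registered that dress

def submit_dress(guest: str, dress_name: str) -> bool:
    """Register a dress; returns False if another guest already submitted it."""
    digest = hash_dress(dress_name)
    if digest in submitted and submitted[digest] != guest:
        return False  # collision detected: two guests, one dress
    submitted[digest] = guest
    return True

print(submit_dress("Guest A", "Oscar de la Renta 2006"))  # True: first submission
print(submit_dress("Guest B", "oscardelarenta2006"))      # False: collision
```

One caveat worth noting: the space of plausible dress names is small and guessable, so an unscrupulous organiser could still recover the list by hashing every name in every designer’s catalogue and comparing. A real system would want to mix a secret salt into the hash to block that kind of dictionary attack.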

This is not really a novel idea – ‘friend-of-a-friend’ applications (FOAF) do the same job, wherein people take their email contact list, hash it, and then compare that hashed list against other people’s hashed lists. If there’s a collision, then you have a friend in common, leading to all sorts of possibilities such as friendship/business partnerships/hooking up – but importantly, you can’t figure out someone’s friends from their hashed contact list.
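
The same trick in miniature – hash both contact lists and look for overlap (again just a sketch, with invented addresses):

```python
import hashlib

def hash_contacts(emails):
    """Hash each address so the list can be compared without being revealed."""
    return {hashlib.sha256(e.strip().lower().encode("utf-8")).hexdigest()
            for e in emails}

mine = hash_contacts(["alice@example.com", "bob@example.com"])
yours = hash_contacts(["bob@example.com", "carol@example.com"])
print(len(mine & yours))  # 1 -> we have one friend in common
```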

Given that fashion designers already maintain their own private lists of who is wearing their dresses to which parties, both to avoid collisions and to drum up publicity (and presumably someone at Oscar de la Renta was fired), I doubt that a hashed dress list repository will ever come about. But it’s an interesting exercise.

Singularitarian

“[When] there are wireless chips in my clothes, when I get up in the morning it’s going to simplify my life enormously. There’s all this stuff I won’t have to consciously think about anymore. If I don’t know where my cowboy boots are, they will tell me.”
– Bruce Sterling

I quite like Bruce Sterling’s novels, Schismatrix and Holy Fire in particular, but the reason I bring up this quote is that it comes from an interview with Vernor Vinge, one of my favourite authors. Vinge is best known for two things: slowly but surely writing epic space operas that awards tend to gravitate towards, and popularising the idea of the Singularity (the continuation of exponential technological development to the point where things change so fast that we can’t predict where they’re going).

His latest book, Rainbows End, is not a space opera, and takes place in California a mere few decades from now. In it, he looks at a few trends in computing and communication, and simply extrapolates them. However, the novel doesn’t focus on the technology so much as on the way it profoundly changes how people live and work, in quite an intimidating manner. We forget that our present-day technology of mobile phones and the internet has completely reshaped millions of people’s lives (even if only the seven million people who play World of Warcraft). Imagine what it’s going to be like in another 20 or 30 years.

One idea of his that I’m quite fond of is silent messaging, where you simply subvocalise words and they’re transmitted to another person’s earpiece or contact lenses instantly. The technology behind this is not particularly hard. You need unmetered, ubiquitous network access (check – we’re almost there), a sensor that can pick up and translate subvocalisations (check – although it needs to get a little smaller) and a transceiver (check – it’s called a mobile phone). The hardest part is the contact lenses, which are tricky, but there are a lot of bright people working on that. Alternatively, you could just wear a miniaturised earpiece (check).

So what, you might ask: I can already do that with texting. True, but with silent messaging you could effectively conduct two conversations at once, or simply talk to someone without anyone else knowing. The social implications are enormous. Not only are we talking about cheating on a massive scale, but at a more basic level, backup on a global scale. Want to get rid of an awkward date? Silent message your friend to rescue you. Need to figure out a complex sum while carrying a load of shopping bags? Silent message your calculator. Need to remember or look up something – anything – before you forget it? Silent message your notebook or Google. This is only a few years off.

Ultimately, it’s what the technology enables that’s interesting. Vinge illustrates this rather well in the last third of the book, which takes place over about two hours:

“In the climax, there are only about 25 Marines that are actually involved in an operation that is looking after the entire southwest United States. But they are backed up by thousands of analysts and by a lot of equipment on the ground. So, in a way, the normal people in the story are already strange by our standards.”

If your technology is that powerful, you don’t need people to wield the weapons; you just need people to out-think and outguess the enemy. The more people you have, and the more diverse their knowledge and specialties, the better. The theme running through Rainbows End is that it’s not so much what you know, or even who you know, as the ability to identify and connect people with the skills and knowledge you need in the best way possible. To do that, you need to be able to use the new tools effectively, and inevitably, the young will learn those tools better than the old.

It’s a scary place, the future.

The Young Lady’s Illustrated Primer

(As I mentioned in my last post, I have this big backlog of posts I want to make. One of my notes simply says ‘zelda’. I’m not sure what this means any more. Could it be about the Phantom Hourglass trailer I saw at GDC? Or maybe it’s about the Zelda music they played at Video Games Live – or something else entirely. Clearly the process of externalising my short-term memory to a text-based stack on my laptop is not without its flaws.)

With the exception of playing Civilization on the PC, my Nintendo DS has taken the throne as the best games ‘device’ I’ve ever had. I’m a little surprised at this – back when the DS and PSP were announced, I would have thought I’d spend more time on the PSP, with its shiny graphics. After testing both for quite some time, I simply stopped using the PSP. I couldn’t stand the fact that I had to wait for it to boot up, and then wait through loading screens. I also disliked the poor battery life and the boring interface.

The DS, on the other hand, booted up instantly and had no real loading times. The graphics weren’t amazing, but that wasn’t important – they were perfectly suitable for the sorts of games I was playing. Mario Kart and puzzle games aren’t any better if you can display a million polygons. Plus the touch interface and voice input were very charming. It offered all sorts of possibilities, including one that is only now becoming apparent – the DS can be a machine that shows you how to live. Let me explain the background first.

At the moment I have two new games: Animal Crossing and Tetris. Between those and Civ 4, it’s enough to destroy all my free time. However, I haven’t played Tetris at all; they’ve changed the control system from what I’m used to, so the up arrow doesn’t rotate pieces any more, which has proved irritating. I’m sure I’ll eventually adapt, but it put me off after I accidentally dropped a few vital pieces instead of rotating them.

Animal Crossing, on the other hand, is like a single-player MMOG. Or like a real-time game of The Sims. Or maybe it’s something completely new. Animal Crossing sounds intensely boring – you’re a person in a village, and you can walk around and chat to the other residents. You can go fishing, catch butterflies, look for fossils, plant flowers, or buy things. You can’t kill anything, or explore outside the town. There’s no story. And when I first played it, it was intensely boring – there just wasn’t anything to do.

The next time I turned it on, it was morning. The game mimics real-world time, since it has an internal clock. It also mimics holidays and seasons, because it knows the date. It turned out that there was a flower competition on. “Hmm,” I thought. I went over to the Mayor and had a chat. After some interminable small talk, he gave me a free bag of flowers. Not bad.

I went to plant it, and then bumped into the guy who sold me my house in the game. Turns out that I still had to pay off the mortgage, so I worked in his shop for a while, doing deliveries, planting trees and the like. When I’d paid off my mortgage, I didn’t have to work for him any more (but he did offer to add an extension onto the house – for a tidy sum, of course). By this time I’d earned enough money to buy a fishing rod and insect net.

All the fish and insects – and fossils – in the game are actually realistic. In other words, they also mimic real life; there are dozens of species of fish in Animal Crossing, and you can only catch them in their natural locations, such as in the ocean, or in a river. They’re also only available during certain times of the day, or when it’s raining, or in certain seasons. Ditto for insects. When you take insects or fish to the museum and show them to the curator there (who, unsurprisingly, is an owl), he’ll tell you all about them. A loach, I’m informed, is a bottom-feeder. I ended up inadvertently learning quite a bit about fish in this way.

Of course, digging up fossils is a little easier in the game than in real life – you just look for an odd spot on the ground and use your shovel. The owl can identify fossils for you as well, and he’ll put them on display (alternatively, you could just sell them, but that wouldn’t be right). I recall the owl telling me about the Peking Man fossil I found: “Did you know that he’s one of the ‘missing links’ between humans and primates? He lived around 500,000 years ago and could use fire.” I did not know that.

Flowers will die if you don’t water them enough. If you plant trees too close to the ocean, they’ll eventually die. If you run over flowers too much, they’ll die. If you take a fruit from one of the trees and bury it, it’ll grow into a new fruit tree in a few days. The other villagers will occasionally ask you to deliver items, but you can steal them if you want. There are other opportunities to lie as well, although if you get found out, they’ll be very upset. There’s a stock market of sorts that you can invest in (white or red turnips), a shady guy who’ll sell you insurance, and a person who sells artworks that are sometimes fakes.

All of this is light and fun and entertaining for adults. However, for kids, I imagine it’s quite instructive. It teaches you all sorts of concepts, but not in a one-way, didactic fashion – it lets you learn simply by doing. Animal Crossing will slip in all sorts of lessons just in the course of playing, and it will chide you if you do something wrong. I’m pretty sure that many kids would learn more from playing Animal Crossing than from any so-called edutainment title. And that’s when it struck me – Animal Crossing is truly a forerunner of The Young Lady’s Illustrated Primer.

TYLIP is a book from Neal Stephenson’s Diamond Age; powered by a computer so advanced it’s almost magical, it teaches children everything. It does this through a fully interactive story. It teaches you how to read, how to do maths; it teaches you morals, ethics, even self-defence. Diamond Age is a very entertaining read, mainly because of the TYLIP.

Of course, Animal Crossing is nowhere near TYLIP in sophistication, but it has the same sort of principles. You could argue that A Tale in the Desert has similar leanings, although that’s more targeted towards adults. Anyway, I find the notion that we are slowly progressing towards new ways of teaching children by doing very interesting and worthy of further investigation.

(I came up with the idea that Animal Crossing was analogous to TYLIP while I was talking with Margaret. I announced that I had to blog it, and then promptly forgot the whole thing five minutes later. On the coach back from Oxford, I was racking my brains – I knew I’d thought of something I wanted to blog, and that it was an analogy. I spent a few minutes on this, then got distracted by a fly buzzing at the window. The fly reminded me of catching flies in Animal Crossing, and then, bang – I remembered the analogy.)

Fight the good fight

I’ve often wondered what it is I’d like to do with my life. Science, Mars, politics (of the non-traditional sort), education and alternate reality games have all appealed and continue to appeal. But perhaps the thing I feel most passionately about is intelligent thinking and rational thought – science and the Enlightenment, in short. Reading an article at the Columbia Journalism Review about how journalists feel the need to give ‘balanced’ coverage to issues like creationism and abortion, when empirically they are not balanced at all, simply makes me furious.

I don’t believe that all ideas and beliefs are equal to each other. I believe that there are such things as facts, and that there are competing positions – like creationism and evolution – that are by no means balanced in terms of factual evidence and theoretical underpinnings. Yet a good proportion – perhaps even a majority – of people who’ve had secondary or even university education would not agree, or even care. The notion that a handful (at most) of agenda-motivated scientists – those saying that smoking is not harmful, or that creationism should be taught alongside evolution, or that the MMR vaccine is not safe – deserve equal time and consideration with the rest of the entire scientific community, backed by countless peer-reviewed, top-tier studies, is not even laughable. It’s disgusting. It’s even more horrific that most people don’t even give a shit, despite the fact that these issues affect them on a deeply personal level.

The typical and tired response to what I’m saying is, ‘Well, how can you say they’re wrong? No-one believed the Earth was round, etc etc.’ That sort of response is ridiculous. Firstly, science today is not the same as it was centuries ago, or even decades ago. Secondly, there is no scientific conspiracy to keep new theories down. In fact, speaking from experience, every scientist would like to be the one who transforms a field and the way we think about things.

I recall seeing a pro-smoking lobbyist on TV recently. When challenged with a new metastudy showing unequivocally that passive smoking is significantly harmful to public health, the lobbyist said, ‘This study doesn’t have any new data, it doesn’t mean anything, and there are other studies that show passive smoking isn’t harmful.’ I was literally speechless. Not only did this guy misrepresent what a metastudy is, he also implied that all studies are equal – that if he has one saying passive smoking is fine, never mind whether it’s flawed or not, well, that means it’s fine. Even worse, I have no doubt that he is fully aware he is misrepresenting the issue.

What I want to do is make people think rationally about these issues. I want them to understand what the scientific method is, what a theory means, and what it means to prove something. I want them to think for themselves. And I think I can do it at the same time as, and within, my other interests.

A Smoother Future

(Photos and stories from my trip to Madrid will be up soon!)

Every time I wander around a lab, I’m always amazed at how anything ever gets accomplished, what with the innumerable racks of chemicals and samples crowded into fridges and shelves, bits of paper scattered all over the place and various out-of-date printouts and forms tacked to walls. Yet research still gets done, somehow.

Being the automation-obsessed person that I am, I immediately began to think about how labs could improve their organisation. I’m not talking about putting up alphabetised shelves or anything like that, since they never last and people are intrinsically messy; I’m talking about using technologies that are available now, or will be in the next year, to improve the research experience. Two technologies would do the job.

1) Put RFID tags on everything. At first this will just mean bottles from commercial suppliers, but it will eventually expand to lab samples and chemicals. If the tags were hooked up to a searchable database, scanners would be able to locate any object immediately (see the sketch after this list).

2) Flexible, wireless displays on everything. Initially these will be expensive, but they’ll hit mass production soon enough. At that point you’ll be able to have dynamic displays that could, say, show a map of the fridge with the location of every bottle, along with its age, owner, origin and contents. They could display news of upcoming lab events, recently delivered chemicals, and notes and messages for other lab workers. The dynamic information is the important bit, as is the ubiquity.
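
As a very rough sketch of the database side of the RFID idea (Python; the reader callback and fields are my own assumptions, not any particular vendor’s API):

```python
from dataclasses import dataclass

@dataclass
class Item:
    tag_id: str
    name: str
    location: str  # last location reported by a scanner
    owner: str

inventory: dict[str, Item] = {}

def on_scan(tag_id: str, location: str) -> None:
    """Hypothetical callback fired whenever a reader sees a tag;
    keeps each item's last-known location current."""
    if tag_id in inventory:
        inventory[tag_id].location = location

def find(name: str) -> list[Item]:
    """'Where's my acetone?' - search the database by name."""
    return [item for item in inventory.values()
            if name.lower() in item.name.lower()]

inventory["tag-001"] = Item("tag-001", "Acetone, 500ml", "shelf 3", "AH")
on_scan("tag-001", "fridge 2")
print(find("acetone")[0].location)  # fridge 2
```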

As usual, these developments will seep into universities and labs slowly, until we wonder how we ever did anything without them – just as we now wonder how we ever managed without mobile phones. How did people arrange to meet up without them, anyway?

Genetic Enhancement

The Atlantic ran an anti-genetic enhancement article this month called The Case Against Perfection. Written by Michael J. Sandel, a member of the notorious President’s Council on Bioethics, the article is cogent and well-argued.

Essentially, Sandel believes that embryonic or hereditary genetic enhancements would remove the ‘giftedness’ of every child – in other words, the traits of a child would no longer be left to ‘chance’, and this would cause a number of ethical and societal problems. These could include a deterioration in parenting, as parents might expect their children to succeed in particular areas instead of accepting their unique characters, and the explicit creation of a new class of humans, which could damage solidarity among all humans and, more conventionally, distort insurance markets (though this could happen with genetic screening alone). The fact that this new eugenics would come without coercion does not, in his opinion, remove the potential danger to parents and children.

While reading the article I found myself, unusually, agreeing with Sandel, as I’m normally a proponent of genetic enhancement. Most of the arguments I’ve heard against it are of the ‘who needs bigger muscles/taller children’ variety, which I actually agree with – I’m not sure I see the point of these things. However, I hadn’t heard the ‘giftedness’ argument before, which I found initially appealing but which has left me a little uncomfortable.

My problem with the claim that genetic enhancement removes the giftedness of children is mainly that it abstracts the issue entirely. Surely we should be looking at the concrete benefits that genetic enhancement might give children, not just at the fact that parents will know about them in advance? How would improved vision and a better heart change the attitudes of parents towards their child? Sandel uses the examples of height and muscle improvement when he talks about the problems of genetic enhancement, and those are understandably fraught with complications – but they shouldn’t be taken to extend to all variations of genetic enhancement.

Also, Sandel is noticeably quiet about the potential use of genetic engineering to correct diseases such as Parkinson’s. I find it difficult to believe that he would argue against genetic ‘corrections’ unless he believes that such debilitating diseases are also part of the giftedness of a child. But if we do use genetic engineering to correct diseases while refusing to use it to ‘enhance’ people, exactly where do we draw the line? How much are people prepared to accept in the name of “our appreciation that life is a gift” (his words)?

I understand that my position is pretty vulnerable, given that if we allow genetic enhancements, people will use them to design taller and stronger children whether I like it or not. My only answer (and a weak one it is) would be to set the bar for genetic enhancements extremely high and only allow those that would not predispose the child to any particular path; improved eyesight and increased memory are useful for anything a person might care to do, but stronger legs clearly suggest that you are at least implicitly expected to make good use of them in some physical activity. I do agree that there are serious issues concerning genetic enhancement and that we should move very slowly with it; I just don’t see that its removing the ‘giftedness’ of children is a strong argument for banning the whole thing.

Introspection

Lately, I’ve been thinking about why I fall asleep in lectures so often. It isn’t because I’m tired, or because I’m bored; there are plenty of times when I am both tired and bored and yet fail to fall asleep with the kind of dependability I manage in lectures. Nor is it because I’m sitting still for an hour; I often sit, tired and bored, for several hours and again, I don’t fall asleep. The process is admittedly accelerated by the lecture being in a dark and warm room, but those conditions are neither necessary nor sufficient, and of course they accelerate any form of sleeping.

And contrary to popular belief, I don’t actively try to fall asleep in lectures. In fact, for most lectures I’m engaged in a mental struggle to stay awake. It’s not as if I’m not making an effort here. So what is it that’s so unique about lectures that makes me fall asleep in them?

I think it’s divided attention. A lecture consists of auditory and visual stimuli – a lecturer talking and perhaps some slides – that reach my sense organs and are converted into information. During lectures I try to attend to these outside stimuli, but for some reason, I usually can’t. Traditional psychologists would say that this is because the stimuli aren’t salient enough to keep my attention from drifting off into introspection. Which basically means: I’m not paying attention because I find the lecture boring.

I don’t agree with that; I’ve been in many lectures whose topics I find highly interesting and important, and I still manage to doze off, even if only for a few seconds. I think it has more to do with the presentation of the information; that is, the nature of the stimuli. I would venture that the distilled information bandwidth of most lectures is constant, and low enough to be easily processed by most people, including me, consequently leaving a fair amount of spare processing power sloshing about doing nothing (I appreciate that it’s not particularly accurate to use a computer as a metaphor for the brain, especially in terms of the brain having a linear and generalised pool of processing power, but bear with me). This spare power might be used for any number of things: further processing of the lecture information, processing of other non-lecture stimuli, or simple introspection.

I believe that in a lecture I use a significant portion of my brain to attend to the lecture itself. The rest of my brain attends to something else, such as what I’m going to cook for dinner tonight, or how to design a new kind of streetlamp cover that would reduce light pollution. Most of the time, these two attentive streams can coexist happily and independently without infringing on each other’s processing power. But when some event upsets this balance, my introspective stream can start gobbling up processing power from my lecture stream (without my conscious notice). At this point, I stop paying attention to the lecture, which means that I essentially can’t hear or see what’s in front of me, despite being awake*. From there, it’s an easy hop, skip and jump to falling completely asleep, which I would compare to a sort of cascading, spiralling experience in which my neurones progressively succumb to whatever signals cause me to lose consciousness.

*Obviously I can still hear and see. But I’m not paying attention to those senses, which means that if you asked me what the lecturer had just said, I wouldn’t be able to tell you.

Then I wake up a few seconds or at most a minute later.

It’s essential to remember that the reason this process happens in lectures and not, say, during a conversation is that a lecture’s information bandwidth is constant, which means my brain can (with reasonable confidence) allocate processing resources to something else. A conversation, on the other hand, has large fluctuations in information bandwidth that my brain has to keep an eye on.

Another equally important point that I haven’t mentioned yet is that in a lecture, the only stimuli that change are those directly related to the lecture itself, i.e. the lecturer and his slides. The rest of the room is basically unchanging. So, to push the computer analogy even further, imagine that my brain encodes auditory and visual information with a compression scheme akin to MPEG; in other words, it only pays attention to things that change. If I stop paying attention to the lecturer and his slides, then I’m not paying attention to any external stimuli at all! This provides another compelling reason why I don’t just spontaneously fall asleep while walking around Oxford.
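
To make the analogy concrete, here’s a toy version of that kind of change-only encoding (the ‘frames’ are invented three-pixel scenes; real codecs are vastly more sophisticated):

```python
def delta_encode(frames):
    """Keep only the pixels that changed since the previous frame,
    roughly what MPEG-style codecs do with an unchanging scene."""
    deltas, prev = [], None
    for frame in frames:
        if prev is None:
            deltas.append(dict(enumerate(frame)))  # first frame stored in full
        else:
            deltas.append({i: v for i, (v, p) in enumerate(zip(frame, prev))
                           if v != p})
        prev = frame
    return deltas

# A static 'lecture room' produces almost nothing to encode:
print(delta_encode([[1, 2, 3], [1, 2, 3], [1, 9, 3]]))
# -> [{0: 1, 1: 2, 2: 3}, {}, {1: 9}]
```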

Finally, I think the reason this happens to me rather than to everyone is to do with the balance between my two attentive streams. The possibilities are that lectures (for some reason) are unusually poor at holding my attention, or that my imagination is overactive, or that my attention-switching mechanism is kerjiggered.

It is, I believe, a very seductive and compelling hypothesis that is more satisfactory than my previous ‘energy conserving brain’ hypothesis – perhaps even worthy of more investigation…

Only a Matter of Time

“The location of the Greenwich Meridian, that was decided arbitrarily, right?”

“I suppose. They put it there because our system of time or mapping or something like that was designed in Greenwich.”

“But if it was designed in, say, America or Russia, the ‘zero time’ could have been there?”

“I don’t see why not.”

“So, in a way, it’s a complete accident that the international date line happens to lie in the middle of an ocean, instead of, say, cutting inconveniently through a major country?”

(pause)

“Huh, I hadn’t thought of it that way.”

“See, if the equivalent of the Greenwich Meridian was in America or Russia, then the international date line could lie across several countries and you’d have the strange situation of being able to move back and forward a day just by going for a walk.”

“Okay. I think in reality they would have just bent the international date line – like they already do for other time zone boundaries – so it would run along an ocean or sea. Still very interesting though.”

Lucid

I arrived back in the UK yesterday morning after a 24 hour journey from Sydney. Predictably, it was raining.

What I tried to do during the flights home was time my eating and sleeping so as to reduce the jetlag caused by the ten-hour time difference. The easiest way to do this is to set your watch to your destination’s time zone as soon as you step on the plane and go to sleep at the appropriate time; there are other things you can do, but they’re more personalised.

Resetting your body clock like this would probably work really well, except that you also have to spend a large amount of time in a plane, which is not really the best environment for sleeping. I suspect that if we used teleporters, things would be much better in this respect.

In any case, it worked reasonably well for me. I arrived back home at 7am GMT on Monday, having been awake for about 32 hours (OK, I managed to get a couple of hours of sleep on the plane, which I’m not counting). I managed to stay up (some might say heroically) until about 3pm, when I decided to have a short nap; that made 40 hours up continuously.

Unsurprisingly, that nap went on for about eight hours. Surprisingly, it was only the second time in my life that I have (if briefly) had a lucid dream, a dream in which I knew I was dreaming.

My first lucid dream came after about a week or two of fairly diligent practice and preparation. There are a few strategies out there to help you have a lucid dream, and the majority boil down to experiencing and recognising, within the dream, a sign that you are dreaming. My preparation involved checking the time on my watch a few times a day and thinking to myself, ‘My watch looks like it should, so I’m not dreaming.’ The point was to get into the habit of checking the time so that I would do it in my dreams as well.

Soon enough, during a dream I checked the time and noticed that the watch was doing something wacky, such as changing the time when I looked at it twice in succession, or maybe going backwards, or whatever. At that point, I thought to myself, ‘Hey, this is a dream!’ and it was a rather interesting experience, like waking up (but obviously not in the literal sense). Since the point of lucid dreaming is that you get to do whatever you want in the dream, I resolved to do a bit of flying, but for some reason I got caught up in the dream and lost self-consciousness. I was a bit disappointed by this and gave up the practice. This was probably over five years ago.

Last night’s lucid dream had a different beginning. I was chatting to someone who said something completely bizarre, and then I replied, ‘Hold on a second, that’s not possible, this must be a dream!’ and once again, I woke up and it was really a wonderful sensation. Alas, after only a few subjective minutes of lucidity, during which time I freaked out a bit because I thought I might make myself wake up properly by my antics in the dream, I lost self-consciousness again.

Anyway, this experience made me think about the physiological basis of the transition between normal and lucid dreaming. In normal dreaming, you are still conscious, in a sense – you are aware that you are yourself. However, you are not aware that you are actually in a dream; this is called meta-awareness by some.

So why is it that it’s so difficult to gain meta-awareness while dreaming, and how does it occur? Is it possible to observe some kind of neural correlate of the transition, perhaps by fMRI? I have to confess that I have no good theories on the basis of lucid dreaming, but it certainly does seem to be a ripe area for investigation by cognitive neuroscientists, especially those looking at the nature of consciousness, awareness and theory of mind (some might say that this would involve all of them).