The 7 Minute Solution

I’m intrigued by the proliferation of explicitly time-based self-care plans, like the 7 Minute Workout. They aren’t a new phenomenon – we’ve had 30 day diets and things like NaNoWriMo for decades. But it feels like the durations of these plans are getting shorter and shorter.

The Science

Part of the change is surely due to science. We now know that high-intensity interval training can produce better results, in terms of fitness, than longer but less intense exercise, by putting our heart and muscles under shorter, sharper periods of stress. Crucially, we know the mechanisms by which this works – it’s not just an observation; we can really see how our body’s cells and organs respond to stress.

But there are different degrees of rigour and certainty in science. A lot of the self-care plans based on psychology and neuroscience are, to my mind, based on much fuzzier research. I don’t mean to say that the researchers in question are incompetent or lying; it’s that their research is taken lightyears too far by companies marketing products.

Let’s imagine researchers conduct a study where they place university students in an MRI scanner and observe their brains while they’re listening to different sounds for ten minutes; maybe some students hear music, some hear white noise, some hear speech, and so on. They find that the students who hear the music have a different kind of brain activity in regions associated with focus or relaxation, or whatever, and the students also report that they feel more relaxed afterwards. So perhaps something is going on with the music, or that type of music, and it’s worthy of more study.

But then let’s say a company sees this research and makes an app – 10 Minute Relaxation (I’m making this up) – which plays calming music to you. They say their app is proven ‘by science’ to make you more relaxed in just ten minutes. Well, clearly not; what ‘works’ on university students sitting in an MRI may not work at all on a 50-year-old sitting on a bus.

In any case, it doesn’t matter whether it works or not: it sounds good, and people want a fast solution proven by science. The app makers can point at the study, and the app’s users get a nice placebo effect.

The Speed

Not long ago, the time in London was different from the time in Edinburgh. Not that it mattered – it took so long to travel between the two cities, and the journey was so unreliable, that knowing the time down to the minute would have been pointlessly expensive (clocks and watches being pretty high tech a century or two ago).

But now we have smartphones, which means that we agree on the time down to the second, and we can know our ETA via Google Maps and Uber down to the minute. We can be more efficient – no more idly waiting for ten minutes at the coffee shop for a friend, because they can let us know they’re running late; we can spend that ten minutes on something else. Maybe it’s playing a game or reading Facebook – or maybe it’s something productive, like a 10 Minute Relaxation session.

The gaps in our busy lives are shrinking, which means that self-care solutions must also shrink.

The Anxiety

Any one of us can become an exceptional artist or writer or games designer or YouTuber or actor. Any one of us can lose our jobs in an instant. Any one of us can have our entire field of work vanish in just a few years, thanks to automation and globalisation. So we are in competition with everyone else, which is a recipe for serious anxiety. It means you always need to be improving yourself; and it’s easy to see why shorter solutions can feel more manageable and rewarding than, say, the 7 Month Workout, or the 10 Year Relaxation session.

Invariable Reinforcement

Our office manager Sophie passed me the phone. “It’s someone from Google,” she said. I raised an eyebrow. Perhaps this was an invitation to an event, or another chance to test prototype hardware, or something even more magical.

I unmuted the phone. “Hello?”

“Hi, I’m Tim, from Google Digital Development. I’d love to talk about how we can help you promote your apps on the Google Play Store better.”

How disappointing — they were just selling Google search ads. I quickly made my excuses and hung up.

Three months later: “Hi Adrian! My name is Mike, I’m from Google Digital Development -”

Seven months: “Hey Adrian! I’m from Google Digital -”

Twelve months: “I’m Sean, I’m from Google Digi -”

To this day, it keeps happening and I keep getting my hopes up, like a child. Why don’t I learn that ‘Google’ on the phone equals ‘Irish guy cold-calling with ad sales’?

Because I haven’t told you about the times Google contacts us about actual interesting projects. It’s usually by email, but sometimes they do call. Not on a regular schedule, of course — but at random, unpredictable times.

This pattern of frustration mixed with intermittent success is essentially a variable reinforcement schedule. If you’ve read any article about addiction in the last twenty years, you’ll know that a variable reinforcement schedule can be used to make rats compulsively press a lever in the hope of getting another pellet of food; and that the same schedule could explain how addictive behaviour develops in humans.
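The schedule itself really is that simple – here’s a toy sketch in Python (entirely my own illustration, with invented numbers) comparing a variable-ratio schedule against a fixed-ratio one:

```python
import random

def variable_ratio_rewards(presses, mean_ratio, seed=0):
    """Variable-ratio schedule: each lever press pays out with
    probability 1/mean_ratio, so rewards arrive unpredictably."""
    rng = random.Random(seed)
    return [rng.random() < 1 / mean_ratio for _ in range(presses)]

def fixed_ratio_rewards(presses, ratio):
    """Fixed-ratio schedule: every `ratio`-th press pays out."""
    return [(i + 1) % ratio == 0 for i in range(presses)]

# Same average payout rate, very different predictability
vr = variable_ratio_rewards(1000, mean_ratio=10)
fr = fixed_ratio_rewards(1000, ratio=10)
```

Both schedules pay out roughly once every ten presses on average, but only the fixed one is predictable – and it’s the unpredictable one that keeps the rat (or the Redditor) pressing.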

Some people in the tech community act as if variable reinforcement schedules were occult knowledge, magic words capable of enchanting muggles into loosening their wallets. If only we could learn the secrets of variable reinforcement schedules, we could make them addicted to our new app — all those microtransactions, all those ad views, oh my!

So when people learn that I studied experimental psychology and neuroscience at Cambridge and Oxford — and that I run a company that designs health and fitness games — they are taken aback. They are fascinated. And then… they are disappointed, but only after I tell them that the principles of variable reinforcement schedules and operant conditioning can be learned by a dedicated student in a few hours. Moreover, if experimental psychologists were all capable of making the next Candy Crush, they wouldn’t spend most of their time complaining about the quality of tea in the staff common room.

That doesn’t mean that variable reinforcement schedules are bunk, though.

Variable reinforcement schedules help explain why I spend an hour a day mindlessly checking Gmail, Metafilter, Reddit, Twitter, and Hacker News. Even when I know, with 99% certainty, that nothing interesting will have happened in the 15 minutes since I last checked them, I still type Command-R — because maybe this time I’ll get lucky.

More broadly, it’s why we pay attention to the constant interruptions that plague our screens – there’s no cost to the person sending the interruption, and occasionally, it’s of real interest to us.

Brain Training Games Don’t Work

A few days ago, 73 scientists signed a letter asserting that brain training games – which typically feature puzzle games and mental exercises on smartphones, tablets, PCs, or handheld devices – do not successfully increase general measures of intelligence or memory.

I have long had my doubts about the efficacy of games like Brain Age in improving general intelligence. Doing simple arithmetic exercises, in my mind, only improves your ability to… do simple arithmetic. Supposedly there are some mental exercises you can do to improve working memory, such as the n-back task, but these are really quite difficult and not fun to do. Still, I have not been a practising neuroscientist or experimental psychologist for several years, so I didn’t feel qualified to comment.
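(For the curious: the n-back task is easy to describe in code. This is just a hypothetical sketch of the kind of trial sequence such an exercise presents – not any particular product’s implementation.)

```python
import random

def make_nback_trials(n, length, alphabet="ABCDEFGH", match_rate=0.3, seed=1):
    """Generate a stream of letters for an n-back task, plus the answer key:
    trial i is a 'match' if it repeats the letter shown n trials earlier."""
    rng = random.Random(seed)
    stream = []
    for i in range(length):
        if i >= n and rng.random() < match_rate:
            stream.append(stream[i - n])  # deliberate n-back match
        else:
            # pick any letter, but avoid an accidental match
            choices = [c for c in alphabet if i < n or c != stream[i - n]]
            stream.append(rng.choice(choices))
    key = [i >= n and stream[i] == stream[i - n] for i in range(length)]
    return stream, key

def score_responses(key, responses):
    """Fraction of trials where the player's match/no-match call was right."""
    return sum(k == r for k, r in zip(key, responses)) / len(key)
```

The difficulty – and the tedium – comes from holding the last n items in working memory while the stream keeps moving.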

I suggest you read the letter in full or, failing that, the Guardian’s summary (which also handily includes responses from game developers), but there are some important excerpts worth considering:

It is customary for advertising to highlight the benefits and overstate potential advantages of their products. In the brain-game market, advertisements also reassure consumers that claims and promises are based on solid scientific evidence, as the games are “designed by neuroscientists” at top universities and research centers. Some companies present lists of credentialed scientific consultants and keep registries of scientific studies pertinent to cognitive training. Often, however, the cited research is only tangentially related to the scientific claims of the company, and to the games they sell.

Too many times have I seen apps and games that use the badge of being ‘designed by neuroscientists’ as a mark of efficacy and quality. It makes me sick. I don’t doubt the sincerity of their intentions, but they are being misleading. Just as often, I see game designers trot out a long list of papers of varying quality that are barely relevant to the actual experience being offered. This also makes me sick.

…we also need to keep in mind opportunity costs. Time spent playing the games is time not spent reading, socializing, gardening, exercising, or engaging in many other activities that may benefit cognitive and physical health of older adults. Given that the effects of playing the games tend to be task-specific, it may be advisable to train an activity that by itself comes with benefits for everyday life.

Another drawback of publicizing computer games as a fix to deteriorating cognitive performance is that it diverts attention and resources from prevention efforts. The promise of a magic bullet detracts from the message that cognitive vigor in old age, to the extent that it can be influenced by the lives we live, reflects the long-term effects of a healthy and active lifestyle.

People shouldn’t play sudoku or solve crosswords or go to the bingo in the belief that it makes them smarter. They should do them because they’re fun. If you want to improve your cognitive health, do a range of mental tasks and be physically active – there is lots of good research demonstrating that this works. Unfortunately, this is more time-consuming and tiring than sitting at home playing on a smartphone, and thus a harder sell.

Do not expect that cognitively challenging activities will work like one-shot treatments or vaccines; there is little evidence that you can do something once (or even for a concentrated period) and be inoculated against the effects of aging in an enduring way. In all likelihood, gains won’t last long after you stop the challenge.

Like I say, read the whole letter.

On a related note, another thing that makes me sick is the pseudoscience apps I regularly see in the Top Health and Fitness category these days, including “Hypnotic Gastric Band” and the endless apps that promise to reduce your stress and anxiety. In some ways, these are no worse than the self-help books that have been with us forever; but I think the veneer of science and professionalism delivered by the App Store and by the whole ‘quantified self’ industry is encouraging people to believe in effects that are not proven to exist. More on this another time.

Tip of the Tongue

A phenomenon well known to psychologists, and pretty much everyone else, is called ‘tip of the tongue’, and it’s described in this American Scientist article:

When we have something to say, we first retrieve the correct words from memory, then execute the steps for producing the word. When these cognitive processes don’t mesh smoothly, conversation stops.

Suppose you meet someone at a party. A coworker walks up, you turn to introduce your new acquaintance and suddenly you can’t remember your colleague’s name! My hunch is that almost all readers are nodding their heads, remembering a time that a similar event happened to them. These experiences are called tip-of-the-tongue (or TOT) states. A TOT state is a word-finding problem, a temporary and often frustrating inability to retrieve a known word at a given moment. TOT states are universal, occurring in many languages and at all ages.

The article goes on to explain that tip-of-the-tongue may be caused by weak connections between words and their phonology (their sound) in our brain; the weaker the connections, the more likely it is that you will know a word but be unable to recall how to say it.

There’s also a general theory of memory which holds that we retrieve memories through their connections to other memories – the stronger the connections, the easier the recall. You can imagine a cascading chain of memories of a moment years ago, set off by a particular smell or piece of music from that day; or revising for an exam for months and months, baking those connections in.

What’s interesting is that these connections are now being externalised from our brain, and supplemented by computers and the internet. Here’s what I mean: earlier today, I needed to recall the name of someone who’d won a prize. I couldn’t remember what the prize was, what it was for, or even when this happened. I did, however, know that it would be in an email, and the email would contain the word ‘Jeremy’. So I did a search in my mail for ‘Jeremy’, and a quick scan of the search results later revealed the email.

I don’t relate this to show that I am some sort of search ace; far from it. Plenty of people use searches in their mail, their RSS feeds, their computers, or even the entire web, to supplement things that they already know but just can’t retrieve. These days, the searches are fast enough, and the information kept in databases broad enough, that this practice of laying down virtual connections is accelerating.

I expect that as we store increasing amounts of important information on computers, and we continually improve the speed and accessibility of searches (through, say, silent messaging), we will find it ever more difficult to see where our memory and recall processes end, and where those of our computers begin. We’ll be able to remember far more, far faster – and if we’re ever disconnected from our computers, it’ll be even more painful.

Brain Enhancement

One of the many sad results of Perplex City being put ‘on hold’ is that I can’t explore the effect of cognitive enhancement on society. As a former neuroscientist who studied experimental psychology at university, I always enjoyed writing about my pet fictional company, Cognivia, and its range of cognitive enhancements including Ceretin (wide-spectrum enhancement), Mnemosyne (memory booster), Cardinal (maths), Synergy (creativity) and others. I still think the names are really cool as well.

As usual though, reality is catching up to fiction at a breathtaking rate; The New York Times published an article today covering the use of cognitive enhancers in universities and society in general:

In a recent commentary in the journal Nature, two Cambridge University researchers reported that about a dozen of their colleagues had admitted to regular use of prescription drugs like Adderall, a stimulant, and Provigil, which promotes wakefulness, to improve their academic performance. The former is approved to treat attention deficit disorder, the latter narcolepsy, and both are considered more effective, and more widely available, than the drugs circulating in dorms a generation ago.

… One person who posted anonymously on the Chronicle of Higher Education Web site said that a daily regimen of three 20-milligram doses of Adderall transformed his career: “I’m not talking about being able to work longer hours without sleep (although that helps),” the posting said. “I’m talking about being able to take on twice the responsibility, work twice as fast, write more effectively, manage better, be more attentive, devise better and more creative strategies.”

Would I take cognitive enhancers? I would certainly like to give Provigil a try, if only to see what it’s like. I have concerns about its long-term efficacy, and obviously there are issues of developing a dependency on it (if not physiological, psychological). There are already many people out there who regularly use caffeine and Pro-Plus to pep themselves up. You could argue that the stimulant properties of caffeine are merely a side-effect, and that the reason people drink coffee is because it tastes nice, but I find that as hard to believe as the notion that people drink alcohol only because they enjoy the taste.

The fact is, we already widely use cognitive enhancers, whether it’s caffeine or sugar. They do improve our performance. They are not natural in the slightest, unless natural somehow means ‘old’. So the question becomes, are we prepared to allow use of cognitive enhancers that are even more powerful, more reliable, and with fewer side-effects?


(Flash) – a nice little interactive quiz from the BBC that deals with psychological phenomena and illusions (and some other not quite as interesting stuff), via Bhisma.

On memory

My 4-year DPhil here at Oxford is funded by a studentship from the Wellcome Trust. This is a great thing because it means I have enough money to, for example, live, and it also means that any research group I join will not have to pay for me. It’s even better than that, though, because I just found out that my studentship comes with a research grant that goes to any group(s) I join. So in effect, any group that takes me on would be getting money!

As a result, I’ve already had group leaders sidling up to me and nonchalantly informing me of the highly interesting and vital research work that they are doing. It is pretty cool.

I’m not actually expected to join an existing research project though; I’ve been told that I can basically think of any project (within reason) that involves ion channels. This might seem a little restrictive, but you have to realise that every cell in every organism has ion channels and they’re essential for, well, every biological process. So recently I’ve been giving some thought to memory and cognition, and how it might be improved by drugs.

Recently a drug called modafinil (aka Provigil) has been making headlines for how it can drastically boost concentration and wakefulness. All of this is true. Even better, modafinil doesn’t appear to have any real side-effects at all. Unsurprisingly, legions of Americans, very few of whom actually have sleep disorders (which is what the drug is supposed to be for), have been trying to get their hands on this wonder-drug.

Would they be worried, or at least surprised, if they knew that we have no idea how modafinil works? Probably not. But I’m quite interested. I’ve started reviewing the literature on modafinil and haven’t been able to find much addressing the molecular and cellular mechanisms of the drug’s action; sure, there have been plenty of clinical studies checking to see whether it works or not, and people have been looking quite hard to find any harmful side-effects – but this doesn’t tell us how it works.

Of course, there are some groups trying to figure out how it works, but mostly the progress they’ve made is finding out how it doesn’t work (it doesn’t seem to involve the dopamine system, for example). So I think this is an interesting area for research and could shed some light on how normal memory and cognition work. My worry, however, is that since modafinil appears to have such global effects on consciousness, it might be very difficult to work out how individual systems are being affected.

Luckily, I have about a year before I have to start the serious research component of my DPhil so I have plenty of time to make my mind up on a project.

More neuroscience

The theme of today’s conference sessions was attention, about which William James famously said, “Everyone knows what attention is.” (I never want to hear that phrase again. Ever. I heard it enough today.)

I wasn’t too enamoured with the first three talks today, which were arguably given by the big-hitters of the conference. I didn’t think that any of them were particularly compelling speakers and they assumed quite a lot of knowledge on the part of the audience, which is mostly composed of graduates, many of whom didn’t even specialise in neuroscience or experimental psychology.

Kia Nobre, from Oxford, gave an interesting talk about imaging the system that controls attention in the brain, but alas my brain isn’t working properly and I can’t recall what she said. Clearly some sort of cue is in order…

John Marshall, also of Oxford, talked about spatial cognition and whether it’s a right hemisphere specialization. Well, that’s what the title of the talk was – in actual fact, he ended up talking about perceptual neglect, which is an interesting enough subject, especially when considering Bisiach’s imaginal neglect, but I don’t think he said anything particularly new or interesting. That’s not true – he did say one interesting thing, which didn’t have much to do with his talk.

He said that rather than spending our time trying to figure out the anatomical specialisation of different parts of the brain (e.g., insisting that Broca’s area of the brain is only about language), we should instead think that one region in the brain might have a number of specialised functions depending on what neural circuits are active at that time. A useful insight.

Next was Prof. Stephanie Clarke of CHUV, Switzerland. She has this interesting idea that, much like the dorsal and ventral streams of processing in the visual system, there are ‘where’ and ‘what’ streams in the auditory system. She presented some histological evidence for this, which was refreshing to see in a conference overburdened with psychophysical experiments, although of course she had her own psychophysical experiments as well to prove the functional point. These experiments basically showed a double dissociation between recognition and localisation of sounds. Very intriguing stuff.

The last talk was by Geraint Rees, of UCL. This was quite controversial. Rees is an excellent speaker, and he talked about his experiments aiming to prove that awareness (and thus consciousness) resides in the parietal cortex (or at least, in the cerebral cortex), via a clever and peculiar fMRI experiment involving subjects ‘merging’ two different images and involuntarily switching between them.

The general consensus is that while it was a very clever experiment, of the sort that Nature likes to publish, his conclusions were a bit too ambitious and he had a few too many assumptions…

At the end of the day’s sessions, there was a brief discussion that involved how genes might determine brain function. One of the speakers speculated about how we might have to consider that genes statistically alter the growth of various neuronal regions and bias them towards being able to learn and specialise in certain areas, e.g. facial recognition.

It was at this time, about 5:20pm, that I came up with an idea. I was a bit tired and thinking that I might like to leave the lecture room but couldn’t really because I was sitting in the middle of a row and someone was talking. I then thought, wouldn’t it be wonderful if I could just transfer my consciousness into multiple locations so I could keep an eye on different events – a bit like being on the Internet and participating in several IRC chats at the same time. Not a new idea, I know, but it was quite vivid at the time.

Tomorrow is the last day of the conference, but it won’t give me much respite because I have a symposium to attend on Friday. Five straight days of having to wake up at 8am… I honestly don’t know how I’ll still be alive by this weekend.


So Bhisma has requested a few long posts on the cognitive neuroscience conference I’m currently attending in Oxford (that’s my life – one long, endless round of conferences…). The conference, properly named the Autumn School in Cognitive Neuroscience, began on Monday at the Department of Experimental Psychology. Some thoughts on the sessions:

First talk was by Heidi Johansen-Berg from Oxford on ‘Plasticity of movement representations in disease’. Basically about the problems of investigating the remapping of function to different brain areas (elicited by brain trauma), monitored by fMRI. The problem is one of correlation and causation: is the remapping a direct consequence of the trauma, or merely epiphenomenal? Heidi advocates using transcranial magnetic stimulation to tease out the causation – to test the functional relevance of the areas in question in human subjects. Seemed interesting to me, but nothing world-shattering.

Oh, and she mentioned something called DTI – diffusion tensor imaging, which is a way of using fMRI to map out neurones and blood vessels in the brain. Very neat stuff. It works by tracking the self-diffusion of protons, and from that you can infer fibre direction. Check Catani et al, Neuroimage, 2002.

Next talk was by Roger Lemon of UCL. I know this guy because he’s a collaborator with a guy who worked down the corridor in Cambridge. I have serious issues with his use of multi-unit neuronal recording in brains. That’s about it – he did talk about the importance of oscillations in neurones that might serve as a ‘sensorimotor working memory’ to ensure a constant and appropriate level of force while grasping objects.

Nothing hugely interesting for the rest of the day until Daniel Wolpert’s talk (UCL) in the afternoon. Wolpert is an excellent speaker, and he talked about his theory that all human movement is governed by a requirement to reduce the uncertainty of final limb position, given that there is inevitably noise generated when moving limbs in the first place. He touched on the difficulty of tickling yourself, and on Bayesian estimates for the uncertainty of the body’s own sensors.
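The Bayesian point is easy to illustrate: if two noisy sensors – say, vision and proprioception – both estimate hand position, the statistically optimal combination is a precision-weighted average, and the combined uncertainty is smaller than either alone. A toy sketch, with made-up numbers of my own:

```python
def fuse_gaussian_estimates(mu1, var1, mu2, var2):
    """Combine two independent Gaussian estimates of the same quantity:
    precision-weighted mean, with a variance smaller than either input's."""
    precision = 1 / var1 + 1 / var2
    mu = (mu1 / var1 + mu2 / var2) / precision
    return mu, 1 / precision

# vision says the hand is at 10.0 cm (variance 1.0);
# proprioception says 12.0 cm (variance 4.0)
mu, var = fuse_gaussian_estimates(10.0, 1.0, 12.0, 4.0)
# the fused estimate (10.4 cm) sits nearer the more reliable sensor,
# and its variance (0.8) is lower than either input's
```

The nervous system behaving as if it performs this calculation is precisely the kind of claim Wolpert’s experiments test.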

Today started off with a talk by Janette Atkinson on kids suffering from Williams’ Syndrome. A good talk, except for the fact that it was identical to the one she gave at the Festival of Science last year in Leicester. Oh well.

Kate Watkins (formerly of McGill, now Oxford) talked about using a new technique of ‘Brain morphometry’ to help map out the brain and investigate differences in brain morphology between patients with brain trauma and controls. Not so bad, but I have concerns about the methodology of exactly how ‘morphometry’ works. I imagine I will have to read a paper or two on this to make an informed comment.

Kate Plaisted (Cambridge) discussed her theory of ‘reduced generalisation’ to explain both the social and non-social aspects of autism. We all know that autists are very good at distinguishing very small differences in objects or things that most normal people wouldn’t even notice – this is why autists can solve jigsaw puzzles by simply looking at the shape of the pieces, rather than the pictures on them. Kate argues that the downside of this is that they aren’t good at generalising the similarities between objects and things, which leads to lots of problems down the road – including social deficits.

An interesting theory, that again I will have to read up on. I asked a question at the end, about whether she feels her theory is in conflict with Baron-Cohen’s Theory of Mind (that she briefly mentioned). She basically thinks that it is. I am a bit worried about this because I feel that Baron-Cohen’s theory is awfully convincing as well. She also doesn’t like his idea that ‘autism is an extreme form of the male mind’ – but then, neither do most people.

The last talk today was by John Stein, on ‘The magnocellular theory of dyslexia’. This was great stuff, if very controversial. Stein basically thinks that a great deal of dyslexia, and similar cognitive deficits, can be explained by problems in the magnocellular component of the visual system (the retinal ganglion cells, remember?) – and the putative ‘magnocellular auditory system’ which most people don’t think even exists.

Apparently dyslexics generally have a much-reduced magnocellular system, which means that they aren’t good at all at stabilising their vision, resulting in blurry vision – and blurry text when reading. Why is this so? Several reasons. Dyslexics have an uncommonly high number of auto-immune problems that could explain an impaired magnocellular system (the growth of which is, appropriately enough, governed by the immune system), and they are also lacking in essential fish oils. By this, he means HUFAs – highly unsaturated fatty acids – which are an essential component of cell membranes and accelerate the action of ion channels.

Some interesting factoids from his talk: 3/4 of people in jail are illiterate. Half of those in jail are dyslexics. Dyslexia is one of the biggest causes of family strife and misery. Furthermore, the state of literacy in the western world is such that 20% of people in the UK and USA are unable to find the word ‘plumber’ in the Yellow Pages.

All in all, the conference has been interesting. There have been some boring talks, to be sure, but there have been some interesting ones. I have fallen asleep for roughly the average amount of time I normally do during lectures (maybe 10–15% of the time). I’ve met a lot of interesting people in neuroscience, and amazingly enough, despite the fact that this is the sixth conference I’ve been to, it’s the very first academic conference related to my actual line of research.