Imagine a device, similar in appearance to the iPhone, that you could point at a street sign in a foreign language, and it would display that sign on the screen – translated. I described this dream device to some friends a few weeks ago, explaining that there was nothing technically insurmountable about it – optical character recognition has been around for decades, and machine translation has been around for years. Yes, you would have to improve both to make it work in the field, and you’d have to speed it up and close the loop to make it work in real time, but there’s nothing stopping it from happening.
Maybe we’ll have it in a few years, I told them.
Yesterday, Intel demoed this exact device (scroll to 4:43), translating signs in Mandarin. A separate PC, wirelessly linked to the phone, did the heavy lifting of the OCR and translation, but the principle was there and it worked. Not only that, but the phone (with the PC) also managed near-real-time audio translation. To be sure, the device is not ready for consumer use, and Intel expects it'll be 3-5 years before the software and hardware are really ready.
Real-time translation of anything you can see will transform the world, and it's only 3-5 years away. That's it. This was my first real sensation of future shock.