Invoked computing is top banana
You can download emulators for 1980s games machines that will run happily on your PC.
Soon, with the new visors appearing at the end of the year, the screen could be an augmented reality display. And since the computing is being done by your PC, the ‘computer’ you interact with could visually be anything at all. A banana, for example.
The University of Tokyo has done just this. They call their version Invoked Computing, though the idea has been around many years in various forms. BT and MIT had another variant where a camera would recognise an object and call up any associated multimedia.
Other companies have apps now that invoke multimedia from clothes or packaging. Like much of what we hear about augmented reality, these are old ideas whose time has finally come.
But that doesn’t make them any less exciting or important. In fact, since we are pretty much ready for them, the various applications will explode in number and market penetration.
Ishikawa-Oku Lab proved that silly things can be most entertaining. Using a pizza box as a laptop or a banana as a telephone is perfectly OK; the actual computer can easily map the visuals onto its functions. Any object that triggers a memory or concept in the user’s head can work fine.
So that’s nice but tomorrow, there is room for progress. Once you understand that any object or image or action or gesture can be mapped onto any function or media, then you see that it is no longer about science and technology, but is about art and marketing and play and socialising and just generally making life fun.
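At its core, that mapping is just a lookup: any recognised object or gesture becomes a key that invokes an arbitrary function. Here is a minimal, purely hypothetical sketch of the idea (the object labels and handlers are made up for illustration, not taken from any real invoked-computing system):

```python
# Toy sketch: a recognised object is just a key bound to a function.
# All labels and handlers here are hypothetical illustrations.

def start_phone_call():
    return "phone call started"

def open_laptop_session():
    return "laptop session opened"

# Any object can be mapped onto any function, as the article suggests.
invocations = {
    "banana": start_phone_call,        # banana held to the ear -> telephone
    "pizza_box": open_laptop_session,  # open pizza box -> laptop
}

def invoke(detected_object):
    """Run whatever function the detected object has been mapped to."""
    handler = invocations.get(detected_object)
    return handler() if handler else None

print(invoke("banana"))  # -> phone call started
```

The point of the sketch is that nothing in the mapping is dictated by technology: the choice of which object invokes which function is entirely a matter of design, marketing and play.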
And as they said in their press release, sometimes it is easier to do new things by recreating old mechanisms, like using a virtual typewriter in place of an iPhone screen.
One problem with IT generally is that people are not very good at creating genuinely new ideas. When we started getting more than a few files on computers, we quickly invented filing cabinets, folders and so on – computer equivalents of real life ideas.
We have lots of online shops now that bear little resemblance to real life shopping, but many of the first ones were in virtual malls. I rather suspect that when augmented reality comes online properly, virtual malls will come back big time.
It is an easy and friendly way to shop compared to trying to figure out menus. In fact, when you look at someone passing by, and your PC recognises their clothes by their appearance or by embedded devices, you could buy the same item for yourself just by making a gesture there and then.
Companies are working fast to develop the gestures that will work best. There will be stiff competition to control the future psycho-ergonomic space, because companies realise that the virtual interface is yet another major layer – another major market – with more revenue associated with it, and more scope to control the value chain and syphon off a drop or two.
Gestures and visuals need to be easy to use, easy to remember and safe in a crowded area. But because we need to understand them, they have to build on what we are already familiar with in the real world or the web.
I have seen very many exciting interfaces that have never made it to market because they were too far ahead of their time. Users are best assumed dumb.
Adding small increments on existing creative content is by far the easiest and usually the best way to proceed. If you dumped someone from the early 1980s into today’s world, they wouldn’t know how to do many things. The same would apply if you threw someone from today straight into a 2025 interface.
We need to walk before we can run. Simple as that. And invoked computing is an excellent large step towards making such a small step.