In one of my previous posts, “In Which I Bemoan the Tech Level in the Navy”, I discussed the possibility of layering radar and targeting data as a heads-up display (HUD) over a ship’s Bridge windows – not necessarily to speed up reaction time as a fighter pilot requires, but just to remove the layer of separation between data and reality.

Whilst this level of Augmented Reality may not catch on in the Navy any time soon, it's starting to become popular in the civilian world (at least, the part of it that owns a smartphone). That's great, but the problem is: where do we go from here?

Imagine that, walking along a city street, you see some building or monument that you don’t recognise, but that interests you. What can you do, and what could you have done at various points in the past?

Once upon a time, we had maps. On paper. Unbelievable, I know! So you see the building, and you go off to find a map. Not only is the data (the map) conceptually separated from reality (you have to refer to it on its own terms, by going out and buying one), it also doesn't look much like reality (just a 2D, symbolic view), and it's subject to what the map-makers thought was important. So if your building was still being built when the map was drawn up, or the cartographers just didn't find it that interesting, it might not be marked.

Online mapping such as Google Maps improves things a little, particularly if your building has been snapped by El Goog's all-seeing Street View cameras – you can get out your phone, virtually navigate to the building, and with luck it will tell you what it is. Google Earth goes a step further in allowing locations to be freely tagged by users – rather than relying on one map-making company, you can now hope that one out of a thousand or a million users has found the building interesting enough to tag, which is far more likely. They may even have made a 3D model of it. But there's still a separation between data and reality: you have to look at the real world, then at your phone, then back again.
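To make that concrete, the lookup behind "has anyone tagged the thing near me?" is conceptually simple: measure the distance from your position to each tag and keep the close ones. Here's a minimal sketch in Python. The haversine distance formula is standard, but the tag data, coordinates and function names are purely illustrative – this is not Google Earth's actual API.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    r = 6371000  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical community tags: (name, latitude, longitude)
tags = [
    ("Big Ben", 51.5007, -0.1246),
    ("London Eye", 51.5033, -0.1196),
    ("Westminster Abbey", 51.4994, -0.1273),
]

def tags_near(lat, lon, radius_m=250):
    """Return (distance, name) for tags within radius_m, nearest first."""
    hits = [(haversine_m(lat, lon, t_lat, t_lon), name)
            for name, t_lat, t_lon in tags]
    return sorted((d, n) for d, n in hits if d <= radius_m)

# Standing on Westminster Bridge: what have other users tagged nearby?
print(tags_near(51.5009, -0.1220))  # -> roughly [(182.0, 'Big Ben')]
```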

The next step on the path to merging data into reality is AR – Augmented Reality. Layar, Wikitude and their kin do pretty much the same job: point your phone at a scene, and interesting things are overlaid as points on it. Whether it's one point per Wikipedia entry, geo-tagged tweet, Flickr photo or hotel review, the data is laid on top of reality, so long as you view reality through your smartphone's screen. Look at the building you're interested in through Layar's interface, and it'll have a dot in front of it that lets you pull up information about it.
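Under the hood, an AR browser essentially has to answer one question for each point: given where I'm standing and where the camera is pointing, where on the screen does this geo-tagged point belong? Here's a minimal sketch of that calculation in Python. It assumes a simple linear mapping from horizontal angle to pixel column, and ignores tilt, sensor noise and proper camera projection, all of which a real app like Layar has to handle; the function names and coordinates are my own illustrations, not anyone's actual API.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def poi_screen_x(user_lat, user_lon, heading, poi_lat, poi_lon,
                 fov=60.0, screen_width=480):
    """Map a point of interest to a horizontal pixel column, or return
    None if it lies outside the camera's field of view."""
    # Angle of the POI relative to where the camera points,
    # normalised to the range -180..+180 degrees.
    rel = (bearing_deg(user_lat, user_lon, poi_lat, poi_lon)
           - heading + 180) % 360 - 180
    if abs(rel) > fov / 2:
        return None  # POI is off-screen
    # Linear angle-to-pixel mapping: screen centre = compass heading.
    return int((rel / fov + 0.5) * screen_width)

# Example: standing on Westminster Bridge with the phone pointed roughly
# west, where does Big Ben land on a 480px-wide camera preview?
x = poi_screen_x(51.5009, -0.1220, heading=280.0,
                 poi_lat=51.5007, poi_lon=-0.1246)
print(x)  # -> 160, i.e. left of centre
```

A real implementation would fuse GPS, compass and accelerometer readings and re-run this projection every frame as you pan the phone, but the geometry is no more exotic than this.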

That, of course, is fantastic. But where do we go from here? To make the data more ambient, more blended with reality, we're eventually going to have to remove the smartphone from the equation. Even if seeing the world through a 3-inch screen becomes a normal way to behave in public, the things run out of battery, don't have great GPS chips, and besides, you might be busy using them for something else.

My proto-novel Forgotten Children features brain-implanted microchips that bind onto neuronal pathways to gain read/write access to the host's visual cortex and short-term memory. That may be the end goal of this line of thought, but it's entirely possible it will prove technologically impossible, or socially unacceptable, even thousands of years in the future.

But that's a crazy science fiction writer spaffing about the year eleventy-billion. At the current rate of technological progress, we should be looking forward to the next transition to more ambient data within ten years. I wonder what form it will take – AR inserts in glasses? In contact lenses? Holographic projectors in wrist-computers? And of course, I wonder whether I will have some small part in creating it.