The Next Step for Ambient Data

This is a pretty old post from my blog, which has been preserved in case its content is of any interest. You might want to go back to the homepage to see some more recent stuff.

In one of my previous posts, “In Which I Bemoan the Tech Level in the Navy”, I discussed the possibility of layering radar and targeting data as a heads-up display (HUD) over a ship’s bridge windows – not necessarily to speed up reaction time as a fighter pilot requires, but simply to remove the layer of separation between data and reality.

Whilst this level of Augmented Reality might not be catching on any time soon in the Navy, it’s starting to become popular in the civilian world (at least, the part of it that owns a smartphone). That’s great, but the problem is, where do we go from here?

Imagine that, walking along a city street, you see some building or monument that you don’t recognise, but that interests you. What can you do, and what could you have done at various points in the past?

Once upon a time, we had maps. On paper. Unbelievable, I know! So you see the building, and you go off to find a map. Not only is the data (the map) separated from reality conceptually (you have to refer to it on its own terms, by going and buying one), it also doesn’t look much like reality (it’s just a 2D, symbolic view), and it’s subject to what the map-makers thought was important. So if your building was still being built when the map was drawn up, or the cartographers just didn’t find it that interesting, it might not be marked.

Online mapping such as Google Maps improves things a little, particularly if your building has been snapped by El Goog’s all-seeing Street View cameras – you can get out your phone, virtually navigate to the building, and hope it tells you what it is. Google Earth goes a step further in allowing locations to be freely tagged by users – rather than relying on one map-making company, you can now hope that one out of a thousand or a million users has found the building interesting enough to tag – much more likely. They may even have made a 3D model of it. But there’s still a separation between data and reality: you have to look at the real world, then your phone, then back again.

The next step on the path to merging data into reality is AR – Augmented Reality. Layar, Wikitude and their kin do pretty much the same job: point your phone at a scene, and interesting things will be overlaid as points on it. Whether it’s one point per Wikipedia entry, geo-tagged tweet, Flickr photo or hotel review, the data is inserted on top of reality so long as you view reality through your smartphone screen. Look at the building you’re interested in through Layar’s interface, and it’ll have a dot in front of it that will let you pull up information about it.
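For the curious, here’s a minimal sketch of the geometry an app of this kind has to solve to pin a dot over a building. It’s purely illustrative – I’m assuming the phone reports a GPS fix and a compass heading, ignoring altitude and sensor noise, and the function names and coordinates are my own inventions, not any real app’s API:

    import math

    def bearing_to(lat1, lon1, lat2, lon2):
        # Initial great-circle bearing from the phone (point 1) to the
        # point of interest (point 2), in degrees clockwise from north.
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        y = math.sin(dlon) * math.cos(phi2)
        x = (math.cos(phi1) * math.sin(phi2)
             - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
        return math.degrees(math.atan2(y, x)) % 360

    def screen_x(poi_bearing, heading, fov, width):
        # Map the angle between where the camera points and where the
        # POI lies onto a horizontal pixel position; None if off-screen.
        offset = (poi_bearing - heading + 180) % 360 - 180
        if abs(offset) > fov / 2:
            return None
        return int((offset / fov + 0.5) * width)

    # Hypothetical numbers: made-up London coordinates, phone facing
    # due east, 60-degree field of view, 1080 pixels across.
    poi = bearing_to(51.5010, -0.1246, 51.5014, -0.1190)
    print(screen_x(poi, heading=90.0, fov=60.0, width=1080))

That’s really all a dot-overlay is: a bit of trigonometry plus a great deal of trust in a consumer-grade compass – done in two axes, dozens of times a second.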

That, of course, is fantastic. But where do we go from here? In order to make the data more ambient, more blended with reality, we’re eventually going to have to remove the smartphone from the equation. Even if seeing the world through a 3-inch screen becomes a normal way to behave in public, the things run out of batteries, don’t have great GPS chips, and besides, you might be busy using yours for something else.

My proto-novel Forgotten Children features brain-implanted microchips that bind onto neuronal pathways so as to obtain read/write access to the host’s visual cortex and short-term memory. That may be the end goal of this line of thought, but it’s entirely possible that it proves technologically impossible or socially unacceptable even thousands of years in the future.

But that’s a crazy science fiction writer spaffing about the year eleventy-billion. With the rate of technological progress, we should be looking forward to the next transition to more ambient data within ten years. I wonder what form it will take – VR inserts in glasses? In contact lenses? Holographic projectors in wrist-computers? And of course, I wonder if I will have some small part in creating it.

Comments

While I love the idea of being able to call up this information on HUD-style displays, I think there is the possibility of losing sight of what on a chart or diagram is actually important and necessary information.

As an example – which I know isn't an overlay on the world in front of us, but it serves to illustrate my point – marine charts are increasingly coming with the option to show satellite imagery over all the land segments. Now, I can see this can have its uses: if you are on a coastline, there may be times when seeing the house on the point is of benefit. Most of the time, however, it serves as a distraction, complicating the image unnecessarily and drawing attention away from the important data within the water. The traditional yellow colour for land makes it a lot easier to concentrate on the separation between areas of relevance.

Don't get me wrong, I love the idea of being able to call up AIS data on ships as they come over the horizon, with little blocks of data that can be called up around any sighted vessel. I just don't think the screen technology is available yet to do this in a satisfactory fashion (especially in a system that may be relied upon for safety). I also don't think you can – or should – completely replace the diagrammatic representation of the data, simply because a diagram represents a simplified view of the world, focusing on the relevant pieces of information.

My apologies, I think that strayed a little from the point you were going for, and to be fair you did discuss the limitations inherent in current display technology. I just see, on a regular basis, people being far too obsessed with photo-style data (aerial or satellite especially) when in fact a basic map would serve the purpose of displaying the relevant information far better. A lot of reality-style overlays look to be dangerously headed in the same direction at present.

I agree completely on maps with satellite imagery -- it's kind of cool for a couple of minutes, but then it just gets in the way.

Particularly in the case of ships' tactical displays, there will probably always be a need for a 2D symbolic interface so that crew can easily understand the situation. I just feel that the current response to "What's that ship over there?" of "Let's go and look at this screen and see if we can figure it out" is pretty inefficient!
