I saw this last night and was surprised not to see any mention of it here yet.
I think we all knew this was being worked on ever since Kinect's capabilities were demonstrated, but even to someone with a little background in this field, the implementation already seems fantastic.
This is clearly Google's plan for indoor navigation. It may not compete with iBeacons immediately, but by letting your phone determine its real 3D position from the geometry of its surroundings, it makes several powerful mapping applications possible.
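To make the idea concrete, here is a toy sketch (entirely hypothetical, not Tango's actual API) of how a device could recover its own position by correlating geometry it observes against a shared 3D map. The landmark names and coordinates are invented, and a real system would also solve for rotation (e.g. with the Kabsch algorithm inside an ICP loop), not just translation:

```python
# Toy illustration: a device that recognizes known landmarks in a
# pre-built 3D map can recover its own position from the offset
# between observed and mapped coordinates.

# Landmark positions in the shared world map (x, y, z), in metres.
world_map = {
    "doorway":   (2.0, 0.0, 1.0),
    "pillar":    (5.0, 3.0, 1.0),
    "staircase": (8.0, 1.0, 0.0),
}

# The same landmarks as measured by the device's depth sensor,
# expressed relative to the device itself.
observed = {
    "doorway":   (1.0, -2.0, 0.5),
    "pillar":    (4.0, 1.0, 0.5),
    "staircase": (7.0, -1.0, -0.5),
}

def estimate_position(world, local):
    """Average the per-landmark offsets to estimate the device's
    world-frame position (translation-only for simplicity)."""
    n = len(world)
    offsets = [
        tuple(w - l for w, l in zip(world[name], local[name]))
        for name in world
    ]
    return tuple(sum(axis) / n for axis in zip(*offsets))

print(estimate_position(world_map, observed))  # device sits at (1.0, 2.0, 0.5)
```

Averaging over many landmarks is what makes this robust to sensor noise; the more geometry the phone can correlate, the better the fix.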
For me the most exciting part is crowdsourcing 3D maps of the entire world and then superimposing them on a HUD. For example, imagine glasses that alert you to hazards on the highway by augmenting your vision with dangers crowdsourced from the cars ahead of you.
I am a freeride mountain biker, so being able to see in 3D where my colleagues have ridden, and where they have fallen, would be incredibly useful both to me and to the inevitable emergency services visit. Imagine if firefighters could see in the dark because 3D models of building interiors already existed.
The possibilities are endless, and I've seen mention that Apple is working on the same technology, so I thought I'd post this thread to encourage discussion.