Apple's iPhone-based augmented reality navigation concept has 'X-ray vision' features

Posted in General Discussion, edited September 2014
According to a pair of patent applications published on Thursday, Apple is investigating augmented reality systems for iOS capable of providing users with enhanced virtual overlays of their surroundings, including an "X-ray vision" mode that peels away walls.




Apple filed two applications with the U.S. Patent and Trademark Office, titled "Federated mobile device positioning" and "Registration between actual mobile device position and environmental model," both describing an advanced augmented reality solution that harnesses an iPhone's camera, onboard sensors and communications suite to offer a real-time world view overlaid with rich location data.

The system first uses GPS, Wi-Fi signal strength, sensor data or other information to determine a user's location. From there, the app downloads a three-dimensional model of the surrounding area, complete with wireframes and image data for nearby buildings and points of interest. Aligning that digital representation with the real world, however, is difficult with sensors alone.
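The filings describe this positioning step only in functional terms. As a minimal sketch, the coarse fix could come from Core Location (which already fuses GPS, Wi-Fi and cell data) before the app pulls the surrounding model from a server; the `fetchEnvironmentModel` endpoint, URL and parameters below are assumptions for illustration, not details from the applications:

```swift
import CoreLocation
import Foundation

// Coarse positioning: Core Location fuses GPS, Wi-Fi and cell data,
// one plausible reading of the "federated" inputs in the filing.
final class CoarsePositioner: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private var onFix: ((CLLocation) -> Void)?

    func requestFix(_ handler: @escaping (CLLocation) -> Void) {
        onFix = handler
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyHundredMeters
        manager.requestWhenInUseAuthorization()
        manager.requestLocation() // one-shot fix
    }

    func locationManager(_ manager: CLLocationManager,
                         didUpdateLocations locations: [CLLocation]) {
        if let fix = locations.last { onFix?(fix) }
    }

    func locationManager(_ manager: CLLocationManager,
                         didFailWithError error: Error) {
        print("Positioning failed: \(error)")
    }
}

// Hypothetical model server: given a coarse fix, download the 3D
// wireframe and imagery for the surrounding area.
func fetchEnvironmentModel(around fix: CLLocation,
                           completion: @escaping (Data?) -> Void) {
    var components = URLComponents(string: "https://example.com/model")!
    components.queryItems = [
        URLQueryItem(name: "lat", value: "\(fix.coordinate.latitude)"),
        URLQueryItem(name: "lon", value: "\(fix.coordinate.longitude)"),
        URLQueryItem(name: "radius", value: "500") // metres, assumed
    ]
    URLSession.shared.dataTask(with: components.url!) { data, _, _ in
        completion(data)
    }.resume()
}
```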

To accurately place the model, Apple proposes the virtual frame be overlaid atop live video fed by an iPhone's camera. Users can align the 3D asset with the live feed by manipulating it onscreen through pinch-to-zoom, tap-and-drag and other gestures, providing a level of accuracy not possible through machine reckoning alone.
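As a rough sketch of that manual step, standard gesture recognizers could nudge a wireframe overlay's transform until it registers with the camera preview behind it; the `wireframeLayer` stand-in and the way corrections are applied are assumptions, not the patents' method:

```swift
import UIKit

// Sketch: the user scales and drags a wireframe overlay until it
// lines up with the live camera preview rendered beneath it.
final class AlignmentViewController: UIViewController {
    let wireframeLayer = CALayer() // stand-in for the rendered 3D model

    override func viewDidLoad() {
        super.viewDidLoad()
        view.layer.addSublayer(wireframeLayer)
        view.addGestureRecognizer(
            UIPinchGestureRecognizer(target: self, action: #selector(pinch(_:))))
        view.addGestureRecognizer(
            UIPanGestureRecognizer(target: self, action: #selector(pan(_:))))
    }

    @objc private func pinch(_ g: UIPinchGestureRecognizer) {
        // Pinch-to-zoom: scale the overlay, leaving the feed untouched.
        wireframeLayer.setAffineTransform(
            wireframeLayer.affineTransform().scaledBy(x: g.scale, y: g.scale))
        g.scale = 1 // consume the incremental scale
    }

    @objc private func pan(_ g: UIPanGestureRecognizer) {
        // Tap-and-drag: translate the overlay by the pan delta.
        let t = g.translation(in: view)
        wireframeLayer.setAffineTransform(
            wireframeLayer.affineTransform().translatedBy(x: t.x, y: t.y))
        g.setTranslation(.zero, in: view)
    }
}
```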

Alternatively, users can issue audible commands like "move left" and "move right" to match up the images. The wireframe can be "locked in" once one or more points are correctly aligned, calibrating the augmented view.

In yet another embodiment, the user can interact directly with the wire model by placing their hand into the live view area and "grabbing" parts of the virtual image, repositioning them with a special set of gestures. This third method requires object recognition technology to determine when and how a user's hand is interacting with the environment directly in front of the camera.


Current iOS 7 Maps Flyover of New York City.


In addition to user input, the device can compensate for pitch, yaw and roll, as well as other movements, to estimate positions in space as associated with the live world view. Once a lock is made and the images are calibrated, the augmented reality program can stream useful location data to the user via onscreen overlays.
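On the motion side, a sketch using Core Motion's fused attitude to feed pitch, roll and yaw to whatever renders the overlay; how the filings actually fold sensor data into the model transform is not described, so the wiring here is an assumption:

```swift
import CoreMotion

// Sketch: stream the device's fused attitude so a renderer can apply
// the inverse rotation and keep the wireframe pinned to the world.
final class AttitudeTracker {
    private let motion = CMMotionManager()

    func start(onUpdate: @escaping (_ pitch: Double,
                                    _ roll: Double,
                                    _ yaw: Double) -> Void) {
        guard motion.isDeviceMotionAvailable else { return }
        motion.deviceMotionUpdateInterval = 1.0 / 60.0 // 60 Hz
        motion.startDeviceMotionUpdates(to: .main) { data, _ in
            guard let attitude = data?.attitude else { return }
            onUpdate(attitude.pitch, attitude.roll, attitude.yaw) // radians
        }
    }

    func stop() { motion.stopDeviceMotionUpdates() }
}
```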

For example, an iPhone owner traveling to a new locale may be unfamiliar with their surroundings. Pulling up the enhanced reality app, users would be able to read names of nearby buildings and roads, obtain business information and, in some cases, "peel back" walls to view the interiors of select structures.

The "X-ray vision" feature was not fully detailed in either document aside from a note saying imaging assets for building interiors are to be stored on offsite servers. It can be assumed, however, that a vast database of rich data would be needed to properly coordinate the photos with their real life counterparts.

Finally, Apple makes note of crowd-sourcing calibration data for automated wireframe positioning, as well as a marker-based method of alignment that uses unique features or patterns in real world objects to line up the virtual frame.
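The applications do not spell out how crowd-sourced calibration would be aggregated; one naive reading is a weighted average of the manual corrections users submit for a given viewpoint. A toy sketch with invented types:

```swift
// Toy sketch: combine user-submitted alignment corrections for one
// viewpoint so later visitors start from a better registration.
struct Correction {
    let dx: Double      // horizontal offset the user applied
    let dy: Double      // vertical offset the user applied
    let weight: Double  // e.g. confidence or recency
}

func aggregatedCorrection(_ samples: [Correction]) -> (dx: Double, dy: Double)? {
    let total = samples.reduce(0) { $0 + $1.weight }
    guard total > 0 else { return nil }
    let dx = samples.reduce(0) { $0 + $1.dx * $1.weight } / total
    let dy = samples.reduce(0) { $0 + $1.dy * $1.weight } / total
    return (dx, dy)
}
```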


Source: USPTO


There is no evidence that Apple is planning to incorporate its augmented reality technology into the upcoming iOS 8 operating system expected to debut later this month. The company does hold a number of virtual reality inventions aimed at mapping, however, possibly suggesting a form of the tech could make its way to a consumer device in the near future.

Apple's augmented reality applications were first filed in March 2013 and credit Christopher G. Nicholas, Lukas M. Marti, Rudolph van der Merwe and John Kassebaum as inventors.

Comments

  • Reply 1 of 13
    eat@me Posts: 321
    Ah, Apple did not invent this, just added a slight twist - how does the patent office issue patents when there is prior work from other companies? See this - http://en.wikipedia.org/wiki/Nokia_City_Lens
  • Reply 2 of 13
    In other news, Google releases Google Maps with augmented reality navigation with 'X-ray vision' features.
  • Reply 3 of 13
    Quote:

    Originally Posted by eat@me View Post



    Ah, Apple did not invent this, just added a slight twist - how does the patent office issue patents when there is prior work from other companies? See this - http://en.wikipedia.org/wiki/Nokia_City_Lens

    Because it is not the idea that gets the patent; it is the mechanism by which the idea is realised. I'm no patent expert, and I'm sure someone here can explain it better, but that is the gist of it.

  • Reply 4 of 13
    I think there is value in peeling away structures as well as peeling away walls. For example, the Picasso sculpture in downtown Chicago is buried in an urban canyon of tall buildings. It would be useful to peel away the buildings so that you could get a low level 360° view of the sculpture. The same could be done with St. Patrick's Cathedral in New York City or the Transamerica building in San Francisco.

    Additionally, you could peel away natural features such as trees, or water in riverbeds, oceans, etc.

    I think this is possible because of the way Apple's 3D mapping works. If you watch a map being rendered over a slow connection, you can see that a contoured mesh of the underlying terrain is generated first -- on top of that, buildings and other structures are drawn back to front.

    You could select, say, St. Patrick's Cathedral and have the map display only that structure. Then, as you moved around, any structures that blocked the view of St. Patrick's would not be displayed.
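    A minimal Swift sketch of the line-of-sight filter this describes, assuming invented footprint-plus-height building records rather than Apple's actual map data:

    ```swift
    import CoreGraphics

    // Hypothetical building record: a 2D footprint centre plus height.
    struct Building {
        let name: String
        let position: CGPoint // map-plane coordinates
        let height: CGFloat
    }

    // Keep only buildings that do NOT sit between the camera and the
    // selected landmark, giving the landmark an unobstructed view.
    func visibleBuildings(all: [Building],
                          landmark: Building,
                          camera: CGPoint,
                          corridorWidth: CGFloat = 30) -> [Building] {
        let dx = landmark.position.x - camera.x
        let dy = landmark.position.y - camera.y
        let length = (dx * dx + dy * dy).squareRoot()
        guard length > 0 else { return all }

        return all.filter { building in
            if building.name == landmark.name { return true }
            // Project the building onto the camera-to-landmark line.
            let bx = building.position.x - camera.x
            let by = building.position.y - camera.y
            let along = (bx * dx + by * dy) / length
            guard along > 0 && along < length else { return true } // not between
            // Perpendicular distance from the sight line.
            let across = abs(bx * dy - by * dx) / length
            return across > corridorWidth // peel away anything in the corridor
        }
    }
    ```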
  • Reply 5 of 13
    MacPro Posts: 19,718
    Quote:

    I think there is value in peeling away structures as well as peeling away walls. [...]

    It sounds so logical as to seem obvious, doesn't it? In fact, I could imagine dragging out a circle to select a radius around any point and peeling away the buildings inside it. By asking the system to leave certain selections untouched, you would get an unobstructed view. Mapping should even be pretty fast, as there would be less to draw. It would be the ultimate case of being able to see the woods for the trees.
  • Reply 6 of 13
    Quote:

    Originally Posted by MacPro View Post

    It sounds so logical as to seem obvious, doesn't it? [...]

    Exactly!

    I like your implementation!

    I think it might take some additional 3D data capture around structures of interest ... I don't know the cost of the capture camera/devices -- but drones are less than $1,500. Might be a good crowd-sourcing candidate.

    If tech can put a spinning circle under a quarterback's feet, then draw a wire that shows the arc of his pass to a receiver in near real time ... why not remove the forest?

    It may take a combination of science and good ol' video editing (Coremelt SliceX/TrackX) to generate the views with removed items ... but hey, that's what post is all about.


    Anyway, when I first noticed the way Maps was displaying layers, I submitted a feature request to Apple to allow developers to manipulate Apple's 3D data ... no response! Apple doesn't allow developers to do anything with its 3D data -- but you can fake it out.
  • Reply 7 of 13
    Ha!

    Here's a drone image of Flint Center and the mystery building:

    Image: http://forums.appleinsider.com/content/type/61/id/48053/width/500/height/1000

    http://************/2014/09/04/drones-eye-view-of-flint-center-highlights-the-mysterious-structure-apple-is-building-next-to-theater-venue/#more-339098

    AI is still screwing with nineto5mac.com URLs -- just substitute nineto5mac.com (with digits replacing nine and 5) for the asterisks ...


    Maybe for something like this:

    (embedded video)


    Benjamin Twine

    Let me tell you about my best friend
    Well he's got hair down to his knees
    He gets along fine doing just how he please

    And I believe that we first met
    Around three years ago
    Well it feels like a lifetime funny how fast it goes

    And I remember how this one time
    Well I stole his sister's car
    And he didn't even mind no he jumped in for the ride

    We'd got miles away from anywhere
    When the panic soon kicked in
    We'd got it back safe and sound no one asked us where we'd been

    Ohh Oh Oh
    Ohh Oh Oh

    Let me tell you why his sister
    Why she didn't second guess
    She was going out of line, putting on her favorite dress

    And oh by God she knows she's worth it
    Any boy would tell you so
    Well a girl like that is worth her own weight in gold

    Ohh Oh Oh
    Ohh Oh Oh

    Now I made a promise to myself once
    That I would wake up by her side
    My best friend said that that would be the day I died

    Ohh Oh Oh
    Ohh Oh Oh
    Ohh Oh Oh
    Ohh Oh Oh
    Ohh Oh Oh
    Ohh Oh Oh
    Ohh Oh Oh
    Ohh Oh Oh
  • Reply 8 of 13
    so they finally copied Yelp's Monocle feature from 4 years ago
  • Reply 9 of 13
    mike1 Posts: 3,275
    Quote:

    Originally Posted by eat@me View Post



    Ah, Apple did not invent this, just added a slight twist - how does the patent office issue patents when there is prior work from other companies? See this - http://en.wikipedia.org/wiki/Nokia_City_Lens



    First, it clearly says patent application. No patent has been granted yet. If it is that close to what Nokia did, then it probably won't be granted unless there is something unique. Second, the Wikipedia page you referenced makes no mention of the ability to peel away walls or other buildings to give a clearer view of something else.

  • Reply 10 of 13

    so they finally copied Yelp's Monocle feature... cool

  • Reply 11 of 13
    The Apple approach is a lot less efficient than what Google is working on. Apple takes the image from the camera and attempts to identify 2D image markers -- my guess is things like rectangular signs. iOS 8 already has the ability to recognize rectangles in images (if you know their proportions). The problem is that in many places there won't be any good markers to search the database for.

    Google instead uses 3D scanning: it generates a cloud of 3D points around the device, then pattern-matches that cloud against a database of previously captured point clouds to determine the user's exact location and orientation. It works much like song-matching technology -- with enough sample points, you can rapidly and accurately search a vast database even if some of the points differ (something in the environment moved, or a car got in the way).
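    For reference, the rectangle detection mentioned above shipped in Core Image alongside iOS 8; a minimal sketch (the 1.5 aspect ratio is just an example value):

    ```swift
    import CoreImage
    import UIKit

    // Sketch: find rectangular regions (e.g. signage) in a camera frame
    // using the CIDetector API introduced alongside iOS 8.
    func detectRectangles(in image: UIImage) -> [CIRectangleFeature] {
        guard let ciImage = CIImage(image: image),
              let detector = CIDetector(
                  ofType: CIDetectorTypeRectangle,
                  context: nil,
                  options: [CIDetectorAccuracy: CIDetectorAccuracyHigh,
                            // "If you know their proportions": bias the
                            // search toward a known aspect ratio.
                            CIDetectorAspectRatio: 1.5])
        else { return [] }
        return detector.features(in: ciImage).compactMap { $0 as? CIRectangleFeature }
    }

    // Each feature exposes the four detected corners:
    // feature.topLeft, .topRight, .bottomLeft, .bottomRight
    ```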
  • Reply 12 of 13
    how does apple get their hands on each building's structure and floor plans?
  • Reply 13 of 13
    dasanman69 Posts: 13,002
    truffol wrote: »
    how does apple get their hands on each building's structure and floor plans?

    They're public record.