Initial developer tests of ARKit 1.5 show off impressive vertical surface, image detection...

Developers have been testing out Apple's ARKit 1.5 since the refreshed augmented reality platform arrived with the iOS 11.3 beta last week, and a few early projects offer a glimpse into the technology's ability to detect vertical surfaces, images and more.




Apple announced ARKit 1.5 in a preview of iOS 11.3 last week, touting refinements to the AR technology that will allow apps to recognize vertical and irregularly shaped surfaces. With the new capabilities, developers can place virtual objects on walls, doors, oddly shaped tables and more, making the platform more robust and immersive.
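
For developers, opting in to the new behavior is largely a one-line configuration change. The sketch below is a minimal illustration rather than code from any of the demos shown here (the class name and in-code view setup are placeholders); it shows how an ARKit 1.5 session can be asked to look for vertical as well as horizontal planes.

```swift
import ARKit
import SceneKit
import UIKit

final class WallDemoViewController: UIViewController {
    // An ARSCNView created in code so the example is self-contained.
    private let sceneView = ARSCNView(frame: .zero)

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        let configuration = ARWorldTrackingConfiguration()
        // .vertical is the option added in ARKit 1.5 (iOS 11.3); earlier
        // releases could only detect horizontal planes such as floors and tabletops.
        configuration.planeDetection = [.horizontal, .vertical]
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }
}
```

Detected walls and tabletops are then delivered to the app as ARPlaneAnchor objects that virtual content can be pinned to.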

Over the following days, a number of developers have taken a crack at leveraging ARKit 1.5's new features, with varying degrees of success.

For example, a brief presentation created by London-based mobile app development studio Ubicolor shows ARKit's ability to detect vertical surfaces, in this case a brick wall. The short clip shows a virtual object, a disc designed to roughly mimic the brick surface, overlaid on the wall, then twisted and pulled out to reveal a hollow shaft.

So I've been playing with the new ARKit wall detection today...what do you think?#arkit #madewitharkit #augmentedreality #iOS113 #arkit15 #ar #AugmentedReality pic.twitter.com/J9tj6sJf9S

-- Ubicolor (@ubicolorapps)


Another attempt by iOS developer Mohammad Azam demonstrates ARKit's image recognition capabilities. Using ARKit 1.5, Azam was able to create a program that detects a movie poster, pulls up a clip or trailer of that film and presents the video in an overlaid window.

Similar AR posters are already in circulation, but often require an embedded QR code or specialized imagery to work. Apple's solution would allow production houses to benefit from augmented promotional material without altering original graphics.

The Future of Movie Posters. ARKit 1.5 image detection and reference images. #ARKit #AugmentedReality pic.twitter.com/ruVxMCQtD3

-- azamsharp (@azamsharp)


Azam also applied the image recognition feature to book covers, which, when properly configured and detected, can act as triggers for displaying online e-commerce links.
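
Neither Azam's source code nor Apple's sample projects are reproduced here, but the rough sketch below shows the documented ARKit 1.5 path for this kind of demo: reference images are loaded from an asset catalog group (the "MoviePosters" name is a placeholder), handed to the session configuration, and each recognized image arrives as an ARImageAnchor that content can be attached to.

```swift
import ARKit
import SceneKit
import UIKit

final class PosterDemoViewController: UIViewController, ARSCNViewDelegate {
    private let sceneView = ARSCNView(frame: .zero)

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        let configuration = ARWorldTrackingConfiguration()
        // "MoviePosters" is a hypothetical AR Resources group in the asset
        // catalog; each image in it needs its real-world size set in Xcode.
        if let posters = ARReferenceImage.referenceImages(inGroupNamed: "MoviePosters",
                                                          bundle: nil) {
            configuration.detectionImages = posters
        }
        sceneView.session.run(configuration)
    }

    // ARKit adds an ARImageAnchor when one of the reference images is recognized.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }

        // Cover the detected poster with a translucent plane sized to match it.
        let size = imageAnchor.referenceImage.physicalSize
        let plane = SCNPlane(width: size.width, height: size.height)
        plane.firstMaterial?.diffuse.contents = UIColor.white.withAlphaComponent(0.5)

        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles.x = -.pi / 2   // lay the plane flat over the image
        node.addChildNode(planeNode)
    }
}
```

An app like Azam's would presumably swap the translucent plane's material for video, for instance a SpriteKit scene containing an SKVideoNode, to float a trailer over the recognized poster.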

Developer Tim Field created a demo that shows iOS and ARKit detecting horizontal, vertical and irregularly shaped surfaces in real time.

New in iOS 11.3 - vertical & irregularly shaped surfaces with #ARKit pic.twitter.com/z54OuiIxuv

-- Tim Field (@nobbis)
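
Field's clip highlights the other half of the 1.5 update: plane anchors now carry an estimated boundary mesh rather than just a rectangle. The sketch below is a minimal illustration rather than Field's actual code (the PlaneVisualizer name is invented); it uses the ARPlaneAnchor.geometry and ARSCNPlaneGeometry APIs added in iOS 11.3 to draw and continually refresh that irregular boundary.

```swift
import ARKit
import Metal
import SceneKit
import UIKit

/// Draws ARKit's current estimate of each detected plane, including the
/// irregular boundary mesh that ARPlaneAnchor gained in iOS 11.3.
/// Assign an instance as the delegate of an ARSCNView whose session is
/// configured for plane detection, as in the earlier sketch.
final class PlaneVisualizer: NSObject, ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor,
              let device = MTLCreateSystemDefaultDevice(),
              let planeGeometry = ARSCNPlaneGeometry(device: device) else { return }

        // Build a translucent mesh matching the plane's current boundary.
        planeGeometry.update(from: planeAnchor.geometry)
        planeGeometry.firstMaterial?.diffuse.contents = UIColor.cyan.withAlphaComponent(0.3)
        node.addChildNode(SCNNode(geometry: planeGeometry))
    }

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor,
              let planeGeometry = node.childNodes.first?.geometry as? ARSCNPlaneGeometry else { return }

        // Refresh the mesh as ARKit refines its estimate in real time.
        planeGeometry.update(from: planeAnchor.geometry)
    }
}
```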


Apple debuted ARKit in iOS 11 as a basis for its efforts in the augmented reality space. CEO Tim Cook has in past interviews extolled the virtues of AR, going so far as to say the technology has the potential to be as paradigm-shifting as the App Store.

After an initial influx of ARKit apps, ranging from furniture-placement tools to games, the platform has hit a bit of a slow patch as developers look for new and novel ways to integrate the AR tech.

Comments

  • Reply 1 of 10
    mattinoz Posts: 2,316 member
    Yes Please.
  • Reply 2 of 10

    The poster scanning thing is like the Star Wars app, wherein I scan some specific poster from another screen and it puts Storm Troopers in the room as AR.

    This tech is supposedly different though.

  • Reply 3 of 10
    Rayz2016 Posts: 6,957 member
    The ad industry will go mental for this … unfortunately. 
  • Reply 4 of 10
    Rayz2016 said:
    The ad industry will go mental for this … unfortunately.
    Nah, this has been possible for years with QR codes, Shazam and similar apps. I have seen maybe 10-20 of these things and that was years ago. This reduces friction by just using your camera app, but then still, look how crap it is to view a video like that...Maybe if it allows the video to open in a normal player..
  • Reply 5 of 10
    michelb76 said:
    Nah, this has been possible for years with QR codes, Shazam and similar apps. I have seen maybe 10-20 of these things and that was years ago. This reduces friction by just using your camera app, but then still, look how crap it is to view a video like that...Maybe if it allows the video to open in a normal player..
    Rayz2016 said:
    The ad industry will go mental for this … unfortunately. 
    The movie poster thing is a very rough demo. The idea is you could integrate video or 3D animations or whatever you wanted coming out of a poster/display, or around it, etc. I just went to an art opening that used an AR app to recognize painted parts of sculptures to "launch" 3D plants out of them. The painted parts were like rings of a tree, not like a QR code. Image recognition is far more complicated than a black and white grid of blocks, not to mention planes/orientation/etc. in 3D space in real time. Use your imagination.
  • Reply 6 of 10
    And I'm pretty sure Apple is just getting developers into the platform and getting the tech just right for when AR glasses become available. 
    When the hardware catches up they will be ready.
  • Reply 7 of 10
    flaneur Posts: 4,526 member
    michelb76 said:
    Nah, this has been possible for years with QR codes, Shazam and similar apps. I have seen maybe 10-20 of these things and that was years ago. This reduces friction by just using your camera app, but then still, look how crap it is to view a video like that...Maybe if it allows the video to open in a normal player..
    Rayz2016 said:
    The ad industry will go mental for this … unfortunately. 
    You're missing one very important point here.

    Using the cameras to detect objects, shapes and patterns for the AR software — that turns the whole real world into one big QR code.
  • Reply 8 of 10
    Without onboard LIDAR, echolocation or some other means of pinging the surrounding environment or determining the scale of nearly featureless surroundings, A/R will continue to have difficulty determining z-scale information to place the viewer inside a properly scaled environment.
  • Reply 9 of 10
    Hypereality Posts: 58 unconfirmed, member
    Without onboard LIDAR, echolocation or some other means of pinging the surrounding environment or determining the scale of nearly featureless surroundings, A/R will continue to have difficulty determining z-scale information to place the viewer inside a properly scaled environment.
    Google's Tango program used structured light, like the front-facing camera on the iPhone X.

    However, ARKit works better than Tango with the rear camera despite not having structured light, showing just how far you can go in deriving depth (z-axis) information simply from VIO (visual-inertial odometry).

    Yes, it's true that with a completely featureless surface VIO will not work, but in reality such surfaces are not so common as to make AR useless as you suggest.

    The power requirements and bulk of structured light, LIDAR and the like mean that for the foreseeable future these will remain specialist tools (e.g. for face detection, as in the iPhone X) rather than general AR camera tools.

    In the meantime VIO will just keep getting better, and once people have the glasses, some very useful general applications will become possible.
  • Reply 10 of 10
    I’ve no doubt the sensors used to interact with one’s surroundings will continue to improve on both price and capability.