Apple updates ARKit resources for developers building augmented reality apps

Apple on Tuesday refreshed its developer portal with new information about ARKit, including sample code and AR-specific entries in its Human Interface Guidelines for iOS.




Announced through Apple's developer webpage, the new assets added to the ARKit mini-site include sample code demonstrating audio and interactive content. Developers can download the code to explore ARKit's capabilities prior to its launch with iOS 11 this fall.

Apple also published an updated version of the Human Interface Guidelines, which adds a new section dedicated to augmented reality applications. In the developer document, Apple notes AR can be used to deliver "immersive, engaging experiences that seamlessly blend realistic virtual objects with the real world."

Explaining the concept, Apple says a device's camera is used to present a live view of a user's surroundings, onto which three-dimensional objects are superimposed. This melding of real-world space and computer generated objects creates a unique experience that can be applied to any number of software solutions, from games to e-commerce.
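
For a rough sense of what that looks like in code, the sketch below sets up a SceneKit-backed ARKit view and places a small virtual cube half a meter in front of the camera. The class name and object placement are illustrative assumptions, not Apple's sample code.

```swift
import UIKit
import SceneKit
import ARKit

// Minimal sketch: show the live camera feed and anchor one virtual cube in the room.
class SimpleARViewController: UIViewController {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)

        // A 10 cm cube positioned 0.5 m in front of the session's world origin.
        let cube = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1,
                                            length: 0.1, chamferRadius: 0))
        cube.position = SCNVector3(0, 0, -0.5)

        let scene = SCNScene()
        scene.rootNode.addChildNode(cube)
        sceneView.scene = scene
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking combines camera frames with motion-sensor data so the
        // cube stays fixed in real-world space as the device moves.
        sceneView.session.run(ARWorldTrackingConfiguration())
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }
}
```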

Apple offers a few key guidelines for developers to consider when building with ARKit. For example, apps should use the entire display and avoid cluttering onscreen user interfaces with unnecessary controls and other graphics that could detract from the experience. Developers should also consider physical constraints, user comfort and user safety when creating interactive AR apps.

The HIG entry goes on to detail best practices for interacting with virtual objects, as well as identifying and solving potential problems users might have with ARKit software.

Apple on Monday invited developers and media to its Cupertino, Calif., headquarters to experience a few ARKit apps made by major firms like Ikea and Giphy. The company is attempting to build hype around the platform prior to its imminent public release.

Aside from ARKit, Apple also informed developers of upcoming requirements for App Store-based in-app purchases.

When iOS 11 launches this fall, users will be able to browse through promoted in-app purchases in the App Store, and might elect to purchase an item prior to downloading an app. In current iOS 11 beta builds, promoted in-app purchases are disabled, but developers must build in compatibility once the operating system goes live.

"Once the GM version of Xcode 9 is released, simply implement the new delegate method within SKPaymentTransactionObserver, rebuild your app, and submit for review. You can also customize which promoted in-app purchases a user sees on a specific device with the SKProductStorePromotionController API," Apple says.

Apple is widely expected to release iOS 11 alongside a slate of new iPhone hardware at a special event next month. Recent reports suggest the company is planning to hold the annual press gathering on Sept. 12.

Comments

  • Reply 1 of 2
    maestro64 Posts: 5,043 member
    Is it just me, or is this first attempt at AR "melding of real-world space and computer generated objects" just doing green screen without the green screen?
  • Reply 2 of 2
    Marvin Posts: 15,322 moderator
    maestro64 said:
    Is it just me, or is this first attempt at AR "melding of real-world space and computer generated objects" just doing green screen without the green screen?
    It's a form of digital compositing. Green screen is for cropping out real objects (usually people) to put into a virtual or different environment. This is putting virtual objects into a real environment. The way it's done manually for movies is the following:

    [embedded video: manual camera tracking and compositing workflow]

    When it has to be done like that, it takes a long time because the compositor has to figure out the camera movements in 3D using only the 2D footage. Then they have to add 3D objects into that tracked 3D space. It takes a while to set that up and then render objects in place. The iOS devices are doing it all in real-time and they can do this accurately because they have motion sensors.
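
    To make that concrete: with ARKit, the solved camera pose is available on every frame, so there's no separate tracking pass. A small sketch of reading it (the class name and logging are just illustrative):

    ```swift
    import ARKit

    // Sketch: read the camera pose ARKit computes in real time from camera frames
    // plus the device's motion sensors. Assign an instance as the ARSession's
    // delegate, e.g. sceneView.session.delegate = poseLogger.
    class PoseLogger: NSObject, ARSessionDelegate {
        func session(_ session: ARSession, didUpdate frame: ARFrame) {
            // A 4x4 matrix giving the camera's position and orientation in world space.
            let transform = frame.camera.transform
            let position = transform.columns.3
            print("Camera at x: \(position.x) y: \(position.y) z: \(position.z)")
        }
    }
    ```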

    That 3D motion data is very useful to have; I would have thought more standard cameras would integrate motion sensors and store the motion data to make it easier on compositors. Filmmakers could always strap an iPhone to the side of the camera - this could be a feature of the RED phone. The camera motion data keeps real and virtual objects moving together. Other data that's useful to have is depth data; otherwise virtual objects can't move behind real objects, they can only be composited on top:

    [embedded image/video: virtual objects composited on top of real footage]

    This is where the depth sensors (time of flight, PrimeSense, Kinect) come in. They can assess how far from the camera the recorded pixels are, and they can do real-time masking to allow virtual objects to integrate better with the environment. They can also do facial tracking for advanced facial animation of virtual characters, which is really hard to do otherwise; this has applications in video games and in accessibility for lip-reading, including more accurate Siri understanding.
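
    The masking idea itself is simple to sketch (purely conceptual; this isn't an ARKit API): for each pixel, draw the virtual object only where it is closer to the camera than the real scene.

    ```swift
    // Conceptual sketch of depth-based masking, not an ARKit or SceneKit API.
    // Depths are distances from the camera in meters; smaller means closer.
    // Pixels with no virtual content can carry a depth of .infinity.
    func composite(realColor: [UInt32], realDepth: [Float],
                   virtualColor: [UInt32], virtualDepth: [Float]) -> [UInt32] {
        var result = realColor
        for i in 0..<result.count where virtualDepth[i] < realDepth[i] {
            // The virtual object is nearer here, so it covers the real pixel;
            // everywhere else the real scene occludes it.
            result[i] = virtualColor[i]
        }
        return result
    }
    ```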

    The usefulness of these tools is much the same as that of film itself. Think back a few years: filmmaking wasn't a commoditized skill. Even the consumer camcorder market didn't commoditize it to the extent that modern smart devices have, partly because of the expense but also because camcorders are single-purpose devices:

    https://www.statista.com/statistics/485970/digital-camcorder-shipments-worldwide/

    The smartphone market is about two orders of magnitude larger than the camcorder market, and it's simplified to the point that you just open the app, tap record, crop it on the device and publish online.

    This additional data takes it beyond recording footage, so people can be more creative. This applies to even basic videos. Sometimes AppleInsider's product videos use motion-tracked labels:

    [embedded video: AppleInsider product video with motion-tracked labels]

    When the camera pans across the laptops, the labels stick in place. They have to do that the manual way shown in the above video by tracking the footage to figure out how the camera is moving and then apply the reverse movement to the text so that they stick together. The more accessible that advanced tools are, the more creative people tend to be. As unskilled service jobs continue to dry up, it's good for creative skills to be accessible in order for future generations to have meaningful work.



    The examples people are seeing now have been made before everyone has access to it; it's not out yet. Once everyone has it in a couple of weeks or so, there are going to be a lot more experiments, and I think at least some of them are going to be pretty cool and useful. The technology is really better suited to smart glasses, as you always have to hold a device up, but it lets people see the possibilities.
    edited August 2017