ARKit 3 features restricted to latest iPhones and iPads
Apple's upcoming ARKit 3 feature set, which includes People Occlusion and motion capture, will only be available to users of current-generation iPhones and iPads, the company revealed this week.
According to fine print buried at the bottom of an Apple Developer webpage detailing the company's augmented reality technology, advanced features in ARKit 3 will be limited to devices with A12 Bionic chip variants or better.
Currently, the A12 Bionic powers Apple's iPhone XR, iPhone XS and iPhone XS Max, while the more robust A12X Bionic is found in the 11-inch iPad Pro and third-generation 12.9-inch iPad Pro.
"People Occlusion and the use of motion capture, simultaneous front and back camera, and multiple face tracking are supported on devices with A12/A12X Bionic chips, ANE, and TrueDepth Camera," Apple says.
Shown off onstage at Monday's Worldwide Developers Conference keynote, ARKit 3 boasts a number of enhancements designed to streamline AR compositing and effects.
New for 2019 are capabilities like the aforementioned People Occlusion, which automatically detects humans, allowing AR objects to pass both behind and in front of them. Motion capture is another new feature that determines a person's body position and translates that information into user input.
Both People Occlusion and motion capture are apparently extremely processor intensive, prompting Apple to impose the A12 Bionic or better chip requirement.
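For developers, these features are opt-in at the configuration level. The sketch below is not from the article; it is a minimal Swift illustration of how an app might enable People Occlusion and motion capture with ARKit 3's configuration classes, where the runtime capability checks are what surface the A12 requirement in code (the helper and delegate names are hypothetical).

```swift
import ARKit

// Hypothetical helpers for illustration only.

// People Occlusion: ask ARKit to segment people (with depth) so virtual
// content can pass behind them. The support check fails on pre-A12 devices.
func runWorldTrackingWithPeopleOcclusion(on session: ARSession) {
    let config = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        config.frameSemantics.insert(.personSegmentationWithDepth)
    }
    session.run(config)
}

// Motion capture: body poses arrive as ARBodyAnchor updates once the session
// runs a body-tracking configuration.
func runMotionCapture(on session: ARSession) {
    guard ARBodyTrackingConfiguration.isSupported else { return }
    session.run(ARBodyTrackingConfiguration())
}

final class BodyTrackingDelegate: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let bodyAnchor as ARBodyAnchor in anchors {
            // Joint transforms can drive a 3D rig or be treated as user input.
            _ = bodyAnchor.skeleton.modelTransform(for: .head)
        }
    }
}
```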
Additionally, ARKit 3 supports simultaneous front and back camera input and multiple face tracking, the latter of which can track up to three faces and is compatible with the TrueDepth camera systems found in the iPhone X, XR, XS and XS Max, as well as the latest iPad Pro models.
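Again as a rough sketch rather than anything from the article, multiple face tracking and the simultaneous front-and-back camera option are exposed through ARKit 3's face-tracking configuration along these lines (the helper name is hypothetical):

```swift
import ARKit

// Hypothetical helper for illustration only.
func runMultiFaceTracking(on session: ARSession) {
    guard ARFaceTrackingConfiguration.isSupported else { return }

    let config = ARFaceTrackingConfiguration()
    // Track as many faces as the device allows (up to three on supported hardware).
    config.maximumNumberOfTrackedFaces = ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces

    // Simultaneous front and back cameras: run world tracking alongside
    // face tracking on devices that support it.
    if ARFaceTrackingConfiguration.supportsWorldTracking {
        config.isWorldTrackingEnabled = true
    }

    session.run(config)
}
```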
ARKit 3 is part of iOS 13 and will debut this fall.
Comments
Also, you didn't mention the new iPod touch. It surely has the A12.
When the cool stuff arrives (2-3 years?) you’ll probably have a new phone.
I second the desire for AR glasses (which someone else mentioned), but unfortunately it's proving more complicated than developers thought. You really need the ability to adjust the distance between the eyes & screen (for each person), or it causes headaches and eye strain.
Still, so far iPhones and iPads don’t seem like the proper devices. I also wonder if there will really be much developer support for AR glasses. It at least seems possible that in such a product the first-party apps and solutions may be the primary ones, much like on Apple Watch.
Anyone who has followed this more closely please jump in and correct me if I have this wrong.
Two years on, nothing much has really come of it.
I think this is a prime case where, if you really want to be on the AR train, you should hold off until some compelling use for it appears before jumping in on a phone. More so if it will be your phone for three or four years.
The OP's point is basically that. A brand new, generational phone (the iPhone X) that's less than two years old is already not in the running for ARKit 3.
Another thing to keep in mind... when ARKit 3 actually gets released this fall, there will be two generations of hardware that support People Occlusion and Motion Capture: last year's phone/pad and this year's. By the time apps with those features are widespread, it will likely be three generations.
The more annoying angle is for developers: simulators won’t cut it, so we absolutely have to have the most recent models to even evaluate this tech.
I saw the demos during the keynote, so yeah, it's definitely moving forward, hence the ARKit 3 moniker.
Remember back about a week ago, when you were arguing about benchmarks that were posted comparing Kirin unfavorably to the A series, and you stated that SoC performance wasn't important?
Guess what: performance is still relevant. Every generation of the A series brings new features that are unavailable to iPhones of an earlier generation, whether those be major or minor.
As for AR apps on iOS devices, it's all just a run up to some wearable that Apple will deliver when the technology is ready. In the meantime, someone that owns a two year old iPhone X will have the option of trading in for an upgrade.
Me, I'll be buying the next-generation iPhone with a triple lens, assuming the rumors are correct, and I'll be able to enjoy the benefits of ARKit 3 as part of that. Next year, Apple will create a more powerful A-series SoC, and then I'll likely miss out on something. That's how the market works.
He reminds me of the iKnockoff users who claim, "I don't need Apple's super fast processors as long as I can open apps quickly."
The problem is these people see the present while Apple sees the future. Right now there are features that will only be supported on the next iPhone and Apple is preparing for this NOW.
I'm not saying it won't happen, because I really want an SE 2 even if it doesn't do AR. It just seems unlikely to me that Apple would make it AR capable. I don't feel Pokemon counts.
There were demos at the original launch too. What we need is a compelling reason for users to use the framework.
Being an Honor 10 user is irrelevant.
No idea why you are mentioning benchmarks here.
Performance is relative to what you can do with it. The whole point of saying 200 million iPhone users could benefit from ARKit out of the box was precisely to claim that they had the performance (relative to the year) to jump on the AR train. The problem was the train didn't really leave the station.
Everything I stated remains as is. If AR is important to you because you believe something compelling is in the pipe, you will be better off waiting until the product actually arrives and then deciding which phone you need to get the most out of it.