One thing that might work is a board game with augmented features. Imagine Risk where, as you moved your armies, they started to fight. That, however, still has the chicken-and-egg problem: it will only work well with glasses and universal adoption.
As others have suggested, ARKit isn't the end game -- it's just one piece of the puzzle, released early to get it into developers' hands.
As mentioned earlier, the tech Apple acquired from PrimeSense in 2013 provides the TrueDepth 3D camera, which can depth-map rooms and buildings.
Apple acquired several other companies in 2013 -- most notably Passif Semiconductor (low-power SoC tech) and WiFiSLAM (indoor mapping tech -- Simultaneous Localization And Mapping).
Now, the WiFiSLAM tech is interesting in that it uses WiFi and magnetic noise, measured at multiple locations within a building -- to quickly and accurately create a map of the building... just by (one or more persons*) walking around...
* one or more persons -- kinda' crowd-sourced indoor mapping...
The sensors within the phone (accelerometer, gyroscope, magnetometer, etc.) provide additional raw data -- but are not as accurate as WiFiSLAM, as described here:
https://thenextweb.com/apple/2013/03/26/what-exactly-wifislam-is-and-why-apple-acquired-it/
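WiFiSLAM's actual algorithm was never published, but the crowd-sourced fingerprinting idea can be sketched roughly like this -- all access-point names, RSSI values, and survey locations below are made up for illustration:

```python
# Hypothetical sketch of WiFi-fingerprint indoor positioning -- the general
# idea behind systems like WiFiSLAM, NOT its actual (unpublished) algorithm.
# Each survey point pairs a known (x, y) location with the signal strengths
# (RSSI, in dBm) observed from nearby access points as someone walks around.

import math

# Crowd-sourced survey data: location -> {access point: RSSI}
FINGERPRINTS = {
    (0.0, 0.0):  {"ap1": -40, "ap2": -70, "ap3": -80},
    (10.0, 0.0): {"ap1": -70, "ap2": -45, "ap3": -75},
    (5.0, 8.0):  {"ap1": -75, "ap2": -72, "ap3": -42},
}

def distance(obs, ref, missing=-100):
    """Euclidean distance between two RSSI fingerprints.

    Access points unheard at one location are treated as very weak
    (the `missing` floor) so the vectors stay comparable."""
    aps = set(obs) | set(ref)
    return math.sqrt(sum((obs.get(a, missing) - ref.get(a, missing)) ** 2
                         for a in aps))

def locate(observation, k=1):
    """Estimate position as the average of the k nearest survey points."""
    ranked = sorted(FINGERPRINTS,
                    key=lambda loc: distance(observation, FINGERPRINTS[loc]))
    nearest = ranked[:k]
    x = sum(p[0] for p in nearest) / k
    y = sum(p[1] for p in nearest) / k
    return (x, y)

# A reading taken near the first survey point snaps to it:
print(locate({"ap1": -42, "ap2": -68, "ap3": -79}))  # → (0.0, 0.0)
```

The more people walk around contributing fingerprints, the denser the survey grid gets and the better the position estimate -- which is exactly the crowd-sourcing angle above.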
OK, let's circle back: how might this PrimeSense, WiFiSLAM, and ARKit tech provide unique solutions for Apple developers?
So, we could have several people with iPhones walking around, creating renderings of 3D walls, rooms, floors, and buildings.
But they're just 3D drawings -- they don't actually show what the space looks like...
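The missing step is texturing those bare 3D drawings with camera imagery. A minimal pinhole-camera sketch shows how each 3D surface point maps into the video frame so its pixel color can be painted onto the model -- the focal lengths and principal point here are assumed example values, not anything from ARKit's API:

```python
# Toy pinhole-camera projection: map a 3D point (in camera coordinates,
# meters, z pointing forward) to a pixel (u, v) in the video frame.
# Once you know which pixel a surface point lands on, you can sample
# that pixel's color and texture the reconstructed wall with it.

FX, FY = 800.0, 800.0   # assumed focal lengths, in pixels
CX, CY = 320.0, 240.0   # assumed principal point (center of a 640x480 frame)

def project(x, y, z):
    """Project a 3D point in front of the camera (z > 0) to pixel coords."""
    return (FX * x / z + CX, FY * y / z + CY)

# A wall point 2 m ahead of the camera and 0.5 m to the right:
print(project(0.5, 0.0, 2.0))  # → (520.0, 240.0)
```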
From the above link:
They (WiFiSLAM) were also looking into other areas that would work well with camera-centric devices like Google Glass. These computer vision arenas are still in the nascent stages, but the potential is crazy. Using a video feed as you walk through a building, the device tracks the pixels on the screen to map turns one way or another and forward progress. There’s also visual matching of textures and architectural details across a city in order to identify it, which is really cool.
Others here have said that people won't want to walk around holding their iPhones out in front of them -- that Apple needs to provide a headset like Google Glass...
...But do they? What if Apple were to include the tech in an AppleWatch or other wearable to gather the camera data -- and do the heavy lifting on the connected iPhone?
You wouldn't have the distraction of the real-time mapping -- but could pull out your iPhone to see the maps when desired.
Best of all, you wouldn't be considered a GlassHole!
As with all tech, nobody needs it until everybody needs it... Internet adoption was also slow pre-1994. That's the thing with this kind of tech: you never know what will click, you only want to be there when it does. That's what Apple has done and will keep doing.
Agreed, we should give it at least a year, maybe two, before writing it off as a gimmick or not-so-groundbreaking. But one thing I would like to highlight: as soon as Apple released ARKit, many people remarked that Apple had delivered AR to 500+ million iPhones capable of running AR apps, leaving Android phones far behind -- and that Android was at least two years behind, with very few phones capable of running AR apps. In hindsight, that doesn't seem accurate. It will take at least two years for many useful AR apps to show up, by which time MANY Android phones will have the capability to run them.
Who the hell mentioned Android?
McDave in post 10 seems to have started down that path
His comment wasn't aimed specifically at you.
To answer your question, though: A whole lot of posters on AI. It was the throwaway phrase post WWDC. Apple had an instant AR/VR platform with 500+ million users ready to go.
Now, it seems many of those users will have retired many of those phones before AR/VR takes off (although some might just opt for a battery replacement and see if their phones can run iOS 12, because it might be around then that people find a widespread or compelling use for it).
It's possible that the apps that really provide something compelling could require iOS 12 -- but I wonder whether the iPhone 6 will be able to run it.
Actually Avon was correct. My comment was NOT specifically addressed to any one particular person, but the below AI articles.
http://appleinsider.com/articles/17/10/13/how-apples-iphone-x-truedepth-ar-waltzed-ahead-of-googles-tango
https://forums.appleinsider.com/discussion/201494/google-tries-to-fight-wide-arkit-compatibility-with-its-own-augmented-reality-initiative-a/p1
http://www.patentlyapple.com/patently-apple/2018/01/amazon-wins-a-patent-covering-future-retail-store-smart-mirrors-for-virtual-reality-fashion-shopping.html
It's an Amazon patent where you can virtually model fashion garments displayed on your moving body, something like this:
- capturing a model of your body in motion in a video through a camera embedded in a mirror
- selecting garments to model
- matching the garments to your body image
- creating virtual video segments showing your body in motion wearing different garments
- displaying the combined blended-reality video showing you wearing the garments on a video display -- in this case, the mirror
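The compositing step in that list can be sketched as a simple alpha blend -- a toy illustration, not Amazon's actual method, and it assumes the body tracking and garment alignment have already happened:

```python
# Toy sketch of the "matching garments to your body image" composite:
# alpha-blend a garment layer onto a video frame using a coverage mask.
import numpy as np

def blend_garment(frame, garment, alpha):
    """Alpha-blend a garment layer onto a frame.

    frame, garment: HxWx3 float arrays (RGB, values in 0..1)
    alpha: HxW float array, 1.0 where the garment covers the body,
           0.0 where the original frame should show through.
    """
    a = alpha[..., np.newaxis]              # broadcast mask over RGB channels
    return garment * a + frame * (1.0 - a)

# Toy 2x2 example: blend a pure-red "garment" onto a gray frame,
# covering only the left column of pixels.
frame = np.full((2, 2, 3), 0.5)
garment = np.zeros((2, 2, 3))
garment[..., 0] = 1.0                       # red layer
alpha = np.array([[1.0, 0.0],
                  [1.0, 0.0]])
out = blend_garment(frame, garment, alpha)
print(out[0, 0], out[0, 1])                 # [1. 0. 0.] [0.5 0.5 0.5]
```

Run that per video frame with a different garment layer and you get the multiple try-on videos the patent describes.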
You can display multiple images in motion showing different colors, different garments, etc. I suspect the Amazon implementation will be something like this:
...Hey significant other, friend, etc. -- How do you think this looks on me -- I like the red...