Latest demos with iOS 11 ARKit show plated food, 3D sculpting with Apple Pencil
Two new developer demonstrations show how quickly ARKit's possibilities are advancing: one renders food on a plate before ordering, and the other illustrates sculpting and painting on the iPad for use in other AR applications.
The demonstration of a new sculpting app shows a face being crafted in the forthcoming Maker Studio, complete with color and texture.
Potentially more impressive is a proof of concept by Kabaq. The demonstration depicted a series of near-photographic-quality foods, including two desserts and a hamburger on a plate.
Properly executed, the app can give the user a good feel not only for the food's appearance, but for portion size as well. Beyond menus, the company notes that the app could be used for cookbooks and wider marketing initiatives.
These demos, and many more, can be found on the MadewithARKit Twitter account. The feed curates the best of what it sees, and appears to be the most frequently updated stream of ARKit demos at this time.
Apple Vice President Craig Federighi announced ARKit during the 2017 WWDC keynote. The project is a developer toolset that, according to the company, will nearly instantaneously make the iPhone and iPad the largest AR platform in the world.
In a demonstration of software built with the new ARKit, the app identified a table surface and placed a virtual coffee cup properly scaled to it. After a lamp was added to the scene, the lighting model adjusted dynamically as Federighi moved the lamp around the cup.
A statement in Apple's developer information limits compatibility to the A9 processor and newer, with little other amplifying information. Apple notes that the "breakthrough performance" of the newer processors allows for "fast scene understanding," without breaking down why the A8 can't technically accomplish the same feat.
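The plane-detection flow shown on stage maps onto ARKit's public API. A minimal sketch of the idea follows; the view-controller wiring is simplified, and the cylinder standing in for the coffee cup is a hypothetical placeholder for a real 3D asset:

```swift
import ARKit
import SceneKit
import UIKit

class ARTableViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)

        // Track the device's position and look for horizontal surfaces like tabletops.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }

    // Called when ARKit adds a new anchor -- e.g. a detected table surface.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor else { return }

        // Place a real-world-scale object on the detected plane.
        // Dimensions are in meters; a shipping app would load a modeled asset instead.
        let cup = SCNNode(geometry: SCNCylinder(radius: 0.04, height: 0.1))
        cup.position = SCNVector3(0, 0.05, 0)
        node.addChildNode(cup)
    }
}
```

Because ARKit anchors carry real-world scale, the cup stays proportioned to the table as the camera moves, which is what made the keynote demo convincing.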
Comments
This will of course help everyone involved in the field, even MS and Google.
I wonder if the [yet-to-be-announced] New AppleTV will have the requisite A9 chip and directly support ARKit (instead of via AirPlay from an iDevice)...
As a practical matter, it makes no sense to waste time and money developing virtual foods when a simple video clip would suffice. There's no advantage for the customer and it would slow the ordering process.
It's a solution looking for a problem, even though it's eye-popping as a demo.
But ... can it do that?
I didn't realize it, but with iOS 11 Apple kinda' included AR in the Apple Maps app.
What you need to do is:
You are in a 3D virtual tour with AR Heads-up highlights. You can navigate by walking around... But even better, you can pan and zoom with your fingers from the comfort of your chair.
It's a little clunky -- but it works!
It would be great if you could use Siri to refine Flyover, say, define a custom tour -- tell Siri where you want to go, the route you wish to take, how fast you want to travel, etc.
Here's a video that illustrates what I mean -- a tour of Sydney Harbor featuring the Opera House. (This video was made over 4 years ago, and Flyover has improved quite a bit since.)
How does one customize what is viewed in a video? By recording *every* possible combination a customer might choose? Not only would that be expensive time-wise, but what if you added *one* new ingredient option? Then you'd need to record several more videos including that ingredient. It would be exponentially insane!
AR solves that.
Profitability is the difference between a restaurant that succeeds and one that fails.
Or for a different use: a 3D weather map showing the possible effects of a category 4 hurricane hitting the Texas coast at various points; stalling with torrential rainfall; and potential evacuation route options for 3 million-plus people.
You make a good point... Just tell the waiter: "I'll have the same thing I had last time!"
I can see some value in bringing an Open Kitchen to the customers' table to show how the different dishes are prepared -- especially at fine restaurants.
A variant of this could even work for fast food -- with lots of menu items. There could be kiosks (maybe an iPad at each table) where the customers could visualize the menu items, choose their purchase, place the order, pay for it -- and get called when it is ready. Less stress for the customers -- no waiting in line, etc. More productive for the restaurant -- fewer cashiers, etc. Not much need for AR in this example, tho.
red pill anyone? Or the blue pill?
http://www.kabaq.io
They also mention ingredients and cookbooks so you could have a recipe in a cookbook and get a preview of the dish on the table and compare how bad your attempt was. Supermarkets and online shopping can have this too because sometimes when you order by weight or get ingredient packs, it's not obvious what that looks like. What does 50g of yogurt look like vs 250g of cheese?
The visuals are a bit more realistic in the setting you're in than video, because they have real-time lighting and you can move around the models. Making the models is more time-consuming than video, but if upcoming iOS devices have depth sensors, they could do 3D model acquisition almost as quickly as video. Picking shaders would be hard, but some sensors could build the shader from the way light bounces off the object, and pre-made shaders could cover the rest.
It's hard enough to get pictures of food from restaurants, so I doubt this more advanced presentation will be widely used. But one of the hardest problems for restaurants is getting people in the door, and if some tech can help with that, all the better. There will be a lot of AR experiments to find out what its practical uses are.