Invite-only meeting details state of AI, machine learning research inside Apple's halls
In an invitation-only meeting at Tuesday's Neural Information Processing Systems conference, Apple's head of machine learning broke down the company's specific goals for its artificial intelligence work and detailed where it feels it has an advantage over its competitors, giving a rare look at Apple's research process.

According to presentation slides supplied to Quartz, Apple's overarching theme for artificial intelligence research is roughly the same as everybody else's: image identification, voice recognition for use in digital assistants, and user behavior prediction. Apple uses all of these techniques with Siri on macOS and iOS now.

However, without specifically mentioning that Apple is working on a vehicle or on systems associated with self-driving cars, the presentation by machine learning head Russ Salakhutdinov also discussed "volumetric detection of LiDAR," or measuring and identifying objects at range with lasers.
LiDAR is a surveying method that measures distance to a target by illuminating it with laser light and timing the reflection, and it is commonly used in automotive guidance systems. The core technology also sees frequent use in map-making across a wide array of sciences.
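As a rough illustration of the time-of-flight principle behind LiDAR ranging (a simplified sketch; real sensors also deal with beam steering, noise, and multiple returns per pulse):

# Minimal time-of-flight ranging sketch: a LiDAR unit times a laser
# pulse's round trip and converts that time to a distance.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_round_trip(round_trip_seconds: float) -> float:
    """The pulse travels out and back, so halve the round-trip distance."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after ~667 nanoseconds puts the target roughly 100 m away.
print(f"{range_from_round_trip(667e-9):.1f} m")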
Apple claimed in the meeting that its GPU-based implementation of image recognition can process twice as many photos per second as Google's system, cranking out 3,000 images per second against a claimed 1,500. Apple's system is more efficient as well, with Salakhutdinov noting that its array used one third as many GPUs as Google's, with both tapping Amazon's cloud computing services.
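Taking the claimed figures at face value, a quick back-of-the-envelope check (not from the presentation itself) shows why the efficiency claim is striking: twice the throughput on one third the hardware works out to roughly six times the per-GPU throughput.

# Back-of-the-envelope comparison using the figures claimed in the talk.
apple_imgs_per_sec, google_imgs_per_sec = 3000, 1500
gpu_count_ratio = 1 / 3  # Apple reportedly used one third as many GPUs

throughput_ratio = apple_imgs_per_sec / google_imgs_per_sec  # 2.0x overall
per_gpu_ratio = throughput_ratio / gpu_count_ratio           # ~6.0x per GPU
print(f"{throughput_ratio:.0f}x overall, ~{per_gpu_ratio:.0f}x per GPU")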

Delving deeper into what Apple is working on, one item is "parent-teacher" neural networks, better known in the field as teacher-student training: a large neural network trained on an equally large amount of data transfers what it has learned to a network small enough to run on a device like an iPhone, with no compromise in decision-making quality. Implementing this would allow a subset of a vast artificially intelligent system to make decisions directly on a less powerful mobile device, instead of forcing the device to send the relevant data to a larger processing network and wait on a response to the query.
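A minimal sketch of the teacher-student idea in plain NumPy (the function names and temperature value are illustrative assumptions, not Apple's implementation): the large "teacher" network's softened output probabilities become training targets for a much smaller "student" network, and this is the loss a distillation training loop would minimize.

import numpy as np

def softmax(logits: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """Softmax with a temperature; higher temperatures soften the distribution."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """Cross-entropy of the student against the teacher's softened targets."""
    teacher_probs = softmax(teacher_logits, temperature)
    student_log_probs = np.log(softmax(student_logits, temperature) + 1e-12)
    return -(teacher_probs * student_log_probs).sum(axis=-1).mean()

# Toy example: a batch of 2 inputs over 3 classes.
teacher = np.array([[4.0, 1.0, 0.5], [0.2, 3.5, 0.1]])  # big network's logits
student = np.array([[2.0, 0.8, 0.3], [0.1, 2.0, 0.4]])  # small network's logits
print(f"distillation loss: {distillation_loss(student, teacher):.4f}")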
Before the meeting about Apple's artificial intelligence focus on Tuesday, Apple publicly announced that its scientists would be allowed to publish and collaborate with the larger research community for the first time.
Apple CEO Tim Cook has also recently revealed that Apple's Yokohama, Japan research facility will boast "deep engineering" for machine learning, far different from the work behind Apple's Siri voice assistant.
Apple is also rumored to be utilizing the technology for self-driving car systems, as an offshoot of the "Project Titan" whole-car program.

Comments
Not only is that consistent with Apple's privacy policies, it's also consistent with Apple's comparative advantages in SoC design.
I like the idea of Apple having a large ecosystem of smart devices that talk to each other rather than dumb devices that all act as the tentacles of a Cloud Monster.
It bums me out, though, when Apple seems unable to keep all of those smart devices up to date!
/troll
"It bums me out, though, when Apple seems unable to keep all of those smart devices up to date! "
What do you mean by that?
I think (could be wrong) that a vast majority of shareholders (I know how badly you've voiced the idea of them taking the company private, IIRC) and customers couldn't care less about this stuff.
Those of us who enjoy reading about this stuff, and who even marginally (or more) understand it, are in the minority. Till that changes, I think it would be a bad idea for Cook to put this at or near the forefront; it, IMO, would make Apple complex, not the "simple" company which appeals to the masses. IMO, it would make Apple more Google/MS-like (and before people jump at me, I happen to like certain Google/MS products/aspects as well). It's the job of Apple's head of machine learning to talk about this, and maybe for the iOS/macOS heads to touch on it a bit, no?
It's the middle of the holiday shopping season. These products have not been updated in a year or more:
Mac Pro -- 1084 days
iMac -- 421 days
Mac Mini -- 783 days
12.9" iPad Pro -- 392 days (ignoring price drop, which isn't really an update)
iPod Touch -- 511 days
Apple TV -- 408 days
I guess I'm assuming they are replacing that with the 9.7" pro, but who knows.
I think it would be very fair to include the iPad Mini, since its only update was a storage bump. But I was feeling generous.
The only product on the list that one could argue is truly hampered by Intel is the iMac. But even then -- how about a faster GPU? That's nontrivial.
There's a significantly more valuable, and monetizable, need, which is best done by just one vendor. And that's fleet management, scheduling, dispatch, and route optimization, all combined into one system that any autonomous car, and even human-driven cars, could tap into. This is something that is less efficient if there are multiple fleets each coordinating on their own.
Only when all available vehicles are pooled together can a system choose the optimal vehicle to serve a ride-share request, because it needs to know the position, current disposition, and remaining charge of every vehicle in order to choose the ideal one for each request. The pick should be among the vehicles proximate to the request, of the appropriate type to serve it, with sufficient remaining charge to deliver the ride and still serve other requests, or at least return itself to a depot for recharging, and so on.
A single system that coordinates all available vehicles in a geographic area could best deploy them to balance route optimization, efficiency, and time to pick-up. It could layer scheduled rides on top of that, like taking kids to soccer practice after school, and could preposition vehicles at locations with known spikes in volume at certain times and on certain days. It could automatically adjust pricing for peak hours, and automatically combine ride requests in the traditional carpooling manner of collecting multiple riders into a single vehicle. All of this, and many more efficiencies and optimizations, would best be served by a single service that aggregates all available ride-share vehicles and all passengers, versus multiple disjointed services overlapping territory.
Sometimes competition is not a good thing. Coordination is key in such a system, and this is something that Apple could tackle. Design an API that every car could tap into to add itself to a network and begin taking and serving dispatch requests. Like Apple Pay, Apple would take a tiny fraction of the price of each ride.
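A toy sketch of the kind of dispatch selection described above (every name, number, and the depot-reserve margin here is made up for illustration): pick the nearest vehicle of the requested type that can complete the trip and still reach a depot to recharge.

from dataclasses import dataclass
import math

@dataclass
class Vehicle:
    vid: str
    x: float
    y: float
    kind: str          # e.g. "sedan" or "van"
    range_km: float    # distance remaining on the current charge

def dist_km(ax, ay, bx, by):
    return math.hypot(bx - ax, by - ay)

def pick_vehicle(fleet, px, py, kind, trip_km, depot_margin_km=10.0):
    """Choose the closest vehicle of the requested type that can serve the
    trip and still have charge left to reach a depot afterwards."""
    candidates = [
        v for v in fleet
        if v.kind == kind
        and v.range_km >= dist_km(v.x, v.y, px, py) + trip_km + depot_margin_km
    ]
    return min(candidates, key=lambda v: dist_km(v.x, v.y, px, py), default=None)

fleet = [
    Vehicle("A", 0.0, 0.0, "sedan", 30.0),   # closest, but too little charge
    Vehicle("B", 2.0, 1.0, "sedan", 120.0),
    Vehicle("C", 1.0, 1.0, "van", 200.0),    # wrong vehicle type
]
best = pick_vehicle(fleet, px=1.0, py=0.0, kind="sedan", trip_km=25.0)
print(best.vid if best else "no vehicle available")  # prints "B"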
If Apple came up with a system that other manufacturers fitted to their cars, then they could create a fleet of self-driving taxis. Given how cars are put together I can imagine that integrating reliably would be a complete nightmare though.
I've been thinking for some time now that the car I have is probably the last car I will actually own. Most of the time it's parked on the driveway. I have friends in London who don't own cars (there really is no point in London), but they're members of a car share scheme and that works really well for them.
I'm all for Cook's social involvement and having Apple up front as opinion leaders as well, just not as the only thing in Apple news. Toss us a crumb like this once in a while.