Apple buys machine learning specialist Inductiv to bolster Siri, AI projects
Adding to a growing list of artificial intelligence-related acquisitions, Apple in recent weeks purchased Ontario-based Inductiv to work on Siri and other machine learning initiatives.
Apple confirmed the buy in a boilerplate statement to Bloomberg, saying it "buys smaller technology companies from time to time and we generally do not discuss our purpose or plans."
Inductiv developed a system that relies on AI to automatically identify and correct errors in data, the report said. Error rectification is an important facet of machine learning, which itself aims to complete complex tasks without human intervention.
The firm was co-founded by University of Waterloo professor Ihab Ilyas, University of Wisconsin-Madison assistant professor Theodoros Rekatsinas and Stanford University professor Christopher Re, all machine learning experts. This is the second Apple acquisition for Re, who saw his "dark data" AI company, Lattice Data, scooped up for $200 million in 2017.
A number of Inductiv employees have updated their LinkedIn profiles to reflect moves to Apple. Some team members, like Josh McGrath and Mina Farid, are now "Machine Learning Engineers" at the tech giant, while Ryan Clancy is listed as a "Software Engineer." Each reports to a team overseen by SVP of Machine Learning John Giannandrea, the former head of Google's machine learning division who was promoted to his current senior executive role at Apple at the end of 2018.
The Inductiv purchase continues a string of AI-related acquisitions that started with Perceptio in 2015. Apple went on to buy out Turi and Tuplejump in 2016, Laserlike in 2019 and, most recently, Xnor.ai in January.
Comments
Latest Siri idiocy: "Hey Siri, remind me in 15 minutes to do x" "Ok, I've set a reminder for yesterday at nn:nn." Wtf? How in the hell have they managed to introduce a bug like that? Even more odd is that the reminder actually goes off at the correct time on other devices.
Another one I've discovered is you can't add "thyme" to a shopping list. Siri just flat out doesn't understand that "thyme" doesn't always refer to "time," no matter how you phrase it. So I've taken to saying "Add the herb thyme to my shopping list," which results in "the herb Thyme" listed instead... Not great. At all.
“Add t-h-y-m-e to shopping list”
The transcription is usually impressively good; it rarely gets a word wrong, and it has improved notably since they started using AI. But for some reason they don't seem to use AI in the comprehension. I guess that's a lot harder to do with Apple's computer-generated AI training method than with Google's or Amazon's approach of using actual customer data, though Apple does use actual customer data to some extent.
I didn't know you could spell things out though so thanks for that!
(Although when Siri gives me the option to update the appointment and I say "cancel it," all it does is rename the reminder to "cancel it.")
After letting Siri create a Grocery list for me (on my iPad) I attempted to use my HomePod. I said, “Hey Siri, add thyme to my grocery list.” Siri replied, “OK, I added thyme to your grocery list.” I went back to my iPad, opened the newly created Grocery list and now have two items that say “Thyme”.
Another Edit: I noticed you use “shopping list” instead of “grocery list” and I thought perhaps that was where the issue lies. I asked Siri (again, on my iPad) to add thyme to my shopping list. I received the same response as the one I got when I asked for “grocery list” above and “thyme” was spelled correctly.
I’ve said this before but what I find most bewildering about Siri is how different people can make the exact same request and get different results. A couple of years ago, I believe it was @SpamSandwich who posted a screenshot of a Siri fail where the question was who starred in a particular movie, asked on an iPhone. The result was not what was expected. When I asked Siri on my phone using the exact same language as was in the screenshot, the answer provided was spot on. I was shown a movie poster for the movie, maybe the running time and Tomato Rating, and the beginning of the cast while Siri spoke the names of the people who starred in it. I don’t get it.
If you exposed any relatively intelligent agent, meat based or silicon based, to the hideously convoluted English language, whether spoken in its infinitely varying dialects and accents or its confusing written form, and expected it to make sense of it in most cases, you've set yourself up for bitter disappointment. If you bring context or meaning into the equation, then you're heading into a pit of disappointment from which there is no escape. Is Alexa much better? Maybe a little, but not by much. It's an algorithm, with a probabilistic distribution that defines how well it matches your query, which probably means it's going to be wrong a lot of the time.
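To illustrate the probabilistic-matching point above (and the "thyme" vs. "time" problem earlier in the thread), here is a toy sketch. The numbers are entirely made up and this is not how any real assistant is implemented; it just shows how, when two homophones sound identical, a language-model prior over common words can make the recognizer pick the wrong one.

```python
# Toy sketch with HYPOTHETICAL numbers: homophones get the same acoustic
# likelihood, so the language-model prior breaks the tie, and "time" is
# far more common than "thyme" in everyday speech.

acoustic = {"time": 0.5, "thyme": 0.5}          # identical: they sound the same
language_prior = {"time": 0.98, "thyme": 0.02}  # "time" dominates general text

def pick_word(candidates):
    """Bayes-style score: acoustic likelihood times prior; highest wins."""
    return max(candidates, key=lambda w: acoustic[w] * language_prior[w])

print(pick_word(["time", "thyme"]))  # "time" wins even when the user meant the herb
```

Spelling the word out ("t-h-y-m-e") works because it bypasses the homophone ambiguity entirely.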
What the heck, when I'm watching TV with allegedly English (sort-of) speaking people portrayed I frequently find myself wishing for closed captioning to convert whatever it is they're saying to words that I can read. And no, I'm not picking on any region of the country or native vs non-native speakers. Languages are just so damn messy and the fuzzy logic needs to be deeply fuzzy, especially when it comes to English, which has like 6 real, hard and fast rules and about 280 million "special cases and exceptions."
Maybe we shouldn't be trying to talk to machines any more. Perhaps we should start off by talking to other people, in a meaningful way perhaps. Once we figure that out, maybe we apply what we've learned from our people-to-people communication and connection "experiments" to the task of programming stupid machines to emulate some of what we've learned about communication. Between now and then, if we simply treat these algorithms more as games with a pseudorandom win-lose probability, like scratch-off lottery tickets, perhaps we'll all learn to be happier. Or maybe we just learn their language.
I experienced the “add 20 minutes to my timer” request turning it into a 20-minute timer. I’ve also repeatedly experienced endless mistakenly chosen words, whether I emphasize certain key phonemes or not, and there’s just zero context sensitivity.
The reason it sucks is that none of this is AI. It’s all just scripting and algorithms. That’s not intelligence. No one really knows what intelligence is on a mechanical level, so we cannot recreate it. I watch my girlfriend using the Google version and it’s just as stupid.
That’s what we really have: artificial stupidity. The more they try to dress it up with human-like attributes, the more the uncanny valley effect shines spotlights on its failure to BE intelligence.
From this industry, with the pathology of “perpetual growth”, and disinterest in actually investing in pure research (and companies like Apple throwing away existing research and previously learned knowledge, such as with UI and human interface design), incompetence is really all that should be expected from big business... and big business stands in the way of small business, because it’s all about acquiring or destroying any competition.
There was a stark difference between “before iPhone 1” and “after iPhone 1”, but we won’t see that kind of dramatic change again in this industry for a very long time. Certainly not with MBA-mindset leadership. When it’s only about the share prices, the products inevitably suffer, and Apple’s well-established corporate culture of self-imposed isolation just makes it all the worse.