mpantone

About
- Username: mpantone
- Visits: 802
- Roles: member
- Points: 3,767
- Badges: 1
- Posts: 2,522

Reactions
John Giannandrea out as Siri chief, Apple Vision Pro lead in
Wesley Hilliard said:
9secondkox2 said: Tim's ai, headset, and car haven't panned out so far. Would love to see a return to the days of Apple not releasing something until it's 100% ready and captivating. I'll give Cook's car a pass since it was never officially announced. Even the watch was closely guarded. And it was a success when it finally launched. A return to form in that regard would be most welcome. No more public experiments and betas please.
Other than delaying one feature, Apple Intelligence is doing fine. Apple didn't promise a sentient machine like others in the space did. Is that a bad thing? Them delaying something that isn't ready instead of releasing it anyway is exactly what you're asking Apple to do, yet you're criticizing them for it.
And unless you know something about how Apple Vision Pro has done so far and Apple's expectation for the product, there's no way of knowing "how it panned out." It's been pretty awesome from my perspective.
For the fictitious future "Apple Glass" to be successful, it really needs to be around the same weight as my current eyeglasses: about 30 grams.
One of the AI photo features has been helpful. Of course there are third-party tools that function similarly; it's just nice to have the Apple Intelligence one closely integrated into the OS.
You're a writer, so you are attuned to the writing benefits of Apple Intelligence. Many on this planet won't benefit. As I mentioned elsewhere, Apple Intelligence is good at generating corpspeak, the stuff one shoves into a work e-mail. There's very little style involved. Writing is highly commoditized these days; it doesn't take much effort for an AI assistant to pump out copy that resembles 99.9% of the drivel on the Internet anyhow.
And with LLMs training on Internet accessible content, the quality of their datasets is dropping over time. Old school computer scientists have an expression for this: GIGO (Garbage In, Garbage Out). That's a really good way to describe LLM-powered AI chatbot assistant responses in March 2025.
Note that there are similar examples of regression in generative AI photo tools. Some of the AI generation tools today put out less satisfactory images than they did a year ago.
I expect consumer-facing AI to put out more and more useless crap in the next 12-24 months because these models are training themselves on garbage that their predecessors created. There really needs to be some sort of quantum leap in AI model performance to circumvent this decline. The longer this takes, the harder it will be for AI models to overcome the poor-quality data.
When I do a standard Internet search engine query, I'm seeing mostly garbage. It's pretty easy for me to identify in a few seconds, but LLMs don't make any natural quality distinctions; they absorb satire, fact, and outright lies equally. This probably explains why LLMs can't effectively process the junkmail folder in my e-mail. What is obvious to me as junkmail goes unrecognized by AI tools.
John Giannandrea out as Siri chief, Apple Vision Pro lead in
Let's remember that John Giannandrea was formerly the head of AI at Google. It's not like he was unqualified for his post at Apple. However, being a good researcher and shipping product are two very different disciplines.
Unfortunately Apple took its sweet time to make this change, just like letting Project Titan fester for years.
Let's also remember that consumer-facing AI is new technology still in its infancy. It's not like there's any (consumer) company that has been doing this for 20+ years. Apple only started including machine learning silicon in their chips in 2017.
Everyone is pretty new to AI, which is why not a single consumer-facing AI assistant is head-and-shoulders better than the competition. It's all alpha quality right now. And it doesn't look like whoever has the most datacenter Nvidia GPUs wins, either.
Like most Americans with a retirement plan, I am an indirect investor in almost all of the major players. I have a vested interest in seeing some level of success from all of them. Competition is good; it drives quality, innovation, and value. I also appreciate Apple's commitment to privacy. That reason alone makes me want Apple to be a top competitor in this field.
Apple is lying about Apple Intelligence, John Gruber says -- and he's right
Today's Siri is the result of years of neglect by Apple. They probably tried to bolt on some AI capabilities to Siri which is like putting lipstick on a pig. And then they belatedly realized that they were looking at a pig with lipstick.
Now they realize that they need to rewrite Siri from scratch, a decision they should have made 3-4 years ago. Well, at least AI has come far enough that a lot of the coding can be done with AI, and Apple certainly has enough cash to get enough GPUs for such an undertaking.
My belief is that many of these AI companies have blinders on and are currently following narrow paths in terms of LLM development that will eventually lead to an end of the road.
Time will bear witness to these misguided decisions. Apple really needs to deliver something stunning at WWDC this year. If they can't, they should consider shipping versions of iOS and macOS that have zero Apple Intelligence features. Just cut out all that resource-hogging code.
I will say this: my iPhone 12 mini running iOS 17.7.2 runs faster than a newly acquired iPhone 16 running iOS 18.3.2. For this reason, I have balked at making the iPhone 16 my primary phone. I'm sticking with the 12 mini for the time being. Right now, I view iOS 18 as a downgrade from iOS 17.
Apple is lying about Apple Intelligence, John Gruber says -- and he's right
macplusplus said:
More personalized Siri... Easy to utter, extremely hard to conceive and implement. The biggest drawback of current LLMs is their lack of "context retention". Ask any of the most powerful LLMs and they will list that among their limitations. Even in a single session they have difficulty maintaining an established response pattern in repetitive tasks.
LLM-powered AI chatbots have no common sense, no artistic taste, no situational awareness (which includes context retention).
A week before the Super Bowl, I asked seven chatbots (ChatGPT, Gemini, Perplexity, Claude, DeepSeek, Grok, Llama-3.1-Nemotron-70B): "What time is the Super Bowl kickoff?"
Not a single chatbot was able to answer this question correctly. They all pulled up historical data from previous Super Bowls, and many still provided fuzzy, long-winded answers.
That's a complete lack of both common sense and situational awareness. A normal human being (like a schoolkid or intern or the guy sitting next to you at the DMV) would assume that I was asking about the upcoming event.
There are other trial questions I've asked AI chatbots where they failed laughably. So you really, Really, REALLY need to be super specific about how you frame and write a question, because LLM-powered AI chatbots are dumb as rocks. There's really no intelligence behind them; they're just fancy probability calculators.
Asking 6-7 chatbots the same question and getting multiple wrong answers takes way more effort than using a standard Internet search engine plus your own brain and common sense to figure out what's B.S. and what's legit. And even if 1-2 chatbots gave the right answer, you'd still have to go back and verify the accuracy.
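If you want to run this sort of informal test yourself, something along the lines of the rough sketch below works. This assumes each provider exposes an OpenAI-compatible endpoint; the second provider entry (base URL, model name, key variable) is a placeholder, not any specific service I queried.

import os
from openai import OpenAI  # pip install openai

QUESTION = "What time is the Super Bowl kickoff?"

# (label, base_url, model, env var holding the API key)
# The second entry is a placeholder -- swap in whatever OpenAI-compatible
# providers you actually have keys for.
PROVIDERS = [
    ("ChatGPT", "https://api.openai.com/v1", "gpt-4o-mini", "OPENAI_API_KEY"),
    ("OtherBot", "https://example.com/v1", "example-model", "OTHER_API_KEY"),
]

for label, base_url, model, key_env in PROVIDERS:
    client = OpenAI(base_url=base_url, api_key=os.environ[key_env])
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": QUESTION}],
    )
    print(f"--- {label} ---")
    print(resp.choices[0].message.content.strip())

The code isn't the point; the point is that even with a one-line prompt, every service came back with stale or padded answers.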
LLM chatbots work great for topics that are mathematics- and engineering-oriented, but for many ordinary topics they are a complete waste of time here in early 2025. Another glaring shortcoming of LLM AI is its apparent inability to identify junkmail. I look through my junkmail folder several times a day, and 99% of the bogus messages can be identified just by glancing at the subject line. No e-mail provider has shown any inclination to use AI to permanently and automatically delete junkmail, because no AI operator can trust LLMs to get things right.
So even with my very modest cognitive capabilities I'm much better than an LLM at identifying spam.
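To show how low that bar really is, here's a toy illustration of the glance-level pattern matching a human applies to a subject line. The keywords and test subjects are made-up examples, not anything my mail provider actually runs.

import re

# Made-up examples of subject-line patterns that scream "junk" to a human.
SPAMMY_PATTERNS = [
    r"you('ve| have)? won",
    r"free (gift|iphone|crypto)",
    r"act now",
    r"claim your (prize|reward)",
    r"limited.?time offer",
    r"urgent.*(invoice|payment)",
]

def looks_like_spam(subject: str) -> bool:
    s = subject.lower()
    return any(re.search(p, s) for p in SPAMMY_PATTERNS)

# Made-up test subjects.
for subj in [
    "You've WON a FREE gift card!!!",
    "Re: meeting notes for Tuesday",
    "URGENT: unpaid invoice #8841",
]:
    print(f"{subj!r} -> {'junk' if looks_like_spam(subj) else 'keep'}")

Obviously a filter that crude misfires on real mail, but a person doing the same glance-level triage gets it right almost every time.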
Someday they will get better but the notion that AGI is a year or two away is pretty ludicrous. I think it's more AI company CEOs trying to hype up their capabilities to get more funding.
Sure, AI is great for answering math questions but I'm not in school and I don't have any math questions to ask.
Apple needs to deliver an Apple Intelligence service that is better than their AI competitors. Just providing something comparable is a disservice to their users. It's possible that they might have realized that they are incapable of doing just that in 2025.
Apple's rumored Home Hub said to be under employee testing
Personally I think the idea of a fixed tablet makes zero sense for today's consumer. No one wants to get up to go look at a screen. They want to do everything from their phones (or possibly an Apple Watch), since realistically those are the only two devices they are going to carry around with them throughout the house.
The most plausible Home Hub hardware device would be some sort of dual- or multi-function MagSafe-equipped smart charging stand that would be controlled by a software interface primarily from their phones.
A device with HomePod-like features would be okay, but again, voice control only works within normal speaking distance.
One thing this Home Hub device would have to conquer for sure is effortless device configuration. It needs to "just work" rather than going through complicated, unreliable, convoluted pairing procedures. That's like 95% software.
Also devices like iPhones and Apple Watches can indicate where the person is (living room, den, bedroom, garage, laundry room, etc.) and possibly adjust connected devices appropriately. If I need to walk up to a touchpanel to dim the lights upon leaving a room, hell, it's easier to just hit a physical light switch.
We've seen consumers push back on full touchscreen interfaces in automobiles and a return of certain physical controls (like cabin climate control). It's way easier to just turn on/off the bedside lamp by hitting a switch rather than hunt around some screen for the button that turns off that light or use voice commands which might disturb other occupants.
And speaking of software, it's clear that a Home Hub device will work better if it interfaces with a Siri implementation that isn't as dumb as a rock. That means Apple really needs to release a context-aware version of Siri with Apple Intelligence, with meaningful accuracy and reliability, before they will have anything really useful in a Home Hub.
Releasing any sort of Home Hub device with Siri's current capabilities will not help adoption. This is one of those types of new devices that I will not be an early adopter of. I will probably wait 5+ years before thinking about having some sort of smart home hub. I'm sure Apple has prototype hardware in its labs. It's the software that is key.
Based on what I've seen in Apple Intelligence in March 2025, I think we are still 3+ years away from having an Apple Intelligence that will make the smart home viable.