mpantone

About

Username: mpantone
Joined
Visits: 803
Last Active
Roles: member
Points: 3,772
Badges: 1
Posts: 2,525
  • Behind the scenes, Siri's failed iOS 18 upgrade was a decade-long managerial car crash

    One thing for sure, as time marches on, many of the limitations of LLMs are becoming blatantly clear.

    I asked 6-7 AI chatbots when the Super Bowl kickoff was about ten days before the event. Not a single one got the simple question correct because AI has zero contextual understanding, no common sense, no true reasoning, no design, no taste, nothing. It's just a probability calculator. They all just coughed up times from previous Super Bowls. They can't even correctly process a question that a second grader could understand.

    About six weeks later I asked 3 AI chatbots to fill out an NCAA men's March Madness bracket the day after the committee finalized seeds. All three failed miserably. They all fabricated tournament seeds, and one was so stupid as to fill out its bracket of fake seeds with *ZERO* upsets, predicting every higher seed to win its game.

    Appalling.

    I stopped after three chatbots because going through another 5-6 would just be a waste of time. Asking the same question of 7-8 chatbots and hunting for the correct/best response is NOT sustainable behavior. IT'S A BIG WASTE OF TIME, and that's what we have in 2025. Worse, each AI inquiry uses an irresponsibly large amount of water and electricity.

    These are just two minor examples that show how little progress LLM-based AI has made. Advancements have stalled. Did I pick two questions that are unanswerable for today's AI chatbots? Hardly; they were two very, Very, VERY common topics at the time. It's not like I asked the chatbots to cure cancer, solve world hunger, or reverse climate change (which is probably what they should be working on).

    It's trivial to show a couple of examples where AI works, or a couple more where it doesn't. Anyone can cherry-pick specific examples to advance their opinion. However, the key is for any given AI to be reliable more than 99.99% of the time. If you asked a human personal assistant to do three things every day (for example: pick up dry cleaning, send a sales contract to Partner Company A, and summarize this morning's meeting notes) and they consistently failed at one of the three, you'd fire them within a week. A previous article reported that Apple is seeing Siri with Apple Intelligence achieve only about a 60% success rate. That's way too low, and none of their competitors are anywhere close to 90%, let alone 99%.
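
    The reliability bar can be made concrete with a little arithmetic. This is my own back-of-the-envelope sketch (the only figure from the post is the reported 60% rate; everything else here is an illustrative assumption): if an assistant succeeds at each task independently with probability p, the odds of an error-free three-task day are p cubed.

```python
# Back-of-the-envelope sketch (illustrative only): how per-task success
# rates compound across a day of independent tasks.
def clean_day_odds(per_task_rate: float, tasks_per_day: int = 3) -> float:
    """Probability that every task in the day succeeds, assuming independence."""
    return per_task_rate ** tasks_per_day

for rate in (0.60, 0.90, 0.99, 0.9999):
    print(f"{rate:7.2%} per task -> {clean_day_odds(rate):7.2%} error-free days")
```

    At 60% per task, barely one day in five goes cleanly; even 99% per task still leaves roughly one flawed day a month. That's the gap between demo-grade and assistant-grade reliability.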

    Here's a big part of the problem in 2025: no one knows which questions AI chatbots can handle and which ones they will stumble on. It's all a big resource-draining crapshoot. Again, this is not sustainable behavior. My guess is that within the next 2-3 years we will see some very large AI players exit the consumer market completely, and more than a couple of big startups will run out of funding and fold.

    Worse, AI chatbots are utterly unembarrassed to give out bad advice, like eating rocks or putting glue on pizza. They also apparently have zero ability to discern sarcasm, irony, parody, or other jokes in their datasets. If I write "Glue on pizza? Sounds great. You'll love it." online, some AI chatbot's LLM will score that as an endorsement of the practice, even though any sane person would know I'm joking.

    That's a fundamental problem with today's LLMs. They "learn" from what they find online. And computer scientists have a very old acronym for this: GIGO (Garbage In, Garbage Out).

    And I'm not all that convinced that the engineers who build consumer-facing LLMs are all that perturbed when their precious baby makes extremely public mistakes. It's all capped off by hubris-laden AI CEOs who swear that AGI is coming next year. Sorry guys, "fake it until you make it" is not a valid business plan. Go ask Elizabeth Holmes. Scratch that, don't ask her. Ask the former shareholders of Theranos.

    You can say that 2025 LLMs have more parameters than before, but based on a few simple queries there has been very little real progress in generating correct answers repeatedly, consistently, and reliably. All I can surmise is that AI servers are hogging more power than before while spewing out more nonsense. That's GIGO in action, running in a bigger model on larger clusters of Nvidia GPUs in more data centers. But it's still GIGO.

    I adamantly believe that LLMs and other AI technologies have fabulous uses in enterprise and commercial settings, especially when the models are seeded with specific, high-quality, verifiable, and professionally curated knowledge. But from a consumer perspective, 98% of the tasks on consumer-facing AI are just a massive waste of time, electricity, and other resources. If consumer-facing AI doesn't make headway toward 99.99% accuracy, at some point the novelty will wear off and Joe Consumer will move on to something else. The only people using AI will be some programmers, scientists, and data-analyst types in specific markets (finance, drug research, semiconductor design, etc.).

    Sure, AI can compose a corporate e-mail summarizing points made in this morning's staff meeting but it doesn't actually make the user any smarter. It just does a bunch of tedious grunt work. If that's what consumer expectations of AI end up being, that's fine too.

    I'm pretty sure that some of Apple's senior management team finally realize that consumer-facing AI runs a high risk of turning into an SNL parody of artificial intelligence in the next couple of years. I'm not sure anyone over at OpenAI gets this (and yes, ChatGPT was one of the AI assistants that fabulously failed the Super Bowl and March Madness queries).

  • Behind the scenes, Siri's failed iOS 18 upgrade was a decade-long managerial car crash

    I disagree that Apple needs to come out with new AI-specific blockbuster hardware. Everyone loves their phone, some to the point of addiction. Apple Intelligence's first home must be the iPhone. Not the Mac, not Apple Watch, not Apple TV, not some weird Home Hub flat panel bolted to a wall in the kitchen or living room.

    Remember that Apple sells more iPhones than any other device it makes. Apple Intelligence needs to work on the iPhone first. They don't need to release new dedicated hardware; the Neural Engine has been in the iPhone since 2017, almost eight years ago. Apple already knows this, which is why Apple Intelligence features on macOS Sequoia trail those on iOS 18.

    Remember Apple's old promise of "the computer for the rest of us"? The iPhone delivers on it absolutely, 100%.

    The world does not need some dorky Star Trek-like AI communicator badge. We have phones. Smart glasses will never reach the same consumer acceptance, and smartwatches have too many limitations (no camera, for one).
  • AAPL crumble: stock hit again, as White House clarifies 145% China tariff rate

    Hard to win when the goalposts are continuously being moved. But Tim and Apple's senior management will try, because that's their responsibility to their shareholders, partners, and customers. Note that "shareholders" basically includes every American adult with a pension plan or retirement account, and by extension their offspring as well.
  • Behind the scenes, Siri's failed iOS 18 upgrade was a decade-long managerial car crash

    Voice-to-text transcription is horribly unreliable. I've seen it with my own eyes via Google Voice for over ten years; it's abysmal. And that's not even real-time: it takes Google Voice a minute or two to generate the text transcription of a short voice message.

    My hunch is that Apple is rewriting Siri from scratch with text, not unreliable voice, as the primary input method. After all, LLMs are trained mostly on text data, so it's no surprise that LLM-powered AI chatbots such as ChatGPT, Claude, Perplexity, Gemini, etc. all debuted with text-centric interfaces.

    I presume it's also faster to deploy AI chatbots in other languages when text, rather than voice, is the primary input method.

    Even in 2025, when I find myself navigating through some company's telephone customer service system, I will always lean toward keypad input (account numbers, menu selections, etc.) rather than voice commands when offered the choice. Keypad entry is still miles better in accuracy than voice.

    I remember using some startup's voice-operated TV remote control back in the late Eighties or early Nineties. It was terrible, and the startup unsurprisingly failed. My experience with voice commands in 2025 shows very little real progress. Voice-to-text transcription is barely any better today than it was 30 years ago. It seemingly works only part of the time for some of the users. It really needs to be 99.9% accurate for pretty much everyone (speaking a supported language) for it not to be a waste of time.
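
    For what "accuracy" means here, the standard yardstick for transcription is word error rate (WER), the word-level edit distance between the reference and the transcript, divided by the reference length. A minimal sketch (my own illustration, with made-up sentences, not anything from the post):

```python
# Rough illustration: word error rate (WER) via word-level Levenshtein
# distance -- the standard metric for speech-to-text quality.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[-1][-1] / len(ref)

# One substitution in a ten-word sentence -> WER 0.1 (i.e. 90% accuracy).
print(wer("please send the sales contract to partner company a today",
          "please send the sails contract to partner company a today"))
```

    By this measure, "99.9% accurate" means a WER of 0.1%, about one wrong word per thousand; a transcript that garbles one word in ten is already down at 90%.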

    My guess is that accurate real-time voice-to-text transcription is still about 20 years away. It might happen in my lifetime, or then again it might not. But I'm certainly not stupid enough to hold my breath for it.

    Every time I go to my clinic, the doctor asks if they can use an AI medical-notes transcription service. I always decline, and I will likely keep declining until the day I die. Accuracy is just one concern; privacy and data security are two others.

    If anyone from Apple is reading this (which I highly doubt), I am not singling your company out. Basically everyone sucks at voice-to-text transcription in 2025, regardless of market capitalization, whether it's real-time or the system gets extra time to process the input. It is ALL rubbish.

    What was shown in that now-deleted advertisement video was either pure science fiction or so horribly unreliable that it's worthless for daily use by Joe Consumer. Bring us an AI assistant that we can count on. And "fake it until you make it" is not a sound policy. We (well, those among us who are sane) don't have time for that sort of nonsense.

    Too much of this AI stuff is released to the public half-baked. Do you know what a half-baked loaf of bread is? It's inedible. At least Apple labels Apple Intelligence as "Beta," because it's light-years away from release quality right now.
  • Instagram may finally be coming to iPad after 15 years

    lukei said:
    I NEED WhatsApp for the iPad, everything uses WhatsApp in Brazil and not having it on iPad is my biggest gripe today.

    That and the fact that it doesn't run macOS

    Just use the web app, it works fine

    The web app on iPad is fine if you only have one Instagram account. It's far more awkward than a native app (even the scaled iPhone app) when switching between multiple accounts.

    Meta is a very obtuse organization with precious little vision. They really can't see that the tablet experience can be vastly different from the smartphone experience. The larger tablet screen opens up many possibilities that aren't practical on smartphones. I'd love to see something like a hybrid Flipboard-style interface as an option, or the main feed, Reels, and Following together on the same screen.