jas99

About

Username: jas99
Joined:
Visits: 99
Last Active:
Roles: member
Points: 734
Badges: 1
Posts: 185
  • Apple Watch Series 11: What's expected to arrive this fall

    The Apple Watch is an incredibly useful device. I love my Series 10 in titanium. 
  • John Giannandrea out as Siri chief, Apple Vision Pro lead in

    mpantone said:
    As far as I can tell, consumer-facing AI isn't improving in leaps and bounds anymore, and probably hasn't for about a year or 18 months.

    A couple of weeks before the Super Bowl, I asked half a dozen LLM-powered AI assistant chatbots when the Super Bowl kickoff was scheduled. Not a single chatbot got it right.

    Earlier today I asked several chatbots to fill out a 2025 NCAA men's basketball tournament bracket. They all failed miserably. Not a single chatbot could even identify the four #1 seeds. Only Houston was identified as a #1 seed by more than one chatbot, probably because of their performance in 2024.

    I think Grok filled out a fictitious bracket with zero upsets. There has never been any sort of major athletic tournament that didn't have at least one upset. And yet Grok is too stupid to understand this. It's just a dumb probability calculator that uses way too much electricity.

    Context, situational awareness, common sense, good taste, humility. Those are all things that AI engineers have not yet programmed into consumer-facing LLMs.

    An AI assistant really needs to be accurate 99.8% of the time (or possibly more) to be useful and trustworthy. Getting only one of the four #1 seeds correct (information published on multiple websites) is appallingly poor. If it can't even identify the 68 actual teams involved in the competition, what good is an AI assistant? Why would you trust it to do anything else? Something more important, like scheduling an oil change for your car? Keeping your medical information private?

    As I said a year ago, all consumer-facing AI is still alpha software. It is nowhere close to being ready for primetime. In several cases there appears to be some serious regression.

    25% right isn't good enough. Neither is 80%. If a human assistant failed 3 out of 4 tasks and you told them so, they would be embarrassed and probably afraid that they would be fired. And yes, I would fire them.

    Apple senior management is probably coming to grips with this. If they put out an AI-powered Siri that frequently bungles requests, that's no better than the feeble Siri they have now. And worse, it'll probably erode customer trust.

    "Fake it until you make it" is not a valid business model. That's something Elizabeth Holmes would do. And she's in prison.
    gatorguy said:
    Did you try Gemini, currently 2.0 Flash? In a voice search on my Pixel it listed South Auburn Tigers, West Gators, East Duke, and Midwest Cougars
    mpantone said:
    I did not give Gemini the bracket question. I did give it the Super Bowl question, which it failed like the others.

    Your comment brings up an important illustrative point. No one has the time to dork around with 7-8 AI chatbots to find one (or more) that gives the correct answer for each question. That's not a sustainable approach.

    There's probably some AI chatbot that might get the right answer to a simple question. The problem is no AI chatbot is reliably accurate enough to instill trust and confidence. I can't ask ten questions to 8 chatbots and wade through the responses. In the same way, having ten human personal assistants isn't a worthwhile approach.

    Let's say Grok has a 20% accuracy score and Gemini is 40%. That's double the accuracy for Gemini but it still is way too low to be trusted and deemed reliable.

    Like I said, I think Apple's senior management understands this, which is why they've postponed the AI-enabled Siri. Even if it were 60-80% accurate, that's still too low to be useful. You really don't want a personal assistant -- human or AI -- that makes so many messes that you have to go clean up after them or find an alternate personal assistant that might do some of those failed tasks better. In the end, for many tasks right now, you are better off using your own brain (and maybe a search engine) to figure things out, because AI chatbots will unapologetically make stuff up.

    All of these AI assistants will flub some questions. The problem is you don't know which one will fail which question at any given moment. That's a waste of time. I think the technology will eventually get there, but I'm much more pessimistic about the timeline today compared to a year ago because improvements in these LLMs seem to have stalled or even regressed. I don't know why that is, but it doesn't matter to Joe Consumer. It just needs to work. Basically all the time. And right now none of them do.

    For sure I am not the first person to ask an AI assistant about the Super Bowl and March Madness. And yet these AI ASSistants have zero motivation to improve accuracy even if they are caught fibbing or screw up an answer.

    I've used all of the major AI assistants and they all routinely muck up. I did not try Gemini simply because I got far enough by the third AI chatbot to deem this a waste of time. I can't keep jumping from one AI chatbot to another until I find one that gives an acceptable result.

    In most cases, the AI assistant doesn't know it is wrong. You have to tell the developer (there's often a thumbs up or thumbs down for the answer). And that doesn't make the AI assistant get it right for the next person who asks the same question. Maybe enough people asked Gemini the question and the programmers fixed Gemini to give the proper response. One thing is for sure: AI assistants don't have any common sense whatsoever. Hell, a lot of humans don't either, so if LLMs are modeled after humans, it's no wonder that AI assistants are so feeble.

    Here in March 2025 all consumer-facing AI assistants are silly toys that wastefully use up way too much electricity and water. I still use some AI-assisted photo editing tools because my Photoshop skills are abysmal. But for a lot of other things, I'll wait for AI assistants to mature. A lot more mature.
    You are absolutely correct. I believe Apple also agrees with you. They know the masses are in a delusional frenzy about AI-enabled …. whatever. The truth is the LLM fake-intelligence chatbot DOES NOT WORK and is UNACCEPTABLE for Apple’s products.
    Apple has to develop its own neural-network machine learning instead of this LLM snake oil.
    I’m just sorry Apple committed itself to being a purveyor of a supposedly useful system called Apple Intelligence when, apparently, it’s based on snake-oil LLMs.
  • Apple loses antitrust appeal in Germany, now subject to steep fines and regulations

    longpath said:
    This ruling is akin to Lamborghini being declared anticompetitive for not allowing 3rd party (including parts made by Ford & Chrysler) dealer-installed accessories in the Temerario.

    Apple is a minority manufacturer of phones, tablets, and personal computers. As such, they do not now have, nor have they ever had, anything vaguely resembling sufficient market control for any of their actions to be meaningfully anticompetitive. This ruling reflects a warped grasp of Apple's actual market share.
    avon b7 said:
    https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/digital-markets-act-ensuring-fair-and-open-digital-markets_en

    By Apple's own numbers it qualifies as a Gatekeeper for phones under EU law.

    Car analogies don't work well here due to the digital CPS nature of the issue.

    Also, many jurisdictions around the world are coming to similar conclusions about Apple's anticompetitive practices. The US might end up being one of them.


    charlesn said:
    EU Law is a joke. EU law is so entirely vague and open to subjective interpretation that anyone perceived to have deep pockets can quite easily be deemed to be in violation of it. The way it's written, all they have to do is fabricate a plausible rationale and set, or move, the goal posts to wherever they need them to be, and jackpot!

    EU law makes a mockery of law.
    avon b7 said:
    The numbers that determine gatekeepers are not subjective.

    You may argue about how those numbers were set but not that Apple falls into the group of gatekeepers. 
    charlesn said:
    Here's what makes no sense: for MANY years, Apple's stubborn insistence on a walled garden held it back in the marketplace. And no government cared about its walled garden then. But over time, and especially as digital devices proliferated into tablets, wearables, etc., consumers made the free-will choice to buy into the tightly controlled Apple ecosystem. In fact, the tightly controlled ecosystem with its focus on privacy, security, seamless operation between devices and "it just works" reliability became THE main reason to choose Apple. Let's face it: it has never been difficult to get more bang for your buck in the world of Windows and Android hardware. But consumers chose to pay more for Apple hardware, with its walled garden being a main reason why. And now here comes government, breaking the very thing that tens of millions of consumers have freely chosen in buying Apple products, all in the name of the insane, upside-down logic of supposedly greater consumer choice. Except you're not allowed to choose a closed and tightly controlled ecosystem and--here comes the upside-down logic again--the reason consumers will not be allowed to have that choice is because too many consumers have freely chosen it.
    You are 100% correct. Thanks for the lightning bolt of rationality. 
  • Calls for Tim Cook's resignation over Apple Intelligence miss that he has made Apple what ...


    Amazing how Apple / Tim Cook is pilloried over the Apple Car - a product that Apple never even acknowledged. It really is quite an amazing apportioning of criticism. The AI functionality that has been advertised but not delivered is very fair game for criticism.

    But if the car, then why not blame Apple for their failures over their delayed TV set, the hot mess of the folding iPad, Macs still without a cellular connection, their flawed electric motorcycle, Apple ring, washing machine, solar powered router, 8” iPhone and all those other products Apple have never even announced. We know they were junk and were quietly shelved ….
    This. Exactly.

    I’m incredibly glad Apple had the strength to say no to the boondoggle called Apple Car. $1 billion per year investigating that market was nothing for Apple. Few companies are well-run enough to say no to a product idea they’ve spent time investigating. Best decision Tim Cook could have made.
  • Calls for Tim Cook's resignation over Apple Intelligence miss that he has made Apple what ...

    xbit said:
    Apple Intelligence feels like a me too product designed to appeal to investors rather than consumers. Almost every iPhone user I know hates it and asks me how you can switch it off.

    I don’t blame Tim Cook though as every tech company is expected to bet the farm on gen AI. Cook would be eviscerated by the markets if Apple didn’t have a gen AI strategy. There’s a lot of hype but no-one has a product that works as promised.


    I think Apple users are a lot more savvy than the average consumer. I think the AI “revolution” is a mix of lies, misunderstanding, and intentional false hype. The large language models (LLMs) create the impression of intelligence but possess none. But that’s enough to fool huge swaths of the human population. One of Apple’s internal communications was, “The last thing the world needs is another chatbot.” And Apple was right. But it was forced to do something by the - let’s just say it - gullible masses who bought the AI lies. Maybe that’s the one thing I can say it did wrong - jump on the bandwagon instead of exposing it for the lie it is?

    I think a lot of Apple users know this and are simply not interested in AI - even if it’s Apple’s version. I get it. The last thing I want is some LLM making nonsensical statements in my e-mail messages to other people. I don’t need help writing. Get the heck out of my way. That’s why lots of Apple users are turning it off.

    The only thing that worries me now is that if Apple relies too heavily on LLMs, it can never have a reliable Apple Intelligence product. Why? Because LLMs hallucinate a shockingly high percentage of the time. LLMs are just plain wrong - a LOT. I know Apple knows this. I know Apple has been developing neural processors for a long, long time. I know Apple has the best and brightest people. I know they are amazing managers - honest, caring, really great corporate citizens. So I trust they are doing the wise thing with Apple Intelligence, even if it takes longer. Doing things well takes longer. And I’m sick of those who can’t criticizing those who can.