mpantone

About

Username
mpantone
Joined
Visits
712
Last Active
Roles
member
Points
3,561
Badges
1
Posts
2,388
  • Facebook, WhatsApp, and Instagram block use of Apple Intelligence

    Look, AI-assisted writing tools are really just like any other tool or convenience.

    Did you cook your own dinner last night? Or did you just order DoorDash? Dump a jar of spaghetti sauce in a saucepan? Use a food processor instead of chopping up ingredients with a knife? Use canned chicken stock? Did you paint your last image or did you just snap a photo with your smartphone?

    Tools exist to make lives easier whether it be a Skil-saw, automobile, or digital service like Apple Intelligence's Image Clean Up. It's really up to the individual to decide how to use it. You are still free to fire up Photoshop or The GIMP and edit your images manually. Or dodge-and-burn like Ansel Adams. Or just use a pencil on paper and an eraser. I suck at Photoshop so something like Image Clean Up helps me because I could spend hours on an image and not get the same results as using an AI-assisted photo editing tool.

    How long would it take you to bake the perfect baguette, your sister's wedding cake, a scone? Don't want to pick up leaves? Get a rake. Think raking is too slow? Get a leaf blower.

    For sure AI is here to stay and even sticking your head in the sand isn't going to make it go away.

    And no one is pointing a gun at your head saying you need to use Meta's AI tools when you are on Facebook. You are still free to type your own thoughts using your own brain cells instead of Meta Llama.

    For sure, AI-assisted writing tools reduce some of the tedium and time of composing certain types of written communication. For sure, some people have better writing skills than others. AI-assisted writing tools simply save some time and/or improve the output for some less talented writers. They don't actually make anyone a better writer or thinker. We have been over this countless times before.

    Hell, I'm pretty disappointed with all consumer-facing AI because it still can't handle junk e-mail. That is absolutely pathetic, and everyone working on consumer-facing LLMs should be utterly ashamed of their lack of ability here in April 2025. Sure, writing tools are fine. But give us something that will help everyone. No one benefits from wasting time triaging junk mail.

    This is how moronic consumer-facing LLM-based AI tools are in 2025. There are great things happening on the commercial/enterprise side (pharmaceutical research, semiconductor design, financial services). But from a consumer perspective, it's almost all embarrassingly lame. AI writing tools are one of the few things today's consumer-facing LLMs can actually do reasonably decently.

    AI IS NOT GOING AWAY AND WHINING ABOUT IT WON'T DO SQUAT.


    Is Apple Intelligence's Image Clean Up feature better than an expert Photoshop user? Probably not. Is it faster than rank Photoshop-illiterate Joe Consumer? Absolutely. These consumer-facing tools are being aimed at a broad market, not for people who have 20+ years of professional, on-the-job Photoshop experience. Some of you have excellent Photoshop skills. We understand, good for you. Some of you also might be superior bakers, carpenters, seamstresses. We get that too.

    AI writing tools aren't targeted at professional writers, editors, etc. They're aimed at the ordinary office worker who probably doesn't have a master's degree in English, Comparative Literature, Classics, whatever (hell, some might be writing in a language other than their native tongue). AI isn't going to write War & Peace, a Pulitzer Prize-winning poetry collection, or wedding-gift thank-you notes.

    We have been over this time and time again.
  • Apple leads global smartphone market as iPhone 16e boosts sales

    When Apple revealed the iPhone 16e, a lot of Western journalists (including some here at AppleInsider) and many commenters scratched their heads, saying "I don't know who this iPhone 16e is for."

    And like every other lower-priced smartphone, the answer is always the same: emerging markets.

    I don't know how many comments I read like "the lack of MagSafe is a dealbreaker for me." Clearly there's a very large portion of this planet that really doesn't prioritize MagSafe. Note that if you're mostly on the go (particularly if you rely on public transit), using a MagSafe charger simply isn't part of your daytime modus operandi. It's good for big fat Americans who sit in front of their big fat monitors (or in their big fat SUVs), where having a stationary MagSafe charger is a reality.

    Remember, people: smartphones have reached the point of saturation in the USA and many other technologically advanced markets (Japan, South Korea, much of Western Europe, Canada, etc.). It's places like India, Brazil, Indonesia, the rest of Latin America, and Africa where there is growth potential. Removing MagSafe from the iPhone 16e was a very reasonable compromise in features to cut costs.

    Same with the lack of mmWave 5G: it's poorly adopted outside of the USA, and even here in the States it's mostly metropolitan downtown zones, sports stadiums, and a handful of other places that really benefit from mmWave technology.

    I'm sure the next time Apple releases another lower-priced iPhone, we'll get the same ignorance about emerging markets. This is nothing new, we saw this with the iPhone 5c and every single iteration of the iPhone SE and the two iPhone minis.
  • Apple Vision Pro 2: What the rumor mill sees coming, and when it might arrive

    Remember that not everyone will tolerate eyeglasses, let alone a vision-obstructing HMD. I have an Oculus Rift S and I can't wear it for more than 40-45 minutes tops. And the Rift S is way lighter than the AVP. I also hate headphones; I hate goggles.

    I wear glasses and I take frequent breaks for short relief. My glasses are 30-35 grams.

    If Apple wants to release "Apple Glass," it's going to need to come in at around 50-75 grams tops. I'm not convinced today's technology is sufficient.
  • Apple hampered its Siri ambitions by penny-pinching

    jdiamond said:
    An easy misstep, but no excuse for Siri.  A typical leading foundation model takes about 2 months to train on ~16,000 GPUs.  I don't know where Apple was on that scale, but let's say as a result of stinginess, Apple took 4 months instead of 2 months to train a model.  Doesn't explain why 2 years later they have nothing.  Or why they didn't have something already back in 2023.
    There are two separate components in Siri With Apple Intelligence. Let's look at the AI part first.

    My guess is that Apple has an LLM-based AI chatbot assistant that is comparable in quality to the competition (no one stands head and shoulders above the rest). And that's a problem. Apple executives undoubtedly realize that "just as good" isn't what they need to deliver. They need to ship something markedly better. An AI assistant that works 60-70% of the time is not reliable enough for Joe Consumer. It's just a waste of resources: time, electricity, water, money. If you had a human personal assistant that would bungle 30-40% of assigned tasks, you'd fire them that first week.

    In the same way, if your iPhone failed at subway fare gates 40% of the time, you'd quickly give up using Apple Pay as a transit pass for your daily commute. You'd just pull out your plastic card or shove a paper ticket into a slot.

    For true usability, an AI assistant will likely need to be 99.99% reliable. Maybe even more accurate than that. No one has time to query 7-8 AI chatbot assistants and continuously triage through the responses until they stumble upon the right answer but that's the state of the consumer-facing AI industry in April 2025.
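    To put rough numbers on the "query 7-8 chatbots" point (an illustrative back-of-the-envelope sketch; the 65% figure is my hypothetical, not from any vendor): even if each assistant independently got a question right about 65% of the time, asking eight of them would almost always put a correct answer *somewhere* in the pile. The catch is that nothing tells you which answer is the correct one, so you still have to triage all eight responses yourself.

    ```python
    # Illustrative reliability math only, assuming each assistant answers
    # a given query correctly with independent probability p.
    def chance_at_least_one_correct(p, n):
        """Probability that at least one of n assistants gets it right."""
        return 1 - (1 - p) ** n

    # One 65%-reliable assistant vs. querying eight of them:
    print(round(chance_at_least_one_correct(0.65, 1), 4))  # → 0.65
    print(round(chance_at_least_one_correct(0.65, 8), 4))  # → 0.9998
    ```

    The aggregate number looks great on paper, but it's reliability you can only cash in by doing the verification work yourself, which is the whole complaint.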

    The second problem is that Siri's current input method is voice only. Voice input is notoriously unreliable. It works some of the time for some people. This is not news, it has been like this for decades.

    You combine a balky input method with unreliable assistance and you get something that might be amusing from time-to-time but not useful in the long run.

    Apple needs to make text Siri's primary input. That will reduce query-interpretation errors. It's no surprise that all of the other big AI assistants (ChatGPT, Perplexity, Gemini, Claude, whatever) started out as text-based tools.

    There's a third challenge for Apple: they prioritize privacy and data security more than the competition. That makes it harder for Apple not easier. Apple's main competition in this area wants to take your AI chatbot activity data and sell it to the highest bidder. Apple has to put more effort and resources into their service because it takes extra work to ensure privacy and security.

    Remember Microsoft Recall from last year? Well, it's supposed to ship soon, almost a year late. Reason? Heavy criticism about Microsoft's utter lack of security and privacy features for the service.

    Throwing more money at AI model training isn't improving chatbot assistant reliability.
  • Behind the scenes, Siri's failed iOS 18 upgrade was a decade-long managerial car crash

    One thing for sure, as time marches on, many of the limitations of LLMs are becoming blatantly clear.

    I asked 6-7 AI chatbots when the Super Bowl kickoff was about ten days before the event. Not a single one got the simple question correct because AI has zero contextual understanding, no common sense, no true reasoning, no design, no taste, nothing. It's just a probability calculator. They all just coughed up times from previous Super Bowls. They can't even correctly process a question that a second grader could understand.

    About six weeks later I asked 3 AI chatbots to fill out an NCAA men's March Madness bracket the day after the committee finalized the seeds. All three of them failed miserably. They all fabricated tournament seeds, and one was so stupid as to fill out its bracket of fake seeds with *ZERO* upsets. It predicted every higher seed to win its game.

    Appalling.

    I stopped after three chatbots because going through another 5-6 is just a waste of time. Asking questions to 7-8 chatbots and hunting for the correct/best response is NOT sustainable behavior. IT'S A BIG WASTE OF TIME and that's what we have in 2025. Worse, each AI inquiry uses an irresponsibly large amount of water and electricity.

    These are just two minor examples that show how little progress LLM-based AI has made. Advancements have stalled. Did I pick two questions that are unanswerable for today's AI chatbots? Well, they were two very, Very, VERY common topics at the time. It's not like I asked AI chatbots to cure cancer, solve world hunger, or reverse climate change (which is probably what they should be working on).

    It's trivial to show a couple of examples where AI works, or a couple more where it doesn't. Anyone can cherry-pick specific examples to advance their opinions. However, the key is for any given AI to be reliable >99.99% of the time. If you asked a human personal assistant to do three things every day (for example, pick up the dry cleaning, send a sales contract to Partner Company A, and summarize this morning's meeting notes) and they consistently failed at one of the three, you'd fire them in a week. A previous article said that Apple is seeing Siri with Apple Intelligence hit only a 60% success rate. That's way too low, and none of their competitors are anywhere close to 90%, let alone 99%.

    Here's a big part of the problem in 2025: no one knows what questions AI chatbots can handle and what they will stumble at. It's all a big resource-draining crapshoot. Again, this is not sustainable behavior. My guess is in the next 2-3 years we will see some very large AI players exit the consumer market completely. More than a couple of big startups will run out of funding and fold.

    Worse, AI chatbots are utterly unembarrassed about giving out bad advice, like eating rocks or putting glue on pizza. They also have apparently zero ability to discern sarcasm, irony, parody, or other jokes in their datasets. If I write "Glue on pizza? Sounds great. You'll love it." online, some AI chatbot's LLM will score that as an endorsement of that action even though any sane person would know I'm joking.

    That's a fundamental problem with today's LLMs. They "learn" from what they find online. And computer scientists have a very old acronym for this: GIGO (Garbage In, Garbage Out).

    And I'm not all that convinced that the engineers who build consumer-facing LLMs are all that perturbed when their precious baby makes extremely public mistakes. Capped off by hubris-laden AI CEOs who swear that AGI is coming next year. Sorry guys, "fake it until you make it" is not a valid business plan. Go ask Elizabeth Holmes. Scratch that, don't ask her. Ask former shareholders of Theranos.

    You can say that 2025 LLMs have more parameters than before, but based on a few simple queries there is very little real progress in generating correct answers repeatedly, consistently, and reliably. All I can surmise is that AI servers are just hogging more power than before while spewing out more nonsense. That's GIGO in action, running in a bigger model on larger clusters of Nvidia GPUs in more data centers. But it's still GIGO.

    I adamantly believe that LLMs and other AI technologies have fabulous uses in enterprise and commercial settings, especially when the models are seeded with specific, high-quality, verifiable, and professionally qualified knowledge. But from a consumer perspective, 98% of the tasks on consumer-facing AI are just a massive waste of time, electricity, and other resources. If consumer-facing AI doesn't make any headway toward 99.99% accuracy, at some point the novelty will wear off and Joe Consumer will move on to something else. The only people using AI will be some programmers, scientists, and data-analyst types in specific markets (finance, drug research, semiconductor design, etc.).

    Sure, AI can compose a corporate e-mail summarizing points made in this morning's staff meeting but it doesn't actually make the user any smarter. It just does a bunch of tedious grunt work. If that's what consumer expectations of AI end up being, that's fine too.

    I'm pretty sure that some of Apple's senior management team finally realize that consumer-facing AI has a high risk of turning into an SNL parody of artificial intelligence in the next couple of years. Not sure if anyone over at OpenAI gets this (and yes, ChatGPT was one of the AI assistants that fabulously failed the Super Bowl and March Madness queries).
