Apple isn't behind on AI, it's looking ahead to the future of smartphones


Comments

  • Reply 21 of 33
    elijahg Posts: 2,759 member

    mattinoz said:
    elijahg said:
    This article carefully steps around the fact that this improvement will make zero difference to Apple's most visible and most deficient use of AI: Siri.

    Siri is the primary and by far most visible use of "AI" at Apple. People quite rightly have a problem with Siri being thick as a brick. Siri has numerous problems which are not going to be solved by improving the creation of useless avatars for a niche device no one yet owns outside of the developer space. This research specifically does nothing to help Siri for several reasons: 
    • Siri doesn't use contemporary "AI" like ChatGPT; Siri's "intelligence" comes from predefined queries in a basic lookup table, which is why it breaks if you don't ask it something in exactly the right way (see the sketch after this list).
    • Very few Siri queries are processed on-device. 
    • LLMs need huge datasets unavailable on-device so efficiency of on-device processing is mostly irrelevant. 
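    A toy sketch of that brittleness (purely hypothetical; Apple hasn't published how Siri's intent matching actually works, so this exact-match table is just an illustration):

    ```python
    # Hypothetical exact-match intent table; Siri's real pipeline is unpublished.
    INTENT_TABLE = {
        "set a timer for five minutes": ("timer.start", {"minutes": 5}),
        "what's the weather today": ("weather.today", {}),
    }

    def match_intent(utterance: str):
        # Trivial normalisation; any phrasing not listed verbatim falls through.
        return INTENT_TABLE.get(utterance.strip().lower())

    print(match_intent("Set a timer for five minutes"))  # ('timer.start', {'minutes': 5})
    print(match_intent("Give me a five-minute timer"))   # None: same meaning, no match
    ```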
    This article does nothing to convince anyone that Apple is not far, far behind the AI curve. Making a clockwork timepiece 10% more efficient is irrelevant when everyone else has long since moved to batteries. Apple needs exactly what this article claims it doesn't: a clone of ChatGPT. Siri was supposed to be conversational over a decade ago, when it was introduced on the iPhone 4S. It has arguably got worse since then.

    However, Apple's AI in other areas seems pretty good: Photos is good at recognising objects/people/scenes, for example, and that's on-device. But claiming that Apple isn't behind the AI curve when its most visible use of AI is a disaster is pure fanboyism.
    Does the article step around that, or does it just decline to engage in the obvious speculation and inference?

    Siri could shrink the lookup table by having an LLM (or a tight, small LM) on the device translate from the user's language into "SiriCode". Developers wouldn't need to localise at all; they'd just define intents for what their apps can do in "SiriCode" (much like what happens with Shortcuts).
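    A minimal sketch of the idea, with a keyword heuristic standing in for the small on-device model ("SiriCode", the intent names, and the registry here are all made up, not a real Apple API):

    ```python
    # Hypothetical "SiriCode" dispatch: a small on-device model emits a
    # canonical intent name plus arguments; apps register handlers for
    # intents, so no per-language lookup table is needed.
    from typing import Callable

    REGISTRY: dict[str, Callable[[dict], str]] = {}

    def register_intent(name: str):
        def wrap(fn):
            REGISTRY[name] = fn
            return fn
        return wrap

    @register_intent("shopping.add_item")
    def add_item(args: dict) -> str:
        return f"Added {args['item']} to your shopping list."

    def tiny_lm_translate(utterance: str) -> tuple[str, dict]:
        # Stand-in for the on-device LM; in reality this would be a learned
        # translation step, not a keyword heuristic.
        text = utterance.lower()
        if "shopping list" in text:
            item = text.split("add ", 1)[1].split(" to my", 1)[0]
            return "shopping.add_item", {"item": item}
        return "unknown", {}

    intent, args = tiny_lm_translate("Add tomato sauce to my shopping list")
    handler = REGISTRY.get(intent, lambda a: "Sorry, I didn't get that.")
    print(handler(args))  # Added tomato sauce to your shopping list.
    ```

    The point of the middle layer is that only the LM needs to know the user's language; the intent vocabulary stays fixed across locales.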

    It could actually be advantageous for them: they could start with a localised LM suited to a much more limited range of speech, but offer a far broader range of downloadable models per language or regional dialect, even common language combos, by allowing users to nominate all the languages they speak. Then a retrained or annotated version that picks up on each user's quirks in speech could be shared between that user's devices.

    Smaller, on-device ML lets Apple make things more personal. I'm not surprised they're looking at these sorts of developments, just as the article clearly shows they are.
    Well, the research specifically targets resource-constrained devices, i.e. mobile phones, whereas Siri's queries are almost all processed on a server with almost unlimited resources, so there's no speculation, really. But Siri could do as you say, and that would be a vast improvement.
    mattinoz, watto_cobra
  • Reply 22 of 33
    elijahg said:
    zeus423 said:
    Me to Siri: “Play My Way by Elvis Presley”

    Siri: “Okay, here’s Elvis Presley” as she plays the same wrong song every single time. 

    I’d be thrilled if Siri could just do the simple tasks it used to do just fine. 
    Something about Siri that has baffled me for years is how people can ask it the same thing and get different results. Every time I see an issue like yours on the forum, I try it myself. So far I've had the expected response every time I ask, trying to use the same wording. I find it very strange that this happens, year after year. What could it be that makes Siri reply one thing to one user and another thing to a different user asking the same thing?


    I've had Siri respond differently even when asking the same thing. For example, one that often trips it up is saying "Add tomato sauce to my shopping list". Sometimes I get "tomato" and "sauce" added, sometimes I get "tomato sauce". Maybe the inflection is different, but I doubt it.

    elijahg, williamlondon
  • Reply 23 of 33
    iOS 16 played the correct music fine, but ever since iOS 17+ I get the same wrong song every time. Siri on my 27” Intel iMac plays a different incorrect song, although I chalk that up to it being on Mojave. My HomePod Mini works great. The least expensive device works the best. Go figure.
    williamlondon, roundaboutnow
  • Reply 24 of 33
    danox Posts: 2,875 member

    Apple, the better path:

    1. The Apple Vision Pro with the R1 co-processor 

    2. Bigger screen iPad

    3. The M3 Ultra Studio Mac (with 256 gigs of UMA memory).

    https://multiplatform.ai/apple-emerges-as-the-preferred-choice-for-ai-developers-in-harnessing-large-scale-open-source-llms/

    https://om.co/2023/10/30/apple-launches-m3-chips-with-ai/

    watto_cobra
  • Reply 25 of 33
    blastdoor Posts: 3,308 member
    Yawn. It’s just optimizing something akin to virtual memory for AI. That’s fine, but it’s not a huge breakthrough, just a small innovation. 
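    The virtual-memory analogy in a rough sketch: memory-map the weight file and fault in only the rows a given step needs, so resident RAM stays far below total model size. (This is just the analogy; the paper's actual windowing and bundling techniques are more involved, and the file and sizes here are made up.)

    ```python
    import numpy as np

    # Write a stand-in weight matrix to disk once (float16, ~40 MB here).
    rows, cols = 20_000, 1_024
    w_out = np.lib.format.open_memmap(
        "weights.npy", mode="w+", dtype=np.float16, shape=(rows, cols))
    w_out.flush()

    # Map it read-only and touch only the "active" rows, e.g. the neurons a
    # sparsity predictor expects to fire; only those pages load from flash.
    w = np.load("weights.npy", mmap_mode="r")
    active = np.array([3, 42, 7_000, 19_999])
    chunk = np.asarray(w[active])  # materialises just 4 rows, not the matrix
    print(chunk.shape)             # (4, 1024)
    ```

    Whether you call that a small innovation or not, it is the same demand-paging trick operating systems have used for decades, applied to model weights.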

    Apple clearly is behind in LLMs. 

    But, I’d say it also probably doesn’t matter. All you need to catch up is a ton of money and competent management. Apple checks both those boxes. 

    Apple will likely excel at making LLMs practically useful and accessible, even if they lag on the core tech for a while. In time, they will pull even and then ahead on the tech.
    williamlondon, designr
  • Reply 26 of 33
    zeus423 said:
    elijahg said:
    zeus423 said:
    Me to Siri: “Play My Way by Elvis Presley”

    Siri: “Okay, here’s Elvis Presley” as she plays the same wrong song every single time. 

    I’d be thrilled if Siri could just do the simple tasks it used to do just fine. 
    Something about Siri that has baffled me for years is how people can ask it the same thing and get different results. Every time I see an issue like yours on the forum, I try it myself. So far I've had the expected response every time I ask, trying to use the same wording. I find it very strange that this happens, year after year. What could it be that makes Siri reply one thing to one user and another thing to a different user asking the same thing?


    I've had Siri respond differently even when asking the same thing. For example, one that often trips it up is saying "Add tomato sauce to my shopping list". Sometimes I get "tomato" and "sauce" added, sometimes I get "tomato sauce". Maybe the inflection is different, but I doubt it.

    No issues here on multiple attempts, either.
    Maybe turn Siri off and back on to reset it?
  • Reply 27 of 33
    nubus said:
    avon b7 said:
    I think it's fair to say that Apple is behind in this area. 

    Objectively, this year has been about ChatGPT style usage and Apple hasn't brought anything to market while others have. 

    So very true. Apple is behind on AI. Research papers are nice, but as Steve Jobs put it: real artists ship. Tim Cook recently said Apple is investing in generative AI, so clearly Apple is behind on that as other companies are shipping. If Tim Cook can say it... then why not AppleInsider?

    Siri is a mess, Apple can't match Copilot for code development, the iOS keyboard suggestions are nowhere near ChatGPT, and organizing files locally and on servers has been a manual chore for almost 40 years on the Mac with no AI to support us. Apple is doing very well on photos, and a lot of die space is reserved for the Neural Engine: we pay for ML.
    I'm waiting for the day that Microsoft gets ChatGPT to improve auto-correct in Outlook and other Office apps. The suggested typo corrections look like they are made without any regard to context. This would seem to be relatively straightforward to do. Sure, oftentimes the typo corrections do work, but making sure the words before and after are considered, so that the corrected word fits and makes sense, would make the success rate significantly higher. I don't even think a ChatGPT level of analysis would be required to do this. Of course, if I weren't such a shitty typist, I wouldn't care as much; I just need all the help I can get!
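    A toy sketch of that kind of context scoring, with a hand-rolled bigram count standing in for a real language model (Outlook's actual correction pipeline isn't public; the corpus and candidates here are made up):

    ```python
    from collections import Counter
    from itertools import pairwise  # Python 3.10+

    corpus = ("please review the attached report before the meeting "
              "we can discuss the report at the meeting tomorrow").split()
    bigrams = Counter(pairwise(corpus))

    def context_score(prev: str, cand: str, nxt: str) -> int:
        # How often the candidate actually appears next to these neighbours.
        return bigrams[(prev, cand)] + bigrams[(cand, nxt)]

    # Typo "reprot" between "attached" and "before"; in a real corrector the
    # candidates would come from an edit-distance generator.
    candidates = ["report", "resort", "repost"]
    best = max(candidates, key=lambda c: context_score("attached", c, "before"))
    print(best)  # report
    ```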
    muthuk_vanalingam
  • Reply 28 of 33
    avon b7 said:
    I think it's fair to say that Apple is behind in this area. 

    Objectively, this year has been about ChatGPT style usage and Apple hasn't brought anything to market while others have. 

    It is also recruiting for specific roles in AI. So far, most of the talk has been only that, talk. 

    Talking about ML, as they made a point of doing, is stating the obvious here. Who isn't using ML? 

    In this case of LLMs on resource-strapped devices, again, some manufacturers are already using them. 

    A Pangu LLM underpins Huawei's Celia voice assistant on its latest phones. 

    I believe Xiaomi is also using LLMs on some of its phones (although I don't know in which areas). 

    The notion of trying to do more with less is an industry constant. Research never stops in that area, and routers in particular have been a persistent research target, being ridiculously low on spare memory and CPU power. I remember, many years ago, doing some external work for the Early Bird project, where the entire goal was how to do efficient, real-time detection of worm signatures on data streams without impacting the performance of the router. 
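    Heavily simplified, the content-sifting idea from that line of work looked something like this (the published system used Rabin fingerprints and compact probabilistic counters to fit router memory budgets; exact substring counts stand in here):

    ```python
    from collections import Counter

    WINDOW, THRESHOLD = 8, 3
    counts = Counter()

    def sift(payload: bytes) -> set[bytes]:
        # Count every fixed-length substring; real systems hash into compact
        # sketches instead of storing the substrings themselves.
        flagged = set()
        for i in range(len(payload) - WINDOW + 1):
            chunk = payload[i:i + WINDOW]
            counts[chunk] += 1
            if counts[chunk] >= THRESHOLD:
                flagged.add(chunk)
        return flagged

    worm = b"XXEXPLOITXX"  # an invariant byte string the worm carries
    for pkt in (b"aaaa" + worm, b"bb" + worm + b"c", worm):
        hits = sift(pkt)
    print(sorted(hits))  # the worm's repeated substrings surface by packet 3
    ```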

    Now, AI is key to real-time detection of threats in network traffic and storage (ransomware in the case of storage, which is another resource-strapped area). 

    LLMs have to be run according to needs. In some cases there will be zero issues with carrying out tasks in the Cloud or at the Edge. In other cases/scenarios you might want them running more locally. Maybe even in your Earbuds (for voice recognition or Bone Voice ID purposes etc). 

    Or in your TV or even better across multiple devices at the same time. Resource pooling. 

    As usual, a terrible take. All we've seen are gimmicky chatbots that have just come out and which, despite the hoorah, are entirely pointless to me as a consumer. MS has limited search queries in its web product beta, or whatever it is. I wouldn't expect that to be rolled out into production iOS within the last 6 months. Get real. 
    ihatescreennames, williamlondon, danox, 13485
  • Reply 29 of 33
    avon b7 Posts: 7,703 member
    avon b7 said:
    I think it's fair to say that Apple is behind in this area. 

    Objectively, this year has been about ChatGPT style usage and Apple hasn't brought anything to market while others have. 

    It is also recruiting for specific roles in AI. So far, most of the talk has been only that, talk. 

    Talking about ML, as they made a point of doing, is stating the obvious here. Who isn't using ML? 

    In this case of LLMs on resource-strapped devices, again, some manufacturers are already using them. 

    A Pangu LLM underpins Huawei's Celia voice assistant on its latest phones. 

    I believe Xiaomi is also using LLMs on some of its phones (although I don't know in which areas). 

    The notion of trying to do more with less is an industry constant. Research never stops in that area, and routers in particular have been a persistent research target, being ridiculously low on spare memory and CPU power. I remember, many years ago, doing some external work for the Early Bird project, where the entire goal was how to do efficient, real-time detection of worm signatures on data streams without impacting the performance of the router. 

    Now, AI is key to real-time detection of threats in network traffic and storage (ransomware in the case of storage, which is another resource-strapped area). 

    LLMs have to be run according to needs. In some cases there will be zero issues with carrying out tasks in the Cloud or at the Edge. In other cases/scenarios you might want them running more locally. Maybe even in your Earbuds (for voice recognition or Bone Voice ID purposes etc). 

    Or in your TV or even better across multiple devices at the same time. Resource pooling. 

    As usual, a terrible take. All we've seen are gimmicky chatbots that have just come out and which, despite the hoorah, are entirely pointless to me as a consumer. MS has limited search queries in its web product beta, or whatever it is. I wouldn't expect that to be rolled out into production iOS within the last 6 months. Get real. 
    Have you used them?

    Have you read the links I posted? 

    They do have gimmicky aspects, they can provide incorrect information, and they do tend to have telltale traits in their responses, etc., but where have you seen people say it's all worthless? 

    You haven't. The opposite is in fact true. 

    It isn't pointless to Joe Consumer, and that has been proven over and over. Of course, YMMV.

    People are struggling to tell the difference between human responses and generative responses in certain situations. 

    Here’s an example from three weeks ago. 

    I was preparing an expert in UX and visualisation for an important job interview in English. 

    She asked ChatGPT to create a list of potential questions and answers. I was very surprised at the level it achieved in the text. She then asked it to be more specific in the answers, and the results were frankly amazing. All I had to do was eliminate those telltale traits and tone it down a bit. 

    In an English-speaking but non-native setting, most of what I changed could have been left as is. 

    That's only the ChatGPT side of things and the use cases are endless (even taking into account the caveats). 

    The LLM side of things is already widely used across many industries and there is nothing gimmicky about that. 

    Even three years ago, Huawei trained one of its image recognition models to identify COVID-19 in lung scans, automating a time-consuming process and dramatically reducing time to diagnosis. 

    All Apple was able to do was design a face mask.

    Since then over 30 Pangu models have come to market. Each one with the potential to dramatically improve on existing technologies. 

    From the link I posted in this very thread:

    "In meteorology, the Pangu Meteorology Model (or Pangu-Weather) is the first AI model to have surpassed state-of-the-art numerical weather prediction (NWP) methods in terms of accuracy. The prediction speed is also several orders of magnitude faster. In the past, predicting the trajectory of a typhoon over 10 days took 4 to 5 hours of simulation on a high-performance cluster of 3,000 servers. Now, the Pangu model can do it in 10 seconds on a single GPU of a single server, and with more accurate results."

    Gimmicky? Pointless? 

    Google recently released their own solution for climate too. 



    elijahg, muthuk_vanalingam
  • Reply 30 of 33
    danvm Posts: 1,409 member
    avon b7 said:
    I think it's fair to say that Apple is behind in this area. 

    Objectively, this year has been about ChatGPT style usage and Apple hasn't brought anything to market while others have. 

    It is also recruiting for specific roles in AI. So far, most of the talk has been only that, talk. 

    Talking about ML, as they made a point of doing, is stating the obvious here. Who isn't using ML? 

    In this case of LLMs on resource-strapped devices, again, some manufacturers are already using them. 

    A Pangu LLM underpins Huawei's Celia voice assistant on its latest phones. 

    I believe Xiaomi is also using LLMs on some of its phones (although I don't know in which areas). 

    The notion of trying to do more with less is an industry constant. Research never stops in that area, and routers in particular have been a persistent research target, being ridiculously low on spare memory and CPU power. I remember, many years ago, doing some external work for the Early Bird project, where the entire goal was how to do efficient, real-time detection of worm signatures on data streams without impacting the performance of the router. 

    Now, AI is key to real-time detection of threats in network traffic and storage (ransomware in the case of storage, which is another resource-strapped area). 

    LLMs have to be run according to needs. In some cases there will be zero issues with carrying out tasks in the Cloud or at the Edge. In other cases/scenarios you might want them running more locally. Maybe even in your Earbuds (for voice recognition or Bone Voice ID purposes etc). 

    Or in your TV or even better across multiple devices at the same time. Resource pooling. 

    As usual, a terrible take. All we've seen are gimmicky chatbots that have just come out and which, despite the hoorah, are entirely pointless to me as a consumer. MS has limited search queries in its web product beta, or whatever it is. I wouldn't expect that to be rolled out into production iOS within the last 6 months. Get real. 
    Maybe you should do a little research about MS Copilot and how they are integrating it into MS 365 apps, GitHub, and even Azure. If you ask me, it's far more than just a "gimmicky chatbot", and ahead of what Apple has today. 
    edited December 2023
    elijahg, muthuk_vanalingam
  • Reply 31 of 33
    Try talking to Apple employees: they are behind, and it's a shitstorm in the teams. They're not spending millions of dollars per day for nothing. Lots of projects have been tabled while several teams scramble to make the next iOS release feel meaningful. I mean, Eddy, Greg, and John are dropping stuff left and right to get AI into everything now. I wouldn't want to be on their teams. That said, I'm not looking for generative AI from Apple, I'm looking for something really useful in daily life. Seeing some of the papers coming out, I like the direction it's going. Apple going multimodal already is quite a feat. Siri will obviously be much improved; I can't wait.
    elijahg, williamlondon
  • Reply 32 of 33
    13485 Posts: 347 member
    I find it amusing that so many compare recent AI implementations to Siri, which is not the same thing, as 9secondkox has said.

    Further, it is a truism that long-term solutions often require long-term research, and the fact that most here think Apple is "behind" in a feature that most users don't yet have a daily use for means nothing. 

    In addition, the assumption that Apple's AI will be based on "Siri plus LLM" could be completely wrong. The tool, whatever it's called, might be something entirely different. So this constant diminution of Siri as it exists now is irrelevant, however accurate it might be.
    mattinoz