Apple's iOS 18 AI will be on-device and privacy-preserving, not server-side


Comments

  • Reply 21 of 28
    tipoo Posts: 1,144 member
    On-device processes could help eliminate certain controversies found with server-side AI tools. For example, these tools have been known to hallucinate, meaning they make up information confidently. 
     

    Er, the location of processing has nothing to do with whether LLMs hallucinate and confidently give wrong answers.
  • Reply 22 of 28
    Wesley Hilliard Posts: 204 member, administrator, moderator, editor
    What Apple is advertising is nothing more than what Google Pixel phones already have, which is dedicated machine learning hardware and ML models that run on the phone to do things like real-time transcription, image recognition, etc. 

    Nothing new. Just marketing packaged by Apple as if it is a "breakthrough" feature. 

    YAWN!!! 
    All of these features existed on iPhone before Google even invented a Pixel. On-device ML models have always existed on iPhone. The Neural Engine enhanced how they worked with dedicated hardware in 2017.

    Apple is anything but late in this field. lol
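    (For illustration: a minimal Swift sketch, using the Vision framework's built-in on-device classifier, of the kind of local inference that has shipped on iPhone for years. The function name is our own; everything runs on the device.)

```swift
import UIKit
import Vision

// Classify an image entirely on-device with Vision's built-in model.
// No network call is involved; results never leave the phone.
func classifyOnDevice(_ image: UIImage) {
    guard let cgImage = image.cgImage else { return }
    let request = VNClassifyImageRequest { request, _ in
        guard let results = request.results as? [VNClassificationObservation] else { return }
        for observation in results.prefix(3) {
            print(observation.identifier, observation.confidence)
        }
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```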
  • Reply 23 of 28
    entropys Posts: 4,189 member
    Indeed. AI is the new ML.

    Yeah, more than that, but still.
    I’ve been taking photos of plants for ages and the iPhone tells me what it is, pretty well actually.
    edited April 16
  • Reply 24 of 28
    A few thoughts:

    1) Apple already has the Neural Engine on chip and very few developers are utilizing it. So I think for WWDC they will focus on educating, expanding, and upgrading it (see the sketch after this list).  
    2) If Apple is now controlling its chips, they have more control over prices.
    3) Imagine a super-large chip. The problem is defects. Apple already makes cores that can be bypassed due to defects. Why not do the same with RAM and storage? If they segmented the chip into many small regions of RAM and CPU that can each be bypassed, they could produce a giant chip with high yield. This architecture is not popular for CPUs but is fine for graphics and AI.
    4) I was reading online that the current Neural Engine is 16-bit. I have a hunch this may change in the future.
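    (For point 1, a minimal Swift sketch of how a developer opts a Core ML model onto the Neural Engine today; "SomeModel" is a placeholder, not a real class.)

```swift
import CoreML

// Ask Core ML to schedule eligible layers on the Neural Engine (ANE).
// The ANE is widely reported to favor 16-bit (FP16) weights, matching point 4.
let config = MLModelConfiguration()
config.computeUnits = .all  // CPU, GPU, and ANE; .cpuAndNeuralEngine is also available
// SomeModel is a placeholder for any compiled .mlmodel class:
// let model = try SomeModel(configuration: config)
```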
  • Reply 25 of 28
    kellie Posts: 51 member
    The AI I want to see on a Mac or an iPhone is a configuration assistant that would analyze all of the configuration parameters, ask me in plain English how I would like to use my system (my privacy preferences, my risk tolerance, etc.), and then make suggestions for recommended changes. Or being able to instruct Siri about configuration changes I’d like to make, and it goes and makes the changes without me having to plod through all sorts of menus. 
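    (A purely hypothetical Swift sketch of what kellie describes; no such Apple API exists, and every type here is an assumption.)

```swift
// Hypothetical shape of a settings assistant: suggest changes in plain
// English, apply only what the user approves. Nothing here is a real API.
struct SettingChange {
    let keyPath: String    // e.g. "privacy.location.preciseLocation"
    let newValue: String
    let rationale: String  // plain-English explanation shown before applying
}

protocol ConfigAssistant {
    // Analyze current settings plus the user's stated preferences and
    // risk tolerance, and return recommended changes.
    func suggestChanges(for request: String) -> [SettingChange]
    // Apply only the changes the user explicitly approved.
    func apply(_ approved: [SettingChange])
}
```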
  • Reply 26 of 28
    canukstorm Posts: 2,717 member
    kellie said:
    The AI I want to see on a Mac or an iPhone is a configuration assistant that would analyze all of the configuration parameters, ask me in plain English how I would like to use my system (my privacy preferences, my risk tolerance, etc.), and then make suggestions for recommended changes. Or being able to instruct Siri about configuration changes I’d like to make, and it goes and makes the changes without me having to plod through all sorts of menus. 
    This would actually be very cool and useful.
  • Reply 27 of 28
    kellie said:
    The AI I want to see on a Mac or an iPhone is a configuration assistant that would analyze all of the configuration parameters, ask me in plain English how I would like to use my system (my privacy preferences, my risk tolerance, etc.), and then make suggestions for recommended changes. Or being able to instruct Siri about configuration changes I’d like to make, and it goes and makes the changes without me having to plod through all sorts of menus. 
    This would actually be very cool and useful.
    “Hey Siri, please reconfigure my iPhone settings as I might prefer.”

    I found these results for you.
  • Reply 28 of 28
    danox Posts: 2,948 member
    ctt_zh said:
    danox said:
    gatorguy said:
    avon b7 said:
    Sounds like a more marketing focused play to push the privacy and latency angles. Assuming the rumour is true. 

    That's certainly one way to do it, but it isn't without its own issues: limited scope, and how many older iPhones can make decent use of it, for example. 

    We'll have to wait and see how they announce it because other options from multiple vendors will remain open to iOS users anyway (perhaps some will be monetised through usage or subscription options).

    There will also need to be a range of models to play off. 

    NLU and NLG are obviously two key areas. Image-related AI will probably require an improvement too.

    Hallucination is a property of AI models and won't go away in that sense. Hallucination is even desired in many scenarios. 

    The problem is that there are obviously scenarios where you would prefer not to have it. I believe that can be handled quite well, but it normally requires the model to sit within another model, etc., upping resource usage (a rough sketch of that pattern follows after this post). 
    It's much the same way the most recent Pixel phones do it. Most of the (Generative) AI functions happen on device, a privacy-friendly approach. Computationally intensive features and web search are processed "in the cloud" for obvious reasons.  I think Apple will be following the same path, but be better at marketing the privacy of the on-device portions of the system than Google is. 

    Sure, Apple will be different because they have the superior mobile/desktop/laptop hardware and OS to make it different. The 3.5-4 hour Video Boost turnaround and other AI miscues exist because Google's hardware is weak and Samsung has no real control over what Google feeds them OS-wise. The Tensor is easily more than five years behind Apple's hardware; they have no choice but to use a phone-home solution.

    https://www.androidpolice.com/video-boost-pixel-8-pro-review/ Phone home to the cloud servers and wait 3.5-4 hours for an AI/video boomerang.
    Key phrase from the article: "It looks like the initial batch of features will function without the need for an internet connection." If Apple can get its AI servers remotely near the capabilities of Google Cloud / Azure, you could well see more intensive/complex features being processed in the cloud by Apple (folks will indeed hope this is the case if the initial on-device features are a bit dull compared to what the competition has offered for a while now). 

    Regarding the Tensor chip, it's all about the software and models in combination with the chip. You really need to think about this differently; you're looking at AI solely in terms of the SoC, like it's 2015... 
    The Tensor is extremely weak and years behind Apple's SoCs, so any solution by Google and Meta will be in the cloud. Why? They have no choice, and vacuuming up data from users to sell to their real customers is how they make money.
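    (avon b7's "model to sit within another model" remark above describes a generate-then-verify pattern. A purely hypothetical Swift sketch of that idea; `TextModel` and both roles are assumptions, not any vendor's API.)

```swift
// Hypothetical generate-then-verify wrapper: a second model grades the first
// model's draft before it is shown, trading extra compute for fewer hallucinations.
protocol TextModel {
    func respond(to prompt: String) -> String
}

struct GuardedAssistant {
    let generator: any TextModel  // produces the draft answer (may hallucinate)
    let verifier: any TextModel   // judges the draft against the question

    func answer(_ prompt: String) -> String {
        let draft = generator.respond(to: prompt)
        // Running a second model roughly doubles inference cost -- the
        // "upping resource usage" trade-off avon b7 mentions.
        let verdict = verifier.respond(to: """
            Does this answer follow from the question without invented facts? \
            Reply YES or NO.
            Q: \(prompt)
            A: \(draft)
            """)
        return verdict.hasPrefix("YES") ? draft : "I'm not confident enough to answer that."
    }
}
```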