Apple demonstrates its commitment to AI with new open source code release

Posted:
in iOS

Apple isn't standing still on AI and machine learning -- it has released a free and open-source framework for other AI developers to build on with Apple Silicon.

Siri could benefit from Apple sharing its research with other developers



Even though Apple has publicly been developing Artificial Intelligence tools for years, tools that it is already implementing in the iPhone, the company is regularly perceived to be behind the rest of the industry in AI. That's the power of semantics -- because Apple calls it Machine Learning instead of AI, it is assumed not to be doing enough in AI.

That's even after Tim Cook said that he sees AI as a fundamental technology, and revealed how long Apple has been working on it.

Apple has decided to take its work to a very specific audience. It's giving away a deep learning (DL) framework that its Machine Learning team has developed.

It's meant to be tested, used, and improved on by other developer groups, whether they say they're working on Machine Learning or on AI.

Just in time for the holidays, we are releasing some new software today from Apple machine learning research.

MLX is an efficient machine learning framework specifically designed for Apple silicon (i.e. your laptop!)

Code: https://t.co/Kbis7IrP80
Docs: https://t.co/CUQb80HGut

-- Awni Hannun (@awnihannun)



"Just in time for the holidays, we are releasing some new software today from Apple machine learning research," says the announcement on Twitter/X by Awni Hannun, part of Apple's Machine Learning Research group.

The software, or framework, is called MLX, which Hannun says "is an efficient machine learning framework specifically designed for Apple silicon (i.e. your laptop!)."

"MLX is designed by machine learning researchers for machine learning researchers," says Apple in its MLX documentation. "The framework is intended to be user-friendly, but still efficient to train and deploy models."
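Apple's MLX documentation describes a NumPy-style array API with lazy evaluation, where computation is deferred until a result is actually needed. As a rough illustration of that lazy-evaluation idea -- a toy sketch in plain Python, not MLX's actual code or API -- consider:

```python
# Toy illustration of lazy evaluation, the style of API MLX exposes.
# This is plain Python, not MLX itself: operations build a graph of
# deferred computations, and nothing runs until .eval() is called.

class LazyArray:
    def __init__(self, data=None, op=None, parents=()):
        self._data = data      # concrete values, once computed
        self._op = op          # deferred operation, if any
        self._parents = parents

    def __add__(self, other):
        return LazyArray(op=lambda a, b: [x + y for x, y in zip(a, b)],
                         parents=(self, other))

    def __mul__(self, other):
        return LazyArray(op=lambda a, b: [x * y for x, y in zip(a, b)],
                         parents=(self, other))

    def eval(self):
        # Compute parents first, then apply this node's operation.
        if self._data is None:
            self._data = self._op(*(p.eval() for p in self._parents))
        return self._data

a = LazyArray([1, 2, 3])
b = LazyArray([4, 5, 6])
c = a + b        # nothing computed yet
d = c * c        # still nothing
print(d.eval())  # forces the whole graph: [25, 49, 81]
```

Deferring work like this lets a real framework fuse operations and schedule them on the best available device before anything executes.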

As notable as what this framework gives users, though, is how it is being presented to the world. There was no press release, no announcement, and certainly no WWDC keynote presentation.

Instead, Apple is contributing to the open source development of AI tools, where developers can see just how the company is far from behind.

"We intend to make it easy for researchers to extend and improve MLX with the goal of quickly exploring new ideas," continued Apple.

The full source code is available on GitHub.

Read on AppleInsider

Comments

  • Reply 1 of 18
    I am so stoked that they did this. AI talent wants open source publishing and this shows Apple being among the leaders in the field.

    A taste of positive things to come, methinks.
  • Reply 2 of 18
    avon b7 Posts: 7,480member
    A good move. Somewhat late but better late than never.

    I can see a parallel with Huawei (and others) here when in late 2018 Huawei announced the Ascend line of chips and supporting frameworks and made it open source. 

    They have Mindspore, CANN, Da Vinci Pangu etc and all the hardware to go with it. 

    Ascend starts with Nano (for things like earbuds) and scales up to cluster systems with thousands of cores.

    The question must be then if Apple intends to take that step too and produce (and possibly sell/lease?) a system similar to the Ascend 910 at some point?

    It almost seems inevitable. 
    edited December 2023
  • Reply 3 of 18
    mpantone Posts: 2,022member
    avon b7 said:
    A good move. Somewhat late but better late than never. […] The question must be then if Apple intends to take that step too and produce (and possibly sell/lease?) a system similar to the Ascend 910 at some point? It almost seems inevitable.
    It's a marathon not a sprint and there's no finish line anyhow.

    Machine learning/AI is still very much in its infancy, there is plenty of space for multiple players in multiple segments of the market, from mobile devices all the way to supercomputing.

    My guess is that Apple will not bring their ML technology as a standalone product to market. They will rather use it to eventually provide broad benefit to their customers and offer differentiation vis-a-vis the competition. They are not interested in selling rackmount devices to stuff into datacenters.

    I could conceivably see them rolling their own custom ML silicon for their own servers, optimized for their applications and providing a performance-per-watt advantage over the competition. With the Apple Silicon M3 chip family, they made a major shift to a new GPU architecture that might be a harbinger of this.

    Apple does not want to put their chip designs in their competitors' hands.

    Remember that Apple's fundamental philosophy is that they are essentially a software company whose code runs best on their proprietary hardware platforms. They do not gain a competitive advantage by shipping trays of Apple Silicon SoCs from TSMC to others.
  • Reply 4 of 18
    avon b7 said:
    A good move. Somewhat late but better late than never. […] The question must be then if Apple intends to take that step too and produce (and possibly sell/lease?) a system similar to the Ascend 910 at some point? It almost seems inevitable.
    Apple doesn’t develop commodity hardware for everyone else to use. They develop hardware and software that coalesces into a tangible, user product for its own customers. 

    A framework or standard is another thing though as Apple has historically either developed themselves or partnered to develop technologies such as FireWire, QuickTime, thunderbolt, etc. Opening up some of their machine learning (AI) framework seems to be this. It benefits Apple, but also the computing landscape as a whole. 

    It’s hilarious. While everyone is focused on AI nerd stuff, Apple has already been shipping hyper useful AI in its products. Now if they can just hurry up and finish the new and improved Siri already…
  • Reply 5 of 18
    avon b7 Posts: 7,480member
    avon b7 said:
    […] The question must be then if Apple intends to take that step too and produce (and possibly sell/lease?) a system similar to the Ascend 910 at some point? It almost seems inevitable.
    Apple doesn’t develop commodity hardware for everyone else to use. […] While everyone is focused on AI nerd stuff, Apple has already been shipping hyper useful AI in its products. Now if they can just hurry up and finish the new and improved Siri already…
    There is nothing 'commodity' about an Ascend 910 which costs millions of dollars when used in something like an Atlas system. 


    AFAIK, Apple hasn't shipped anything that others haven't already shipped and yes, Siri is a shortcoming but it's clear that it wasn't a priority for years or else it wouldn't need major improvements.

    There are probably more important AI related goals at the moment. 
    edited December 2023
  • Reply 6 of 18
    avon b7 Posts: 7,480member
    mpantone said:
    It's a marathon not a sprint and there's no finish line anyhow. […] My guess is that Apple will not bring their ML technology as a standalone product to market. […] They are not interested in selling rackmount devices to stuff into datacenters.
    There's a lot going on recently that does indicate they are racing to improve things, whether they call it AI or ML or something else. 

    There would be no issue in selling cluster systems to third parties if the underlying software is open. 

    It would be exactly the same model as they have now. Selling hardware to end users. The only difference would be that the systems would be built to order for specific use cases and very, very expensive. Leasing time is also an option (another 'service' branch). 

    Apple Silicon could theoretically find its way into powerful AI inference fields if they are working on the GPU and interconnect side of things. 

    Maybe not right now, but I wouldn't rule it out. 


  • Reply 7 of 18
    danox Posts: 2,671member
    avon b7 said:
    […] Apple Silicon could theoretically find its way into powerful AI inference fields if they are working on the GPU and interconnect side of things. Maybe not right now, but I wouldn't rule it out.


    Apple isn't behind. With an SoC with UMA, a Mac Studio (M1 or M2 Ultra), a MacBook Pro M3, or the upcoming M3 Ultra with maxed-out memory is proving to be a very good machine for running local large language models. What is interesting is that Apple has, up to this point, not taken advantage of the possible server opportunities (that big block of addressable UMA seems to be very usable).

    https://om.co/2023/10/30/apple-launches-m3-chips-with-ai/  A very good article by Om (he talks about the M3, but since the intro of Apple Silicon most of the elements have quietly already been there in Apple software and hardware).

    https://news.ycombinator.com/item?id=37846387   (A discussion on Apple Silicon UMA and AI models, which is turning into a big deal.)

    https://www.apple.com/newsroom/2023/10/apple-unveils-m3-m3-pro-and-m3-max-the-most-advanced-chips-for-a-personal-computer/

    https://creativestrategies.com/apple-silicon-and-the-mac-in-the-age-of-ai/ (Ben Bajarin running a few tests)

    https://multiplatform.ai/apple-emerges-as-the-preferred-choice-for-ai-developers-in-harnessing-large-scale-open-source-llms/

    There is no theory about it: Apple is finding its way into AI inference fields because of that UMA memory.
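The unified-memory point can be made concrete with some back-of-the-envelope arithmetic: a model's weights need roughly (parameter count × bytes per parameter) of memory, which is why machines with large UMA pools are attractive for local inference. The parameter counts and precisions below are illustrative assumptions, not Apple figures:

```python
# Rough memory footprint of LLM weights alone (ignores activations,
# KV cache, and runtime overhead). Model sizes are illustrative examples.
def weights_gib(params_billion, bytes_per_param):
    """GiB needed to hold the weights at a given precision."""
    return params_billion * 1e9 * bytes_per_param / 2**30

for params in (7, 70):                      # example model sizes, in billions
    for label, nbytes in (("fp16", 2), ("int4", 0.5)):
        print(f"{params}B @ {label}: {weights_gib(params, nbytes):6.1f} GiB")
```

A 70B-parameter model at 16-bit precision needs on the order of 130 GiB for weights alone -- beyond any single consumer GPU, but within reach of a Mac with a large unified-memory pool.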
    edited December 2023
  • Reply 8 of 18
    avon b7 said:
    […] They have Mindspore, CANN, Da Vinci Pangu etc and all the hardware to go with it. […]
    So many words I’ve never heard before … Mindspore, CANN, Da Vinci Pangu, Huawei.
  • Reply 9 of 18
    mpantone Posts: 2,022member
    avon b7 said:
    […] Apple Silicon could theoretically find its way into powerful AI inference fields if they are working on the GPU and interconnect side of things. Maybe not right now, but I wouldn't rule it out.


    It's important to remember that Apple has been selling hardware with machine learning cores for over six years now, starting with the A11 SoC in the iPhone X and the iPhone 8 series (Fall 2017 releases); Apple calls those cores the "Neural Engine" (a name it has continued to use).

    This is around the same time Nvidia debuted its Tensor cores, first in the datacenter Volta GPUs and then in the consumer Turing generation (the GeForce RTX 20 Series graphics cards).

    I don't see Apple marketing a standalone ML product anytime in the near future. General purpose CPU cores are still better for many tasks. Apple doesn't sell things based on specs, they sell holistic experiences.

    And to Joe Consumer it really doesn't matter whether it's the traditional CPU cores, graphics cores or the ML cores that do anything, whether it's sending a text, taking a photo, or even just unlocking your phone with Face ID.

    Remember that silicon differentiation came about because no one type of transistor is perfect for every single task. That's why there are many different types of them now in Apple's devices. 

    Apple's current business model doesn't pursue building expensive one-off custom solutions for enterprise customers. Of course they could change some day, but they don't seem particularly inclined; that would probably be better done by divesting the consumer business or spinning off an enterprise-focused one.

    Heck, if anything, Apple has been drifting farther and farther away from that. They canned their Xserve product line a long time ago and neglected the Mac Pro for years. Today's Apple Silicon-powered Mac Pro is really limited to a handful of use cases. Even on the software side they've pared down their pro-focused offerings (RIP Aperture). And it's not like they're selling software that competes with the Oracle RDBMS.

    Right now there is zero evidence that Apple wants to compete in the (lucrative) datacenter market. Could they? Yes. Will they? Likelihood is probably less than 0.1% in the next five years. A hundred years from now? Maybe.
    edited December 2023
  • Reply 10 of 18
    "because Apple calls it Machine Learning instead of AI"

    Machine learning is a minuscule sub-field of the field of AI! It so happens that the speed of devices affordable to consumers has made ML feasible in recent years, so it is currently making all the headlines. Neural networks, upon which ML runs, have been around for many decades. Their uses and limitations are well understood, but that doesn't stop billion-dollar valuations of startups claiming a new 'AI'-based application. 

    The best ML app will never approach the best equivalent human behaviour, even with learning supervised by the best human expertise. Most ML is not supervised so ML apps are fast, impressive at the moment but they will never be 'intelligent'. The real potential of AI is in evolutionary algorithms (a.k.a. artificial evolution) whose results can exceed those of humans. And to think I was 'growing' neural nets within an evolutionary algorithm almost 30 years ago and no one, not even in the field of computer science, was remotely interested :-( 
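For readers unfamiliar with the term, the evolutionary-algorithm idea the commenter mentions can be sketched in a few lines. This is a toy (1+1) evolutionary algorithm on a bitstring, purely illustrative and not tied to any Apple or MLX code:

```python
import random

# Toy (1+1) evolutionary algorithm: evolve a bitstring toward the
# all-ones target. Fitness is the number of 1s; each generation
# flips each bit with probability 1/n and keeps the child if it is
# at least as fit as the parent (elitist selection).
def one_plus_one_ea(n=20, generations=2000, seed=1):
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(generations):
        child = [bit ^ (rng.random() < 1.0 / n) for bit in parent]
        if sum(child) >= sum(parent):
            parent = child
    return parent

best = one_plus_one_ea()
print(f"{sum(best)} of 20 bits set after evolution")
```

No gradient or supervision is involved: random variation plus selection alone drives the population toward higher fitness, which is the commenter's point about evolving solutions rather than training them.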
  • Reply 11 of 18
    avon b7 Posts: 7,480member
    mpantone said:
    […] Apple's current business model doesn't pursue building expensive one off custom solutions to enterprise customers. […] Right now there is zero evidence that Apple wants to compete in the (lucrative) datacenter market.
    Agreed on the current status but my point was forward looking, based on what is happening (and had happened) industry wide. All speculation on my part of course. 

    If we take health as just one area of interest, clearly there are good reasons to use huge cluster systems to crunch massive datasets. 

    Then there are the GPT style situations. Google's Gemini is making waves now. 

    Then there is object recognition for autonomous driving. 

    Natural speech recognition etc. 

    Those elements alone require very expensive hardware. 

    If Apple is open sourcing its AI frameworks it looks as if something bigger could be coming down the pipe. Especially when the steps of others are taken into account. It will depend to a degree if the frameworks are for inference etc. 

    There are rumours that Apple has decided to spend a pretty sum on specific third party hardware for these tasks. Nvidia, I would imagine. Or a tie up with Google? 

    They might not want to get into the more scientific or industrial side of things themselves (climate models, pharma, mining, logistics, smart manufacturing etc) but if they open source the frameworks anything is possible. 

    Also, if the plan is to offer a full stack solution (I don't know if that is the case) then it would be logical to think it would include more than their current hardware/silicon.

    During the pandemic Huawei was able to train its systems to recognise Covid infection in lung scans, which were uploaded to the cloud over direct 5G links from hospitals, greatly reducing diagnosis times. All Apple could muster were some face masks. 

    Apple's neural network efforts in its 2017 chips were actually not up to much when compared to what was being done with NPUs on chips like the Kirin 970. The focus seemed to be mainly on depth sensing and photography.

    It has improved over the years, but there has been very little on show in the wider AI field, which is where all the buzz (rightly or wrongly) has been lately. It just looks like Apple is 'reacting' here (job listings, rumours of increased spending on AI-related hardware, no real competing solutions, etc). It has no alternative but to make statements about its efforts, but in terms of competing equivalent solutions to what's been in the news for most of the year, very little has appeared. 

    This open source offering may be a sign of a more concerted effort to enter the field. If that involves producing its own homegrown dedicated hardware solutions for use with the software, it would make sense, so at the moment I have a hunch that that is where we will see a new hardware branch. 



  • Reply 12 of 18
    mpantone said:
    avon b7 said:
    mpantone said:
    avon b7 said:
    A good move. Somewhat late but better late than never.

    I can see a parallel with Huawei (and others) here when in late 2018 Huawei announced the Ascend line of chips and supporting frameworks and made it open source. 

    They have Mindspore, CANN, Da Vinci Pangu etc and all the hardware to go with it. 

    Ascend starts with Nano (for things like earbuds) and scales up to cluster systems with thousands of cores.

    The question must be then if Apple intends to take that step too and produce (and possibly sell/lease?) a system similar to the Ascend 910 at some point?

    It almost seems inevitable. 
    It's a marathon not a sprint and there's no finish line anyhow.

    Machine learning/AI is still very much in its infancy, there is plenty of space for multiple players in multiple segments of the market, from mobile devices all the way to supercomputing.

    My guess is that Apple will not bring their ML technology as a standalone product to market. They will rather use it to eventually provide broad benefit to their customers and offer differentiation vis-a-vis the competition. They are not interested in selling rackmount devices to stuff into datacenters.

    I could conceivably see them rolling their own custom ML silicon for their own servers, optimized for their applications and providing a performance-per-watt advantage over the competition. With the Apple Silicon M3 chip family, they made a major shift to a new GPU architecture that might be a harbinger of this.

    Apple does not want to put their chip designs in their competitors' hands.

    Remember that Apple's fundamental philosophy is that they are essentially a software company whose code runs best on their proprietary hardware platforms. They do not gain a competitive advantage by shipping trays of Apple Silicon SoCs from TSMC to others.
    There's a lot going on recently that does indicate they are racing to improve things. Whether they call it AI or ML or something else. 

    There would be no issue in selling cluster systems to third parties if the underlying software is open. 

    It would be exactly the same model as they have now. Selling hardware to end users. The only difference would be that the systems would be built to order for specific use cases and very, very expensive. Leasing time is also an option (another 'service' branch). 

    Apple Silicon could theoretically find its way into powerful AI inference fields if they are working on the GPU and interconnect side of things. 

    Maybe not right now, but I wouldn't rule it out. 


    It's important to remember that Apple has been selling hardware with machine learning cores for over six years now, the A11 SoC in the iPhone X and the iPhone 8 series (Fall 2017 releases), referred to as the "Neural Engine" (a name they have continued to use).

    This is around the same time Nvidia debuted their Tensor cores in their Turing GPUs (included in the GeForce RTX 20 Series graphics card generation).

    I don't see Apple marketing a standalone ML product anytime in the near future. General purpose CPU cores are still better for many tasks. Apple doesn't sell things based on specs, they sell holistic experiences.

    And to Joe Consumer it really doesn't matter whether it's the traditional CPU cores, the graphics cores, or the ML cores doing the work, whether that's sending a text, taking a photo, or just unlocking your phone with Face ID.

    Remember that silicon differentiation came about because no one type of processing core is perfect for every single task. That's why there are so many different kinds of cores in Apple's devices now. 

    Apple's current business model doesn't involve building expensive one-off custom solutions for enterprise customers. Of course they could change some day, but they don't seem particularly inclined. If they did, they'd probably be better off spinning off an enterprise-focused business.

    Heck, if anything, Apple has been drifting farther and farther away from that. They canned the Xserve product line a long time ago and neglected the Mac Pro for years. Today's Apple Silicon-powered Mac Pro really limits itself to a handful of use cases. Even on the software side they've pared down their pro-focused offerings (RIP Aperture). And it's not like they're selling software that competes with the Oracle RDBMS.

    Right now there is zero evidence that Apple wants to compete in the (lucrative) datacenter market. Could they? Yes. Will they? Likelihood is probably less than 0.1% in the next five years. A hundred years from now? Maybe.
    Great post. 
  • Reply 13 of 18
    gatorguy Posts: 24,084 member
    avon b7 said: There are rumours that Apple has decided to spend a pretty sum on specific third party hardware for these tasks. Nvidia, I would imagine. Or a tie up with Google? 
    AMD...

    Microsoft will be using the new AMD AI chip rather than Nvidia's, and I think Facebook/Meta has decided to go AMD too. Likely for cost reasons, IMO.
    Google uses their own chip design which may (or may not) exceed Nvidia's performance. 
    edited December 2023
  • Reply 14 of 18
    I think the view that Apple may develop dedicated server-side ML/AI accelerators for their own internal use with iCloud is right on the money, and on trend with what the other hyperscalers have been doing or are doing. Services is an increasingly large proportion of overall gross margin, and a reason why flat/sluggish device sales do not dramatically affect earnings. Apple will have the margins and the incentive to go all in on server-side AI for internal use.

    Generative AI is here to stay, and we are seeing the very early days of adoption. What is clear is that it will be an incredible eater of compute. Anyone wanting to be in the services game going forward will have to spend a lot, so it makes perfect business sense for Apple to build in house.

    Nvidia has held the server-side generative AI market captive for a while; AMD is now providing some competition and, more importantly, more supply. Nvidia has been supply-constrained for a long time.

    Apple's privacy ethos will likely continue to push model inference onto end-user devices, so that as little as possible runs over the wire back to Apple. How this affects their model development I do not know. We may see increasing self-tuning of models on device so that your local context can be used effectively. This likely means a nice client-chip/server-chip co-development synergy, and maybe interesting opportunities for optimisation.

    Training of models and model development is super heavy lifting, so the Mac Pro will likely evolve into an AI beast that runs as much as possible locally rather than phoning home to cloud-based services. I cannot see AI-heavy video tools uploading terabytes of video to the cloud instead of running locally.

    People have posted that inference requires a lot of memory bandwidth and expressed disappointment that the M3 has lower memory bandwidth than the M2, and much lower than an Nvidia 40-series card. I am no expert on the topic, but I hope the libraries Apple has released help the common inference tool stack leverage the Apple Silicon design instead of executing as if on a vanilla GPU.
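    A back-of-envelope way to see why bandwidth matters so much: in single-stream LLM decoding, every weight is typically read once per generated token, so tokens per second is roughly bounded by memory bandwidth divided by model size. A rough sketch (the bandwidth and model-size figures below are illustrative assumptions, not official specs):

```python
# Roofline-style upper bound for memory-bandwidth-limited LLM decoding:
# tokens/sec <= usable bandwidth / bytes streamed per token (~ model size).

def decode_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper-bound tokens/sec if every weight is read once per token."""
    return bandwidth_gb_s / model_size_gb

# Illustrative numbers: a 7B-parameter model at 4-bit quantization
# is roughly 3.5 GB of weights.
model_gb = 3.5
for name, bw in [("~400 GB/s part", 400.0),
                 ("~300 GB/s part", 300.0),
                 ("~1000 GB/s part", 1000.0)]:
    print(f"{name}: at most {decode_tokens_per_sec(bw, model_gb):.0f} tokens/s")
```

    Real throughput lands below this bound (compute, caches, and software overhead all intrude), but it shows why raw bandwidth, not peak FLOPS, tends to dominate single-user inference speed.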

    Will we see multiple M4 Ultra SoCs in a Mac Pro AI edition in the future? A clue would be a fast interconnect fabric for the M-series chips. Overall, Apple's lead in performance per watt will be a massive advantage, as Nvidia cards will soon require a small side-loaded fusion reactor to run.


    avon b7danox
  • Reply 15 of 18
    y2an Posts: 181 member
    I think the view that Apple may develop dedicated server side ML/AI accelerators for their own internal use with iCloud is right on the money and is on trend with what the other hyper scalers have been doing or are doing. […]

    This bears all the hallmarks of being written by an AI :-O
    gatorguywilliamlondon
  • Reply 16 of 18
    avon b7 said:
    A good move. Somewhat late but better late than never.

    I can see a parallel with Huawei (and others) here when in late 2018 Huawei announced the Ascend line of chips and supporting frameworks and made it open source. 

    They have Mindspore, CANN, Da Vinci Pangu etc and all the hardware to go with it. 

    Ascend starts with Nano (for things like earbuds) and scales up to cluster systems with thousands of cores.

    The question must be then if Apple intends to take that step too and produce (and possibly sell/lease?) a system similar to the Ascend 910 at some point?

    It almost seems inevitable. 
    Apple doesn’t develop commodity hardware for everyone else to use. They develop hardware and software that coalesce into a tangible product for their own customers. 

    A framework or standard is another thing, though, as Apple has historically either developed technologies themselves or partnered to develop them: FireWire, QuickTime, Thunderbolt, etc. Opening up some of their machine learning (AI) framework fits that pattern. It benefits Apple, but also the computing landscape as a whole. 

    It’s hilarious. While everyone is focused on AI nerd stuff, Apple has already been shipping hyper-useful AI in its products. Now if they can just hurry up and finish the new and improved Siri already…
    Siri is a shortcoming but it's clear that it wasn't a priority for years or else it wouldn't need major improvements.

    There are probably more important AI related goals at the moment. 
    I think you missed the boat on that one. 

    Siri was one of the first “AI” personal assistants. 

    I’m not talking about just shoring up conversational responses. We’re talking about adding ML to it: Siri being able to execute scripts you set up, answer in superior fashion to ChatGPT, create assets, assist with writing, fact-checking, and grammar, double-check your code, and suggest best practices for everything from coding to video editing to graphic design to remodeling your house, based on a review of current status. 

    Apple wouldn’t need to remarket anything. You simply invoke Siri and boom. Cue Johny Srouji’s “I don’t think we are” smirk. LOL

  • Reply 17 of 18
    cpsro Posts: 3,181 member
    It's for your laptop because nobody in their right mind would buy a Mac Pro.
    williamlondon