Apple is reportedly investing heavily in Nvidia servers for AI development
In apparent conflict with Apple's very public claim that it runs Apple Intelligence on Apple Silicon servers, an analyst insists the company is now spending $1 billion on Nvidia systems as well.

[Image: Siri icon in a data center]
In April 2024, there were rumors that Apple would use its own Apple Silicon processors to run its AI servers. Then in June 2024, the expectation was that whole data centers would run on Apple Silicon chips.
By September 2024, it was certain: Apple's Craig Federighi said its Apple Intelligence servers ran on Apple Silicon, and that this was crucial to making the company's AI services private.
Yet now, according to Loop Capital analyst Ananda Baruah -- as spotted by Investor's Business Daily -- Apple is in the process of spending around $1 billion to order new Nvidia servers specifically for generative AI.
"AAPL is officially in the large server cluster Gen AI game and SMCI [Super Micro Computer] & Dell are the key server partners," he wrote in a note to investors. "While we are still gathering fuller context, this appears to have the potential to be a Gen AI LLM (large language model) cluster."
Baruah claims that Apple is buying 250 Nvidia NVL72 servers at a cost of between $3.7 million and $4 million each.
According to Nvidia, its NVL72 server contains 36 Grace CPUs and 72 Blackwell GPUs. The company also says that, as of March 18, 2025, this server is not yet available.
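As a quick sanity check, the reported per-unit price and order size do line up with the roughly $1 billion figure. A minimal sketch in Python, using only the numbers cited above:

    # 250 servers at a reported $3.7M-$4M each, versus the "around $1 billion" total
    units = 250
    unit_price_low, unit_price_high = 3.7e6, 4.0e6

    total_low = units * unit_price_low    # $925,000,000
    total_high = units * unit_price_high  # $1,000,000,000
    print(f"${total_low / 1e9:.3f}B to ${total_high / 1e9:.3f}B")  # $0.925B to $1.000B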
Doubtless, Apple could pre-order the servers now, and it's not surprising that the company sees a need to expand its server capacity. Given the quantities, this may be for development purposes rather than public-facing services, but there's no way to tell right now -- and that's assuming the report is correct.
If it's for more than development, this just doesn't quite fit with Federighi's claim that using Apple Silicon servers "sets a new standard for processing in the cloud in the industry."
"Building Apple Silicon servers in the data center when we didn't have any before [and] building a custom OS to run in the data center was huge," he said. "[Creating] the trust model where your device will refuse to issue a request to a server unless the signature of all the software the server is running has been published to a transparency log was certainly one of the most unique elements of the solution -- and totally critical to the trust model."
Comments
Nvidia has CUDA. Huawei has CANN.
Has Apple released an equivalent solution?
I'm not qualified to say whether Apple's own frameworks are 'equivalent' to CUDA, but I believe they are focused on doing the same general job. Even so, I could see Apple using Nvidia in a limited capacity in the short run as it works to catch up.
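For what it's worth, the closest public Apple analog is probably MLX, Apple's open-source array framework for Apple Silicon, alongside Metal itself. Whether that counts as 'equivalent' to CUDA is debatable, but as a rough sketch of GPU compute on a Mac (assuming the mlx package is installed):

    import mlx.core as mx  # Apple's MLX framework; arrays live in unified memory

    # Multiply two random matrices on the default device, which is the
    # Apple Silicon GPU when one is available
    a = mx.random.normal((1024, 1024))
    b = mx.random.normal((1024, 1024))
    c = a @ b

    mx.eval(c)  # MLX is lazy, so this forces the computation to actually run
    print(c.shape, c.dtype)

The gap versus CUDA is less the single-machine API than the ecosystem and multi-node training tooling, which is presumably where Nvidia clusters come in for now.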
Everyone is just going nuts over this because it is Nvidia, which is on the rather long list of companies that Apple fans are supposed to despise (along with Microsoft, Google, Intel, Samsung, Qualcomm, Masimo, Amazon, and I am certain I am leaving out a lot more), despite Apple's own, er, history of doing stuff, such as Steve Jobs accusing Nvidia of IP theft and Apple getting upset at Nvidia's refusal to make custom MacBook GPUs under terms that likely would have bankrupted Nvidia. But honestly, it is only 250 servers for less than $1 billion. Lots of companies far smaller than Apple are paying far more to buy way more.
They are just going to be used to cover gaps that Apple can't immediately fill with its own tech: short-term stuff. Other companies have already spent far more time and money being among the first to do what Apple needs to get done now, so Apple will be able to retrace their steps at a fraction of the time and cost while avoiding their mistakes.

Once Apple is finished using the servers and CUDA to play catch-up, it will probably donate them to some university or nonprofit for a tax write-off. The engineers hired to work on this will make top dollar for a relatively brief gig, then leave with Apple experience on their resumes that will let them work wherever Apple's noncompete clause allows.

And yes, this means that next time Apple will actually go with Nvidia when it wants to instead of when it has to, which is the way it should be anyway. Since Apple is already working with companies it has literally sued (or threatened to), like Microsoft, Samsung, Google, and Amazon, there was never any reason to try to freeze Nvidia out in the first place. That MacBook GPU thing? Apple wound up using AMD GPUs that weren't nearly as good, which forced a ton of people who needed the best graphics to buy Windows machines with Nvidia cards instead. So Apple really showed them, didn't they?
It hurts even more because Apple is probably just two generations away from the top spot on the Blender benchmark table. The next generation of the M-series Ultra chip will probably put Apple in the top three on that list, scoring somewhere around 11,700 while drawing less than 138 watts to do it.
https://opendata.blender.org/benchmarks/query/?compute_type=OPTIX&compute_type=CUDA&compute_type=HIP&compute_type=METAL&compute_type=ONEAPI&group_by=device_name&blender_version=4.3.0
Apple is also probably two generations away from being able to train using Mac Ultras. Nvidia, meanwhile, is running into wattage trouble: they're reaching the end of the line, and the era of unlimited wattage increases is coming to an end.
https://creativestrategies.com/mac-studio-m3-ultra-ai-workstation-review/