Apple Silicon gets massive AI training speed boost with this new project
A new project to improve the processing speed of neural networks on Apple Silicon could speed up training on large graph datasets by up to ten times.

One of the challenges of any machine learning project is training the model on large datasets. This takes a lot of computing power to chew through the data, so improvements here can speed up training and potentially improve the models themselves.
A new project from PhD student Tristan Bilot, Francesco Farina, and the MLX team, mlx-graphs is a library intended to help Graph Neural Networks (GNNs) run more efficiently on Apple Silicon. GNNs are used to make predictions about nodes and edges and to perform graph-based tasks, and are particularly useful in computer vision.
Based on MLX, the mlx-graphs project has been released as a Graph Neural Network library specifically for Apple Silicon. For researchers in the field, the project aims to provide a considerable performance boost.
Bilot claims that initial benchmarks for the library can run at up to ten times the speed of frameworks such as PyTorch Geometric and DGL when training on large graph datasets. It does so by using dedicated kernels designed to parallelize GNN computations running directly on the M-series chip's GPU.
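At its core, a GNN layer updates each node by aggregating the features of its neighbors, a computation that parallelizes well on a GPU. As a rough illustration only (this is not mlx-graphs code; the toy graph, feature sizes, and NumPy implementation are our own sketch of the standard graph-convolution rule), a single GCN-style layer looks like this:

```python
import numpy as np

# Toy undirected graph: 4 nodes, edges 0-1, 1-2, 2-3, as an adjacency matrix.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

X = np.random.rand(4, 8)   # one 8-dimensional feature vector per node
W = np.random.rand(8, 16)  # learnable weights mapping 8 -> 16 features

# GCN propagation rule: H = relu(D^{-1/2} (A + I) D^{-1/2} X W),
# i.e. each node averages its (self-loop-augmented) neighborhood, then
# applies a shared linear transform and nonlinearity.
A_hat = A + np.eye(4)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
H = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

print(H.shape)  # (4, 16): an updated 16-dim embedding per node
```

The matrix products here are exactly the kind of per-node, per-edge work that dedicated GPU kernels can batch and parallelize, which is where the claimed speedups over CPU-bound pipelines come from.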
Apple's work on mlx-graphs is still in the early days
The project is still in its early days, having only been worked on for a few weeks, with Bilot admitting that there is "still plenty of room for major contributions." This could be a hint that more speed gains could be found with further development.
The mlx-graphs library is available on GitHub to download and install. Bilot has offered an invitation for others to explore and test the library, provide feedback, and to submit implementations via pull requests.
The project is part of a wave of interest in machine learning and generative AI, a field that could massively transform how content is created and how information is served to users.
In the case of Apple, researchers within the company have created a generative AI tool for animating images. Additionally, other projects are testing the use of AI in Xcode tools.
Apple CEO Tim Cook has also spoken about the massive effort put into AI features that Apple will be rolling out to users later in 2024.
Comments
Apple is not in the B2B cloud service arena. If they were or wanted to be, they could have stuck a bunch of A-series or M-series SoCs in racks and sold the CPU/GPU compute power years ago. Don't forget: they ditched the Xserve, got rid of most of their high-end software offerings, and threw the OS X Server code tree into the rubbish bin.
If Apple wants to put their silicon into AI/ML cloud computing, they will do it for their own customers: iPhone users, Mac users, Apple Watch users, Apple TV users, Apple Music subscribers, etc. Their M.O. is to deploy new technologies that benefit a wide swath of their customer base and create a competitive advantage over other companies' users.
And where Apple would come out ahead is performance-per-watt, the metric that Johny Srouji pounds home relentlessly. If Apple ran some sort of AI cloud facility, someone else could probably do the same with Nvidia hardware. They would just be pouring more dollars into Nvidia's wallets and feeding more dollars to the utility company. And they would still lack the seamless vertical integration that Apple can design.
Sure, outside of maybe a single line on the developer site and 15 minutes in a WWDC midweek session they aren't going to make a public fuss about it, but they could still build a system that pays for itself in subscriptions while benefiting from third parties working on the optimisation.
https://developer.apple.com/xcode-cloud/
What looks like a simple call to an API from the point of view of a developer could involve massive computation in an Apple-owned facility running thousands (or more) of M-based systems.
The fact that Apple killed Xserve back in 2011 does not tell us much about what Apple might do today, any more than the cancellation of the Newton told us anything about what handheld computer Apple might one day make.
Apple needs to do all it can behind the scenes to make it easy for those who want to use Macs in this area of computing, by designing and engineering whatever bridge software and utilities are needed. I don't think Apple can afford to let this go, and that also includes servers down the road. (They shouldn't have let those three engineers go who ended up at Qualcomm after Nuvia was bought out.)
https://www.zdnet.com/article/how-apples-ai-advances-could-make-or-break-the-iphone-16/ Being a vertical computer company means everything.
https://www.theregister.com/2024/01/25/apple_generative_ai/ Hmm... The Register over the years has been known for being negative toward Apple, but their take on AI is actually fair.