Apple working out how to use Mac in parallel with iPhone or iPad to process big jobs

Posted in Future Apple Hardware

Apple isn't content with existing distributed computing technologies, and is working on a way to network your Mac, iPhone, iPad, and maybe even the Apple Vision Pro together, dynamically dividing processing jobs according to each device's power to get a big calculation done faster.

Apple's patent shows multiple devices sharing the computing workload.



In a newly published patent, Apple has detailed how it wants all of your devices to share computations in the future, simultaneously and seamlessly. If the research pans out as published, it would radically improve performance for some tasks while automatically accounting for the varying power of each device.

For example, an idle iPad could make a photo edit or 3D render faster. Or, a Mac could improve the experience on an Apple Vision Pro by sharing the processing of tasks.

The Apple Vision Pro, in theory, could get the biggest boost by automatically using other devices to process the enormous amounts of data used for augmented reality. Given the processing demands of a mixed reality and augmented reality system like the Apple Vision Pro, a single internal processor, apart from the dedicated motion-sensing chip, could limit what the headset can do.

Processing more data more quickly can create a much more immersive experience. Sharing that processing with multiple devices can radically improve performance, bringing new features.

Increasingly powerful devices would help with complex tasks such as mixed reality, video editing and rendering, special effects creation, 3D modeling, and mathematical simulations.

Most of this is already addressed in distributed computing models like Apple's two-decade-old Xgrid. That technology was limited, though, and didn't account for the different processing power of each computational node.

A single slow node could bog down the entire job. Without consideration of job size in relation to satellite processing power, that slower calculation could delay future work dependent on the job delivery time.

The technology in Apple's new patent would not only share your workload but also figure out which devices would be best to get the job done.

So, if you're trying to output 4K/60 video for YouTube from Final Cut Pro on a Mac mini while playing Death Stranding on your iPhone, this system would automatically detect that the iPhone doesn't have enough spare power. The workload could instead be shared with a nearby iPad or MacBook Air, or the iPhone could be handed a smaller slice, all dynamically.
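As a rough illustration of the idea, not Apple's actual scheduler, capability-aware splitting might weight each device's share of a job by its spare capacity. All device names and capacity numbers here are invented:

```python
# Hypothetical sketch: split a job across devices in proportion to
# each device's spare compute capacity. Device names and capacity
# numbers are made up for illustration.

def split_job(total_units, devices):
    """Return {device: units of work} weighted by spare capacity.

    devices: dict of device name -> spare capacity (arbitrary units).
    Devices with no spare capacity receive no work.
    """
    available = {name: cap for name, cap in devices.items() if cap > 0}
    total_capacity = sum(available.values())
    if total_capacity == 0:
        return {}
    return {
        name: round(total_units * cap / total_capacity)
        for name, cap in available.items()
    }

# A busy iPhone (low spare capacity) gets a small slice; the idle
# Mac mini and iPad absorb most of the render.
print(split_job(1000, {"Mac mini": 50, "iPad": 30, "iPhone": 5, "MacBook Air": 15}))
```

Real scheduling would also have to account for battery state, thermal headroom, and network conditions; this only captures the proportional-split idea.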

From the user's perspective, the work would simply get done; one or more Apple devices would handle tasks behind the curtain without the user ever knowing.

In another practical example, the Apple Vision Pro could be sped up just by having an iPhone in your pocket. In a future version of the relevant operating systems, the Apple Vision Pro could automatically send some of its processing tasks to that iPhone, boosting the headset's speed and potentially enabling new features.

More processing power means better Apple Intelligence



This approach could benefit more than just the Apple Vision Pro or your Mac.

Apple's newly announced Apple Intelligence features will be processed on-device. If you ask Siri on your iPhone to book you a flight with your mother and send reminders to everyone, that iPhone has to do all the artificial intelligence processing.

While the bulk of Apple's upcoming AI features can be handled on an iPhone or iPad, sometimes a task is too complex for those devices alone. As part of its Apple Intelligence initiative, Apple plans to use a new technology called Private Cloud Compute to perform the same functionality at greater complexity.

Naturally, the round-trip from the device to the cloud requires an Internet connection. It also adds delay to tasks while the data is sent and received. Automatically sending the requests to a user's other devices would cut back the need to use a cloud system, speeding up the process tremendously.

The patent was credited to Andrew M. Havlir, Ajay Simba Modugala, and Karl D. Mann. Havlir holds multiple patents, many of which are in distributed processing and graphics processing.



Read on AppleInsider

Comments

  • Reply 1 of 16
    AppleZulu Posts: 2,097 member
    Herein lies one answer to how ‘Apple Intelligence’ will be extended to puny existing HomePods and AppleTVs without telling customers that the smarter Siri will exclusively be available on new next generation HomePods and AppleTVs. It’s already true that a shortcut command issued verbally to a HomePod only works if your iPhone is connected to the same network. Seems likely that Apple Intelligence Siri will eventually work on existing HomePods by sending the computational work first to capable devices on the home network, and then to ‘Secure Cloud’ only when necessary. This will enable Apple to fulfill its AI privacy policy without requiring users to replace a house full of HomePods and AppleTVs first. 
  • Reply 2 of 16
    Did Richard Hendricks finally sell Pied Piper to Apple?
  • Reply 3 of 16
    danox Posts: 3,107 member
    Excellent. This is what happens when Apple starts to design more effective server and networking solutions in house. The ideas are probably germinating out of building that Mac Studio M2 Ultra server farm for AI; if you are designing new solutions, you might as well leverage the tech. (The Luddites in the EU will think differently, however, and consider this evil.)
  • Reply 4 of 16
    blastdoor Posts: 3,432 member
    This sounds very cool and, given private cloud compute, highly plausible.

    I wonder if one marketing challenge for Apple is helping people to understand that they can get better performance out of one device if they buy another device. I mean, I guess it’s not a huge challenge, you just have to explain it to them. And that seems doable. But it would also be nice if a user can see, sort of like through an activity monitor, the extent to which one device is benefiting from another. Making the benefit as visible and quantifiable as possible would be beneficial to both Apple and users.
  • Reply 5 of 16
    This is what I am truly waiting for. The technological upgrades for all devices using what you already own is perfection
  • Reply 6 of 16
    This is when I wish Apple never cancelled the AirPort. I feel like that would have been the perfect home control device. Have it able to create a full backup of iCloud and Time Machine of all devices on network, auto download updates for all devices for faster installs, and now being able to offload a lot of processes for Apple Intelligence or other intensive tasks. 

    It would even make dumb devices like Apple HomePod capable of using Apple Intelligence. 

    It was a great device just in its old basic form. I miss that thing. 
  • Reply 7 of 16
    sigma902 Posts: 18 member
    I suspect this is part of a strategy for “Apple Glasses or Vision (non-pro)”. The worn device is slimmed down by moving processing to the iPhone. A redux of how the original Apple Watch apps ran on the iPhone and Watch provided input/output. 
  • Reply 8 of 16
    Maybe… Apple will develop a new step in Apple Intelligence:

    1— On-device Compute
    2— "Secure Personal Cloud Compute"
    3— Secure Cloud Compute
    4— Other companies
  • Reply 9 of 16
    dewme Posts: 5,545 member
    This is similar in some ways to the SETI approach of using whatever computing resources are available to distribute work across multiple computers working cooperatively to solve a problem. This concept has been the subject of discussion and prototyping for decades. At a finer level of granularity some applications take advantage of GPU resources that would otherwise be allocated to the CPU. Teamwork makes the dream work. 

    I’m completely in favor of this because there are staggering levels of computing capacity just idling away their time waiting for the user to ask them to do something, anything really. I think my Mac Studio is beckoning me now to give it something to chew on. 

    Edit: I should have said that the staggering unused capacity is more on the client side of things. Servers are typically utilized at much higher levels than clients. On the server side too much is never enough.
    edited June 25
  • Reply 10 of 16
    chadbag Posts: 2,013 member
    Hopefully they’re working out now to divvy stuff up so that you don’t lose efficiency to data transfer.   It sounds all nice and good to split 4K/60 renders across your various devices but that’s very data intensive work and just moving all the data wirelessly to the other device could cause you to lose the benefit.   Small data sets with complicated processing would benefit more.  
  • Reply 11 of 16
    programmer Posts: 3,461 member
    A few years ago I suggested that Apple Silicon & Mac Pro could be combined by creating an M-series chip-on-a-PCIe-board which could be inserted into a Mac Pro's chassis.  The problem with doing this is that it doesn't look (to software) like a traditional CPU/GPU/memory machine.  That is precisely what this article is about though -- how to distribute heavy computations to the available hardware.  The more computation Apple manages to offload from the local machine, the more it makes sense to have additional "headless" hardware available.  This would make the Mac Pro chassis a lot more compelling than it is currently, and the same ASi-on-PCIe boards could be deployed into servers in the cloud.

  • Reply 12 of 16
    programmer Posts: 3,461 member

    chadbag said:
    Hopefully they’re working out now to divvy stuff up so that you don’t lose efficiency to data transfer.   It sounds all nice and good to split 4K/60 renders across your various devices but that’s very data intensive work and just moving all the data wirelessly to the other device could cause you to lose the benefit.   Small data sets with complicated processing would benefit more.  
    Non-training AI workloads are often like this -- a moderately sized model (a large part of which can often be generic and shared, but fine tuned by relative low volume training data), fairly small amounts of data to be analyzed, and a whole lot of compute needed on this data.  We haven't seen commonly used distributed compute before because the workloads haven't had the necessary compute:bandwidth ratio, but now this huge wave of machine learning use has changed that, so Apple is jumping on it.
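    The compute-to-bandwidth trade-off in this exchange can be sketched as a simple break-even check. All device speeds, link rates, and data sizes below are invented for illustration:

    ```python
    # Hypothetical sketch: offloading a chunk of work only pays off when
    # the compute time saved exceeds the time spent shipping the data.
    # All numbers in the examples are made up.

    def offload_wins(data_bytes, compute_ops, local_ops_per_s,
                     remote_ops_per_s, link_bytes_per_s):
        """True if sending the job to the remote device finishes sooner."""
        local_time = compute_ops / local_ops_per_s
        remote_time = (data_bytes / link_bytes_per_s       # ship the input
                       + compute_ops / remote_ops_per_s)   # remote compute
        return remote_time < local_time

    # Big data, modest compute (a 4K render frame dump): transfer
    # dominates, so keep it local.
    print(offload_wins(8e9, 1e9, 1e9, 4e9, 100e6))   # False

    # Small data, heavy compute (ML inference-like): offloading wins.
    print(offload_wins(1e6, 1e12, 1e9, 4e9, 100e6))  # True
    ```

    This matches the comment above: machine-learning workloads shift the compute:bandwidth ratio far enough that distribution finally pays off.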
  • Reply 13 of 16
    programmer Posts: 3,461 member
    dewme said:

    Edit: I should have said that the staggering unused capacity is more on the client side of things. Servers are typically utilized at much higher levels than clients. On the server side too much is never enough.
    I think you'd be surprised by just how low average server utilization is.

  • Reply 14 of 16
    blastdoor Posts: 3,432 member
    programmer said:
    A few years ago I suggested that Apple Silicon & Mac Pro could be combined by creating an M-series chip-on-a-PCIe-board which could be inserted into a Mac Pro's chassis.  The problem with doing this is that it doesn't look (to software) like a traditional CPU/GPU/memory machine.  That is precisely what this article is about though -- how to distribute heavy computations to the available hardware.  The more computation Apple manages to offload from the local machine, the more it makes sense to have additional "headless" hardware available.  This would make the Mac Pro chassis a lot more compelling than it is currently, and the same ASi-on-PCIe boards could be deployed into servers in the cloud.
    I wonder if the thing you're describing is in the same ballpark of Apple's servers. Probably not PCIe boards, but I could imagine apple packing a lot of small circuit boards (akin to Mac mini motherboards) with M chips into blade enclosures. They could perhaps connect those boards within the enclosure using TB4, and then connect them to the world outside the enclosure using ethernet. 

    Apple could vary the M chips used to support different mixes of CPU/NPU/GPU work. 
  • Reply 15 of 16
    dewme Posts: 5,545 member
    programmer said:
    A few years ago I suggested that Apple Silicon & Mac Pro could be combined by creating an M-series chip-on-a-PCIe-board which could be inserted into a Mac Pro's chassis.  The problem with doing this is that it doesn't look (to software) like a traditional CPU/GPU/memory machine.  That is precisely what this article is about though -- how to distribute heavy computations to the available hardware.  The more computation Apple manages to offload from the local machine, the more it makes sense to have additional "headless" hardware available.  This would make the Mac Pro chassis a lot more compelling than it is currently, and the same ASi-on-PCIe boards could be deployed into servers in the cloud.

    Thanks for the server feedback. The dedicated servers I've dealt with (Windows Server) were always quite heavily loaded, but server machines weren't nearly as powerful back then, especially with redundancy to support hot failover.

    It sounds like what you're describing would also lend itself to a "stack-of-minis" approach to provide additional compute resources as needed, i.e., compute elasticity. At one point in time before the Mac Studio came out I really thought that Apple might do something like this with the Mx Mac mini. It sounds like Apple is proposing an architecture that will allow end users to set up their own ad hoc system for managing compute elasticity but also functional elasticity by using a heterogeneous mix of computers. It sounds very compelling.
    edited June 26
  • Reply 16 of 16
    mattinoz Posts: 2,393 member
    dewme said:
    A few years ago I suggested that Apple Silicon & Mac Pro could be combined by creating an M-series chip-on-a-PCIe-board which could be inserted into a Mac Pro's chassis.  The problem with doing this is that it doesn't look (to software) like a traditional CPU/GPU/memory machine.  That is precisely what this article is about though -- how to distribute heavy computations to the available hardware.  The more computation Apple manages to offload from the local machine, the more it makes sense to have additional "headless" hardware available.  This would make the Mac Pro chassis a lot more compelling than it is currently, and the same ASi-on-PCIe boards could be deployed into servers in the cloud.

    Thanks for the server feedback. The dedicated servers I've dealt with (Windows Server) were always quite heavily loaded, but server machines weren't nearly as powerful back then, especially with redundancy to support hot failover.

    It sounds like what you're describing would also lend itself to a "stack-of-minis" approach to provide additional compute resources as needed, i.e., compute elasticity. At one point in time before the Mac Studio came out I really thought that Apple might do something like this with the Mx Mac mini. It sounds like Apple is proposing an architecture that will allow end users to set up their own ad hoc system for managing compute elasticity but also functional elasticity by using a heterogeneous mix of computers. It sounds very compelling.
    Yes it does. Go slightly further: add CXL support to the Max/Ultra and build each mini as an MPX module, with a Mac Pro stacked full of these modules.
    edited June 26