y2an
Reactions
- Apple is hardening iMessage encryption now to protect it from a threat that doesn't exist ...
- Apple's first foldable screen probably won't be on the iPhone
- Apple gets backlash from India after uncovering hacks on journalists
- Apple demonstrates its commitment to AI with new open source code release
discountopinion said: I think the view that Apple may develop dedicated server-side ML/AI accelerators for its own internal use with iCloud is right on the money, and on trend with what the other hyperscalers are doing. Services are an increasingly large proportion of overall gross margin, and a reason why flat or sluggish device sales do not dramatically affect earnings. Apple will have the margins and the incentive to go all in on server-side AI for internal use.
Generative AI is here to stay, and we are seeing the very early days of adoption; what is already clear is that it will be an incredible consumer of compute. Anyone wanting to be in the services game going forward will have to spend heavily, so it makes perfect business sense for Apple to build in house.
Nvidia has held the server-side generative AI market captive for a while; AMD is now providing some competition and, more importantly, more supply. Nvidia has been supply-constrained for a long time.
Apple's privacy ethos will likely continue to push model inference onto end-user devices, so that as little as possible travels over the wire back to Apple. How this affects their model development I do not know. We may see increasing self-tuning of models on device so that your local context can be used to best effect. That would create a nice client-chip/server-chip co-development synergy, and perhaps some interesting opportunities for optimisation.
Training and model development are super heavy lifting, so the Mac Pro will likely evolve into an AI beast that runs as much as possible locally rather than phoning home to cloud-based services. I cannot see AI-heavy video tools uploading terabytes of video to the cloud instead of running locally.
People have posted that inference requires a lot of memory bandwidth and expressed disappointment that the M3's memory bandwidth is lower than the M2's, and far lower than an Nvidia 40XX's. I am no expert on the topic, but I hope the libraries Apple has released help the common inference tool stack exploit the Apple Silicon design rather than executing as if on a vanilla GPU.
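To see why memory bandwidth dominates, here is a back-of-envelope sketch: generating one token with a local LLM requires streaming essentially all of the model's weights from memory, so bandwidth divided by weight size gives an upper bound on decode speed. The bandwidth figures below are illustrative examples, not measured benchmarks.

```python
# Rough ceiling on token-generation speed for a bandwidth-bound model:
#   max tokens/sec ~= memory bandwidth / bytes of weights read per token
# This ignores compute, KV-cache reads, and overhead, so real speeds are lower.

def max_tokens_per_sec(params_billion: float, bytes_per_param: float,
                       bandwidth_gb_s: float) -> float:
    """Upper bound on decode tokens/sec when limited purely by bandwidth."""
    weight_gb = params_billion * bytes_per_param  # total weight bytes, in GB
    return bandwidth_gb_s / weight_gb

# Example: a 7B-parameter model quantized to 4 bits (0.5 bytes/param) ~= 3.5 GB.
for name, bw in [("chip A, 150 GB/s", 150.0),
                 ("chip B, 400 GB/s", 400.0),
                 ("chip C, 1000 GB/s", 1000.0)]:
    ceiling = max_tokens_per_sec(7, 0.5, bw)
    print(f"{name}: ~{ceiling:.0f} tokens/sec ceiling")
```

Doubling bandwidth doubles the ceiling, which is why a drop from M2-class to M3-class bandwidth matters more for LLM inference than raw GPU compute does.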
Will we see multiple M4 Ultra SoCs in a Mac Pro AI edition in the future? A clue would be the appearance of a fast interconnect fabric for the M-series chips. Overall, Apple's lead in performance per watt will be a massive advantage, as Nvidia cards will soon require a side-loaded small fusion reactor to run.
- ExpressVPN brings its VPN app to the Apple TV
brianjo said: I get that people WANT VPNs to get around restrictions, but isn't that pretty similar to having an app that lets you pirate videos? You don't want to follow the rules and pay what's needed where it's needed, so who cares about the TOS you agreed to, right?