discountopinion
About
- Username: discountopinion
- Joined:
- Visits: 66
- Last Active:
- Roles: member
- Points: 358
- Badges: 0
- Posts: 154
Reactions
-
Apple Watch Series 9 review three months later: Don't buy it for Double Tap
I have gotten a Series 9 for every elderly member of the family. Slowly but surely, everyone has iPhones and Apple Watches and HomePods and hand-me-down Apple TVs.
The screen is very good for people with eyesight issues, and there is a real sense of assurance and safety in knowing the Watch has their back should they fall or be in an accident. The heart health features, oxygen tracking, etc. also give a sense of comfort, albeit with the knowledge that the features are not perfect. They also find using Siri on the Watch very freeing. And they feel cool wearing it.
The Series 9 is an absolute slam dunk as a gift for loved ones. If only Apple knew how much comfort and coolness it gives people who feel a bit more vulnerable.
-
Apple Silicon M3 Pro competes with Nvidia RTX 4090 GPU in AI benchmark [u]
Super interesting, and it is clear that Apple is making headway and that the MLX release was a great thing for the community.
The comparison, while entertaining, is a bit misleading though, as it pits an architecture optimised for power efficiency against a plugged-in, no-holds-barred GPU architecture. To say that Apple has a long way to go misses the point a bit.
The power consumption factor should be highlighted even more. The Apple benchmark result is likely similar or identical whether running plugged in or on battery. All in, the PC's total power consumption is likely 10x (or higher) that of the M3 Max SoC. So doing an amateur watts-times-seconds calculation (see the sketch just below), the optimised Nvidia model still comes out ahead of MLX on the M3 Max, but not by a whole lot at all.
The RTX 4090 has a TDP of 450W; the laptop RTX 4090 is 175W (?). The M3 Max is nowhere near this, and the Nvidia GPU's power draw does not even count the PC's CPU, RAM, and other circuitry that lives inside the M3 chip.
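To make the amateur maths concrete, here is a tiny watts-times-seconds sketch. All the runtimes and wall-power figures below are illustrative assumptions chosen for the sake of the calculation, not the article's benchmark numbers:

```python
# Back-of-envelope energy-per-run comparison (illustrative numbers only).
# Energy (joules) = average power (watts) x runtime (seconds).

def energy_joules(avg_watts: float, runtime_s: float) -> float:
    return avg_watts * runtime_s

# Assumption: the plugged-in RTX 4090 box finishes the job ~12x faster
# but draws ~10x the wall power of the M3 Max package.
rtx_j = energy_joules(avg_watts=600.0, runtime_s=10.0)   # 6,000 J
m3_j  = energy_joules(avg_watts=60.0,  runtime_s=120.0)  # 7,200 J

print(f"RTX 4090 system: {rtx_j:,.0f} J per run")
print(f"M3 Max:          {m3_j:,.0f} J per run")
print(f"M3 Max / RTX 4090 energy ratio: {m3_j / rtx_j:.1f}x")
# -> ~1.2x: under these assumptions the Nvidia setup still wins on
#    energy per run, but not by a whole lot.
```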
Now I wonder if there is any option to do an MLX version of the Nvidia-optimised model. Perhaps there are many other tweaks in that model besides tuning it for Nvidia cards?
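For what it is worth, the model definition side of such a port looks approachable. As a purely illustrative sketch of what an MLX module looks like (a toy MLP, not the benchmarked model; assumes the mlx Python package is installed):

```python
import mlx.core as mx
import mlx.nn as nn

# Minimal MLX module: a two-layer MLP. MLX arrays live in unified
# memory, so there is no explicit host-to-device copying as with CUDA.
class MLP(nn.Module):
    def __init__(self, in_dims: int, hidden_dims: int, out_dims: int):
        super().__init__()
        self.fc1 = nn.Linear(in_dims, hidden_dims)
        self.fc2 = nn.Linear(hidden_dims, out_dims)

    def __call__(self, x):
        return self.fc2(nn.relu(self.fc1(x)))

model = MLP(32, 64, 10)
x = mx.random.normal((8, 32))   # batch of 8 random inputs
y = model(x)
mx.eval(y)                      # MLX is lazy; force the computation to run
print(y.shape)                  # (8, 10)
```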
Also, I wonder what performance numbers we would see if Apple went on a ragga tip and did an M3 chip with a TDP of 450-700W. Maybe Apple has tried it and it didn't meet expectations. Chips be hard.
Hopefully the M4 will triple down on genAI performance.
-
Apple Vision Pro launch expected as soon as January
Launching early with invitations and waiting lists may be a good way to gauge demand and adapt the production ramp.
Personally, I would love to see it on the market early so that they learn fast from real-world use and feedback. Vision Pro 2 will likely be a much more polished product, and if we are lucky, by then it may not need the tethered power bank.
-
Apple demonstrates its commitment to AI with new open source code release
I think the view that Apple may develop dedicated server-side ML/AI accelerators for their own internal use with iCloud is right on the money, and it is on trend with what the other hyperscalers have been doing or are doing. Services make up an increasingly large proportion of overall gross margin, which is a reason why flat/sluggish device sales do not dramatically affect earnings. Apple will have the margins and the incentive to go all in on server-side AI for internal use.
Generative AI is here to stay, we are seeing the very early days of adoption, and what is clear is that it will be an incredible eater of compute. Anyone wanting to be in the services game going forward will have to spend a lot, so it makes perfect business sense for Apple to build in house.
Nvidia has held the server-side generative AI market captive for a while, and AMD is now providing some competition and, more importantly, more supply. Nvidia has been supply-constrained for a long time.
Apple's privacy ethos will likely continue to drive model inference to run on end-user devices, so that as little as possible goes over the wire back to Apple. How this affects their model development I do not know. We may well see increasing self-tuning of models on device so that your local context can be used to best effect. That likely means a nice client-chip/server-chip co-development synergy, and maybe also interesting opportunities for optimisation.
Training and model development are super heavy lifting, so the Mac Pro will likely evolve into an AI beast that runs as much as possible locally rather than phoning home to cloud-based services. I cannot see AI-heavy video tools uploading terabytes of video to the cloud instead of running locally.
People have posted that inference requires a lot of memory bandwidth and expressed disappointment that the M3's memory bandwidth is lower than the M2's, and much lower overall than an Nvidia 40XX card's. I am no expert on the topic, but I hope the libraries Apple released help the common inference tool stack leverage the Apple Silicon design instead of executing as if it were a vanilla GPU.
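A rough way to see why bandwidth matters: in the memory-bound decode phase of an LLM, each generated token has to stream essentially all the weights through memory once, so bandwidth divided by model size gives a ceiling on tokens per second. A sketch with approximate published bandwidth figures (treat them as ballpark assumptions):

```python
# Rough upper bound for memory-bandwidth-bound LLM decoding:
# tokens/sec <= bandwidth / bytes streamed per token (~ the model size,
# since every token reads all the weights once). Illustrative only.

def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

model_gb = 7 * 2  # e.g. a 7B-parameter model at 16-bit (2-byte) weights

for name, bw in [("M2 Max (~400 GB/s)", 400),
                 ("M3 Max (~300-400 GB/s depending on config)", 350),
                 ("RTX 4090 (~1,000 GB/s)", 1000)]:
    print(f"{name}: <= {max_tokens_per_sec(bw, model_gb):.0f} tokens/sec")
```

So even a perfectly tuned stack cannot talk its way past the bandwidth ceiling; libraries like MLX can only help close the gap between actual and theoretical throughput.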
Will we see multiple M4 Ultra SoCs in a Mac Pro AI edition in the future? A clue would be whether a fast interconnect fabric becomes available for the M-series chips. Overall, Apple's lead in performance per watt will be a massive advantage, as Nvidia cards will soon require a side-loaded small fusion reactor to run.