michelb76

About

Username
michelb76
Joined
Visits
113
Last Active
Roles
member
Points
1,621
Badges
1
Posts
774
  • M5 Pro may separate out GPU and CPU for new server-grade performance

    Isn’t this the same thing the group that left Apple to start Nuvia wanted to do at Apple?
    Yes, and that's the irony, isn't it? The group that left wanted to do an ARM server processor while at Apple, but supposedly Apple decided not to pursue it at the time. And now, with the big AI initiative, Apple needs to do its own server processor.
    Pretty sure these server processors have been in the cards for several years; you don't just start doing this without a solid base chip. The Nuvia guys probably just wanted it to happen faster, while Apple had to appease shareholders and milk the Mx iterations first. It's also unlikely Apple will sell these chips for use in datacenters; they'll probably use them exclusively themselves. No datacenter has plans to run on MLX.
  • How Apple's smart home revolution begins in 2025

    Let's see how this iteration of the 'Digital Hub' strategy will play out. If it's anything like 2001, it won't go far. Open source solutions are far ahead, and people seem to accept the setup trouble and lack of polish for something that, well, works.
  • Apple-Nvidia collaboration triples speed of AI model production

    danox said:
    Friendly? Apple will use them for as long as they have to in the back of house. Apple is not far from convergence with many of the Nvidia graphics cards. When will that convergence take place, the M4 Studio Ultra? Or will it be the M5 or M6? It isn't that far away. Look at the power signature required for one of the graphics cards made by Nvidia. Apple and Nvidia ain't on the same path; the power signature wattage is so far off, it's ridiculous.

    https://www.pcguide.com/gpu/power-supply-rtx-4080/ 750-watt system recommendation, and the back-of-house Nvidia stuff is even more out there. The current M2 Ultra takes 107 watts for everything, and the M4 is even more powerful and efficient than that, let alone what's coming up with the M5 and M6.

    Currently the 4080 is about 3.3 times faster than the M2 Studio Ultra (Blender). It will be interesting to see how close the M4 Studio Ultra gets next year; we know it'll use a hell of a lot less power to achieve its performance.

    Apple Silicon is definitely powerful enough now but for market inertia; by the time of the M5 it will be indisputable.

    That's quite a stretch, as the Nvidia cards are way faster at most things, and yes, of course they use much more power. They're also so much faster on LLM work that it's not even a comparison. The 4090 cards are purposely made slower: they have a cut circuit that, if reconnected, unlocks even more performance. All Nvidia has to do is make a new board with that circuit connected and the 4090 will be almost twice as fast on a lot of important LLM calculations. Right now, they obviously don't want to cut into their datacenter market with a consumer card. Apple's M-series performance per watt, coupled with the shared memory, is absolutely fantastic, but still quite behind on CUDA-powered stuff, and once Nvidia increases RAM a bit (RAM is getting much less important for inference by the month), they'll stay on top very comfortably.
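    A rough back-of-the-envelope, using only the numbers quoted in this thread (the 750 W and 107 W figures and the 3.3x Blender ratio are the commenters' claims, not measured benchmarks), shows how both posts can be right at once:

        # Python sketch; inputs are the figures claimed in the posts above.
        RTX_4080_SYSTEM_WATTS = 750   # PSU recommendation cited for the 4080
        M2_ULTRA_WATTS = 107          # whole-system draw cited for the M2 Ultra
        SPEEDUP_4080_VS_M2 = 3.3      # Blender ratio cited above

        # Performance per watt, normalised to raw M2 Ultra throughput = 1.0.
        m2_perf_per_watt = 1.0 / M2_ULTRA_WATTS
        rtx_perf_per_watt = SPEEDUP_4080_VS_M2 / RTX_4080_SYSTEM_WATTS
        print(f"Apple perf/W advantage: {m2_perf_per_watt / rtx_perf_per_watt:.1f}x")

    The result is roughly 2.1x: the Mac does about twice the work per watt, while the 4080 still finishes the same render about 3.3 times sooner. Both claims hold simultaneously.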
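    On the RAM aside: a minimal sketch of why quantization keeps shrinking the memory needed for inference. The parameter counts here are illustrative, not tied to any model named in the thread, and KV cache and activations are ignored:

        def weight_gb(params_billion: float, bits_per_weight: int) -> float:
            # Approximate weight memory in GB: params * bits / 8 bits-per-byte.
            return params_billion * bits_per_weight / 8

        for params in (7, 70):          # hypothetical 7B and 70B models
            for bits in (16, 8, 4):     # fp16, int8, int4 quantization
                print(f"{params}B @ {bits}-bit: ~{weight_gb(params, bits):.1f} GB")

    A hypothetical 70B model drops from ~140 GB of weights at fp16 to ~35 GB at 4-bit, which is the sense in which raw RAM capacity matters less for inference than it used to.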
  • M4 Mac minis in a computing cluster is incredibly cool, but not hugely effective

    danox said:
    Sounds like an OS-level software problem that can be, or already is, solved by Apple in house, and they probably are not going to share the solution with the general public anytime soon. Apple internally is using something on those Mac Studio M2 and M4 Ultras.

    But I hope whatever software solutions Apple created for running those Mac Studio M2 Ultras as part of Apple Intelligence make it to the public in a way that everyone can use (hopefully)….
    I highly doubt that. Most software isn't even taking advantage of multiple cores, and we've had those for decades. Distributed computing is even more niche, and Apple's architecture and hardware aren't made for that.
  • Siri engine to be fundamentally revamped across 2025, early 2026

    Siri changes go unnoticed, like the infamous frog in a saucepan gently coming to the boil. Surely it'd make a better impression if the new animation actually signalled new functionality. By the time Siri is useful, we'll have lost the will to wait through this drip-drip slow evolution.
    Unlikely. A good voice AI is magic, and part of the promise of what computing was supposed to be. There isn't a single voice AI that is actually good. OpenAI has a good voice experience, but it can still make things up and it can't integrate with the OS.