danox

About

Username
danox
Joined
Visits
152
Last Active
Roles
member
Points
5,361
Badges
1
Posts
3,916
  • Apple says not every Apple Silicon generation will get an Ultra

    Many people in this forum used to say that Apple has made plans for the next 5 years, or even 10 years. The question is: was it part of Apple's roadmap from the beginning not to have an M4 Ultra for the Mac Studio, or was it part of the plan but Apple ran into unexpected issues and had to change plans at the last minute?

    Apple has now given voice to what it has essentially been doing all along for the last 10 years: its upper-division computers have basically been on a 2-, 3-, or 4-year schedule (take your pick). The Mac Pro and the big-screen iMac have been orphans, and the Mac Studio has basically been released every 2-2.5 years, with nothing but internal excuses. Some of the upgrades Apple made recently are mainly due to them using the Mac Studio as a server in their Apple Intelligence/AI efforts (dragged along kicking and scratching, I might add).

    The thing that really hurts is that Apple has the ability in-house to get these products out in a timely manner. Ten years ago it was a menagerie of third-party hardware companies that got in the way: IBM, Motorola, Intel, AMD, and Nvidia. That excuse is now gone; most of the problems are self-inflicted (iPhone fever). Apple is still doing very well, but :) you know the rest of the story: they have to do better, because the lead Apple Silicon/Apple OS gives them won't last forever.

    https://www.reddit.com/r/LLMDevs/comments/1j46f5e/apples_new_m3_ultra_vs_rtx_40905090/   Multiple accessible markets are available (phones/tablets/laptops/desktops), and right now no one else is even close to having solutions in all four major forward-facing computing areas with that performance and low-wattage capability.

    https://www.reddit.com/r/MacOS/comments/1j47b6k/apple_announces_m3_ultraand_says_not_every/
  • Mac Studio gets an update to M4 Max or M3 Ultra


    gwmac said:
    This is what I and a lot of other 27 inch iMac owners have been waiting on.  My only remaining decision is what monitor to buy. I want to get at least a 32 inch so was considering the

    Dell UltraSharp 32 4K Thunderbolt Hub Monitor - U3225QE

    Does this seem like a good monitor to pair with my new studio? I get a $200 Dell credit every 6 months with my Amex Business Platinum card so that will knock it down to $749. 

    Nah. We've been waiting on a new 32" iMac.
    That would be my preference but the chances of that are slim to none at this point. I would have been happy with a M4 iMac 27" but that also doesn't seem to be in the cards. My advice is just get a new studio like the rest of us. 

    Keep the dream alive: imagine a 32” big-screen iMac inside an XDR display enclosure. (Signed, a person who couldn’t wait any longer and settled on an M2 Mac Studio.)
  • Mac Studio gets an update to M4 Max or M3 Ultra

    gwmac said:
    This is what I and a lot of other 27 inch iMac owners have been waiting on.  My only remaining decision is what monitor to buy. I want to get at least a 32 inch so was considering the

    Dell UltraSharp 32 4K Thunderbolt Hub Monitor - U3225QE

    Does this seem like a good monitor to pair with my new studio? I get a $200 Dell credit every 6 months with my Amex Business Platinum card so that will knock it down to $749. 

    Nah. We've been waiting on a new 32" iMac.

    Which will probably be at minimum a 5K display, and if you are lucky 6K, with Thunderbolt 5.
  • Mac Studio gets an update to M4 Max or M3 Ultra

    Wow, the balls on Apple, continuing their bait-and-switch tactics with limited memory options for the low-end Max chip on the Studio. Crazy criminal!
    Intel, AMD, and Nvidia are available on the marketplace…. Good luck with the wattage. It seems Apple can’t win from a memory standpoint: on the low end they put too little, on the high end too much. :)
  • Mac Studio gets an update to M4 Max or M3 Ultra

    netrox said:
    The fact that the M3 Ultra now supports up to 512 GB RAM is pretty amazing. It's great for large-scale LLMs. The M2 Ultra would only support 192 GB at max. 


    Why anyone would dislike your comment is puzzling to me.

    I bought a Surface Laptop 7 with 64 GB RAM (at a small discount, as I’m a Microsoft employee: these can only be bought directly from Microsoft) purely for the purpose of having a Windows machine to run larger LLMs and do AI experimentation on a reasonable budget, knowing there are better-performing options if you have a bottomless budget.

    For the price, it’s a great deal: not many machines can run LLMs that large. It’s not perfect, as memory bandwidth and thermals (running the LLMs purely on CPU makes it a bit warm) appear to be the bottlenecks. Right now the NPU isn’t supported by LM Studio and others, and where you can use the NPU, most LLMs aren’t currently in the right format. It’s definitely an imperfect situation. But it runs 70-billion-parameter LLMs (sufficiently quantized) that you couldn’t run on Nvidia chips at a rational price, though you do need to be patient.
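    The "70B quantized fits in 64 GB" claim checks out with simple arithmetic. A rough sketch (the 4-bit quantization level and the ~10% allowance for KV cache and runtime buffers are illustrative assumptions, not figures from the post):

```python
# Back-of-envelope: resident memory needed to hold quantized LLM weights.
# Assumptions (illustrative, not from the post): 4-bit weights, ~10%
# overhead for KV cache and runtime buffers.

def model_memory_gb(params_billions: float, bits_per_weight: int = 4,
                    overhead: float = 0.10) -> float:
    """Approximate memory in GB to hold a quantized model's weights."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * (1 + overhead) / 1e9

# A 70B model at 4-bit: 35 GB of weights plus overhead, well under 64 GB.
print(round(model_memory_gb(70), 1))   # ≈ 38.5
```

    The same arithmetic shows why 512 GB opens the door to far larger (or less aggressively quantized) models.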

    I’d love to have seen an M4 Ultra with all the memory bandwidth: with 512 GB RAM, presumably being able to use all the CPU cores, GPU cores and Neural Engine cores, it’s likely still memory bandwidth constrained. I would note: my laptop is still perfectly interactive at load, with only the 12 cores. I’d expect far more with one of the Mac Studio beasts.
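    The bandwidth-constraint point can be made concrete: during token-by-token decoding, each generated token streams every weight through memory once, so memory bandwidth divided by weight size gives a rough ceiling on tokens per second. A sketch with illustrative numbers (the 800 GB/s bandwidth figure is an assumption for the example, not a spec from the post):

```python
# Rough decode-speed ceiling for a memory-bandwidth-bound LLM:
# every generated token reads all weights once, so
#   tokens/sec <= bandwidth / weight_size.
# All numbers here are illustrative assumptions.

def max_tokens_per_sec(bandwidth_gb_s: float, params_billions: float,
                       bits_per_weight: int = 4) -> float:
    weight_gb = params_billions * bits_per_weight / 8  # GB of weights
    return bandwidth_gb_s / weight_gb

# e.g. ~800 GB/s of unified-memory bandwidth vs a 70B 4-bit model:
print(round(max_tokens_per_sec(800, 70), 1))  # ≈ 22.9
```

    This is why adding more compute cores eventually stops helping: the ceiling moves only when bandwidth goes up or the model shrinks.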

    We finally have a viable reason for mere mortals to make effective use of 512 GB RAM machines: LLMs. Resource constraints of current hardware are the biggest reason we can’t have a very user-friendly hybrid OS that uses LLMs for natural human-language interaction, with the traditional older-style OS underneath as a super-powerful device-driver and terminal layer. The funny thing is, with powerful enough LLMs you can describe what you need and they can create applications that run within the context of the LLM itself to do it; they just need a little more access to the traditional OS to carry it out in the GUI. I know, because I’m doing that on my laptop: it’s not yet fast enough to run all LLMs locally at maximum efficiency for humans, but it does work, better than expected.
    Out of genuine curiosity, why would one need to run an LLM locally, especially on a maxed-out M3 Ultra? What are the use cases for such a local LLM?

    Don’t forget Apple is also using these new M3 Ultras behind the scenes as servers for Apple Intelligence; that may be a big reason why they decided to support up to 512 GB of UMA memory. Most of us (me included) were speculating that Apple would go to 256-320 GB, so it’s actually a big bonus that Apple is going to 512 GB.