apple4thewin

About

Username
apple4thewin
Joined
Visits
90
Last Active
Roles
member
Points
1,983
Badges
2
Posts
476
  • Apple is planning to make enormous design changes to iOS 19 & macOS 16

    I know some people will hate this change, but I think this is great. By 2020, most UI and app-icon updates had become more professional/basic and flat, so a refresh would be good. The few times I trialed the Vision Pro, I enjoyed the glassier, polished 3D look of its applications, and even the current/new Control Center looks good, although some adjustments have to be made (like why do the 1st and 3rd pages both repeat the connectivity controls?).
  • Everyone is a loser in the Apple Intelligence race

    Well-written article, and I agree: most people may only benefit from the grammar checks in Writing Tools (I wish there were a faster way to invoke them than triple-clicking the section) and from an overall cleaner look, one most people wouldn't even have noticed was produced by machine-learning software.
  • Everyone is a loser in the Apple Intelligence race

    Because I have to prioritize my time, and despite wanting to read a thoughtful article on the subject, all I have time for is this summary ¯\_(ツ)_/¯

    “The article discusses the state of Apple Intelligence in the AI race, highlighting that while Apple’s position is not ideal, it is not as far behind as perceived. The author argues that the current AI landscape, dominated by large corporations, is flawed and potentially detrimental to the information industry. Despite Apple Intelligence’s useful features and privacy focus, it is criticized for being behind competitors, who often prioritize flashy releases over substance.”

    I think this illustrates my thoughts on the matter and the current state of things. And again, I don’t have time for much else. 
    I mean, you can just go to "Show Reader," hit Summarize, and get Apple's AI summary:

    "The AI race is difficult to judge due to constantly shifting victory conditions and the nature of AI itself. While Apple faces criticism for its perceived lag in AI development, its on-device, private, and secure approach offers advantages over competitors. Despite Apple's polished and useful AI tools, the company's cautious approach and lack of sensational claims have led to a perception of being behind in the race."
  • iPhone 17 Pro Max may get thicker than the iPhone 16 Pro

    The more I see the mock iPhone 17 photos, the more the design grows on me and starts to look normal.
  • Mac Studio gets an update to M4 Max or M3 Ultra

    netrox said:
    The fact that the M3 Ultra now supports up to 512 GB RAM is pretty amazing. It's great for large-scale LLMs. The M2 Ultra only supported 192 GB at max.


    Why anyone would dislike your comment is puzzling to me.

    I bought a Surface Laptop 7 with 64 GB RAM (at a small discount, as I'm a Microsoft employee; these can only be bought directly from Microsoft) purely for the purpose of having a Windows machine to run larger LLMs and do AI experimentation on a reasonable budget, knowing there are better-performing options if you have a bottomless budget.

    For the price, it's a great deal: not many machines can run LLMs that large. It's not perfect: memory bandwidth and thermals (running the LLMs on pure CPU makes it a bit warm) appear to be the bottlenecks. Right now the NPU isn't supported by LM Studio and others, and where you can use the NPU, most LLMs aren't currently in the right format. It's definitely an imperfect situation. But it runs 70-billion-parameter LLMs (sufficiently quantized) that you couldn't run on Nvidia chips at a rational price, though you do need to be patient.
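    As a back-of-envelope sketch of why a sufficiently quantized 70B-parameter model fits in 64 GB while a full-precision one doesn't (all numbers here are illustrative assumptions, including the hypothetical overhead factor, not measured values):

    ```python
    # Rough memory-footprint estimate for holding an LLM's weights in RAM.
    # Illustrative arithmetic only; real runtimes vary by format and backend.

    def model_memory_gb(params_billions: float, bits_per_weight: float,
                        overhead_factor: float = 1.2) -> float:
        """Estimate RAM needed for a model's weights.

        overhead_factor is an assumed allowance for KV cache,
        activations, and runtime buffers.
        """
        bytes_per_weight = bits_per_weight / 8
        weight_gb = params_billions * 1e9 * bytes_per_weight / 1e9
        return weight_gb * overhead_factor

    # A 70B model at 4-bit quantization fits comfortably in 64 GB...
    print(f"70B @ 4-bit:  ~{model_memory_gb(70, 4):.0f} GB")
    # ...while the same model at 16-bit precision needs far more.
    print(f"70B @ 16-bit: ~{model_memory_gb(70, 16):.0f} GB")
    ```

    The same arithmetic shows what 512 GB of unified memory buys: even a 400B-class model at 8-bit would fit, which is exactly the "large-scale LLMs" case quoted above.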

    I’d love to have seen an M4 Ultra with all the memory bandwidth: with 512 GB RAM and presumably the ability to use all the CPU cores, GPU cores, and Neural Engine cores, it’s likely still memory-bandwidth constrained. I would note: my laptop is still perfectly interactive at load, with only its 12 cores. I’d expect far more from one of the Mac Studio beasts.
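    To see why memory bandwidth is the constraint, a common rule of thumb is that each generated token streams roughly the whole set of weights through memory once, so bandwidth divided by model size gives a throughput ceiling. The bandwidth and model-size figures below are approximate assumed numbers, not benchmarks:

    ```python
    # Back-of-envelope ceiling on LLM decoding speed when memory-bound.
    # Assumption: one full pass over the weights per generated token.

    def tokens_per_sec_upper_bound(mem_bandwidth_gb_s: float,
                                   model_size_gb: float) -> float:
        """Theoretical upper bound on tokens/sec for memory-bound decoding."""
        return mem_bandwidth_gb_s / model_size_gb

    # A ~40 GB quantized 70B model on a laptop-class part (~135 GB/s
    # assumed) versus an M3 Ultra (~819 GB/s): the Ultra's bandwidth
    # alone buys roughly a 6x higher ceiling, which is why "patience"
    # is the cost of the budget option.
    for name, bw in [("laptop-class", 135.0), ("M3 Ultra", 819.0)]:
        print(f"{name}: <= {tokens_per_sec_upper_bound(bw, 40.0):.1f} tok/s")
    ```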

    We finally have a viable reason for mere mortals to make effective use of 512 GB RAM machines: LLMs. Resource constraints of current hardware are the biggest reason we can’t have a very user-friendly hybrid OS that uses LLMs for natural human-language interaction, with the traditional older-style OS underneath acting as a super-powerful device-driver and terminal layer. The funny thing is that with powerful enough LLMs, you can describe what you need and they can create applications that run within the context of the LLM itself; they just need a little more access to the traditional OS to carry it out through the GUI. I know, because I’m doing that on my laptop: it’s not yet fast enough to run all LLMs locally at maximum efficiency for humans, but it does work, better than expected.
    Out of genuine curiosity, why would one need to run an LLM locally, especially on a maxed-out M3 Ultra? What are the use cases for such a local LLM?