brianus

About

Username
brianus
Joined
Visits
26
Last Active
Roles
member
Points
77
Badges
0
Posts
183
  • How to turn off Apple Intelligence -- and why you need to keep turning it off

    brianus said:
    Another weird take from the editors... don't get me wrong, this article is a public service and I'll be following its advice if and when I'm ever forced to update to iOS 18 or the latest macOS. But why focus on storage as the issue? I thought most of the resource consumption of an on-device LLM would be in RAM, not storage. The fact that any device that runs Apple Intelligence must have 8GB minimum is a clue to just how much you're losing when you turn this thing on. Even if it's only taking up, say, 3GB, that's 3GB I cannot spare on my 16GB MacBook, and even less so on an iPad, where for most models that would almost halve the available memory. Do I want webpages and apps reloading more frequently, background uploads failing, and everything feeling more sluggish just so I can have an LLM of so far very questionable utility running at all times?
    The bulk of the complaints on it online and on social are about the storage it takes.
    Lol there’s the problem with modern journalism, letting “online and social” lead reporting. Objectively, 9GB or whatever is a much smaller chunk of storage (are there any Apple Intelligence-capable devices with less than 256GB of storage standard?) than the RAM overhead is of memory. Folks whose devices are so larded up they couldn’t even install a moderately large app can complain all they like, but they can also just delete stuff or buy a device with more storage if AI is important to them. There is literally no recourse if it’s eating up memory, though, at least on iOS/iPadOS.
  • How to turn off Apple Intelligence -- and why you need to keep turning it off

    Another weird take from the editors... don't get me wrong, this article is a public service and I'll be following its advice if and when I'm ever forced to update to iOS 18 or the latest macOS. But why focus on storage as the issue? I thought most of the resource consumption of an on-device LLM would be in RAM, not storage. The fact that any device that runs Apple Intelligence must have 8GB minimum is a clue to just how much you're losing when you turn this thing on. Even if it's only taking up, say, 3GB, that's 3GB I cannot spare on my 16GB MacBook, and even less so on an iPad, where for most models that would almost halve the available memory. Do I want webpages and apps reloading more frequently, background uploads failing, and everything feeling more sluggish just so I can have an LLM of so far very questionable utility running at all times?
  • Heavily upgraded M3 Ultra Mac Studio is great for AI projects

    Why on earth would I want to run an AI model?  Locally or otherwise?
    I’m sure this was meant to be snarky, but for me it’s a genuine question: what are the envisioned real-world use cases? What might a business (even a home one) use a local LLM for?

    The article mentions a hospital in the context of patient privacy, but what would that model actually be *doing*?
  • M4 MacBook Air is imminent, iPad Air to follow shortly

    tht said:
    It's probably just a coincidence. There is no such thing as a step function transition of all the product lines. Every product line has their own schedule, and they are updated with the most appropriate hardware per marketing and component availability.

    Well, I'm not talking about all the lines, just two of them. If they have a glut of M3 chips because they'd been making them for the MacBook Air (and nothing else, for at least six months), then suddenly those chips will be available for the iPad Air (and not needed for anything else) once the MBA gets updated to M4. I don't know how many they made for the MBA, but it's their most popular Mac.
  • M4 MacBook Air is imminent, iPad Air to follow shortly

    tht said:
    ApplePoor said:
    Since the M3 was a real challenge to make, I wonder if the M2 models will continue as the low-cost versions when the M4 chips are installed? They could just ignore the fact that the M3 exists.
    Hopefully, they waterfall the M3 MBA 16/256 model to $1000.

    The emphasis should be on "the M3 was a challenge to make". It's been about 18 months since N3B began mass-producing chips. By now, costs should have been amortized and production smoothed out so that the M3 is just as cheap as the M2. This all assumes you believe the rumors of TSMC N3B being a problem. If it was, that was probably 24 months ago when it was ramping production. Now it's likely not an issue beyond the $/mm² inherent in the process. After all this time, probably not an issue at all.

    If Apple keeps an M2 MBA at $1000, it's going to be for segmentation or work effort reasons, not N3B.
    It’s interesting, though, that the MBA, the only M3 device left in their lineup (despite still having several M2 products), is losing that chip at, as rumored, possibly the same time as or very shortly before the iPad Air gets it. Makes me think that for whatever reason they don’t want to maintain two product lines requiring these chips. The iPad Air was also updated to the A17 just after they stopped using that chip in any other products… maybe an analogous situation? Maybe they’ve already made all the N3B chips they’re ever going to make, and are just shoving them into two of their less popular products until they run out?