aptfx

About

Username: aptfx
Joined:
Visits: 4
Last Active:
Roles: member
Points: 21
Badges: 0
Posts: 8
  • Craig Federighi ignited Apple's AI efforts after using Microsoft's Copilot

    This "is behind" thinking is completely wrong. The idea behind it is that companies like OpenAI, Meta, Google and many others are competing on foundational AI models, and that Apple is late to that game.

    The real competition Apple faces is in the use of AI features. Foundation models get hyped because all of this is new, but in the long run they are not really useful as end-user products. There are already lots of very good open-source foundation models. Llama 3 is very comparable to GPT-4, and there are MANY other options. Fine-tuning those models for specific purposes and building actual product features around them is the real work to be done.

    It's pointless for Apple to compete on foundation models.
  • The new Apple Silicon Mac Pro badly misses the mark for most of the target market

    danox said:

    With the introduction of Apple Vision Pro, the redesign of a lot of their in-house software and hardware across the Mac, iPhone, iPad, and many other things they've done in recent times with Apple Silicon, it's obvious Apple is on a different path, and it doesn't include Intel, AMD or Nvidia whatsoever. After all, could the Apple Vision Pro have been made while tied to those three companies? No, it could not.
    Is it obvious? I think those who find it obvious that Apple never intends to deliver solutions for those asking for multi-GPU support right now are too quick to make presumptions.

    When thinking about this stuff, it is always important to look for the simplest explanation for why things are the way they are.

    Since the introduction of Apple Silicon, many things are now part of the chip design that formerly were not. Even though Apple Silicon impressed in lots of ways, there were still parts that had not been fully scaled out. Think of the very limited display support of the M1. Think of the growing but still limited RAM support. Think of the number of Thunderbolt controllers. Think of the on-chip video encoding units on the M1 vs. the M2.

    I think most expectations would have led to the M3 being available at WWDC 2023. We know now that this "next generation" will come later and that we first get a refined version of the M2 chips. There are limits to what Apple can put into these refinements: a bit more power from more cores, a bit more RAM and such things, but not much that is completely new.

    The M2 does not have any on-die capability to address dedicated off-chip GPU address spaces from the Apple Silicon CPU cores. This isn't something you can add just by refining the M2.

    Still, Apple was late in giving its answer for an Apple Silicon Mac Pro. If it hadn't come, most people would have speculated about the death of the Mac Pro. Now that it's here, many people still speculate that it means the death of the Mac Pro because of the missing multi-GPU support. This comes from the belief that Apple just needs to say "oh, let's 'allow' multiple GPUs" and then it is done. It completely misses the fact that at least one further chip generation is necessary to reach that point.

    So speculation about the death of the Mac Pro right after the introduction of a new variant of that product is very, very premature. It's a largely nonsensical assumption. We could see this resolved with the M3, though that's not certain, since we just don't know which new core features will be in the M3.
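    To make the current state of the platform concrete, here is a minimal Swift sketch that only queries what Metal exposes today; it makes no assumptions about future chips. On current Apple Silicon Macs it reports a single built-in GPU with unified memory and no removable (eGPU-style) devices.

    ```swift
    import Metal

    // List every Metal device macOS can see and what it reports about itself.
    // On current Apple Silicon machines this is one built-in GPU with unified
    // memory; off-chip GPUs are not enumerated at all.
    for device in MTLCopyAllDevices() {
        print("GPU:            \(device.name)")
        print("unified memory: \(device.hasUnifiedMemory)")
        print("removable:      \(device.isRemovable)") // true only for eGPUs on Intel Macs
    }
    ```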
  • The new Apple Silicon Mac Pro badly misses the mark for most of the target market


    My wild guess is that the GPU cannot access an external bus. If all the connections between the GPU, CPU and RAM are on the chip or substrate, there is no reason to connect them to external pins. Finding a way to fan out all the signals to external pins may sound trivial, but it is not, so I doubt those signals are run out to external pins if they don't have to be. My guess is Apple doesn't publish the pinouts of their custom chips, but has anyone reverse-engineered their functions yet?
    No, not really. In simple terms: the problem is that for off-chip GPUs to work, you need a way to shove things from CPU RAM to GPU RAM (and back). And it's not only CPU RAM vs. GPU RAM… it's CPU RAM vs. GPU 1 RAM vs. GPU 2 RAM vs. …

    This "addressing" feature is something that requires work in the Apple Silicon architecture itself (see some of the patents already mentioned here).

    Since Apple is as silent as ever on this, we do not know when, or even if, those features will appear. We can already see that it was not something to stuff into the already designed M2 line, so we at least have to wait for the M3, or maybe longer.

    Even if Apple solves this, it doesn't mean it will just be a feature to support AMD or Nvidia. It would be support for "GPU computation unit extensions". Apple might then also start providing extension cards with compute-unit chips that are proprietary to, but also perfectly optimized for, the Apple Silicon platform in hardware and software.
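    As a rough illustration of what "shoving things between address spaces" means in practice, here is a minimal Metal sketch in Swift, using only today's API: a shared buffer that the CPU and GPU both see directly (unified memory), and a private buffer standing in for memory the CPU cannot touch, which has to be filled by an explicit blit copy; that transfer is exactly what every off-chip GPU would need.

    ```swift
    import Metal

    // Minimal sketch of the two memory situations discussed above.
    guard let device = MTLCreateSystemDefaultDevice(),
          let queue = device.makeCommandQueue() else {
        fatalError("No Metal device available")
    }

    let length = 1024 * MemoryLayout<Float>.stride

    // Unified memory: one allocation, visible to the CPU and the GPU alike.
    let shared = device.makeBuffer(length: length, options: .storageModeShared)!

    // GPU-only memory: the CPU has no pointer into it. Anything placed here
    // must be transferred explicitly, which is the copy an off-chip GPU needs.
    let gpuOnly = device.makeBuffer(length: length, options: .storageModePrivate)!

    let commands = queue.makeCommandBuffer()!
    let blit = commands.makeBlitCommandEncoder()!
    blit.copy(from: shared, sourceOffset: 0,
              to: gpuOnly, destinationOffset: 0,
              size: length)
    blit.endEncoding()
    commands.commit()
    commands.waitUntilCompleted()
    ```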
  • Vulnerabilities found in Swift repository left millions of iPhone apps exposed

    Your title is very misleading. The "Swift repository" would be the Git repository of the Swift programming language. Vulnerabilities in that repository could happen too, of course, but that's not what actually happened here. CocoaPods is a separate entity: a package manager with packages from an undefined, large number of sources, so there is much less control over what code gets in there, just as with ANY other package manager (npm…). Nearly any website, and even many apps on iOS, Android, macOS and Windows, are vulnerable because they in some way use tools or packages from npm. It's the obligation of the developer of an app to check this. That's the price you pay for relying on open source.
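    One practical way for a developer to keep that control, shown here with Swift Package Manager rather than CocoaPods (a Podfile can pin exact versions the same way), is to lock every dependency to a reviewed version. A minimal Package.swift sketch; the app name and the Alamofire dependency are only examples:

    ```swift
    // swift-tools-version:5.9
    import PackageDescription

    // Illustrative manifest: the `exact:` requirement keeps the build on one
    // audited version instead of whatever the package author published last.
    let package = Package(
        name: "MyApp",
        dependencies: [
            .package(url: "https://github.com/Alamofire/Alamofire.git", exact: "5.8.1")
        ],
        targets: [
            .target(
                name: "MyApp",
                dependencies: [.product(name: "Alamofire", package: "Alamofire")]
            )
        ]
    )
    ```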