MacQuadra840av said: Finally, an article that clearly points out the unfortunate limitations of the M1. Everyone is so blinded by tunnel vision over 3x performance that they are completely missing the fact that the M1 is a low-end, base-model CPU with fewer features than the models it replaced.
It was not long ago that all the commenters were complaining about soldered memory, soldered storage, no upgrades, etc. All Apple has to do is slap an Apple logo on a pig and the fanatics think it is the best thing in the world. It wasn't long ago that people were complaining about 16GB of RAM in the MacBooks, and then they cheered when Apple bumped it up to 32GB and 64GB. Now suddenly they are all happy that the M1 is capped at 16GB? Suddenly they are excited that the integrated graphics in the M1 are faster than the integrated graphics on the Intel Macs, but still much slower than discrete graphics? WTF?
Could you imagine if Apple introduced an iMac with only 16GB of RAM (instead of 128GB), 2TB of storage (instead of 8TB), two USB-C ports (instead of 4 USB/2 Thunderbolt), and integrated graphics driving a 27+" 5K display? It would be a joke! Or a Mac Pro with those specs? Suddenly people think a 16GB M1 can do anything? Not when you throw a huge graphics file at it. Let's not forget about the excessive reads/writes that are occurring on the M1 Macs, wearing out the flash storage prematurely.
If you don't understand ARC memory management, stop talking. You sound like the people who can't differentiate between core count/GHz and actual performance. Memory pressure on a 16GB M1 is mostly lower than on a 32GB Intel-based product.
The SSD write issues have been debunked. And since you're such a sucker for Kool-Aid: Cinebench is an invalid benchmark, as it uses Intel's Embree renderer, which, oddly, is optimised only for x86 SIMD (SSE4/AVX2 - not AVX-512, which down-clocks quickly under load) and not for the M1's custom SIMD units beyond standard NEON.
cloudguy said: tmay said: I don't imagine that Apple has concerns one way or the other. Apple is likely at a point where it has in-house capability and has licensed the necessary IP to create its own proprietary ISA, while also being large enough to create the design and validation tools needed to fab at TSMC, or whomever.
I would prefer that ARM reside in Japan or the UK, and not Taiwan, simply for national security reasons.
Another thing: basic R&D like this isn't Apple's deal. It is amazing that so many people are convinced that it is. In fact, Apple doesn't do originality. Instead they take existing technology - stuff that has been around for a while and has been proven - and incorporate it into their existing design language. At most, one could say that they excel at taking parts innovated or improved by others and using them to make great new products. But the truth is that nothing in Apple's present existence or their previous history indicates that they are capable of coming up with a "new" CPU design, or even a major advance on an existing design. Even their own CPUs, in addition to being based on the existing ARM design, were the result of acqui-hiring PA Semi. Even for something MUCH SIMPLER, such as a fingerprint scanner, they had to buy a company that already had the tech, whereas Qualcomm and Samsung created their own using their own R&D departments (which is why they were able to make under-the-screen fingerprint scanners so quickly).
You also seem to assume a CPU ISA is beyond them when their SoCs already contain many first-party ISAs (graphics and neural being the obvious ones). Apple can (and probably already does) create a direct relationship between Swift, the OS, and Apple Silicon that eliminates the need for a third-party CPU ISA at all. Probably time to implement that, though they could already be emulating AArch64 anyway.
dk49 said: If ARM has its own AI engine now, what does it mean for Apple's Neural Engine? Is it possible for Apple to completely discard ARM's AI engine in their processors, or will they have to build theirs on top of ARM's? If so, would that not break ARM's licence?
This is my bone of contention with Cinebench, as it ignores a teraop of SIMD compute from the AMX units. The M1 should be smashing everything but Threadripper/Epyc on Cinebench, but as its Embree renderer is controlled by Intel, that optimisation probably won't happen.
eGPUs are not the solution. Benchmarks aside, they never delivered in terms of application performance.
Apple has a long road ahead convincing developers to fully re-architect their products for unified memory, so releasing their own discrete GPU options would work better in the short term. Of course, if they do that, there's less incentive for devs to move to UMA.
The longer they leave it, the more the competition steps up. The buying public can't see beyond marketing specs, so genuine advantages are already being eroded. Single-core performance has been matched by Intel's 11th-gen i7/i9, so it'll be interesting to see how much Apple has left in the tank.
shareef777 said: I'm confused - what's the difference in how Roblox and Fortnite were treated? Both have in-game currency that must be purchased via IAP so that Apple gets its 30% fee. I don't see how they were treated any differently.