maestro64 said:
It is now coming down to software optimization. Pure processor power is not enough; unless the underlying code is optimized around the processor, users will never see the performance. Even though benchmarks try to work directly with the processor, they can't: they still have to go through the operating system to execute code on it. The only way to eliminate the operating system would be to replace it with the benchmark software itself, which we know is not happening.
This is why Apple has the advantage and always will. Google cannot optimize its software for every processor variant out there.
Apple is shipping the widest ARM cores out there at 6-wide issue; everyone else is still at 4-wide. Nvidia tried 7-wide, but their binary-translation approach made performance too inconsistent.
Since core complexity grows much faster than linearly with issue width, Apple is also spending roughly twice as much silicon per core as the competition, or at least was as of last year.
Point being, it's not just some nebulous whole-banana optimization: Apple is shipping the most advanced ARM CPU core in any phone, period, regardless of OS. For the Exynos, AnandTech does mention it was running a pre-release scheduler, so some of the gap could be software, but even perfectly optimized it would not match Apple's wide core per clock.
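To put a rough number on why issue width is so expensive, here is a back-of-envelope sketch. All figures are illustrative assumptions, not die measurements: it uses the common rule of thumb that out-of-order scheduling and bypass structures grow roughly with the square of issue width (the post argues the growth can be even steeper).

```python
# Back-of-envelope sketch: relative cost of a wider issue width.
# All numbers are illustrative assumptions, not real die measurements.

def relative_scheduler_cost(width, baseline=4, exponent=2.0):
    """Scheduler/bypass cost relative to a baseline-wide core.

    exponent=2.0 models the rule of thumb that comparator and bypass
    structures grow ~quadratically with issue width; the true growth
    is design-dependent and may be steeper.
    """
    return (width / baseline) ** exponent

print(relative_scheduler_cost(6))  # 6-wide vs 4-wide -> 2.25
print(relative_scheduler_cost(7))  # 7-wide vs 4-wide -> 3.0625
```

Under that assumed quadratic scaling, a 6-wide core pays well over twice the scheduling cost of a 4-wide one, which lines up with the "twice the silicon per core" observation above.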
spheric said:
jamiel said:
Does that slight size increase mean existing bands won't fit either model?
The mm figure is the lug-to-lug width, not the display size.
ascii said:
I am interested in whether a fast eGPU gets bottlenecked by the 7 W processor. I.e., if you install the same game on the new MBA and on a MBP and use the same eGPU, does the MBA get noticeably fewer fps?
Sure it would; you can already see this with the 13" vs. 15" CPUs, and the 13" Pro sets a higher bar for not bottlenecking the GPU, with four cores and higher clocks. It depends on the task/title, of course, but a 7 W CPU will certainly bottleneck a modern mid-to-high-end GPU a fair bit.
nunzy said:iPhone is better than Samescam? What a surprise. Not.
Tailosive bruh here is mentioning FOUR GPU stacks... Let's say TB4 doubles bandwidth, as every TB generation has; that still seems like a stretch unless you're connecting each stack to the host with its own direct cable. Otherwise the first link has to carry all the bandwidth for the next three GPUs plus every other downstream component.
Just from a bandwidth-flow standpoint, this seems nuts.
One option I can think of is not using TB4 at all: just go native PCIe with as many lanes as it needs to make this work.
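To make the daisy-chain argument concrete, here's a quick bandwidth tally. The numbers are assumptions: TB3 is 40 Gbps per link, and the sketch grants the hypothetical premise above that TB4 doubles that to 80 Gbps. With four chained enclosures, the single link between the host and the first enclosure still has to carry traffic for all four GPUs.

```python
# Bandwidth tally for daisy-chained eGPU enclosures.
# Assumptions: TB3 link = 40 Gbps; hypothetical TB4 = 80 Gbps
# (the "doubles every generation" premise, not a published spec).

TB3_GBPS = 40
TB4_GBPS = 2 * TB3_GBPS  # 80, granting the doubling premise

def per_gpu_bandwidth(link_gbps, num_gpus):
    """Best-case fair share per GPU when all traffic funnels through
    the single link from the host to the first enclosure."""
    return link_gbps / num_gpus

print(per_gpu_bandwidth(TB4_GBPS, 4))  # 20.0 Gbps per GPU
# For reference, PCIe 3.0 x4 (a typical eGPU allocation) is roughly
# 32 Gbps, so even a doubled link leaves each of four chained GPUs
# well short of a single direct connection.
```

That shortfall, before even counting displays, storage, and protocol overhead on the same chain, is why going native PCIe with dedicated lanes per GPU looks more plausible than chaining four enclosures off one port.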