Last Active
  • Compared: M1 Max 16-inch MacBook Pro versus Mac Pro

    If you’re going to compare the highest-end new MacBook Pro, with its 10-core M1 Max, 32-core GPU, and 16-core Neural Engine, then at least compare it to a Mac Pro with a Duo W6900X, Afterburner, 28 cores, and 384GB of DDR4 memory.

    Show people why Apple demonstrated Logic Pro on stage on a fully loaded Mac Pro with 1.5TB of DDR4, a 28-core Xeon, and peak Duo GPGPU, or at the very least cite what a fully loaded, latest-offering Mac Pro can do and how far the new MacBook Pro had to go to even be in the ballpark.

    Studios buy the Mac Pro for music production and post-production, never mind 3D modeling and engineering, because that expansion will be viable for the next 7 years and pay for itself tenfold.

  • Next-gen Apple TV could output 120Hz video, beta code suggests

    1. How many existing programs (TV shows or movies) have already been recorded in 120 FPS so far? It can't be too many, since I could find only 3 in a 3-minute web search.
    2. Have Apple's programs for Apple TV+ been (secretly?) recorded in 120 FPS?
    3. For countries with 100 Hz power limitations, would Apple TV be limited to 100 FPS there? (To match the TVs?)
    4. I've heard of some computer games that can do 120 FPS, but that requires special video cards. If this rumour is true, and the chip in the Apple TV can render at 120 Hz, does that imply that Apple's next M chips will have the ability to render at 120 FPS?
    5. I remember when Ted Turner wanted to colourize his back catalog of movies. He did some. It wasn't too well received. Will people want old non-120Hz shows to be 120-ized using similar technology? I would think that animated films could "remaster" their programs more easily than live-action shows, especially when the movie is generated from computer software. It wouldn't be too hard to get Pixar films re-rendered at 120 FPS.
    They're all recorded at 120Hz or higher; they're just bounced down to 60Hz when released to the public. Movies are being recorded to 8K/16K standards at much higher FPS, but you don't see the original cuts. Or do you think what you see on the TV was a 1:1 copy of the original camera/audio/FPS all the way to the theater or home?

    Nearly every model of TV is now standardized on 120Hz and upscaling to 240Hz. Wake up. 120Hz isn't that impressive, just a necessary bump to make sure the new Apple TV is viable.

    Apple Arcade is expanding considerably. Anyone who thinks they'll play it without an Apple TV must want to stream directly from their Mac to the TV. I'll prefer the Apple TV to house the games and streaming TV services, thank you very much, especially as older quality TVs get locked out.
  • AMD Radeon RX 6700 XT GPU may launch at March 3 event

    elijahg said:
    zimmie said:
    Right now, no one is talking about Apple's upcoming discrete GPUs. Apple has not announced them but they pretty much have to be released this year. The release of Apple discrete GPUs will be an extremely important event. It is clear that Apple can compete with Intel/AMD in CPUs but how will it compare on GPUs? Apple's embedded GPUs are at best 1/10th the speed of discrete GPUs. That's actually pretty impressive for a mobile GPU built into an iPhone or iPad but it is not going to impress anyone buying an iMac, let alone a Mac Pro. The only other choice Apple has is to write Apple Silicon drivers for an AMD discrete GPU. That seems counterproductive given Apple's stated intention to build its own CPUs and GPUs going forwards.
    Your "1/10th the speed" statement is incorrect. The M1's GPU can perform 2.6 TFLOPS with eight cores. That's 325 GFLOPS per core.

    The Radeon RX 5700 XT 50th Anniversary can get 10.1 TFLOPS with 40 compute units, so 253 GFLOPS per compute unit. This is actually the best performance per compute unit across the Radeon RX 5000 line.

    The Radeon RX 6000 series is technically released, but they're extremely rare right now. I'm not going to count them until stores can go longer than an hour without selling out. Even so, they get 288 GFLOPS per compute unit.

    GPU compute performance scales linearly with core count. Power performance scales a bit worse because high-core interconnects have to be more complicated, but not hugely so. 32 of Apple's A14/M1 GPU cores would get about 10.4 TFLOPS, beating the best AMD consumer cards you could get last generation and beating the Nvidia RTX 2080 (also 10.1 TFLOPS). That would still have low enough power draw to fit in a laptop, though admittedly, not a laptop Apple is interested in making. An iMac could easily have four times the M1's GPU.
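    The per-unit arithmetic above is easy to check. A minimal sketch in Python, using only the TFLOPS and core counts quoted in this thread; the linear-scaling step is the commenter's own stated assumption, not a guaranteed property of real hardware:

    ```python
    # Per-core GPU throughput, using the figures quoted above.
    def gflops_per_unit(total_tflops: float, units: int) -> float:
        """Average GFLOPS per GPU core / compute unit."""
        return total_tflops * 1000 / units

    m1_per_core = gflops_per_unit(2.6, 8)        # Apple M1: 2.6 TFLOPS over 8 cores
    rx5700xt_per_cu = gflops_per_unit(10.1, 40)  # RX 5700 XT: 10.1 TFLOPS over 40 CUs

    # Naive linear scaling of M1-class cores to 32 (the commenter's assumption;
    # real designs lose some throughput to interconnect overhead).
    scaled_32_core_tflops = 32 * m1_per_core / 1000

    print(round(m1_per_core))               # 325 GFLOPS per M1 core
    print(round(rx5700xt_per_cu, 1))        # 252.5 GFLOPS per RX 5700 XT CU
    print(round(scaled_32_core_tflops, 1))  # 10.4 TFLOPS for a hypothetical 32-core part
    ```

    This reproduces the 325 GFLOPS/core, ~253 GFLOPS/CU, and ~10.4 TFLOPS figures in the comment.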
    FLOPS isn't a great measurement of GPU power, in graphics processing at least, since memory bandwidth and other factors matter too. But the top-end Nvidia 30x0 GPUs range from 12.7 to 35.7 TFLOPS. AMD GPUs, while better than they were a few years ago, are still far behind; the RX 6900 gets 23 TFLOPS. Texture fill rate is also important, and the M1 gets only 82GT/sec vs 460GT/sec for the RTX 3090 and 584GT/sec for the RX 6900. This is mainly because its memory bandwidth is a fraction of a discrete GPU's: 68GB/sec vs 500GB+/sec. Adding more cores to improve the texture fill rate hits diminishing returns pretty quickly due to shared buses and the like, so just using more GPU cores won't help things as much as it initially appears.
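    Putting the fill-rate and bandwidth figures above into ratio form makes the gap concrete. A quick sketch using only the numbers quoted in this comment:

    ```python
    # Ratios derived from the figures quoted above (M1 vs. discrete GPUs).
    m1_fill, rtx3090_fill, rx6900_fill = 82, 460, 584  # texture fill rate, GT/s
    m1_bw, discrete_bw_floor = 68, 500                 # memory bandwidth, GB/s

    print(f"RTX 3090 fill-rate advantage: {rtx3090_fill / m1_fill:.1f}x")
    print(f"RX 6900 fill-rate advantage:  {rx6900_fill / m1_fill:.1f}x")
    print(f"Bandwidth gap: at least {discrete_bw_floor / m1_bw:.1f}x")
    ```

    The fill-rate gap (~5.6x and ~7.1x) tracks the bandwidth gap (~7.4x or more) far more closely than the raw TFLOPS numbers do, which is the commenter's point about FLOPS being an incomplete metric.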

    Apple is screwing its own customers by still refusing to use (or permit use of) Nvidia cards. There are also plenty of hardware features the Apple GPUs don't support, and unlike with CPUs, the architecture of the card doesn't really matter, since the drivers handle the Metal/OpenGL/Vulkan/DirectX translation, so architecture has much less bearing on performance. The M1 also relies on the Metal shading language, some parts of which even Unity doesn't support, presumably because the userbase is so small it's not worth it.

    M1 hardware is impressive, but much like Force Touch of yore, if the userbase is too small, developers won't put effort into supporting its more obscure parts. Hardware is useless without software to go with it, and software needs developers, who need marketshare. And Apple's disinterest in marketshare (stagnant, a symptom of Macs being borderline overpriced) means the support isn't likely to improve anytime soon, unfortunately.
    Those TFLOPS figures are FP32 single precision, and you're comparing the RTX 3090 against the RX 6900.

    The irony with those numbers is that AMD's FP16 and FP64 stomp the RTX 30 series. There's a reason Nvidia isn't winning over gamers and people want the Radeons. But since we'll soon be passing 10 million custom Zen 2/Radeon SoC APUs for the PS5/Xbox Series X, it's been very difficult to get newer cards, and when they arrive they're sold immediately in large orders. AMD prioritized its TSMC allocation to EPYC, game consoles, Zen 3, and then the RX 6000 cards.

    And right now the ones that are available from OEMs are absurdly overpriced, right alongside Nvidia's own top models.

    AMD vowed to sell the RX 6700 XT on its own site and to start making more non-OEM [reference] models of the whole RX 6000 series available directly from AMD online.

    The fact is Apple will never have a discrete GPU that touches AMD or Nvidia, because of the minefield of IP required to do so. As it stands, they're licensing the vast majority of their GPU IP [including tiling, which both AMD and Nvidia have had for quite some time] from Imagination Technologies. The inventor of the 'tiling' Apple bragged about is now at AMD, and it's nothing AMD and Nvidia haven't already had for several years.

    MCM is coming to RDNA 3.0 and CDNA 2.0 cards, and no one, not even Nvidia, will be ready. The Xilinx merger looms even larger for AMD now. They're already incorporating Xilinx IP as well [as of now Lisa Su hasn't said for which product lines], so it would be wise of Apple to extend eGPU support for future M-series Macs.

    We already know MCM is coming this year with the MI200 for CDNA 2.0 HPC/supercomputing, along with Frontier and Zen 4 EPYC Genoa, whose specs have also been partially leaked.

    Of course, if you bought a current M-series Mac you're stuck without eGPU support, and with all this power coming, Apple had better re-enable it in a later version of Big Sur or expect sales to start waning. As it stands, lots of refurbs are on the Store and M-series machines are selling at steep discounts, two things Apple rarely does, so I doubt they're booming in sales after the initial push.

    As it stands, the Intel Mac mini is still available for sale.

    And if you want eGPU, that's your option, alongside an Intel MacBook Pro.

    If you don't know what MCM is, read up on the patent; it's quite interesting.

    There are roughly another dozen or so patents covering Neural Engines, machine learning, tensor-calculus-specific operations, ray tracing, and more that are of huge interest; you can dig them up on Justia and view the full PDFs at the US Patent Office.
  • Apple Silicon iMac & MacBook Pro expected in 2021, 32-core Mac Pro in 2022

    blastdoor said:
    ph382 said:
    rob53 said:
    Why stop at 32 cores?

    I don't see even 32 happening for anything but the Mac Pro. I saw forum comments recently that games don't and can't use more than six cores. How much RAM (and heat) would you need to feed 32 cores?

    The 32 core Threadripper 3970x has a TDP of 280 watts on a 7nm process. It has four DDR4 3200 RAM channels.

    Based on comparisons of the M1 to mobile Ryzen, I would expect an ASi 32 core SOC to have a TDP much lower than 280 watts. 

    I bet a 32 core ASi SOC on a 5nm process could fit within the thermal envelope of an iMac Pro. 
    It has 8, not 4, DDR4-3200 ECC RAM channels, and those are limited by the OEMs. Zen has supported 2TB of DDR4 RAM since Zen 2. And Threadripper is limited to 32 cores at present because they haven't moved to the 5nm Zen 4 process with RDNA 3.0/CDNA 2.0-based solutions on a unified memory plane.

    Those 32 cores would be a 16/16 big/little split; combine that with the GPU and other co-processors and you have either a much larger SoC or very small cores.

    Threadripper 3 arrives this January along with EPYC 3 Milan at 64 cores/128 threads. The next releases, as Lisa Su has stated and their ROCm software has shown, will integrate Xilinx co-processors into the Zen 4/RDNA 3.0/CDNA 2.0-based solutions and beyond.

    Both AMD and Xilinx have Architecture licenses to ARM and have been designing and producing ARM processors for years. Xilinx itself has an arsenal of solutions in ARM.

    32 Cores would only be in the Mac Pro. 8/8 cores in the iMac and 12/12 in the iMac Pro is pushing it.

    In 2022, Jim Keller's CPU designs from Intel hit the market. The upcoming Zen architecture designs will be announced in January 2021 at the virtual CES. AMD has already announced that by 2025 its conservative projection for hardware sales is over $22 billion, up from just over $8 billion this year.

    Apple has zero interest in supporting anything beyond its own matrix of hardware options, and people who believe Apple wants to be all solutions to all people don't understand, and never have understood, Apple's mission statement.

    A lot of the R&D in the M1 is going into their future IoT and automobile products.
  • Apple predicted to adopt 4nm process for 'A16' processor

    Apple is not TSMC's sole 5nm client. Another is AMD, whose entire CPU/GPU line goes 5nm in Fall 2021, with samples in early Spring 2021. This 4nm/3nm stuff is a few years away, folks.