tht

About

Username: tht
Joined
Visits: 196
Last Active
Roles: member
Points: 8,042
Badges: 1
Posts: 6,033
  • Surgeon General warns of social media's negative impact on kids' mental health

    What's the distinction between adults and children here, other than convention?

    If social media is harmful to children, it will be harmful to adults too, no? It's only by convention that people over 18 are considered fully responsible for themselves while children are not, and therefore get a special carve-out. The harm still applies to people of all ages.

    So perhaps the surgeon general should extend the warning to everyone.
  • Apple manufacturing now uses 13.7 gigawatts of renewable energy, will hit carbon neutral b...

    2morrow said:
    I have a question.
    When Apple or its suppliers purchase x gigawatts of renewable energy in China or some other country, how do they know that the power is for them and not for another big company that is also asking for renewable energy? Couldn't the provider attribute the same renewable power to as many companies as it wants, without all of that renewable power actually being allocated?
    Grid power is fed by multiple sources of electricity. Once those individual power sources, like individual potatoes, energize the grid, it is basically mashed potatoes to end customers: at the point of use, the energy cannot be traced back to a specific type of source.

    What Apple does, and what basically all renewable energy contracts from power providers do, is ensure that for whatever energy they or their suppliers use, an equivalent amount is fed into the grid from a renewable source. The money Apple and its suppliers pay for renewable energy goes to renewable generators. The grid doesn't have infinite capacity, and the grid operator has to maintain a balance plus a reserve. If a generator isn't paid for or is too expensive, the grid will ask it to decrease output or shut down. Generators will shut themselves down if they aren't being paid.

    Since Apple is guaranteeing a renewable energy source, it typically means the most expensive power generator will be reduced or shut down. In practice, that's coal. (Nuclear is actually the most expensive, but the fleet is typically kept alive for strategic reasons.) Once grid batteries, fed by renewables, reach significant capacity, natural gas peaker plants will be next, and in the not-so-distant future, gas plants will be shut down too.
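    The matching scheme described above can be sketched as simple bookkeeping; the function name and numbers below are made up for illustration:

```python
# Hypothetical sketch of "renewable energy matching" accounting:
# a buyer doesn't receive specific electrons; it pays renewable
# generators to feed the same number of MWh into the shared grid.

def renewable_match(consumed_mwh, purchased_renewable_mwh):
    """Return the fraction of consumption covered by matched renewables."""
    if consumed_mwh <= 0:
        return 1.0
    return min(purchased_renewable_mwh / consumed_mwh, 1.0)

# A supplier using 120,000 MWh that has contracted 120,000 MWh of
# renewable generation is 100% matched, even though the electrons
# it actually draws come from the whole "mashed potato" grid mix.
print(renewable_match(120_000, 120_000))  # 1.0
print(renewable_match(120_000, 90_000))   # 0.75
```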
  • 15.5-inch MacBook Air rumored to arrive in early April 2023

    JP234 said:
    What's the point of a 15 inch Air? You want a 15" display? Buy the Pro.
    The point is about $500 to $1000.

    An MBA15 with M2, 8 GB RAM, and 256 GB of storage will be about $1500. Optioned up to 16 GB and 512 GB, probably $1900. An MBP16 is $2500.
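    The gap works out like this; the figures are the poster's estimates, not confirmed Apple pricing:

```python
# Estimated list prices from the post (USD).
mba15_base     = 1500  # M2, 8 GB RAM, 256 GB SSD
mba15_upgraded = 1900  # 16 GB RAM, 512 GB SSD
mbp16          = 2500

# Price gap between the Air 15 and a Pro 16 with the same screen class.
print(mbp16 - mba15_upgraded)  # 600
print(mbp16 - mba15_base)      # 1000
```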


  • Mac Studio may never get updated, because new Mac Pro is coming

    I wonder if Apple was just surprised at how little cooling their chips required, which could explain the overbuilt heat sink and cooling fans for the Mac Studio. It seems likely that the M3 Max will run just fine in a Mac mini form factor, and if so the Mac Studio hardware would be relevant only for the M3 Ultra. The M3 Ultra itself might just not have a large enough market to support two form factors, and if they are going to do a Mac Pro with a base price of $5000 and configurations going up to $20K, that might just make the most sense as the platform to keep, particularly if the M3 Ultra Mac Pro is a decent amount smaller than the current Mac Pro. 

    But who knows? Given that this is the post-Steve Jobs Apple we are talking about, we can be assured that some of the decisions Apple will continue to make in support of their pro users really won't make much sense.
    The amount of heat generated is easy to calculate, especially with the design software that is used.
    They weren't surprised, but I'm curious how they arrived at the cooling system they did. Apple designs from the outside in. So the product marketing people said, "Here is the shape of the Mac Studio, like a tall Mac mini. We want to put a 110 W and a 220 W SoC in it, and it can't be any louder than 25 dB in a quiet room." I am surprised they had to resort to copper for the Ultra. I'm almost thinking the air inlets going into the box are not big enough to feed the fans, so they needed copper to let them run at low RPM.

    The M1 Max, M2 Max, and M3 Max can all run in the Mac mini form factor. The 2018 6-core Intel Mac mini was 122 W max; the 2022 M1 Max Mac Studio is 115 W. So the Mac mini could take a Max chip, but it would likely run loud and hot like an Intel processor did. It likely will never be in the Mac mini.

    The M3 Max version, presumably on TSMC N3, will not run much cooler than the M2 Max version. Apple is going to use all the additional power-efficiency headroom for more transistors and more performance. There's a belief going around that TSMC 3nm chips will run cooler. They might, but at a level indistinguishable to the user, since they need to spend more transistors to increase performance. They will consume about the same power as the M2 and M1 versions.
  • New Mac Pro may not support PCI-E GPUs

    Marvin said:
    tht said:
    cgWerks said:
    tht said:
    … The big issue, how do they get a class competitive GPU? Will be interesting to see what they do. The current GPU performance in the M2 Max is intriguing and if they can get 170k GB5 Metal scores with an M2 Ultra, that's probably enough. But, it's probably going to be something like 130k GB5 Metal. Perhaps they will crank the clocks up to ensure it is more performant than the Radeon Pro 6900X …
    The devil is in the details. I’ve seen M1 Pro/Max do some fairly incredible things in certain 3D apps that match high-end AMD/Nvidia, while at the same time, there are things the top M1 Ultra fails at so miserably, it isn’t usable, and is bested by a low-mid-end AMD/Nvidia PC.

    I suppose if they keep scaling everything up, they'll kind of get there for the most part. But remember, the previous Mac Pro could have 4x or more of those fast GPUs. Most people don't need that, so maybe they have no intention of going back there again. But I hope they have some plan to be realistically competitive with more common mid-to-high-end PCs with single GPUs. If they can't even pull that off, they may as well throw in the towel and abandon GPU-dependent professional markets.
    If only they could keep scaling up. Scaling GPU compute performance with more GPU cores has been the Achilles' heel of Apple Silicon. I bet the inability to scale GPU performance is the primary reason the M1 Mac Pro was never shipped, or never got past the validation stage. On a per-core basis, the 64 GPU cores in the M1 Ultra perform at a little over half (about 1.5k GB5 Metal points per core) of what a GPU core delivers in an 8-GPU-core M1 (2.6k per core). It's basically half if you compare the Ultra to the A14's GPU core performance. And you can see the scaling efficiency get worse and worse when comparing 4, 8, 14, 16, 24, 32, 48, and 64 cores.

    The GPU team inside Apple is not doing a good job with its performance predictions. They have done a great job at the smartphone, tablet, and even laptop level, but getting the GPU architecture to scale to desktops and workstations has been a failure. Apple was convinced that the Ultra and Extreme models would provide competitive GPU performance. This type of decision isn't based on some GPU lead blustering that the architecture would work; it should have been based on modeled chip simulations showing that it would work and what potential it had. Only after that would a multi-billion-dollar decision be made. So something is up in the GPU architecture team inside Apple, imo. Hopefully they will recover and fix the scaling by the time the M3 architecture ships. The M2 versions have improved GPU core scaling efficiency, but not quite enough to make a 144-GPU-core model worthwhile, if the rumors of the Extreme model being canceled are true (I really hope not).

    If the GPU scaling for the M1 Ultra were, say, 75% efficient, it would have scored about 125k in GB5 Metal, about the performance of a Radeon Pro 6800. An Extreme version with 128 GPU cores at 60% efficiency would be around 200k in GB5 Metal. That's Nvidia 3080 territory, maybe even 3090. Either would have been suitable for a Mac Pro, but alas, no. The devil is in the details. The Apple Silicon GPU team fucked up, imo.
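    The scaling arithmetic in the quote above can be checked directly from the per-core numbers it cites (~2.6k GB5 Metal points per core for the 8-core M1):

```python
# Scaling-efficiency arithmetic using the GB5 Metal numbers cited
# in the post: ~2.6k points per GPU core for the 8-core M1.
PER_CORE_M1 = 2600  # GB5 Metal points per core, 8-core M1

def projected_score(cores, efficiency):
    """Score if `cores` scaled at `efficiency` of the M1's per-core rate."""
    return cores * PER_CORE_M1 * efficiency

# M1 Ultra at 75% efficiency: ~125k, roughly Radeon Pro 6800 territory.
print(round(projected_score(64, 0.75) / 1000))   # 125
# Hypothetical 128-core Extreme at 60%: ~200k, roughly RTX 3080/3090.
print(round(projected_score(128, 0.60) / 1000))  # 200
# Actual M1 Ultra per the post, ~1.5k per core: ~58% efficiency.
print(round(1500 / PER_CORE_M1, 2))              # 0.58
```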
    There are some tests where it scales better. In one video they test 3DMark at 7:25: the Ultra gets 34k and the Max gets 20k, a 70% increase.

    Later in the video, at 20:00, they test Tensorflow Metal, and the Ultra GPU scales to as much as 93% faster than the Max.

    https://github.com/tlkh/tf-metal-experiments

    Some GPU software won't scale well due to CPU holding it back, especially if it's Rosetta translated.

    It's hard to tell if it's a hardware issue or software/OS/drivers, likely a combination. If the performance gain is more consistent with M2 Ultra and doesn't get fixed with the same software on M1 Ultra, it will be clearer that it's been a hardware issue. The good thing is they have software like Blender now that they can test against a 4090 and figure out what they need to do.
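    Since the Ultra is two Max dies, a perfectly scaling second die would give a 100% gain, so the observed gain doubles as a scaling-efficiency figure:

```python
# Dual-die speedup vs the ideal: a second die at perfect scaling
# would give a 100% gain, so the observed gain is also the
# fraction of ideal 2x scaling achieved.
def gain_pct(ultra_score, max_score):
    return (ultra_score / max_score - 1.0) * 100

# 3DMark scores from the video: Ultra 34k vs Max 20k.
print(round(gain_pct(34_000, 20_000)))  # 70 -> 70% of ideal 2x scaling
```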

    The strange part is that the CPU is scaling very well, almost double in most tests. The AMD W6800X Duo scales well for compute too, and those separate GPU chips are connected in a slower way than Apple's are. There are the same kinds of drops in some tests, but OctaneX is near double:

    https://barefeats.com/pro-w6800x-duo-other-gpus.html

    As you say, the GPU scaling issue could have been the reason for not doing an M1 Extreme. M2 Ultra and the next Mac Pro will answer a lot of questions.

    An Extreme model wouldn't have to be 4x Ultra chips; the edges of the two Max chips could both connect to a special chip that has only GPU cores. Given that the Max is around 70 W, they could fit 4x the Max's GPU core count on that extra chip (152 cores), plus 76 on the two Max chips, for 228 cores total. That would be about 90 TFLOPs, which is what's needed to rival a 4090, and the extra chip could have hardware RT cores. This should scale better, since 2/3 of the GPU cores would be on the same chip.
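    The core counts above work out as follows. The 38-core figure matches the M2 Max; the TFLOPs-per-core number is my own rough assumption for an M2-class GPU core, used only to sanity-check the ~90 TFLOPs claim:

```python
# Core-count arithmetic for the hypothetical "Extreme" layout
# described above. MAX_GPU_CORES matches the M2 Max; the
# TFLOPs-per-core figure is an assumption for illustration.
MAX_GPU_CORES = 38
TFLOPS_PER_CORE = 0.4  # assumed, roughly M2-class

gpu_chip_cores = 4 * MAX_GPU_CORES  # dedicated GPU-only chip
max_chip_cores = 2 * MAX_GPU_CORES  # one Max chip on each edge
total_cores = gpu_chip_cores + max_chip_cores

print(gpu_chip_cores)                        # 152
print(total_cores)                           # 228
print(round(total_cores * TFLOPS_PER_CORE))  # 91, i.e. ~90 TFLOPs
```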
    I wonder if the AMD Duo MPX modules, which use Infinity Fabric internally (to join two GPUs) in conjunction with the external "Infinity Fabric Link" Mac Pro interconnect to bypass PCIe (although PCIe remains) and create an Infinity Fabric quad (a total of four GPUs, despite the "duo" name), might serve as a model for something Apple could do, using UltraFusion (or the next generation of it) to link the base SoC to MPX GPUs. Imagine what Apple could do with that.

    My understanding is that Infinity Fabric and UltraFusion are similar technologies, and if Apple and AMD can collaborate to create the Infinity Fabric Link, they can also coordinate the next generation(s). I guess that's really the thing that doesn't make sense to me -- that AMD and Apple would walk away from their alliance just as things start getting interesting, with RDNA 3 architecture on the AMD side, and Apple's own silicon with next-generation UltraFusion on the other. Metal 3 supports advanced AMD GPUs (i.e., those in the iMac Pro and the lattice Mac Pro), so until I hear otherwise, I'll continue to believe that Apple is going to leave the door open to AMD modules (even if only via PCIe, the status quo)...
    Yes, my best guess for how Apple gets more compute performance into the Mac Pro was essentially an Ultra or an Extreme on an MPX module. The machine would have a master Ultra or Extreme SoC that boots the machine and controls the MPX modules. You could add 3 or 4 two-slot-wide MPX modules, or 2 four-slot-wide MPX modules. Most software would let you select which MPX module, or modules, to use. I don't think an additional connection like Infinity Fabric, over and above 16 lanes of PCIe, is necessary, as most of the workloads that can make use of these architectures are embarrassingly parallel.

    UltraFusion is something a bit different from AMD's Infinity Fabric (ex-Apple exec Mark Papermaster has a patent on IF!). Infinity Fabric is basically a serial connection between chips with about 400 GB/s of bandwidth. UltraFusion is a silicon bridge: a piece of silicon overlaid on top of the edges of the Max chip dies, not wire traces in the PCB with connection ports on the chip like AMD's Infinity Fabric.

    Apple's chips have an on-die (read: in silicon) fabric bus that connects the core components of the chip: CPU complexes, memory interfaces, GPU complexes, ML and media engines, SLC, PCIe, etc. The UltraFusion silicon bridge, or interposer, basically extends that fabric bus to a second Max chip, at 2500 GB/s, with low enough latency that the 2 GPUs can appear as one. I don't know if there is a hit to GPU core scaling performance because of this; I bet there probably is.

    For the Extreme SoC, I was thinking the "ExtremeFusion" silicon bridge would be a double-length and wider version of the UltraFusion one, with a 4-port fabric bus switch and 32 lanes of PCIe in it. So 4 Max chips would behave like one chip, with 32 lanes of PCIe 4 coming out of it for 8 slots. Memory would be LPDDR, no DIMM slots. And they could stack the LPDDR 4-high: 16 stacks of 24 GB LPDDR5, stacked 4-high, gets you 1.5 TB of memory. Hmm, just like the memory capacity of the 2019 Mac Pro.
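    The memory-capacity arithmetic above checks out; all figures come from the post itself:

```python
# Memory-capacity arithmetic for the hypothetical stacked-LPDDR
# configuration described above (all figures from the post).
stacks = 16
dies_per_stack = 4  # LPDDR stacked 4-high
gb_per_die = 24     # 24 GB LPDDR5 per die

total_gb = stacks * dies_per_stack * gb_per_die
print(total_gb)                    # 1536
print(round(total_gb / 1024, 1))   # 1.5 (TB), same as the 2019 Mac Pro
```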

    Tremendous expense for a machine that sells what, 100k units per year? This level of integration doesn't come cheap. Heck, I'd like Apple to make an MBP16 with an Ultra to amortize the costs. ;)