Remote work = less productivity. Get everyone back to their campus already.
Pretty much this.
I work from home often. But I hate it. My work takes longer, digital collab isn’t nearly as good as face to face, and the quality isn’t the same. Way different energy.
I can’t imagine architecting and testing something as complicated and important as a world-leading SoC… from home.
Additionally, Apple lost some early chip engineers. That was no doubt a factor. People like that are hard to replace, and it takes time. I imagine the A17 is going to blow the doors off, though, fixing the A16 graphics along with the planned A17 features.
Don’t sweat the Apple gloom-and-doom stories; many of these are bought and paid for by their inferior competitors, namely Samsung. Whenever you see a negative story about Apple from “Nikkei Times” or a publication called “The Information”, know that both have been on Samsung’s payroll over the years, along with various Asian suppliers looking to hit Apple for whatever reason.
My 14 Pro is amazingly fast, and I came from a 13 non-Pro.
I can feel a difference because I still have my 13 regular as a business line.
With all the stories about stranded people, even just off a Los Angeles highway, getting saved by the iPhone 14’s satellite SOS, I’m not sure why anyone wouldn’t get a 14 if they value their life and their family’s lives above $1,000.
I have a question for the more technically knowledgeable folks. I keep seeing the “designed itself into a box” idea trotted out. I’m curious why people would think the M-series chips would be subject to that kind of bind, when it’s reasonable to say the A-series hasn’t been in many iterations. It’s clear the architecture itself has room to grow — multiple billion-dollar companies have ARM-based commercial and consumer initiatives in place.
So given that Apple’s teams now have multiple generations of chip and hardware design experience to draw upon, and a culture that doesn’t rest, why should anyone think this is a screw-up rather than the natural design process and temporary technological limitations? I’m sure we’re all well aware just how difficult it is to create and implement leaps in chip tech, and that one of the keys to success has always been how well the software leverages those advances into significant improvements for end users.
I don't think anyone is showing any doom and gloom for iPhones, outside of trolldom, which you have to learn to ignore. Numbers will be bad for the holiday quarter due to COVID affecting manufacturing, but everyone is expecting those decreased units to be mostly delayed sales. And hardware ray tracing isn't a big feature for a phone, so the "doom and gloom" from The Information is typical media sensationalism, which, again, you have to learn to ignore.
Hardware ray tracing will be a big feature for the goggles, though, and it is a required feature for higher-end Macs where 3D workflows are more common. So Apple has to do it. Otherwise, steady incrementalism is basically all that is needed for iPhones (A-series SoCs) and low-end Macs (M1, M2 SoCs). My comment was in regard to the large iMac and Mac Pro, and to some extent the M1 Max and M1 Pro. Lots of curious decisions at many different levels.
Product feature decisions:
1. The Mac mini and iMac 24 should have had the M1 Pro as an option at least 6 months ago.
2. The Mac mini and iMac 24 should have had the M2 as an option at least 3 months ago.
These are just the existing machines with multiple update paths that weren't taken. Instead, some of the Macs will be over 2 years old before updates. There's obviously a blue sky of possibilities for other form factors.
SoC design decisions:
1. The M1 Pro to M1 Max to M1 Ultra GPU has terrible scaling with cores. It presents like a memory bottleneck, and if so, I suspect there isn't enough tile memory to hold enough GPU threads to utilize all the performance in the cores. Whatever it is, they should have caught it in GPU performance simulations 3 years ago and changed it to improve GPU performance in the M1 Max and M1 Ultra SoCs. They decided to live with it, maybe hoping they would quickly move to the M2 Pro/Max/Ultra versions, but COVID and TSMC delayed those by a year.
2. The poor GPU scaling likely killed a Mac Pro with an M1 Ultra/Extreme. If the scaling were perfect, the M1 Max would have a GB5 Metal score of 80k (Radeon 6600 XT), the M1 Ultra would be 160k (Radeon 6900 XT), and an "M1 Extreme" would be 320k (GeForce 3090/4080/4090 territory). Scaling with core counts is never perfect, and at say 80 to 90%, they'd be doing quite well. But the scaling from 16 to 32 cores was 80%, and from 32 to 64 cores it is 60%. That's quite poor (see the rough math after this list). The M1 Extreme's GPU scaling would have dropped even further, probably below 50%. This resulted in GPU performance worse than 2-to-3-year-old Radeon GPUs. No point in shipping an M1 Extreme if so.
3. Apple's high-end Mac SoC strategy is notionally a minimum-cost strategy: design only 1 SoC, the M1 Max chip, and either chop off part of the GPU (M1 Pro) or bridge together 2 (M1 Ultra) or 4 ("M1 Extreme") for higher-end versions. If they had gotten the GPU performance to scale better with increasing core counts, it would have worked, but they didn't, and they minimally lost a cycle on the high end. They should have known this 3 years ago. Perhaps they thought they would figure it out and kept on trying. Either way, it is definitely a mistake somewhere among the hardware designers, the Metal designers, or both.
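For anyone who wants to sanity-check the figures in point 2, here is a minimal Python sketch of the scaling math. The ~40k 16-core baseline is my own assumption, chosen so the cited 80% and 60% figures work out; treat all three scores as illustrative, not measured benchmarks:

```python
# Rough GPU scaling-efficiency check using the ballpark GB5 Metal figures above.
scores = {16: 40_000, 32: 64_000, 64: 77_000}  # GPU cores -> approx GB5 Metal score

def doubling_efficiency(small: int, big: int) -> float:
    """Observed score at 2N cores, as a fraction of perfect scaling (2x the N-core score)."""
    return scores[big] / (2 * scores[small])

for a, b in [(16, 32), (32, 64)]:
    print(f"{a} -> {b} cores: {doubling_efficiency(a, b):.0%} of perfect scaling")
# prints roughly 80% and 60%, the figures cited in point 2
```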
I don't know if number 3 is actually the minimum-cost option. That strategy doesn't sound any cheaper or faster than one where scaling is the primary goal, versus one discrete chip that gets chopped or glued. The M1 Pro to M1 Max upgrade option is just a GPU upgrade; buyers don't get increased CPU performance. GPU compute workflows can make use of a lot of GPU cores, but Apple's M1 Max (Jade die) strategy limits how they can get more GPU cores.
Notionally, they will eventually have to go with a chip-tiling strategy (both vertical and horizontal); the Ultra is basically an early version. With a CPU chip tile and a GPU chip tile, they could scale an SoC in multiple directions. Need a chip with lots of CPU but no GPU? Just tile a bunch of CPU chips together. Need a lot of GPU but not a lot of CPU? Tile a bunch of GPU chips together. This doesn't sound any more expensive or time consuming than the plans we think they had.
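As a toy illustration of that mix-and-match flexibility (every tile size and configuration below is hypothetical, not anything from Apple's actual designs):

```python
# Toy model of the tiling idea: compose an SoC from independent CPU and GPU
# tiles instead of chopping or gluing one fixed die. All numbers are invented.
from dataclasses import dataclass

CPU_CORES_PER_TILE = 8    # hypothetical CPU tile
GPU_CORES_PER_TILE = 16   # hypothetical GPU tile

@dataclass
class TiledSoC:
    cpu_tiles: int
    gpu_tiles: int

    @property
    def cpu_cores(self) -> int:
        return self.cpu_tiles * CPU_CORES_PER_TILE

    @property
    def gpu_cores(self) -> int:
        return self.gpu_tiles * GPU_CORES_PER_TILE

# Two very different machines from the same two tile designs:
compute_box = TiledSoC(cpu_tiles=4, gpu_tiles=1)  # CPU-heavy: 32 CPU / 16 GPU cores
render_box = TiledSoC(cpu_tiles=1, gpu_tiles=4)   # GPU-heavy: 8 CPU / 64 GPU cores
print(compute_box.cpu_cores, compute_box.gpu_cores)
print(render_box.cpu_cores, render_box.gpu_cores)
```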
The poor GPU core scaling has to be fixed for any strategy to work. I don't think this is a problem with employee turnover or COVID. They definitely know about it. It's probably just a series of compounding events. Like, they decided sometime in 2020 to wait and fix the M1 GPU scaling in the M2 versions, and felt the M2 versions would arrive by late 2022. Then COVID delayed them a year. Those two compounded to make it 2023.
The iPhone A-series stuff can just be explained by TSMC being late with 3nm: they had to fall back to 4nm and a more minimal upgrade to the SoC. Nothing odd about that.
tht said: Calling it 5nm makes it sound like there isn't an improvement, and that's factually incorrect. It's the kind of half-node improvement TSMC has done for basically a decade now.
If, in 2021, Apple was expecting TSMC 3nm to be in mass production by summer 2022, in time for fall iPhone shipments, they would have designed a chip around TSMC 3nm capabilities, like hardware ray tracing features. Once TSMC and Apple saw that 3nm was not going to make it on time, which they would have figured out in 2021, they would fall back to the half-node step, TSMC 4nm, and get the typical half-node improvements: 10 to 15% more performance, 10 to 15% less power, or some combination.
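To put rough numbers on what that half-node fallback buys (a back-of-envelope sketch; the 12% midpoint is just an assumption within the 10 to 15% range cited above):

```python
# A half-node step's gain can be spent on speed, on power, or split between them.
gain = 0.12  # assumed midpoint of the 10-15% range

iso_power_speedup = 1 + gain     # ~1.12x performance at the same power
iso_perf_power = 1 / (1 + gain)  # ~0.89x power at the same performance
print(f"{iso_power_speedup:.2f}x perf at iso-power, "
      f"{iso_perf_power:.2f}x power at iso-perf")
```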
Strategically, I kind of think the Jade C die (M1 Max), Jade C chop (M1 Pro), Jade 2C (M1 Ultra), and the failed Jade 4C were a mistake. The lineup didn't scale in the manner buyers wanted. The M1 Max and on down appear fine; the M1 Ultra and on up have had issues. Apple designed itself into a box that keeps it from shipping higher-end machines. That's a bigger issue than TSMC being late.
Good analysis. Designing themselves into a box was my concern right from day 1 of the Intel -> ARM switch. Apple's CPU team had the advantage that they could design their own architecture, which meant specific hardware optimisations for Swift, JavaScript, low power, AI, etc., without worrying about backwards compatibility. This gave them a huge jump ahead performance-wise right out of the gate with the M1. However, that is a once-only jump. Future improvements are incremental: optimising aspects of the silicon and adding hardware to improve particular software functions. Apple does still have an advantage here, as they write the compiler, so they can remove lesser-used silicon and emulate it in software if they want to keep the die size down, keep backwards compatibility, and add new instructions to the silicon.
Apple's CPU team is tiny compared to Intel's or AMD's. Generational Apple Silicon speed improvements are nothing like they were, and in many ways the M2 was a disappointment. I don't think we're going to see Apple staying so far ahead of Intel on the performance front. On power consumption, though, Apple will always have the lead, as they don't have a huge amount of silicon dedicated to RISCifying complex CISC x86 instructions.
Yes, all the low-hanging fruit was picked years ago. Apple rode TSMC's and Samsung's fab-node march from being more than a generation behind Intel to being a generation ahead, and coupled with that, they rode the CPU microarchitecture march from a small, simple in-order core to a large-issue, highly out-of-order core. This let performance increase 50% to 100% a year for about 6 straight years. Today, it is slow incrementalism.
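Compounded, that stretch is enormous; a quick sketch of the cumulative effect (assuming the 50% to 100% figure is a year-over-year rate, as read above):

```python
# Compounding the "50% to 100% a year for ~6 years" claim above.
years = 6
low, high = 1.5 ** years, 2.0 ** years
print(f"{low:.0f}x to {high:.0f}x cumulative speedup over {years} years")
# -> roughly 11x to 64x
```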
Apple doesn't want the performance crown. They will crow about it when they have it, but they will generally lean toward well-balanced systems with great runtimes and handling qualities. They will be fine as long as TSMC is a node ahead of Intel, or is able to buy capacity from leading-edge foundries.
We should be more worried about the product marketing team.
OK, and if it had been faster? Then what? Do you think Apple would have sold a noticeably smaller number of iPhones? Satellite SOS is a bigger and more important feature/upgrade than a speed bump.
“Early prototypes drew more power than what the company had expected based on software simulations. The high power draw could have affected battery life and made the device run too hot.”
Lol. The “mistake” would have been pushing forward with this design, given how whiny everyone is about battery life!
Interesting. Thanks for the extended, thoughtful response. I haven’t been able to do my own nit-picking research on this yet, and I’m not sure I’ll ever have the time. But thanks for giving me several points of inquiry, if I do get around to it.
Out of curiosity, what’s your background? Don’t feel like you have to answer that. I can think of several reasons why you might not be able to, or might not feel so inclined.
— Separately… hope all y’all AppleInsiders (staff and community) are getting to enjoy the season (storms notwithstanding), the holidays, and a much happier new year.
I think iPhones are pretty fast and don’t think they need to focus on shrinking the SoC every year. They could stay on the same 4nm node for 2 or 3 years and focus on lowering the unit cost to pass savings on to consumers. Also focus more on software efficiency and functionality. Real happy with my iPhone 13 Pro and 13 mini. Hopefully won’t get another till the iPhone 18 or 19 Pro.
I applaud Apple's engineering team for being able to revert to the original GPU at such a late stage in development. That counts for something.