tenthousandthings

About

Username
tenthousandthings
Joined
Visits
179
Last Active
Roles
member
Points
2,055
Badges
1
Posts
1,068
  • Rumored next-generation Apple Silicon processor expected in fall 2023 at the earliest

    grom007 said:
    I do not understand why Apple does not release the m3 MacBook Pro before the MacBook Air. It gives incentive to consumers to buy the very best.
    This is how it works:
    1. The first silicon out in a new generation is the A-series. This is for obvious reasons, Apple builds hundreds of millions of these for iPhone and iPad.
    2. The base M-series is a variant of the A-series. It powers the consumer line (MB Air, iMac, Mini, iPad Pro) and Apple can make these in volume without missing a beat. The iPhone and the iPad pay for this. That's the genius of Apple Silicon.
    3. The M-series Pro/Max (and Ultra) is a different story. There are real development costs associated with designing and building the Pro/Max (the Max is just a Pro with two GPU units instead of one, or the Pro is just a Max with only one GPU instead of two), and that development must follow the A-series and the base M-series. It can't happen the other way around. The science and the economics of fabrication don't allow it. Thus, the MB Pro, the (possible) iMac Pro, the Mini Pro, the Mac Studio, and the (probable) Mac Pro all have to wait for the process to unfold.
    The economics of doing it the other way around are beyond prohibitive. No one with any knowledge of TSMC's fabs has ever suggested the M3 would appear before 2024. The iPhone Pro line will get 3nm silicon in Fall 2023, probably. But the M-series won't see a refresh until 2024. All the rumors of M3 appearing imminently are just wishful thinking. They've never had any basis in reality.
  • MacBook Air 15-inch with 'M2-like' chip in testing behind closed doors at Apple

    tyler82 said:
    re: sluggish Mac sales
    Apple asking an additional grand to upgrade the memory to an acceptable amount and a 1TB hard drive  is what turns me off from buying a new Mac. Especially when market prices for these components are a fraction of that cost. 
    This is disingenuous. Take the HP ZBook Firefly, for example. The upgrade from 8GB to 16GB RAM is $140; Apple's MacBook Air upgrade is $200, and that's a unified-memory upgrade, not just a RAM upgrade. The upgrade from 256GB to 1TB storage is $365; Apple's MacBook Air upgrade is $400.

    So we’re talking $505 (HP) versus $600 (Apple) total, and it’s not a one-to-one comparison because with Apple you get a GPU memory boost, not just CPU.

    I gather you also mean 16GB isn’t an “acceptable” amount of RAM or unified memory. The HP upgrade from 8GB to 32GB RAM is $410; Apple’s upgrade from 8GB to 24GB of unified memory is $400.

    So, again, we’re talking $775 (HP) versus $800 (Apple) total — the additional 8GB of RAM you get with HP still doesn’t outweigh the boost in graphics performance you get with Apple’s approach.
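    The totals above can be checked with a quick sketch (the prices are the figures quoted in this thread, in USD, and will of course drift over time):

    ```python
    # Upgrade prices as quoted in the post above (USD).
    hp = {"16GB RAM": 140, "1TB SSD": 365, "32GB RAM": 410}
    apple = {"16GB unified": 200, "1TB SSD": 400, "24GB unified": 400}

    # 16GB + 1TB configuration
    print(hp["16GB RAM"] + hp["1TB SSD"])          # 505
    print(apple["16GB unified"] + apple["1TB SSD"])  # 600

    # Larger-memory + 1TB configuration
    print(hp["32GB RAM"] + hp["1TB SSD"])          # 775
    print(apple["24GB unified"] + apple["1TB SSD"])  # 800
    ```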
  • Apple TV+ considering bid to stream UK soccer

    Japhey said:
    I’m still not entirely convinced of Apple’s competence in broadcasting live sports. Part of me wants to say “wallet open, take money”…but the other part wishes Apple would fine tune its approach before tackling (!) something as prestigious as the EPL. If you think people have been vocal over the way Apple is handling its MLB responsibilities, just wait and see how bad they will be if the EPL on Apple TV+ isn’t absolutely perfect. Seriously, Apple can’t afford an own goal here, they will need to be perfect. And so far I’ve seen no indication that they are anywhere near that level yet in live broadcasts. 
    Second this. NBC’s deal for the EPL broadcasts in the US runs through 2027-28, so like the article says, it sounds like we’re talking about partial UK EPL rights (and maybe The Championship rights in both UK and US?) — the bar for any of that is very high. 
  • Future Mac Pro may use Apple Silicon & PCI-E GPUs in parallel

    CookItOff said:
    I would love to be totally wrong on this, but after thoroughly reading each patent application it looks like these are the patents for how the Ultra chip (two Max chips linked together through infinity fabric) processes graphics tasks in tandem, and the infinity fabric link which Apple uses to tie both Max chips together, and has nothing to do with future multiple-dGPU support by Apple. 

    Further evidence of this is that the patent filing date is August 2021 and the Ultra chip debuted in March 2022.

    This doesn’t mean that Apple isn’t working on dGPUs though. And we may very well still see dGPU support come to the new AS MacPros. I just think these patent applications cover the Apple Silicon Ultra’s pair of GPUs talking to each other in tandem. 
    The UltraFusion silicon bridge that interconnects the M1 Ultra is not "infinity fabric," which is an AMD technology. UltraFusion uses the TSMC InFO_LI (or InFO_LSI) packaging method. It's a passive, "local" silicon interconnect, something like Intel's EMIB (embedded die interconnect bridge) packaging. Dylan Patel says "it can evolve into being active (transistors and various IP) in the future." [https://www.semianalysis.com/p/advanced-packaging-part-2-review]

    So these patents could be aimed at the next (or a future) generation of UltraFusion. Note that Infinity Fabric (now in its fourth generation and marketed as Infinity Architecture) isn't a silicon interposer like UltraFusion. Using silicon to connect chip-to-chip is very expensive, which is why Infinity Fabric was created.

    So I think it's possible that Apple, unlike AMD (which as a "merchant silicon vendor" has to have the ability to lower prices in the face of competition), can work with TSMC to link additional GPU cores to M2 Ultra in a Mac Pro using silicon inside of an end product. For AMD, the product is the chip. For Apple, the product is this M1 Mac mini I'm typing this on, and the iPhone I first noticed this article on.

    That said, AMD's aims with the current generation of Infinity Architecture are probably instructive for Apple. The most extreme configuration it supports is two CPU chiplets and eight GPU chiplets, a ratio of 1:4 (1 CPU to 4 GPU), all interconnected using the architecture. The M1 Max has a ratio of 1:2 (1 CPU to 2 GPU). The M2 Ultra doesn't improve on this ratio, with two CPU and four GPU. Apple has an advantage because it uses a silicon interconnect, but that's not enough. So I think (in my layman's ignorance) that might have been the fundamental problem: a quad M1 Extreme doesn't improve on the CPU-to-GPU ratio. They need a ratio of 1:4 to really compete with high-end AMD configurations. Not to mention Intel and Nvidia. 

    Finally, one last observation: the M1/M2 Pro is thought to be cut down from the M1/M2 Max. I don't know if this is actually true, but that's what I've read. So they cut one of the two GPU units off the Max to make the Pro. When they do that, there is a GPU unit left over. Some of them may have flaws that make them unusable, but demand for the M2 Pro (especially now with the M2 Pro Mac mini) is high enough that some percentage of them must also be flawless. What to do with them? Put four of them (for M2 Ultra), or eight of them (for M2 Extreme), into a GPU extension that uses the next generation of UltraFusion silicon interconnect (i.e., not "local") to add GPU power, to reach that 1:4 ratio.
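    For what it's worth, the ratio argument above can be laid out as a toy comparison. Only AMD's 2:8 maximum and Apple's shipping chips are factual here; the last row is the hypothetical GPU-extension configuration speculated about above:

    ```python
    # Toy comparison of the CPU-to-GPU "unit" ratios discussed above.
    from fractions import Fraction

    configs = {
        "AMD Infinity Architecture (max)": (2, 8),      # 2 CPU : 8 GPU chiplets
        "M1 Max": (1, 2),                               # 1 CPU : 2 GPU units
        "M2 Ultra": (2, 4),                             # two Max dies, still 1:2
        "Ultra + hypothetical GPU extension": (2, 8),   # the 1:4 target argued for
    }

    for name, (cpu, gpu) in configs.items():
        # Fraction normalizes, so 4/2 prints as "2", i.e. a 1:2 ratio.
        print(f"{name}: 1:{Fraction(gpu, cpu)}")
    ```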
  • Future Mac Pro may use Apple Silicon & PCI-E GPUs in parallel

    This seems like a good place to re-post my transcript of Anand Shimpi's comments on Apple Silicon for the Mac in the Apple interview with Andru Edwards. I recently posted it there, but that thread is buried. For those who don't know, Anand is the founder of AnandTech.com, but Apple hired him away from himself in 2014. It is edited for clarity, removing interjections like "I think" and "kind of" and "sort of," but I've retained his rhetorical ", right? ..." habit, as it captures the feel of the dialogue.

    It's a good overview of what Apple is trying to do. If you think about what he's saying carefully, despite what he says about the Mac joining the "cadence" of the iPhone and iPad, you can see Apple isn't going to release silicon just because: "If we’re not able to deliver something compelling, we won’t engage, right? ... We won’t build the chip." 

    [General questions]

    The silicon team doesn’t operate in a vacuum, right? ... When these products are being envisioned and designed, folks on the architecture team, the design team, they’re there, they’re aware of where we’re going, what’s important, both from a workload perspective, as well as the things that are most important to enable in all of these designs.

    I think part of what you’re seeing now is this now decade-plus long, maniacal obsession with power-efficient performance and energy efficiency. If you look at the roots of Apple Silicon, it all started with the iPhone and the iPad. There we’re fitting into a very, very constrained environment, and so we had to build these building blocks, whether it’s our CPU or GPU, media engine, neural engine, to fit in something that’s way, way smaller from a thermal standpoint and a power delivery standpoint than something like a 16-inch MacBook Pro. So the fundamental building blocks are just way more efficient than what you’re typically used to seeing in a product like this. The other thing you’re noticing is, for a lot of tasks that maybe used to be high-powered use cases, on Apple Silicon they actually don’t consume that much power. If you look, compared to what you might find in a competing PC product, depending on the workload, we might be a factor of two or a factor of four times lower power. That allows us to deliver a lot of workloads, that might have been high-power use cases in a different product, in something that actually is a very quiet and cool and long-lasting. The other thing that you’re noticing is that the single-thread performance, the snappiness of your machine, it’s really the same high-performance core regardless of if you’re talking about a MacBook Air or a 14-inch Pro or 16-inch Pro or the new Mac mini, and so all of these machines can accommodate one of those cores running full tilt, again we’ve turned a lot of those usages and usage cases into low-power workloads. You can’t get around physics, though, right? ... So if you light up all the cores, all the GPUs, the 14-inch system just has less thermal capacity than the 16, right? ... So depending on your workload, that might drive you to a bigger machine, but really the chips are across the board incredibly efficient.

    [Battery life question]

    You can look at how chip design works at Apple. You have to remember we’re not a merchant silicon vendor, at the end of the day we ship product. So the story for the chip team actually starts at the product, right? ... There is a vision that the design team, that the system team has that they want to enable, and the job of the chip is to enable those features and enable that product to deliver the best performance within the constraints, within the thermal envelope of that chassis, that is humanly possible. So if you look at what we did going from the M1 family to the M2 Pro and M2 Max, at any given power point, we’re able to deliver more performance. On the CPU we added two more efficiency cores, two more of our e-cores. That allowed us, or was part of what allowed us, to deliver more multi-thread performance, again, at every single power point where the M1 and M2 curves overlap we were able to deliver more performance at any given power point. The dynamic range of operations [is] a little bit longer, a little bit wider, so we do have a slight increase in terms of peak power, but in terms of efficiency, across the range, it is a step forward versus the M1 family, and that directly translates into battery life. The same thing is true for the GPU, it’s counterintuitive, but a big GPU running a modest frequency and voltage, is actually a very efficient way to fill up a box. So that’s been our philosophy dating back to iPhone and iPad, and it continues on the Mac as well.

    But really the thing that we see, that the iPhone and the iPad have enjoyed over the years, is this idea that every generation gets the latest of our IPs, the latest CPU IP, the latest GPU, media engine, neural engine, and so on and so forth, and so now the Mac gets to be on that cadence too. If you look at how we’ve evolved things on the phone and iPad, those IPs tend to get more efficient over time. There is this relationship, if the fundamental chassis doesn’t change, any additional performance you draw, you deliver has to be done more efficiently, and so this is the first time the MacBook Pro gets to enjoy that and be on that same sort of cycle.

    On the silicon side, the team doesn’t pull any punches, right? ... The goal across all the IPs is, one, make sure you can enable the vision of the product, that there’s a new feature, a new capability that we have to bring to the table in order for the product to have everything that we envisioned, that’s clearly something that you can’t pull back on. And then secondly, it’s do the best you can, right? ... Get as much down in terms of performance and capability as you can in every single generation. The other thing is, Apple’s not a chip company. At the end of the day, we’re a product company. So we want to deliver, whether it’s features, performance, efficiency. If we’re not able to deliver something compelling, we won’t engage, right? ... We won’t build the chip. So each generation we’re motivated as much as possible to deliver the best that we can.

    [Neural engine question]

    … There are really two things you need to think about, right? ... The first is the tradeoff between a general purpose compute engine and something a little more specialized. So, look at our CPU and GPU, these are big general purpose compute engines. They each have their strengths in terms of the types of applications you’d want to send to the CPU versus the GPU, whereas the neural engine is more focused in terms of the types of operations that it is optimized for. But if you have a workload that’s supported by the neural engine, then you get the most efficient, highest density place on the chip to execute that workload. So that’s the first part of it. The second part of it is, well, what kind of workload are we talking about? Our investment in the neural engine dates back years ago, right? The first time we had a neural engine on an Apple Silicon chip was A11 Bionic, right? ... So that was five-ish years ago on the iPhone. Really, it was the result of us realizing that there were these emergent machine learning models, where, that we wanted to start executing on device, and we brought this technology to the iPhone, and over the years we’ve been increasing its capabilities and its performance. Then, when we made the transition of the Mac to Apple Silicon, it got that IP just like it got the other IPs that we brought, things like the media engine, our CPU, GPU, Secure Enclave, and so on and so forth. So when you’re going to execute these machine learning models, performing these inference-driven models, if the operations that you’re executing are supported by the neural engine, if they fit nicely on that engine, it’s the most efficient way to execute them. The reality is, the entire chip is optimized for machine learning, right? ... So a lot of models you will see executed on the CPU, the GPU, and the neural engine, and we have frameworks in place that kind of make that possible. 
    The goal is always to execute it in the highest-performance, most efficient place possible on the chip.

    [Nanometer process question]

    ... You’re referring to the transistor. These are the building blocks all of our chips are built out of. The simplest way to think of them is like a little switch, and we integrate tons of these things into our designs. So if you’re looking at M2 Pro and M2 Max, you’re talking about tens of billions of these, and if you think about large collections of them, that’s how we build the CPU, the GPU, the neural engine, all the media blocks, every part of the chip is built out of these transistors. Moving to a new transistor technology is one of the ways in which we deliver more features, more performance, more efficiency, better battery life. So you can imagine, if the transistors get smaller, you can cram more of them into a given area, that’s how you might add things like additional cores, which is the thing you get in M2 Pro and M2 Max—you get more CPU cores, more GPU cores, and so on and so forth. If the transistors themselves use less power, or they’re faster, that’s another method by which you might deliver, for instance, better battery life, better efficiency. Now, I mentioned this is one tool in the toolbox. What you choose to build with them, the underlying architecture, microarchitecture and design of the chip also contribute in terms of delivering that performance, those features, and that power efficiency.

    If you look at the M2 Pro and M2 Max family, you’re looking at a second-generation 5 nanometer process. As we talked about earlier, the chip got more efficient. At every single operating point, the chip was able to deliver more performance at the same amount of power.

    [Media engine question]

    ... Going back to the point about transistors, taking that IP and integrating it on the latest kind of highly-integrated SOC and the latest transistor technology, that lets you run it at a very high speed and you get to extract a lot of performance out of it. The other thing is, and this is one of the things that is fairly unique about Apple Silicon, we built these highly-integrated SOCs, right? ... So if you think about the traditional system architecture, in a desktop or a notebook, you have a CPU from one vendor, a GPU from another vendor, each with their own sort of DRAM, you might have accelerators kind of built into each one of those chips, you might have add-in cards as additional accelerators. But with Apple Silicon in the Mac, it’s all a single chip, all backed by a unified memory system, you get a tremendous amount of memory bandwidth as well as DRAM capacity, which is unusual, right? ... In a machine like this a CPU is used to having large capacity, low bandwidth DRAM, and a GPU might have very low capacity, high bandwidth DRAM, but now the CPU gets access to GPU-like memory bandwidth, while the GPU gets access to CPU-like capacity, and that really enables things that you couldn’t have done before. Really, if you are trying to build a notebook, these are the types of chips that you want to build it out of. And the media engine comes along for the ride, right? ... The technology that we’ve refined over the years, building for iPhone and iPad, these are machines where the camera is a key part of that experience, and being able to bring some of that technology to the Mac was honestly pretty exciting. And it really enabled just a revolution in terms of the video editing and video workflows.

    The addition of ProRes as a hardware accelerated encode and decode engine as a part of the media engine, that’s one of the things you can almost trace back directly to working with the Pro Workflows team, right? ... This is a codec that it makes sense to accelerate to integrate into hardware for our customers that we're expecting to buy these machines. It was something that the team was able to integrate, and for those workflows, there’s nothing like it in the industry, on the market. 