melgross

About

Username: melgross
Joined:
Visits: 127
Last Active:
Roles: member
Points: 10,981
Badges: 2
Posts: 33,729
  • AMD to unveil Radeon RX 6000 GPU family on Oct. 28

    tht said:
    melgross said:
    The latest Nvidia GPUs have over 25 billion transistors, and an energy budget to support that. There is simply no way Apple can even begin to approach that. It takes Nvidia’s highest-end GPU boards to enable ray tracing in a useful way. Remember, we’re not talking about ray tracing static imagery, but dynamically rendered images at a rate of at least 60 fps. That’s real-time tracing. It’s in addition to every other graphics demand, such as real-time physics, dynamic texture generation, hidden-surface removal, etc. These all require very large amounts of graphics RAM as well.
    The A12Z on TSMC N7 has 10 billion transistors on a 120 mm^2 die. This is for a fanless tablet. I definitely see them hitting 20 to 30 billion transistors on TSMC N5 for the Mac SoCs. A notional A14X for an iPad Pro could be 15 billion transistors just based on the new fab alone. A Mac SoC for an iMac can easily be 2x that at around 30 billion.

    Transistor budgets of over 25 billion are not that daunting for Apple Silicon. It's the power budgets that are. They just aren't going to run any of their SoCs at over 150 Watts, and I'd bet they want 100 W or lower. That will be self-limiting, so performance probably won't match these 300 W dGPUs or 200 W CPUs on average.

    I expect them to be at parity with, or possibly more performant than, dGPUs in each Watt class though. The fab advantage will give them about a 30% perf/W advantage. The density advantage is going to be about 100%. They have so many transistors to play with on TSMC N5 that you wonder if they are willing to use them all.

    melgross said:
    But none of us know what Apple’s plans really are. We can speculate, but that’s about it. And Apple told us the basics during the June conference. In fact, they told us more than I expected. I listened very closely. I looked at their charts. We don’t see a separate GPU, at least not for the foreseeable future. I suspect that we’ll see an SoC that competes with Intel very well in the beginning, likely exceeding the performance of Apple’s comparable previous machines. I don’t expect to see overwhelming GPU performance. I would expect to see the performance of a lower-end separate GPU.

    Anything that exceeds that would thrill me.
    Yes. Apple has a chance to do something different from the packaging and designs afforded to them with components from Intel and AMD. It could be a monolithic SoC. It could be chiplets. It could be stacks of chiplets. They could use HBM or GDDR or whatever custom memory solution uses commodity DRAM. It's going to be interesting, to say the least.
    I just have to disagree. Apple has been pretty clear that these Macs would use less power and, according to the chart they showed, have equal performance. But they didn’t say which Macs they were talking about. There is no way Apple is going to make that leap in the next year. Maybe, over time, if they are willing to make a much larger chip, and Nvidia’s chips are very large indeed, they can do it. But then the efficiency and power draw will be severely compromised. And you’re forgetting, I suppose, that those 25 billion transistors are just for the GPU. Apple has to contend with an entire SoC. It’s not going to happen.

    I really don’t care how efficient ARM instructions are, or how good Apple’s engineering is. There are physical laws that everyone has to follow, and it’s not likely that Apple is special here. Not only is Apple not going to run over 150 watts, they likely don’t want to run much over 20 watts. There’s only so much they can do.
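
    A rough back-of-envelope sketch (Python) of the scaling argument in this exchange. The A12Z transistor count, the 25-billion-plus Nvidia figure, the ~30% perf/W fab gain, and the 100 W / 300 W power classes come from the thread itself; the 1.7x density factor and the 2x die area are illustrative assumptions, not Apple’s actual targets.

```python
# Back-of-envelope version of the scaling argument above. Figures marked
# "thread" are the round numbers quoted in this exchange; the rest are
# illustrative assumptions, not official specifications.

A12Z_TRANSISTORS = 10e9      # thread: A12Z on TSMC N7, ~120 mm^2 die
N5_DENSITY_GAIN = 1.7        # assumption: N7 -> N5 density gain (thread says 70-100%)
N5_PERF_PER_WATT_GAIN = 1.3  # thread: ~30% perf/W advantage from the newer fab
SOC_WATT_CAP = 100           # thread: "I'd bet they want 100 W or lower"
DGPU_TRANSISTORS = 25e9      # thread: "over 25 billion" for the latest Nvidia GPUs
DGPU_BOARD_WATTS = 300       # thread: the 300 W dGPU class mentioned above

# Transistor budget: an A12Z-sized die moved to N5, then doubled in area.
same_area = A12Z_TRANSISTORS * N5_DENSITY_GAIN
mac_soc = 2 * same_area
print(f"A12Z-sized die on N5:  ~{same_area / 1e9:.0f}B transistors")
print(f"2x-area Mac SoC on N5: ~{mac_soc / 1e9:.0f}B transistors "
      f"(dGPU alone: {DGPU_TRANSISTORS / 1e9:.0f}B)")

# Power budget: a 30% perf/W edge does not close a 100 W vs. 300 W gap.
equivalent_watts = SOC_WATT_CAP * N5_PERF_PER_WATT_GAIN
print(f"{SOC_WATT_CAP} W SoC with a 30% perf/W edge ~ {equivalent_watts:.0f} W "
      f"of older-process compute, vs. a {DGPU_BOARD_WATTS} W dGPU board")
```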
  • AMD to unveil Radeon RX 6000 GPU family on Oct. 28

    tht said:
    melgross said:
    tht said:
    melgross said:
    tht said:
    I'm betting that the A14 GPUs will have hardware support for raytracing. So, the iPhone will be the first Apple device to have hardware raytracing support, not Macs with AMD GPUs. ;)

    Anyways, anyone want to clarify AMD GPU codenames? Both the chip and graphics API support? Getting confusing out there. Navi, Navi 14, Navi 12, RDNA, RDNA2, so on and so forth. 
    Having support doesn’t mean having effective support. Nvidia has had support for ray tracing in their top GPUs for two generations, but despite having fast memory, and a lot of it, they weren’t really usable for that. Their new generation is supposed to have, for the first time, enough oomph to make it useful.

    How Apple could duplicate that with integrated graphics on an SoC is something that I doubt right now, even assuming it’s something they’re looking at. Ray tracing is one of the most difficult things to do in real time. The number of calculations is immense. Apple also uses a small amount of shared RAM, which isn’t considered the best way to feed graphics hardware, so we’ll see.
    Yeah, they may not have enough transistors to do it. If AR is going to be a thing, though, having properly rendered shadows for virtual objects will be really nice to see! And they appear to be investing rather heavily in VR and AR glasses that will be powered by iPhones, seemingly. So I think there is a driving need for it in some of their upcoming products. Hence, raytracing hardware will be in the SoCs sooner or later.

    As far as iGPUs versus dGPUs, the common distinctions between the two are gradually becoming less and less meaningful. If you look at the Xbox Series X SoC, it's basically a 12 TFLOP GPU with a vestigial CPU (an 8-core Zen 2) in terms of the die area the GPU and CPU occupy. And that is fabbed on TSMC 7nm; TSMC 5nm yields about 70% more transistors. So I can see a TSMC 5nm SoC with raytracing hardware in it. Intel with Tiger Lake is getting closer to this, with its iGPU taking about 40% of the chip area. This will drive dGPUs into higher-performance niches, at commensurately higher Watt tiers. It's going to be interesting to see how the dGPU market shakes out. Maybe server GPUs will be the vast majority of dGPU sales in a few years, as iGPUs take even more of the mid-range of the gaming GPU market.

    It's still a very big question how much power Apple is willing to burn on the GPU and how they get more perf/Watt out of the GPU than competitors. The obvious way is to run the iGPU at low clocks, have a lot of GPU cores, and have a high-bandwidth memory subsystem to feed it. This will require a lot of chip area and transistors, which is something Apple may have the luxury of, as they don't have to make 50% margin on their chips, and they are on the densest fab.
    Almost anything is possible, but no matter what Apple does, they won’t have a competitor to the new top-line GPUs, at least not as an integrated unit. No matter how good Apple is, and how much they can leverage the machine learning cores, the neural engine, and the ISP, it’s not going to come close to a 350-watt GPU board with huge amounts of graphics RAM.
    Yeah, sure, I agree that Apple Silicon isn't going to compete with a 350 W GPU. But I wouldn't rule out hardware support for raytracing in an iGPU either. There are a lot of transistors on TSMC N5 for Apple to use.
    The latest Nvidia GPUs have over 25 billion transistors, and an energy budget to support that. There is simply no way Apple can even begin to approach that. It takes Nvidia’s highest-end GPU boards to enable ray tracing in a useful way. Remember, we’re not talking about ray tracing static imagery, but dynamically rendered images at a rate of at least 60 fps. That’s real-time tracing. It’s in addition to every other graphics demand, such as real-time physics, dynamic texture generation, hidden-surface removal, etc. These all require very large amounts of graphics RAM as well.

    While Apple’s built-in GPU will compete well against Intel’s new integrated graphics, which is twice as powerful as the current generation (to be replaced next month), and possibly against low-end GPU boards, maybe even some lower-mid-range boards, none of that can do ray tracing or other really high-level computations. If Apple manages to come out with a separate GPU, then things could be somewhat different. But Apple has stated that one reason their GPU will be competitive is their “unique” admixture of these other SoC elements, and the software tying it all together. I’ve said that if Apple can figure out how to put those elements on a separate GPU, and give that GPU sufficient bandwidth with proper graphics memory, and a lot of it, with a decent power budget, they could really have something.

    But none of us know what Apple’s plans really are. We can speculate, but that’s about it. And Apple told us the basics during the June conference. In fact, they told us more than I expected. I listened very closely. I looked at their charts. We don’t see a separate GPU, at least not for the foreseeable future. I suspect that we’ll see an SoC that competes with Intel very well in the beginning, likely exceeding the performance of Apple’s comparable previous machines. I don’t expect to see overwhelming GPU performance. I would expect to see the performance of a lower-end separate GPU.

    Anything that exceeds that would thrill me.
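
    For a sense of why "the number of calculations is immense" for real-time ray tracing, here is a minimal arithmetic sketch in Python. The 60 fps target comes from the post; the 4K resolution and rays-per-pixel count are purely illustrative assumptions.

```python
# Rough scale of the real-time ray tracing workload described above.
# The 60 fps target comes from the post; the resolution and rays-per-pixel
# figures are illustrative assumptions, not quoted or measured values.

WIDTH, HEIGHT = 3840, 2160  # assumption: 4K render target
FPS = 60                    # thread: "at least 60 fps"
RAYS_PER_PIXEL = 2          # assumption: one primary ray plus one shadow/reflection ray

rays_per_second = WIDTH * HEIGHT * FPS * RAYS_PER_PIXEL
print(f"~{rays_per_second / 1e9:.1f} billion ray casts per second")

# Each ray cast is itself many BVH traversal steps and triangle intersection
# tests, on top of shading, physics, and texture work, which is why dedicated
# hardware and a lot of fast graphics memory matter here.
```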
  • AMD to unveil Radeon RX 6000 GPU family on Oct. 28

    tht said:
    I'm betting that the A14 GPUs will have hardware support for raytracing. So, the iPhone will be the first Apple device to have hardware raytracing support, not Macs with AMD GPUs. ;)

    Anyways, anyone want to clarify AMD GPU codenames? Both the chip and graphics API support? Getting confusing out there. Navi, Navi 14, Navi 12, RDNA, RDNA2, so on and so forth. 
    Having support doesn’t mean having effective support. Nvidia has had support for ray tracing in their top GPUs for two generations, but despite having fast memory, and a lot of it, they weren’t really usable for that. Their new generation is supposed to have, for the first time, enough oomph to make it useful.

    How Apple could duplicate that with integrated graphics on an SoC is something that I doubt right now, even assuming it’s something they’re looking at. Ray tracing is one of the most difficult things to do in real time. The number of calculations is immense. Apple also uses a small amount of shared RAM, which isn’t considered the best way to feed graphics hardware, so we’ll see.
  • Netgear Orbi Pro WiFi 6 Tri-band Mesh System brings reliable WiFi to your small business

    Does this usually mean the home versions will get an update soon?

    Our AirPort Extreme home network is due for an update — the lack of mesh capabilities has caused a few problems lately. 

    Any recommendations? eero vs. Velop vs. Orbi?
    I have four AirPort Extreme routers connected. Apple doesn’t call it that, but they do work as a mesh network. When you have a base station and add routers in an “extend network” configuration, which Apple’s software supports, you get the same advantages as mesh. That is, the same network name and password for all routers, as well as automatic handoff. No problem at all. It’s very easy to implement.

    It’s why I haven’t yet moved to anything else. Why do I have so many? My house was built in 1925. The old methods resulted in interior brick walls, with wood lath and expanded steel mesh over all walls and ceilings. Over that is 3/4” mortar and 1/4” plaster, with many coats of paint, the early layers of which are lead-based. Not dangerous, by the way, unless it gets exposed and peels.

    In other words, much of my home is a Faraday cage. Apple’s routers work very well, giving me 375 to 550 Mb/s everywhere, though lower in the basement. That’s better than the performance of most mesh routers currently being sold. If it weren’t for WiFi 6, I might be willing to wait another couple of years. I’ve been looking at this new Netgear model and have been thinking about trying it. But I want to learn a bit more about how strong the signal really is.
  • Apple isn't getting $454M back from VirnetX because it waited too long to ask


    If this stands, it means that in future, anyone found to infringe will refuse to pay up until all possible avenues for appeal are exhausted.
    So any legitimate patent holders will have to wait essentially forever for any royalties/damages.
    And very often, that’s exactly what happens.