Future Macs could adopt Intel's new, high-performance discrete graphics chips


Comments

  • Reply 21 of 31
    MindRight Posts: 7 unconfirmed, member
    I highly doubt Apple will use Intel as their GPU supplier going forward. The "A" series of their ARM chips is already so superior in performance to Intel's CPUs, and so much more energy efficient, that it's only a matter of time before they start using them in their laptops and other Mac products. I would really be surprised if this actually happens.
  • Reply 22 of 31
    mattinoz Posts: 2,322 member
    Honest question: Is there a technical reason why an integrated GPU can't be as good as any discrete GPU?
    Possibly due to the CPU and iGPU being on the same die. So either they don't want to go too complex for economic reasons, or the die is tuned to one working well but that affects the ability of the other.

    Intel are moving towards packaging capabilities together from separate dies (there's a nice background piece on EMIB) so they can mix 'n' match parts to create different products, even custom products for various partners. The G-series shows this off. I'm also guessing that if Intel are moving towards GPUs being on their own die, then why not also package them as their own product to get as many sales as possible.

  • Reply 23 of 31
    foggyhill Posts: 4,767 member
    mattinoz said:
    Honest question: Is there a technical reason why an integrated GPU can't be as good as any discrete GPU?
    Possibly due to the CPU and iGPU being on the same die. So either they don't want to go too complex for economic reasons, or the die is tuned to one working well but that affects the ability of the other.

    Intel are moving towards packaging capabilities together from separate dies (there's a nice background piece on EMIB) so they can mix 'n' match parts to create different products, even custom products for various partners. The G-series shows this off. I'm also guessing that if Intel are moving towards GPUs being on their own die, then why not also package them as their own product to get as many sales as possible.

    Heat dissipation and having the ability to move data in and out of that small die are the main issues I bet.
    For multicore SoC CPUs, when all cores are at work, you get reduced performance from each. Then imagine what happens when you also have a very power-hungry GPU on there doing god knows what.
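    A rough back-of-the-envelope sketch of the budget-sharing effect (every number here is made up purely for illustration, this isn't modelled on any real chip):

    import Foundation

    // Toy model: a fixed package power budget shared by the active cores.
    // Assumes dynamic power scales roughly as f * V^2 with V tracking f,
    // so power ~ f^3. Illustrative only.
    let packageBudgetWatts = 15.0     // hypothetical laptop SoC power budget
    let singleCorePowerWatts = 10.0   // hypothetical single core at full boost clock

    func estimatedClockFraction(activeCores: Int) -> Double {
        let perCoreBudget = packageBudgetWatts / Double(activeCores)
        // power ~ f^3  =>  f ~ power^(1/3), capped at the full boost clock
        return min(1.0, pow(perCoreBudget / singleCorePowerWatts, 1.0 / 3.0))
    }

    for cores in [1, 2, 4, 8] {
        let f = estimatedClockFraction(activeCores: cores)
        print("\(cores) active cores -> roughly \(Int(f * 100))% of max clock each")
    }

    Now carve a hungry GPU out of that same package budget and every core's slice shrinks further.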
  • Reply 24 of 31
    wood1208 Posts: 2,913 member
    We know how the 10nm Cannon Lake schedule panned out!! An Intel discrete GPU in 2020 could end up being 2022-23. By that time, AMD and Nvidia will be further ahead.
  • Reply 25 of 31
    Soli Posts: 10,035 member
    wood1208 said:
    We know how the 10nm Cannon Lake schedule panned out!! An Intel discrete GPU in 2020 could end up being 2022-23. By that time, AMD and Nvidia will be further ahead.
    That's a salient point. It's hard to trust any timeline from Intel at this point, and they have a lot more reasons to use their position in the market to announce vaporware (technically delay-ware, if I can coin a phrase).
  • Reply 26 of 31
    mcdave Posts: 1,927 member
    Honest question: Is there a technical reason why an integrated GPU can't be as good as any discrete GPU?
    Thermals.  Lots of transistors to pack in.

    If the memory is shared (unlike Intel) iGPUs can outperform dGPUs as there’s no data round tripping.
  • Reply 27 of 31
    bkkcanuck Posts: 864 member
    Intel has confirmed it intends to create its own discrete graphics processing chips by 2020, a move that opens up the possibility of Apple using discrete Intel GPUs across its laptop and desktop Mac lines.

    Speculation stacked on speculation inevitably becomes worthless guessing about what could be worthwhile or could be a piece of junk.  In fact, by 2020 Apple could be pulling away from Intel altogether on their laptop line.  First, Intel has little history of being able to create a discrete GPU; they are working on a first-generation GPU by 2020, and the first generation of anything in a new market against established players is often a laggard.  Apple is working on their own GPU technology (which may or may not be intended for laptop usage as well).  Apple has been rumoured to be moving some of their lines to ARM-based Macs by 2020.  The article has about the same odds as someone buying a lottery ticket and then, before the draw, being told they have won and listing all the things they have already spent the money on...  I am sorry AppleInsider, but this is just a piece of crap filler and I expect better.  Yes, Intel is working on GPU technology; they have worked on it before and binned it... but that is about as much of this story as is worth writing...
  • Reply 28 of 31
    mcdave said:
    Honest question: Is there a technical reason why an integrated GPU can't be as good as any discrete GPU?
    Thermals.  Lots of transistors to pack in.

    If the memory is shared (unlike Intel) iGPUs can outperform dGPUs as there’s no data round tripping.
    Foggyhill mentioned thermals too, and that makes sense. Technically, it seems the thermal issue could be addressed, allowing the implementation and benefit of a shared memory architecture as you mention, although I imagine there are economic reasons that then get in the way, as Mattinoz suggested.
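    For anyone curious what "no round tripping" actually buys you, here's a minimal Metal sketch (assuming a macOS machine with a default Metal device; the blit into a private buffer stands in for the extra copy a discrete card's VRAM needs, so it's illustrative only):

    import Metal

    // On a unified-memory GPU, a .storageModeShared buffer is visible to both
    // CPU and GPU with no explicit copy: the CPU writes, the GPU reads the same pages.
    guard let device = MTLCreateSystemDefaultDevice(),
          let queue = device.makeCommandQueue() else { fatalError("No Metal device") }

    var input: [Float] = (0..<1024).map { Float($0) }
    let length = input.count * MemoryLayout<Float>.stride

    // Shared storage: zero-copy on integrated / unified-memory GPUs.
    let sharedBuffer = device.makeBuffer(bytes: &input, length: length,
                                         options: .storageModeShared)!

    // A discrete GPU typically wants data staged into .storageModePrivate VRAM,
    // paying for an explicit blit in each direction -- that's the round trip.
    let privateBuffer = device.makeBuffer(length: length, options: .storageModePrivate)!
    let commandBuffer = queue.makeCommandBuffer()!
    let blit = commandBuffer.makeBlitCommandEncoder()!
    blit.copy(from: sharedBuffer, sourceOffset: 0,
              to: privateBuffer, destinationOffset: 0, size: length)
    blit.endEncoding()
    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()

    On an iGPU with truly shared memory the second half simply isn't needed, which is mcdave's point.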
  • Reply 29 of 31
    tallest skil Posts: 43,388 member
    I’m surprised the article (and commenters) didn’t make mention of this. They very recently made their own card, but internal corporate nonsense kept it from release and completion.

    https://www.youtube.com/watch?v=um-1fAVU1OQ
  • Reply 30 of 31

    Intel has seemingly already acknowledged the lower performance of its own integrated graphics, after revealing G-series processors in January that combined Intel CPUs with an onboard AMD GPU, providing the equivalent of discrete graphics performance on the same board as the processor.


    Due to the significant effort required to create a new GPU architecture that can compete with AMD and Nvidia, Intel is likely to have worked on the project for some time already if it is to meet its 2020 target. In November 2017, Intel hired Raja Koduri, formerly the head of AMD's graphics arm and credited with improving the Radeon brand, to head up its graphics and compute projects.

    [...] while the iMac Pro offers AMD's Vega GPUs. If acceptable to Apple, there is the prospect of Intel offered for both integrated and discrete graphics across the board.

    Intel's attempt to join the discrete GPU market could also apply pressure on AMD and Nvidia, especially considering Intel's size and experience. The sudden appearance of a viable competitor may force the incumbent industry leaders to make bigger moves forward in performance, if only to make it harder for the relative newcomer to find a consumer audience.
    As already noted, lots of pure speculation without much logic to back it up. The story needed to be fleshed out to be more informative, given the 'halo' effect of the iPhone: Apple's core constituents these days bring so many more ignorant/uninformed points of view than perhaps when the iPhone was first introduced.

    There are other posts here I don't have enough time to respond to directly, so I'll just dump the info here, sorry:

    "The graphics test mainly showcases the GPU improvements of a system and here the Snapdragon 845 easily reaches its performance target, improving by up to 32% compared to the Snapdragon 835 powered Pixel 2 XL and Galaxy S8. This is an astonishingly great achievement for Qualcomm in one generation.

    When we’re looking at competitor devices we see only the iPhone X able to compete with the last generation Snapdragon 835 devices – however with a catch. The A11 is severely thermally constrained and is only able to achieve these scores when the devices are cold. Indeed as seen from the smaller score of the iPhone 8, the SoC isn’t able to sustain maximum performance for even one benchmark run before having to throttle. Unfortunately this also applies to current and last generation Exynos and Kirin SoCs as both shed a great amount of performance after only a few minutes. I’ve addressed this issue and made a great rant about it in our review of the Kirin 970. For this reason going forward AnandTech is going to distinguish between Peak and Sustained scores across all 3D benchmarks. "
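    The peak-vs-sustained distinction is easy to demonstrate yourself; here's a crude sketch of the idea (nothing to do with AnandTech's actual harness, just repeated runs of the same workload):

    import Foundation

    // Run the same workload over and over and watch the score drop as the chip
    // heats up and throttles. Purely illustrative busywork.
    func workload() -> Double {
        var acc = 0.0
        for i in 1...2_000_000 { acc += sin(Double(i)) }
        return acc
    }

    var scores: [Double] = []
    for run in 1...10 {
        let start = Date()
        _ = workload()
        let score = 1.0 / Date().timeIntervalSince(start)   // higher is better
        scores.append(score)
        print("run \(run): \(String(format: "%.1f", score))")
    }

    print("peak score:      \(String(format: "%.1f", scores.max() ?? 0))")                  // usually the cold first run
    print("sustained score: \(String(format: "%.1f", scores.suffix(3).reduce(0, +) / 3))")  // average of the last runs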

    http://browser.geekbench.com/mobile-benchmarks

    ^Geekbench mobile scores, so everyone knows how many cores at what MHz they are all running. It doesn't show what's on deck for later this year or early next, but be sure that Apple and all the other manufacturers already have those samples and have made all their decisions about near-term products; speculate all you want. Then we also 'guess' which products coming down the pike they will maybe be used in; no one knows other than the R&D departments :)

    https://wccftech.com/exclusive-amd-navi-gpu-roadmap-cost-zen/

    "Lisa Su was focused primarily on bringing back AMD’s CPU side of things, and establishing a strong semi-custom GPU side. Maintaining leadership in the descrete graphics market (gamers) is a costly business and with the finite amount of resources the company had, something had to give.

    Lisa packaged the graphics department neatly into Radeon Technologies Group and gave control of that to Raja Koduri, but at the same time devoted 2/3rds of the talent on-hand for RTG to develop the next semi-custom solution – which was Navi for Sony. The P&L of RTG was also not handed over to the group head and that meant they were effectively left with 1/3rd of the engineering talent devoted to making a graphics card for gamers and almost no control over their own finances.

    The result of this choice was that AMD was spectacularly successful at its x86 comeback and also locked in Apple"

    "for Vega, Apple’s timeline is what actually dictated the release of the GPU and not the other way around. AMD’s Radeon graphics cards were intricately tied to the industry’s semi-custom roadmaps by design and that is something that a lot of people disagreed with. This is also what, I suspect, precipitated the departure of key executives including the RTG boss, Raja Koduri." I also read the 3rd member of the AMD, the marketing guy, joined the team there with Koduri, so the all star line up is now at Intel, anything is possible

    ^Highly recommend reading the whole article; it's 'informative' about real decision-making, rather than speculation ;p


    ^Side bar article @wccftech: Summit, not that a supercomputer is of any use to this discussion... it uses 27,648 NVIDIA Tesla V100 GPUs and nearly needs a nuke reactor to power it, lol. Yeah, Vega plus 14nm Kaby in the ultra-thin MB, at TDPs of 65 and 100W? Don't think so, unless Apple has done one better than Razer's new 15 Stealth with its 1070 Max-Q, which only works because they supposedly obtained a higher TDP using an advanced cooling system. Maybe Apple's got some new liquid-nitro add-on ;p. On my rMBP, even with the fans at full speed, it's not very noisy; supposedly 50 dBA is 'loud' whereas 40 dBA is nearly silent, ymmv. And the Dell XPS 15 2-in-1 being mentioned for those new 8-core i7s... yeah, the new firmware does throttling, but they don't call it that, to keep the constant fan noise down after complaints. The reason you don't see higher-power discrete GPUs in 13" laptops (both Nvidia and AMD have lower-power parts in their lineups, which just don't offer much more performance than Intel's integrated Iris Plus 650) is that there's just too much heat to dissipate.

    https://www.notebookcheck.net/Nvidia-Max-Q-limits-fan-noise-to-40-dBA-when-gaming-so-why-are-we-recording-louder-results.258636.0.html


    ^Another side bar on the 7nm TSMC process that is the future... coming soon, but how soon? > "The fab will have a good time with its 7nm node as both Apple and Qualcomm will source their chips from the company. Apple made some serious performance gains with the A11 and this year the company might rely more on process gains rather than architectural."


    https://wccftech.com/amd-navi-gpus-not-using-mcm-feature-monolithic-die-radeon-rx-gaming-cards/

    ^Hey, there's Koduri mentioned again, noting the upcoming Navi GPU line will stick with a traditional monolithic die as opposed to the newer MCM approach (multiple GPU dies in one package), which needs software support to gain traction in the market.

    I don't want iOS on Mac OSX systems; I want some form of 'OSX mobile' for iPhones and iPads... the Watch screen is too tiny for OSX. Apple will let M$ figure out Windows 10 on laptops with Samsung SoCs, then Apple will do it later, much later, but hopefully somebody up there at 1 Infinite Loop is going to get them to do it.


    IIRC, the Snapdragon 1000 is rumored to go from 7-12 watts... assuming they could get up to 3+ GHz as rumored! Thermal throttling???

    ^https://www.slashgear.com/snapdragon-1000-for-windows-10-on-arm-reeks-of-desperation-31532562/

    "

    The one area where ARM chips have had slow progress penetrating is desktop computers and laptops because of the workload they require.

    Based on WinFuture‘s information, this rumored Snapdragon 1000 seems to be Qualcomm’s attempt to really address that market. For processors to scale up to desktop-level performance, you either put in more cores or consume more power. Qualcomm is apparently choosing the latter route by increasing the power draw to 6.5 watts. The Snapdragon 845, its current platform for mobile, only consumes 5 watts at most.

    This would put the Snapdragon 1000 on par with the power consumption of Intel’s lower end processors like the Atom and the Celerons. In theory, given ARM’s more efficient power use, the Snapdragon 1000 could outperform those quite easily. In practice, however, it could be asking for trouble, given the greater power drain, which translates to shorter battery life, and greater thermal emission.

    It seems that Qualcomm and its partners, like Microsoft and ASUS, are working overtime to address the complaints that buried the first batch of Windows 10 on ARM 2-in-1 devices, like the HP Envy x2 and the ASUS Nova Go. But, if the early 2019 launch for the Snapdragon 1000 and an ASUS “Primus” device is correct, then Qualcomm would have gone from 835 to 845 to 850/950 to 1000 in just a year.

    "


    The Forbes website has too many blocking ads; if the link doesn't work, search on the article title:

    "Keller will start at Intel next Monday and work alongside Koduri as an SVP in the hardware engineering space, although his specific title has not yet been disclosed by Intel. Basically, Koduri is designing the IP cores, and Keller will transform them into products. Something he's quite adept at doing.

    The partnership is notable given their rich histories together, and impact at companies like AMD and Apple. At AMD, Keller worked on the Athlon architecture, and was most recently responsible for designing the Zen core (Ryzen) that led to AMD's long overdue comeback in the CPU space. Before that he designed the A4 and A5 SoCs (System on a Chip) that Apple used in earlier generation iPads, iPhones and Apple TVs....

    Koduri served as director of graphics architecture at Apple, helping them establish a graphics sub-system for the Mac line, and was instrumental in the creation of Apple's high-resolution Retina displays among other accomplishments. After that, he oversaw all things related to AMD graphics hardware and software, helping the company reclaim consumer graphics market share and push into VR, machine intelligence and data center waters.

    "

    https://www.forbes.com/sites/jasonevangelho/2018/04/26/the-engineering-duo-that-saved-apple-and-amd-are-teaming-up-at-intel/#6961dea5241f

    http://barefeats.com/iphonex_v_others.html


    Sure, the Tesla V100 is 10x faster than the ARM chip in the iPhone/iPad, but even the A11 is now almost as good as the Intel integrated Iris Plus 650... which is good enough for the 13" MBP.

    There, fleshed out now.


    Uh, I didn't see it, and I didn't do a search of this site either:


    https://wccftech.com/samsung-mongoose-m4-performance-much-higher-than-cortex-a76/

    I feel the need for speed... Goose.

    "It might even reach 2.80GHz in devices that feature sophisticated cooling solutions. According to Ice Universe, the efficiency factor of the Mongoose M4 is going to be better than Cortex-A76, with the individual stating that in Geekbench, the performance cores will contribute to a multi-core score of 13,000+ points. Now here is a very interesting leak that I would like to share with you guys that was reported earlier.


    The upcoming Apple A12, a chipset that is expected to fuel the upcoming 2018 iPhone family was leaked to obtain 13,000 points in Geekbench 4’s multi-core results so does this mean that Mongoose M4 will be able to reach the same level of performance when all cores are fired up? Looks like we will find out during an official scores comparison.

    Do remember that Mongoose M4 is expected to be found in the Galaxy S10"

  • Reply 31 of 31
    mattinoz Posts: 2,322 member
    I’m surprised the article (and commenters) didn’t make mention of this. They very recently made their own card, but internal corporate nonsense kept it from release and completion.

    https://www.youtube.com/watch?v=um-1fAVU1OQ
    Interesting indeed, given one of Apple's early uses of the LLVM compiler was to redirect graphics instructions off Intel's embedded GPUs onto the CPU to cover the shortcomings of those GPUs. Given that Metal came out of that thinking, I do wonder what Apple could do with one or two of these if they had a case that could take 250W GPUs (rough multi-GPU sketch below).
    Steve did say we need to be ready for a future where the core count ramps up.  
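    On macOS today Metal will already enumerate every GPU in the box, so some of the plumbing for "one or two of these" exists; a rough sketch of what that looks like (macOS-only API, purely illustrative):

    import Metal

    // MTLCopyAllDevices() lists every GPU the system exposes: integrated,
    // discrete, or external. An app could split a frame or a compute job
    // across whatever it finds.
    let devices = MTLCopyAllDevices()
    for device in devices {
        let kind = device.isLowPower ? "integrated" : "discrete/external"
        print("\(device.name): \(kind), headless: \(device.isHeadless)")
    }

    // One command queue per GPU is the starting point for spreading work out.
    let queues = devices.compactMap { $0.makeCommandQueue() }
    print("created \(queues.count) command queues")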
    edited June 2018