After eating AMD & Nvidia's mobile lunch, Apple Inc could next devour their desktop GPU business

Comments

  • Reply 21 of 52
    Marvin Posts: 15,326 (moderator)
    richl wrote: »
    Where's the value in Apple using its own GPUs on x86?

    There's a driver problem when it comes to NVidia and AMD because Apple seems to want to develop the drivers but they have no control over the hardware. With ARM, they'd be in control of both.

    Then there's efficiency. If you look at chip benchmarks, an iPad that draws about 5W runs at speeds close to an Intel HD 4000, which sits in a 35-45W chip. This year's generation of mobile GPUs will come close to the current MBPs. Although the Skylake GPUs would be about 2x faster, the power draw is significantly higher.
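
    A rough perf-per-watt reading of those figures, treating the quoted wattages as directly comparable (which is generous, since one is a whole tablet and the other a CPU package TDP); a back-of-the-envelope sketch, not a benchmark:

    ```python
    # Back-of-the-envelope efficiency comparison using the numbers in the post above.
    ipad_watts = 5                                # quoted draw for the iPad
    intel_watts_low, intel_watts_high = 35, 45    # quoted TDP range for the HD 4000-class chip

    # If GPU performance is roughly equal, the efficiency gap is just the power ratio.
    print(intel_watts_low / ipad_watts)   # 7.0
    print(intel_watts_high / ipad_watts)  # 9.0 -> roughly 7-9x better performance per watt
    ```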

    Then there's the price: the mobile GPUs cost Apple hardly anything to make. Cost estimates are around $20 for the CPU and GPU because they're only paying the license for the IP. This matters more for the high-end GPUs.

    Then there's OpenCL to consider. Apple can build their GPU chips to perform better for computation. NVidia's OpenCL performance is terrible vs AMD and even Intel because they want to push their proprietary CUDA.
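
    To make the OpenCL point concrete, here is a minimal, vendor-neutral compute sketch using PyOpenCL (assuming PyOpenCL and at least one OpenCL driver are installed); the same kernel source runs on AMD, NVidia, Intel or PowerVR devices, and it is largely driver and compiler quality that decides how fast it goes:

    ```python
    # Minimal OpenCL vector add via PyOpenCL: one kernel, any vendor's device.
    import numpy as np
    import pyopencl as cl

    ctx = cl.create_some_context()        # picks whatever OpenCL device is available
    queue = cl.CommandQueue(ctx)

    a = np.random.rand(1_000_000).astype(np.float32)
    b = np.random.rand(1_000_000).astype(np.float32)

    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    program = cl.Program(ctx, """
    __kernel void vec_add(__global const float *a,
                          __global const float *b,
                          __global float *out) {
        int i = get_global_id(0);
        out[i] = a[i] + b[i];
    }
    """).build()

    program.vec_add(queue, a.shape, None, a_buf, b_buf, out_buf)

    out = np.empty_like(a)
    cl.enqueue_copy(queue, out, out_buf)
    assert np.allclose(out, a + b)        # same result regardless of which GPU ran it
    ```

    That portability is the point: the silicon vendor matters less than whoever writes and tunes the driver stack.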

    There are downsides to Apple doing the GPU themselves. Intel does sell chips without IGPs, but there's an issue getting the bandwidth between the CPU and GPU as well as memory, so external GPUs aren't really an option. Apple couldn't put a PowerVR GPU inside an Intel CPU unless they had some sort of agreement with Intel. Tallest Skil mentioned before that he knew someone at Intel who said something to that effect though:

    http://forums.appleinsider.com/t/181581/rumor-in-store-signage-outs-speed-bumped-macbook-pros-16gb-of-ram-to-come-standard/40#post_2570404

    "I’m to understand from an Intel insider that the company is making Apple a custom GPU to be used within the next few years.
    This was a little while ago, however, so things may have changed."

    Iris Pro was rumored to have been requested by Apple so they might be driving the Iris Pro design. Intel never put much effort into GPUs before and Steve Jobs had said that they tried to tell Intel and they wouldn't listen. Now that they're good for computation and it can be seen how important GPUs are in general, Intel is doing much better with them.

    The Skylake version of Iris Pro should be competitive with anything AMD and NVidia have right now, and I could see Apple ditching dedicated GPUs in the laptop line entirely. Maybe not the 27" iMacs, but it depends. They just need to put 16GB of DDR4 RAM in the models where they want more video memory, and they can double the core count of the IGP for the highest models (GT4, where Iris Pro is GT3).

    The Mac Pro would really be the only computer left with dedicated GPUs. I don't think the Mac Pro is going to be around forever. The current design is a jumping-off point. It will ease people down gradually to lower models. By 2022, CPUs in laptops will perform like a 24-core Mac Pro would now.

    AMD and NVidia will be history by that point because they won't be bundled in computers. If you look at NVidia's revenue (AMD's is similar overall but the GPU revenue is lower):

    http://investor.nvidia.com/financials.cfm

    you can see that a smaller amount comes from mobile chips; 80% is from the standalone GPUs. If you look at the market share numbers, Intel ships about 4x more units than each of them:

    http://jonpeddie.com/press-releases/details/intel-gains-nvidia-flat-and-amd-loses-graphics-market-share-in-q1/

    The total there is around 400m units, so NVidia and AMD account for about 66m units each (AMD won the console contracts because they undercut NVidia on price). That puts the ASP of their GPUs at roughly $991m/66m ≈ $15. In other words, they aren't surviving because of high-end GPUs like the Titan/GTX 680/780/880/980 etc. It's laptops that are making the money, and they will be gone from this space. Intel used to be below a 40% unit share back in 2007, when there were competitors like VIA, SiS and Matrox.
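
    The unit-share and ASP arithmetic above, spelled out with the post's own rough figures (these are the quoted estimates, not audited numbers):

    ```python
    # Rough arithmetic behind the "$15 ASP" claim, using the figures cited above.
    total_units = 400e6           # ~400m GPUs shipped per quarter (JPR figure)
    # If Intel ships ~4x the units of NVidia and of AMD alike: 4s + s + s = total.
    share_each = total_units / 6  # ~66-67m units each for NVidia and AMD
    gpu_revenue = 991e6           # NVidia's quoted standalone GPU revenue

    print(round(share_each / 1e6))          # ~67 (million units)
    print(round(gpu_revenue / share_each))  # ~15 -> an average selling price of about $15
    ```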

    The Iris Pro 5200 was the first integrated GPU able to replace a dedicated GPU. Skylake's version can replace more mobile dedicated GPUs. If Intel can come up with a fast anti-aliasing method and adaptive vertical sync over Thunderbolt, that gives manufacturers even less reason to invest in 3rd-party components.

    I could see NVidia and AMD merging. That wouldn't be anti-competitive because their biggest competition is Intel. This would give them a company that could do x86 CPUs too, the GPUs would be great for games and OpenCL. There would be no more issues about CUDA support. They still won't be able to stick around but having them together would make them a stronger competitor in the near-term.
  • Reply 22 of 52
    I am a little confused ....

    Desktop GPUs require/exploit dedicated Video RAM to do their thing.

    For example, this from the iMac 5K configuration options:

    Graphics

    Your iMac with Retina 5K display comes standard with the AMD Radeon R9 M290X with 2GB of dedicated GDDR5 video memory for superior graphics performance.
    For even better graphics response, configure your iMac with the AMD Radeon R9 M295X with 4GB of dedicated GDDR5 video memory.

    Graphics

    • AMD Radeon R9 M290X 2GB GDDR5
    • AMD Radeon R9 M295X 4GB GDDR5 [Add $250.00]

    It cost $250 for 2GB -- for a part not made by Apple.


    Wouldn't any Apple Desktop GPU offering need to include expensive, 3rd-party VRAM to support its GPU chip?

    If so, then estimating the cost advantage of multiple Ax chips at ~$35 each becomes much less meaningful -- and it would likely require extra engineering to allow the Ax GPUs to share the VRAM.

    There is more difference between the 290x and 295x than an extra 2GB of VRAM. GDDR5 is fairly pricey, but not horrendously so, and Apple practically owns the mobile DRAM market. Those companies produce what Apple wants them to, for the price Apple wants.

    As to the iGPU, yes, Intel does build chips without it. Primarily the high-end Xeons and i7s, but they do exist, and Apple can demand i5s without it.
  • Reply 23 of 52
    Quote:

    Originally Posted by RichL View Post

     

    ... My understanding is that the motherboards that Apple use are designed and manufactured by Intel. ...


    During the PowerPC to Intel transition that was essentially true. I don't think it has been the case for some years now.

  • Reply 24 of 52
    Quote:
    Originally Posted by Dick Applebaum View Post



    • AMD Radeon R9 M290X 2GB GDDR5

    • AMD Radeon R9 M295X 4GB GDDR5 [Add $250.00]



    It cost $250 for 2GB -- for a part not made by Apple.

     

    You get more than just 2GB of RAM.  From here Radeon R9 M295X Mac vs M290X:

     


    • Floating Point: 3482 vs 2176 GFLOPS

    • Shading Units: 2048 vs 1280

    • Texture Mapping Units: 128 vs 80

    • Cores: 32 vs 20

     

    and so on.
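
    As a quick sanity check, those GFLOPS figures are consistent with the usual shader-count × 2 FLOPs-per-clock × clock-speed estimate; the ~0.85 GHz value below is inferred from the quoted numbers themselves rather than taken from an official spec sheet:

    ```python
    # Peak GFLOPS estimate: shaders * 2 FLOPs/clock (fused multiply-add) * clock (GHz).
    def peak_gflops(shaders, clock_ghz, flops_per_clock=2):
        return shaders * flops_per_clock * clock_ghz

    print(peak_gflops(2048, 0.85))  # 3481.6 -> matches the ~3482 GFLOPS quoted for the M295X
    print(peak_gflops(1280, 0.85))  # 2176.0 -> matches the 2176 GFLOPS quoted for the M290X
    ```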

  • Reply 25 of 52
    jexus Posts: 373 (member)
    Quote:
    Originally Posted by Marvin View Post



    I could see NVidia and AMD merging. That wouldn't be anti-competitive because their biggest competition is Intel. This would give them a company that could do x86 CPUs too, the GPUs would be great for games and OpenCL. There would be no more issues about CUDA support. They still won't be able to stick around but having them together would make them a stronger competitor in the near-term.

    Nvidia's CEO will not accept a merger unless he leads the combined company, so no, CUDA would still very much be pushed and any of AMD's technology that couldn't otherwise be locked up would be abandoned.

     

    That being said, if Creative can still make a living off of selling soundcards to people in the age of "good enough" audio in motherboards, then AMD/Nvidia can do the same.

     

    Nvidia has been expanding into other markets but has hardly done anything of note. Meanwhile everyone continues to predict doom and gloom for AMD, despite the fact that they, unlike Nvidia, have not only expanded into new markets but done so successfully. Margins from their semi-custom, server, and embedded operations are growing healthier and healthier. Nvidia is definitely taking some heavy desktop GPU share, but they've lost a considerable amount in the workstation market, and outside of a few design wins their efforts to expand beyond the desktop have failed.

     

    The reason AMD's loss was particularly high this quarter was a buildup of inventory left over from the brief bitcoin craze that AMD thought it could build on, plus a new round of increased R&D funding for their high-end x86/ARM products from interested enterprise-level customers, on top of already designing 14nm products. They already know they'll take a loss next quarter as well while they let that inventory burn off.

  • Reply 26 of 52

    It's nice to see AI being such an environmentally friendly website by recycling what's essentially been the same article for the last two weeks...

  • Reply 27 of 52

    12" iPad Pro, 12" MacBook Air. Same size. One has a keyboard, one doesn't. One has an Intel chip, the other has an ARM chip. It doesn't take a genius to figure out where this is heading.

  • Reply 28 of 52
    hexclock Posts: 1,259 (member)
    jameskatt2 wrote: »

    Sorry. But there are too many histrionics in this series of articles.

    FIRST: Apple is NOT funding AMD or nVidia.  Apple buys GPUs from them just like Apple buys its CPU/GPU chips from Samsung. The primary difference with the chips it buys from Samsung and those from AMD or nVidia is that Apple designs the chips that it buys from Samsung. AMD and nVidia design their own chips.

    SECOND: Desktop chips - whether CPU or GPU - are hitting ceilings that limit their performance.  It is called the laws of physics.  Intel ran into that problem years ago when its CPUs hit a wall with their power requirements and heat output. In fact, the Mac Pro cannot take Intel's fastest chips because it doesn't have the cooling capacity to take the heat they generate.  Similar problems occur with the GPUs.  Today's top GPUs need TWO POWER SUPPLIES - one from the computer and another separate from the computer.   The top desktop computers draw huge amounts of power.  Think 500 to 1000 Watts.   Run that 24/7 and you get a huge power bill.  Should Apple do its own GPUs, it will run INTO THE SAME PROBLEM AND LIMITATIONS.  

    THIRD: In the mobile arena, Apple has been improving performance by targeting the low-hanging fruit, the easiest problems to solve. But when you look at the performance improvement curve of Apple's iPhone/iPad Ax chips, you see that with the A8, the curve is actually SLOWING DOWN. And this is because Apple has run into the laws of physics again. There is only so much you can do with limited battery power and limited cooling capacity on a mobile device.
    FOURTH:  Much of computing CANNOT be done in parallel. Word processing, spreadsheets, games, email, browsing, etc. are not parallel process tasks.  Even Photoshop is limited to how many parallel processes it can handle.  Apple has further been attempting to get users to use single tasks at a time in full-screen mode.  Even on CPUs, after 2 CPU Cores, more parallelism by adding more CPU cores actually limits the top speed that any core can accomplish by increasing the heat output of the chip.  This is why Intel has to slow down the clockspeed as more cores are added to chips. Thus, including further parallelism isn't going to make performance greater on any single task.  

    Should Apple want to tackle the desktop with its own custom GPUs, realize that they will always be playing catch up and will always be slower than those from AMD and nVidia.

    The only reason for doing so is to save money in manufacturing.  But that will have the side effect of lowering the quality of the User Experience.

    For example: just look at the new Apple Mac Mini.  It is now LIMITED to a 2-core CPU, rather than the 4-core of the previous model. It is SLOWER. But it is less expensive to make. The same limitations are found in the new Apple iMac with the 21-inch screen.

    It is a sad day to see Apple going backwards in the user experience and choosing cheap components over higher quality components.
    Very good points. I would point out that some of these limits pertain to using silicon as a substrate. Upcoming technologies (optronics, spintronics, graphene, etc) hold some promise to surpass these current limits.
  • Reply 29 of 52
    richl Posts: 2,213 (member)
    Quote:

    Originally Posted by Marvin View Post



    I could see NVidia and AMD merging.

     

    That would cause a few fanboy heads to explode. :)

  • Reply 30 of 52
    canukstorm Posts: 2,701 (member)
    Quote:

    Originally Posted by bdkennedy1 View Post

     

    12" iPad Pro, 12" MacBook Air. Same size. One has a keyboard, one doesn't. One has an Intel chip, the other has an ARM chip. It doesn't take a genius to figure out where this is heading.


    Care to enlighten us?

  • Reply 31 of 52
    Dan_Dilger Posts: 1,583 (member)

    Quote:

    Originally Posted by jameskatt2 View Post

     

    --------------------------------------------------

    Sorry. But there are too many histrionics in this series of articles.

     

    FIRST: Apple is NOT funding AMD or nVidia.  Apple buys GPUs from them just like Apple buys its CPU/GPU chips from Samsung. The primary difference with the chips it buys from Samsung and those from AMD or nVidia is that Apple designs the chips that it buys from Samsung. AMD and nVidia design their own chips.

     

    Apple is funding AMD & Nvidia if it buys their products. 

     

    SECOND: Desktop chips - whether CPU or GPU - are hitting ceilings that limit their performance.  It is called the laws of physics.  Intel ran into that problem years ago when its CPUs hit a wall with their power requirements and heat output. In fact, the Mac Pro cannot take Intel's fastest chips because it doesn't have the cooling capacity to take the heat they generate.  Similar problems occur with the GPUs.  Today's top GPUs need TWO POWER SUPPLIES - one from the computer and another separate from the computer.   The top desktop computers draw huge amounts of power.  Think 500 to 1000 Watts.   Run that 24/7 and you get a huge power bill.  Should Apple do its own GPUs, it will run INTO THE SAME PROBLEM AND LIMITATIONS.  

     

    The fact that the current leaders are going to hit those ceilings first should factor into your predictions. That makes it a lot easier for Apple to catch up using alternative technology. Sort of like Apple using Unix to catch up and surpass Microsoft's lead with Windows.

     

    THIRD: In the mobile arena, Apple has been improving performance by targeting the low-hanging fruit, the easiest problems to solve. But when you look at the performance improvement curve of Apple's iPhone/iPad Ax chips, you see that with the A8, the curve is actually SLOWING DOWN. And this is because Apple has run into the laws of physics again. There is only so much you can do with limited battery power and limited cooling capacity on a mobile device.

     

    The only reason raw performance increases are slowing down is that Apple's current mobile products don't need to be a specific % faster each year; they need to be faster at the same or better power efficiency. The goal is not just to be x% faster. Diminishing returns. iPads need to be fast enough to do what people are doing with them now. They don't need to be as fast as a PC. Remove those power constraints and it's obvious that ARMv8 & PowerVR could achieve much faster raw performance than is currently being targeted in a very thin tablet or smartphone.

     

    FOURTH:  Much of computing CANNOT be done in parallel. Word processing, spreadsheets, games, email, browsing, etc. are not parallel process tasks.  Even Photoshop is limited to how many parallel processes it can handle.  Apple has further been attempting to get users to use single tasks at a time in full-screen mode.  Even on CPUs, after 2 CPU Cores, more parallelism by adding more CPU cores actually limits the top speed that any core can accomplish by increasing the heat output of the chip.  This is why Intel has to slow down the clockspeed as more cores are added to chips. Thus, including further parallelism isn't going to make performance greater on any single task.  

     

    WP & spreadsheets, email, browsing were working fine on 2000-era PCs. Video games are largely GPU-bound, with physics that can also be calculated on GPUs. The things that demand the most computing performance today and moving forward are graphics & video processing and physics--making a fluid interface. There are increasingly fewer needs for much faster CPUs to run today's current apps on conventional PCs/notebooks. Apps that do lots of number crunching tasks (like encryption) can often be run faster on GPUs.
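
    The disagreement above is essentially Amdahl's law; a minimal sketch with illustrative fractions (not measurements) of how the serial portion of a task caps the speedup, which is why mostly-serial work gains little from extra cores while graphics-style workloads map well onto GPUs:

    ```python
    # Amdahl's law: if a fraction p of a task parallelizes, the best speedup on n
    # cores is 1 / ((1 - p) + p / n). The p values below are illustrative only.
    def amdahl(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    for p in (0.5, 0.9, 0.99):
        print(p, round(amdahl(p, 4), 2), round(amdahl(p, 1000), 1))
    # p=0.50 -> 1.6x on 4 cores, and barely 2x even on 1000 cores
    # p=0.99 -> 3.88x on 4 cores, ~91x on 1000 cores (the GPU-friendly case)
    ```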

     

    Should Apple want to tackle the desktop with its own custom GPUs, realize that they will always be playing catch up and will always be slower than those from AMD and nVidia.

     

    It doesn't matter if Apple's own products are fast enough and cheap enough to deliver at lower prices (or more performance at the same price). Most of Apple's current Mac lineup uses Intel HD/Iris graphics, which are also "slower than those from AMD and nVidia" but does that matter for those buying them? No. And once you make all the money, you have the capital to invest in getting faster than yesterday's leaders. 

     

    The only reason for doing so is to save money in manufacturing.  But that will have the side effect of lowering the quality of the User Experience.

     

    For example: just look at the new Apple Mac Mini.  It is now LIMITED to a 2-core CPU, rather than the 4-core of the previous model. It is SLOWER. But it is less expensive to make. The same limitations are found in the new Apple iMac with the 21-inch screen.

     

    What market is there for a high end Mac mini? It was formerly sold as a server. Nobody is buying that. The market for the Mini is entry level / convenient rollout of Macs. It doesn't need to be high performance, it needs to be usably fast at a reasonable price ($500-1000).

     

    If you want to pay $1000-$2500 for a Mac, you get a display with it. If you're willing to pay more than $3000, you get a Mac Pro and can customize the display configuration.

     

    The idea that the Mac mini and the low-end iMac should be available with high-end components is ridiculous. It's like being upset that an entry-level subcompact economy car doesn't come with heated leather seats, a very expensive audio system or a V8 engine as options. The audience of a model dictates what options it's going to offer.

     

    It is a sad day to see Apple going backwards in the user experience and choosing cheap components over higher quality components.

     

    How is only offering entry level Intel CPUs an example of "cheap vs high quality components"? Doesn't even make sense, unless you are concern trolling.


  • Reply 32 of 52
    nikon133 Posts: 2,600 (member)
    ksec wrote: »

    No, Intel hasn't used the PowerVR IP in their iGPU for 3 generations now.

    As for the article: this isn't something new. As I have stated before, it is much more likely Apple makes their own GPU rather than switching OS X to ARM. The reason is simply that making a GPU with PowerVR IP is (relatively) easy; coding its drivers is freaking hard work and takes a long time. Since Apple handles its own drivers, it may be in Apple's best interest to only code/optimize for one GPU.

    And I am sure Intel would make a custom x86 chip without an iGPU for Apple rather than lose Apple's Mac business. On Broadwell, ~50% of the die space belongs to the GPU on a 2-core + GT3 die. I am sure Apple could get favorable pricing for the die size saving.

    Actually... Intel did use PowerVR in their low end, namely Atoms... the latest one should have been the PowerVR G6430, which was part of the Atom Z35XY line, released in Q4 2014.

    Other Atoms released in the last 3 years also used some PowerVR logic. Not all Atoms, though.
  • Reply 33 of 52
    ksec Posts: 1,569 (member)
    nikon133 wrote: »
    Actually... Intel did use PowerVR in their low end, namely Atoms... the latest one should have been the PowerVR G6430, which was part of the Atom Z35XY line, released in Q4 2014.

    Other Atoms released in the last 3 years also used some PowerVR logic. Not all Atoms, though.

    Yes, I forgot about their Atom line, lol. And you can tell from history that software/drivers for a GPU matter much, much more than the hardware. Intel never paid IMG for driver support, hence why earlier Atom GPU performance sucked.
  • Reply 34 of 52
    When it introduced the iPad in 2010, it wasn't just a thin new form factor for computing: it was a new non-Wintel architecture. While everyone else had already been making phones and PDAs using ARM chips, there hadn't been a successful mainstream ARM computer for nearly twenty years.

    I would add that there hasn't been a successful non-Wintel computer in 20 years. Apple's Mac OS nearly joined the dust heap until Jobs turned the company around when he returned. It's refreshing to me, actually, to see a whole different OS (iOS 8) gaining traction that may even have a larger installed base than Windows in only a few more years. While iOS is wedded to an ARM chip at present, it's not running on just any ARM chip, but on an Apple-tweaked ARM CPU whose specs another company would take years of work to independently match. This is assuming Apple will not continue to push the envelope in ways my little brain can't imagine.

    Now, if Apple were to replace Intel's anemic built-in GPU with a multi-core GPU variation of their own in the low-end Macs and Mac laptops, Apple could become a better-priced alternative to the Wintel boxes when it comes to graphic-handling performance... and shine especially well in laptops that do a lot of graphics processing away from a power source.

    Ballmer's legacy of helping Apple by his blindness will live on well into the next decade... and Intel's deafness to Jobs' entreaties to build ARM chips for Apple will cost them dearly into infinity and beyond as well.
  • Reply 35 of 52
    misa Posts: 827 (member)
    Dear writer:
    You know the PowerVR chip is the Intel integrated GPU, right?
    The only thing Apple did on the A8X is the CPU and a small amount of specialized circuitry.
    Otherwise, we wouldn't know it's a member of the PowerVR 6XT family.

    I think you mean "the PowerVR tech is in the Intel Integrated GPU"

    And technically that's true. It's in the GMA 500/600 parts that go into the Atom/Pentium. Those GPU parts have always been underwhelming. The i3/i5/i7, however, use Intel's own technology. This has been true since at least 2012, and Intel isn't using PowerVR's GPU cores in current products.

    But that said, the Intel onboard GPU parts are borderline equivalent to other iGPU parts from AMD. They're just powerful enough that you can run a 10-year-old game at a decent frame rate, but they are far inferior otherwise. The Intel Iris Pro 5200 is within earshot (about 20%) of the GeForce GT 650M in the Retina MacBook Pro.

    In order for the Apple A-series GPU parts to get into that performance range they need to scale up. Doubling or tripling the number of GPU cores (vs. the iPad's) in a notebook design may be a better option than a dedicated AMD or nVidia part for a high-end notebook. As for the iGPU in those cases, Apple may simply opt to use the iGPU for OpenCL or h.264/h.265 processing while the dedicated GPU handles all graphics.

    Even on my existing windows desktop, I use the main GPU full time, but use the iGPU for passable video compression. This is so that there's no frame-rate drop from the main GPU.
  • Reply 36 of 52
    At the end of the day, I don't care who makes the GPU as long as the performance matches the best from AMD and NVIDIA. And that level of performance needs to be there across 3rd-party apps as well as Apple's own. It's all well and good making a fast GPU and optimising Final Cut Pro to use it, but if that performance doesn't stretch to the Adobe CC apps as well then it's no good for me. Not to mention Mari, Nuke, Modo and Maya.

    Computing in general is at an interesting crossroads. The move to mobile is in full swing and the R&D spend seems to be on making things smaller and more efficient, rather than chasing higher and higher performance. For the average user, CPUs and GPUs are good enough for most of their needs.

    But content creators are still chasing that performance. And their needs continue to expand. I'm getting more and more clients asking for 4K content rather than 1080p. The step up in processing power needed to make that leap is simply huge. Just rendering an animation takes 4 times as long, which for my average job means I'm now measuring render times in weeks rather than days. On top of that, the extra resolution means I need to create more detailed 3D models using higher-resolution textures. This, again, adds to the render time, but also significantly taxes graphics performance whilst working on the scene. Once the animation is rendered I need significantly faster storage to handle 4K content through compositing, editing and colour correction, and as we continue to see more video editing/compositing tasks handed over to the GPU, we need more VRAM to deal with the higher resolution.
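
    The "4 times as long" figure follows directly from the pixel counts (assuming UHD 4K and render time scaling roughly linearly with pixels, and ignoring per-frame overhead and the extra scene detail mentioned above):

    ```python
    # UHD "4K" has exactly four times the pixels of 1080p.
    uhd = 3840 * 2160       # 8,294,400 pixels
    full_hd = 1920 * 1080   # 2,073,600 pixels
    print(uhd / full_hd)    # 4.0 -> roughly 4x the render time per frame
    ```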

    My point is this: if Apple move to their own GPUs, they need to make solutions that can compete with the FirePros and Quadros and not just Iris Pro, or risk losing content creators. They might be a small percentage of overall revenue, but I feel it's a small percentage that Apple has always been proud to have on board.
  • Reply 37 of 52
    Marvin Posts: 15,326 (moderator)
    oxonrich wrote: »
    I'm getting more and more clients asking for 4K content rather than 1080p. The step up in processing power needed to make that leap is simply huge. Just rendering an animation takes 4 times as long, which for my average job means I'm now measuring render times in weeks rather than days. On top of that, the extra resolution means I need to create more detailed 3D models using higher-resolution textures. This, again, adds to the render time, but also significantly taxes graphics performance whilst working on the scene. Once the animation is rendered I need significantly faster storage to handle 4K content through compositing, editing and colour correction, and as we continue to see more video editing/compositing tasks handed over to the GPU, we need more VRAM to deal with the higher resolution.

    Most of the processing there is CPU-dependent but the real-time previews need the GPU performance. When you look at things like this that use physically-based shaders:


    [VIDEO]


    although it's not using raytracing or high levels of anti-aliasing, it would be suitable for a lot of things. There really ought to be a hybrid approach that combines the CPU and GPU. Those demos run in real-time vs days/weeks to do a similar length animation on the CPU.

    It's ultimately going to be better to have the CPU and GPU together so they can process everything in shared memory. Apple has moved to 16GB minimum in MBPs, which will double performance with DDR4. They could easily allocate 4GB to video memory or even better, share data directly between the CPU and GPU.

    I don't think they'll stop pushing performance up. Skylake is supposed to be around 80% faster than the Iris Pro 5200. If they skip Broadwell, the CPU would get a decent boost too.
  • Reply 38 of 52
    Whilst it is suitable for a lot of things, unfortunately it wouldn't be for my work.

    Our most recent project used more than 64GB of RAM, and that was after some heavy optimisation. I'm not going to be able to load that onto a graphics card anytime soon, and I don't expect to. We're stuck with the CPU and I'm fine with that.

    I don't pretend to know if shared memory could work. I wouldn't be against it, just as long as it didn't reduce the overall memory capacity. 64GB of RAM and 12GB of video RAM is much more useful to me than 64GB of shared RAM. But they do seem to want to make everything smaller, whether it needs to be or not, so that really wouldn't surprise me.

    They may have moved to a 16GB minimum in the rMBP, but unfortunately that's also the maximum, with no possibility of 3rd-party upgrades. I love my rMBP but it generally gets used for Photoshop and Premiere as it struggles with my other work. I don't think Broadwell or Skylake are going to change that.
  • Reply 39 of 52
    Marvin Posts: 15,326 (moderator)
    oxonrich wrote: »
    Our most recent project used more than 64GB of RAM, and that was after some heavy optimisation. I'm not going to be able to load that onto a graphics card anytime soon, and I don't expect to. We're stuck with the CPU and I'm fine with that.

    I don't pretend to know if shared memory could work. I wouldn't be against it, just as long as it didn't reduce the overall memory capacity. 64GB of RAM and 12GB of video RAM is much more useful to me than 64GB of shared RAM.

    Truly shared memory would mean that you wouldn't split it, so 64GB of RAM would be 64GB of video memory (minus system memory of about 2-4GB). Game consoles are set up this way now; they just reserve some memory for the system software.

    There's the option to split frames right now so that all the textures don't have to sit in memory at the same time. A 4K frame can be rendered as 4x 1080p tiles and then joined.


    [VIDEO]


    Instead of them running on multiple machines, just do the splits sequentially on a single machine.
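
    A minimal sketch of that tile-and-stitch idea: render a UHD frame as four 1080p quadrants one after another, then join them. `render_region` here is a hypothetical stand-in for whatever renderer would actually be called with a pixel window into the full frame:

    ```python
    # Render a 4K (UHD) frame as four sequential 1080p tiles and stitch them together.
    import numpy as np

    FULL_W, FULL_H = 3840, 2160
    TILE_W, TILE_H = 1920, 1080

    def render_region(x0, y0, w, h):
        # Placeholder: a real renderer would trace/shade only this window, so only
        # the textures and geometry visible in it need to be resident at once.
        return np.zeros((h, w, 3), dtype=np.float32)

    frame = np.zeros((FULL_H, FULL_W, 3), dtype=np.float32)
    for y0 in range(0, FULL_H, TILE_H):
        for x0 in range(0, FULL_W, TILE_W):
            frame[y0:y0 + TILE_H, x0:x0 + TILE_W] = render_region(x0, y0, TILE_W, TILE_H)
    ```
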
    oxonrich wrote: »
    They may have moved to a 16GB minimum in the rMBP, but unfortunately that's also the maximum, with no possibility of 3rd-party upgrades. I love my rMBP but it generally gets used for Photoshop and Premiere as it struggles with my other work. I don't think Broadwell or Skylake are going to change that.

    Broadwell/Skylake move to DDR4 memory, which doubles the density, so 32GB can fit in the same space as 16GB. It may be even more: 128GB server modules are possible with DDR4:

    http://www.extremetech.com/computing/192711-samsungs-new-20nm-ddr4-clears-the-way-for-massive-128gb-dimms

    The Mac Pro can take 4, so up to 512GB. AMD's next GPUs should have up to 16GB of video memory, but with a shared memory architecture they could use the DDR4 memory.
  • Reply 40 of 52

    Ha - 8 hours on 7 machines for that image. So that's why no one uses Cinema 4D for real animation work.

     

    I really hope the memory can get that high on the next generation of rMBP and nMP. But then the old Mac Pro had 8 DIMMs and they scrapped 4 to make it smaller. There's nothing to stop them doing that again, and you never know with Apple until it actually happens.
