Comparing the 2.6GHz i7 versus the 2.9GHz i9 Vega 20 MacBook Pro

Posted in Current Mac Hardware. Edited September 2019.
We compared both models of Apple's newly released Vega 20-equipped MacBook Pro to help determine whether the performance gains warrant the extra cash needed to upgrade from the 2.6GHz i7 to the 2.9GHz i9.

2018 Vega 20 MacBook Pros


Four months ago, when Apple refreshed the 15.4-inch MacBook Pro with six-core CPUs, graphics cards that come standard with 4GB of memory, and, for the first time in a MacBook, the option of 32GB of system memory, we compared the performance of all three available CPUs and found the performance difference to be negligible.





The current design of the 15-inch MacBook Pro is incredibly thin and light for the performance it packs. There are Windows alternatives that are more powerful, but they require a lot more than 87W of power, and as we saw with the Dell XPS 15, if you're working without wall power, all that extra GPU performance and slightly better cooling is lost and then some -- even with all the power-saving profiles turned off.

2018 Vega 20 MacBook Pro thin chassis


Although the MacBook Pros deliver consistent performance when unplugged, that thin design forces smaller cooling assemblies than those of some thicker competitors. Even though each CPU option in the 2018 15-inch MacBook Pro features six cores, the CPU clock speeds across all three are nearly identical once the fans and temperatures stabilize during a heavy workload.

                              2.2GHz i7 / 555X / 16GB   2.6GHz i7 / 560X / 16GB   2.9GHz i9 / 560X / 32GB
Cinebench Score (5-run avg)   991                       1001                      1011
Cinebench Speed (5-run avg)   3.05GHz                   3.1GHz                    3.15GHz


In our Cinebench R15 CPU test, which we run back-to-back repeatedly so the systems heat up and eventually stabilize, there was only a 100MHz clock speed difference between the base 2.2GHz CPU and the top-spec 2.9GHz i9. Based on those results, our initial takeaway was that the i9 CPU option wasn't worth it for the vast majority of users.
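To show what that methodology looks like in practice, here is a minimal Python sketch of the five-run averaging. The per-run numbers are hypothetical, chosen only so the averages land on the base 2.2GHz model's published figures (991 points at 3.05GHz), since Cinebench R15 doesn't expose its scores to a script.

# Minimal sketch of the five-run averaging methodology (illustrative only).
# The per-run values below are hypothetical; Cinebench R15 has no scripting
# interface, so real numbers have to be recorded by hand from each pass.
from statistics import mean

def summarize(scores, clocks_ghz):
    """Return the average score and average stabilized clock across runs."""
    return mean(scores), mean(clocks_ghz)

# Five back-to-back passes let the fans and temperatures stabilize.
scores = [988, 990, 992, 991, 994]           # hypothetical per-run scores
clocks = [3.07, 3.05, 3.04, 3.05, 3.04]      # hypothetical per-run clocks (GHz)

avg_score, avg_clock = summarize(scores, clocks)
print(f"5-run average: {avg_score:.0f} points at {avg_clock:.2f}GHz")
# -> 5-run average: 991 points at 3.05GHz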

Since then, Apple released new MacBook Pro models with Vega 16 and Vega 20 GPUs. The pair are entirely new graphics chips equipped with fast HBM2 memory built right into the GPU package, and the entire package is now cooled by the heat pipes, versus just the GPU in the previous models.

2018 i7 Vega 20 MacBook Pro


We have a full review of the Vega 20-equipped MacBook Pro that you can peruse if you want to see how it compares to the 555X and 560X models. Spoiler alert -- the Vega 20 configuration is a screamer.

That leaves one question: If you're ordering the faster and more efficient Vega 20 graphics, is it worth spending another $300 to jump from the 2.6GHz i7 to the 2.9GHz i9?

Starting off with the standard Geekbench 4 test, the i9 model showed noticeably higher single-core and multi-core scores, as we saw in the past. Geekbench runs a variety of tasks, and CPU usage is low most of the time, which lets the CPU turbo boost higher. The result: the i9 scored about 11 percent higher in single-core and nine percent higher in multi-core.

                          2.6GHz i7 Vega 20   2.9GHz i9 Vega 20
Geekbench 4 Single Core   5105                5675
Geekbench 4 Multi-Core    23552               25650
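Those percentages fall straight out of the scores in the table; a quick check in Python:

# Percent gains of the 2.9GHz i9 over the 2.6GHz i7 in Geekbench 4,
# computed from the scores in the table above.
geekbench = {
    "Single Core": (5105, 5675),
    "Multi-Core": (23552, 25650),
}
for test, (i7, i9) in geekbench.items():
    gain = (i9 - i7) / i7 * 100
    print(f"Geekbench 4 {test}: i9 is {gain:.1f}% faster")
# Prints roughly 11.2% (single core) and 8.9% (multi-core), matching the
# "11 percent" and "nine percent" figures above.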


Onto our five-run Cinebench CPU test, which pushes all cores to the max. Here we see a bigger difference than in our previous runs with the 560X graphics. Our i7 is slightly slower than our previous i7 with the 560X (a 991 score versus 1001), whereas the new i9 paired with Vega 20 is noticeably faster than the old i9 (1068 versus 1011). It also finished at a higher clock speed of 3.25GHz versus 3.1GHz.

It's important to note that even though we're seeing an eight percent difference, and our i9 paired with Vega 20 is faster than it was when paired with the 560X, chip performance can vary from machine to machine.

                       2.6GHz i7 Vega 20   2.9GHz i9 Vega 20
Cinebench CPU Score    991                 1068
Cinebench Speed        3.1GHz              3.25GHz


Moving onto real-world tests, we saw no difference whatsoever in Adobe's Lightroom. Our past tests concluded that the graphics were practically unused and the CPUs all ran at the same clock speeds; what actually improved performance was having more RAM.

Now that we have i7 and i9 models, both with 32GB of RAM, we see that performance is practically the same when exporting 99 edited 42MP RAW images to JPEG. Develop module performance was identical, with both machines being very responsive.

                             2.6GHz i7 Vega 20      2.9GHz i9 Vega 20
Lightroom 99 42MP RAW        5 minutes 58 seconds   5 minutes 50 seconds
CPU speed during rendering   3.0GHz                 3.0GHz


We then turned to video editing in Final Cut Pro X, and started with the Bruce X benchmark. Both machines took 48 seconds to render this 5K project, which was to be expected since this benchmark is mainly GPU-intensive.

Next, we stabilized a 20-second 4K H.264 clip. Both the i7 and i9 models paired with Vega 20 took just eight seconds to complete the task, which is very impressive, but again the performance is the same.

Exporting a five-minute 4K project, our i9 MacBook Pro took 3 minutes and 21 seconds and the i7 model was two seconds faster at 3 minutes and 19 seconds, within the margin of error.

Looking at a more power-demanding format, like 4K RAW from the Canon C200, both machines take much longer to complete the task. The i9 was 21 seconds faster, but given that this task takes about 14 minutes, that's only about 2-percent faster. Both machines could play the edited footage back at 30 frames per second without issue.

On graded 4.5K .R3D RAW footage from the RED Raven, the mid-2018 i9 MacBook Pro with 560X graphics finished the render with the CPU running at 2.4GHz, 500MHz under its rated base clock speed of 2.9GHz.

This RED RAW footage maxes out both the CPU and GPU, which results in a lot of heat output. The i9 had to throttle, while the 2.6GHz i7 still managed to turbo boost up to 2.8GHz and was faster overall.

Thankfully, that same i9 paired with the more efficient Vega 20 now runs at a stable 2.9GHz, and our i7 finished slightly faster than before at 2.85GHz. With both running at nearly the same speed, the render times were almost identical, with less than a one-percent difference.

2018 Vega 20 MacBook Pro


Wrapping up our results, even with the more efficient Vega 20 graphics, we don't feel that the $300 2.9GHz i9 upgrade is worth the added cost if your workflows involve long, sustained calculations.

However, if your workflow lends itself to quick processor loading and unloading, then it can make a difference, as demonstrated by the improvements seen in Geekbench 4. This all comes down to thermal design choices that Apple made, possibly based on promises that Intel gave the company in 2015 when it started working on this case design.

As a result, when the i9 chip is placed in a thin and light notebook like the 2018 MacBook Pro, the thermal limitations inhibit the chip's ability to turbo boost and maintain anything close to the maximum speed that the chip allows, and it ends up performing at very similar speeds to the less expensive i7.

Save $150 to $225 on every Vega MacBook Pro

For a limited time, Apple authorized reseller Adorama is taking $150 to $225 off every Mid 2018 15-inch MacBook Pro with Vega 16 or Vega 20 graphics for AI readers. These deals, which can be activated with coupon code APINSIDER using the pricing links below and in our Price Guide, deliver the lowest prices anywhere on the newly released configurations. What's more, Adorama will not collect sales tax on your order if you live outside NY and NJ. For most customers, that incentive combined with our coupon will save you between $470 and $790 on these brawny new machines.

To snap up the discounts, you must shop through the pricing links below or in our Price Guide and enter coupon code APINSIDER during checkout. Need help? Send us a note at [email protected] and we will do our best to assist.

2018 15" MacBook Pros with Vega 16 graphics 2018 15" MacBook Pros with Vega 20 graphics

Comments

  • Reply 1 of 13
    What a waste. This is why Apple is getting a bad rap and a repeat of the early 90s. Who gives a fu** of the i9 if you can’t use it?
  • Reply 2 of 13
    This was my impression as well from the previous articles.  The Vega is a “must have” and maxing the ram is #2

    To be fair, this is also true with Windows PC’s.  The middle of the road CPU almost always gives the best value.

    Maybe for 1% of buyers is maxing the machines processor worthwhile. (Additional cost vs. additional productivity)


  • Reply 3 of 13
    What a waste. This is why Apple is getting a bad rap and a repeat of the early 90s. Who gives a fu** of the i9 if you can’t use it?
    This has nothing to do with Apple and everything to do with Intel. With the Spectre firmware mitigations no advantage is shown for their flagship CPUs over mid-tiers and they know it. So go blame Apple for their problems.
  • Reply 4 of 13
    What a waste. This is why Apple is getting a bad rap and a repeat of the early 90s. Who gives a fu** of the i9 if you can’t use it?
    This has nothing to do with Apple and everything to do with Intel. With the Spectre firmware mitigations no advantage is shown for their flagship CPUs over mid-tiers and they know it. So go blame Apple for their problems.
    Wouldn’t the Spectre fixes impact all tiers alike?
  • Reply 5 of 13
    So, this entire article presupposes that the 'heavy workload' == 'graphics intensive'.   I'm a member of a neglected (in these kinds of articles) Mac-user demographic, i.e. a software engineer doing iOS.

    And really, this demographic is not all that rarefied and unimportant.   We're the ones that actually make all the stuff that you guys run on your Macs and iPhones and iPads.

    In any case, whether you think our viewpoint is one to be considered, or is an exotic edge-case, here it is:

    0) Graphics performance matters almost not at all.   As long as it can run 3 4K monitors and see lotsa code, I'm good.   The integrated graphics on my MBP 13 are fully adequate.   Any sort of discrete graphics card is a waste of money, heat, and battery life.   I don't care about games, I don't care about Final Cut Pro performance, I don't care about Adobe Photoshop.   All that matters, to me, is how fast Xcode can compile Swift and Objective-C code, link it, and deploy it to a simulator or device.

    1). My company bought me a 2018 i7 six core, and I personally have a 2018 i9 six core.   I can tell you the i9 finishes compiles incrementally faster than the i7 does.   It's not dramatic, but a 2 minute 58 second build comes down to 2 minutes, 20 seconds.   This is real-world performance that, magnified over the course of a workday, translates to substantially more code getting written.

    2) My old despised 2013 Mac Pro 12-core 64 G 1 TB beast compiles almost as fast as the 2018 i7 six core.   And it's a lot quieter.    And it doesn't degrade compile speed no matter how hard I push it.

    3) With both MacBook Pros, it seems to help the compile speed a little bit to run an app (TG Pro in my case) that takes control of the internal fans and runs them full-out.  I also put them on a cooling laptop pad, but it's not clear to me that this has any effect, except to raise the ambient white noise level in my workspace.

    4) Configuring Xcode to put the DerivedData products on a RAMDisk helps quite a bit.    I suppose that's irrelevant to this discussion, though.

    All I'm asking is that, when AppleInsider does analyses like this, you include some Xcode runs - get some large Swift codebases from GitHub (they abound there), and pick one as your testbed baseline.  Xcode is free from the Mac App Store, and for my money, is the single-most important source of compute workloads you should be evaluating. Run clean builds and incremental builds and time them.   Include that along with CineBench and Geekbench and Handbrake video file transcodes and Final Cut Pro renders, and whatever else you're doing.    I promise you, you will find that your video card matters less than nothing, when you view performance from this viewpoint.    And this is kind of an important viewpoint - software developers represent a significant subset of 'pro' users; Apple claims over 50% of GitHub open source submissions are done from Macs.

    my .02
    Don    
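For anyone who wants to try the RAM-disk DerivedData setup mentioned in point 4 above, here is a minimal sketch, assuming a 4GB disk is big enough for the project and that Xcode still honors the IDECustomDerivedDataLocation preference key (the same change can be made in Xcode's Locations settings). Note that the disk, and any build products on it, disappear on reboot.

# Minimal sketch: create a macOS RAM disk and point Xcode's DerivedData at it.
# Assumptions: 4GB is enough for the project, and Xcode honors the
# IDECustomDerivedDataLocation defaults key (otherwise set the location
# manually in Xcode's Locations settings). Contents are lost on reboot.
import subprocess

SIZE_GB = 4
SECTORS = SIZE_GB * 1024 * 1024 * 1024 // 512  # hdiutil sizes RAM disks in 512-byte sectors

# Create an unmounted RAM-backed device (prints something like "/dev/disk4").
device = subprocess.run(
    ["hdiutil", "attach", "-nomount", f"ram://{SECTORS}"],
    capture_output=True, text=True, check=True,
).stdout.strip()

# Format it as HFS+ and mount it at /Volumes/DerivedData.
subprocess.run(["diskutil", "erasevolume", "HFS+", "DerivedData", device], check=True)

# Tell Xcode to keep DerivedData on the RAM disk.
subprocess.run(
    ["defaults", "write", "com.apple.dt.Xcode",
     "IDECustomDerivedDataLocation", "/Volumes/DerivedData"],
    check=True,
)
print(f"DerivedData now lives on {device} at /Volumes/DerivedData")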
  • Reply 6 of 13
    I bit the bullet and am typing this on a i9/V20/32GB. I figured what's another $300 when you're already up to $4.5K on a machine? :D If it helps with certain parts of my workflow, then great.
  • Reply 7 of 13
    cpsro Posts: 3,198 member
    Who's going to tear apart their i9 V20* to see if Apple is using a different/better thermal paste? Otherwise, why should Cinebench scores be better than the i9 560X? (yeah, yeah, unit-to-unit variability.)

    *iFixit score 1/10
    edited December 2018
  • Reply 8 of 13
    doncl said:
    So, this entire article presupposes that the 'heavy workload' == 'graphics intensive'. [...]
    Finally someone talking some real sense. Who really gives a rats about relative CineBench and Geekbench scores. Like it’s going to make me soooo much more productive...NOT !
  • Reply 9 of 13
    k2kw Posts: 2,075 member
    What a waste. This is why Apple is getting a bad rap and a repeat of the early 90s. Who gives a fu** of the i9 if you can’t use it?
    This has nothing to do with Apple and everything to do with Intel. With the Spectre firmware mitigations no advantage is shown for their flagship CPUs over mid-tiers and they know it. So go blame Apple for their problems.
    This is why I don't want to buy a laptop now.    I'm waiting for when Intel finally delivers 10nm chips for these MBPs.   Its so far overdue.
    At this point I think that Intel should have tried making 12 nm chips.    Yes they said they were going to release the 10 nm chips and its a little like raising the white flag but sometimes a "Bird in the hand is worth two in the Bush".

    I also believe that Apple really wants to do the 10 nm chips because its supposed to come with support for LPDDR4.  I'm hoping that when Intel  finally does release 10 nm (or 12 nm) Apple could use more space in the body for returning the MagSafe and SD card slots as options (yep I know it's a fantasy).   A little more keyboard travel depth would be nice too.
  • Reply 10 of 13
    dewme Posts: 5,356 member
    doncl said:
    So, this entire article presupposes that the 'heavy workload' == 'graphics intensive'. [...]
    Yep. Totally agree. The biggest workstation-level productivity improvements I've seen on the hardware side for SW dev teams on all platforms (Window, Linux, Mac, embedded) have been the result of moving to all solid state storage and multiple monitors. After that I'd say that it would be equipping machines with real workstation class processors (like Xeons), lots of memory, and all housed in well cooled chassis so the machines don't buckle under load. Software developers do not generally benefit from gaming type computers setups - unless you are a game developer. To further reinforce the notion that video performance is secondary or less importance, more teams are moving to rack based server farms and on-premise cloud servers (and hybrid) for driving automated builds for CI and CD dev strategies. I've always viewed laptops and notebook computers as somewhat "luxury" items for SW developers who need to be productive on the road, at home, while traveling, etc., and not really the best platform for mainstream development. Issuing notebooks to SW devs in large corporate settings is also a really great way to chisel out little more of their personal lives from themselves and their families by getting them to take more work home with them. 
  • Reply 11 of 13
    macxpress Posts: 5,808 member
    What a waste. This is why Apple is getting a bad rap and a repeat of the early 90s. Who gives a fu** of the i9 if you can’t use it?
    Let me guess...if Steve were here! Oh wait...a very similar thing happened under Steve as well. Hmm...Well there goes that theory!
    edited December 2018
  • Reply 12 of 13
    Mike Wuerthele Posts: 6,861 administrator
    doncl said:
    So, this entire article presupposes that the 'heavy workload' == 'graphics intensive'. [...]
    The GeekBench benchmarks are a very effective assessment of this kind of processing -- and we address that rapidly loading and unloading workflow a bit at the end of the piece. 
    edited December 2018
  • Reply 13 of 13
    doncl said:
    So, this entire article presupposes that the 'heavy workload' == 'graphics intensive'. [...]
    Best first post ever. 