Comments
Back to topic for a bit. Here's another pushed-speed chip test: a Merom at 3.4GHz. Cooled with? Well, read it. http://www.nordichardware.com/news,3794.html
20,672 3DMark05 points is a fucking insane score.
12,585 3DMark06 points is also a fucking insane score.
The two water-cooled X1900 cards running in CrossFire played the main role in this, since 3DMark is primarily a GPU benchmark.
On the CPU side, the 3DMark06 CPU score of 2897 is actually not that great, because Athlon X2s seem to get 3000-4000... Hmmm... Still, 2.16GHz to 3.4GHz is impressive in its own right.
3DMark01 is a very processor-intensive benchmark: it runs its tests and combines the results into a single 3DMark number. 60,000 is, well... amazing.
Originally posted by backtomac
Do you know of any reference scores? How does this compare to an Athlon or Pentium D?
3DMark01 is a five-year-old benchmark. Like I mentioned, the CPU overclock is very impressive. However, if you look at the 3DMark06 CPU score, 2897 is not great; it gets whipped by Athlon X2s scoring 3000-4000 points.
So, GPU overclock and CPU overclock: very excellent.
GPU scores for 3DMark05 and 3DMark06: super excellent.
3DMark01 score: good for an OMFG WTF w000t!!, but it's really a very old benchmark...
*Note that 12,585 is the overall 3DMark06 score, which is strongly GPU-biased. If you look through the posts, the CPU portion of the score is only 2897, which is less than the 3000-4000 CPU portions that Athlon X2s hit.
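For the curious, here's a rough Python sketch of how a GPU-biased overall score comes together. The formula shape and the 1.7/0.3 weighting are my recollection of Futuremark's 3DMark06 whitepaper, and the ~5800 graphics score is a made-up plug number, so treat this as an illustration rather than the official math:

```python
# Rough sketch of a GPU-weighted overall score, in the spirit of the
# 3DMark06 formula as I recall it from Futuremark's whitepaper.
# The constants (2.5, 1.7, 0.3) are approximate, from memory.

def overall_score(gs: float, cs: float) -> float:
    """Weighted harmonic mean of graphics score (gs) and CPU score (cs)."""
    return 2.5 * 1.0 / ((1.7 / gs + 0.3 / cs) / 2.0)

# Hypothetical ~5800 graphics score for the CrossFire X1900 rig,
# with the 2897 CPU score from the posts:
print(round(overall_score(5800, 2897)))   # lands near the reported 12,585

# Doubling the CPU score moves the overall number only modestly...
print(round(overall_score(5800, 5794)))
# ...while doubling the graphics score moves it far more.
print(round(overall_score(11600, 2897)))
```

The point: boosting the CPU score barely budges the overall number compared to the same boost on the graphics side, which is why a 2897 CPU part can still ride along with a 12,585 overall.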
Keep in mind this is a mobile chip too. I'm sure Conroe and Woodcrest will own the CPU test.
Why are you talking to yourself?
Umm... it's called sharing thoughts on the forum. Anyway, I always make two or three posts at a time as my thoughts get formulated. Also, I thought I'd clarify things, since readers on this forum, while very Mac-savvy, sometimes don't follow the strange PC overclocking/benchmarking world that closely... For example, in that world, getting 4 frames per second more than someone else with the same class of machine is widely celebrated.
Originally posted by MacSuperiority
Keep in mind this is a mobile chip too. I'm sure Conroe and Woodcrest will own the CPU test.
They'd better, otherwise the whole performance-per-watt thing goes out the door. But right now I think performance-per-watt for Yonah/Core Duo is doing okay. IIRC, tomshardware.com did some wider benchmarks (not just 3DMark but lots of other stuff), and Core Duo's performance-per-watt was looking good compared to the Pentium D and Athlon X2s.
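As a back-of-envelope illustration of that comparison, here's a quick sketch; the scores and TDPs below are illustrative round numbers, not tomshardware's measurements, so swap in real figures from a review:

```python
# Back-of-envelope performance-per-watt: benchmark score divided by TDP.
# All numbers below are made-up round figures for illustration only.

def perf_per_watt(score: float, tdp_watts: float) -> float:
    return score / tdp_watts

chips = {
    "Core Duo (mobile)": (2400, 31),    # modest score, very low TDP
    "Pentium D":         (2800, 130),   # hot NetBurst dual core
    "Athlon X2":         (3500, 89),
}

for name, (score, tdp) in sorted(
        chips.items(), key=lambda kv: -perf_per_watt(*kv[1])):
    print(f"{name:18s} {perf_per_watt(score, tdp):5.1f} points/W")
```

Even with a lower raw score, the mobile chip's tiny TDP puts it on top by this metric, which is the whole Yonah story.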
I was also commenting on the weirdness: such a highly overclocked Merom, even though it's a mobile chip, should be performing better on that particular 3DMark06 CPU test. More comprehensive tests, e.g. audio/video encoding, Photoshop actions, etc., should show its performance better.
Also, I'm not sure where this guy got the Merom; finalized shipping Meroms should only be out July/August?
Originally posted by backtomac
Do you know of any reference scores? How does this compare to an Athlon or Pentium D?
It's the combination of the CPU AND the GPU that matters in graphics. The GPU isn't worth a damn if it can't be fed quickly enough by the CPU.
But the CPU isn't worth a damn if the GPU can't render and get the info onto the screen quickly enough, or properly.
But 60,000 is pretty fast. When you consider that it's been pushed to 3.4GHz, that's amazing enough! I wonder which Merom he used, the 2.33GHz or the 2.66GHz. It's probably in one of those charts, but too much information overload.
If the 2.33 has been pushed to 3.4, even with dry ice, that shows there is a lot of room for the chip to grow into.
When Intel showed a pre-production P3 running at 1GHz at the microprocessor trade show a bunch of years ago, it turned out that they were refrigerating it from a unit underneath the table. Everyone thought that was funny. But six months later the CPU came out at 1GHz at retail, and it's been upwards ever since. So these things do tell us something about headroom.
It's the combination of the CPU AND the GPU that matters in graphics. The GPU isn't worth a damn if it can't be fed quickly enough by the CPU... But the CPU isn't worth a damn if the GPU can't render and get the info onto the screen quickly enough, or properly.
3DMark01, it seems, is a heavily CPU-intensive benchmark. But my problem with it is that it's old. My other problem is that I would like to see other benchmarks, e.g. audio/video encoding, Photoshop actions, etc., à la tomshardware.com... 3DMark05 and 3DMark06 are much more dependent on the GPU; I'm quite confident on that point. The GPU is usually the bottleneck to achieving the high frame rates that give you high 3DMark scores. So the great GPU scores come from CrossFired ATI X1900s, liquid-cooled and heavily overclocked.
Originally posted by melgross
But 60,000 is pretty fast. When you consider that it's been pushed to 3.4GHz, that's amazing enough! I wonder which Merom he used, the 2.33GHz or the 2.66GHz. It's probably in one of those charts, but too much information overload.
It's the 2.16GHz part at 3385MHz in the charts. Again, I feel 3DMark01 is too outdated a benchmark to be important. But in PC overclocking land, if they're all super psyched about it, then, well, good on them... the strange, strange world of geekdom we all enjoy.
Originally posted by melgross
If the 2.33 has been pushed to 3.4, even with dry ice, that shows there is a lot of room for the chip to grow into.
Yeah, that sounds cool, but not as insane as the Pentium D 805 overclocking from 2.66GHz to almost 4GHz on air:
http://www.tomshardware.com/2006/05/..._41_ghz_cores/
Originally posted by sunilraman
Originally posted by melgross
It's the combination of the CPU AND the GPU that matters in graphics. The GPU isn't worth a damn if it can't be fed quickly enough by the CPU... But the CPU isn't worth a damn if the GPU can't render and get the info onto the screen quickly enough, or properly.
3DMark01, it seems, is a heavily CPU-intensive benchmark. But my problem with it is that it's old. My other problem is that I would like to see other benchmarks, e.g. audio/video encoding, Photoshop actions, etc., à la tomshardware.com... 3DMark05 and 3DMark06 are much more dependent on the GPU; I'm quite confident on that point. The GPU is usually the bottleneck to achieving the high frame rates that give you high 3DMark scores. So the great GPU scores come from CrossFired ATI X1900s, liquid-cooled and heavily overclocked.
Originally posted by melgross
But 60,000 is pretty fast. When you consider that it's been pushed to 3.4GHz, that's amazing enough! I wonder which Merom he used, the 2.33GHz or the 2.66GHz. It's probably in one of those charts, but too much information overload.
It's the 2.16GHz part at 3385MHz in the charts. Again, I feel 3DMark01 is too outdated a benchmark to be important. But in PC overclocking land, if they're all super psyched about it, then, well, good on them... the strange, strange world of geekdom we all enjoy.
Originally posted by melgross
If the 2.33 has been pushed to 3.4, even with dry ice, that shows there is a lot of room for the chip to grow into.
Yeah, that sounds cool, but not as insane as the Pentium D 805 overclocking from 2.66GHz to almost 4GHz on air:
http://www.tomshardware.com/2006/05/..._41_ghz_cores/
It's too bad that Apple never takes advantage of the performance gains made possible by even some modest overclocking. Even if the total is just 15%, it could be enough to throw the Mac over the hurdle. Apple seems interested in stability at any cost. That's why they underclocked the X1600 in the MBP. But the increased size and cooling capacity of the 17" seems to have alleviated that problem for them to a great extent.
I've always felt that the pro machines, at least the towers, should be adjustable for greater performance: clocks, memory, etc.
Here we go again. Where will it stop?
http://macenstein.com/default/archives/302
Originally posted by melgross
It's too bad that Apple never takes advantage of the performance gains made possible by even some modest overclocking. Even if the total is just 15%, it could be enough to throw the Mac over the hurdle. Apple seems interested in stability at any cost. That's why they underclocked the X1600 in the MBP. But the increased size and cooling capacity of the 17" seems to have alleviated that problem for them to a great extent.
I've always felt that the pro machines, at least the towers, should be adjustable for greater performance: clocks, memory, etc.
The pro machines are supposed to place a high priority on stability; that's something I can identify with. I've been using off-lease PC workstations bought on eBay, and they are stable enough that I only reboot for system updates. None of these systems has offered adjustable memory settings, and I really don't think having them on the Mac platform would help.
These esoteric overclocking exercises are interesting, but I am curious whether they are sustainable for production work. It's no fun losing a day's worth of work on a project due to some random glitch.
Originally posted by JeffDM
The pro machines are supposed to place a high priority on stability; that's something I can identify with. I've been using off-lease PC workstations bought on eBay, and they are stable enough that I only reboot for system updates.
These esoteric overclocking exercises are interesting, but I am curious whether they are sustainable for production work. It's no fun losing a day's worth of work on a project due to some random glitch.
Most of the upgrade cards I've bought over the years have been clockable, and as long as you test out what you are doing and don't go insane with it, they are very stable.
There is the clocking utility for the ATI cards as well. Again, if you stay within decent boundaries, you will be fine. If a component is conservatively rated, overclocking is perfectly ok.
I've been able to achieve between 10 and almost 20% overclocks on my old 9500s and 9600s. There, you could overclock the bus as well. What held us back was the cache being soldered to the board. Not a problem for a long time now.
Woo hoo! Here we go again. Where will it stop?
http://macenstein.com/default/archives/302
Heh. the "esoteric" overclocking continues
w000t! Core 2 Extreme (Conroe) and Woodcrest should blow all records away on air and on liquid/phase change.
It's the headroom of 65nm... dammit AMD, get to 65nm!!
Yet on 90nm, AMD fights back:
AMD challenges Intel with 35W, 65W desktop processors
http://www.tgdaily.com/2006/05/16/am...er_processors/
...Apple seems interested in stability at any cost...
I've always felt that the pro machines, at least the towers, should be adjustable for greater performance: clocks, memory, etc.
The thing is, Apple's conservative clocking leaves more margin for flaws in the code it processes, particularly third-party code. You could tweak the memory to overclock it, adjust latency, etc., but you increase the risk of crashes from poorly coded apps or plain mistakes in them.
On my PC rig I can run memory timings of 2.5-3-3-8 at 400MHz and play Need for Speed: Most Wanted fine, but LOTR: The Battle for Middle-earth II chokes. I have to run the memory at the usual stock 3-3-3-8 and downclock it to 392MHz to get a really stable gaming experience.
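To put those timings in wall-clock terms, here's a quick Python sketch. Note that DDR's advertised "400MHz" is the double-pumped effective rate, so the real clock is half that:

```python
# What memory timings mean in wall-clock terms: CAS latency in cycles
# divided by the real clock gives nanoseconds. DDR's advertised "400MHz"
# is the effective (double-pumped) rate, so the actual clock is half.

def cas_ns(cas_cycles: float, effective_mhz: float) -> float:
    real_clock_mhz = effective_mhz / 2.0
    return cas_cycles / real_clock_mhz * 1000.0   # 1/MHz -> microseconds -> ns

tight = cas_ns(2.5, 400)   # 2.5-3-3-8 @ 400MHz
stock = cas_ns(3.0, 392)   # 3-3-3-8 downclocked to 392MHz
print(f"tight: {tight:.1f} ns, stock: {stock:.1f} ns")
```

So the "aggressive" setting shaves roughly 3ns off each CAS access; some games shrug, others (like BFME2 on my rig) fall over.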
But it's true that if you could tweak a Mac up 15%, test it out, and the core apps you use stay stable, then that would be nice.*
Anyway these PC people seem to have too much time on their hands and have nothing better to do than tinker around with settings and cooling mods and who knows what else.
Pro Mac machines are built for those who know about the technology but just get down to work and keep working through and through.
When I was doing web design and handling deadlines and all that stuff, believe me, the clocking of my machine and memory latency and shite was the last thing on my mind. It was all focus on the project: deadlines, delivering the artwork and code, etc.
It's only being unemployed now, and recovering from mental illness, that I've got time to read up on this overclocking stuff and mod my own CPU and GPU heatsinks to overclock them modestly on air. Sadly, my system won't be breaking any world records or the laws of physics, but most importantly it should be able to play the latest games at Medium... and eventually Low settings... for the next 1.5 years.
*In which case you'd get the benefits of overclocking as Melgross suggests and the stability that JeffDM says is important. But you'd need to set aside time from getting work done to tinker around a bit to test your apps on overclocked settings. Very risky to do when you have major projects and clients, etc.
There is the clocking utility for the ATI cards as well. Again, if you stay within decent boundaries, you will be fine. If a component is conservatively rated, overclocking is perfectly ok.
Yeah, ATI GPU overclocking on the PowerPC Macs was quite a nice thing with the ATIccelerator II app. It shows there's some good headroom; overclocking there got a nice 10-15% more. But heat-wise, you could feel it when running 3D games and such.
Apple's downclocking of GPUs is mainly a heat issue. Yet ironically, even with this and low-wattage CPUs, the MacBook Pro is still hot hot hot...!! WTF
Originally posted by sunilraman
Originally posted by melgross
There is the clocking utility for the ATI cards as well. Again, if you stay within decent boundaries, you will be fine. If a component is conservatively rated, overclocking is perfectly ok.
Yeah, ATI GPU overclocking on the PowerPC Macs was quite a nice thing with the ATIccelerator II app. It shows there's some good headroom; overclocking there got a nice 10-15% more. But heat-wise, you could feel it when running 3D games and such.
Apple's downclocking of GPUs is mainly a heat issue. Yet ironically, even with this and low-wattage CPUs, the MacBook Pro is still hot hot hot...!! WTF
I don't recommend overclocking a laptop. But, when it comes to towers, there is room.
I've never had a problem with any of the machines I've overclocked. Like I said, don't push it to the brink. It doesn't take more than a couple of hours to get right, plus some attention over the course of the next few days while you run your most critical software and hardware; once you have it working correctly, it's stable.
Naturally, never do that in the middle of projects. But the beauty is that everything can be reset if necessary, so there is no permanent problem if you should have one.
When using programs like PS, I haven't found the difference to be of much use. Truthfully, the difference between waiting 4 seconds rather than 5 is of no concern.
But when working on video, it can have a major effect. Back in the days when it took me an entire weekend to render a half hour of edited video, a 20% decrease in that time would mean eight hours!!!
When doing a render that would take an entire day, it could mean the difference between waiting until the next morning to see the result, or having it before evening. The productivity gains are quite startling.
Again, this means nothing for Word, or Quark. But for projects where times are long, this makes a major difference.
It's why mainframes are exchanged when the newer generation is 12% faster.
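The render arithmetic above, sketched in Python: a k% clock speedup cuts a clock-bound job's time by k/(100+k) percent, assuming the work scales with clock, which long renders largely do:

```python
# Time saved by a clock speedup, assuming the job scales with clock.
# A k% speedup cuts wall-clock time by k/(100+k) percent, so a 25%
# overclock buys back 20% of the time. The function is unit-agnostic.

def time_saved(baseline: float, speedup_pct: float) -> float:
    new_time = baseline / (1.0 + speedup_pct / 100.0)
    return baseline - new_time

print(time_saved(40, 25))   # hours back on a 40-hour weekend render: 8.0
print(time_saved(5, 25))    # seconds back on a 5-second Photoshop task: 1.0
```

Same percentage either way, which is exactly the point: one second off a Photoshop action is nothing, eight hours off a weekend render is a night's sleep.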