Is there a big performance advantage going from 2.8 GHz to 3.0 GHz, or from 3.0 GHz to 3.2 GHz? That would help justify whether the faster processor is worthwhile.
I don't think it's worth the extra money. If you absolutely had to have the most powerful machine, then it might be worth it. Some people might not even notice the speedup.
If you're looking at the one-chip version, I would wait for confirmation that there's an open second socket. If there is, then I think that version is worth considering, with a second chip added several years from now.
I'm just going to get the base model with the 8800 graphics card - I doubt I'll ever need 8 cores.
It's certainly possible. But with Apple, it's also entirely possible that it isn't.
Most likely what Apple has done is leave the sockets for the second CPU and RAM riser card unsoldered. That would let them use the same motherboard while making it impossible to upgrade with a second processor.
Not that I'd put it past Apple, but I really hope they didn't do that.
On my PowerMac G5 Dual 2.5 GHz (June 2004 model) with 6.5 GB RAM, I run HandBrake 0.9.1 to rip a home DVD. I then use the same HandBrake program on my MacBook Core 2 Duo with 2 GB RAM to rip the same DVD, and it completes in half the time it took on the PowerMac.
If I now run the same HandBrake rip of the same DVD on the new Mac Pro 2.8 GHz dual quad-core Xeon with, say, the same 6.5 GB RAM footprint, how much faster than the MacBook would you expect the Mac Pro to be? And would you expect any difference in HandBrake's wall time on the Mac Pro with 2 GB vs. 6.5 GB?
It doesn't look like they got rid of the second RAM card; the CTO page lets me choose a single processor and 32GB RAM. The processor socket is a different question, we'll have to see.
Is there a big performance difference between 2.8 and 3.0, and between 3.0 and 3.2?
I know there should be a noticeable boost going from 2.8 to 3.2, but is it really worth $1600? And is it really worth an extra $800 for a 200 MHz bump?
I don't do anything too hardcore in graphics or video, but I just want the biggest and best. I have two other PCs with quad cores and 8GB of RAM and barely use their potential.
Not too much.
But if you are doing something that takes time, it might make a difference.
If a render takes the 2.8 ten hours, then a 3.0 GHz machine will finish in about nine hours twenty minutes, and a 3.2 GHz machine in about eight and three quarters.
That might make the difference between seeing it today, if you started first thing in the morning, and seeing it tomorrow.
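To put rough numbers on that clock-scaling argument, here's a quick sketch (my own illustration, assuming a purely CPU-bound job that scales perfectly with clock speed, which real renders rarely do):

[CODE]
# Ideal clock-frequency scaling: runtime falls in proportion to clock speed.
# Best-case numbers only; I/O and memory stalls will eat into these.

base_ghz = 2.8
base_hours = 10.0

for ghz in (3.0, 3.2):
    hours = base_hours * base_ghz / ghz  # runtime scales as 1/clock
    print(f"{ghz} GHz: about {hours:.2f} hours")

# Prints roughly:
# 3.0 GHz: about 9.33 hours
# 3.2 GHz: about 8.75 hours
[/CODE]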
The standard configuration uses the Radeon HD2600XT? That's such a horrible mismatch for Intel's mighty 8 cores @ 2.8 GHz on a 1600 MHz FSB! You will be crawling at 30-inch resolution for anything 3D. I don't know if iTunes can even manage its visualizer at that resolution.
I am surprised that the GeForce 8800GT 512MB wasn't the default standard configuration. On the other hand, the $200 upgrade to the 8800GT isn't that bad. It costs $275 retail to buy an 8800GT. Considering the 2600XT costs about $80, it's almost trivial to upgrade to the 8800GT.
If it keeps the cost down I'm glad the 2600xt is the default card. Not everyone uses a 30". I'm on a 24" and I think the 2600xt would be fine for most things besides any semi-serious gaming. They could have done worse... Like they did with the last rev... 7300gt as default card? LOL ... they could have put a x2300pro or 8300gt as default card.
Well, you are obviously an idiot. First, x2300pro is not a valid model. 8300GT doesn't exist either. At the time of the last Mac Pro revision, the GeForce 7 series was the latest from Nvidia and the X1000 series was the latest from ATI, so how did you come up with 8300GT and x2300pro?
You are on a 24-inch. Have you tried to play games at 1920x1200 on the 2600XT? You will be getting 10 fps in Counter-Strike if you are lucky, and 2 fps in Crysis on Medium. The 2600XT is only usable on a 19-inch LCD at most; anything bigger and you need more graphics power.
Yeah, but what software is going to use more than 4 processors effectively? Most of the benchmarks are comparing the new machines to the Quad G5s, and comparisons to other Mac Pros just seem to show the difference in clock speed.
Having worked on efficient coding of multiprocessor software for my PhD about a decade back (look for it at http://www.itee.uq.edu.au/~philip/Publications/), I wouldn't be too surprised if the problem is that too few people understand the performance issues. If you don't have a good grounding in computer architecture, with some understanding of OS and other hardware-software interactions, it's hard to get decent speedups.

This is especially true as the speed gap between DRAM and processors grows. There's been a bit of a stall in clock speed increases the last few years, but DRAM hasn't been improving that dramatically either: transfer rates have been going up, but the total time to start a new random access is still pretty slow compared with CPU cycle times.

To look at some numbers: a DDR3 SDRAM from Micron with a 1.5 ns cycle time has an overhead of at least 24 ns before any data starts to move. If your shiny new 8-core 3.2 GHz machine is only trying to deliver a conservative 1 instruction per clock per core, it can execute over 600 instructions in the time it takes this kind of DRAM to heave into life. The DDR2-based memory the Mac Pro uses is not far off this sort of speed.
But anyway, the point is that any code that uses all the cores has to avoid touching DRAM as much as possible, otherwise memory accesses become a serious bottleneck.
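To make that arithmetic explicit, here's a quick back-of-the-envelope sketch (just restating the figures from the post above; the 24 ns number is the quoted DRAM overhead, not a measurement of the Mac Pro):

[CODE]
# How many instructions an 8-core machine forgoes while one
# random DRAM access gets started. Figures from the post above.

cores = 8
clock_ghz = 3.2
ipc = 1.0                 # conservative: 1 instruction/clock/core
dram_overhead_ns = 24.0   # latency before data starts to move

cycles_per_core = dram_overhead_ns * clock_ghz   # 76.8 cycles
lost = cycles_per_core * ipc * cores             # ~614 instructions
print(f"~{lost:.0f} instructions' worth of work per DRAM access")
[/CODE]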
I bit on the 8-Core base model + GeForce 8800 GT. It's like $1000 over my budget, but I've got 0% APR for the rest of the year on this card so what the hell.
Sorry, but I can't see how this machine is up to 2x faster than the previous generation.
Apple's own benchmarks point to a 1.1x to 1.3x improvement, not 2x:
https://www.apple.com/macpro/performance.html
8 cores vs. 4 cores for the mid-line models.
Faster bus, faster memory, faster CPUs, etc.
It's all theoretical. We'll find out when they land in the review sites' hands and we see tests.
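For what it's worth, here's one rough guess at where the "up to 2x" could come from (my own arithmetic on illustrative configurations, not Apple's actual methodology):

[CODE]
# Ideal speedup from core count and clock, assuming a perfectly
# parallel, CPU-bound workload. Previous-gen figures are illustrative.

old_cores, old_ghz = 4, 3.0   # e.g. a previous-generation quad
new_cores, new_ghz = 8, 2.8   # new 8-core base model

ideal = (new_cores / old_cores) * (new_ghz / old_ghz)
print(f"ideal speedup: {ideal:.2f}x")   # ~1.87x before bus/memory gains
[/CODE]

Apps that don't scale past 4 cores will see far less, which is why measured numbers in the 1.1x-1.3x range aren't surprising.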
I just noticed that the new Mac Pro can accept SAS drives. Was this possible with the previous Mac Pro?
It does, but you need the optional RAID card as well.
Note to site admins: If you're going to disable "Quick Reply" you might as well remove the code from the page.
Quick reply isn't disabled.
Gotcha. I have to click a user's quick reply button first before that field becomes active. Of all the vBulletin boards I frequent, this is the only one set up that way. At least I know why that field is down there. Thanks.
Well done Apple!
Kodo's to Apple!
I think you mean "kudos"
back on topic:
I was a bit surprised to still see FB-DIMMs. I thought Intel was going to phase them out because of the extra latency, power consumption, and cost?