Doesn't seem like there's anything that would work in Mountain Lion if Apple wanted to do that. Business users typically need the ability to wait a few months before updating to the newest OS, for a variety of reasons.
Still no second socket, so my point stands. Pins for it, but good luck being the one to solder on your own and try to get software to work with it.
No second socket now, but who knows what Apple will decide when it comes shipping time…
I personally think a second socket would make sense; say I want to increase my storage space, but I still want the blazing speed of the internal PCIe SSD stick…
Well, if only a single socket, then I have to buy a more expensive stick, shuffle all my data off of the current one to an external drive, switch sticks & then shuffle all the data to the new stick… Then I am left with a still-expensive PCIe SSD stick with nothing to put it in… Wasteful…!
Two sockets make better sense for end-user convenience, and it is an upsell when one is perusing the Apple Store for a new BTO Mac Pro…
Asking pros to throw out their $5000 iTubes to upgrade the GPU isn't what most people would call "compelling". I agree that Apple should focus on Mac Pro GPUs, but using proprietary non-upgradable GPUs is frankly quite insulting to Mac users.
It's insulting to you because you bought a second hand 2009 Mac Pro and upgraded it with a 6870 yourself and Apple made zero dollars from you so they owe you the same amount. When you say 'throw away', you mean sell second-hand.
Quote:
The top model should be very fast though not as fast as an Ivy Bridge Mac Pro tower would have been.
It depends on how you measure speed. You couldn't get two FirePros in the old Mac Pro and Apple has never gone up to the fastest CPUs in dual configs. What you would be comparing is:
2x 8-core Xeon ($1440 x 2) + 1x FirePro
vs
1x 12-core Xeon ($2k) + 2x FirePro
For CPU-bound tasks, the first one is faster but not as much as you'd think because the single CPU models are faster relative to the price. This is really making the point that having a second GPU is more value computationally than a CPU. While that's only true for things that use it, the things that need the power need to start using OpenCL.
We will have to wait to hear if Mavericks supports CPU sharing across Thunderbolt 2. I don't believe TB-1 does. If it does, then one rack of 36 MPs could conceivably run at 1/4 petaflop. That's cooking!
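The 1/4-petaflop figure checks out under one plausible assumption: that each machine delivers about 7 teraflops of single-precision GPU compute (a number floated in keynote coverage, not a confirmed spec). A quick sketch:

```python
# Back-of-envelope check on "one rack of 36 MPs ... 1/4 petaflop".
# Assumes roughly 7 TFLOPS of single-precision GPU compute per machine
# (a keynote-coverage figure, not a confirmed spec); sustained
# real-world throughput would be lower.
tflops_per_machine = 7.0
machines_per_rack = 36

rack_tflops = tflops_per_machine * machines_per_rack
rack_pflops = rack_tflops / 1000.0
print(f"{rack_tflops:.0f} TFLOPS = {rack_pflops:.2f} PFLOPS")
```

252 TFLOPS is almost exactly the quarter petaflop mentioned, though whether anything could actually be shared across Thunderbolt 2 is a separate question.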
Quote:
Originally Posted by Marvin
It depends on how you measure speed. You couldn't get two FirePros in the old Mac Pro and Apple has never gone up to the fastest CPUs in dual configs. What you would be comparing is:
2x 8-core Xeon ($1440 x 2) + 1x FirePro
vs
1x 12-core Xeon ($2k) + 2x FirePro
For CPU-bound tasks, the first one is faster but not as much as you'd think because the single CPU models are faster relative to the price. This is really making the point that having a second GPU is more value computationally than a CPU. While that's only true for things that use it, the things that need the power need to start using OpenCL.
There was a 10-core Ivy Bridge Xeon on the roadmap so it could be anywhere from 16 to 24 cores. Also, it is 2x FirePros in the spec-bumped Mac Pro unless you think that Apple would simply leave a bunch of lanes unused. What reason would there be not to have two x16 slots? None.
So it is:
2x 8+ core Xeon + 2x FirePro
vs
1x 12 core Xeon + 2x FirePro
If you make that last slot x8 then you can have 4 TB2 ports vs 6 TB ports. More importantly, if you didn't want two FirePros but two faster K20 cards (faster than the current FirePro S9000s anyway) you could do that with a cheaper GPU running in x8 mode on the third slot, or if you needed to accelerate x86 code vs OpenCL or CUDA code you could put Xeon Phis in there at full speed rather than two FirePros. The Phi is slower than both the K20 and S9000 but has its own advantages from a code perspective.
Quote:
Originally Posted by MacRonin
No second socket now, but who knows what Apple will decide when it comes shipping time…
I personally think a second socket would make sense; say I want to increase my storage space, but I still want the blazing speed of the internal PCIe SSD stick…
Well, if only a single socket, then I have to buy a more expensive stick, shuffle all my data off of the current one to an external drive, switch sticks & then shuffle all the data to the new stick… Then I am left with a still-expensive PCIe SSD stick with nothing to put it in… Wasteful…!
Two sockets make better sense for end-user convenience, and it is an upsell when one is perusing the Apple Store for a new BTO Mac Pro…
Might not have the lanes. The Micron P320h PCIe SSD is an x8 card. Figure each SSD slot is at least 1 TB2 port and maybe even 2 and you can see why there might only be one.
Edit: Now I'm wondering if the GPUs are running at x16...32 lanes for the GPUs, 6 lanes for the TB leaving only 2 lanes for the SSD and everything else? My math must be off...because that seems really tight. Running the GPUs at x8 gives you a lot more flexibility but I guess that might be why only 1 SSD slot.
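For what it's worth, the math above only feels tight because a single Xeon E5 exposes 40 PCIe 3.0 lanes from the CPU alone, with the chipset providing extra lanes for slower peripherals. A rough split, purely as a sketch and not a confirmed Apple layout:

```python
# Hypothetical lane budget for a single-socket Xeon E5, which exposes
# 40 PCIe 3.0 lanes from the CPU (the chipset provides additional
# lanes for slower peripherals). Not a confirmed Apple layout.
total_cpu_lanes = 40

gpu_lanes = 2 * 16        # both GPUs at x16
ssd_lanes = 4             # one PCIe SSD at x4
remaining = total_cpu_lanes - gpu_lanes - ssd_lanes
print(remaining)          # lanes left for a Thunderbolt controller;
                          # further TB controllers could hang off the chipset
```

On that budget both GPUs can run at x16 and there is still an x4 left over, which may be why a single SSD slot looked like the design point.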
It will be expensive, but pros can afford it. Same goes for going all external for expansion, the aftermarket will take care of that, it won't be cheap, but not a biggie for pros. Added value: clients coming to the studio will be stunned by the machine's design, an extra plus point for the pros.
For everyone else who can't afford it, there will be tower-shaped (a la Airport Extreme) Mac Mini.
Quote:
Also, it is 2x FirePros in the spec-bumped Mac Pro unless you think that Apple would simply leave a bunch of lanes unused. What reason would there be not to have two x16 slots? None.
It's not the lanes in that case but the TDP. The TDP of a single FirePro is 274W so two is 548W. The PCIe slot power allocation was 300W. They could have put in a bigger PSU but they'd also need two double-wide slots and the two power-hungry GPUs would be sitting side by side, which the old cooling system might not cope with easily. It doesn't matter what the MP could have been modified into, we are comparing what it was then and what it is now. If they had stuck with the old design, it wouldn't have been as good as this one. They certainly couldn't have had an intro like this:
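The power figures in that post tally up like this (the TDPs are as quoted above; the 100 W allowance for the rest of the system is my own rough assumption):

```python
# Rough power budget for a hypothetical dual-FirePro classic tower,
# using the 274 W TDP quoted above. The 100 W "rest of system"
# allowance is an assumption, not a measured figure.
firepro_tdp = 274          # watts per card (quoted figure)
cpu_tdp = 130              # typical high-end single-Xeon TDP
rest_of_system = 100       # board, RAM, drives, fans

total_watts = 2 * firepro_tdp + cpu_tdp + rest_of_system
print(total_watts)         # well beyond the old 300 W PCIe slot allocation
```

Nearly 780 W for the core components alone, against a 300 W slot allocation, is the crux of the thermal argument.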
Quote:
More importantly, if you didn't want two FirePros but two faster K20 cards (faster than the current FirePro S9000s anyway) you could do that with a cheaper GPU running in x8 mode on the third slot, or if you needed to accelerate x86 code vs OpenCL or CUDA code you could put Xeon Phis in there at full speed rather than two FirePros. The Phi is slower than both the K20 and S9000 but has its own advantages from a code perspective.
It can't be the S9000, they mentioned the GPUs have over 500GB/s memory bandwidth so it has to be the S10000 and custom versions, because normally they are 480GB/s, which is the fastest you can get:
http://fireuser.com/blog/comparing_the_amd_firepro_s10000_to_the_tesla_k10_k20_and_k20x/
This is just going to get back into the theoretical spec thing again. Very few people can afford the top end models and it really doesn't matter if there's a few percentage points difference between one model or the other. Two FirePros in Crossfire is going to offer amazing performance.
Quote:
Originally Posted by Marvin
It's not the lanes in that case but the TDP. The TDP of a single FirePro is 274W so two is 548W. The PCIe slot power allocation was 300W. They could have put in a bigger PSU but they'd also need two double-wide slots and the two power-hungry GPUs would be sitting side by side, which the old cooling system might not cope with easily. It doesn't matter what the MP could have been modified into, we are comparing what it was then and what it is now.
The same thermal and power requirements exist for both machines for the FirePros. Given that Dell Precision workstations regularly run with dual high end GPUs it is a huge stretch to state that the Mac Pro case would not have been able to handle the same thermally.
Quote:
It can't be the S9000, they mentioned the GPUs have over 500GB/s memory bandwidth so it has to be the S10000 and custom versions, because normally they are 480GB/s, which is the fastest you can get:
http://fireuser.com/blog/comparing_the_amd_firepro_s10000_to_the_tesla_k10_k20_and_k20x/
This is just going to get back into the theoretical spec thing again. Very few people can afford the top end models and it really doesn't matter if there's a few percentage points difference between one model or the other. Two FirePros in Crossfire is going to offer amazing performance.
Except that the very few people who can afford them are the same folks that buy Mac Pros. Whether two FirePros offer amazing performance or not is immaterial if you have CUDA code or X86 code designed for the Phi.
Given that the machine isn't shipping and very little is known about them anyway, it's all about theoretical specs. What IS certain is that there aren't dual CPUs like before, there aren't 8 RAM slots, and TB2 is much slower than a PCIe slot. There is nothing the new Mac Pro can do that an updated old Mac Pro could not also do except fit in a tighter space or win design awards (although it did in its day). The opposite is not true.
Like I said, I got what I wanted...a Mac Pro Mini. It is irksome that folks can't even admit that an updated Mac Pro classic would not have been faster and more expandable.
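The "TB2 is much slower than a PCIe slot" point is easy to quantify from the published link rates (per direction, before protocol overhead, so both figures are upper bounds):

```python
# Raw link-rate comparison: Thunderbolt 2 vs a PCIe 3.0 x16 slot.
# TB2 runs at 20 Gbit/s per direction; PCIe 3.0 delivers about
# 0.985 GB/s per lane after 128b/130b encoding. Real-world
# throughput is lower for both.
tb2_gbytes = 20 / 8                    # 2.5 GB/s
pcie3_x16 = 16 * 0.985                 # ~15.8 GB/s

ratio = pcie3_x16 / tb2_gbytes
print(f"PCIe 3.0 x16 is about {ratio:.1f}x faster than TB2")
```

A roughly 6x gap in raw bandwidth, though few expansion cards other than GPUs come close to saturating x16.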
At last! I thought Apple were finished in the pro market. It may not suit everyone, but I'm looking to mate this online cloud-driven puppy with a 22" Wacom HD touch and a 40" 4K screen with about 4Tbt NAS-SSD, with Red Wine-driven B&W PM1s and Audeze for the personal touch.
So, short of seeing Red (which is not in my bag)… exactly what sound, video, art or photography production won't I be able to professionally create/edit in the foreseeable future?
Quote:
There is nothing the new Mac Pro can do that an updated old Mac Pro could not also do except fit in a tighter space or win design awards (although it did in its day). The opposite is not true.
The space saving is quite dramatic. I don't think they quite got that across in the presentation:
The old MP might not have been able to easily mix slots and Thunderbolt. The old MP could not run two FirePros internally. You can say similar things about the rMBP and the cMBP. You can technically do more with the cMBP like put your own RAM in and HDD and you have an optical drive. It doesn't mean much though if the unique things the old one did aren't that important.
Quote:
It is irksome that folks can't even admit that an updated Mac Pro classic would not have been faster and more expandable.
I freely admit that it wouldn't have been faster and more expandable.
It could have been built faster but it wouldn't be if it had the same restrictions the design has had for the last 10 years. Also, when you say 'more expandable', you mean you could use 3 devices at a time instead of 36. I'd say 36 is more than 3. The slots are faster but I haven't seen any real world scenarios where it's a measured problem.
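The 36 in that comparison comes from daisy-chaining: the Thunderbolt spec allows up to 6 devices per chain, and the new machine has 6 ports:

```python
# Where "36 devices" comes from: Thunderbolt allows up to 6
# daisy-chained devices per port, and the new Mac Pro has 6 ports.
ports = 6
devices_per_chain = 6

max_devices = ports * devices_per_chain
print(max_devices)  # vs the handful of free slots/bays in the old tower
```

Each chain shares one port's bandwidth, of course, so 36 is a connectivity ceiling rather than 36 full-speed devices.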
*low whistle* Is that really to scale right there? Wish they would have a comparison shot on the website proper rather than that one frame during the keynote.
Just have the new Mac Pro in front and the bottom of the old one fading in the darkness behind it for a sense of scale.
Quote:
Originally Posted by Marvin
It's not backwards because it's not a good idea to have 10TB+ internally. If you need to move to a new machine, you can't migrate the data easily. You also can't set up a proper hardware RAID internally very easily and you can't hot-swap the drives.
If you need to upgrade the Mac Pro, it is as simple as unplugging the mass storage, migrating the smaller internal drive at 1.25GB/s and plugging the storage into the new machine. Some people have two computers, so if you needed to copy something from the storage, again you just unplug it from one and plug it into the other.
Do you notice that you keep doing the same things with everything Apple related? You start with the conclusion that you don't like anything besides what you're accustomed to, whether it's internal storage, Eizo/NEC displays, PCI slots, networking or whatever else is a part of the standard tower format + professional display and then try to dismiss anything that differs from it under a cloud of doubt by saying it's 'less than ideal' or 'might have problems' or 'seems to have negative reviews'. People are using these solutions with no problems at all and have been for a while.
If you want to pick your own drives, then pick a RAID box (USB 3 or Thunderbolt) and put them in it. The Pegasus gets much better speeds than you would with internal drives:
http://www.anandtech.com/show/4489/promise-pegasus-r6-mac-thunderbolt-review/6
I can see how you would think that about my opinions. In the most basic sense, I tend to favor integrated solutions unless something can't be done well. My long-term results have been fairly poor with Apple displays on a number of things (they perform poorly at lower brightness settings, uniformity issues, more than average backlight bleed, used to be way too shiny even in controlled lighting, and the only method of trying to keep some consistency of output is the old matrix profile method, which has its own problems). I'm not so attached to PCI slots. I think most cards available on Macs have been poorly supported in recent years, so I don't think they offer a great solution for most people. At the same time I don't see a $600 Mac edition of a $400 PC card coupled with an $800 case as a good solution. The tests are pretty mixed. Some show great performance. Others show the higher-end cards choking. It's all over the place at the moment, although no card currently demands a full x16 PCIe 3. I think adding distance to the GPU tends to be a bad idea, but if they're going to do that, it would make more sense if it came from a company that tested the entire solution with appropriate drivers and a case with appropriate power and space.
Basically I'm only against integrating components when the quality isn't there yet. Otherwise it's the direction computing has always taken. I mentioned Wizard's solution as far more forward-thinking than what Apple did. It was probably just too expensive at this point in time. I would have liked to see them use the available SATA connections, perhaps 4-6 2.5" bays. As for the R6, I wasn't concerned about performance there. Promise makes a large range of hardware, although I've never used one of their SANs. I can say that RAID 5 support is advertised on a lot of cheap boxes where it's a bad idea. Using it without enterprise drives (which have shorter error-recovery timings to prevent drives from timing out trying to recover data from bad sectors) and an ECC cache doesn't lead to a stable system. Of course it raises doubts when I run into points like that.
Also I like AnandTech, but some of their testing methods are severely flawed. Their display tests made no sense at all, and not because Apple wasn't ranked on top. It was because their criteria lacked context when they were using measurements that make no sense out of context.
Edit: fwiw I don't know whether I will own one. There isn't enough information yet. It's likely enough.
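The earlier caution about RAID 5 on consumer drives can be made concrete. Assuming the commonly quoted consumer-drive datasheet spec of one unrecoverable read error per 10^14 bits (a rating, not a measurement), rebuilding a degraded 4-drive array of 4 TB disks reads enough data that hitting an error is more likely than not:

```python
# Probability of hitting at least one unrecoverable read error (URE)
# while rebuilding a degraded 4-drive RAID 5 of 4 TB consumer disks.
# Assumes the commonly quoted spec of 1 URE per 1e14 bits; enterprise
# drives are rated an order of magnitude better and also time out
# faster (TLER), which is the error-recovery point made above.
ure_per_bit = 1e-14
surviving_drives = 3
bits_per_drive = 4e12 * 8          # 4 TB in bits

bits_read = surviving_drives * bits_per_drive
p_error = 1 - (1 - ure_per_bit) ** bits_read
print(f"{p_error:.0%} chance of at least one URE during rebuild")
```

Around a 60% chance of a read error mid-rebuild is why cheap boxes advertising RAID 5 with desktop drives deserve the skepticism.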
Quote:
This Mac Pro makes the right compromises going forward:
- SSDs will scale up in size over time to as much as 20TB or more
- it focuses on GPUs and OpenCL for compute power; it doesn't matter if software isn't ready yet, the software that uses it will outperform the software that doesn't and they'll get the sale
- focusing on GPUs allows them to offer more compelling upgrades year after year even when Intel is lagging behind
- the storage default means that people buying these machines get the best performance without thinking about it, and bulk storage can be handled by the people who do it best, like the server guys, e.g. HP:
Hardware often precedes software if developers aren't exactly sure what will show up when. If they're responsive once something shows up, that is a good thing. Workstation GPUs don't change that fast though. Assuming static price points, it's generally a staggered 2-year cycle on workstation variants. They might release high-end performance options later than the lower ones, but you don't see the same ones constantly turned over. They certainly don't rebrand and just clock-bump. Intel tends to be faster in that regard, although I wouldn't expect anything to change beyond this until at least the first half of 2015. I completely disagree that they will bump things outside of CPU cycles. That would be a first, and you're talking about two lines of slowly moving parts coupled with a set of buyers that are likely not the type to line up outside stores. I don't think you'll see a repeat of that last thing from Intel. Intel only released 2/3 of a Westmere line, then pushed back Sandy Bridge-E, rumored to be due to a reverberation of the SATA bugs that caused the initial recall and required a further stepping.
The old one had two CPUs so 130W went to one of them. You can't put two of these FirePros in a 2012 Mac Pro. The Dell Precision has a 1300W PSU.
It depends upon the FirePro card. It is hard to tell exactly which FirePro Apple will be using, but it is likely to be between 200 and 300 watts per card. I wouldn't be surprised to find Apple using unreleased chips. In fact I don't think it is much of a stretch to say that Apple and AMD may very well have a custom version of the FirePro to support the faster TB ports. In any event a budget of 750 watts for the processors isn't unreasonable; add in another 100 watts or so for the rest of the machine and you have a ballpark figure for the system power supply. This is a substantial machine.
Right but that's up to the developer. They should be using OpenCL.
Any developer targeting anything else just isn't with it as far as evaluating the direction of the industry.
Quote:
The space saving is quite dramatic. I don't think they quite got that across in the presentation:
This machine inspires all sorts of ideas as far as companion hardware goes.
Quote:
The old MP might not have been able to easily mix slots and Thunderbolt. The old MP could not run two FirePros internally. You can say similar things about the rMBP and the cMBP. You can technically do more with the cMBP like put your own RAM in and HDD and you have an optical drive. It doesn't mean much though if the unique things the old one did aren't that important.
More importantly, this machine will run circles around any laptop, which is what most people want.
Quote:
I freely admit that it wouldn't have been faster and more expandable.
Even that isn't a given because there is no assurance that Apple would have stayed with dual socket solutions.
Quote:
It could have been built faster but it wouldn't be if it had the same restrictions the design has had for the last 10 years. Also, when you say 'more expandable', you mean you could use 3 devices at a time instead of 36. I'd say 36 is more than 3. The slots are faster but I haven't seen any real world scenarios where it's a measured problem.
I'm not going to say this new Mac Pro is ideal expansion-wise, but it really requires one to do a little mental refactoring. If you can convince yourself that disk arrays don't belong inside your computational unit then you are well on your way to accepting the new Mac Pro. All the whining about existing hardware is nonsense, because such hardware has been constantly outmoded by technological advancement. I have a cellar full of old analog cards, digital I/O cards and such that can't be plugged into modern computers, be they Dells or this new Mac.
This image doesn't?
No, one is.
RAID them. 8-)
Quote:
Originally Posted by Tallest Skil
This image doesn't?
No, one is.
RAID them.
That would be the PhotoChopped Apple website pic; I am talking about the pic from the Mac Pro on display at WWDC MMXIII…
You know, the one I clearly linked…
http://cdn.thenextweb.com/wp-content/blogs.dir/1/files/2013/06/IMG_8203.jpg
But here it is again…
Quote:
Originally Posted by rob53
We will have to wait to hear if Mavericks supports CPU sharing across Thunderbolt 2. I don't believe TB-1 does. If it does, then one rack of 36 MPs could conceivably run at 1/4 petaflop. That's cooking!
+1 Beautiful. Rack 'em up.
Quote:
For everyone else who can't afford it, there will be tower-shaped (a la Airport Extreme) Mac Mini.
Why would they do that?
Quote:
*low whistle* Is that really to scale right there? Wish they would have a comparison shot on the website proper rather than that one frame during the keynote.
Just have the new Mac Pro in front and the bottom of the old one fading in the darkness behind it for a sense of scale.
Yup, that's to scale. The Cube would be a touch shorter (9.1" vs 9.9") but the Cube photo there was at a slight angle and it wasn't that far off.
You'd get 6 of the new MPs in the same space as the old Mac Pro. Phil said it was 1/8th the size but that would be measuring the volume difference.
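Using the published dimensions (old tower roughly 20.1 × 8.1 × 18.7 inches; new machine 9.9 inches tall and 6.6 inches across), the volume ratio really is in that ballpark:

```python
import math

# Volume comparison between the old Mac Pro tower and the new
# cylinder, using the published external dimensions in inches.
old_volume = 20.1 * 8.1 * 18.7               # rectangular tower
new_volume = math.pi * (6.6 / 2) ** 2 * 9.9  # cylinder

ratio = old_volume / new_volume
print(f"old/new volume ratio: {ratio:.1f}")
```

That comes out near 9:1, so Phil's "1/8th the size" line is, if anything, slightly conservative by raw volume.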
Quote:
Originally Posted by Marvin
It's not backwards because it's not a good idea to have 10TB+ internally. If you need to move to a new machine, you can't migrate the data easily. You also can't setup a proper hardware RAID internally very easily and you can't hot-swap the drives.
If you need to upgrade the Mac Pro, this is as simple as unplugging the mass storage, migrate the smaller internal at 1.25GB/s and plug the storage into the new machine. Some people have two computers so if you needed to copy something from the storage, again you just unplug it from one and plug it into the other.
Do you notice that you keep doing the same things with everything Apple related? You start with the conclusion that you don't like anything besides what you're accustomed to, whether it's internal storage, Eizo/NEC displays, PCI slots, networking or whatever else is a part of the standard tower format + professional display and then try to dismiss anything that differs from it under a cloud of doubt by saying it's 'less than ideal' or 'might have problems' or 'seems to have negative reviews'. People are using these solutions with no problems at all and have been for a while.
If you want to pick your own drives, then pick a RAID box (USB 3 or Thunderbolt) and put them in it. The Pegasus gets much better speeds than you would with internal drives:
http://www.anandtech.com/show/4489/promise-pegasus-r6-mac-thunderbolt-review/6
I can see how you would think that about my opinions. In the most basic sense, I tend to favor integrated solutions unless something can't be done well. My long term results have been fairly poor with Apple displays on a number of things (perform poorly at lower brightness settings, uniformity issues, more than average backlight bleed, used to be way too shiny even in controlled lighting, only method of trying to keep some consistency to output is the old matrix profile method which has its own problems). I'm not so much about PCI slots. I think most cards available on Macs have been poorly supported in recent years, so I don't think they offer a great solution for most people. At the same time I don't favor a $600 mac edition of a $400 PC card coupled with an $800 case as a good solution. The tests are pretty mixed. Some show great performance. Others show the higher end cards choking. It's all over the place at the moment, although no card currently demands a full x16 PCIe 3. I think adding distance to the gpu tends to be a bad idea, but if they're going to do that, it would make more sense if it came from a company that tested the entire solution with appropriate drivers and a case with appropriate power and space.
Basically I'm only against integrating components when the quality isn't there yet; otherwise it's the direction computing has always taken. I mentioned Wizard's solution as far more forward-thinking than what Apple did; it was probably just too expensive at this point in time. I would have liked to see them use the available SATA connections, perhaps with 4-6 2.5" bays. As for the R6, I wasn't concerned about performance there. Promise makes a large range of hardware, although I've never used one of their SANs. I will say that RAID 5 support is advertised on a lot of cheap boxes where it's a bad idea: using it without enterprise drives (which have shorter error-recovery timings so a drive doesn't get dropped from the array while retrying a bad sector) and an ECC-protected cache doesn't lead to a stable system. Of course it raises doubts when I run into points like that.
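To make the RAID 5 point concrete, here is a toy byte-level sketch of how RAID 5 parity works (real arrays stripe whole blocks across drives; the names and data here are hypothetical, just for illustration). It shows why a rebuild depends on every remaining block being readable, which is why consumer drives that stall for minutes retrying a bad sector, instead of failing fast like enterprise drives, are a problem in these boxes:

```python
# Toy RAID 5 parity illustration (hypothetical example, byte-level XOR only).
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three data "drives" plus one parity "drive" for a single stripe
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Drive 1 fails: rebuild its block from the surviving drives plus parity
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]  # recovery works while every other block is readable

# If a second block in the stripe hits an unrecoverable read error during
# the rebuild, there is no longer enough information to reconstruct either
# missing block -- the stripe is simply lost.
```

The XOR relationship is symmetric, so any single missing block can be recomputed from the rest; the moment a second block becomes unreadable mid-rebuild, the math no longer closes, which is the failure mode cheap RAID 5 boxes with desktop drives invite.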
Also, I like AnandTech, but some of their testing methods are severely flawed. Their display tests made no sense at all, and not because Apple wasn't ranked on top.
Edit: FWIW, I don't know yet whether I'll own one. There isn't enough information, but it's likely enough.
Quote:
This Mac Pro makes the right compromises going forward:
- SSDs will scale up in size over time to as much as 20TB or more
- it focuses on GPUs and OpenCL for compute power; it doesn't matter if software isn't ready yet, the software that uses it will outperform the software that doesn't and they'll get the sale
- focusing on GPUs allows them to offer more compelling upgrades year after year even when Intel is lagging behind
- the storage default means that people buying these machines get the best performance without thinking about it, and bulk storage can be handled by the people who do it best, like the server guys, e.g. HP:
Hardware often precedes software when developers aren't sure exactly what will show up, or when. If vendors are responsive once something does show up, that's a good thing. Workstation GPUs don't change that fast, though: assuming static price points, workstation variants are generally on a staggered two-year cycle. The high-end performance options may arrive later than the lower ones, but you don't see the same parts constantly turned over, and they certainly don't just rebrand and bump clocks. Intel tends to move faster in that regard, although I wouldn't expect anything to change beyond this until at least the first half of 2015. I completely disagree that they will bump things outside of CPU cycles; that would be a first, and you're talking about two lines of slowly moving parts coupled with a set of buyers who are unlikely to line up outside stores. I also don't think you'll see a repeat of that last episode from Intel: they only released two-thirds of a Westmere line, then pushed back Sandy Bridge-E, rumored to be an after-effect of the SATA bugs that caused the initial recall and required a further stepping.
Quote:
Originally Posted by mervynyan
looks like my plug hole
For someone who only posts once a year, it wasn't worth the wait.
I'm not going to say this new Mac Pro is ideal expansion-wise, but it really requires a little mental refactoring. If you can convince yourself that disk arrays don't belong inside your computational unit, then you are well on your way to accepting the new Mac Pro. All the whining about existing hardware is nonsense, because such hardware has constantly been outmoded by technological advancement. I have a cellar full of old analog cards, digital I/O cards, and the like that can't be plugged into modern computers, be they Dells or this new Mac.