Teardown of Apple's new Mac Pro reveals socketed, removable Intel CPU - Page 5

post #161 of 281
Quote:
Originally Posted by mknopp View Post
 

 

That is exactly what I was wondering as well. Almost all of the scientific computing work that I have done in the last two decades has been done on clusters. I still remember my first time using a Beowulf setup which cobbled together a bunch of older PC motherboards. Then again, I haven't used Mathematica/Maple for anything to this level. We pretty much make our own programs to reduce as much overhead as possible.

 

Interesting.

 

Certainly for problems with any sort of large-grain parallelism the cluster way is more cost-effective (especially when the problem is compute- and not communication-bound, which can often be managed with problem organization). Unfortunately, in algebra-type things, a single computation can blow up and use a lot of RAM. Also, Mathematica et al. are not the most memory-efficient programs out there, but they do have a lot of stuff implemented, and life is short, so it is cheaper to buy 256GB of RAM than to spend two years reimplementing something.

post #162 of 281
Quote:
Originally Posted by hmm View Post

That part is highly debatable. Installing a ROC or a simple host card to a box with an embedded controller isn't really difficult, and you may have a more stable range of options.

The operative word is 'easier'. Plugging in a cable is easier than anything else.
Quote:
Originally Posted by hmm View Post

Given the way they're outfitted, I don't see why they chose to claim RAID 5 support. It is weird to do that without the shorter firmware timings.

You don't have to use RAID 5, and RAID isn't a backup solution.
Quote:
Originally Posted by Cpsro 
I don't need or want GPUs (at least at this time), hence the lack of GPUs in my custom configured linux system which has twice the performance (where I need it) compared to the least-expensive 12-core Mac Pro and for about the same price.

It's also 65% of the price of an HP machine for the same performance. What's the point you are making, that you can build a computer far cheaper than Apple, Dell, HP, Lenovo? I'm sure this has been well known for years and it has nothing to do with Apple's design decisions. If they offered dual 12-cores, it would cost above $10k so just like with HP, you'd be able to build one cheaper. Apple wants 30-40% gross margins, component retailers can't get away with those margins, nor do they care about the quality of the parts they sell.
Quote:
Originally Posted by v5v 
I don't think the current design has nearly as much to do with market research as with what best satisfies the demands of Apple's software product, specifically FCPX. Just look at the test results so far: with Adobe Premiere, meh. With FCPX, wow! Coincidence?

The only coincidence is that Apple happens to make both FCPX and the Mac Pro so they had time to optimize for it. As mentioned earlier, Adobe uses a whitelist for supported GPUs. The AMD R9 290 came out late October/early November but Adobe only added it just over a week ago:

http://blogs.adobe.com/premierepro/2013/12/premiere-pro-cc-update-7-2-1.html

"The AMD Radeon R9 290 Series has been added to the OpenCL supported card list"

When a card isn't added to the list, it drops back to using the CPU. The Adobe apps will benefit from the GPUs just like FCPX when support is added for the Mac Pro GPUs.
Quote:
Originally Posted by v5v 
I don't think anyone besides CS is saying that Apple should have gone with 2xCPU/1xGPU *instead* of 1xCPU/2xGPU, but could very easily have offered such configurations *as well.*

They'd have had to stock two motherboards, and the dual-socket one would only be useful for anything above 12 cores. This means stocking a whole new motherboard design for hardly any buyers.

The GPU expense keeps coming up but the highest option is $1000 or $500 for the extra GPU (the D300 far less than that). An extra 12-core CPU would be $3500, along with the added expense of a dual-socket motherboard. It's not like they are anywhere near the same price, nor would it be a simple swap of one for the other.
Quote:
Originally Posted by ascii 
SATA 3 interfaces max out at about 550 MB/sec. I have the latest Samsung EVO SSD in my PC and *wish* it was using PCI connectivity.

Plus, they don't always achieve that theoretical maximum:

http://www.anandtech.com/bench/SSD/262

Most SATA drives there are below 400MB/s. They will similarly move to PCIe:

http://www.tomshardware.com/reviews/samsung-global-ssd-summit-2013,3570-2.html

"In an effort to standardize solid-state drive access across PCI Express, 80 companies (led in part by Samsung), created the Non-Volatile Memory Host Controller Interface Specification."

Led by Samsung after Apple already does it. PC manufacturers need to use an interoperable standard though; Apple doesn't, as they make the whole product = leverage.
Quote:
Originally Posted by v5v 
Do speeds like that really happen in real life, or do other factors limit the rate at which the system can read and write? Would this configuration actually BE faster than a SATA3 interface or only have a "theoretical" advantage?

For sequential reads/writes you'd get those speeds, e.g. copying or exporting 100GB of video data. Random reads/writes are a lot lower, so duplicating a folder of 100,000 photos wouldn't go that fast.
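As a rough illustration of why the two cases differ (the 30MB/s random-I/O figure below is an assumption for small-file workloads, not a benchmark of any particular drive):

# Rough copy-time comparison: sequential vs small-file (random) transfers.
def copy_time_s(total_gb, mb_per_s):
    return total_gb * 1000 / mb_per_s

print(copy_time_s(100, 900))   # ~111s for 100GB sequential at ~900MB/s
print(copy_time_s(100, 30))    # ~56min if small random writes average 30MB/s
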
Quote:
Originally Posted by marubeni 
From what I understand there is a major bit rot problem on SSDs (so every bit can be written a finite, and relatively small, number of times)

Yes but the manufacturers use wear-levelling to keep writes more even across the drive. MTBF values are in line with HDDs.
post #163 of 281
Quote:
Originally Posted by marubeni View Post
 
<...>

Certainly for problems with any sort of large-grain parallelism the cluster way is more cost-effective (especially when the problem is compute- and not communication-bound, which can often be managed with problem organization). Unfortunately, in algebra-type things, a single computation can blow up and use a lot of RAM. Also, Mathematica et al. are not the most memory-efficient programs out there, but they do have a lot of stuff implemented, and life is short, so it is cheaper to buy 256GB of RAM than to spend two years reimplementing something.

 

Pretty remarkable that analytic problems can expand that far, but that does explain the memory requirements. It's interesting that even in the large numerical simulation domain of massively parallel systems, modern processor architecture is starting to be an issue, since the processors do not have fast enough access to sufficient memory. That's not surprising, as they were not designed for this kind of problem.

post #164 of 281
Quote:
Originally Posted by Marvin View Post


<...>
Yes but the manufacturers use wear-levelling to keep writes more even across the drive. MTBF values are in line with HDDs.

 

Right, but my question was whether this peculiarity of the technology makes it unusable for swapping (where you would be rewriting bits A LOT, as opposed to keeping data on the drive, where a bit is left alone for many minutes at the shortest).

post #165 of 281
Quote:
Originally Posted by marubeni View Post
 

 

The biggest thing in favor of the Titan for me is that the Double Precision is not crippled, as it seems to be on every other nVidia consumer card. Since I am not a big fan of single precision, it is either that or the Tesla for me...

I was unaware of that. If I wanted to use one for computation, I would have gone with a Titan anyway due to VRAM. It's not an issue of maxing it out every time. There is no real system of virtual memory or swap, so everything must fit within the framebuffer. It's not worth taking the risk of such a strict hardware limitation if there is the possibility of reaching it.

 

Quote:
Originally Posted by Marvin View Post


The operative word is 'easier'. Plugging in a cable is easier than anything else.
 

That is assuming no other assembly and stable drivers. I would argue that stability makes things easier, and yeah, some terrible cards existed for the old Mac towers.

Quote:
You don't have to use Raid 5 and Raid isn't a backup solution.

 

You use my own lines against me, sir? I am usually the one that has to inform others of that. It's still a bad idea to go with something that is inherently unstable due to potential write-hole issues.

post #166 of 281
Quote:
Originally Posted by marubeni View Post
 

They would look until they figured out the memory limitation (the natural assumption would be that since OS X is a Unix variant, it would have the same memory limitations as Linux, that is, none).

 

Memory limitations are determined by the motherboard manufacturer and CPU manufacturer.

 

If Apple came out with a quad-Xeon motherboard, each CPU supporting 64GB/128GB max, then they could conceivably manage a system of 256-512GB of DDR3 quad-channel ECC RAM.

 

Take this Intel Xeon E5-2697 v2 – 12 Core Ivy Bridge-EP

 

  • Processor: 1x Intel Xeon E5-2697
  • Motherboard: Dual LGA2011 (C602 based) with BIOS update
  • Memory: 4x 8GB Registered ECC DDR3 1600MHz DIMMs

 

If it had 8 slots you're looking at 64GB, or if someone made 16GB DIMMs you'd max out at 128GB with 8 slots and 64GB with 4.

 

Seem familiar? It's not the damn OS limiting the memory configurations, but the physical hardware.

 

If Apple made a 3U/6U rackmount configuration with quad Xeons and 8x16GB DIMMs, OS X would run just fine.

 

Apple has zero interest in competing with Intel, HP and AMD for this space, not to mention wasting the resources in building a fully n-tier Enterprise Computing Division.

post #167 of 281
Quote:
Originally Posted by Marvin View Post

Raid isn't a backup solution.

 

It isn't? Why not? It's redundant and fault tolerant. A thief or fire can cause 100% data loss, but otherwise I'm safe, right?

post #168 of 281
Quote:
Originally Posted by Marvin View Post


<...>

 

 

Quote:
 "The AMD Radeon R9 290 Series has been added to the OpenCL supported card list"

 

Awesome OpenCL 1.2/2.0 beast and OpenGL 4.x juggernaut for $499, giving you a buttload of power, 4GB GDDR5, and a 512-bit interface; paired with an AMD FX-8350 it's a sweet ride.

 

Seeing as AMD provided the OpenCL acceleration for Adobe, you knew their new R9/R7 series would be added.

post #169 of 281
Quote:
Originally Posted by v5v View Post
 

 

It isn't? Why not? It's redundant and fault tolerant. A thief or fire can cause 100% data loss, but otherwise I'm safe, right?

That second point is highly debatable. To make sure I am not misreading your words: I am positive he meant that the redundancy provided by anything above RAID 0 does not constitute a backup. You could back up one RAID with another RAID; it's just that a RAID isn't both storage and backup. There are a few reasons. One is the potential for corruption. In a mirrored scenario, anything written to one disk is written to the other. RAID 5 can write bad data if a disk fails to respond, which I mentioned above. RAIDs can experience controller problems, potentially rendering the data inaccessible. When a drive does fail, if there have been any past problems, including any kind of bit flipping or minor corruption, you'll crash on rebuild. That is why the better RAID controllers use ECC memory. It's much more destructive than a single flipped bit would be in other scenarios. Anyway, there are other problems, and I could go on. Perhaps you have counter-reasoning? Mine is simply that too much can go wrong with a given RAID, and disaster recovery on a RAID system tends to be cost-prohibitive even compared to normal data extraction.

post #170 of 281
Quote:
Originally Posted by marubeni View Post

Right, but my question was whether this peculiarity of the technology makes it unusable for swapping (where you would be rewriting bits A LOT, as opposed to keeping data on the drive, where a bit is left alone for many minutes at the shortest).

Write endurance is listed here:

http://www.tomshardware.com/reviews/ssd-reliability-failure-rate,2923.html

It says writing 10GB/day would last over 18 years with modern SSDs. SSDs do need to use TRIM or idle periods to stay healthy but they should be fine for swapping as long as it's not TBs per day. Disk swap isn't used at all if you have enough RAM though so it would just be for scratch disks but even then RAM is used first and it would be GBs of data.
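To put numbers on that, here's a back-of-the-envelope endurance model in Python. The P/E cycle counts and write amplification below are illustrative assumptions, not specs for any particular drive:

# Wear-levelling spreads writes evenly, so total writable data is roughly
# capacity * rated P/E cycles, reduced by write amplification.
def endurance_years(capacity_gb, pe_cycles, gb_per_day, write_amp=2.0):
    total_writable_gb = capacity_gb * pe_cycles / write_amp
    return total_writable_gb / gb_per_day / 365.0

print(endurance_years(256, 3000, 10))  # MLC-class cycle rating: ~105 years
print(endurance_years(256, 1000, 10))  # TLC-class cycle rating: ~35 years

Heavier write amplification or smaller drives pull those figures down toward the 18-year estimate cited above.
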
Quote:
Originally Posted by hmm 
It's still a bad idea to go with something that is inherently unstable due to potential write hole issues.

Every RAID 5 setup has that issue. If you have a UPS so the power can't shut off during writes, it's almost never going to be a concern. Pegasus drives support RAID 6 too. You have said many times that you'd prefer internal RAID where you choose your own drives but it's mainly because you don't like Thunderbolt. Bootable internal RAID means your whole machine is offline if you have to rebuild it:

http://forums.macrumors.com/showthread.php?t=1073435

Having a single SSD internally means that no matter what happens to your mass storage, your workstation is still online. It's better that mass storage is outside and any faults are dealt with by storage companies.
Quote:
Originally Posted by v5v 
It isn't? Why not? It's redundant and fault tolerant. A thief or fire can cause 100% data loss, but otherwise I'm safe, right?

As mentioned above, data corruption is copied in a RAID. This happened with Gmail. They run their servers on RAID drives, but a fault meant they lost a whole bunch of emails, and even their backup RAID didn't have them, so they had to pull the data back off their regular LTO tape backups:

http://www.theregister.co.uk/2013/01/28/google_oracle/

Consumers can use tape drives too, but they are expensive and only have SAS/SCSI connectors, so you'd need a Thunderbolt-to-SAS adaptor, which for some reason costs nearly $900 and actually downgrades 10/20Gbps Thunderbolt to 6Gbps SAS. You'd think HP or some other company would make tape drives for consumer use with a USB 3 or Thunderbolt connector. There's a Japanese company making one, but I don't know if it's available to buy yet:

http://www.storagenewsletter.com/rubriques/tapes/unitex-lt50-usb/
http://www.unitex.co.jp/products/cmtmt/lt50lt40usb.html

The SAS drives are around $3k new, but maybe they could have a $999 model, and it could just be LTO4/LTO5 with 800GB/1.5TB capacity tapes. As long as it writes over 200MB/s, that would do fine. Once you are past the initial cost of the drive, the tape cost is under $30 per TB, allowing you to keep archiving forever, and the RAID would be your working drive. Back up the changes on the RAID to tape either weekly or daily and that setup should work fine for decades. Tapes last about 30 years.
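The arithmetic behind that, using the assumed figures above ($999 hypothetical drive, $30/TB tapes, 200MB/s sustained writes):

# Time to fill a tape and cost of a starter archive (assumed prices only).
def hours_to_fill(tape_tb, mb_per_s=200.0):
    return tape_tb * 1e6 / mb_per_s / 3600.0

print(hours_to_fill(1.5))       # ~2.1 hours per 1.5TB LTO5 tape
print(999 + 10 * 1.5 * 30)      # drive plus ten 1.5TB tapes: $1449
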
post #171 of 281

 

Quote:
Originally Posted by Marvin View Post




Every RAID 5 setup has that issue. If you have a UPS so the power can't shut off during writes, it's almost never going to be a concern. Pegasus drives support RAID 6 too. You have said many times that you'd prefer internal RAID where you choose your own drives but it's mainly because you don't like Thunderbolt. Bootable internal RAID means your whole machine is offline if you have to rebuild it:
 

 

That is true. I pointed it out specifically in relation to the cheaper ones, due to smaller caches, sometimes lacking ECC RAM, and cheaper overall hardware. The disks used were the other issue that I mentioned. This could be avoided with the diskless configurations available on the new Thunderbolt boxes, assuming they are available in the desired size. Regarding the disks, I was referring to error recovery timings. The name I used before was firmware timings, which is basically what sets them. Shorter ones are used on the enterprise-grade drives in spite of a lot of fundamental hardware similarity. It does at least prevent issues with timeouts in part of an array. I wouldn't ever use a RAID without UPS backup. That's just crazy talk :wow:.

post #172 of 281
Quote:
Originally Posted by hmm View Post


That is true. I pointed it out specifically in relation to the cheaper ones, due to smaller caches, sometimes lacking ECC RAM, and cheaper overall hardware. The disks used were the other issue that I mentioned. This could be avoided with the diskless configurations available on the new Thunderbolt boxes, assuming they are available in the desired size. Regarding the disks, I was referring to error recovery timings. The name I used before was firmware timings, which is basically what sets them. Shorter ones are used on the enterprise-grade drives in spite of a lot of fundamental hardware similarity. It does at least prevent issues with timeouts in part of an array. I wouldn't ever use a RAID without UPS backup. That's just crazy talk :wow:.

You were the first person to inform me about this. How much time are we talking about to let a RAID finish its write? Milliseconds? If this is such an issue, why wouldn't they simply include a small Li-ion battery in consumer RAIDs, specifically to allow the system to shut down correctly in case AC power is lost? Honestly, I don't want a large UPS just for my RAID, and I certainly don't want to start doing tape backups either. I bought it with the intention that my RAID10 would offer me many years of redundant storage.

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

 

Goodbyeee jragosta :: http://forums.appleinsider.com/t/160864/jragosta-joseph-michael-ragosta

Reply

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

 

Goodbyeee jragosta :: http://forums.appleinsider.com/t/160864/jragosta-joseph-michael-ragosta

Reply
post #173 of 281
Quote:
Originally Posted by Marvin View Post


Write endurance is listed here:

http://www.tomshardware.com/reviews/ssd-reliability-failure-rate,2923.html

It says writing 10GB/day would last over 18 years with modern SSDs. SSDs do need to use TRIM or idle periods to stay healthy but they should be fine for swapping as long as it's not TBs per day. Disk swap isn't used at all if you have enough RAM though so it would just be for scratch disks but even then RAM is used first and it would be GBs of data.
<...>

Well, about the swapping, that is useful info, but you somewhat misunderstood my question: back in the old days (of the VAX) we pretended that we had an infinite amount of RAM while in reality having a very finite amount. This allowed us to write clean programs, while also allowing us to die a horrible death when we started thrashing, since disks in those days were MUCH MUCH MUCH slower than RAM (for example, seek time was notionally around 15 milliseconds, but really much worse). Now, disks (SSDs), while still slower than RAM, are not quite as much slower, and so the infinite memory model might want to make a comeback: while you have 64GB of physical RAM (as in the Mac Pro), we might pretend that we have 512GB (because we have a one terabyte SSD and we allocated a large enough swap partition). If we use this heavily, your 10GB/day figure pales into insignificance, and might indicate that the useful lifetime of this SSD will be measured in months. So, pretty bad.

post #174 of 281
Quote:
Originally Posted by SolipsismX View Post


You were the first person to inform me about this. How much time are we talking about to let a RAID finish its write? Milliseconds? If this is such an issue, why wouldn't they simply include a small Li-ion battery in consumer RAIDs, specifically to allow the system to shut down correctly in case AC power is lost? Honestly, I don't want a large UPS just for my RAID, and I certainly don't want to start doing tape backups either. I bought it with the intention that my RAID10 would offer me many years of redundant storage.

I wasn't trying to scare you. It's important to note that the write hole issue doesn't apply to RAID10, so don't worry about that specific one. I merely cautioned against using something that splits off parity, such as RAID 3, 5, or 6, without robust hardware, due to potential headaches. I'm probably not qualified to say this, but I wouldn't worry too much as long as you have a backup. In theory most RAID controllers do have a battery to help protect their cache in the event of interruption. How well it works in practice may be a different issue, and I would remember that writing to each set of drives is not entirely synchronous. Like I said, I would still keep backups. I don't know how mirrored RAIDs check for corruption issues. If your box offers some kind of disk scrubbing, that would probably catch minor issues and ideally let you know if any drive sees a notable increase in bad sectors. That way the drive could be replaced preemptively. May I ask what software you're using? Also, as a disclaimer, this isn't my area of expertise, and there are people who know way more than me. I do try to vet any information if I'm unsure though, and I can tell you I wouldn't use redundancy in RAID as my sole method of backup. Any backup to the RAID could be much slower, so it doesn't have to be terribly expensive compared to primary storage.

 

Also, I don't know of any single users who go into tape backup. It's typically in a shared storage environment. I did have an old PowerEdge server with a tape drive and 15K SCSI drives a while back. It was retired from its primary use, and after replacing a battery and updating firmware, I used it to set up ESXi for learning purposes. At the time it was still called ESX. *shoos kids off his lawn*

 

Quote:
Originally Posted by marubeni View Post
 

 If we use this heavily, your 10GB/daily figure pales into insignificance, and might indicate that the useful lifetime of this SSD will be measured in months. So, pretty bad.

 

With certain applications, Lion ended up with a ton of pageouts for me over the course of a day. They alone totaled more than 10GB, which seems weird with 16GB of RAM. For some reason it's bad about releasing memory.

post #175 of 281
Quote:
Originally Posted by hmm View Post

May I ask what software you're using? Also, as a disclaimer, this isn't my area of expertise, and there are people who know way more than me. I do try to vet any information if I'm unsure though, and I can tell you I wouldn't use redundancy in RAID as my sole method of backup. Any backup to the RAID could be much slower, so it doesn't have to be terribly expensive compared to primary storage.

1) No SW per se. It's a HW RAID.

2) It is my backup. All my Macs use this HW RAID for Time Machine backups through a decade-old iMac connected to the network via 100Mbps Ethernet and connected to the HW RAID via FW400. If the RAID ever failed as a backup, the Macs themselves still hold the current data, so I could start the backups over. However, the RAID is also used as my sole iTunes Server. There is no other copy of that data outside of what's mirrored across the four discs in the HW RAID.

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

 

Goodbyeee jragosta :: http://forums.appleinsider.com/t/160864/jragosta-joseph-michael-ragosta

Reply

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

 

Goodbyeee jragosta :: http://forums.appleinsider.com/t/160864/jragosta-joseph-michael-ragosta

Reply
post #176 of 281
Quote:
Originally Posted by Cpsro View Post
 

Fanboys can be strange and myopic. I am an Apple zealot extraordinaire, but I don't let that get in the way of expressing my needs and disappointment in the direction the Mac Pro has been taken. Sorry, folks!

 

Repeating... I don't need or want GPUs (at least at this time), hence the lack of GPUs in my custom configured linux system which has twice the performance (where I need it) compared to the least-expensive 12-core Mac Pro and for about the same price. I could have received the parts last week or earlier--not wait a couple months for when they should be even cheaper (and the profit margins greater ;-)

 

I wouldn't buy a Dyson either.

 

Cpsro,

 

Denial is not just a river in Egypt! 

 

Apple’s new ‘overpriced’ $10,000 Mac Pro is $2,000 cheaper than the equivalent Windows PC

http://www.extremetech.com/computing/173695-apples-new-overpriced-10000-mac-pro-is-2000-cheaper-than-the-equivalent-windows-pc

post #177 of 281
Quote:
Originally Posted by SolipsismX View Post


1) No SW per se. It's a HW RAID.

2) It is my backup. All my Macs use this HW RAID for Time Machine backups through a decade-old iMac connected to the network via 100Mbps Ethernet and connected to the HW RAID via FW400. If the RAID ever failed as a backup, the Macs themselves still hold the current data, so I could start the backups over. However, the RAID is also used as my sole iTunes Server. There is no other copy of that data outside of what's mirrored across the four discs in the HW RAID.


Ohhh, so it is the backup. Well, it's not the only source for that data, so that's not so bad. By software I didn't mean a software RAID, where the data for each drive is calculated by a piece of software running on top of the OS rather than by a dedicated hardware controller; I meant software used to manage or monitor the health of the RAID. In the case of the iTunes media, if it's all purchased media, you should be able to recover it via your iTunes account in the event of catastrophic failure.

post #178 of 281
Quote:
Originally Posted by marubeni View Post

Now, disks (SSDs), while still slower than RAM are not quite as much slower, and so the infinite memory model might want to make a comeback, so while you have 64GB of physical RAM (as in the Mac Pro) we might pretend that we have 512GB (because we have a one terabyte SSD, and we allocated at large enough swap partition). If we use this heavily, your 10GB/daily figure pales into insignificance, and might indicate that the useful lifetime of this SSD will be measured in months. So, pretty bad.

It's not a practical concern; if you have any program needing to use 512GB of RAM or anywhere close to that, it needs to be rewritten. Also, 10GB/day still lasts 18 years. Hard drives typically only last 5 years. That's also if you use the cheaper SSDs. The non-bargain-basement SSDs and larger-capacity SSDs last longer:

http://www.anandtech.com/show/6459/samsung-ssd-840-testing-the-endurance-of-tlc-nand

A 256GB MLC drive at 10GB/day is rated for 70 years. There are tests here that do TBs of writes:

http://www.xtremesystems.org/forums/showthread.php?271063-SSD-Write-Endurance-25nm-Vs-34nm

The Samsung 830 256GB MLC lasted over 6 petabytes of writes. That would be 1.6TB per day for 10 years.

The drives are limited by their write speeds anyway. At 400MB/s, a drive can only write about 34TB per day maximum. It took 259 days to wear out the Samsung 830 writing constantly. People who write that much data should expect to wear out their drives quickly, and HDDs can't come close to writing like that anyway as they aren't fast enough.

Throttle the writes down to HDD level and they'll last 10x longer.
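Quick sanity check on those rates (rough arithmetic, ignoring overhead):

# Ceiling on daily writes at a sustained 400MB/s, and how long 6PB takes.
speed_mb_s = 400
tb_per_day = speed_mb_s * 86400 / 1e6
print(tb_per_day)              # ~34.6 TB/day flat out

days_to_6pb = 6 * 1000 / tb_per_day
print(days_to_6pb)             # ~174 days; the reported 259 days implies
                               # the test averaged below the drive's peak
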
Quote:
Originally Posted by SolipsismX 
I certainly don't want to start doing tape backups either

It's quite consumer-friendly: [media embed]

It wouldn't have to be all that regular, and the writes are pretty fast. Of course, when SSDs become really cheap, they'll suffice for most people, as they can be used for archiving, having no moving parts. When a 1TB SSD costs $100 there's not much point using tapes at $30 per TB. SSDs are under $0.50/GB now, so if the price trend keeps up, this will be before 2020.
Quote:
Originally Posted by SolipsismX 
the RAID is also used as my sole iTunes Server. There is no other copy of this outside of the data mirrored across the four discs in the HW RAID.

Even if you got a cheap 4TB desktop drive or something to keep a second copy, that would do. It's never a good idea to keep a single copy of data. If you ever get a program that accidentally formats drives, it can wipe it all out. For the sake of $50-100, it's not worth the hassle trying to recover files.
post #179 of 281
Quote:
Originally Posted by Marvin View Post

Even if you got a cheap 4TB desktop drive or something to keep a second copy, that would do. It's never a good idea to keep a single copy of data. If you ever get a program that accidentally formats drives, it can wipe it all out. For the sake of $50-100, it's not worth the hassle trying to recover files.

I would need to have another RAID but set to 0 so I can have 2x4TB but then we're still talking about RAID and if RAID isn't a proper backup solution then I'm still in the same position I am now.

I'm not worried about issues with the RAID or someone erasing the drives. This is a consumer setup. My biggest worry is that a drive will go bad and if it does I can swap it out for a new one.

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

 

Goodbyeee jragosta :: http://forums.appleinsider.com/t/160864/jragosta-joseph-michael-ragosta

Reply

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

 

Goodbyeee jragosta :: http://forums.appleinsider.com/t/160864/jragosta-joseph-michael-ragosta

Reply
post #180 of 281
Quote:
Originally Posted by SolipsismX View Post

I would need to have another RAID but set to 0 so I can have 2x4TB but then we're still talking about RAID and if RAID isn't a proper backup solution then I'm still in the same position I am now.

You wouldn't be in the same position, as you'd have a backup. The phrase 'RAID isn't a backup' just means that it isn't a backup for a singular copy of data. If you have one RAID to back up another, then one of the RAIDs is a backup, because you have two separate copies of the data not synced in real-time.

You also wouldn't need 2x4TB unless your 8TB RAID is full and you need to back up all of it. You only need as much as you've used on the drive, and enough for the important files that would be difficult or impossible to replace. You can also have two single 4TB drives and copy half your data to each one.
post #181 of 281
Quote:
Originally Posted by Marvin View Post

You wouldn't be in the same position as you'd have a backup. The phrase 'RAID isn't a backup' just means that it isn't a backup for a singular copy of data. If you have one RAID to backup another then one of the RAIDs is a backup because you have two separate copies of the data not synced in real-time.

You also wouldn't need 2x4TB unless your 8TB RAID is full and you need to backup all of it. You only need as much as you've used on the drive and enough for the important files that would be difficult or impossible to replace. You can also have two single 4TB drives and copy half your data to each one.

In terms of my iTunes Server it's not a single copy, in that it's copied across 2 discs, which is why I chose the RAID in the first place. It's better than what I had before, which was no redundancy.

I am just under 4TB utilized, so I don't see how I could stick with 4TB and not have to redo it all within a few months. The only upside I see is that I am using RAID10, which means I have 4x4TB drives in my RAID. I could remove two discs to make it 2x4TB in RAID1, which would give me 8TB of storage in each, but I'd still need to buy another HW RAID to put them in, and maybe even 2 drives so I can make the swap, as I'm not sure it will run after I take out two of the drives, but I assume it's theoretically possible since it would leave one full copy (assuming I remove the correct pairing of drives).

Either way, I don't think I'll do that, even though my iMac does have a second FW400 port. Previously I used my 27" iMac with a 3.1TB Fusion Drive and no backup, but I sold it since I never used that iMac except as an iTunes Server for the Apple TV and other Macs in the house.

What would I potentially lose here? What are the odds of it happening? Bottom line it's infinitely more redundant than it was before since before there was zero redundancy.

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

 

Goodbyeee jragosta :: http://forums.appleinsider.com/t/160864/jragosta-joseph-michael-ragosta

Reply

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

 

Goodbyeee jragosta :: http://forums.appleinsider.com/t/160864/jragosta-joseph-michael-ragosta

Reply
post #182 of 281
Quote:
Originally Posted by SolipsismX View Post

In terms of my iTunes Server it's not a single copy, in that it's copied across 2 discs, which is why I chose the RAID in the first place. It's better than what I had before, which was no redundancy.

It's better than no redundancy. It is two copies of the same data, but certain kinds of errors can break both sets of data as they are synced in real-time. Also, any actions done to the RAID set by the OS or software can break the whole thing, e.g.:

http://forums.macrumors.com/showthread.php?t=1615649

Offline backups aren't synced in real-time, so they allow you to recover accidentally deleted files, as well as recover from corruption that would otherwise be copied, and from any damage to the RAID set.
Quote:
Originally Posted by SolipsismX View Post

I am just under 4TB utilized so I don't see how I could stick with 4TB and not have to redo it all within a few months.

You don't have to back up all of it if it's not all important. Things that can be downloaded again easily are not important for backup. If it includes a lot of DVD rips though, and losing those would mean spending hours/days redoing them, your time is worth more than the cost of another hard drive.

4TB is a bit more than I thought, 3TB is $124:

http://www.amazon.com/Book-External-Drive-Storage-Backup/dp/B0042Z55RM
Quote:
Originally Posted by SolipsismX View Post

I could remove two discs to make it 2x4TB in RAID1 which would give me 8TB of storage in each, but I'd still need to buy another HW RAID to put them in

That would give you 4TB in each. The RAID you have is fine as it protects against a single drive failure. But, it's a good idea to have a backup on top of that.
Quote:
Originally Posted by SolipsismX View Post

What would I potentially lose here? What are the odds of it happening? Bottom line it's infinitely more redundant than it was before since before there was zero redundancy.

You have 4 drives, so if each drive has a 1% failure rate, you have about a 4% chance of a single drive failure. HDD failure rates are a bit higher than 1%:

http://www.pcworld.com/article/2062254/25-000-drive-study-shines-a-light-on-how-long-hard-drives-actually-last.html

They noted around a 5% failure rate, so 1 out of every 20 drives in their 25,000-drive test failed within 1.5 years. That would suggest a 1 in 5 chance of a single drive failure within 1.5 years for your RAID. That's still a 4/5 chance you won't have one, and if it's a failure you can recover from, it's no problem.
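The exact version of that arithmetic, assuming independent drive failures (a simplification; drives in one enclosure share heat, power, and vibration):

# P(at least one of n drives fails) at per-drive failure rate p.
def p_any_failure(p, n):
    return 1 - (1 - p) ** n

print(p_any_failure(0.01, 4))   # ~0.039 -- close to the 4% rule of thumb
print(p_any_failure(0.05, 4))   # ~0.185 -- roughly the "1 in 5" figure
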

It's more common that people regret having too few backups than too many.
post #183 of 281
Quote:
Originally Posted by SolipsismX View Post


I would need to have another RAID but set to 0 so I can have 2x4TB but then we're still talking about RAID and if RAID isn't a proper backup solution then I'm still in the same position I am now.

I'm not worried about issues with the RAID or someone erasing the drives. This is a consumer setup. My biggest worry is that a drive will go bad and if it does I can swap it out for a new one.

 

Ahh, some backup solutions do back up one RAID with another. The whole issue is somewhat debatable. For example, your typical RAID would not have any kind of version history as a secondary measure against corruption. As I mentioned, some do offer some kind of disk scrubbing functionality to attempt to detect possible sources of corruption early. Assuming it's not an exceptionally flaky controller, you shouldn't have trouble with fault tolerance on RAID 1 or 10. They aren't quite as finicky as a RAID 5 rebuild, which must perform a complete checksum and can fail to rebuild the array if any errors are present. That's why I said 10 (which you're using) is a better idea than 5 unless you have a very robust setup.

 

Also you didn't mention if the iTunes server media was primarily from iTunes purchases. If that is the case, you can always re-download it in the event of (unlikely) catastrophic failure.

post #184 of 281
I appreciate everyone's detailed info and advice regarding my RAID setup, but since it's better than what I had before and I've already spent about $1000 on the components, I think I'll take my chances for the time being. My next purchase will likely be a used Mac mini, simply because that old PPC iMac I'm using to connect to the RAID either can't do Time Machine backups when using Leopard or makes iTunes wonky when using Leopard Server.

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

 

Goodbyeee jragosta :: http://forums.appleinsider.com/t/160864/jragosta-joseph-michael-ragosta

Reply

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

 

Goodbyeee jragosta :: http://forums.appleinsider.com/t/160864/jragosta-joseph-michael-ragosta

Reply
post #185 of 281
Quote:
Originally Posted by hmm View Post
 

It's kind of like I said once before. They're great for really, really parallel workloads. Interestingly, there are some areas within graphics and visualization where that could really be exploited. I suspect it's an issue of existing code bases, older algorithms, and the lack of a clear future in terms of which framework ends up being dominant. That being said, I just picked up a book on OpenCL and heterogeneous programming.

That's exactly right. The difference between the 12 CPU cores and the 4096 GPU cores is that the CPU cores can all be running different programs over their little bit of data but the GPU cores must all be running the same program. That is what limits the applicability of the GPUs, but with a bit of imagination they can still be used in a lot of scenarios.
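To make that concrete, here's a minimal data-parallel sketch using the PyOpenCL bindings (the kernel and variable names are just for illustration). Every work-item executes the same kernel; only its index differs:

import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

# One tiny program; thousands of work-items each apply it to one element.
src = """
__kernel void scale(__global const float *x, __global float *y, const float k)
{
    int i = get_global_id(0);   // each work-item handles one element
    y[i] = k * x[i];
}
"""
prg = cl.Program(ctx, src).build()

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
mf = cl.mem_flags
x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
y_buf = cl.Buffer(ctx, mf.WRITE_ONLY, x.nbytes)

prg.scale(queue, (n,), None, x_buf, y_buf, np.float32(2.0))

y = np.empty_like(x)
cl.enqueue_copy(queue, y, y_buf)
assert np.allclose(y, 2.0 * x)

Branching inside the kernel is what hurts: work-items that diverge down different paths get serialized, which is exactly the "same program" limitation described above.
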

 

And as the Anandtech Mac Pro review shows, developers (like me) really need to start leveraging this.

http://www.anandtech.com/show/7603/mac-pro-review-late-2013

 

post #186 of 281
Quote:
Originally Posted by ascii View Post
 

That's exactly right. The difference between the 12 CPU cores and the 4096 GPU cores is that the CPU cores can all be running different programs over their little bit of data but the GPU cores must all be running the same program. That is what limits the applicability of the GPUs, but with a bit of imagination they can still be used in a lot of scenarios.

 

And as the Anandtech Mac Pro review shows, developers (like me) really need to start leveraging this.

http://www.anandtech.com/show/7603/mac-pro-review-late-2013

 

 

It's a long process. I've mentioned that I would like to see it in everything down to the iDevices, because I think it's important for developers to be able to count on its existence. Obviously, complete proliferation across all Macs is a start. Otherwise they wind up with a lot of branching to accommodate the presence or absence of such hardware capability. I suspect you would see more graphics applications making a lot of use of this if it weren't for the persistence of old code and the not-yet-stable state of OpenCL. I'm predominantly interested in it now because I think it will be increasingly useful.

post #187 of 281
Quote:
Originally Posted by Haggar View Post

Yeah, just like no sense having dual GPUs, right?  Even the Mac Mini can support 2 internal hard drives.  So I guess that means the Mac Mini makes no sense.

Yes, well, that's because a large portion of those Mini sales are to companies who use them as small, self-contained servers. So for that market, two internal HDDs or SSDs make sense. And then, Apple forcing the removal of the optical drive gives more room.

If they didn't have that server market, they would have shrunk the height of the Mini so that only one drive could fit.

But what would have been the point of two drives in the new Mac Pro? I don't see it. If you're buying this for a home machine, then you're either nuts, someone who wants to brag, or simply someone with too much money and too little brains.

For everyone else, one more drive serves no purpose.
post #188 of 281
Quote:
Originally Posted by melgross View Post

If you're buying this for a home machine, then you're either nuts, someone who wants to brag, or simply has too much money, and too little brains.

 

Is it really necessary to be insulting just to say you don't get it? We KNOW you don't get it. We don't EXPECT you to get it. We gave up on you being able to understand even such simple concepts a long time ago! ;)

 

But seriously...

 

I like making videos. The laptop I use to do that cost almost $4000 after BTO options, AppleCare and taxes. A basic Pro with a nice display would come in at about the same price and be MUCH better suited to the task.

 

Oh, and if that Pro had a second internal SSD I wouldn't even need the fast external storage for source files. I could just archive stuff on cheap, slow USB drives.

post #189 of 281
Quote:
Originally Posted by v5v View Post

Oh, and if that Pro had a second internal SSD I wouldn't even need the fast external storage for source files. I could just archive stuff on cheap, slow USB drives.

There might not be enough PCIe lanes. According to Anandtech, they had to use all the ones available. There are 40 from the CPU, and 16 each for the two GPUs leaves you with only 8. The SSD gets either 1 or 2, Thunderbolt gets 6, and they had to use a PLX chip to stretch them further.
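A hypothetical lane budget (the split below is an assumption pieced together from the Anandtech description, not a verified block diagram):

# 40 PCIe 3.0 lanes from the Ivy Bridge-EP CPU; assumed allocation.
lanes = {"GPU 1": 16, "GPU 2": 16, "PCIe SSD": 2, "Thunderbolt 2": 6}
spare = 40 - sum(lanes.values())
print(spare)   # 0 -- nothing left over for a second internal SSD
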

I did expect them to make a wider version of the SSD, but again it's about sales volume. How many people are really going to spend ~$1800 on internal storage? Although the first 1TB is $800, the second wouldn't be, as the first deducts the stock 256GB. Two 1TB SSDs would probably be $1800.

Maybe they (or a 3rd party) could make an adaptor that sits in the SSD PCIe slot that lets you put two of the standard form factor SSD drives in side by side. It looks like Intel is aiming for 2TB 2.5" next year:

http://www.legitreviews.com/intel-ssd-roadmaps-leaked-shows-2tb-2-5-inch-ssd-coming-2014_130204

I reckon it'll be 3 years before 2TB costs the same as 1TB and 6 years for 4TB to cost the same as 1TB.
post #190 of 281
Quote:
Originally Posted by Marvin View Post

<...>

1) What about adding more PCIe lanes next year so this is possible in the future?

2) There is room for larger SSDs, which I think could allow for using less dense chips over 2 interfaces, which could allow for additional storage without having to double the current cost. For example, what if the next Mac Pro comes with 2 SSDs at 2x256GB for a starting point of 512GB today? Wouldn't that allow for nearly 2000MB/s in a RAID 0 configuration, even if it's split off the same two PCIe 3.0 channels? Anyway, I'm not really sure it's needed. I personally only wanted 128GB for my new MBP, but because I wanted other BTO options I had to go with a minimum of 512GB, so I'm not really sure what apps could be loaded on the Mac Pro that 1TB couldn't handle.

3) The adapter is an interesting idea, but I'd think they would just go with a larger SSD with the one controller instead of trying to utilize the available bandwidth of the PCIe, so I think you're still not likely to get past what Apple offers today. Perhaps in a year or two controllers will be faster, but I'd think they would be focusing on max storage on one SSD, not something that is segmented and with dual controllers. I hope I'm wrong, but that seems too niche even for the Mac Pro market.

4) I hope you're wrong about how slowly SSD prices will drop, but so far that looks to be the path they're heading down. Any word on Apple's investment in Anobit? Have they incorporated their tech yet?

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

 

Goodbyeee jragosta :: http://forums.appleinsider.com/t/160864/jragosta-joseph-michael-ragosta

Reply

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

 

Goodbyeee jragosta :: http://forums.appleinsider.com/t/160864/jragosta-joseph-michael-ragosta

Reply
post #191 of 281
Quote:
Originally Posted by Marvin View Post


There might not be enough PCIe lanes. According to Anandtech, they had to use all the ones available. There's 40 from the CPU and 16 each per GPU only leaves you with 8. The SSD either gets 1 or 2 and 6 for Thunderbolt and they had to use more with a PLX chip.
 


I'm actually unsure whether it's oversubscribed in the current configuration. The actual CPUs have 40 lanes of PCIe 3.0. The I/O hubs connected to the chipset seem to be 2.0. I mix these up at times, but it would be C602, not X79; X79 is specifically for those marketed as i7 in that socket. I'm not actually sure how the bandwidth is aggregated in there. I would have to look it up, but note that there are unused USB 2 ports from the chipset, as well as seemingly unused SATA connections, which wouldn't be able to take the bandwidth of that SSD. The USB 2 ports are there because Intel only changes chipsets every other cycle on these. You do have to account for more than Thunderbolt and an SSD. Those USB 3 ports require lanes, as do the Ethernet ports and HDMI, so in that configuration they definitely don't have the bandwidth to run another SSD like that. I have to look up how the SATA lanes hook in. I've read conflicting things on it, and I suspect I'm missing something.

post #192 of 281

The Thunderbolt 2 ports are 20Gb/s = 2.5GB/sec = more than double what's needed for a second 950MB/sec PCIe SSD, same as the internal one. And TB2 is basically PCIe exposed over a port, remember?

 

So I predict Apple themselves will release an external TB2/PCIe SSD for the Mac Pro. And it won't be some big external unit with a power supply and a fan; it will be the same drive/size as the internal SSD and will stick out of a TB2 port like a USB thumb drive.

post #193 of 281
Quote:
Originally Posted by ascii View Post

The Thunderbolt 2 ports are 20Gb/s = 2.5GB/sec = more than double what's needed for a second 950MB/sec PCIe SSD, same as the internal one. And TB2 is basically PCIe exposed over a port, remember?

So I predict Apple themselves will release an external TB2/PCIe SSD for the Mac Pro. And it won't be some big external unit with a power supply and a fan; it will be the same drive/size as the internal SSD and will stick out of a TB2 port like a USB thumb drive.

I don't think they will release any external accessories for TB2, except for a display later down the road, and certainly not an additional SSD boot drive. Also, that TB2 performance is shared between the two ports on one TB2 controller, if I'm not mistaken.

PS: I think it's odd that the first two 4K displays connect via TB ports 1 and 2 but the 3rd 4K display needs to be connected via HDMI 1.2 even though that actually connects back into TB ports 5 and 6, if I'm not mistaken.

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

 

Goodbyeee jragosta :: http://forums.appleinsider.com/t/160864/jragosta-joseph-michael-ragosta

Reply

"There is no rule that says the best phones must have the largest screen." ~RoundaboutNow

 

Goodbyeee jragosta :: http://forums.appleinsider.com/t/160864/jragosta-joseph-michael-ragosta

Reply
post #194 of 281
Quote:
Originally Posted by SolipsismX View Post


I don't think they will release any external accessories for TB2, except for a display later down the road, and certainly not an additional SSD boot drive. Also, that TB2 performance is shared between the two ports on one TB2 controller, if I'm not mistaken.

PS: I think it's odd that the first two 4K displays connect via TB ports 1 and 2 but the 3rd 4K display needs to be connected via HDMI 1.2 even though that actually connects back into TB ports 5 and 6, if I'm not mistaken.

Well, the 950MB/sec flash storage Apple makes is a beautiful thing, and I would hope they would give Mac Pro owners a path to get more than 1TB of it. Sleek aluminium thumb-drivey things would be a nice way to do it.

 

Based on the Anand article I think you're right: you would have to choose the port carefully, making sure it's not on the same controller as a monitor (or other high-bandwidth device), to get max performance.

post #195 of 281
Quote:
Originally Posted by Marvin View Post


It's not a practical concern; if you have any program needing to use 512GB of RAM or anywhere close to that, it needs to be rewritten. Also, 10GB/day still lasts 18 years. Hard drives typically only last 5 years. That's also if you use the cheaper SSDs. The non-bargain-basement SSDs and larger-capacity SSDs last longer:

<...>

Any program using 512GB needs to be rewritten? I am not talking about a browser, but about mathematical computing, which needs as much memory as possible to do the "next case".

post #196 of 281
Quote:
Originally Posted by marubeni View Post

Any program using 512GB needs to be rewritten? I am not talking about a browser, but about mathematical computing which needs as much memory as possible to do the "next case".

Assuming there is such a program in need of a 512GB data set repeatedly transferred to storage, an SSD will hold up to the writes. The Mac Pro uses a Samsung MLC SSD, and 256GB of it stood up to 6 petabytes of writes. It's also possible to use an external RAID drive.
post #197 of 281
Quote:
Originally Posted by Marvin View Post


Assuming there is such a program in need of a 512GB data set repeatedly transferred to storage, an SSD will hold up to the writes.

 

There may be some circumstances where it is more advantageous to simply have 512GB. Where I disagree with him is that it needs to be a Mac. Users with those kinds of requirements are typically running proprietary code on some flavor of Linux. It's not a market where Apple has ever maintained a presence outside of a couple of corner cases in the PowerPC era. Even if Apple supported that, I find it unlikely that OS X would be a better choice than Linux for what I suspect would be some flavor of big data analysis. The reason that subject came to mind was that I was trying to think of what would require long stretches of contiguous address space without heavy traffic to and from swap.

 

edit: blah I didn't describe that very well.

post #198 of 281
Quote:
Originally Posted by hmm View Post
 

 

There may be some circumstances where it is more advantageous to simply have 512GB. Where I disagree with him is that it needs to be a Mac. Users with those kinds of requirements are typically running proprietary code on some flavor of Linux. It's not a market where Apple has ever maintained a presence outside of a couple of corner cases in the PowerPC era. Even if Apple supported that, I find it unlikely that OS X would be a better choice than Linux for what I suspect would be some flavor of big data analysis. The reason that subject came to mind was that I was trying to think of what would require long stretches of contiguous address space without heavy traffic to and from swap.

 

edit: blah I didn't describe that very well.

My point was precisely that Macs had no presence in that market, and people only run this kind of stuff on Linux (myself included). But all of those people (myself, again, included) carry around Mac notebooks, so I would imagine that Apple COULD make some inroads if it were interested. Presumably the Mac Pro's fast I/O would help with quant finance stuff as well. I have never seen a Mac in finance shops, but maybe I haven't been looking hard enough.

post #199 of 281
Quote:
Originally Posted by marubeni View Post
 

My point was precisely that Macs had no presence in that market, and people only run this kind of stuff on Linux (myself included). But all of those people (myself, again, included) carry around Mac notebooks, so I would imagine that Apple COULD make some inroads if it were interested. Presumably the Mac Pro's fast I/O would help with quant finance stuff as well. I have never seen a Mac in finance shops, but maybe I haven't been looking hard enough.

 

Amusingly, I too carry around an Apple notebook almost everywhere. It's not like I can take the Mac Pro everywhere, and mine is old anyway; the notebook is just more flexible. Well, maybe not everywhere, and there are a lot of things I can't do with limited screen space, but I take it along frequently.

 

You have me thinking on this, yet Apple would need to do more than just add DIMMs to a Mac Pro. It might require a different design. I am unsure of the state of their enterprise support. I would think Thunderbolt would be less suitable than many other connectors for mission-critical settings, given its lack of a locking mechanism. That probably seems silly, but every other common connector in such a setting uses some kind of lock. With servers I suspect part of the issue is vibration. Anyway, they might need to make significant changes to OS X too. Linux provides a very lean system for situations where every bit of performance is needed. I would be surprised if they went this route. Their current presence is largely due to leveraging and, in a few cases, software acquisitions. Years ago they acquired Shake and sold it for some time. They had FCP and now FCPX. Other than that, most of the corporate devices seem to be from BYOD policies. They have been more aggressive on certain hardware changes in the last 2 years, but that would really surprise me.

post #200 of 281
Quote:
Originally Posted by marubeni View Post

My point was precisely that Macs had no presence in that market, and people only run this kind of stuff on Linux (myself included). But all of those people (myself, again, included) carry around Mac notebooks, so I would imagine that Apple COULD make some inroads if it were interested.

If you are talking about workstations, there isn't a significant enough market for ones with massive amounts of RAM. If you mean servers in a shared environment, where the RAM expense isn't allocated to an individual, they already had a server and stated nobody was buying them. It's all very well being the lone voice enthusiastically saying you'd buy one, but unless there are another 100,000 people every quarter who feel the same way, it's not a good business decision for Apple. It's not just about making the initial sale either; people have to keep investing in new hardware regularly. Someone might go and build a 24-core Linux server with 512GB RAM and run it for 5-10 years using hardware with 15% margins. Those people might say it's a missed opportunity for Apple, but clearly it's not, as there's not enough profit in it for them. If they made one with their margins, people would still build them to save money.

The fact you use a Mac laptop is what they care about because they are defining your user experience. The Linux boxes can sit processing data all day in a cupboard. Apple has never tried to make all things for all people; like every company, they pick which products make sense for their business model.