Teardown of Apple's new Mac Pro reveals socketed, removable Intel CPU


Comments

  • Reply 181 of 284
    Marvin wrote: »
    Even if you got a cheap 4TB desktop drive or something to keep a second copy, that would do. It's never a good idea to keep a single copy of data. If you ever get a program that accidentally formats drives, it can wipe it all out. For the sake of $50-100, it's not worth the hassle trying to recover files.

    I would need to have another RAID, but set to 0 so I can have 2x4TB. But then we're still talking about RAID, and if RAID isn't a proper backup solution, then I'm still in the same position I'm in now.

    I'm not worried about issues with the RAID or someone erasing the drives. This is a consumer setup. My biggest worry is that a drive will go bad and if it does I can swap it out for a new one.
  • Reply 182 of 284
    Marvin Posts: 15,585 moderator
    solipsismx wrote: »
    I would need to have another RAID, but set to 0 so I can have 2x4TB. But then we're still talking about RAID, and if RAID isn't a proper backup solution, then I'm still in the same position I'm in now.

    You wouldn't be in the same position, as you'd have a backup. The phrase 'RAID isn't a backup' just means that a RAID holding your only copy of the data isn't a backup. If you use one RAID to back up another, then one of the RAIDs is a backup, because you have two separate copies of the data that aren't synced in real-time.

    You also wouldn't need 2x4TB unless your 8TB RAID is full and you need to back up all of it. You only need as much space as you've actually used, and enough for the important files that would be difficult or impossible to replace. You can also have two single 4TB drives and copy half your data to each one.
  • Reply 183 of 284
    Marvin wrote: »
    You wouldn't be in the same position, as you'd have a backup. The phrase 'RAID isn't a backup' just means that a RAID holding your only copy of the data isn't a backup. If you use one RAID to back up another, then one of the RAIDs is a backup, because you have two separate copies of the data that aren't synced in real-time.

    You also wouldn't need 2x4TB unless your 8TB RAID is full and you need to back up all of it. You only need as much space as you've actually used, and enough for the important files that would be difficult or impossible to replace. You can also have two single 4TB drives and copy half your data to each one.

    In terms of my iTunes server, it's not a single copy, in that it's copied across 2 disks, which is why I chose the RAID in the first place. It's better than what I had before, which was no redundancy.

    I am just under 4TB utilized, so I don't see how I could stick with 4TB and not have to redo it all within a few months. The only upside I see is that I am using RAID 10, which means I have 4x4TB drives in my RAID. I could remove two disks to make it 2x4TB in RAID 1, which would give me 8TB of storage in each, but I'd still need to buy another HW RAID to put them in, and maybe even 2 more drives so I can make the swap. I'm not sure it will keep running after I take out two of the drives, but I assume it's theoretically possible, since it would leave one full copy (assuming I remove the correct pairing of drives).

    Either way, I don't think I'll do that even though my iMac does have a second FW400 port. Previously I used my 27" iMac with a 3.1TB Fusion Drive but with no backup, but I sold it since I never used that iMac except as an iTunes Server for the Apple TV and other Macs in the house.

    What would I potentially lose here? What are the odds of it happening? Bottom line: it's infinitely more redundant than it was, since before there was zero redundancy.
  • Reply 184 of 284
    Marvin Posts: 15,585 moderator
    solipsismx wrote: »
    In terms of my iTunes server, it's not a single copy, in that it's copied across 2 disks, which is why I chose the RAID in the first place. It's better than what I had before, which was no redundancy.

    It's better than no redundancy. It is two copies of the same data, but certain kinds of errors can break both sets because they are synced in real-time; also, any actions done to the RAID set by the OS or software can break the whole thing, e.g.:

    http://forums.macrumors.com/showthread.php?t=1615649

    Offline backups aren't synced in real-time, so they let you recover accidentally deleted files, as well as recover from corruption that would otherwise be copied over and from any damage to the RAID set.
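    For example, a minimal offline-backup pass can be a short script that mirrors the important folders to a plain external drive (just a sketch; the paths here are made up, and it assumes rsync is installed, which it is by default on OS X):

    #!/usr/bin/env python
    # Minimal offline-backup sketch: mirror important folders to an external
    # drive with rsync. No --delete, so files removed (or wiped) on the RAID
    # are still recoverable from the backup copy. Paths are hypothetical.
    import subprocess

    SOURCES = ["/Volumes/RAID/iTunes", "/Volumes/RAID/Photos"]
    DEST = "/Volumes/Backup4TB/"

    for src in SOURCES:
        # -a preserves permissions/timestamps; unchanged files are skipped
        subprocess.check_call(["rsync", "-a", "--progress", src, DEST])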
    solipsismx wrote: »
    I am just under 4TB utilized so I don't see how I could stick with 4TB and not have to redo it all within a few months.

    You don't have to back up all of it if it's not all important. Things that can easily be downloaded again are not important for backup. If it holds a lot of DVD rips, though, and losing those would mean spending hours or days redoing them, your time is worth more than the cost of another hard drive.

    4TB is a bit more than I thought; a 3TB drive is $124:

    http://www.amazon.com/Book-External-Drive-Storage-Backup/dp/B0042Z55RM
    solipsismx wrote: »
    I could remove two disks to make it 2x4TB in RAID 1, which would give me 8TB of storage in each, but I'd still need to buy another HW RAID to put them in

    That would give you 4TB in each. The RAID you have is fine as it protects against a single drive failure. But, it's a good idea to have a backup on top of that.
    solipsismx wrote: »
    What would I potentially lose here? What are the odds of it happening? Bottom line: it's infinitely more redundant than it was, since before there was zero redundancy.

    You have 4 drives, so if each drive has a 1% failure rate, you have roughly a 4% chance of at least one drive failing. HDD failure rates are a bit higher than 1%:

    http://www.pcworld.com/article/2062254/25-000-drive-study-shines-a-light-on-how-long-hard-drives-actually-last.html

    They noted around a 5% failure rate, so 1 out of every 20 drives in their 25,000-drive test failed within 1.5 years. That would suggest roughly a 1 in 5 chance of at least one drive failing within 1.5 years in your 4-drive RAID. That's still a 4 in 5 chance you won't see one, and if it's a failure you can recover from, it's no problem.
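    The arithmetic, as a quick sketch (it treats drive failures as independent, which is an assumption):

    # Chance of at least one drive failing in a 4-drive RAID, assuming
    # independent failures at the quoted per-drive rates.
    def any_failure(per_drive_rate, drives=4):
        return 1 - (1 - per_drive_rate) ** drives

    print(any_failure(0.01))  # ~0.039 -> about 4% at a 1% per-drive rate
    print(any_failure(0.05))  # ~0.185 -> about 1 in 5 at the ~5% per 1.5 years rate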

    It's more common that people regret having too few backups than too many.
  • Reply 185 of 284
    hmm Posts: 3,405 member
    Quote:
    Originally Posted by SolipsismX View Post





    I would need to have another RAID, but set to 0 so I can have 2x4TB. But then we're still talking about RAID, and if RAID isn't a proper backup solution, then I'm still in the same position I'm in now.



    I'm not worried about issues with the RAID or someone erasing the drives. This is a consumer setup. My biggest worry is that a drive will go bad and if it does I can swap it out for a new one.

     

    Ahh, some backup solutions do back up one RAID with another. The whole issue is somewhat debatable. For example, your typical RAID would not have any kind of version history as a secondary measure against corruption. As I mentioned, some do offer disk-scrubbing functionality to try to detect possible sources of corruption early. Assuming it's not an exceptionally flaky controller, you shouldn't have trouble with fault tolerance on RAID 1 or 10. They aren't quite as finicky as a RAID 5 rebuild, which must read and verify the whole array and can fail to rebuild it if any errors are present. That's why I said 10 (which you're using) is a better idea than 5 unless you have a very robust setup.
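    To put a rough number on why big RAID 5 rebuilds are finicky, here's a sketch using the 1-error-per-10^14-bits unrecoverable-read-error figure that consumer drive spec sheets commonly quote (that rate is an assumption on my part, not something from this thread):

    # Rough odds that a rebuild reads every remaining bit without an error.
    # Assumes the often-quoted consumer URE spec of 1 error per 1e14 bits read;
    # real drives vary, so treat this as an illustration, not a prediction.
    URE_PER_BIT = 1e-14

    def clean_read_probability(bytes_to_read):
        return (1 - URE_PER_BIT) ** (bytes_to_read * 8)

    # A 4x4TB RAID 5 rebuild has to read the 3 surviving 4TB drives in full.
    print(clean_read_probability(3 * 4e12))  # ~0.38, i.e. ~62% chance of hitting an error
    # A mirror (RAID 1/10) rebuild only has to read the one surviving copy.
    print(clean_read_probability(4e12))      # ~0.73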

     

    Also you didn't mention if the iTunes server media was primarily from iTunes purchases. If that is the case, you can always re-download it in the event of (unlikely) catastrophic failure.

  • Reply 186 of 284
    solipsismx Posts: 19,566 member
    I appreciate everyone's detailed info and advice regarding my RAID setup, but since it's better than what I had before and I've already spent about $1000 on the components, I think I'll take my chances for the time being. My next purchase will likely be a used Mac mini, simply because the old PPC iMac I'm using to connect to the RAID either can't do Time Machine backups under Leopard or makes iTunes wonky under Leopard Server.
  • Reply 187 of 284
    ascii Posts: 5,936 member
    Quote:
    Originally Posted by hmm View Post

     

    It's kind of like I said once before. They're great for really, really parallel workloads. Interestingly, there are some areas within graphics and visualization where that could really be exploited. I suspect it's an issue of existing code bases, older algorithms, and no clear sense of which framework will end up dominant. That being said, I just picked up a book on OpenCL and heterogeneous programming.


    That's exactly right. The difference between the 12 CPU cores and the 4096 GPU cores is that the CPU cores can all be running different programs over their little bit of data but the GPU cores must all be running the same program. That is what limits the applicability of the GPUs, but with a bit of imagination they can still be used in a lot of scenarios.

     

    And as the Anandtech Mac Pro review shows, developers (like me) really need to start leveraging this.

    http://www.anandtech.com/show/7603/mac-pro-review-late-2013
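    To make the "same program on every core" point concrete, here's roughly what a trivial OpenCL job looks like (a sketch using the third-party pyopencl bindings, so it assumes that package and an OpenCL device are available):

    import numpy as np
    import pyopencl as cl

    a = np.arange(4096, dtype=np.float32)   # one element per work-item/"core"

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)
    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    # One kernel for every work-item; only get_global_id() differs per item.
    prg = cl.Program(ctx, """
    __kernel void square(__global const float *a, __global float *out) {
        int gid = get_global_id(0);
        out[gid] = a[gid] * a[gid];
    }
    """).build()

    prg.square(queue, a.shape, None, a_buf, out_buf)

    out = np.empty_like(a)
    cl.enqueue_copy(queue, out, out_buf)
    print(out[:5])   # [ 0.  1.  4.  9. 16.]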

     

  • Reply 188 of 284
    hmm Posts: 3,405 member
    Quote:

    Originally Posted by ascii View Post

     

    That's exactly right. The difference between the 12 CPU cores and the 4096 GPU cores is that the CPU cores can all be running different programs over their little bit of data but the GPU cores must all be running the same program. That is what limits the applicability of the GPUs, but with a bit of imagination they can still be used in a lot of scenarios.

     

    And as the Anandtech Mac Pro review shows, developers (like me) really need to start leveraging this.

    http://www.anandtech.com/show/7603/mac-pro-review-late-2013

     

     


    It's a long process. I've mentioned that I would like to see it in everything down to the iDevices, because I think it's important for developers to be able to count on its existence. Obviously, complete proliferation across all Macs is a start. Otherwise they wind up with a lot of branching to accommodate the presence or absence of such hardware capability. I suspect you would see more graphics applications making heavy use of this if it weren't for the persistence of old code and the still-unsettled state of OpenCL. I'm predominantly interested in it now because I think it will be increasingly useful.

  • Reply 189 of 284
    melgross Posts: 33,715 member
    haggar wrote: »
    Yeah, just like no sense having dual GPUs, right?  Even the Mac Mini can support 2 internal hard drives.  So I guess that means the Mac Mini makes no sense.

    Yes, well, that's because a large portion of those Mini sales are to companies that use them as small, self-contained servers. So for that market, two internal HDDs or SSDs make sense. And then, with Apple forcing the removal of the optical drive, there's more room.

    If they didn't have that server market, they would have shrunk the height of the Mini so that only one drive could fit.

    But what would have been the point of two drives in the new Mac Pro? I don't see it. If you're buying this for a home machine, then you're either nuts, someone who wants to brag, or simply someone with too much money and too few brains.

    For everyone else, one more drive serves no purpose.
  • Reply 190 of 284
    v5v Posts: 1,357 member
    Quote:

    Originally Posted by melgross View Post



    If you're buying this for a home machine, then you're either nuts, someone who wants to brag, or simply someone with too much money and too few brains.

     

    Is it really necessary to be insulting just to say you don't get it? We KNOW you don't get it. We don't EXPECT you to get it. We gave up on you being able to understand even such simple concepts a long time ago! ;)

     

    But seriously...

     

    I like making videos. The laptop I use to do that cost almost $4000 after BTO options, AppleCare and taxes. A basic Pro with a nice display would come in at about the same price and be MUCH better suited to the task.

     

    Oh, and if that Pro had a second internal SSD I wouldn't even need the fast external storage for source files. I could just archive stuff on cheap, slow USB drives.

  • Reply 191 of 284
    Marvin Posts: 15,585 moderator
    v5v wrote: »
    Oh, and if that Pro had a second internal SSD I wouldn't even need the fast external storage for source files. I could just archive stuff on cheap, slow USB drives.

    There might not be enough PCIe lanes. According to Anandtech, they had to use all the ones available. There are 40 from the CPU, and 16 for each GPU leaves only 8. The SSD gets 1 or 2, 6 go to Thunderbolt, and they had to share lanes through a PLX chip to make it all fit.
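    Taking those numbers at face value (they're Anandtech's estimates, not an official breakdown), the budget is already spoken for:

    # Back-of-envelope PCIe lane budget for the 2013 Mac Pro, using the
    # numbers quoted above (estimates, not an official breakdown).
    CPU_LANES = 40
    used = {
        "GPU 1": 16,
        "GPU 2": 16,
        "SSD": 2,          # "1 or 2" per the post; take the larger figure
        "Thunderbolt": 6,  # shared out further behind the PLX chip
    }
    spare = CPU_LANES - sum(used.values())
    print("lanes left over:", spare)   # 0 -> nothing free for a second SSD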

    I did expect them to make a wider version of the SSD, but again it's about sales volume. How many people are really going to spend ~$1800 on internal storage? Although the first 1TB upgrade is $800, a second one wouldn't be, as the first price deducts the stock 256GB. Two 1TB SSDs would probably be ~$1800.

    Maybe they (or a 3rd party) could make an adapter that sits in the SSD PCIe slot and lets you fit two of the standard-form-factor SSDs side by side. It looks like Intel is aiming for a 2TB 2.5" drive next year:

    http://www.legitreviews.com/intel-ssd-roadmaps-leaked-shows-2tb-2-5-inch-ssd-coming-2014_130204

    I reckon it'll be 3 years before 2TB costs the same as 1TB and 6 years for 4TB to cost the same as 1TB.
  • Reply 192 of 284
    solipsismx Posts: 19,566 member
    Marvin wrote: »
    There might not be enough PCIe lanes. According to Anandtech, they had to use all the ones available. There are 40 from the CPU, and 16 for each GPU leaves only 8. The SSD gets 1 or 2, 6 go to Thunderbolt, and they had to share lanes through a PLX chip to make it all fit.

    I did expect them to make a wider version of the SSD, but again it's about sales volume. How many people are really going to spend ~$1800 on internal storage? Although the first 1TB upgrade is $800, a second one wouldn't be, as the first price deducts the stock 256GB. Two 1TB SSDs would probably be ~$1800.

    Maybe they (or a 3rd party) could make an adapter that sits in the SSD PCIe slot and lets you fit two of the standard-form-factor SSDs side by side. It looks like Intel is aiming for a 2TB 2.5" drive next year:

    http://www.legitreviews.com/intel-ssd-roadmaps-leaked-shows-2tb-2-5-inch-ssd-coming-2014_130204

    I reckon it'll be 3 years before 2TB costs the same as 1TB and 6 years for 4TB to cost the same as 1TB.

    1) What about adding more PCIe lanes next year so this is possible in the future?

    2) There is room for larger SSDs, which I think could allow using less dense chips over 2 interfaces, adding storage without having to double the current cost. For example, what if the next Mac Pro came with 2x256GB SSDs for a starting point of 512GB? Wouldn't that allow nearly 2000MB/s in a RAID 0 configuration, even if it's split off the same two PCIe 3.0 lanes? Anyway, I'm not really sure it's needed. I personally only wanted 128GB for my new MBP, but because I wanted other BTO options I had to go with a minimum of 512GB, so I'm not really sure what apps could be loaded on a Mac Pro that 1TB couldn't handle.

    3) The adapter is an interesting idea, but I'd think they would just go with a larger SSD on the one controller instead of trying to utilize the available PCIe bandwidth, so I think you're still not likely to get past what Apple offers today. Perhaps in a year or two controllers will be faster, but I'd think they would focus on maximum storage on one SSD, not something segmented across dual controllers. I hope I'm wrong, but that seems too niche even for the Mac Pro market.

    4) I hope you're wrong about how slowly SSD prices will drop, but so far that looks to be the path they're on. Any word on Apple's investment in Anobit? Have they incorporated their tech yet?
  • Reply 193 of 284
    hmm Posts: 3,405 member
    Quote:
    Originally Posted by Marvin View Post





    There might not be enough PCIe lanes. According to Anandtech, they had to use all the ones available. There are 40 from the CPU, and 16 for each GPU leaves only 8. The SSD gets 1 or 2, 6 go to Thunderbolt, and they had to share lanes through a PLX chip to make it all fit.

     



    I'm actually unsure whether it's oversubscribed in the current configuration. The CPUs themselves have 40 lanes of PCIe 3.0. The I/O hub side of the chipset seems to be PCIe 2.0. I mix these up at times, but it would be C602, not X79; X79 is specifically for the parts marketed as i7 in that socket. I'm not actually sure how the bandwidth is aggregated in there. I would have to look it up, but note that there are unused USB 2 ports from the chipset, as well as seemingly unused SATA connections, which wouldn't be able to take the bandwidth of that SSD. The USB 2 ports are there because Intel only changes chipsets every other cycle on these. You do have to account for more than Thunderbolt and an SSD: the USB 3 ports require lanes, as do the Ethernet ports and HDMI, so in that configuration they definitely don't have the bandwidth to run another SSD like that. I have to look up how the SATA lanes hook in. I've read conflicting things on it, and I suspect I'm missing something.

  • Reply 194 of 284
    ascii Posts: 5,936 member

    The Thunderbolt 2 ports are 20Gb/s = 2.5GB/s, more than double what's needed for a second 950MB/s PCIe SSD like the internal one. And TB2 is basically PCIe exposed over a port, remember?
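    The conversion, just to spell it out (this is raw link rate, before any protocol overhead):

    # Raw numbers behind the claim above (link rate, ignoring protocol overhead).
    tb2_gbit_s = 20.0
    tb2_gbyte_s = tb2_gbit_s / 8      # 2.5 GB/s
    ssd_gbyte_s = 0.95                # ~950 MB/s internal PCIe SSD
    print(tb2_gbyte_s, "GB/s is", round(tb2_gbyte_s / ssd_gbyte_s, 2), "x the SSD")  # 2.63x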

     

    So I predict Apple themselves will release an external TB2/PCIe SSD for the Mac Pro. And it won't be some big external unit with a power supply and a fan; it will be the same drive/size as the internal SSD and will stick out of a TB2 port like a USB thumb drive.

  • Reply 195 of 284
    solipsismx Posts: 19,566 member
    ascii wrote: »
    The Thunderbolt 2 ports are 20Gb/s = 2.5GB/s, more than double what's needed for a second 950MB/s PCIe SSD like the internal one. And TB2 is basically PCIe exposed over a port, remember?

    So I predict Apple themselves will release an external TB2/PCIe SSD for the Mac Pro. And it won't be some big external unit with a power supply and a fan; it will be the same drive/size as the internal SSD and will stick out of a TB2 port like a USB thumb drive.

    I don't think they will release any external accessories for TB2, except for a display later down the road, and certainly not an additional SSD boot drive. Also, that TB2 bandwidth is shared between the two ports on one TB2 controller, if I'm not mistaken.

    PS: I think it's odd that the first two 4K displays connect via TB ports 1 and 2, but the 3rd 4K display needs to be connected via HDMI 1.2, even though that actually connects back to TB ports 5 and 6, if I'm not mistaken.
  • Reply 196 of 284
    ascii Posts: 5,936 member
    Quote:

    Originally Posted by SolipsismX View Post





    I don't think they will release any external accessories for TB2, except for a display later down the road, and certainly not an additional SSD boot drive. Also, that TB2 bandwidth is shared between the two ports on one TB2 controller, if I'm not mistaken.

    PS: I think it's odd that the first two 4K displays connect via TB ports 1 and 2, but the 3rd 4K display needs to be connected via HDMI 1.2, even though that actually connects back to TB ports 5 and 6, if I'm not mistaken.

    Well the 950MB/sec Flash storage Apple makes is a beautiful thing, and I would hope they would give Mac Pro owners a path to get more than 1TB of it. And sleek aluminium thumb drivey things would be a nice way to do it.

     

    Based on the Anand article I think you're right: you would have to choose the port carefully and make sure it's not on the same controller as a monitor (or other high-bandwidth device) to get max performance.

  • Reply 197 of 284
    marubeni Posts: 334 member
    Quote:

    Originally Posted by Marvin View Post





    It's not a practical concern; if you have any program needing to use 512GB of RAM, or anywhere close to that, it needs to be rewritten. Also, at 10GB/day of writes the drive still lasts 18 years. Hard drives typically only last 5 years. That's also if you use the cheaper SSDs; the non-bargain-basement SSDs and larger-capacity SSDs last longer:



    http://www.anandtech.com/show/6459/samsung-ssd-840-testing-the-endurance-of-tlc-nand



    A 256GB MLC drive at 10GB/day is rated for 70 years. There are tests here that do TBs of writes:



    http://www.xtremesystems.org/forums/showthread.php?271063-SSD-Write-Endurance-25nm-Vs-34nm



    The Samsung 830 256GB MLC lasted through over 6 petabytes of writes. That would be 1.6TB per day for 10 years.



    The drives are limited by their write speeds anyway. At 400MB/s, a drive can only write about 34TB per day maximum. It took 259 days to wear out the Samsung 830 writing constantly. People who write that much data should expect to wear out their drives quickly, and HDDs can't come close to writing like that anyway, as they aren't fast enough.



    Throttle the writes down to HDD level and they'll last 10x longer.

    It's quite consumer-friendly.

    It wouldn't have to be all that regular, and the writes are pretty fast. Of course, when SSDs become really cheap, that'll suffice for most people, as they can be used for archiving since they have no moving parts. When a 1TB SSD costs $100, there's not much point using tape at $30 per TB. They are under $0.50/GB now, so if the price trend keeps up, that will be before 2020.

    Even if you got a cheap 4TB desktop drive or something to keep a second copy, that would do. It's never a good idea to keep a single copy of data. If you ever get a program that accidentally formats drives, it can wipe it all out. For the sake of $50-100, it's not worth the hassle trying to recover files.

    Any program using 512GB needs to be rewritten? I am not talking about a browser, but about mathematical computing which needs as much memory as possible to do the "next case".

  • Reply 198 of 284
    Marvin Posts: 15,585 moderator
    marubeni wrote: »
    Any program using 512GB needs to be rewritten? I am not talking about a browser, but about mathematical computing which needs as much memory as possible to do the "next case".

    Assuming there is such a program that needs a 512GB data set repeatedly written out to storage, an SSD will hold up to the writes. The Mac Pro uses a Samsung MLC SSD, and the 256GB Samsung MLC drive in the test above stood up to 6 petabytes of writes. It's also possible to use an external RAID drive.
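    Back-of-envelope, taking that 6PB endurance figure at face value:

    # How long 6 petabytes of write endurance lasts at different daily write loads.
    # 6 PB is the figure from the Samsung 830 256GB endurance test quoted earlier.
    ENDURANCE_TB = 6000.0

    for daily_tb in (0.01, 0.512, 1.6):   # 10GB/day, a full 512GB set per day, 1.6TB/day
        days = ENDURANCE_TB / daily_tb
        print("%5.3f TB/day -> %7.0f days (~%.0f years)" % (daily_tb, days, days / 365))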
  • Reply 199 of 284
    hmm Posts: 3,405 member
    Quote:
    Originally Posted by Marvin View Post





    Assuming there is such a program that needs a 512GB data set repeatedly written out to storage, an SSD will hold up to the writes.

     

    There may be some circumstances where it is more advantageous to simply have 512GB. Where I disagree with him is the idea that it needs to be a Mac. Users with those kinds of requirements are typically running proprietary code on some flavor of Linux. It's not a market where Apple has ever maintained a presence outside of a couple of corner cases in the PowerPC era. Even if Apple supported that, I find it unlikely that OS X would be a better choice than Linux for what I suspect would be some flavor of big-data analysis. The reason the subject came to mind was that I was trying to think of what would require long stretches of contiguous address space without heavy traffic to and from swap.

     

    edit: blah I didn't describe that very well.

  • Reply 200 of 284
    marubeni Posts: 334 member
    Quote:

    Originally Posted by hmm View Post

     

     

    There may be some circumstances where it is more advantageous to simply have 512GB. Where I disagree with him is that it needs to be a Mac. Users with those kinds of requirements are typically running proprietary code on some flavor of Linux. It's not a market where Apple has ever maintained a presence outside of a couple corner cases in the PowerPC era. Even if Apple supported that, I find it unlikely that OSX would be a better choice than Linux for what I suspect would be some flavor of big data analysis. The reason that subject came to mind was that I was trying to think what would require long streams of contiguous address space without the need to address heavy traversal of read to and from swap.

     

    edit: blah I didn't describe that very well.


    My point was precisely that Macs have no presence in that market, and people only run this kind of stuff on Linux (myself included). But all of those people (myself, again, included) carry around Mac notebooks, so I would imagine that Apple COULD make some inroads if it were interested. Presumably the Mac Pro's fast I/O would help with quant finance stuff as well; I have never seen a Mac in finance shops, but maybe I haven't been looking hard enough.
