Teardown of Apple's new Mac Pro reveals socketed, removable Intel CPU


Comments

  • Reply 161 of 284
    Quote:

    Originally Posted by ascii View Post

     

    I think the theoretical limit is 1200MB/sec, so the 950MB/sec figure is what you can really expect. As for whether it would be noticeable in real life I guess that depends on the size of file you're working with. A 10GB video file might take 10s to load on a Mac Pro but 20s on a PC, but a 1MB Word file would be instantaneous on both.
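    (A rough back-of-the-envelope check of the figures quoted above, assuming purely sequential reads at the stated throughputs and ignoring filesystem overhead; the 500MB/s SATA figure is an assumption for the "PC" case:)

    # Rough check of the quoted load times; assumes purely sequential reads
    # at the stated throughput, ignoring filesystem and protocol overhead.
    def load_time_seconds(file_size_gb, throughput_mb_per_s):
        return (file_size_gb * 1000) / throughput_mb_per_s

    for label, size_gb in (("10GB video file", 10), ("1MB Word file", 0.001)):
        t_pcie = load_time_seconds(size_gb, 950)  # Mac Pro PCIe SSD figure above
        t_sata = load_time_seconds(size_gb, 500)  # assumed typical SATA SSD figure
        print(f"{label}: ~{t_pcie:.2f}s on PCIe vs ~{t_sata:.2f}s on SATA")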


     

    An interesting question is whether these high transfer speeds (together with, presumably, short seek times) would make it possible to page to the SSD. From what I understand there is a major bit rot problem on SSDs (so every bit can be written a finite, and relatively small, number of times), but I don't really claim to understand it. Any words of wisdom?

  • Reply 162 of 284
    Quote:

    Originally Posted by hmm View Post

     

    It's like the hardware designer equivalent of wearing clean underwear.

     

    The 780s are also quite good. The biggest thing in favor of titan is probably the 6GB. If it wasn't for that I would say the 780s have a better price to performance ratio.

     

    It's kind of like I said once before. They're great  for really really parallel workloads. Interestingly there are some areas within graphics and visualization where that could really be exploited. I suspect it's an issue of existing code bases, older algorithms, and lack of a clear future in terms of what framework ends up being dominant. That being said, I just picked up a book on OpenCL and heterogeneous programming.


     

    The biggest thing in favor of the Titan for me is that the Double Precision is not crippled, as it seems to be on every other nVidia consumer card. Since I am not a big fan of single precision, it is either that or the Tesla for me...

  • Reply 163 of 284
    Quote:

    Originally Posted by mknopp View Post

     

     

    That is exactly what I was wondering as well. Almost all of the scientific computing work that I have done in the last two decades has been done on clusters. I still remember my first time using a Beowulf setup which cobbled together a bunch of older PC motherboards. Then again, I haven't used Mathematica/Maple for anything to this level. We pretty much make our own programs to reduce as much overhead as possible.

     

    Interesting.


     

    Certainly for problems with any sort of large-grain parallelism the cluster way is more cost effective (especially when the problem is compute- and not communication bound, which can often be managed with problem organization). Unfortunately, in algebra-type things, a single computation can blow up and use a lot of RAM. Also, Mathematica et al are not the most memory efficient programs out there, but they do have a lot of stuff implemented, and life is short, so it is cheaper to buy 256GB of ram than spend two years reimplementing something.

  • Reply 164 of 284
    Marvin Posts: 15,323, moderator
    hmm wrote: »
    That part is highly debatable. Installing a roc or a simple host card to a box with an embedded controller isn't really difficult, and you may have a more stable range of options.

    The operative word is 'easier'. Plugging in a cable is easier than anything else.
    hmm wrote: »
    Given the way they're outfitted, I don't see why they chose to claim Raid 5 support. It is weird to do that without the shorter firmware timings.

    You don't have to use Raid 5 and Raid isn't a backup solution.
    cpsro wrote:
    I don't need or want GPUs (at least at this time), hence the lack of GPUs in my custom configured linux system which has twice the performance (where I need it) compared to the least-expensive 12-core Mac Pro and for about the same price.

    It's also 65% of the price of an HP machine for the same performance. What's the point you are making, that you can build a computer far cheaper than Apple, Dell, HP, Lenovo? I'm sure this has been well known for years and it has nothing to do with Apple's design decisions. If they offered dual 12-cores, it would cost above $10k so just like with HP, you'd be able to build one cheaper. Apple wants 30-40% gross margins, component retailers can't get away with those margins, nor do they care about the quality of the parts they sell.
    v5v wrote:
    I don't think the current design has nearly as much to do with market research as with what best satisfies the demands of Apple's software product, specifically FCPX. Just look at the test results so far: with Adobe Premiere, meh. With FCPX, wow! Coincidence?

    The only coincidence is that Apple happens to make both FCPX and the Mac Pro so they had time to optimize for it. As mentioned earlier, Adobe uses a whitelist for supported GPUs. The AMD R9 290 came out late October/early November but Adobe only added it just over a week ago:

    http://blogs.adobe.com/premierepro/2013/12/premiere-pro-cc-update-7-2-1.html

    "The AMD Radeon R9 290 Series has been added to the OpenCL supported card list"

    When a card isn't added to the list, it drops back to using the CPU. The Adobe apps will benefit from the GPUs just like FCPX when support is added for the Mac Pro GPUs.
    v5v wrote:
    I don't think anyone besides CS is saying that Apple should have gone with 2xCPU/1xGPU *instead* of 1xCPU/2xGPU, but could very easily have offered such configurations *as well.*

    They'd have had to stock two motherboards and the dual socket one would only be useful for anything above 12-core. This means stocking a whole new motherboard design for hardly any buyers.

    The GPU expense keeps coming up but the highest option is $1000, or $500 for the extra GPU (the D300 far less than that). An extra 12-core CPU would be $3500 along with the added expense of a dual-socket motherboard. It's not like they are anywhere near the same price, or that it would just be a matter of switching one for the other.
    ascii wrote:
    SATA 3 interfaces max out at about 550 MB/sec. I have the latest Samsung EVO SSD in my PC and *wish* it was using PCI connectivity.

    Plus, they don't always achieve that theoretical maximum:

    http://www.anandtech.com/bench/SSD/262

    Most SATA drives there are below 400MB/s. They will similarly move to PCIe:

    http://www.tomshardware.com/reviews/samsung-global-ssd-summit-2013,3570-2.html

    "In an effort to standardize solid-state drive access across PCI Express, 80 companies (led in part by Samsung), created the Non-Volatile Memory Host Controller Interface Specification."

    Led by Samsung after Apple was already doing it. PC manufacturers need to use an interoperable standard though; Apple doesn't, because making the whole product gives them that leverage.
    v5v wrote:
    Do speeds like that really happen in real life, or do other factors limit the rate at which the system can read and write? Would this configuration actually BE faster than a SATA3 interface or only have a "theoretical" advantage?

    For sequential reads/writes you'd get those speeds, e.g. copying or exporting 100GB of video data. Random reads/writes are a lot lower, so duplicating a folder of 100,000 photos wouldn't go that fast.
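    (To make the sequential vs. random distinction concrete, here is a minimal, hypothetical timing sketch; the exact numbers depend on the drive, but the many-small-files case always comes out well below the sequential figure because per-file overhead and random I/O dominate:)

    # Minimal sketch: one large sequential write vs. many small writes.
    # Sizes and counts are illustrative only.
    import os, time, tempfile

    def write_files(directory, count, size_bytes):
        chunk = b"\0" * size_bytes
        start = time.time()
        for i in range(count):
            with open(os.path.join(directory, f"f{i}.bin"), "wb") as f:
                f.write(chunk)
                f.flush()
                os.fsync(f.fileno())  # force data to disk so we time the drive, not the cache
        elapsed = time.time() - start
        return (count * size_bytes / 1e6) / elapsed  # MB/s

    with tempfile.TemporaryDirectory() as d:
        seq = write_files(d, 1, 1_000_000_000)   # one 1GB file
        rnd = write_files(d, 10_000, 100_000)    # 10,000 files of 100KB
        print(f"sequential: ~{seq:.0f} MB/s, small files: ~{rnd:.0f} MB/s")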
    marubeni wrote:
    From what I understand there is a major bit rot problem on SSDs (so every bit can be written a finite, and relatively small, number of times)

    Yes but the manufacturers use wear-levelling to keep writes more even across the drive. MTBF values are in line with HDDs.
  • Reply 165 of 284
    muppetry Posts: 3,331, member
    Quote:

    Originally Posted by marubeni View Post

     
    Quote:
    Originally Posted by mknopp View Post

     

     

    That is exactly what I was wondering as well. Almost all of the scientific computing work that I have done in the last two decades has been done on clusters. I still remember my first time using a Beowulf setup which cobbled together a bunch of older PC motherboards. Then again, I haven't used Mathematica/Maple for anything to this level. We pretty much make our own programs to reduce as much overhead as possible.

     

    Interesting.


     

    Certainly for problems with any sort of large-grain parallelism the cluster way is more cost effective (especially when the problem is compute- and not communication bound, which can often be managed with problem organization). Unfortunately, in algebra-type things, a single computation can blow up and use a lot of RAM. Also, Mathematica et al are not the most memory efficient programs out there, but they do have a lot of stuff implemented, and life is short, so it is cheaper to buy 256GB of ram than spend two years reimplementing something.


     

    Pretty remarkable that analytic problems can expand that far, but that does explain the memory requirements.  It's interesting that even in the large numerical simulation domain of the massively parallel systems, modern processor architecture is starting to be an issue since they do not have fast enough access to sufficient memory - not surprising as the processors were not designed for this kind of problem.

  • Reply 166 of 284
    Quote:

    Originally Posted by Marvin View Post





    <...>

    Yes but the manufacturers use wear-levelling to keep writes more even across the drive. MTBF values are in line with HDDs.

     

    Right, but my question was whether this peculiarity of the technology made it unusable for swapping (where you would be rewriting bits A LOT, as opposed to keeping data on the drive, where a bit is left alone for many minutes at the shortest).

  • Reply 167 of 284
    hmm Posts: 3,405, member
    Quote:

    Originally Posted by marubeni View Post

     

     

    The biggest thing in favor of the Titan for me is that the Double Precision is not crippled, as it seems to be on every other nVidia consumer card. Since I am not a big fan of single precision, it is either that or the Tesla for me...


    I was unaware of that. If I wanted to use one for computation, I would have gone with a Titan anyway due to vram. It's not an issue of maxing it every time. There is no real system of virtual memory or swap, so everything must fit within the framebuffer. It's not worth taking the risk of such a strict hardware limitation if there is the possibility of reaching it.

     

    Quote:

    Originally Posted by Marvin View Post





    The operative word is 'easier'. Plugging in a cable is easier than anything else.

     

    That is assuming no other assembly and stable drivers. I would argue that stability makes things easier, and yeah some terrible cards existed for the old mac towers.

    Quote:


    You don't have to use Raid 5 and Raid isn't a backup solution.


     

    You use my own lines against me sir? I am usually the one that has to inform others of that. It's still a bad idea to go with something that is inherently unstable due to potential write hole issues.

  • Reply 168 of 284
    Quote:

    Originally Posted by marubeni View Post

     

    They would look until they figured out the memory limitation (the natural assumption would be that since OS X is a unix variant, it would have the same memory limitations as Linux, that is, none)


     

    Memory limitations are determined by the motherboard manufacturer and the CPU manufacturer.

     

    If Apple came out with a quad-Xeon motherboard, with each CPU supporting 64GB/128GB max, then they could conceivably manage a system with 256-512GB of DDR3 quad-channel ECC RAM.

     

    Take this Intel Xeon E5-2697 v2 – 12 Core Ivy Bridge-EP

     


    • Processor: 1x Intel Xeon E5-2697

    • Motherboard: Dual LGA2011 (C602 based) with BIOS update

    • Memory: 4x 8GB Registered ECC DDR3 1600MHz DIMMs

     

    If it had 8 slots you're looking at 64GB, or if someone made 16GB DIMMs you'd be maxed out at 128GB with 8 slots and 64GB with 4.
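    (The arithmetic being described is just slots multiplied by the largest supported DIMM, per socket; a trivial sketch:)

    # Memory ceiling = sockets x slots per socket x largest supported DIMM.
    def max_ram_gb(sockets, slots_per_socket, dimm_gb):
        return sockets * slots_per_socket * dimm_gb

    print(max_ram_gb(1, 4, 8))    # 32GB  -- the 4 x 8GB config above
    print(max_ram_gb(1, 8, 8))    # 64GB  -- 8 slots
    print(max_ram_gb(1, 8, 16))   # 128GB -- 8 slots with 16GB DIMMs
    print(max_ram_gb(4, 8, 16))   # 512GB -- the hypothetical quad-socket box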

     

    Seem familiar? It's not the damn OS limiting the Memory configurations, but the physical hardware.

     

    If Apple made a 3U/6U rackmount configuration with Quad Xeon 8 x 16GB DIMM configurations OS X would run just fine.

     

    Apple has zero interest in competing with Intel, HP and AMD for this space, not to mention wasting the resources in building a fully n-tier Enterprise Computing Division.

  • Reply 169 of 284
    v5v Posts: 1,357, member
    Quote:

    Originally Posted by Marvin View Post



    Raid isn't a backup solution.

     

    It isn't? Why not? It's redundant and fault tolerant. A thief or fire can cause 100% data loss, but otherwise I'm safe, right?

  • Reply 170 of 284
    Quote:
    Originally Posted by Marvin View Post





    <...>

     

     

    Quote:

     "The AMD Radeon R9 290 Series has been added to the OpenCL supported card list"


     

    An awesome OpenCL 1.2/2.0 beast and OpenGL 4.x juggernaut for $499, giving you a buttload of power, 4GB of GDDR5 and a 512-bit interface; paired with an AMD FX-8350 it's a sweet ride.

     

    Seeing as AMD provided the OpenCL acceleration for Adobe, you knew their new R9/R7 series would be added.

  • Reply 171 of 284
    hmm Posts: 3,405, member
    Quote:

    Originally Posted by v5v View Post

     

     

    It isn't? Why not? It's redundant and fault tolerant. A thief or fire can cause 100% data loss, but otherwise I'm safe, right?


    That second point is highly debatable. To make sure I am not misreading your words, I am positive he meant that the redundancy provided by anything above Raid 0 does not constitute a backup. You could back up one Raid with another Raid. It's just a Raid isn't both storage and backup. There are a few reasons. One is the potential for corruption. In a mirrored scenario anything written to one is written to another. Raid 5 can write bad data if a disk fails to respond, which I mentioned above. Raids can experience controller problems potentially rendering the data inaccessible. When a drive does fail, if there have been any past problems including any kind of bit flipping or minor corruption, you'll crash on rebuild. That is why the better Raid controllers use ECC memory. It's much more destructive than a single flipped bit would be in other scenarios. Anyway there are other problems, and I could go on. Perhaps you have counter-reasoning? Mine is simply that too much can go wrong with a given Raid, and the cost of disaster recovery on a Raid system tends to be cost prohibitive even compared to normal data extraction.

  • Reply 172 of 284
    Marvin Posts: 15,323, moderator
    marubeni wrote: »
    Right, but my question was whether this peculiarity of the technology made it unusable for swapping (where you would be rewriting bits A LOT, as opposed to keeping data on the drive, where a bit is left alone for many minutes at the shortest).

    Write endurance is listed here:

    http://www.tomshardware.com/reviews/ssd-reliability-failure-rate,2923.html

    It says writing 10GB/day would last over 18 years with modern SSDs. SSDs do need to use TRIM or idle periods to stay healthy but they should be fine for swapping as long as it's not TBs per day. Disk swap isn't used at all if you have enough RAM though so it would just be for scratch disks but even then RAM is used first and it would be GBs of data.
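    (A back-of-the-envelope version of that endurance estimate; the ~72TB total-bytes-written rating is an assumption about a typical consumer drive of that era, not a figure from the linked article:)

    # Years of life = rated total writes / daily writes.
    def years_of_life(tbw_terabytes, gb_written_per_day):
        return tbw_terabytes * 1000 / gb_written_per_day / 365

    print(f"{years_of_life(72, 10):.1f} years at 10GB/day")    # ~19.7 years
    print(f"{years_of_life(72, 100):.1f} years at 100GB/day")  # ~2.0 years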
    hmm wrote:
    It's still a bad idea to go with something that is inherently unstable due to potential write hole issues.

    Every RAID 5 setup has that issue. If you have a UPS so the power can't shut off during writes, it's almost never going to be a concern. Pegasus drives support RAID 6 too. You have said many times that you'd prefer internal RAID where you choose your own drives but it's mainly because you don't like Thunderbolt. Bootable internal RAID means your whole machine is offline if you have to rebuild it:

    http://forums.macrumors.com/showthread.php?t=1073435

    Having a single SSD internally means that no matter what happens to your mass storage, your workstation is still online. It's better that mass storage is outside and any faults are dealt with by storage companies.
    v5v wrote:
    It isn't? Why not? It's redundant and fault tolerant. A thief or fire can cause 100% data loss, but otherwise I'm safe, right?

    As mentioned above, data corruption is copied in a RAID. This happened with GMail. They run their servers on RAID drives but a fault happened that meant they lost a whole bunch of emails and even their backup RAID didn't have them so they had to pull the data back off their regular LTO tape backups:

    http://www.theregister.co.uk/2013/01/28/google_oracle/

    Consumers can use tape drives too but they are expensive and only have SAS/SCSI connectors so you'd need a Thunderbolt to SAS adaptor, which for some reason costs nearly $900 and is actually downgrading 10/20Gbps Thunderbolt to 6Gbps SAS. You'd think HP or some other company would make tape drives for consumer use with a USB 3 or Thunderbolt connector. There's a Japanese company making one but I don't know if it's available to buy yet:

    http://www.storagenewsletter.com/rubriques/tapes/unitex-lt50-usb/
    http://www.unitex.co.jp/products/cmtmt/lt50lt40usb.html

    The SAS drives are around $3k new, but maybe they could have a $999 model and it could just be LTO4/LTO5 with 800GB/1.5TB capacity tapes. As long as it writes over 200MB/s, that would do fine. Once you are past the initial cost of the drive, tapes cost under $30 per TB, allowing you to keep archiving forever, with the RAID as your working drive. Back up the changes on the RAID to tape either weekly or daily and that setup should work fine for decades. Tapes last about 30 years.
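    (A hedged cost comparison using the figures above: a hypothetical $999 LTO drive plus ~$30/TB tapes, versus plain external hard drives at an assumed ~$45/TB, roughly what 4TB desktop drives cost at the time; tape only wins on cost once the archive gets fairly large:)

    # Cumulative archive cost; the HDD $/TB figure is an assumption.
    def tape_cost(tb):
        return 999 + 30 * tb

    def hdd_cost(tb):
        return 45 * tb

    for tb in (10, 50, 100):
        print(f"{tb}TB archived: tape ${tape_cost(tb):,.0f} vs HDD ${hdd_cost(tb):,.0f}")
    # Break-even: 999 + 30t = 45t  ->  roughly 67TB of archived data.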
  • Reply 173 of 284
    hmm Posts: 3,405, member

     

    Quote:

    Originally Posted by Marvin View Post









    Every RAID 5 setup has that issue. If you have a UPS so the power can't shut off during writes, it's almost never going to be a concern. Pegasus drives support RAID 6 too. You have said many times that you'd prefer internal RAID where you choose your own drives but it's mainly because you don't like Thunderbolt. Bootable internal RAID means your whole machine is offline if you have to rebuild it:

     


     

    That is true. I pointed it out specifically related to the cheaper ones due to smaller caches, sometimes lacking ECC ram, and cheaper overall hardware. The disks used were the other issue that I mentioned. This could be avoided with the diskless configurations available on the new thunderbolt boxes, assuming they are available in the desired size. Regarding the disks, I was referring to error recovery timings. The name I used before was firmware timings, which is basically what sets them. Shorter ones are used on the enterprise grade drives in spite of a lot of fundamental hardware similarity. It does at least prevent issues with timeouts in part of an array. I wouldn't ever use a Raid without UPS backup. That's just crazy talk:wow:.

  • Reply 174 of 284
    solipsismx Posts: 19,566, member
    hmm wrote: »

    That is true. I pointed it out specifically related to the cheaper ones due to smaller caches, sometimes lacking ECC ram, and cheaper overall hardware. The disks used were the other issue that I mentioned. This could be avoided with the diskless configurations available on the new thunderbolt boxes, assuming they are available in the desired size. Regarding the disks, I was referring to error recovery timings. The name I used before was firmware timings, which is basically what sets them. Shorter ones are used on the enterprise grade drives in spite of a lot of fundamental hardware similarity. It does at least prevent issues with timeouts in part of an array. I wouldn't ever use a Raid without UPS backup. That's just crazy talk:wow: .

    You were the first person to inform me about this. How much time are we talking about to let a RAID finish its write? Milliseconds? If this is such an issue why wouldn't they simply include a small battery in consumer RAIDs specifically to allow the system to shut down correctly in case AC power is lost? Honestly, I don't want a large UPS just for my RAID and I certainly don't want to start doing tape backups either. I bought it with the intention that my RAID10 would offer me many years of redundant storage.
  • Reply 175 of 284
    Quote:

    Originally Posted by Marvin View Post





    Write endurance is listed here:



    http://www.tomshardware.com/reviews/ssd-reliability-failure-rate,2923.html



    It says writing 10GB/day would last over 18 years with modern SSDs. SSDs do need to use TRIM or idle periods to stay healthy but they should be fine for swapping as long as it's not TBs per day. Disk swap isn't used at all if you have enough RAM though so it would just be for scratch disks but even then RAM is used first and it would be GBs of data.

    <...>

    Well, about the swapping, that is useful info, but you somewhat misunderstood my question. Back in the old days (of the VAX) we pretended that we had an infinite amount of RAM, while in reality having a very finite amount. This allowed us to write clean programs, while also allowing us to die a horrible death when we started thrashing, since disks in those days were MUCH MUCH MUCH slower than RAM (for example, seek time was notionally around 15 milliseconds, but really much worse). Now, disks (SSDs), while still slower than RAM, are not quite as much slower, and so the infinite memory model might want to make a comeback: while you have 64GB of physical RAM (as in the Mac Pro), we might pretend that we have 512GB (because we have a one-terabyte SSD and we allocated a large enough swap partition). If we use this heavily, your 10GB/day figure pales into insignificance, and might indicate that the useful lifetime of this SSD will be measured in months. So, pretty bad.
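    (A rough sketch of the arithmetic behind that concern, assuming a consumer-class drive rated for roughly 70TB of total writes; the rating is an assumption rather than a figure from the thread:)

    # Drive lifetime in days = rated total writes / daily writes.
    def lifetime_days(tbw_terabytes, tb_written_per_day):
        return tbw_terabytes / tb_written_per_day

    print(lifetime_days(70, 0.01))  # 10GB/day of swap       -> 7,000 days, ~19 years
    print(lifetime_days(70, 1.0))   # 1TB/day of heavy paging -> 70 days, ~2 months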

  • Reply 176 of 284
    hmmhmm Posts: 3,405member
    Quote:
    Originally Posted by SolipsismX View Post





    You were the first person to inform me about this. How much time are we talking about to let a RAID finish its write? Milliseconds? If this is such an issue why wouldn't they simply include a small battery in consumer RAIDs specifically to allow the system to shut down correctly in case AC power is lost? Honestly, I don't want a large UPS just for my RAID and I certainly don't want to start doing tape backups either. I bought it with the intention that my RAID10 would offer me many years of redundant storage.

    I wasn't trying to scare you. It's important to note that the write hole issue doesn't apply to RAID10, so don't worry about that specific one. I merely cautioned against using something that splits off parity such as 3,5, or 6 without robust hardware due to potential headaches. I'm probably not qualified to say this, but I wouldn't worry too much as long as you have a backup. In theory most Raid controllers do have a battery to help protect their cache in the event of interruption. How well it works in practice may be a different issue, and I would remember that writing to each set of drives is not entirely synchronous. Like I said, I would still keep backups. I don't know how mirrored RAIDs check for corruption issues. If your box offers some kind of disk scrubbing, that would probably catch minor issues and ideally let you know if any drive sees a notable increase in bad sectors. That way the drive could be replaced preemptively. May I ask what software you're using? Also as a disclaimer this isn't my area of expertise, and there are people who know way more than me. I do try to vet any information if I'm unsure though, and I can tell you I wouldn't use redundancy in Raid as my sole method of backup. Any backup to the Raid could be much slower, so it doesn't have to be terribly expensive compared to primary storage.

     

    Also, I don't know of any single users who go into tape backup. It's typically in a shared storage environment. I did have an old PowerEdge server with a tape drive and 15K SCSI drives a while back. It was retired from its primary use, and after replacing a battery and updating firmware, I used it to set up ESXi for learning purposes. At the time it was still called ESX. *shoos kids off his lawn*

     

    Quote:
    Originally Posted by marubeni View Post

     

    If we use this heavily, your 10GB/day figure pales into insignificance, and might indicate that the useful lifetime of this SSD will be measured in months.


     

    With certain applications, Lion ended up with a ton of pageouts for me over the course of a day. They alone totaled more than 10GB, which seems weird with 16GB of RAM. For some reason it's bad about releasing memory.

  • Reply 177 of 284
    hmm wrote: »
    May I ask what software you're using? Also as a disclaimer this isn't my area of expertise, and there are people who know way more than me. I do try to vet any information if I'm unsure though, and I can tell you I wouldn't use redundancy in Raid as my sole method of backup. Any backup to the Raid could be much slower, so it doesn't have to be terribly expensive compared to primary storage.

    1) No SW per se. It's a HW RAID.

    2) It is my backup. All my Macs use this HW RAID for Time Machine backups, through a decade-old iMac connected to the network via 100Mbps Ethernet and connected to the HW RAID via FW400. The Macs themselves still hold the current data, so if the RAID as a backup ever failed I could just start the backups over. However, the RAID is also used as my sole iTunes Server. There is no other copy of this outside of the data mirrored across the four discs in the HW RAID.
  • Reply 178 of 284
    Quote:
    Originally Posted by Cpsro View Post

     

    Fanboys can be strange and myopic. I am an Apple zealot extraordinaire, but I don't let that get in the way of expressing my needs and disappointment in the direction the Mac Pro has been taken. Sorry, folks!

     

    Repeating... I don't need or want GPUs (at least at this time), hence the lack of GPUs in my custom configured linux system which has twice the performance (where I need it) compared to the least-expensive 12-core Mac Pro and for about the same price. I could have received the parts last week or earlier--not wait a couple months for when they should be even cheaper (and the profit margins greater ;-)

     

    I wouldn't buy a Dyson either.


     

    Cpsro,

     

    Denial is not just a river in Egypt! 

     

    Apple’s new ‘overpriced’ $10,000 Mac Pro is $2,000 cheaper than the equivalent Windows PC

    http://www.extremetech.com/computing/173695-apples-new-overpriced-10000-mac-pro-is-2000-cheaper-than-the-equivalent-windows-pc

  • Reply 179 of 284
    hmm Posts: 3,405, member
    Quote:
    Originally Posted by SolipsismX View Post





    1) No SW per se. It's a HW RAID.



    2) It is my backup. All my Macs use this HW RAID for Time Machine backups, through a decade-old iMac connected to the network via 100Mbps Ethernet and connected to the HW RAID via FW400. The Macs themselves still hold the current data, so if the RAID as a backup ever failed I could just start the backups over. However, the RAID is also used as my sole iTunes Server. There is no other copy of this outside of the data mirrored across the four discs in the HW RAID.



    Ohhh, so it is the backup. Well, it's not the only source for that data, so that's not so bad. By software I didn't mean a software RAID, where the data for each drive is calculated by a piece of software running on top of the OS rather than a dedicated hardware controller. I meant software used to manage or monitor the health of the RAID. In the case of the iTunes media, if it's all purchased media, you should be able to recover it via your iTunes account in the event of catastrophic failure.

  • Reply 180 of 284
    Marvin Posts: 15,323, moderator
    marubeni wrote: »
    Now, disks (SSDs), while still slower than RAM, are not quite as much slower, and so the infinite memory model might want to make a comeback: while you have 64GB of physical RAM (as in the Mac Pro), we might pretend that we have 512GB (because we have a one-terabyte SSD and we allocated a large enough swap partition). If we use this heavily, your 10GB/day figure pales into insignificance, and might indicate that the useful lifetime of this SSD will be measured in months. So, pretty bad.

    It's not a practical concern; if you have any program needing to use 512GB of RAM, or anywhere close to that, it needs to be rewritten. Also, 10GB/day still lasts 18 years. Hard drives typically only last 5 years. That's also if you use the cheaper SSDs. The non-bargain-basement SSDs and larger capacity SSDs last longer:

    http://www.anandtech.com/show/6459/samsung-ssd-840-testing-the-endurance-of-tlc-nand

    A 256GB MLC drive at 10GB/day is rated for 70 years. There are tests here that do TBs of writes:

    http://www.xtremesystems.org/forums/showthread.php?271063-SSD-Write-Endurance-25nm-Vs-34nm

    The Samsung 830 256GB MLC lasted over 6 petabytes of writes. That would be 1.6TB per day for 10 years.

    The drives are limited by their write speeds anyway. At 400MB/s, a drive can only write about 34TB per day maximum. It took 259 days to wear out the Samsung 830 writing constantly. People who write that much data should expect to wear out their drives quickly and HDDs can't come close to writing like that anyway as they aren't fast enough.

    Throttle the writes down to HDD level and they'll last 10x longer.
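    (A quick sanity check of those figures:)

    # Sanity-checking the numbers above.
    seconds_per_day = 86_400
    max_daily_tb = 400 * seconds_per_day / 1e6   # 400MB/s sustained -> ~34.6TB/day

    total_writes_tb = 6_000                      # ~6 petabytes written to the Samsung 830
    print(max_daily_tb)                          # ~34.6
    print(total_writes_tb / 1.6 / 365)           # ~10.3 years at 1.6TB/day
    print(total_writes_tb / 259)                 # ~23TB/day averaged over the 259-day test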
    solipsismx wrote:
    I certainly don't want to start doing tape backups either

    It's quite consumer-friendly:


    [VIDEO]


    It wouldn't have to be all that regular and the writes are pretty fast. Of course, when SSDs become really cheap that'll suffice for most people, as they can be used for archiving, having no moving parts. When a 1TB SSD costs $100 there's not much point using tapes at $30 per TB. SSDs are under $0.50/GB now, so if the price trend keeps up this will happen before 2020.
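    (A sketch of that price extrapolation, assuming SSD $/GB roughly halves every two years; the halving rate is an assumption, not a figure from the post:)

    # Years until $/GB falls from $0.50 to the $0.10 needed for a $100 1TB SSD,
    # assuming the price halves every two years.
    import math

    years_needed = 2.0 * math.log2(0.50 / 0.10)
    print(f"~{years_needed:.1f} years from the end of 2013, i.e. around 2018")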
    solipsismx wrote:
    the RAID is also used as my sole iTunes Server. There is no other copy of this outside of the data mirrored across the four discs in the HW RAID.

    Even if you got a cheap 4TB desktop drive or something to keep a second copy, that would do. It's never a good idea to keep a single copy of data. If you ever get a program that accidentally formats drives, it can wipe it all out. For the sake of $50-100, it's not worth the hassle trying to recover files.