Mac Pro - Six years waiting for 76% faster, and a new headache?

Posted in Future Apple Hardware, edited January 2014

I'm definitely feeling the death of Moore's Law.

 

Yes, I know technically that Moore's Law is about component density, not necessarily about speed, but one used to be able to count on a rough correlation.

 

My current Mac Pro, an early 2008 model, is a month shy of being six years old now. I've never before kept any computer as a primary, active system for so long. It's no longer a powerhouse by today's standards, but surprisingly the performance of this 8-core (dual quad) 2.8 GHz system still compares reasonably well with current model iMacs and MacBooks.

 

After nearly six years, I'm ready to upgrade. I'm a little surprised, however, that while I can definitely improve on my 2008 Mac Pro, a new Mac Pro at a comparable price to what I paid before won't produce a giant leap in performance, only what seems to me like a pretty modest gain considering the passage of time. I'm used to thinking of six years as an eon or two on the technology time scale.

 

My 2008 Mac Pro cost $3477 with 2 GB RAM. Considering the extra I paid for third-party RAM to get up to 16 GB, and inflation over the past few years, the new 6-core at $3999 seems like the fairest comparison.

 

Based on these Geekbench estimates from here: http://www.primatelabs.com/blog/2013/11/estimating-mac-pro-performance/

...and performance measurements of my model of Mac Pro from here: http://browser.primatelabs.com/mac-benchmarks

 

...I can expect about 2.13 times faster single-core performance, and 1.76 times faster multi-core performance. The smaller number is what's going to matter most to me, since the thing I'd most like to speed up is video encoding.

 

Maybe I shouldn't complain about a new computer being "only" 76% faster. Being spoiled by previous leaps and bounds of technology, however, I'd have hoped to get twice the performance at half the price after waiting nearly six years to upgrade, and a lot more performance at the same price.



Plus, because the new Mac Pro lacks internal optical drives (which are important to me) and internal drive bays, I'll be faced with a messy, kludged, and possibly expensive solution to get all my various drives hooked up again. (I currently have all four SATA drive bays in my 2008 Mac Pro filled, with those drives used as separate volumes rather than as members of a RAID set, so that different processes can access different drives with minimal disk-seek contention.)



While I'm still leaning toward buying a new Mac Pro (if "Coming in December" ever comes), my enthusiasm is a bit dulled by the scale of the performance increase that I can expect, and knowing that I'll be buying myself a new problem when it comes to hooking up extra hard drives and optical drives.


Comments

  • Reply 1 of 33
hmm Posts: 3,405 member

This isn't so much Moore's law. The 2008 was really an enigma. If you go and break down the overall cost of the components at retail pricing at that time compared to the cost of a 2008 Mac Pro, you will be surprised. I'm not sure you could have built one for $2800 considering the cost of the CPUs. A pair cost $1600 at launch, and dual-socket motherboards tend to be more costly. The current one is a CPU that retails for $300, so they are budgeted differently and quite possibly to a much different markup. They have been basically stuck with their current system of markup since 2009. During that time contracts with Intel may have also changed. There's no way of knowing, but keep in mind you are comparing fewer cores on the new one. My question for you would be how badly do you need a new one? I buy almost nothing at launch. They're bound to have some problems, and I dislike beta testing. There's just no way to guarantee that lab tests will perfectly match the launch-time results in the wild, so I tend to anticipate early issues.

  • Reply 2 of 33
shetline Posts: 4,695 member

    I'm not sure what the price $2800 has to do with this, but are you trying to say that my 2008 Mac Pro was a surprisingly low profit margin system, and the new Mac Pro will have a much higher profit margin, so that's why a comparable price tag isn't yielding a greater increase in performance after six years?



    Maybe that's part of what's going on here, but there's still a slowdown in processing power advancement on top of that.

     

By the way, I'm seeing prices more in the $600-$800 range for a 6-core Intel Xeon E5-1650 v2 CPU, rather than $300.



    I'm aware that I've made a comparison to a new system with fewer cores (6 vs. 8), because what I've been going for is a comparison based on price. I can't tell what the 8-core option would cost yet for the new Mac Pro, because as long as Apple's web site still says "Coming in December" the pricing for optional upgrades is unavailable. I'm going to guess that will be stiff enough to make a new 8-core system much more expensive than my old 8-core, even after adjusting for inflation.

  • Reply 3 of 33
nht Posts: 4,522 member
    Quote:
    Originally Posted by shetline View Post

     

    I'm not sure what the price $2800 has to do with this, but are you trying to say that my 2008 Mac Pro was a surprisingly low profit margin system, and the new Mac Pro will have a much higher profit margin, so that's why a comparable price tag isn't yielding a greater increase in performance after six years?



    Maybe that's part of what's going on here, but there's still a slowdown in processing power advancement on top of that.


     

Intel has been working on performance per watt for the last few years.  The early 2008 Mac Pro has a score of 12501 with an X5482 (150W), while the E5-1680 v2 (130W) has an estimated score of 24429.  Without actual peak power draw numbers while running the benchmarks, the very coarse estimate is score / max TDP.

     

    2008: 83.34

    2013: 187.91
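If you want to reproduce the arithmetic, it's nothing fancier than this (a throwaway sketch; the scores and TDPs are just the figures quoted above, and TDP is only a stand-in for measured power draw):

```python
# Rough Geekbench-score-per-TDP-watt comparison using the figures quoted above.
# TDP is not measured power draw, so treat this as a very coarse efficiency proxy.
systems = {
    "2008 Mac Pro (Xeon X5482, 150 W)": (12501, 150),
    "2013 Mac Pro (Xeon E5-1680 v2, 130 W)": (24429, 130),
}

for name, (score, tdp_watts) in systems.items():
    print(f"{name}: {score / tdp_watts:.2f} points per TDP watt")
```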

     

Also, Xeon is lagging in terms of architecture.  These aren't Haswells but Ivy Bridges.  So instead of going from Core-based to Nehalem-based to Sandy Bridge to Haswell, we're one generation behind.

     

If the current Mac Pro performance increase strikes you as somewhat anemic for six years of improvement, that would be why.  Xeons were left on the back burner* in favor of improving mobile CPUs, and most of Intel's effort went into power consumption rather than raw performance increases.

     

    As for the internal drive bays and burners...yes but it's not a kludge or very messy solution.  It's two boxes attached via USB for the burner and TB for the drive bay.  If you needed PCIe cards...that would be a kludge.

     

    * this is probably not entirely fair as server chips tend to lag anyway since stability trumps performance.

  • Reply 4 of 33

    I get where you're coming from!

     

    And the "Pro" model should be about raw power, not about power per watt (that's more of an iMac metric... certainly a MacBook metric.)

     

But you'll also see additional improvement from external drives... using solid state drives via Thunderbolt should give noticeably faster data access.

  • Reply 5 of 33
shetline Posts: 4,695 member
    Quote:
    Originally Posted by nht View Post

     

    As for the internal drive bays and burners...yes but it's not a kludge or very messy solution.  It's two boxes attached via USB for the burner and TB for the drive bay.  If you needed PCIe cards...that would be a kludge.


     

Part of my complaint wasn't just messiness, but expense. Have you priced a four-bay Thunderbolt drive enclosure? I can't even tell if any of these would work as I want them to work, as "JBOD" (Just a Bunch of Disks), rather than RAID solutions. These things are sold as high-performance, data-protecting RAID solutions, and priced accordingly. Even if any of these enclosures would do the job, we're talking about $500 or more just to find a new home for drives that my old Mac Pro, as is, happily houses internally.

     

    Since I want two optical drives, we're talking about four total boxes where one stood alone before: the computer itself, two optical drives, and maybe, maybe just one expensive enclosure for the drives. And then there are the cables. And the power cords. And the almost certainly external, separate power supplies.



    What I'd really like (that no one makes (yet?)) is a single six bay tower that holds both optical and hard drives, connected by one Thunderbolt cable, with one power cord for an internal power supply.



    I've thought of making my own enclosure like this, but doing it with Thunderbolt would be very expensive. The best solution I've seen for using Thunderbolt to connect to standard internal SATA drives is this: http://www.bhphotovideo.com/bnh/controller/home?O=&sku=854934&Q=&is=REG&A=details



...and that's more meant for eSATA, so it would take a bit of kludging with adapters to connect to internal drives. For $179 you can only hook up two drives, so I'd need $537 worth of these things daisy-chained together in order to handle 6 drives. With adapters and cables, let's say that's $600 before even putting all of that in an enclosure with a power supply, and hacking that power supply to power the hubs as well as the drives. And I'd then buy two new optical drives, since I wouldn't want to scavenge the optical drives out of my old Mac Pro.



    And that won't even be Thunderbolt 2.0, just 1.0.



    If I'd be willing to settle for a USB 3.0 hookup, I could get a 6-port USB 3.0 hub, some USB 3.0->SATA adapters (they go for around $20 each), and cram all of that into a small PC tower case with a power supply. For either Thunderbolt or USB solutions, I'd have to figure out how to hack a typical PC power supply so that it powers up without being connected to a motherboard.



For all my bitching, however, I'm not expecting that Apple is ever going to go back to making something in the style of the old Mac Pro, so sooner or later I'll have to solve this problem of hooking up all of those drives if I don't intend to stay stuck in 2008. If I felt more certain that plenty of other people would like the kind of mixed-drive external enclosure that I'd like to have, I'd just wait for a nice pre-built solution to emerge -- but I don't feel that certain.



I've considered taking the "hackintosh" route, but the most common and stable hackintosh solutions are more iMac-level systems. Currently, if you try to build a Xeon-based hackintosh, from what I've seen it won't even "sleep", it can't go into low power mode, etc., due to a lack of compatible power management. That's on top of the hassles and worries about every OS update possibly breaking your system. It's not even that much cheaper to build a Xeon-based hackintosh than it is to buy a legit Apple system. The only advantage would be having a more flexible enclosure.

  • Reply 6 of 33
nht Posts: 4,522 member
Thunderbolt is overkill for your burners. A 2-bay USB 3 enclosure will work.

It might be that for you a 6-bay USB 3 enclosure will work.
  • Reply 7 of 33
hmm Posts: 3,405 member
    Quote:

    Originally Posted by shetline View Post

     

    I'm not sure what the price $2800 has to do with this, but are you trying to say that my 2008 Mac Pro was a surprisingly low profit margin system, and the new Mac Pro will have a much higher profit margin, so that's why a comparable price tag isn't yielding a greater increase in performance after six years?



    Maybe that's part of what's going on here, but there's still a slowdown in processing power advancement on top of that.

     

By the way, I'm seeing prices more in the $600-$800 range for a 6-core Intel Xeon E5-1650 v2 CPU, rather than $300.



    I'm aware that I've made a comparison to a new system with fewer cores (6 vs. 8), because what I've been going for is a comparison based on price. I can't tell what the 8-core option would cost yet for the new Mac Pro, because as long as Apple's web site still says "Coming in December" the pricing for optional upgrades is unavailable. I'm going to guess that will be stiff enough to make a new 8-core system much more expensive than my old 8-core, even after adjusting for inflation.




I was unclear on a couple of things. I used the $3000 model for comparison, because the 2.8GHz model started at $2800 before upgrades. I'm not sure whether it was a low-margin system. It was probably lower in margin. They may have gotten a really good deal that round based on some minimum purchase, as they still used something comparable to Intel's reference boards at the time. I suspect margins were also lower compared to the 2009 models and on, which appear to have higher-than-average markups on the base hardware. The CTO options are typically 25-30% above retail. Sometimes they balance things out a bit by bundling upgrades, but you can tell when they want a certain minimum sale on specific hardware.

As I mentioned, the 2008 was kind of an enigma. It started at $2800 with a pair of CPUs that retailed for $1600 combined. Those kinds of CPUs were pushed more into the $4000+ range after 2008. The nMP probably did incur an increase in the cost of GPUs. I doubt the D300 types are terribly expensive. Apple charged $249 aftermarket for a 5770 upgrade kit. The new ones are probably comparable chips relative to their generation. The base ones may contribute $250-300 each. They may have a higher markup so as to stagger the upgrade costs on the D500s and D700s. You'll notice other OEMs sometimes have higher CTO pricing on GPU options and lower base model pricing.

     

Anyway, I wasn't trying to establish a full analysis of the cost. It's mostly speculation, but you were looking specifically at CPU benchmarks. I wanted to point out that in 2008 your $3-4k went considerably further in that area.

  • Reply 8 of 33
Marvin Posts: 15,326 moderator
    the "Pro" model should be about raw power, not about power per watt (that's more of an iMac metric... certainly a MacBook metric.)

    Every machine should be about performance per watt but what you mean is they shouldn't lower the TDP. It doesn't look like they did though, they just allocated it to the other GPU.

    Before, you'd have 2x 150W CPU + 300W for all the slots so you could only get 1 high-end GPU. The optical + HDDs would use about 50W = 650W.
    Now, it's a 130W CPU + 2x 274W GPU = 678W.

    For the apps that use the GPUs like Blackmagic/Adobe/FCPX, it'll be much more powerful than before. It might improve video encoding but it depends on which compressor is used.

    The push should be to use OpenCL wherever possible. There was a test somewhere showing a 2x improvement in an app just using OpenCL on the CPU alone. High-end GPUs typically outperform the high-end CPUs quite a bit so if they get the right setup, the new Mac Pro running OpenCL code could run it 5x faster or more.

    This kind of push failed with Altivec back on PPC because the adoption wasn't there but OpenCL has at least gained a bit more commercial adoption.

    Handbrake's OpenCL gave a 2x speedup here in H.264 encoding:

    http://www.anandtech.com/show/5835/

    Of course OpenCL works on old machines too so purely CPU-based OpenCL is faster on both old and new hardware but the other advantage the GPUs will have is smooth 4K display support.

    In some cases, an option for dual-CPUs would have offered better performance-per-dollar but that would be a short-term solution and at the expense of the benefits the dual GPUs offer.

Some software will take more work to figure out OpenCL. With complex code it will be difficult to find where to use it and still have it work properly. Weta used GPU processing for their Pantaray raytracing engine for Avatar:

    http://www.nvidia.com/object/wetadigital_avatar.html

They split the processing into doing the lighting part (the most intense part) on the GPU farm with CUDA and said it was 25x faster than the CPU. Then they took the data from the pre-process and used it in the more complex shading part on the CPUs. People often think of OpenCL as if it competes with C++ and an entire codebase needs to be ported over, but really, the most intense parts of code are often very small pieces, and it's just a matter of finding where it's feasible to move that small piece of code over to the GPU along with the data it operates on. One day it'll be truly heterogeneous with fully shared memory like the PS4 has. Handbrake might just use OpenCL for intensive parts like where it has to compare image data to find redundancy (I think they mentioned lookahead functions or something) - GPUs do image operations really quickly.
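Just to make that idea concrete, here's a toy sketch (purely an illustration, nothing to do with Handbrake's actual code) of offloading one tiny, data-parallel hot spot -- comparing two frames pixel by pixel -- to whatever OpenCL device is around, using Python with the pyopencl package:

```python
# Illustrative only: a tiny OpenCL kernel that compares two greyscale "frames"
# pixel by pixel (absolute difference) -- the kind of small, intensive,
# data-parallel piece worth moving to the GPU while the rest stays on the CPU.
# Requires the numpy and pyopencl packages.
import numpy as np
import pyopencl as cl

KERNEL_SRC = """
__kernel void absdiff(__global const uchar *a,
                      __global const uchar *b,
                      __global uchar *out)
{
    int i = get_global_id(0);
    out[i] = abs_diff(a[i], b[i]);   // built-in |a - b| for integer types
}
"""

def frame_difference(frame_a: np.ndarray, frame_b: np.ndarray) -> float:
    """Mean absolute difference between two flat uint8 arrays, via OpenCL."""
    ctx = cl.create_some_context()       # picks an available CPU or GPU device
    queue = cl.CommandQueue(ctx)
    mf = cl.mem_flags

    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=frame_a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=frame_b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, frame_a.nbytes)

    program = cl.Program(ctx, KERNEL_SRC).build()
    program.absdiff(queue, (frame_a.size,), None, a_buf, b_buf, out_buf)

    diff = np.empty_like(frame_a)
    cl.enqueue_copy(queue, diff, out_buf)
    return float(diff.mean())

if __name__ == "__main__":
    a = np.random.randint(0, 256, 1920 * 1080, dtype=np.uint8)
    b = np.random.randint(0, 256, 1920 * 1080, dtype=np.uint8)
    print("mean abs difference:", frame_difference(a, b))
```

The real win only shows up once the data transfers are amortized (keeping frames on the device across many comparisons, for example), but the shape of it -- a tiny kernel plus a bit of glue to feed it -- is the point.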

    As for storage, it'll be an issue for now but SATA's clearly on the way out as it's not fast enough for an SSD. It works for external drives as they are slower anyway but I could see one day internal storage being as fast as RAM, at least a cached part. There would be zero boot times because the OS would stay in the non-volatile RAM cache. It would mean you could also run multiple operating systems effectively side by side and just swipe between them.
  • Reply 9 of 33
shetline Posts: 4,695 member
    Quote:
    Originally Posted by nht View Post



    Thunderbolt is overkill for your burners. A 2 bay USB 3 enclosure will work



    It might be that for you a 6 bay USB 3 enclosure will work.

     

    Yes, Thunderbolt would be overkill for the optical drives -- but I'd love to have one enclosure with both my optical and hard drives, so if the optical drives got more bandwidth than they really needed in order for the hard drives to perform better, so be it.

     

    As for a 6 bay USB 3.0 enclosure, here's the closest thing I could find to what I have in mind, with 8 bays: http://www.sansdigital.com/towerraid-/tr8uplusb.html

     

    That's designed just for hard drives, but perhaps by removing the front door that closes to hide the drive bays, two of the drive carriers could be removed and optical drives put in their place. It would probably be less work than hacking together internal power connections for things like a standard USB hub left floating around loose inside a standard PC case.



    I'd have to settle for USB 3.0 access to my external hard drives, but perhaps that's not so bad if the high-performance needs are mostly met by the new Mac Pro's internal SSD.

  • Reply 10 of 33
shetline Posts: 4,695 member
    Quote:
    Originally Posted by Marvin View Post



    Handbrake's OpenCL gave a 2x speedup here in H.264 encoding:



    http://www.anandtech.com/show/5835/

     

    That's encouraging because Handbrake is one application where I'd most like to see a performance boost. I have an ongoing project of ripping and re-encoding a large Blu-ray and DVD collection. With the settings I'm using for the quality I want, 1080p video is currently encoding at only a little faster than real-time speed.

     

    I haven't compared GPU speeds from my old Mac Pro to the new, but I'm guessing there's a much more impressive performance increase there.

  • Reply 11 of 33
hmm Posts: 3,405 member
    Quote:

    Originally Posted by Marvin View Post





    Every machine should be about performance per watt but what you mean is they shouldn't lower the TDP. It doesn't look like they did though, they just allocated it to the other GPU.



    Before, you'd have 2x 150W CPU + 300W for all the slots so you could only get 1 high-end GPU. The optical + HDDs would use about 50W = 650W.

    Now, it's a 130W CPU + 2x 274W GPU = 678W.

I don't have an exact reference for this, but the power supply on the newest one was rumored to be smaller than that. It was rumored to be closer to 450W, which makes little sense to me, unless not everything can go full throttle simultaneously. These CPUs by their nature have to take performance per watt into account. They're also used in data centers, where electricity, including cooling, is a significant cost factor.

     

     

    Quote:


    The push should be to use OpenCL wherever possible. There was a test somewhere showing a 2x improvement in an app just using OpenCL on the CPU alone. High-end GPUs typically outperform the high-end CPUs quite a bit so if they get the right setup, the new Mac Pro running OpenCL code could run it 5x faster or more.





     

    Quote:


    Of course OpenCL works on old machines too so purely CPU-based OpenCL is faster on both old and new hardware but the other advantage the GPUs will have is smooth 4K display support.


That's a function of DisplayPort 1.2 compliance.

     

    Quote:


    In some cases, an option for dual-CPUs would have offered better performance-per-dollar but that would be a short-term solution and at the expense of the benefits the dual GPUs offer.


Somewhat, but not as much as in the past. I did a rough comparison taking two CPUs around half the cost of the 12-core. It works out better, but it's not as astronomical a difference as it might have been even a generation ago. I don't think that's why they skipped Sandy Bridge. I suspect this project started kind of on the late side. At this point they're probably waiting on things like TB2 chips. Considering that a v2 is unlikely to show up for at least 18 months after this release, they probably want to include certain features to avoid the issue of leapfrogging from the other lines.

     

     

    Quote:


    Some software will take more work to figure out OpenCL. Complex code will be difficult to find where to use it and still have it work properly. Weta used GPU processing for their Pantaray raytracing engine for Avatar:



    http://www.nvidia.com/object/wetadigital_avatar.html



    They split the processing into doing the lighting part (the most intense part) on the GPU farm with CUDA and said it was 25x faster than the CPU. Then they took the data from the pre-process and used it in the more complex shading part on the CPUs. People often try to think of OpenCL in a way that it competes with C++ and that an entire codebase needs to be ported over but really, the most intense parts of code are often very small pieces of code and it's just finding where it's feasible to move that small code over to the GPU along with the data it operates on. One day it'll be truly heterogenous with fully shared memory like the PS4 has. Handbrake might just use OpenCL for intensive parts like where it has to compare image data to find redundancy (I think they mentioned lookahead functions or something) - GPUs do image operations really quickly.






That's a studio with full-time developers of their own though. Avatar was developed several years ago, yet today I still do not know of any off-the-shelf solution that works like that. Most require the vertex and texture data to fit in the framebuffer of a given card. Considering the amount of texture data that could be used between displacement, whatever forms of SSS, and a huge array of maps to add materials for tattoos and dirt and other things on top of skin or metal, there's no way all of that would fit. I don't recall OpenCL supporting pointers to virtual memory, and it would slow things down quite a bit anyway. I am really guessing here, but I suspect they already have baked point positions and took only raw lighting samples. Also consider that older codebases aren't always set up for highly parallel workloads. That's part of it. It's not necessarily an issue of switching to OpenCL from C++. The algorithms in older code bases may in some cases be too sequential. That isn't the case with either raytracing or REYES systems, as both rely to some degree on the computation of multiple vector paths. Each path has plenty of dependencies, but you have a huge potential for multiple threads there. The only reason I'm being ambiguous is that REYES systems work a little differently from straight-up raytracers. Even the way they evaluate normals is different, as one uses an edge derivative and the other uses a dot product.

  • Reply 12 of 33
Marvin Posts: 15,326 moderator
    hmm wrote: »
I don't have an exact reference for this, but the power supply on the newest one was rumored to be smaller than that. It was rumored to be closer to 450W, which makes little sense to me, unless not everything can go full throttle simultaneously.

    Maybe for the lower end models but no way for 12-core + dual W9000 equivalents.
    hmm wrote: »
That's a function of DisplayPort 1.2 compliance.

4K support is, but it will still need a lot of GPU power to run some things at that resolution smoothly.
    hmm wrote: »
    Most require the vertex and texture data to fit in the framebuffer of a given card. Considering the amount of texture data that could be used between displacement, whatever forms of SSS, and a huge array of maps to add materials for tattoos and dirt and other things on top of skin or metal, there's no way all of that would fit.

    Again, you're thinking of pushing everything onto the GPU. The details are here:

    http://www.cse.unr.edu/~fredh/class/480/F2010/class/paper-Marin.pdf

The tests they show there are done with just 1GB of memory as it uses a method of streaming data in as needed. In practice, they said they'd use 4GB, and this is for movie production scenes with billions of data points.

Software these days should still be designed the way it used to be when hardware was very limited, instead of lazily just trying to dump everything into RAM. Photoshop should be able to open and edit images that are 1 trillion x 1 trillion pixels without hitting memory errors; it should just stream data from the HDD as and when needed and save/process data in batches.
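As a toy example of what I mean (made-up file layout and chunk size, nothing to do with how Photoshop actually works), processing data far bigger than RAM is just a matter of walking it in fixed-size pieces:

```python
# Toy sketch of stream-as-needed processing: apply an operation to a raw 8-bit
# image file of arbitrary size while never holding more than one chunk in RAM.
# The headerless raw-bytes layout and the chunk size are made up for the example.
import numpy as np

CHUNK_BYTES = 64 * 1024 * 1024  # work on 64 MB at a time

def invert_raw_image(src_path: str, dst_path: str) -> None:
    """Read raw 8-bit pixel data chunk by chunk, invert it, write it back out."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(CHUNK_BYTES)
            if not chunk:
                break
            pixels = np.frombuffer(chunk, dtype=np.uint8)
            dst.write((255 - pixels).tobytes())  # process one piece, flush, move on
```

Peak memory stays at roughly one chunk no matter how big the file is; that's the whole trick.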

    Like I say, once memory is truly shared between the CPU and GPU and perhaps uses better caching with large enough fast PCIe storage, memory constraints shouldn't really exist.
  • Reply 13 of 33
hmm Posts: 3,405 member
    Quote:
    Originally Posted by Marvin View Post





    Maybe for the lower end models but no way for 12-core + dual W9000 equivalents.

    4K support is but it will still needs a lot of GPU power to run some things at that resolution smoothly.

    Again, you're thinking of pushing everything onto the GPU. The details are here:



    http://www.cse.unr.edu/~fredh/class/480/F2010/class/paper-Marin.pdf

     

    They're not so much streaming it as they are figuring out what point positions to load. Just looking at that integral gives me a headache though.


    Quote:

    The second pass of the algorithm loops through each microgrid to find out all the buckets in which the microgrid falls, and records the microgrid-bucket pairs into an in-memory cache with a few million entries. Once the cache is full, the pairs are sorted by bucket index and written to disk in their corresponding slot, essentially making a single seek per bucket or less per cache flush.

The purpose of this bucketing pass is to create manageable units of work which could fit in memory. However, the resulting uniform grid is very coarse and often imbalanced, which makes it unsuitable for direct ray tracing. With extremely large scenes it frequently happens that a large portion of the buckets are empty or very sparsely populated, while a few remain too densely populated.



     

    I read through most of it. I didn't absorb all of it, but it sounds like this is strictly a lighting pass and attempt to cull unneeded geometry from the scene. I suspect they are baking lighting samples to be referred against the mipmaps for eventual color sample lookup as a downstream part of the rendering process. It mentions figuring out the LODs (level of detail) for a frame and the rest. They are dealing with much heavier subdivided meshes than anything I've ever seen.
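To gesture at the idea in that quoted passage, here's a stripped-down, generic sketch of the out-of-core bucketing pass (the bucket indexing and the "items" are placeholders; this is nowhere near what Pantaray actually does):

```python
# Generic sketch of the bucketing pass described in the quote: buffer
# (bucket, item) pairs in a bounded in-memory cache and, whenever the cache
# fills, group by bucket and append each run to that bucket's file on disk,
# so each bucket can later be loaded and processed as a unit that fits in memory.
import os
from collections import defaultdict

CACHE_LIMIT = 1_000_000  # max pairs held in memory before flushing to disk

def _flush(cache, out_dir):
    grouped = defaultdict(list)
    for bucket, item in cache:
        grouped[bucket].append(item)
    for bucket in sorted(grouped):  # roughly one sequential append per bucket
        with open(os.path.join(out_dir, f"bucket_{bucket:05d}.txt"), "a") as f:
            f.writelines(f"{item}\n" for item in grouped[bucket])
    cache.clear()

def bucket_stream(items, buckets_of, out_dir):
    """items: any iterable of work units; buckets_of(item): bucket indices it touches."""
    os.makedirs(out_dir, exist_ok=True)
    cache = []
    for item in items:
        for bucket in buckets_of(item):
            cache.append((bucket, item))
            if len(cache) >= CACHE_LIMIT:
                _flush(cache, out_dir)
    _flush(cache, out_dir)
```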

     

    Quote:

4K support is, but it will still need a lot of GPU power to run some things at that resolution smoothly.


     

A lot is relative. I suspect some of the early rMBP issues were related to software. The latest generation of graphics chips should be capable for many use cases. I'm not sure about heavy use of OpenGL though.

     

    Quote:

    Like I say, once memory is truly shared between the CPU and GPU and perhaps uses better caching with large enough fast PCIe storage, memory constraints shouldn't really exist.


     

    I remember mental ray used to rely on a format called .MAP. It would just index the file directly into virtual memory rather than disk-->ram-->swap. I don't remember how it worked though. I remember reading somewhere about OpenCL eventually allowing the use of pointers to virtual memory. I don't know how far out that would be, but maybe it would motivate me to learn OpenCL.

     

    Quote:

Software these days should still be designed the way it used to be when hardware was very limited, instead of lazily just trying to dump everything into RAM. Photoshop should be able to open and edit images that are 1 trillion x 1 trillion pixels without hitting memory errors; it should just stream data from the HDD as and when needed and save/process data in batches.


You are talking about a 1990s code base, and yet it still stutters less on large files than Gimp. Go figure. I only have Gimp because I find open-source projects interesting. Speaking of 1990s code bases, the same goes for their color management and weird non-linear blending modes. There are much cooler ways to design an illustration/paint program out of what you have today. What interests me more is what would run on an iPad, though, given the lack of disjointedness between tablet and screen.

     

I would also point out that the last major rewrite of a large portion of the codebase preceded the massive trend toward SSDs. Streaming from an HDD without RAID was a pain in the ass, even though they did something similar enough with the use of scratch disks.

  • Reply 14 of 33
wizard69 Posts: 13,377 member
Marvin hit upon some of the technical realities, but you need to realize that Intel has publicly stated recently that desktop processors are basically on the back burner. This doesn't do much for Xeon.

Apple may have something up its sleeve if they are working with Intel and Xeon Phi. Sometime in 2014 a variant of Phi suitable as a system processor is supposed to arrive. That would give the Mac a lot more cores. The reality, though, is that Apple has few options; it is either Intel or AMD, and both of those companies are losing ground in mobile. Frankly, they are being challenged in the server market too.

Beyond Phi I see little opportunity for Apple to speed up the CPUs. It isn't anybody's fault; the market changed and companies follow the money. Consider Intel's new Xeon pricing -- a sign of profitability issues with Xeon, possibly?
  • Reply 15 of 33
shetline Posts: 4,695 member

    Originally Posted by wizard69 View Post



The reality, though, is that Apple has few options; it is either Intel or AMD, and both of those companies are losing ground in mobile.

     

I'm definitely not seeing this as an Apple-only problem. I'm underwhelmed by new desktop and workstation performance advances in general. A new Windows workstation purchased today would not be leaps and bounds ahead of a comparably priced workstation purchased six years ago either. It does make a certain amount of sense that this is, at least in part, due to a shift in focus to mobile processors.

     

    At any rate, I went ahead and ordered one of these USB 3.0 JBOD tower enclosures last night: http://www.sansdigital.com/towerraid-/tr8uplusb.html

     

    ...plus two internal Blu-ray burners, for a total cost of about US $450. It will take a bit of physical hacking to get this enclosure to take optical drives, but I'm hoping it'll work out. As long as it works as planned, I'll be ready for a new Mac Pro whether I buy sooner or later.

  • Reply 16 of 33
    shetline wrote: »
    At any rate, I went ahead and ordered one of these USB 3.0 JBOD tower enclosures last night: http://www.sansdigital.com/towerraid-/tr8uplusb.html

    How will you be able to tell your software to access a specific drive if it only has one interface?
  • Reply 17 of 33
Marvin Posts: 15,326 moderator
    hmm wrote: »
    it sounds like this is strictly a lighting pass and attempt to cull unneeded geometry from the scene.

    They describe it as:

    "a system for precomputing sparse directional occlusion caches... for accelerating a fast cinematic lighting pipeline. The system was used as a primary lighting technology in the movie Avatar, and is able to efficiently handle massive scenes of unprecedented complexity through the use of a flexible, stream-based geometry processing architecture"

    The lighting cache can be used later on, maybe even ignored in some places but the idea is to let the GPUs handle only a very intensive but more basic part of the process and they gained up to 30x speedup in some cases vs the CPUs.

    It's a shame they don't commercialize these into usable products. That's where companies like The Foundry are good because they go in and make these things available for sale to the public. If they wrapped it into a custom lighting engine, they could sell it as an add-on for Maya, Cinema 4D, Lightwave, Modo. If the lighting passes use up 70% of the time and they can speed the process up 20x on the GPU, it means 3x faster overall turnaround times. Better than having 50% faster with 2 lower spec CPUs vs 1 faster CPU.
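That 3x figure is just Amdahl's law; a quick sanity check of the numbers above (70% of time in lighting, 20x faster on that part only):

```python
# Amdahl's law check for the figures above: if lighting is 70% of total time and
# only that part gets 20x faster, the overall job speeds up by roughly 3x.
lighting_fraction = 0.70
lighting_speedup = 20.0

overall = 1.0 / ((1.0 - lighting_fraction) + lighting_fraction / lighting_speedup)
print(f"overall speedup: {overall:.2f}x")  # prints ~2.99x
```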
  • Reply 18 of 33
shetline Posts: 4,695 member

    Originally Posted by PhilBoogie View Post



    How will you be able to tell your software to access a specific drive if it only has one interface?

     

    I'm pretty sure (I am taking a chance, since I don't know for sure) that this box should function in essentially the same way as hooking up to the single interface on a USB hub, with multiple drives plugged into that hub.



    Since this box supports JBOD (Just a Bunch Of Disks), rather than trying to present all of the drives as one unified storage device (as with a RAID set), the most straightforward way to do that would be to simply let each drive show up over USB as an individual device on the USB bus.
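Assuming it really does enumerate each disk separately, pointing software at a particular drive works the same as it does now; on OS X each mounted volume just shows up under /Volumes. A trivial sketch (the volume name here is hypothetical):

```python
# Minimal sketch: each separately-enumerated drive mounts under /Volumes on OS X,
# so an app can be pointed at one specific disk for scratch space by path.
import os

SCRATCH_VOLUME = "Scratch2"  # hypothetical name of one drive in the JBOD box

for name in sorted(os.listdir("/Volumes")):
    print("mounted volume:", name)

volume_path = os.path.join("/Volumes", SCRATCH_VOLUME)
if os.path.isdir(volume_path):
    scratch_dir = os.path.join(volume_path, "scratch")
    os.makedirs(scratch_dir, exist_ok=True)  # use this path as the app's scratch area
```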

  • Reply 19 of 33
    shetline wrote: »
    Since this box supports JBOD (Just a Bunch Of Disks), rather than trying to present all of the drives as one unified storage device (as with a RAID set), the most straightforward way to do that would be to simply let each drive show up over USB as individual devices on the USB bus.

Ah, ok, didn’t expect that was an option. I too use a Mac Pro and love the fact that I can, for instance, tell one application to use drive #0 and another application to use drive #3 as a scratch disk.
  • Reply 20 of 33
hmm Posts: 3,405 member
    Quote:
    Originally Posted by Marvin View Post







    It's a shame they don't commercialize these into usable products. That's where companies like The Foundry are good because they go in and make these things available for sale to the public. If they wrapped it into a custom lighting engine, they could sell it as an add-on for Maya, Cinema 4D, Lightwave, Modo. If the lighting passes use up 70% of the time and they can speed the process up 20x on the GPU, it means 3x faster overall turnaround times. Better than having 50% faster with 2 lower spec CPUs vs 1 faster CPU.

That would be very cool, but a lot of studios maintain proprietary code. Pixar still has some of that. They have their own dedicated storyboard software. Some of their research is open sourced. Out of that stuff, I've peeked through the code bases. It's a lot of code. I'm not sure it would make sense to release just a lighting engine. It might end up not working as anticipated with some of the basic shader sets, and ways of storing lighting samples vary by renderer. It could be much more work than you think, even if the concept is neat. A lot of people do use the out-of-the-box shader sets, but I hate them. Their behavior is quirky. I'm all about writing my own (at this point). It just gives you so much control. I'll show you the next time I do a personal project.
