Apple throws out the rulebook for its unique next-gen Mac Pro


Comments

  • Reply 461 of 1320
    MarvinMarvin Posts: 15,435moderator
    relic wrote: »
    I'm the resale queen, that's how I'm able to buy a new laptop every 6 months; for the price of one I can have six. However, a desktop workstation is a little different: I can maintain a pretty capable machine for a very long time and still keep up with the latest models by buying faster components.

    You can save a lot of money if you get some of the deals like you manage to find. For people buying from retail outlets like Newegg, the saving is almost non-existent.

    Say, for example, there's a Mac Pro at $3k with a single 6-core CPU and dual FirePro graphics, and it lasts 2 years before an upgrade is needed. They'd sell the hardware second-hand for about $1700 and buy the new 6-core for $3000, costing $1300.

    Now, say that someone bought a similar spec machine for $2300 from another vendor and after 2 years, they upgrade the CPU. So they pay $600 for a new retail CPU and install it. Maybe they can get some money for the old CPU too.

    The Mac Pro also has faster GPUs though and likely a new memory standard (DDR4) and it has a full warranty. Not only that, the resale value of the new Mac Pro at that point is ~$3k as they just bought it, the resale value of the other one is less than $2300. Let's say it's $1800.

    So, the relative exchanges over 2 years would be:

    Mac Pro: $3000 - $1700 + $3000 = $4300 with a retained value of $3000 so net loss = $1300
    Junk: $2300 + $600 - $200 = $2700 with a retained value of $1800 so net loss = $900

    That's assuming you'd get $200 for the 2-year-old CPU and can resell a 2-year-old machine with a self-upgraded CPU for $1800, which is optimistic. Even in pretty much the best case, the saving is just $400 vs the Mac, and the Mac has new GPUs and parts and a full warranty. The big difference is in the significantly higher outlay, but the losses incurred over time are pretty close, and I reckon you'd have a better user experience during the two years of owning the Mac.
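Marvin's arithmetic above can be checked with a quick sketch. Note the dollar figures are his assumptions from the post, not market data:

```python
# Rough sketch of the two upgrade paths over 2 years, using the assumed
# (not measured) dollar figures from the post above.

def net_loss(outlay, resale_income, retained_value):
    """Total cash out, minus sale income, minus what the kit is still worth."""
    return outlay - resale_income - retained_value

# Mac Pro path: buy at $3000, sell the 2-year-old machine for $1700,
# buy the new model for $3000.
mac_outlay = 3000 + 3000      # two purchases
mac_resale = 1700             # sold the old machine
mac_retained = 3000           # the new machine is still worth ~what was paid

# Self-upgrade path: $2300 machine, $600 retail CPU, $200 back for the old CPU.
pc_outlay = 2300 + 600
pc_resale = 200
pc_retained = 1800            # optimistic resale of a self-upgraded box

mac_loss = net_loss(mac_outlay, mac_resale, mac_retained)
pc_loss = net_loss(pc_outlay, pc_resale, pc_retained)
print(mac_loss, pc_loss, mac_loss - pc_loss)  # 1300 900 400
```

This reproduces the $1300 vs $900 net losses and the ~$400 gap the post argues is the real-world difference.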

    Some people will see it as being worse value than before, and it can be if you get a great deal on CPUs, but you pretty much have to for it to be worth doing. I think people need to get away from the notion that buying new computers regularly is a bad value proposition, and this Mac Pro design will help drive that. It helps Apple, as they get a higher sales volume, and by extension customers: demand for new models keeps the second-hand market healthy and will lead to a better ownership experience. The self-upgrade route is the quickest way to kill that industry, because the sales volumes for the box manufacturers just collapse. Eventually they just won't bother. There was a report out just today about Samsung, which they've denied:

    http://bgr.com/2013/06/25/samsung-pc-business-continues/

    It could be baseless, but that market is shrinking and it's not going to get better.
    relic wrote:
    trying to talk myself into buying a Mac Pro but coming up without an excuse

    The engineering looks pretty impressive. They are highlighting the noise level, saying it's "astonishingly quiet". You could replace that noisy server in the basement with a machine that can sit by your bedside while you sleep, depending on how much data it's serving of course. I'm hoping they've got a good deal on the graphics cards, because that would really sell it for OpenCL-supported apps.
  • Reply 462 of 1320
    macroninmacronin Posts: 1,174member


    I wouldn't sell, I would just keep adding nodes to the render farm… Which would be especially sweet once developers start working OpenCL number crunching into the render apps… Each upgrade not only adds another CPU to the render farm, but also adds two (GP)GPUs…!

  • Reply 463 of 1320
    relicrelic Posts: 4,735member


    Thank you for the insightful post, it was very educational. Though I loved building machines when I was younger, I don't do it as often due to a busy family life; I want a plug-and-play solution, and I'm a professional programmer who makes my living with computers, so I want the best tools. Traditionally this meant a Mac; I own an iMac and a MacBook Air, but I've been using my ThinkPad X230T and Tablet II more often (the Lenovos are from work). I'm also a computer enthusiast who enjoys playing around with all sorts of machines. I think I bought the Sun server that resides in my bomb shelter (basement) more for the uniqueness and future tinkering than its ultimate usage. I mean, what normal person has an enterprise solution for their home server needs? This gal, that's who.


     


    I was never able to talk myself into buying a workstation until recently; the prices for these things are just too far into the stratosphere to justify, especially when it would be just a hobby machine, and as a programmer I don't need anything even close to that kind of performance. However, I still always wanted one. While shopping on eBay for a new power supply for my beloved PowerBook 100 (Poky), I subconsciously found myself looking at workstations, from IBM at first, then HP. I couldn't believe that I could have one so inexpensively.


     


    So I discussed it with my husband and he agreed on a set price of $2500, and was just relieved that his wife didn't have any other addictions like shoes, clothes or crack. It started with the HP Z800 and then progressed to updating and adding components; I followed the HP manual to the letter and only used components that were listed and from HP. The coolest finds were the Tesla cards, still in the box, never used; even the Quadro card was in the box. You mentioned junk computers, but this guy is as original as if you had bought it directly from HP. I even found the Windows 7 for Workstations recovery DVD, though I use CentOS the most because of the Tesla cards. Did you know HP has a built-in cluster solution? If there is another HP Z on the network you can use its CPU and GPU for extra performance. How cool is that? I now know why DreamWorks and Pixar use them. The case is also very cool, not in terms of looks, as it's very plain, but for maintaining the components: the easiest I have ever used.


     


    So as a hobby workstation I think I have one heck of a machine, and I'm not done with it either; I still want 3 more HP SAS 15,000 RPM drives. I lend out time to my video editing friends and get paid in dinners. I'm learning how to program in CUDA, but I have to say it's a waste of time; the libraries for Python are fantastic. Actually, on a side note, I noticed a lot of talk around here about OpenCL; the number of people who will actually use it directly is minuscule. Even AMD thinks this; the libraries for Python, C++, etc. will do anything you need done. Only the programmers who are writing these libraries really need to learn OpenCL or CUDA. Also, OpenCL can be found on almost every graphics card, even an Intel HD 4000. I first started to experiment with it on my ThinkPad.
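relic's division of labor, where most programmers consume accelerated libraries rather than writing OpenCL or CUDA kernels directly, can be sketched in plain Python. NumPy (backed by optimized C) stands in here for GPU libraries like PyOpenCL or PyCUDA; the function names are invented for the sketch:

```python
import numpy as np

def saxpy_loop(a, x, y):
    # What you'd write by hand without a library: an explicit element loop.
    return [a * xi + yi for xi, yi in zip(x, y)]

def saxpy_library(a, x, y):
    # One vectorized call; the fast kernel underneath is the library
    # author's problem, not yours -- relic's point exactly.
    return a * x + y

x = np.arange(1000, dtype=np.float32)
y = np.ones_like(x)
out = saxpy_library(2.0, x, y)
```

Either version computes the same a*x + y; the library call is the one most people should reach for, whether the backend is C, OpenCL or CUDA.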


     


    Back to the Mac Pro: my biggest fear is paying so much money for a machine that I won't be able to upgrade a year after I purchase it. I know a lot of you will be fine with this, but my addictive personality will definitely want to replace or add something, and the lack of a second CPU socket is probably the most disconcerting, as it would have been awesome to start with one and eventually grow into a second. Oh well, we all can't be fully satisfied.


     


    Thank you again for spending time with me and replying with such a nice post.

  • Reply 464 of 1320
    hmmhmm Posts: 3,405member

    Quote:

    Originally Posted by Marvin View Post







    The Mac Pro also has faster GPUs though and likely a new memory standard (DDR4) and it has a full warranty. Not only that, the resale value of the new Mac Pro at that point is ~$3k as they just bought it, the resale value of the other one is less than $2300. Let's say it's $1800.

     


    I don't feel like arguing with you on resale value math today, so I am leaving that out for now. A couple of things here on the quoted portion. Faster isn't always the goal of workstation GPUs. They can customize the drivers however they like here, as these are made to run OS X. Normally workstation GPUs have to be stable in OpenGL-driven apps and maintain speed at double-precision floating point math. They're used a lot for running CAD or 3D animation apps. Many of those run just fine with almost any GPU if your scene or workspace is lighter. You won't notice a huge difference unless it's full of highly tessellated geometry. The realtime shaders used m…
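The "maintain speed at double precision" point above is its own spec because single precision simply cannot represent some results, no matter how fast it runs. A tiny CPU-side illustration (nothing GPU-specific, just the arithmetic):

```python
import numpy as np

# A term below float32's resolution (about 1.2e-7 near 1.0) is rounded
# away entirely; float64 (resolution ~2.2e-16) keeps it.
a32 = np.float32(1.0) + np.float32(1e-8)
a64 = np.float64(1.0) + np.float64(1e-8)

print(a32 == np.float32(1.0))  # True: the small term vanished
print(a64 > 1.0)               # True: double precision retained it
```

This is why workstation cards are benchmarked separately at double precision: CAD and simulation work accumulates exactly these kinds of small terms.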


     


     


    Quote:


    So I discussed it with my husband and he agreed on a set price of $2500, and was just relieved that his wife didn't have any other addictions like shoes, clothes or crack. It started with the HP Z800 and then progressed to updating and adding components; I followed the HP manual to the letter and only used components that were listed and from HP. The coolest finds were the Tesla cards, still in the box, never used; even the Quadro card was in the box. You mentioned junk computers, but this guy is as original as if you had bought it directly from HP. I even found the Windows 7 for Workstations recovery DVD, though I use CentOS the most because of the Tesla cards. Did you know HP has a built-in cluster solution? If there is another HP Z on the network you can use its CPU and GPU for extra performance. How cool is that? I now know why DreamWorks and Pixar use them. The case is also very cool, not in terms of looks, as it's very plain, but for maintaining the components: the easiest I have ever used.



     


    I've never auction hunted computer parts. That is just awesome.


     


     


    Quote:


    So as a hobby workstation I think I have one heck of a machine, and I'm not done with it either; I still want 3 more HP SAS 15,000 RPM drives. I lend out time to my video editing friends and get paid in dinners. I'm learning how to program in CUDA, but I have to say it's a waste of time; the libraries for Python are fantastic. Actually, on a side note, I noticed a lot of talk around here about OpenCL; the number of people who will actually use it directly is minuscule. Even AMD thinks this; the libraries for Python, C++, etc. will do anything you need done. Only the programmers who are writing these libraries really need to learn OpenCL or CUDA. Also, OpenCL can be found on almost every graphics card, even an Intel HD 4000. I first started to experiment with it on my ThinkPad.



     


    Also awesome.

  • Reply 466 of 1320
    MarvinMarvin Posts: 15,435moderator
    relic wrote: »
    Did you know HP has a built-in cluster solution? If there is another HP Z on the network you can use its CPU and GPU for extra performance. How cool is that? I now know why DreamWorks and Pixar use them.

    I don't know if Pixar and DreamWorks would use that - they tend to have their own software, e.g. http://renderman.pixar.com/view/pixars-tractor - but that's the kind of thing I was hoping Apple would implement over Thunderbolt. There's a video here that seems to demonstrate HP's solution, although it could mainly be demoing Parallels leveraging HP's cluster software:


    [VIDEO]


    I imagined there could be an office with, say, 5-10 Mac Pros connected via Thunderbolt, and pretty much what you see happening in the video: the CPU/GPU cores get used in the background without each user knowing, and an individual user would just assume they had access to a mini compute farm of, say, 100 processors.

    Apple's solution could largely be zero-config. It would just have a checkbox in Sharing called Compute Sharing and you'd plug in the Thunderbolt cable. iMac users could buy spare Minis for extra power etc.
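The zero-config sharing idea described above can be sketched as a toy, localhost-only protocol: one end offers its cores, the other farms out a batch of work. The port, auth key and wire protocol below are all invented for illustration; nothing like this Apple API ever shipped, and the "remote Mac" is just a background thread here:

```python
import time
from threading import Thread
from multiprocessing.connection import Client, Listener

ADDR, KEY = ('localhost', 6001), b'compute-sharing'

def helper():
    # The machine sharing its cores: accept one batch, crunch it, reply.
    with Listener(ADDR, authkey=KEY) as server:
        with server.accept() as conn:
            tasks = conn.recv()                # a list of numbers to square
            conn.send([t * t for t in tasks])  # stand-in for a real kernel

def farm_out(tasks):
    # In the imagined setup the helper would be another Mac on the
    # Thunderbolt chain; here it runs in a background thread.
    t = Thread(target=helper)
    t.start()
    conn = None
    for _ in range(50):                        # wait until the helper listens
        try:
            conn = Client(ADDR, authkey=KEY)
            break
        except ConnectionRefusedError:
            time.sleep(0.1)
    with conn:
        conn.send(tasks)
        results = conn.recv()
    t.join()
    return results

print(farm_out(list(range(8))))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The real work in such a design is the part this sketch skips: discovery, scheduling and fault tolerance, which is what Xgrid and HP's cluster software actually provided.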
    relic wrote: »
    So as a hobby workstation I think I have one heck of a machine

    That's certainly one area where Apple's devices aren't that appealing. Woz has mentioned this at times because he prefers to be able to get into machines and mess around. I think that's why some people liked the old Mac Pro too: it was really the last Apple machine that still let you do that. But I think it's best that this is left to cheaper machines. The Mac Pro is just too expensive to encourage people to open it up and mess around with it beyond basic upgrades like memory.
    relic wrote: »
    I'm learning how to program in CUDA, but I have to say it's a waste of time; the libraries for Python are fantastic. Actually, on a side note, I noticed a lot of talk around here about OpenCL; the number of people who will actually use it directly is minuscule. Even AMD thinks this; the libraries for Python, C++, etc. will do anything you need done. Only the programmers who are writing these libraries really need to learn OpenCL or CUDA.

    It's definitely very specialized and I agree that it will be limited in use. A few larger software vendors seem to be picking it up, especially for image processing software. It'll be used for anything that has heavy computations:


    [VIDEO]


    [VIDEO]


    [VIDEO]


    The second video there is rendered using OpenCL, not the physics. The third shows CPU vs dual GPU: 20s for the GPUs vs nearly 3 minutes for the CPU. It can be used in media codecs for faster encoding performance, and that's where I think users will see the largest benefits. Handbrake announced there will be some OpenCL acceleration coming:

    http://handbrake.fr/news.php

    For people dealing with video a lot, it will help cut down transcode times. I think OpenCL is at least getting more traction than AltiVec did, and the performance improvements in Adobe's software are huge. There's little need to use it directly if the libraries you call use it, and the benefits will take no extra effort.
  • Reply 467 of 1320
    wizard69wizard69 Posts: 13,377member
    A very interesting post!
    relic wrote: »
    Thank you for the insightful post, it was very educational. Though I loved building machines when I was younger, I don't do it as often due to a busy family life; I want a plug-and-play solution, and I'm a professional programmer who makes my living with computers, so I want the best tools. Traditionally this meant a Mac; I own an iMac and a MacBook Air, but I've been using my ThinkPad X230T and Tablet II more often (the Lenovos are from work). I'm also a computer enthusiast who enjoys playing around with all sorts of machines. I think I bought the Sun server that resides in my bomb shelter (basement) more for the uniqueness and future tinkering than its ultimate usage. I mean, what normal person has an enterprise solution for their home server needs? This gal, that's who.
    I'm not sure what sort of programming you do, but many programmers tend to go for as many cores as they can afford. It is pretty easy to leverage all the cores in a machine simply by telling make to use them, at least via the "C"-style languages.

    To a lesser extent, IDEs can leverage those cores too. I suspect you already know this.
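The "tell make to use them" idea above works because separate compilation units are independent jobs, so fanning them out over workers is one call. A Python stand-in (a real build tool launches compiler subprocesses; the stub below just names the object file it would produce):

```python
from concurrent.futures import ThreadPoolExecutor

def compile_unit(source):
    # Stand-in for invoking the compiler on one source file.
    return source + '.o'

sources = ['file%d' % i for i in range(8)]

# Fan the independent jobs out over workers, roughly `make -j4`.
# Threads suffice here because real build parallelism comes from the
# compiler subprocesses each worker would drive.
with ThreadPoolExecutor(max_workers=4) as pool:
    objects = list(pool.map(compile_unit, sources))

print(objects)
```

Note this only works because no "compile" depends on another's output; nht's reply below is about what happens when that assumption fails.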
    relic wrote: »
    I was never able to talk myself into buying a workstation until recently; the prices for these things are just too far into the stratosphere to justify, especially when it would be just a hobby machine, and as a programmer I don't need anything even close to that kind of performance. However, I still always wanted one. While shopping on eBay for a new power supply for my beloved PowerBook 100 (Poky), I subconsciously found myself looking at workstations, from IBM at first, then HP. I couldn't believe that I could have one so inexpensively.
    Workstations suffer from the same technology realities desktops do: shrinking processes put more and more functionality into a given amount of space. This is why I see the new Mac Pro as very forward looking; another process-generation shrink, with even higher integration, and you will be looking at Mac Pros with only a few chips on each motherboard.
    relic wrote: »
    So I discussed it with my husband and he agreed on a set price of $2500, and was just relieved that his wife didn't have any other addictions like shoes, clothes or crack. It started with the HP Z800 and then progressed to updating and adding components; I followed the HP manual to the letter and only used components that were listed and from HP. The coolest finds were the Tesla cards, still in the box, never used; even the Quadro card was in the box. You mentioned junk computers, but this guy is as original as if you had bought it directly from HP. I even found the Windows 7 for Workstations recovery DVD, though I use CentOS the most because of the Tesla cards. Did you know HP has a built-in cluster solution? If there is another HP Z on the network you can use its CPU and GPU for extra performance. How cool is that? I now know why DreamWorks and Pixar use them. The case is also very cool, not in terms of looks, as it's very plain, but for maintaining the components: the easiest I have ever used.
    Clustering solutions are offered up by a couple of different vendors. Frankly, I'm not sure Apple even had clustering in mind when they designed this Mac Pro. There has been talk about linking via TB, but I'm not even sure TB supports peer-to-peer networking of this sort.

    It should be noted that Apple had Xgrid, but that went nowhere; in fact they have more or less dropped it. I honestly don't know why Xgrid was dropped, but I'd have to suspect there were few users who really leveraged it. I've wondered if we would see a rebirth of Xgrid that supports clustering a few Mac Pros over TB; if Apple is doing this, they have been very quiet about it.
    relic wrote: »
    So as a hobby workstation I think I have one heck of a machine, and I'm not done with it either; I still want 3 more HP SAS 15,000 RPM drives. I lend out time to my video editing friends and get paid in dinners.
    There is no better payment!
    relic wrote: »
    I'm learning how to program in CUDA, but I have to say it's a waste of time; the libraries for Python are fantastic.
    It is a waste of time because it is dead technology.
    relic wrote: »
    Actually, on a side note, I noticed a lot of talk around here about OpenCL; the number of people who will actually use it directly is minuscule. Even AMD thinks this; the libraries for Python, C++, etc. will do anything you need done. Only the programmers writing these libraries really need to learn OpenCL or CUDA. Also, OpenCL can be found on almost every graphics card, even an Intel HD 4000. I first started to experiment with it on my ThinkPad.
    This isn't really a surprise; it is the way the computer world works: specialists write the libraries and "normal" programmers use them. You don't see many programmers trying to write a replacement for the STL. Even Apple admonishes programmers to avoid writing directly for hardware and to prefer libraries instead, thus the Accelerate framework and other performance computing libraries.

    The big difference with OpenCL is that an extremely talented programmer can leverage it in ways that libraries can't. Libraries are seldom perfect, so OpenCL gives the programmer a direct option. For most people, though, the smart move is to use libraries or other programming abstractions.
    relic wrote: »
    Back to the Mac Pro: my biggest fear is paying so much money for a machine that I won't be able to upgrade a year after I purchase it.
    Why would you even do this? What causes this urgency to upgrade before the first hardware even hits the market? The incremental performance increases you get each year don't justify the upgrades. As you know, Intel has dragged out Xeon updates for a very long time and then not delivered much when the new hardware actually ships. Both AMD and Nvidia appear to be on two-year or greater cycles when it comes to real GPU hardware improvements, that is, new-generation cores.

    I understand if this is a hobby (you should see what is in my cellar), but from a practical standpoint it is really hard to justify yearly upgrades anymore.
    relic wrote: »
    I know a lot of you will be fine with this, but my addictive personality will definitely want to replace or add something, and the lack of a second CPU socket is probably the most disconcerting, as it would have been awesome to start with one and eventually grow into a second. Oh well, we all can't be fully satisfied.
    By the time the next generation of real hardware updates comes out you will see either clock rate increases or far more cores, probably both. I just don't see the attraction of more processor chips in a workstation.
    relic wrote: »
    Thank you again for spending time with me and replying with such a nice post.
  • Reply 468 of 1320
    relicrelic Posts: 4,735member

    Quote:

    Originally Posted by wizard69 View Post



    A very interesting post!

    I'm not sure what sort of programming you do, but many programmers tend to go for as many cores as they can afford. It is pretty easy to leverage all the cores in a machine simply by telling make to use them. At least it is easy to do via the "C" style languages.


    I started out as a FORTRAN programmer, if you can believe that, then C#, C++, but now I'm a jack of all trades; I personally use Python, Perl, PHP and COBOL the most. I work for the largest bank in Switzerland; their backbone is Unix (Solaris/HP-UX). I'm a department head with 12 programmers who are in charge of P&L calculations, clearing data, the FIX protocol, trading systems and web services. As a woman in a predominantly male field, you can imagine how hard it was for me to climb to this position.


    To a lesser extent, IDEs can leverage those cores too. I suspect you already know this.

    Workstations suffer from the same technology realities desktops do: shrinking processes put more and more functionality into a given amount of space. This is why I see the new Mac Pro as very forward looking; another process-generation shrink, with even higher integration, and you will be looking at Mac Pros with only a few chips on each motherboard.

    Clustering solutions are offered up by a couple of different vendors. Frankly, I'm not sure Apple even had clustering in mind when they designed this Mac Pro. There has been talk about linking via TB, but I'm not even sure TB supports peer-to-peer networking of this sort.


    You know, I've seen a lot of posts about the possibility of linking 5 or more Mac Pros via Thunderbolt to cluster, but I have seen no sign of this coming to light. I too don't know if Thunderbolt is capable of such things; it doesn't make sense in a large office anyway, as machines are usually scattered pretty far apart, if not on different floors. The cost of such lengthy Thunderbolt cables would make this one expensive endeavor; they probably don't even make them.




    It should be noted that Apple had Xgrid, but that went nowhere; in fact they have more or less dropped it. I honestly don't know why Xgrid was dropped, but I'd have to suspect there were few users who really leveraged it. I've wondered if we would see a rebirth of Xgrid that supports clustering a few Mac Pros over TB; if Apple is doing this, they have been very quiet about it.

    There is no better payment!

    It is a waste of time because it is dead technology.

    This isn't really a surprise; it is the way the computer world works: specialists write the libraries and "normal" programmers use them. You don't see many programmers trying to write a replacement for the STL. Even Apple admonishes programmers to avoid writing directly for hardware and to prefer libraries instead, thus the Accelerate framework and other performance computing libraries.


    I don't know if CUDA is a dead language; the number of users on the support/programming forums would tell me otherwise. Plus Nvidia is the dominant force in the GPU industry, not to mention that Nvidia has a very large enterprise GPU clustering division. I believe CUDA is here to stay. Me personally, I don't really care, CUDA or OpenCL. Nvidia supports OpenCL too but still decided to write their own language, so they must have had a good reason for doing so. I still prefer Nvidia graphics cards because of their Linux support; AMD is just awful on this OS.


    The big difference with OpenCL is that an extremely talented programmer can leverage OpenCL in ways that libraries can't. Libraries are seldom perfect so OpenCL gives the programmer a direct option. For most people though the smart move is to use libraries or other programming abstractions.

    Why would you even do this? What causes this urgency to upgrade before the first hardware even hits the market? The incremental performance increases you get each year don't justify the upgrades. As you know, Intel has dragged out Xeon updates for a very long time and then not delivered much when the new hardware actually ships. Both AMD and Nvidia appear to be on two-year or greater cycles when it comes to real GPU hardware improvements, that is, new-generation cores.


    Yeah, I have to tell you I'm loving my Tesla cards. I have a friend who is a real video guru: a souped-up Mac Pro, special monitors, input boards up the yin-yang, the whole nine yards. Even he can't believe how fast his projects render on my HP, almost 5 times as fast. I think he's fattening me up with rich dinners so I can't catch him when he runs down the street with my workstation. I hope AMD goes down this road as well; I know they have something similar in their W10000, but a dedicated solution for servers and workstations is fantastic.


    I understand if this is a hobby (you should see what is in my cellar) but from a practical standpoint it is really hard to justify yearly upgrades anymore.

    By the time the next generation of real hardware updates comes out you will see either clock rate increases or far more cores, probably both. I just don't see the attraction of more processor chips in a workstation.


    It's really because I can: more power, fuzzy feelings. There is no doubt the Mac Pro will sustain people for a very long time, but I'm still old school and will always love tinkering and testing new parts. Does your cellar have a water purification system, oxygen scrubbers, underground tunnels linking other cellars in case of nuclear attack? If not, I WIN. Swiss are weird people, huh?


     

  • Reply 469 of 1320
    nhtnht Posts: 4,522member

    Quote:

    Originally Posted by wizard69 View Post



    A very interesting post!

    I'm not sure what sort of programming you do, but many programmers tend to go for as many cores as they can afford. It is pretty easy to leverage all the cores in a machine simply by telling make to use them. At least it is easy to do via the "C" style languages.


     


    No, it's not except in the most trivial of cases.  To correctly parallelize code requires more than just a compiler switch.
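nht's caveat can be shown concretely: independent iterations fan out trivially, but a loop-carried dependency does not survive naive splitting. A small sketch (the recurrence is made up for illustration):

```python
def sequential(xs):
    # A recurrence: each result depends on the one before it.
    acc, out = 0, []
    for x in xs:
        acc = acc * 2 + x
        out.append(acc)
    return out

def naive_parallel(xs, workers=4):
    # What "just split the loop across cores" would compute: each chunk
    # restarts the accumulator at 0, losing the carried state.
    chunk = len(xs) // workers
    out = []
    for w in range(workers):
        out.extend(sequential(xs[w * chunk:(w + 1) * chunk]))
    return out

data = list(range(1, 9))
print(sequential(data))       # correct: [1, 4, 11, 26, 57, 120, 247, 502]
print(naive_parallel(data))   # wrong: every chunk after the first diverges
```

Making this recurrence parallel requires restructuring the algorithm (e.g. a parallel prefix scan), which is exactly the work a compiler switch can't do for you.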


     


    Quote:

    Workstations suffer from the same realities of technology desktops suffer from. That is shrinking processes put more and more functionality into a given amount of space. This is why I see the new Mac Pro as being very forward looking, another process generation shrink with even higher integration and you will be looking at Mac Pros with only a few chips on each motherboard.


     


    Workstation users desire two things: accuracy and speed. Lower power and quieter is nice too. Small isn't really a major factor, and frankly, after all the external chassis and RAID boxes, I dunno that the desktop footprint of the new Mac Pro is really smaller than the old Mac Pro: it's not like you can put these things on top of the Mac Pro, and you can't necessarily put the Mac Pro on top of them. Plus the cabling is a PITA to deal with and a dust ball magnet.


     


    Quote:

    It is a waste of time because it is dead technology.


     


    CUDA is not a dead technology in 2013. It will not be a dead technology in 2014. It strikes me as unlikely even in 2018. Beyond 5 years is anyone's guess.


     


    What is true for TODAY and TOMORROW is that CUDA support in pro-apps is better and more complete.  For example in Premiere Pro Adobe supports both CUDA and OpenCL but only accelerates ray tracing in CUDA.


     


    "Only NVIDIA GPUs provide acceleration for ray-traced 3D compositions.


     


    Kevin Monahan

    Social Support Lead

    Adobe After Effects

    Adobe Premiere Pro

    Adobe Systems, Inc.


    Follow Me on Twitter!"


     


    http://forums.creativecow.net/thread/3/942078


     


    nVidia GPUs have the advantage in GPU compute, and unless they screw things up they'll retain that edge. If they retain it, pros will continue to prefer nVidia solutions. I recall reading somewhere (but do not have a link) that nVidia had around 75% of the workstation GPU market. That's not a very big market, but it's the one relevant to Mac Pros. Why? Drivers. Look at the initial performance results for the FirePro W9000 vs the older Quadro 6000s.


     


    "There's plenty of battles coming through, and we'll closely follow to see just how AMD can lead this battle with NVIDIA. From these initial results, AMD has plenty of work ahead to optimize the drivers in order to get their parts competitive against almost two year old Quadro cards, yet alone the brand new, Kepler-based Quadro K5000. NVIDIA has proven its position as the undisputed leader (over 90% share) and AMD has to go on an aggressive tangent to build its challenge."


     


    http://vr-zone.com/articles/why-amd-firepro-still-cannot-compete-against-nvidia-quadro-old-or-new/17074.html


     


    Okay...they say 90% share here. On the plus side, AMD is probably giving Apple a hell of a price break.  They'll either enter the workstation market in a big way or destroy an nVidia profit center by dumping FirePros at fire-sale prices to Apple.


     


    The drivers have gotten better but nVidia has also filled out their Quadro line with Kepler cards.


     


    For consumer-grade GPUs, the Titan beats the Radeon on double-precision performance for GPGPU tasks.  You lose the safety of ECC RAM, but on the Mac the drivers were pro quality already.


     



     


    The article below has performance graphs for both single and double precision:


     


    http://streamcomputing.eu/blog/2013-06-16/amd-vs-nvidia-two-figures-tell-a-whole-story/
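    The gap in those charts largely tracks theoretical peak throughput: shader cores × clock × 2 (a fused multiply-add counts as two floating-point ops per cycle), scaled by the card's DP:SP ratio. A rough sketch in Python; the core counts, clocks, and DP ratios are approximate published specs used for illustration, not benchmark results:

    ```python
    # Rough peak-FLOPS estimate: cores * clock (GHz) * 2 ops/cycle (FMA),
    # with double precision scaled by the card's DP:SP ratio.

    def peak_gflops(cores, clock_ghz, dp_ratio):
        sp = cores * clock_ghz * 2   # single-precision GFLOPS
        dp = sp * dp_ratio           # double-precision GFLOPS
        return sp, dp

    # Approximate specs (illustrative): the GTX Titan runs GK110's DP units
    # at a 1/3 rate, while ordinary GeForce Kepler cards are capped at 1/24.
    titan_sp, titan_dp = peak_gflops(cores=2688, clock_ghz=0.837, dp_ratio=1 / 3)
    gtx680_sp, gtx680_dp = peak_gflops(cores=1536, clock_ghz=1.006, dp_ratio=1 / 24)

    print(f"Titan:   {titan_sp:.0f} SP GFLOPS, {titan_dp:.0f} DP GFLOPS")
    print(f"GTX 680: {gtx680_sp:.0f} SP GFLOPS, {gtx680_dp:.0f} DP GFLOPS")
    ```

    The SP numbers are within a factor of 1.5 of each other, while the DP numbers differ by more than 10x, which is why the two charts look so different.
    
    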


     


    Quote:

    Why would you even do this? What causes this urgency to upgrade before the first hardware even hits the market? The incremental performance increases you get each year don't justify the upgrades. As you know, Intel has dragged out Xeon updates for a very long time and then not delivered much when the new hardware actually ships. Both AMD and nVidia appear to be on two-year or greater cycles when it comes to real GPU hardware improvements, that is, new-generation cores.


     


    Because by fall, new AMD and nVidia GPUs will be out, like the Quadro K6000, which is the pro version of the Titan and much faster than the K5000.  A full GK110 GPU with all 15 SMX units and 2880 CUDA cores hasn't been released yet either.  Paired with a K20X, that's going to be a powerhouse combo that's cheaper than a pair of K6000s.


     


    Quote:


    I understand if this is a hobby (you should see what is in my cellar), but from a practical standpoint it is really hard to justify yearly upgrades anymore.

    By the time the next generation of real hardware updates comes out, you will see either clock-rate increases or far more cores, probably both. I just don't see the attraction with respect to more processor chips in a workstation.



     


    From a practical standpoint, most companies do not buy you a new machine every year, but they will allow the purchase of a new GPU or more RAM or whatever if you need it.  Paying $3700 for another K20X is still cheaper than replacing an entire $10K+ workstation rig.
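    The upgrade-vs-replace arithmetic is easy to sketch. The K20X and rig prices are the figures quoted above; the resale value of the old workstation is a hypothetical number for illustration:

    ```python
    # Compare adding a compute card to an existing rig vs. replacing the rig.
    K20X_PRICE = 3700        # price of another K20X, as quoted in the thread
    NEW_RIG_PRICE = 10_000   # cost of a replacement workstation (thread figure)
    OLD_RIG_RESALE = 4_000   # hypothetical resale value of the old rig

    add_gpu = K20X_PRICE                       # out-of-pocket to add a card
    swap_rig = NEW_RIG_PRICE - OLD_RIG_RESALE  # net cost of replacing the rig

    print(f"Add a K20X:                     ${add_gpu}")
    print(f"Replace the rig (net of resale): ${swap_rig}")
    ```

    Even crediting a generous resale value against the new rig, the incremental GPU wins; the gap only widens if the old machine resells for less.
    
    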

  • Reply 470 of 1320
    relicrelic Posts: 4,735member


    Thanks for the Nvidia info. I really wasn't going to debate this issue before you brought it up, because OpenCL is a hot item right now due to the new Mac Pro; it's a cool new term that's getting thrown around, with 95% of the members here having never touched it. Heck, before the new Mac Pro very few if any ever talked about OpenCL or CUDA; one mention of it by Apple and boom, it's the next big thing since the invention of the computer itself. I have been using the GPU to offload the CPU for over 5 years now, and you're right, CUDA has a much bigger presence than OpenCL, especially in commercial software. I think this has a lot to do with Nvidia being the dominant player in GPU sales. There are also so many great scripts available for the CUDA libraries; the community also seems larger to me, that is, I can find what I'm looking for much quicker for CUDA, as well as a lot more code examples to copy from. As a predominantly Linux/Unix user, when working with my workstation I have always preferred Nvidia; they have always been more proactive with their drivers than ATI/AMD. AMD has gotten better, but nowhere near what Nvidia has to offer.


     


    Not to say that things won't change after Apple releases the new Mac Pro, but any major 3D software that's available for OS X using OpenCL will also have a CUDA cousin on other platforms. And Nvidia cards can still run any OpenCL-only software, if any is going to be available. CUDA was first to be available in Mari on Linux and they will continue to use it; if you want to see a real heated discussion on this subject, check out Mari's user group over at linkedin.com, where you will notice the ratio of CUDA to OpenCL users is easily 5 to 1. When is Mari finally going to be available for OS X? I'd be interested to see the interface.


     


    As for the rest of The Foundry's great products, unfortunately only Mari will be available for OS X.


     


    I really don't have a problem with who wins this battle, OpenCL or CUDA. Both are awesome, and now that Apple's finally coming to the party it's only going to get better. Don't take what I said above to be anything more than an observation. I use CUDA because it was the easiest platform for me to get started with in terms of programming, and most of the software I have, including Adobe's, seemed to support it better.


     


    Please don't be mean to me for using CUDA.


     


    Oh and I want a K20 so badly, I will wait until I find a used one on eBay though.

  • Reply 471 of 1320
    nhtnht Posts: 4,522member

    Quote:

    Originally Posted by Relic View Post


     


    Oh and I want a K20 so badly, I will wait until I find a used one on eBay though.



     


    The interesting thing is that AMD killed their Firestream line and the FirePros pull double duty against both the Quadros and Teslas.


     


    It makes a certain amount of sense to differentiate GPU compute from graphics display.  It also allows nVidia to price the Quadros and Teslas to bracket the FirePros.  When the K6000 appears I expect it to be pricier than the equivalent FirePro, but the K20 to get a little price drop to stay below W9000 pricing.


     


    A dual K6000 + K20X combo might still price favorably against dual W9000s.  The big deal is the Titans and their DP performance in a consumer grade card.  $1000 is pricey for a consumer card but cheap as a K20 stand-in.


     


    Unless you need distributed GPU compute, the Titan is a viable alternative if you're doing CUDA development and don't want to pony up for a $3K card.  I think it's cheaper than the refurb K20s I've seen.  I might not run it 24/7 on a compute workload though.


     


    http://www.anandtech.com/show/6774/nvidias-geforce-gtx-titan-part-2-titans-performance-unveiled/3


     


    The Titan is well named.

  • Reply 472 of 1320
    relicrelic Posts: 4,735member


    The guys over at the Mari forums are just loving the Titan; for the price of one W9000, K20, or Quadro 6000 you can buy three of those bad boys and have over 13 teraflops at your disposal. Not to mention a really kickass gaming rig.
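    The arithmetic behind that claim, assuming roughly 4.5 single-precision TFLOPS per Titan and the approximate street prices of the day (both figures are illustrative assumptions, not measured numbers):

    ```python
    # Three GTX Titans vs. one pro card, single-precision peak (illustrative).
    TITAN_TFLOPS_SP = 4.5   # ~2688 cores x 837 MHz x 2 ops/cycle, single precision
    TITAN_PRICE = 1000      # approximate street price of a Titan
    PRO_CARD_PRICE = 3400   # hypothetical W9000/K20-class street price

    titans = PRO_CARD_PRICE // TITAN_PRICE   # Titans you could buy instead
    total_tflops = titans * TITAN_TFLOPS_SP
    print(f"{titans} Titans ~= {total_tflops} SP TFLOPS")
    ```

    So the total is on the order of 13 TFLOPS single precision; the caveat is that it only counts if your workload actually scales across three cards.
    
    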

  • Reply 473 of 1320
    macroninmacronin Posts: 1,174member

    Quote:

    Originally Posted by Relic View Post


     


    The rest of The Foundry great products, unfortunatly only Mari will be available for OSX image


     



    Actually, EVERYTHING on your list of The Foundry products is available on Mac OS X, excepting KATANA, which is Linux (CentOS/RHEL 5.4) only…


     


    (Of course, MARI is not CURRENTLY on OS X, but it will be Very Soon…)

  • Reply 474 of 1320
    hmmhmm Posts: 3,405member

    Quote:

    Originally Posted by Relic View Post


    Thanks for the Nvidia info, I really wasn't going to debate this issue before you brought it up because OpenCL is a hot item right now due to the new MacPro's, it's a cool new term that's getting thrown around with 95% of the members here having never touched it. Heck before the new MacPro very few if any ever talked about OpenCL or CUDA, one mention of it by Apple and boom, it's the next bing thing since the invention of the computer itself. I have been using the GPU to offset the CPU for over 5 years now and your right, CUDA has a much bigger presence then OpenCL, especially in commercial software. I think this has a lot to do with Nvidia being the dominant in GPU sales. There are also soooo many great scripts available for the CUDA libraries, the community also seems larger to me, that is I can find what I'm looking for much quicker for CUDA, as well as a lot more code examples to copy off of. As a predominant Linux/Unix user when working with my Workstation I have always preferred Nvidia, they have always been more pro-active with their drivers then ATI/AMD, AMD has  gotten better but no where near what Nvidia has to offer.


     


    Now not to say that after Apple releases the new Mac Pro things will change but any major 3D software that's available for OSX using OpenCL will also have a CUDA cousin on other platforms. That and Nvidia can still run any OpenCL only software, if any are going to be available. CUDA was first to be available on Mari/Linux and they will continue to use it, if you want to see a real heated discussion on this subject check out Mari's usergroup over at linkedin.com, you will notice the ratio of CUDA to OpenCL users to be easily 5 to 1. When is Mari finially going to be available for OSX, I'd be interested to see the interface?


     


    The rest of The Foundry great products, unfortunatly only Mari will be available for OSX image


     



    That would be because NVidia nai

  • Reply 475 of 1320
    macroninmacronin Posts: 1,174member

    Quote:

    Originally Posted by hmm View Post


    That would be because NVidia nai



     


    ???????

  • Reply 476 of 1320
    relicrelic Posts: 4,735member

    Quote:

    Originally Posted by hmm View Post


    That would be because NVidia nai



     



    image
  • Reply 477 of 1320
    hmmhmm Posts: 3,405member

    Quote:

    Originally Posted by Relic View Post


     



    image




    The forum software has been butchering my posts for some reason. On the last two I forgot to copy everything prior to submitting them. NVidia only tuned things for their own hardware, which allowed for some amount of stability in implementation. I suspect it has been a success for them in marketing things like Teslas. You can check the Top500 list and see that some of those systems are built around Teslas.


     


    As for The Foundry, they've always had some Mac support. At least a portion of the Nuke team came from Shake. Modo was from Luxology and has always been available on the Mac. The guys who started Luxology were from Lightwave, which is also available on OS X, so I know at least some of those titles are available on OS X. I've always been a fan of Nuke even if I don't regularly get to use it. After Effects works for simpler stuff, even with Adobe's sloppy, frankensteined layer system. Nodes are just much more efficient, and when you look away from Adobe software, you can often avoid baking certain things.


     


    Anyway it's not as complete as what I wrote before. Hopefully it works this time.

  • Reply 478 of 1320
    bergermeisterbergermeister Posts: 6,784member

    Quote:

    Originally Posted by Marvin View Post





    I imagined there could be an office with say 5-10 Mac Pros connected via Thunderbolt and pretty much what you see happening in the video where the CPU/GPU cores get used in the background unknown to each user and an individual user would just assume they had access to a mini compute farm of say 100 processors.



    Apple's solution could largely be zero-config. It would just have a checkbox in Sharing called Compute Sharing and you'd plug in the Thunderbolt cable. iMac users could buy spare Minis for extra power etc.


     


     


    That would be great.

  • Reply 479 of 1320
    hmmhmm Posts: 3,405member

    Quote:

    Originally Posted by Bergermeister View Post


     


     


    That would be great.





    You can already allocate tasks to other machines, but it's not as much of a cluster as Marvin is suggesting. Your system will not see it as just another CPU; it's on the end of something that is effectively a PCI bridge, although I'm not entirely sure how the system sees it. It can't be treated like something that is just over QPI, and there would be no shared memory address space. I think there are certain things that hold back lighter systems in real-world implementations of this stuff. For the users who would actually invest in their own server farm, RAM would be an issue for some. Software licenses are another issue, as all software is licensed differently. A lot of rendering software has node licenses, where past a couple of machines you pay extra to use it on additional machines that are solely dedicated to rendering. That isn't something that would really affect a single user, but in multi-user environments it may influence what level of hardware they purchase.



     

  • Reply 480 of 1320
    relicrelic Posts: 4,735member

    Quote:

    Originally Posted by hmm View Post




    The forum software has been butchering my posts for some reason. On the last two I forgot to copy everything prior to submitting them. NVidia only tuned things for their own hardware, which allowed for some amount of stability in implementation. I suspect it has been a success for them in marketing things like teslas. You can check top500 and see that some of them are built around the use of Teslas.


     


    As for the foundry, they've always had some mac support. At least a portion of the Nuke team came from Shake. Modo was from Luxology and has always been available on the Mac. The guys that started Luxology were from Lightwave, which is also available on OSX. I know at least some of those titles are available on OSX. I've always been a fan of nuke even if I don't regularly get to use it. After Effects works for simpler stuff, even with Adobe's sloppy frankensteined layer system. Nodes are just much more efficient, and when you look away from Adobe software, you can often avoid baking certain things.


     


    Anyway it's not as complete as what I wrote before. Hopefully it works this time.



     


    You're right about more apps having OS X support; I completely forgot about Modo and Nuke. I replaced a 6-year-old Lightwave version with Modo, though they require CUDA to calculate certain nodes, and your rendering times are much, much faster with it. Interestingly, I think Modo runs better on OS X than Lightwave does, but I still went for the Linux version because of the Tesla support. The Foundry recommends Red Hat 6 or CentOS 6 and Nvidia GPUs for all of their products; hopefully they will start paying more attention to the new Mac Pro. It will depend on how popular they are, I guess, so fingers crossed everyone.
