
Apple throws out the rulebook for its unique next-gen Mac Pro - Page 12

post #441 of 1290
Quote:
Originally Posted by Relic View Post

The Quad Core i7-3770S found in the iMac has a CPU score of 9098; a Xeon six-core E5649 has a CPU score of 7037 and costs twice as much. However, with the mixture of a 12-core CPU, fast bus and PCIe SSD, there is no doubt the Mac Pro will be ridiculously fast. I still prefer a multiple-socket machine; for the price of one 12-core Xeon I can buy two 6-, maybe 8-core CPUs with a rating of 10,000 and above and completely wipe the floor with it. My current HP Z800 has two Xeon X5660s with a CPU score of 8459 per processor; the 12-core Mac Pro Xeon will most likely have a CPU score between 12,000 and 14,000.

The top current Mac Pro uses X5675s, which are higher clocked than the X5660s that you have. The Passmark score incorrectly lists it lower. The new Mac Pro is expected to be 10% faster than this, which would be a score above 18,000. Two X5660s would be more expensive than the 12-core in the new Mac Pro. The E5-26xx line tops out at $2057. Your two X5660s are priced over $2400, though you may have paid less.

The E5-1650 would be good value but you can't use more than one. Two of any of the other processors would be priced above $2057. There's no doubt that you can configure a two-socket machine to perform faster than a single-socket one, but it's more expensive. Two of the Ivy Bridge equivalents of the E5-2687W would potentially 'wipe the floor' with the Mac Pro, but the CPUs would be close to double the price as they are $1885 each.

It sounds like you are trying too hard to validate your own preference for buying a hacked-together piece of junk, as most people who do the same try to. Gamers do the same with consoles, e.g. why buy a PlayStation when you can knock together an awesome gaming PC for $500? If that's your preference, that's OK, but you're kidding yourself if you think everyone else is making poorer choices.
Quote:
Originally Posted by Relic View Post

I really like having a machine that can grow with my needs instead of chucking it out when I need something faster. Even if Apple allows us to upgrade the video cards or SSD, due to their design you will always have to buy expensive upgrades from them. Is the CPU soldered to the motherboard? I sure hope not, but I wouldn't be surprised if it was.

You don't 'chuck it out', you sell it second-hand. Unlike the hacked-together abominations you'd be accustomed to dealing with, Macs have a thing called resale value. The problem with people who do things the way you do is that if you ever come to sell a machine, your target audience is people like you, who know they can hack together their own computer anyway, so the resale value tanks well below the lowest build cost of a new machine.

As for the upgradeability, Apple might solder things in place, but these are machines with CPUs costing $2k. It would make more sense from their point of view to leave them removable so that the GPUs and CPUs can be reconfigured if there's a return. This could be done by switching out the boards of course, but that still gives an opportunity for an upgrade, as resellers can buy off-the-shelf Mac Pros and sell the separate individual parts.
post #442 of 1290

I highly doubt we will see either of your items on a Mac Mini spec sheet anytime soon. In fact I can count the number of workstations that have built-in clustering on one hand. If you're looking for that kind of tech, go with an HP Z series workstation. Any unused CPU or GPU cycles from a machine on a network can be utilized for your evil-doing.

When I looked up "Ninjas" in Thesaurus.com, it said "Ninja's can't be found" Well played Ninjas, well played.
post #443 of 1290
Quote:
Originally Posted by Marvin View Post

It sounds like you are trying too hard to validate your own preference for buying a hacked-together piece of junk, as most people who do the same try to.
You don't 'chuck it out', you sell it second-hand. Unlike the hacked-together abominations you'd be accustomed to dealing with, Macs have a thing called resale value. The problem with people who do things the way you do is that if you ever come to sell a machine, your target audience is people like you, who know they can hack together their own computer anyway, so the resale value tanks well below the lowest build cost of a new machine.
 

 

No, not trying to validate, just trying to talk myself into buying a Mac Pro but coming up without an excuse. My HP Z800 and everything in it is an HP component except for the CPUs. The CPUs aren't marked HP as I got them out of a broken IBM server that I paid $300 for at an auction; $300 for $3,600 worth of chips is a steal, though it was also a gamble. Banks have these auctions all the time and they're worth going to, as this case shows. I understand you simply don't chuck it out but sell it; I'm the resale queen, that's how I'm able to buy a new laptop every 6 months; for the price of one I can have six. However, a desktop workstation is a little different: I can maintain a pretty capable machine for a very long time and still keep up with the latest models by buying faster components.

When I looked up "Ninjas" in Thesaurus.com, it said "Ninja's can't be found" Well played Ninjas, well played.
post #444 of 1290
Quote:
Originally Posted by TheOtherGeoff View Post

Methinks too much overlap with the next iMac. Skating to where the puck will be, I do think your 'good' Mac won't exist, or if it does, it will be at $2299 (and the current TOL iMac will drop to $1799). They have to maintain that $500 differential. One can only assume that PCIe SSDs will be across all Mac products by fall, with a PCIe Fusion drive as well (save for the Mac Pro).

....

Apple traditionally doesn't cut prices on its computers. IMO, if we're lucky the entry price of the new Mac Pro will remain at the current level; if we're not, then it'll be in the $2750-$3000 range. Of course, with the limited information available, any ideas range from speculation at best all the way to wishful thinking and wild guesses.
post #445 of 1290
Quote:
Originally Posted by Relic View Post

I'm the resale queen, that's how I'm able to buy a new laptop every 6 months; for the price of one I can have six. However, a desktop workstation is a little different: I can maintain a pretty capable machine for a very long time and still keep up with the latest models by buying faster components.

You can save a lot of money if you get some of the deals like the ones you manage to find. For people buying from retail outlets like Newegg, the saving is almost non-existent.

Say, for example, there's a Mac Pro at $3k with a single 6-core CPU and dual FirePro graphics, and it lasts 2 years before an upgrade is needed. They'd sell the hardware second-hand for about $1700 and buy the new 6-core for $3000, a net cost of $1300.

Now, say that someone bought a similar spec machine for $2300 from another vendor and after 2 years, they upgrade the CPU. So they pay $600 for a new retail CPU and install it. Maybe they can get some money for the old CPU too.

The Mac Pro also has faster GPUs though and likely a new memory standard (DDR4) and it has a full warranty. Not only that, the resale value of the new Mac Pro at that point is ~$3k as they just bought it, the resale value of the other one is less than $2300. Let's say it's $1800.

So, the relative exchanges over 2 years would be:

Mac Pro: $3000 - $1700 + $3000 = $4300 with a retained value of $3000 so net loss = $1300
Junk: $2300 + $600 - $200 = $2700 with a retained value of $1800 so net loss = $900

That's assuming you'd get $200 for the 2 year old CPU and you can resell a 2 year old machine with a self-upgraded CPU for $1800, which is optimistic. Even in pretty much the best case, the saving is just $400 vs the Mac and the Mac has new GPUs and parts and a full warranty. The big difference is in the significantly higher outlay but the losses incurred over time are pretty close and I reckon you'd have a better user experience during the two years of owning the Mac.
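To make that arithmetic easy to poke at, here's a minimal sketch of the same two scenarios in Python; every dollar figure is just the assumption from above, not market data.

Code:
# Net loss over 2 years, using the figures assumed above.
def net_loss(outlay, resale_income, retained_value):
    # Total cash out, minus what you got back, minus what you still own.
    return outlay - resale_income - retained_value

# Mac Pro: buy at $3000, sell at $1700 after 2 years, buy again at $3000.
mac = net_loss(outlay=3000 + 3000, resale_income=1700, retained_value=3000)

# Self-build: buy at $2300, add a $600 CPU, sell the old CPU for $200.
diy = net_loss(outlay=2300 + 600, resale_income=200, retained_value=1800)

print(mac, diy)  # 1300 900 -> the $400 gap described above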

Some people will see it as being worse value than before, and it can be if you get a great deal on CPUs, but you pretty much have to for it to be worth doing. I think people need to get away from the notion that buying new computers regularly is a bad value proposition, and this Mac Pro design will help drive that. It helps Apple, as they get a higher sales volume, and by extension customers, as the demand for new models keeps the second-hand market healthy and will lead to a better ownership experience. The self-upgrade route is the quickest way to kill that industry because the sales volumes for the box manufacturers just collapse. Eventually they just won't bother. There was a report out just today about Samsung, which they've denied:

http://bgr.com/2013/06/25/samsung-pc-business-continues/

The report could be baseless, but that market is shrinking and it's not going to get better.
Quote:
Originally Posted by Relic 
trying to talk myself into buying a Mac Pro but coming up without an excuse

The engineering looks pretty impressive. They are highlighting the noise level, saying it's "astonishingly quiet". You could replace that noisy server in the basement with a machine that can sit by your bedside while you sleep, depending on how much data it's serving of course. I'm hoping they've got a good deal on the graphics cards because that would really sell it for OpenCL-supported apps.
post #446 of 1290

I wouldn't sell, I would just keep adding nodes to the render farm… Which would be especially sweet once developers start working OpenCL number crunching into the render apps… Each upgrade not only adds another CPU to the render farm, but also adds two (GP)GPUs…!

Late 2009 Unibody MacBook (modified)
2.26GHz Core 2 Duo CPU/8GB RAM/60GB SSD/500GB HDD
SuperDrive delete
post #447 of 1290

Thank you for the insightful post, it was very educational. Though I loved building machines when I was younger, I don't do it as often due to a busy family life; I want a plug-and-play solution, and as a professional programmer who makes my living with computers I want the best tools. Traditionally this meant a Mac; I own an iMac and a MacBook Air, but I've been using my ThinkPad X230T and Tablet II more often (the Lenovos are from work). I'm also a computer enthusiast who enjoys playing around with all sorts of machines. I think I bought the Sun server that resides in my bomb shelter (basement) more for the uniqueness and future tinkering than its ultimate usage. I mean, what normal person has an enterprise solution for their home server needs? This gal, that's who.

 

I was never able to talk myself into buying a workstation until recently; the prices for these things are just too far into the stratosphere to justify, especially when it would be just a hobby machine, and as a programmer I don't need anything even close to that kind of performance. However, I still always wanted one. While shopping for a new power supply for my beloved PowerBook 100 (Poky) on eBay, I subconsciously found myself looking at workstations, from IBM at first, then HP. I couldn't believe that I could have one so inexpensively.

 

So I discussed it with my husband and he agreed on a set price of $2,500, and was just relieved that his wife didn't have any other addictions like shoes, clothes or crack. It started with the HP Z800 and then progressed to updating and adding components; I followed the HP manual to the letter and only used components that were listed and from HP. The coolest finds were the Tesla cards, still in the box, never used; even the Quadro card was in the box. You mentioned junk computers, but this guy is as original as if you had bought it directly from HP. I even found the Windows 7 for Workstations recovery DVD, though I use CentOS the most because of the Tesla cards. Did you know HP has a built-in cluster solution? If there is another HP Z on the network you can use its CPU and GPU for extra performance. How cool is that? I now know why DreamWorks and Pixar use them. The case is also very cool, not in terms of looks as it's very plain, but for maintaining the components it's the easiest I have ever used.

 

So as a hobby workstation I think I have one heck of a machine, and I'm not done with it either; I still want 3 more HP SAS 15,000 RPM drives. I lend out time to my video-editing friends and get paid in dinners. I'm learning how to program in CUDA, but I have to say it's a waste of time; the libraries for Python are fantastic. Actually, on a side note, I noticed a lot of talk around here about OpenCL; the number of people who will actually use it directly is minuscule. AMD even thinks this; the libraries for Python, C++, etc., will do anything you need done. Only the programmers who are writing these libraries really need to learn OpenCL or CUDA. Also, OpenCL can be found on almost every graphics card, even an Intel HD 4000. I first started to experiment with it on my ThinkPad.
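Just to show what I mean about the Python libraries doing the heavy lifting, here's a minimal sketch using PyCUDA's gpuarray; the array size is arbitrary, and you never write a line of kernel code yourself.

Code:
# Elementwise GPU math via PyCUDA's gpuarray; no hand-written CUDA C anywhere.
import numpy as np
import pycuda.autoinit            # picks a CUDA device and sets up a context
import pycuda.gpuarray as gpuarray

a = gpuarray.to_gpu(np.random.rand(1000000).astype(np.float32))
b = gpuarray.to_gpu(np.random.rand(1000000).astype(np.float32))

c = (a * b + 2.0).get()           # the kernel is generated and launched for you
print(c[:5])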

 

Back to the Mac Pro: my biggest scare is paying so much money for a machine that I won't be able to upgrade a year after I purchase it. I know a lot of you will be fine with this, but my addictive personality will definitely want to replace or add something, and the lack of a second CPU socket is probably the most disconcerting, as it would have been awesome to start with one and eventually grow into a second. Oh well, we can't all be fully satisfied.

 

Thank you again for spending time with me and replying with such a nice post. :)


Edited by Relic - 6/25/13 at 2:44pm
When I looked up "Ninjas" in Thesaurus.com, it said "Ninja's can't be found" Well played Ninjas, well played.
post #448 of 1290
Quote:
Originally Posted by Marvin View Post



The Mac Pro also has faster GPUs though and likely a new memory standard (DDR4) and it has a full warranty. Not only that, the resale value of the new Mac Pro at that point is ~$3k as they just bought it, the resale value of the other one is less than $2300. Let's say it's $1800.
 

I don't feel like arguing with you on resale value math today, so I am leaving that out for now. A couple of things here on the quoted portion. Faster isn't always the goal with workstation GPUs. They can customize the drivers however they like here, as these are made to run OSX. Normally workstation GPUs have to be stable in OpenGL-driven apps and maintain speed at double-precision floating-point math. They're used a lot for running CAD or 3D animation apps. Many of those run just fine with almost any GPU if your scene or workspace is lighter. You won't notice a huge difference unless it's full of highly tessellated geometry. The realtime shaders used m

 

 

Quote:
So I discussed it with my husband and he agreed on a set price of $2,500, and was just relieved that his wife didn't have any other addictions like shoes, clothes or crack. It started with the HP Z800 and then progressed to updating and adding components; I followed the HP manual to the letter and only used components that were listed and from HP. The coolest finds were the Tesla cards, still in the box, never used; even the Quadro card was in the box. You mentioned junk computers, but this guy is as original as if you had bought it directly from HP. I even found the Windows 7 for Workstations recovery DVD, though I use CentOS the most because of the Tesla cards. Did you know HP has a built-in cluster solution? If there is another HP Z on the network you can use its CPU and GPU for extra performance. How cool is that? I now know why DreamWorks and Pixar use them. The case is also very cool, not in terms of looks as it's very plain, but for maintaining the components it's the easiest I have ever used.

 

I've never auction-hunted computer parts. That is just awesome.

 

 

Quote:
So as a hobby workstation I think I have one heck of a machine, and I'm not done with it either; I still want 3 more HP SAS 15,000 RPM drives. I lend out time to my video-editing friends and get paid in dinners. I'm learning how to program in CUDA, but I have to say it's a waste of time; the libraries for Python are fantastic. Actually, on a side note, I noticed a lot of talk around here about OpenCL; the number of people who will actually use it directly is minuscule. AMD even thinks this; the libraries for Python, C++, etc., will do anything you need done. Only the programmers who are writing these libraries really need to learn OpenCL or CUDA. Also, OpenCL can be found on almost every graphics card, even an Intel HD 4000. I first started to experiment with it on my ThinkPad.

 

Also awesome.

post #449 of 1290
Quote:
Originally Posted by Relic View Post

Did you know HP has a built-in cluster solution? If there is another HP Z on the network you can use its CPU and GPU for extra performance. How cool is that? I now know why DreamWorks and Pixar use them.

I don't know if Pixar and DreamWorks would use that - they tend to have their own software, e.g. http://renderman.pixar.com/view/pixars-tractor - but that's the kind of thing I was hoping Apple would implement over Thunderbolt. There's a video that seems to demonstrate HP's solution, although this could mainly be demoing Parallels leveraging HP's cluster software:

[embedded video]
I imagined there could be an office with, say, 5-10 Mac Pros connected via Thunderbolt, and pretty much what you see happening in the video, where the CPU/GPU cores get used in the background unbeknownst to each user, and an individual user would just assume they had access to a mini compute farm of, say, 100 processors.

Apple's solution could largely be zero-config. It would just have a checkbox in Sharing called Compute Sharing and you'd plug in the Thunderbolt cable. iMac users could buy spare Minis for extra power etc.
Quote:
Originally Posted by Relic View Post

So as a hobby workstation I think I have one heck of a machine

That's certainly one area where Apple's devices aren't that appealing. Woz has mentioned this at times because he prefers to be able to get into machines and mess around. I think that's why some people liked the Mac Pro too because it was really the last machine they had that still let you do that. But I think it's best that this is left to cheaper machines. The Mac Pro is just too expensive to encourage people to open it up and mess around with it beyond basic upgrades like memory.
Quote:
Originally Posted by Relic View Post

I'm learning how to program in CUDA, but I have to say it's a waste of time; the libraries for Python are fantastic. Actually, on a side note, I noticed a lot of talk around here about OpenCL; the number of people who will actually use it directly is minuscule. AMD even thinks this; the libraries for Python, C++, etc., will do anything you need done. Only the programmers who are writing these libraries really need to learn OpenCL or CUDA.

It's definitely very specialized and I agree that it will be limited in use. A few larger software vendors seem to be picking it up, especially for image processing software. It'll be used for anything that has heavy computations:

[embedded videos]
The second video there is rendered using OpenCL (not the physics). The third shows CPU vs dual-GPU: 20 seconds for the GPUs vs nearly 3 minutes for the CPU. OpenCL can also be used in media codecs for faster encoding performance, and that's where I think users will see the largest benefits. HandBrake announced there will be some OpenCL acceleration coming:

http://handbrake.fr/news.php

For people dealing with video a lot, it will help to cut down transcode times. I think that OpenCL is at least getting more traction than AltiVec did, and the performance improvements in Adobe's software are huge. There's little need to use it directly if the libraries you call use it, and the benefits then take no extra effort.
post #450 of 1290
A very interesting post!
Quote:
Originally Posted by Relic View Post

Thank you for the insightful post, it was very educational. Though I loved building machines when I was younger, I don't do it as often due to a busy family life; I want a plug-and-play solution, and as a professional programmer who makes my living with computers I want the best tools. Traditionally this meant a Mac; I own an iMac and a MacBook Air, but I've been using my ThinkPad X230T and Tablet II more often (the Lenovos are from work). I'm also a computer enthusiast who enjoys playing around with all sorts of machines. I think I bought the Sun server that resides in my bomb shelter (basement) more for the uniqueness and future tinkering than its ultimate usage. I mean, what normal person has an enterprise solution for their home server needs? This gal, that's who.

I'm not sure what sort of programming you do, but many programmers tend to go for as many cores as they can afford. It is pretty easy to leverage all the cores in a machine simply by telling make to use them. At least it is easy to do via the "C" style languages.

To a lesser extent IDEs can leverage those cores too. I suspect you already know this.
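For instance, a parallel build really is a one-flag affair. A minimal sketch, shelling out to make with one job per core (the project path is hypothetical):

Code:
# Drive a parallel build with one make job per core.
import multiprocessing
import subprocess

jobs = multiprocessing.cpu_count()          # e.g. 12 on a 12-core machine
subprocess.run(["make", "-j%d" % jobs],     # make fans the targets out itself
               cwd="/path/to/project",      # hypothetical project location
               check=True)                  # raise if the build fails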
Quote:

 



I was never able to talk myself into buying a workstation until recently; the prices for these things are just too far into the stratosphere to justify, especially when it would be just a hobby machine, and as a programmer I don't need anything even close to that kind of performance. However, I still always wanted one. While shopping for a new power supply for my beloved PowerBook 100 (Poky) on eBay, I subconsciously found myself looking at workstations, from IBM at first, then HP. I couldn't believe that I could have one so inexpensively.

Workstations are subject to the same technology realities desktops are: shrinking processes put more and more functionality into a given amount of space. This is why I see the new Mac Pro as very forward-looking; another process-generation shrink with even higher integration and you will be looking at Mac Pros with only a few chips on the motherboard.
Quote:

 



So I discussed it with my husband and he agreed on a set price of $2,500, and was just relieved that his wife didn't have any other addictions like shoes, clothes or crack. It started with the HP Z800 and then progressed to updating and adding components; I followed the HP manual to the letter and only used components that were listed and from HP. The coolest finds were the Tesla cards, still in the box, never used; even the Quadro card was in the box. You mentioned junk computers, but this guy is as original as if you had bought it directly from HP. I even found the Windows 7 for Workstations recovery DVD, though I use CentOS the most because of the Tesla cards. Did you know HP has a built-in cluster solution? If there is another HP Z on the network you can use its CPU and GPU for extra performance. How cool is that? I now know why DreamWorks and Pixar use them. The case is also very cool, not in terms of looks as it's very plain, but for maintaining the components it's the easiest I have ever used.

Clustering solutions are offered up by a couple of different vendors. Frankly, I'm not sure if Apple even had clustering in mind when they designed this Mac Pro. There has been talk about linking via TB, but I'm not even sure if TB supports peer-to-peer networking of this sort.

It should be noted that Apple had Xgrid but that went nowhere. In fact they have more or less dropped it. I honestly don't know why Xgrid was dropped, but I'd have to suspect that there were few users who really leveraged it. I've wondered if we would see a rebirth of Xgrid that supports clustering a few Mac Pros over TB; if Apple is doing this they have been very quiet about it.
Quote:

 



So as a hobby workstation I think I have one heck of a machine, and I'm not done with it either; I still want 3 more HP SAS 15,000 RPM drives. I lend out time to my video-editing friends and get paid in dinners.

There is no better payment!
Quote:
I'm learning how to program in CUDA, but I have to say it's a waste of time; the libraries for Python are fantastic.
It is a waste of time because it is dead technology.
Quote:
Actually, on a side note, I noticed a lot of talk around here about OpenCL; the number of people who will actually use it directly is minuscule. AMD even thinks this; the libraries for Python, C++, etc., will do anything you need done. Only the programmers who are writing these libraries really need to learn OpenCL or CUDA. Also, OpenCL can be found on almost every graphics card, even an Intel HD 4000. I first started to experiment with it on my ThinkPad.
This isn't really a surprise; it is the way the computer world works: specialists write the libraries and "normal" programmers use them. You don't see many programmers trying to write a replacement for the STL. Even Apple admonishes programmers to avoid writing directly for hardware and to instead prefer libraries, thus the Accelerate framework and other performance-computing libraries.

The big difference with OpenCL is that an extremely talented programmer can leverage OpenCL in ways that libraries can't. Libraries are seldom perfect, so OpenCL gives the programmer a direct option. For most people, though, the smart move is to use libraries or other programming abstractions.
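For a sense of what that direct option looks like, here is a minimal PyOpenCL sketch with a hand-written kernel that sums two vectors; this is the sort of code the library specialists write so everyone else doesn't have to.

Code:
# A hand-written OpenCL kernel driven from Python via PyOpenCL.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

a = np.random.rand(100000).astype(np.float32)
b = np.random.rand(100000).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prog = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

prog.add(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a + b)   # matches the CPU result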
Quote:

 



Back to the Mac Pro: my biggest scare is paying so much money for a machine that I won't be able to upgrade a year after I purchase it.

Why would you even do this? What causes this urgency to upgrade before the first hardware even hits the market? The incremental performance increases you get each year don't justify the upgrades. As you know, Intel has dragged out Xeon updates for a very long time and then not delivered much when the new hardware actually ships. Both AMD and Nvidia appear to be on two-year or greater cycles when it comes to real GPU hardware improvements, that is, new-generation cores.

I understand if this is a hobby (you should see what is in my cellar), but from a practical standpoint it is really hard to justify yearly upgrades anymore.
Quote:
I know a lot of you will be fine with this, but my addictive personality will definitely want to replace or add something, and the lack of a second CPU socket is probably the most disconcerting, as it would have been awesome to start with one and eventually grow into a second. Oh well, we can't all be fully satisfied.
By the time the next generation of real hardware updates comes out you will see either clock rate increases or far more cores, probably both. I just don't see the attraction with respect to more processor chips in a workstation.
Quote:

 



Thank you again for spending time with me and replying with such a nice post. :)


post #451 of 1290
Quote:
Originally Posted by wizard69 View Post

A very interesting post!
I'm not sure what sort of programming you do, but many programmers tend to go for as many cores as they can afford. It is pretty easy to leverage all the cores in a machine simply by telling make to use them. At least it is easy to do via the "C" style languages.
I started out as a FORTRAN programmer, if you can believe that, then C#, C++; now I'm a jack of all trades, and I personally use Python, Perl, PHP and COBOL the most. I work for the largest bank in Switzerland; their backbone is Unix (Solaris/HP-UX). I'm a department head with 12 programmers who are in charge of PnL calculations, clearing data, FIX protocol, trading systems and web services. As a woman in a predominantly male field you can imagine how hard it was for me to climb to this position.
To a lesser extent IDEs can leverage those cores too. I suspect you already know this.
Workstations are subject to the same technology realities desktops are: shrinking processes put more and more functionality into a given amount of space. This is why I see the new Mac Pro as very forward-looking; another process-generation shrink with even higher integration and you will be looking at Mac Pros with only a few chips on the motherboard.
Clustering solutions are offered up by a couple of different vendors. Frankly, I'm not sure if Apple even had clustering in mind when they designed this Mac Pro. There has been talk about linking via TB, but I'm not even sure if TB supports peer-to-peer networking of this sort.
You know, I've seen a lot of posts about the possibilities of linking 5 or more Mac Pros via Thunderbolt to cluster, but I have seen no sign of this coming to light. I too don't know if Thunderbolt is capable of such things. It doesn't make sense in a large office though, as machines are usually scattered pretty far apart, if not on different floors. The cost of such lengthy Thunderbolt cables would make this one expensive endeavor; they probably don't even make them.

It should be noted that Apple had Xgrid but that went nowhere. In fact they have more or less dropped it. I honestly don't know why Xgrid was dropped, but I'd have to suspect that there were few users who really leveraged it. I've wondered if we would see a rebirth of Xgrid that supports clustering a few Mac Pros over TB; if Apple is doing this they have been very quiet about it.
There is no better payment!
It is a waste of time because it is dead technology.
This isn't really a surprise; it is the way the computer world works: specialists write the libraries and "normal" programmers use them. You don't see many programmers trying to write a replacement for the STL. Even Apple admonishes programmers to avoid writing directly for hardware and to instead prefer libraries, thus the Accelerate framework and other performance-computing libraries.
I don't know that CUDA is a dead language; the number of users on the support/programming forums would tell me otherwise. Plus, Nvidia is the dominant force in the GPU industry, not to mention that Nvidia has a very large enterprise GPU clustering division. I believe CUDA is here to stay. Me personally, I don't really care, CUDA or OpenCL. Nvidia supports OpenCL but still decided to write their own language, so they must have had a good reason for doing so. I still prefer Nvidia graphics cards because of their Linux support; AMD is just awful on this OS.
The big difference with OpenCL is that an extremely talented programmer can leverage OpenCL in ways that libraries can't. Libraries are seldom perfect, so OpenCL gives the programmer a direct option. For most people, though, the smart move is to use libraries or other programming abstractions.
Why would you even do this? What causes this urgency to upgrade before the first hardware even hits the market? The incremental performance increases you get each year don't justify the upgrades. As you know, Intel has dragged out Xeon updates for a very long time and then not delivered much when the new hardware actually ships. Both AMD and Nvidia appear to be on two-year or greater cycles when it comes to real GPU hardware improvements, that is, new-generation cores.
Yeah, I have to tell you I'm loving my Tesla cards. I have a friend who is a real video guru: a souped-up Mac Pro, special monitors, input boards up the yin-yang, the whole nine yards. Even he can't believe how fast his projects render on my HP, almost 5 times as fast. I think he's fattening me up with rich dinners so I can't catch him when he runs down the street with my workstation. I hope AMD goes down this road as well; I know they have something similar in their S10000, but a dedicated solution for servers and workstations is fantastic.
I understand if this is a hobby (you should see what is in my cellar), but from a practical standpoint it is really hard to justify yearly upgrades anymore.
By the time the next generation of real hardware updates comes out you will see either clock rate increases or far more cores, probably both. I just don't see the attraction with respect to more processor chips in a workstation.
It's really because I can: more power, fuzzy feelings. There is no doubt the Mac Pro will sustain people for a very long time, but I'm still old school and will always love tinkering and testing new parts. Does your cellar have a water purification system, oxygen scrubbers, underground tunnels linking other cellars in case of nuclear attack? If not, I WIN. Swiss are weird people, huh?
 

Edited by Relic - 6/27/13 at 5:49am
When I looked up "Ninjas" in Thesaurus.com, it said "Ninja's can't be found" Well played Ninjas, well played.
post #452 of 1290
Quote:
Originally Posted by wizard69 View Post

A very interesting post!
I'm not sure what sort of programming you do, but many programmers tend to go for as many cores as they can afford. It is pretty easy to leverage all the cores in a machine simply by telling make to use them. At least it is easy to do via the "C" style languages.

 

No, it's not, except in the most trivial of cases. Correctly parallelizing code requires more than just a compiler switch.
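A toy illustration: an independent loop fans out across cores trivially, but a loop-carried dependency doesn't, and no build flag changes that. A minimal sketch using only the Python standard library:

Code:
# Independent iterations parallelize; dependent ones don't.
from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == "__main__":
    with Pool() as pool:
        squares = pool.map(square, range(10))   # independent: easy to split up

    acc = 0
    for x in range(10):
        acc = acc * 2 + x    # each step needs the previous one: sequential
    print(squares, acc)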

 

Quote:
Workstations are subject to the same technology realities desktops are: shrinking processes put more and more functionality into a given amount of space. This is why I see the new Mac Pro as very forward-looking; another process-generation shrink with even higher integration and you will be looking at Mac Pros with only a few chips on the motherboard.

 

Workstation users desire two things: accuracy and speed. Lower power and quieter are nice too. Small isn't really a major factor, and frankly, after all the external chassis and RAID boxes, I dunno that the desktop footprint of the new Mac Pro is really smaller than the old Mac Pro's, because it's not like you can put these things on top of the Mac Pro and you can't necessarily put the Mac Pro on top of them. Plus the cabling is a PITA to deal with and a dust-ball magnet.

 

Quote:
It is a waste of time because it is dead technology.

 

CUDA is not a dead technology in 2013. It will not be a dead technology in 2014. It strikes me as unlikely even in 2018. Beyond 5 years is anyone's guess.

 

What is true for TODAY and TOMORROW is that CUDA support in pro apps is better and more complete. For example, in Premiere Pro Adobe supports both CUDA and OpenCL but only accelerates ray tracing with CUDA.

 

"Only NVIDIA GPUs provide acceleration for ray-traced 3D compositions.

 

Kevin Monahan
Social Support Lead
Adobe After Effects
Adobe Premiere Pro
Adobe Systems, Inc.

Follow Me on Twitter!"

 

http://forums.creativecow.net/thread/3/942078

 

nVidia GPUs have the advantage in GPU compute, and unless they screw things up they'll retain that edge. If they do retain it, pros will continue to prefer nVidia solutions. I recall reading somewhere (but do not have a link) that nVidia had around 75% of the workstation GPU market. That's not a very big market, but it's the one relevant to Mac Pros. Why? Drivers. Look at the initial performance results for the FirePro 9000 vs the older Quadro 6000s.

 

"There's plenty of battles coming through, and we'll closely follow to see just how AMD can lead this battle with NVIDIA. From these initial results, AMD has plenty of work ahead to optimize the drivers in order to get their parts competitive against almost two year old Quadro cards, yet alone the brand new, Kepler-based Quadro K5000. NVIDIA has proven its position as the undisputed leader (over 90% share) and AMD has to go on an aggressive tangent to build its challenge."

 

http://vr-zone.com/articles/why-amd-firepro-still-cannot-compete-against-nvidia-quadro-old-or-new/17074.html

 

Okay... they say 90% share here. On the plus side, AMD probably is giving Apple a hell of a price break. They'll either enter the workstation market in a big way or destroy an nVidia profit center by dumping FirePros at fire-sale prices to Apple.

 

The drivers have gotten better but nVidia has also filled out their Quadro line with Kepler cards.

 

For consumer-grade GPUs the Titan beats the Radeon on double-precision performance for GPGPU tasks. You lose the safety of ECC RAM, but on the Mac the drivers were pro quality already.

 

 

[single- and double-precision performance graphs; see the link below]

 

http://streamcomputing.eu/blog/2013-06-16/amd-vs-nvidia-two-figures-tell-a-whole-story/

 

Quote:
Why would you even do this? What causes this urgency to upgrade before the first hardware even hits the market? The incremental performance increases you get each year don't justify the upgrades. As you know, Intel has dragged out Xeon updates for a very long time and then not delivered much when the new hardware actually ships. Both AMD and Nvidia appear to be on two-year or greater cycles when it comes to real GPU hardware improvements, that is, new-generation cores.

 

Because by fall new AMD and nVidia GPUs will be out, like the Quadro K6000, which is the pro version of the Titan and much faster than the K5000. A full GK110 GPU with all 15 SMX units and 2880 CUDA cores hasn't been released yet either. Paired with a K20X, that's going to be a powerhouse combo that's cheaper than a pair of K6000s.

 

Quote:
I understand if this is a hobby (you should see what is in my cellar), but from a practical standpoint it is really hard to justify yearly upgrades anymore.
By the time the next generation of real hardware updates comes out you will see either clock rate increases or far more cores, probably both. I just don't see the attraction with respect to more processor chips in a workstation.

 

From a practical standpoint most companies do not buy new machines for you every year but will allow the purchase of a new GPU or more RAM or whatever if you need it.  Paying $3700 for another K20X is still cheaper than replacing an entire $10K+ workstation rig.

post #453 of 1290

Thanks for the Nvidia info. I really wasn't going to debate this issue before you brought it up, because OpenCL is a hot item right now due to the new Mac Pros; it's a cool new term that's getting thrown around, with 95% of the members here having never touched it. Heck, before the new Mac Pro very few if any ever talked about OpenCL or CUDA; one mention of it by Apple and boom, it's the next big thing since the invention of the computer itself. I have been using the GPU to offset the CPU for over 5 years now and you're right, CUDA has a much bigger presence than OpenCL, especially in commercial software. I think this has a lot to do with Nvidia being the dominant force in GPU sales. There are also soooo many great scripts available for the CUDA libraries, and the community also seems larger to me; that is, I can find what I'm looking for much quicker for CUDA, as well as a lot more code examples to copy off of. As a predominantly Linux/Unix user when working with my workstation I have always preferred Nvidia; they have always been more proactive with their drivers than ATI/AMD. AMD has gotten better, but is nowhere near what Nvidia has to offer.

 

Not to say that things won't change after Apple releases the new Mac Pro, but any major 3D software that's available for OSX using OpenCL will also have a CUDA cousin on other platforms. That, and Nvidia cards can still run any OpenCL-only software, if any are going to be available. CUDA was first to be available on Mari for Linux and they will continue to use it; if you want to see a real heated discussion on this subject, check out Mari's user group over at linkedin.com, where you will notice the ratio of CUDA to OpenCL users is easily 5 to 1. When is Mari finally going to be available for OSX? I'd be interested to see the interface.

 

The rest of The Foundry's products are great; unfortunately only Mari will be available for OSX :(

 

I really don't have a problem with who wins this battle, OpenCL or CUDA. Both are awesome, and now that Apple's finally coming to the party it's only going to get better. Don't take what I said above to be anything more than an observation. I use CUDA because it was the easiest platform for me to get started with in terms of programming; that, and most of the software I have, including Adobe's, seemed to support it better.

 

Please don't be mean to me for using CUDA. :)

 

Oh, and I want a K20 so badly; I will wait until I find a used one on eBay though.


Edited by Relic - 6/27/13 at 8:57am
When I looked up "Ninjas" in Thesaurus.com, it said "Ninja's can't be found" Well played Ninjas, well played.
post #454 of 1290
Quote:
Originally Posted by Relic View Post

 

Oh, and I want a K20 so badly; I will wait until I find a used one on eBay though.

 

The interesting thing is that AMD killed their FireStream line, and the FirePros pull double duty against both the Quadros and the Teslas.

 

It makes a certain amount of sense to differentiate GPU compute from graphics display. It also allows nVidia to price the Quadros and Teslas to bracket the FirePros. When the K6000 appears I expect it to be pricier than the equivalent FirePro, but the K20 to get a little price drop to stay below W9000 pricing.

 

A dual K6000 + K20X combo might still price favorably against dual W9000s.  The big deal is the Titans and their DP performance in a consumer grade card.  $1000 is pricey for a consumer card but cheap as a K20 stand-in.

 

Unless you need distributed GPU the Titan is a viable alternative if you're doing CUDA development and don't want to pony up for a $3K card.  I think it's cheaper than the refurb K20s I've seen.  I might not run it 24/7 on a compute workload though.

 

http://www.anandtech.com/show/6774/nvidias-geforce-gtx-titan-part-2-titans-performance-unveiled/3

 

The Titan is well named.

post #455 of 1290

The guys over at the Mari forums are just loving the Titan; for the price of one W9000, K20 or Quadro 6000 you can buy three of those bad boys and have over 13 teraflops at your disposal. Not to mention a really kickass gaming rig. B-)

When I looked up "Ninjas" in Thesaurus.com, it said "Ninja's can't be found" Well played Ninjas, well played.
post #456 of 1290
Quote:
Originally Posted by Relic View Post

 

The rest of The Foundry's products are great; unfortunately only Mari will be available for OSX :(

 

Actually, EVERYTHING on your list of The Foundry products is available on Mac OS X, excepting KATANA, which is Linux (CentOS/RHEL 5.4) only…

 

(Of course, MARI is not CURRENTLY on OS X, but it will be Very Soon…)

Late 2009 Unibody MacBook (modified)
2.26GHz Core 2 Duo CPU/8GB RAM/60GB SSD/500GB HDD
SuperDrive delete
post #457 of 1290
Quote:
Originally Posted by Relic View Post

Thanks for the Nvidia info. I really wasn't going to debate this issue before you brought it up, because OpenCL is a hot item right now due to the new Mac Pros; it's a cool new term that's getting thrown around, with 95% of the members here having never touched it. Heck, before the new Mac Pro very few if any ever talked about OpenCL or CUDA; one mention of it by Apple and boom, it's the next big thing since the invention of the computer itself. I have been using the GPU to offset the CPU for over 5 years now and you're right, CUDA has a much bigger presence than OpenCL, especially in commercial software. I think this has a lot to do with Nvidia being the dominant force in GPU sales. There are also soooo many great scripts available for the CUDA libraries, and the community also seems larger to me; that is, I can find what I'm looking for much quicker for CUDA, as well as a lot more code examples to copy off of. As a predominantly Linux/Unix user when working with my workstation I have always preferred Nvidia; they have always been more proactive with their drivers than ATI/AMD. AMD has gotten better, but is nowhere near what Nvidia has to offer.

 

Not to say that things won't change after Apple releases the new Mac Pro, but any major 3D software that's available for OSX using OpenCL will also have a CUDA cousin on other platforms. That, and Nvidia cards can still run any OpenCL-only software, if any are going to be available. CUDA was first to be available on Mari for Linux and they will continue to use it; if you want to see a real heated discussion on this subject, check out Mari's user group over at linkedin.com, where you will notice the ratio of CUDA to OpenCL users is easily 5 to 1. When is Mari finally going to be available for OSX? I'd be interested to see the interface.

 

The rest of The Foundry's products are great; unfortunately only Mari will be available for OSX :(

 

That would be because NVidia nai

post #458 of 1290
Quote:
Originally Posted by hmm View Post

That would be because NVidia nai

 

???????

Late 2009 Unibody MacBook (modified)
2.26GHz Core 2 Duo CPU/8GB RAM/60GB SSD/500GB HDD
SuperDrive delete
post #459 of 1290
Quote:
Originally Posted by hmm View Post

That would be because NVidia nai

 


o_O
When I looked up "Ninjas" in Thesaurus.com, it said "Ninja's can't be found" Well played Ninjas, well played.
post #460 of 1290
Quote:
Originally Posted by Relic View Post

 


o_O


The forum software has been butchering my posts for some reason. On the last two I forgot to copy everything prior to submitting them. NVidia only tuned things for their own hardware, which allowed for some amount of stability in implementation. I suspect it has been a success for them in marketing things like Teslas. You can check the TOP500 list and see that some of the machines on it are built around the use of Teslas.

 

As for The Foundry, they've always had some Mac support. At least a portion of the Nuke team came from Shake. Modo was from Luxology and has always been available on the Mac. The guys that started Luxology were from Lightwave, which is also available on OSX. I know at least some of those titles are available on OSX. I've always been a fan of Nuke even if I don't regularly get to use it. After Effects works for simpler stuff, even with Adobe's sloppy, frankensteined layer system. Nodes are just much more efficient, and when you look away from Adobe software, you can often avoid baking certain things.

 

Anyway it's not as complete as what I wrote before. Hopefully it works this time.

post #461 of 1290
Quote:
Originally Posted by Marvin View Post


I imagined there could be an office with, say, 5-10 Mac Pros connected via Thunderbolt, and pretty much what you see happening in the video, where the CPU/GPU cores get used in the background unbeknownst to each user, and an individual user would just assume they had access to a mini compute farm of, say, 100 processors.

Apple's solution could largely be zero-config. It would just have a checkbox in Sharing called Compute Sharing and you'd plug in the Thunderbolt cable. iMac users could buy spare Minis for extra power etc.

 

 

That would be great.

 

Your = the possessive of you, as in, "Your name is Tom, right?" or "What is your name?"

You're = a contraction of YOU + ARE as in, "You are right" --> "You're right."
post #462 of 1290
Quote:
Originally Posted by Bergermeister View Post

 

 

That would be great.


You can already allocate tasks to other machines, but it's not as much of a cluster as Marvin is suggesting. Your system will not see it as just another CPU. It's on the end of something that is effectively a PCI bridge, although I'm not entirely sure how the system sees it. It can't be treated like something that is just over QPI, and there would be no shared memory address space. I think there are certain things that hold back lighter systems when it comes to this stuff in real-world implementations. For the users that would actually invest in their own server farm, RAM would be an issue for some. Software licenses are another issue, as all software is licensed differently. A lot of rendering software will actually have things like node licenses, where past a couple of machines you pay extra to use it on additional machines that are solely dedicated to rendering. That isn't something that would really affect a single user, but in multi-user environments it may influence what level of hardware they purchase.

 

post #463 of 1290
Quote:
Originally Posted by hmm View Post


The forum software has been butchering my posts for some reason. On the last two I forgot to copy everything prior to submitting them. NVidia only tuned things for their own hardware, which allowed for some amount of stability in implementation. I suspect it has been a success for them in marketing things like Teslas. You can check the TOP500 list and see that some of the machines on it are built around the use of Teslas.

 

As for The Foundry, they've always had some Mac support. At least a portion of the Nuke team came from Shake. Modo was from Luxology and has always been available on the Mac. The guys that started Luxology were from Lightwave, which is also available on OSX. I know at least some of those titles are available on OSX. I've always been a fan of Nuke even if I don't regularly get to use it. After Effects works for simpler stuff, even with Adobe's sloppy, frankensteined layer system. Nodes are just much more efficient, and when you look away from Adobe software, you can often avoid baking certain things.

 

Anyway it's not as complete as what I wrote before. Hopefully it works this time.

 

You're right about more apps having OSX support; I completely forgot about Modo and Nuke. I replaced a 6-year-old Lightwave version with Modo, though it requires CUDA to calculate certain nodes and your rendering times are much, much faster with it. Interestingly, I think Modo runs better on OSX than Lightwave does, but I still went for the Linux version because of the Tesla support. The Foundry recommends Red Hat 6 or CentOS 6 and Nvidia GPUs for all of their products; hopefully they will start paying more attention to the new Mac Pros. It will depend on how popular they are, I guess, so fingers crossed everyone.

When I looked up "Ninjas" in Thesaurus.com, it said "Ninja's can't be found" Well played Ninjas, well played.
post #464 of 1290
Quote:
Originally Posted by hmm View Post


You can already allocate tasks to other machines, but it's not as much of a cluster as Marvin is suggesting. Your system will not see it as just another CPU. It's on the end of something that is effectively a PCI bridge, although I'm not entirely sure how the system sees it. It can't be treated like something that is just over QPI, and there would be no shared memory address space. I think there are certain things that hold back lighter systems when it comes to this stuff in real-world implementations. For the users that would actually invest in their own server farm, RAM would be an issue for some. Software licenses are another issue, as all software is licensed differently. A lot of rendering software will actually have things like node licenses, where past a couple of machines you pay extra to use it on additional machines that are solely dedicated to rendering. That isn't something that would really affect a single user, but in multi-user environments it may influence what level of hardware they purchase.

 

There is free clustering software for Linux. There are also complete Linux clustering distros (Rocks Cluster is one); you make boot CDs or USB sticks that turn your PC into a cluster node. That would be the cheapest way to go. As far as Apple coming out with their own solution, I highly doubt it; if they haven't done it yet, well. I'm sure a third party will fill that void, but it probably won't be cheap.

 

If you had a Linux machine I would recommend spending the money to buy one of these:

[image of the Tesla GPU computing unit]

It's a Tesla x4 GPU computing unit; you can now grab them for about 500 bucks. It's an older model but it's still 5 Teraflops, and for the price of a W9000 you can buy 6 of them for a total of 30,000 Teraflops. :O

When I looked up "Ninjas" in Thesaurus.com, it said "Ninja's can't be found" Well played Ninjas, well played.
post #465 of 1290
Quote:
Originally Posted by Relic View Post

There is free clustering software for Linux. There are also complete Linux clustering distros (Rocks Cluster is one); you make boot CDs or USB sticks that turn your PC into a cluster node. That would be the cheapest way to go. As far as Apple coming out with their own solution, I highly doubt it; if they haven't done it yet, well. I'm sure a third party will fill that void, but it probably won't be cheap.

 

If you had a Linux machine I would recommend spending the money to buy one of these:

It's a Tesla x4 GPU computing unit; you can now grab them for about 500 bucks. It's an older model but it's still 5 Teraflops, and for the price of a W9000 you can buy 6 of them for a total of 30,000 Teraflops. :O

Your knowledge of salvaging retired geek stuff is really impressive. The topics of 3D rendering and non-linear editors come up frequently when talking about this stuff. That is basically my interest in GPGPU. In terms of Tesla cards, some of the Fermi gaming cards often compared quite well on Windows in terms of raw speed on the limited number of renderers where offline tasks could be primarily GPU-driven. It was usually a case of being too limiting in terms of the total materials in a scene that would be supported by a GPU, or too little RAM. I think RAM would be a big one. It would be for me if I was looking at used equipment. I think offline renderers have been held back by limitations in RAM and the total materials that can be addressed that way. I deal more with still 3D renders and comps than anything, and things are a lot faster than a few years ago. It would be really nice to have something render in an hour, though, without having to clean up massive amounts of noise and artifacts.

Quote:
Originally Posted by Relic View Post

 

You're right about more apps having OSX support; I completely forgot about Modo and Nuke. I replaced a 6-year-old Lightwave version with Modo, though it requires CUDA to calculate certain nodes and your rendering times are much, much faster with it. Interestingly, I think Modo runs better on OSX than Lightwave does, but I still went for the Linux version because of the Tesla support. The Foundry recommends Red Hat 6 or CentOS 6 and Nvidia GPUs for all of their products; hopefully they will start paying more attention to the new Mac Pros. It will depend on how popular they are, I guess, so fingers crossed everyone.

Autodesk certifies their Linux software with Red Hat as well. Looking at their site, they seem to suggest Red Hat Enterprise 5.4. I kind of expected The Foundry to stick to something similar, given the amount of user overlap there. Modo is a cool piece of software. I like some of its tools. I dislike the lack of a construction history or modifier stack or something to help edit. The renderer seems to have finicky sampling too. I'm not sure how good the shaders are. If The Foundry decides to improve on that renderer and possibly publish good API literature to allow for better shader writing, it would be an awesome software package. Shaders are a big deal to me. Some of the 3D packages have really buggy out-of-the-box shaders when it comes to displacement or rebuilding passes. It's very annoying. It can be something as simple as unintuitive behavior in terms of how they normalize scattered and direct reflections for energy conservation, or "physical" sun systems without good importance sampling in place. Those kinds of things fill me with rage, as they result in highly situational tools or convoluted workarounds.

 

The popularity aspect does concern me. I can see a lot of people waiting to see what happens in software and third-party hardware support, making for slow initial sales. Apple may have a better idea how development looks there. I wonder about other things, like whether plugging in a typical DisplayPort display still taxes the Thunderbolt chip. The GPU itself is probably running on its own lanes to some degree. I have no idea how they implemented that, but it would be a lot more open if you could plug in a DisplayPort display and have it consume only that port.

post #466 of 1290
Quote:
Originally Posted by Relic View Post

I replaced a 6-year-old Lightwave version with Modo, though they require CUDA to calculate certain nodes and your rendering times are much, much faster with it. Interestingly, I think Modo runs better in OSX than Lightwave does, but I still went for the Linux version because of the Tesla support. The Foundry recommends Redhat 6 or CentOS 6 and Nvidia GPUs for all of their products; hopefully they will start paying more attention to the new Mac Pros. It will depend on how popular they are, I guess, so fingers crossed everyone.

Don't you mean Nuke when you're talking about CUDA nodes?

http://www.thefoundry.co.uk/products/nuke/system-requirements/

"To enable NUKEX 7.0 to calculate certain nodes using the GPU, you need to have:

An NVIDIA GPU with compute capability 1.1 or above."
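That compute-capability floor is easy to check from code, for anyone wondering whether an older card qualifies. A minimal sketch using the standard CUDA runtime API (illustrative only, not anything The Foundry ships):

```cpp
// List each CUDA device's compute capability, to compare against
// NUKEX's stated "1.1 or above" requirement.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable GPU found.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // prop.major/prop.minor encode the compute capability, e.g. 3.5 for GK110.
        std::printf("Device %d: %s, compute capability %d.%d\n",
                    i, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```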
Quote:
Originally Posted by Relic 
It's a Tesla 4x GPU computing unit; you can now grab them for about 500 bucks. It's an older model, but it's still 5 Teraflops, and for the price of a W9000 you can buy 6 of them for a total of 30,000 Teraflops.

I'd like to exchange some currency at the bank you work at. Multiplying 5 Tflops by 6 gives you 30 Tflops, not 30,000 Tflops. I guess that explains the economy. Maybe give that PnL a second look over. ;)

Apple noted the Mac Pro as being 7 Tflops. This is equivalent to about two Tesla K20 cards ( http://www.nvidia.com/object/personal-supercomputing.html ). A single Tesla K20 is $3500. If Apple has the whole top-end Mac Pro for $6-7k, it would be cheaper than using Teslas.
post #467 of 1290
Quote:
Originally Posted by Marvin View Post

Apple noted the Mac Pro as being 7Tflops. This is equivalent to about two Tesla K20 cards ( http://www.nvidia.com/object/personal-supercomputing.html ). A single Tesla K20 is $3500. If Apple has the whole top-end Mac Pro for $6-7k, it would be cheaper than using Teslas.

 

W9000s should be around 3.9 Tflops (SP). For DP the W9000s only weigh in at 0.998 Tflops each. A K20 is 1.17 Tflops DP and 3.52 Tflops SP and can be had for around $3100.

 

They must be underclocking the FirePro or running at less than x16 or both.

 

The nVidia Titan weighs in at 1.3 Tflops (DP) and costs $1000. That's cheaper still, and viable in a tower with slots.
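Those figures line up with the usual back-of-envelope formula: peak FLOPS = 2 (one fused multiply-add per clock) x shader count x clock speed. A quick sketch; the shader counts and clocks come from the public spec sheets, so treat the outputs as theoretical peaks rather than measured numbers:

```cpp
// Theoretical peak single-precision throughput:
// 2 flops (one FMA) per shader per clock.
#include <cstdio>

double peak_sp_gflops(int shaders, double clock_ghz) {
    return 2.0 * shaders * clock_ghz;
}

int main() {
    std::printf("FirePro W9000: %.0f GFLOPS\n", peak_sp_gflops(2048, 0.975)); // ~3994, i.e. ~3.9-4.0 Tflops
    std::printf("GTX Titan:     %.0f GFLOPS\n", peak_sp_gflops(2688, 0.837)); // ~4500, the 4.5 Tflops figure
    std::printf("Tesla K20:     %.0f GFLOPS\n", peak_sp_gflops(2496, 0.706)); // ~3524, the 3.52 Tflops figure
    return 0;
}
```

The DP numbers then follow from the fraction of that rate each vendor enables: roughly 1/4 on the W9000 (hence 0.998 Tflops) and 1/3 on the GK110 chips; the Titan quoting 1.3 rather than 1.5 reflects the reduced clocks it runs at when its full-rate DP mode is switched on.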

post #468 of 1290
Quote:
Originally Posted by nht View Post

 

W9000s should be around 3.9 Tflops (SP). For DP the W9000s only weigh in at 0.998 Tflops each. A K20 is 1.17 Tflops DP and 3.52 Tflops SP and can be had for around $3100.

 

They must be underclocking the FirePro or running at less than x16 or both.

 

The nVidia Titan weighs in at 1.3 Tflops (DP) and costs $1000. That's cheaper still, and viable in a tower with slots.

Makes me think that Apple might have gone with AMD/ATI because their parts perform slightly below the nVidia offerings; Apple could then use that as leverage for lower wholesale/bulk pricing on FirePro GPUs…

Late 2009 Unibody MacBook (modified)
2.26GHz Core 2 Duo CPU/8GB RAM/60GB SSD/500GB HDD
SuperDrive delete
post #469 of 1290
Quote:
Originally Posted by Marvin View Post

An NVIDIA GPU with compute capability 1.1 or above."
I'd like to exchange some currency at the bank you work at. Multiplying 5 Tflops by 6 gives you 30 Tflops, not 30,000 Tflops. I guess that explains the economy. Maybe give that PnL a second look over. ;)
 

 

Whoopsy..... That's why I let machines do all the calculations. I just got carried away with all that power, oh the sweet power, moooohhaahahaha.

Edited by Relic - 6/27/13 at 10:59pm
When I looked up "Ninjas" in Thesaurus.com, it said "Ninja's can't be found" Well played Ninjas, well played.
post #470 of 1290
Quote:
Originally Posted by nht View Post

 

The nVidia Titan weighs in at 1.3 Tflops (DP) and costs $1000.  That's cheaper still and viable on a tower with slots.

Don't forget the SP is at 4.5 TFlops, not bad for $1,000. It might be worth looking into; I would pair 2 Titans with a Quadro K4000. The price would be the same as one W9000, K5000, or K20, and you get the benefit of a workstation card with the raw rendering compute speeds of SLI'd Titans.

 

Here are some benchmarks of the Titan; the interesting part is the 3D rendering segment.

http://www.tomshardware.com/reviews/geforce-gtx-titan-opencl-cuda-workstation,3474.html

 

As you can see, it would still be very beneficial to have a workstation-class GPU, but you can't ignore how fast 3D is with the Titan. MONSTER.


Edited by Relic - 6/27/13 at 11:28pm
When I looked up "Ninjas" in Thesaurus.com, it said "Ninja's can't be found" Well played Ninjas, well played.
post #471 of 1290
Quote:
Originally Posted by nht View Post

No, it's not except in the most trivial of cases.  To correctly parallelize code requires more than just a compiler switch.
I think you totally missed what I was getting at: most programmers I know prefer fairly performant workstations or PCs because most development systems can make heavy use of the cores available. Parallel makes (make -j and the like) are generally able to leverage most of the cores a programmer can throw at the machine. In other words, I was focused not on writing parallel code but on a programmer's ability to leverage many cores via the common tools available to them.

As to the effort required to parallelize code, that depends upon the problem domain. There are certainly many problems that require significant effort, but don't dismiss the trivial opportunities; the sketch below shows one.
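For illustration, here is about the most trivial case there is: a loop whose iterations are independent, which one OpenMP pragma spreads across all cores. A generic sketch, not from any particular codebase:

```cpp
// SAXPY: y = a*x + y. No iteration depends on another, so a single
// pragma parallelizes it. Compile with -fopenmp (or equivalent).
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 20;
    std::vector<float> x(n, 1.0f), y(n, 2.0f);
    const float a = 3.0f;

    #pragma omp parallel for  // each core gets a chunk of the index range
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];

    std::printf("y[0] = %.1f\n", y[0]);  // 3*1 + 2 = 5.0
    return 0;
}
```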
Quote:

Workstation users desire two things: accuracy and speed. Lower power and quieter is nice too. Small isn't really a major factor, and frankly, after all these external chassis and RAID boxes I dunno that the desktop footprint of the new Mac Pro is really smaller than the old Mac Pro, because it's not like you can put these things on top of the Mac Pro and you can't necessarily put the Mac Pro on top of them. Plus the cabling is a PITA to deal with and a dust-ball magnet.
In many cases you would have one extra cable going to a disk array. Beyond that, your cabling nightmares might actually be simplified. Think about audio recording that in the past might have been routed into an I/O card at the back of the machine: a simple installation might involve four cables. With a remotely attached console you have exactly one cable going to the Mac, which can easily support many more than four channels.
Quote:

CUDA is not a dead technology in 2013.  It will not be a dead technology in 2014.  It strikes me even unlikely in 2018.  Beyond 5 years is anyone's guess.
It is a dead technology; investing in it is ill-advised.
Quote:
What is true for TODAY and TOMORROW is that CUDA support in pro-apps is better and more complete.  For example in Premiere Pro Adobe supports both CUDA and OpenCL but only accelerates ray tracing in CUDA.
That is Adobe's problem, not OpenCL's.
Quote:


nVidia GPUs have the advantage in GPU compute and unless they screw things up they'll retain that edge.  If they do then pros will continue to prefer nVidia solutions.  I recall reading somewhere (but do not have a link) that nVidia had around 75% of the workstation GPU market.  That's not a very big market but it's the one relevant to Mac Pros.  Why?  Drivers.  Look at the initial performance results for the FirePro 9000 vs the older Quadro 6000s.
It is easy to find web sites that suggest just the opposite, for example http://clbenchmark.com/result.jsp. Then again, if you look hard you can always find that one app that jibes well with the chip architecture you are benchmarking. NVidia has bigger issues long term, as they will have significantly declining sales over the next few years. The good part of declining sales is that it should force them to look harder at the high-performance solutions.
Quote:

"There's plenty of battles coming through, and we'll closely follow to see just how AMD can lead this battle with NVIDIA. From these initial results, AMD has plenty of work ahead to optimize the drivers in order to get their parts competitive against almost two year old Quadro cards, yet alone the brand new, Kepler-based Quadro K5000. NVIDIA has proven its position as the undisputed leader (over 90% share) and AMD has to go on an aggressive tangent to build its challenge."


http://vr-zone.com/articles/why-amd-firepro-still-cannot-compete-against-nvidia-quadro-old-or-new/17074.html
Again, you shouldn't believe everything you read on the net. It is easy to find opinions that lead to a different conclusion.
Quote:
Okay...they say 90% share here. On the plus side, AMD probably is giving Apple a hell of a price break.  They'll either enter the workstation market in a big way or destroy a nVidia profit center by dumping FirePros at firesale prices to Apple.
It will be very interesting to find out what is involved in Apple and AMD's partnership. It is pretty obvious that Apple will be able to deliver significant volume for AMD.
Quote:
The drivers have gotten better but nVidia has also filled out their Quadro line with Kepler cards.
Actually, drivers are likely to be a significant issue here. It looks like Apple put a lot of work into Mavericks to get it up to snuff. However, up to snuff has never meant high performance to Apple.
Quote:
For consumer-grade GPUs the Titan beats the Radeon on double-precision performance for GPGPU tasks. You lose the safety of ECC RAM, but on the Mac the drivers were pro quality already.

[Performance graphs for single and double precision were attached here.]
The thought comes to mind that we really don't even know what Apple and AMD will be putting on the cards. Combine that with effectively brand-new drivers for Apple's OS, and we don't really know what the computational landscape will be like.
Quote:


Because by fall new AMD and nVidia GPUs will be out.  Like the Quadro K6000 which is the pro version of the Titan and much faster than the K5000.  A full GK110 GPU with all 15 SMX units and 2880 CUDA cores hasn't been released yet either.  Paired with a K20X that's going to be a powerhouse combo that's cheaper than a pair of K6000s.
Which means nothing to the vast majority of us! We will buy what suits our needs and not upgrade for at least a couple of years, and in a couple of years you will need all new hardware anyway.
Quote:

From a practical standpoint most companies do not buy new machines for you every year but will allow the purchase of a new GPU or more RAM or whatever if you need it.  Paying $3700 for another K20X is still cheaper than replacing an entire $10K+ workstation rig.
Actually, it is a lot worse than that: if you don't ask for everything you need up front, then you are out of luck at most corporations. Even things like RAM upgrades require being sneaky to avoid IT departments' overbearing control.
post #472 of 1290
Quote:
Originally Posted by Relic View Post

Thanks for the Nvidia info. I really wasn't going to debate this issue before you brought it up, because OpenCL is a hot item right now due to the new Mac Pros; it's a cool new term that's getting thrown around with 95% of the members here having never touched it. Heck, before the new Mac Pro very few if any ever talked about OpenCL or CUDA; one mention of it by Apple and boom, it's the next big thing since the invention of the computer itself.
That has to be the biggest bunch of baloney I've seen in a long time on this forum. OpenCL is hot technology, widely adopted outside the graphic arts world. As to our concerns here, I've been wondering when Apple will get OpenCL running on Intel's GPUs. As for GPU compute, I suspect most users never even realize when the software they are running is using it.
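Anyone curious which devices their OpenCL runtime actually exposes (whether an Intel GPU shows up, for instance) can enumerate them in a few lines. A minimal sketch against the standard OpenCL 1.x host API, error handling omitted for brevity:

```cpp
// Enumerate every OpenCL platform and device on the system.
// Link with -framework OpenCL on OS X, or -lOpenCL elsewhere.
#include <cstdio>
#include <vector>
#ifdef __APPLE__
#include <OpenCL/cl.h>
#else
#include <CL/cl.h>
#endif

int main() {
    cl_uint nplat = 0;
    clGetPlatformIDs(0, nullptr, &nplat);
    std::vector<cl_platform_id> plats(nplat);
    clGetPlatformIDs(nplat, plats.data(), nullptr);

    for (cl_uint p = 0; p < nplat; ++p) {
        cl_uint ndev = 0;
        clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_ALL, 0, nullptr, &ndev);
        std::vector<cl_device_id> devs(ndev);
        clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_ALL, ndev, devs.data(), nullptr);
        for (cl_uint d = 0; d < ndev; ++d) {
            char name[256] = {0};
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof(name), name, nullptr);
            std::printf("Platform %u, device %u: %s\n", p, d, name);
        }
    }
    return 0;
}
```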
Quote:
I have been using the GPU to offset the CPU for over 5 years now and you're right, CUDA has a much bigger presence than OpenCL, especially in commercial software.
That depends upon the commercial software. Beyond that, most developers have no long-term intention of supporting CUDA. It is simply too limiting and leaves developers exposed in a very negative way. CUDA had its day, but the risk isn't worth the potential performance gains.
Quote:
I think this has a lot to do with Nvidia being the dominant player in GPU sales. There are also soooo many great scripts available for the CUDA libraries. The community also seems larger to me; that is, I can find what I'm looking for much quicker for CUDA, as well as a lot more code examples to copy off of.
It has a lot to do with being first, and then the only viable solution for a while.
Quote:
As a predominantly Linux/Unix user when working with my workstation, I have always preferred Nvidia; they have always been more proactive with their drivers than ATI/AMD. AMD has gotten better, but nowhere near what Nvidia has to offer.
Well, yeah, if you like closed-source drivers on your Linux machine. Considering that Apple's drivers are currently worse than some of the open-source solutions for Linux, it makes little difference if Apple uses NVidia or AMD. I'm really hoping that Mavericks changes this.
Quote:
Now, not to say that things will change after Apple releases the new Mac Pro, but any major 3D software that's available for OSX using OpenCL will also have a CUDA cousin on other platforms. That, and Nvidia can still run any OpenCL-only software, if any are going to be available. CUDA was first to be available on Mari/Linux and they will continue to use it. If you want to see a real heated discussion on this subject, check out Mari's user group over at linkedin.com; you will notice the ratio of CUDA to OpenCL users to be easily 5 to 1. When is Mari finally going to be available for OSX? I'd be interested to see the interface.

As for the rest of The Foundry's great products: unfortunately, only Mari will be available for OSX :(
Here are some alternative links related to OpenCL usage:
http://www.altera.com/products/software/opencl/opencl-index.html
http://www.amdahlsoftware.com/ten-reasons-why-we-love-opencl-and-why-you-might-too/
http://www.wolfram.com/mathematica/new-in-8/cuda-and-opencl-support/
http://in.amdfireprohub.com/downloads/whitepaper/OpenCL_White_Paper.pdf
http://proteneer.com/blog/ The interesting thing here is that for this usage the Titan isn't that fast.
http://www.openclblog.com/ Biased?
http://sourceforge.net/projects/openclsolarsyst/
http://gpgpu.org/ A great place to see the very wide variety of uses of GPU compute.

A lot of links, some OpenCL-specific. The interesting thing is that maximum performance isn't clearly the domain of any one brand.
Quote:

I really don't have a problem with who wins this battle, OpenCL or CUDA. Both are awesome, and now that Apple's finally coming to the party it's only going to get better. Don't take what I said above to be anything more than an observation. I use CUDA because it was the easiest platform for me to get started in terms of programming; that, and most of the software I have, including Adobe's, seemed to support it better.
Apple has been in the party since they introduced OpenCL. Adobe is a minor blip in the OpenCL landscape.
Quote:
Please don't be mean to me for using CUDA. :)
Use whatever you want, I'm just saying there isn't much of a future in it.
Quote:
Oh, and I want a K20 so badly; I will wait until I find a used one on eBay though.
post #473 of 1290
Quote:
Originally Posted by nht View Post

W9000s should be around 3.9 Tflops (SP). For DP the W9000s only weigh in at 0.998 Tflops each. A K20 is 1.17 Tflops DP and 3.52 Tflops SP and can be had for around $3100.

They must be underclocking the FirePro or running at less than x16 or both.
There are lots of possibilities:
  1. You are jumping to conclusions.
  2. Apple is being its usual conservative self and posting expected performance with its drivers.
  3. Apple has been testing beta hardware and prefers realistic numbers as opposed to marketing fluff or optimized routines that give a biased impression of what the GPU can really do.
Quote:

The nVidia Titan weighs in at 1.3 Tflops (DP) and costs $1000.  That's cheaper still and viable on a tower with slots.
post #474 of 1290
Nice post, but I don't agree that CUDA is dying; where did you come up with that conclusion? CUDA can be found in almost every professional application that can utilize a GPU. The number of support and programming forums outnumbers OpenCL's by something like 5 to 1. Go check out the Mari, Nuke, Modo, Lightwave, Blender, and Adobe forums and see which one the users prefer.
When I looked up "Ninjas" in Thesaurus.com, it said "Ninja's can't be found" Well played Ninjas, well played.
post #475 of 1290
Quote:
Originally Posted by MacRonin View Post

Makes me think that Apple might have gone with AMD/ATI because their parts perform slightly below the nVidia offerings; Apple could then use that as leverage for lower wholesale/bulk pricing on FirePro GPUs…

Or it could be that Apple likes AMD's roadmap better and appreciates their focus on things that NVidia doesn't seem to care about. Plus, AMD's chips have the ability to drive very high-resolution displays.
post #476 of 1290
Quote:
Originally Posted by wizard69 View Post

There are lots of possibilities:
  1. one is that you are jumping to conclusions.
  2. second Apple is being their usual conservative self and post expected performance with their drivers.
  3. apple has been testing beta hardware and prefer realistically numbers as opposed to marketing fluff or optimized routines that give you biased impressions of what the GPU can really do.

Am I speculating about the Titan? Sure, but I also use similar tech, and I'm a constant visitor and poster over at the Modo and Mari forums. The Titan has everyone excited, especially students who can't afford a Tesla or workstation graphics card. I'm not even talking about the Mac Pro here, just having a conversation about super cool tech. You don't have to defend Apple; I'm sure their system will kick ass. All is well.
When I looked up "Ninjas" in Thesaurus.com, it said "Ninja's can't be found" Well played Ninjas, well played.
post #477 of 1290
Quote:
Originally Posted by Relic View Post

Am I speculating about the Titan? Sure, but I also use similar tech, and I'm a constant visitor and poster over at the Modo and Mari forums. The Titan has everyone excited, especially students who can't afford a Tesla or workstation graphics card. I'm not even talking about the Mac Pro here, just having a conversation about super cool tech. You don't have to defend Apple; I'm sure their system will kick ass. All is well.

Sometimes specs from Apple are very conservative. Take for example the MacBook Airs, which for many users get better-than-expected battery life. I'm not so much defending Apple as highlighting that there could be other reasons for them to post what appear to be lower-than-expected performance figures.

As for AMD they probably have a competitor to the Titan in the wings. That is if Apple hasn't stolen all of their engineers yet.

What is interesting in this forum is all the discussion about raw performance with little concern for the real world. I doubt that raw performance was all that big of a factor when Apple sourced AMD as a partner. I'm not trying to say performance wasn't a goal so much as I'm saying AMD had more to bring to the game than NVidia could ever think about. So the other factors likely included some of these:
  1. A partner that was willing to engage in the project that obviously was going to take years to complete and would end up with proprietary parts not usable elsewhere.
  2. A partner that has hardware capable of driving very high-resolution displays. Further, a partner that was willing to do the engineering to support Thunderbolt. Of course I have no way to prove it, but I suspect this is a big factor; Apple found itself needing something beyond DisplayPort's latest standard.
  3. A partner committed to OpenCL! This is an Apple initiative and frankly a good one; there is little sense in Apple going NVidia's way when there are competing camps at NVidia.
  4. AMD in general has been very willing to engage partners for custom circuitry. I wouldn't be surprised if this willingness is a factor.

In any event, the focus on performance is an interesting discussion, but it is only part of the equation when putting together a new-generation machine like this. This is what I think many in this thread miss. Even if some of the performance references are honest and the coming NVidia GPUs are hot performers, it doesn't matter. Why? Because there is more to a project like this than performance. If NVidia wasn't a willing partner, then it really doesn't matter if one benchmark places them ahead of AMD.

In the end, performance isn't an issue anyway, because the GPUs discussed seem to group together pretty well over a broad range of usage. As it is, I'm still waiting for the debut, because when it comes down to it the price on this new Mac will make or break it in the marketplace.
post #478 of 1290
Quote:
Originally Posted by nht View Post

They must be underclocking the FirePro or running at less than x16 or both.

They are in Crossfire so it may not scale perfectly to double with two GPUs.
Quote:
Originally Posted by nht View Post

The nVidia Titan weighs in at 1.3 Tflops (DP) and costs $1000.  That's cheaper still and viable on a tower with slots.

The Titan would be viable if there was a fully functioning driver for it under OS X. It's also not a workstation card.
Quote:
Originally Posted by Relic 
you can't ignore how fast 3D is with the Titan

The 7970 outperformed it in a lot of the OpenCL tests, sometimes by double, while the 3D tests looked fairly close. The 7970 is pretty close to the W9000. Also, the Mac Pro will have two.

Even assuming a scenario where Apple had beefed up the PSU and put in two double-wide slots and doubled the slot power allocation, it's not likely that many end users would buy two Titans themselves without official driver support and there's no chance of getting any cost savings from Apple's bulk orders. Titan GPUs are priced lower than Teslas and Quadros for a reason.

I personally prefer NVidia GPUs because I think they are clearly making better headway in computation. There is a note here about complications with using OpenCL on the GPU for raytracing:

http://wiki.blender.org/index.php/Dev:2.6/Source/Render/Cycles/OpenCL

On the CPU it's no problem. Given that it doesn't work for NVidia either, perhaps OpenCL is going to have to be reworked so that it can run more complex code on GPUs. Some developers have an advantage for raytracing with CUDA because NVidia developed their own engine:

http://www.nvidia.com/object/optix.html

This is what Adobe is using in After Effects. Perhaps AMD just needs to work more closely with developers trying to implement complex code and help get it working.
post #479 of 1290
Quote:
Originally Posted by wizard69 View Post


  1. A partner committed to OpenCL! This is an Apple initiative and frankly a good one, there is little sense in Apple going NVidias way when there are competing camps at NVidia.

People said the same thing last year prior to the use of NVidia's Kepler GPUs in the iMacs and MacBook Pros. Everyone on here swore that Apple would never use NVidia again after the defective chips in 2008 or so. In the end they'll use whomever they wish. I've already read articles similar to what Marvin posted on this. Some companies found the OpenCL implementation in their software ran better on NVidia GPUs, and NVidia was the first to make a move toward GPGPU computation. As for being committed to OpenCL, part of that is on Apple's end. They've been flaky about both that and OpenGL over the past several years. For me it would come down to what is usable today in a given application.

 

Anyway, NVidia is even favored in scientific areas. If you take a look at something like the Top500 list you'll see NVidia-based clusters, Xeon Phis, Xeon EP 2600s, and Opterons. AMD's Opterons tend to be represented at times toward the top of that list; their GPUs are never there. The overall success of this machine is going to come down to the state of software support, both from Apple and third parties, after the first 6 months. They're no longer selling extra space inside a box, so even the entry-level model has to justify its place through performance. If we're talking about the equivalent of a pair of W5000s and a quad Xeon, it won't be very interesting.

post #480 of 1290
Quote:
Originally Posted by hmm View Post

People said the same thing last year prior to the use of NVidia's Kepler GPUs in the iMacs and MacBook Pros. Everyone on here swore that Apple would never use NVidia again after the defective chips in 2008 or so.
Frankly, I'm not sure what NVidia's GPUs brought to the table; I wasn't exactly happy to see NVidia return to Mac hardware. The performance and power profile is so close to AMD's hardware that it makes you wonder why Apple troubled themselves.
Quote:
In the end they'll use whoever they wish to.
This is true, but from a customer perspective the constant flip-flopping isn't cool!
Quote:
I've already read articles similar to what Marvin posted on this. Some companies found the OpenCL implementation in their software ran better on NVidia GPUs, and NVidia was the first to make a move toward GPGPU computation.
Nobody is denying this. AMD has suffered from a tools issue in the past, though that has been under continual improvement. From a developer's perspective, though, it is far better to use OpenCL on NVidia hardware than CUDA, simply because you aren't married to the hardware.
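That's the practical difference: an OpenCL kernel ships as source and is compiled at runtime for whatever device is present, AMD, NVidia, or Intel alike. A minimal sketch against the OpenCL 1.1-era host API; error handling is omitted, and the kernel name and sizes are made up for illustration:

```cpp
// The same kernel source runs on any vendor's OpenCL device because
// it is compiled at runtime by that device's driver.
#include <cstdio>
#include <vector>
#ifdef __APPLE__
#include <OpenCL/cl.h>
#else
#include <CL/cl.h>
#endif

static const char* kSrc =
    "__kernel void scale(__global float* y, float a) {\n"
    "    size_t i = get_global_id(0);\n"
    "    y[i] = a * y[i];\n"
    "}\n";

int main() {
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, nullptr);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, nullptr);

    cl_context ctx = clCreateContext(nullptr, 1, &dev, nullptr, nullptr, nullptr);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, nullptr);

    std::vector<float> host(1024, 2.0f);
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                host.size() * sizeof(float), host.data(), nullptr);

    // Runtime compilation: this is the vendor-neutral step.
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, nullptr);
    clBuildProgram(prog, 1, &dev, nullptr, nullptr, nullptr);
    cl_kernel k = clCreateKernel(prog, "scale", nullptr);

    float a = 3.0f;
    clSetKernelArg(k, 0, sizeof(buf), &buf);
    clSetKernelArg(k, 1, sizeof(a), &a);
    size_t global = host.size();
    clEnqueueNDRangeKernel(q, k, 1, nullptr, &global, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, global * sizeof(float), host.data(),
                        0, nullptr, nullptr);
    std::printf("host[0] = %.1f\n", host[0]);  // 2.0 * 3.0 = 6.0
    return 0;
}
```

A CUDA version of the same kernel would be shorter to write, but it only ever targets NVidia hardware, which is the lock-in being described above.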
Quote:
As for being committed to OpenCL, part of that is on Apple's end. They've been flaky about both that and OpenGL over the past several years. For me it would come down to what is usable today in a given application.
Apparently neither AMD nor NVidia is to blame for Apple's lack of attention with respect to OpenGL or OpenCL. I'm not sure flaky is the word here; neglectful is probably a better term. Especially in the case of OpenGL, they have really smudged their image with the technical community. When open-source Linux solutions do better or are more complete, you have issues that are pretty severe for a company of Apple's size.
Quote:
Anyway NVidia is even favored in scientific areas. If you take a look at something like top500 you'll see NVidia based clusters, Xeon Phis, Xeon EP 2600s, and opterons. AMD's opterons tend to be represented at times toward the top of that list. Their gpus are never there.
Probably because NVidia pursued that market very aggressively. The other thing here is that it is only recently, with its Southern Islands GPU cores, that AMD really had the ability to support GPU compute well.
Quote:
The overall success of this machine is going to come down to the state of software support both from Apple and third parties after the first 6 months. They're no longer selling extra space inside a box, so even the entry level model has to justify its place through performance. If we're talking about the equivalent of a pair of W5000s and a quad Xeon, it won't be very interesting.
I'm still expecting good, better, and best. It is hard to tell what the posted specs represent in that triptych.

However, I still don't buy this idea that only maximum power is of interest in the marketplace. For example, dual W5000s may not interest you that much, but I can see many an engineer being very happy to have one on their desk. I'm still trying to fathom all of the "up to" qualifiers in Apple's postings. For example, do they intend to sell the same GPU with different RAM configurations, or offer choices of different GPUs?

As to software support, I think you are right in that regard. People blame Apple's slide in the professional market on the Mac Pro, but I'm going out on a limb here to say that there is a lot more to it than that. Apple's tendency to neglect important elements of the pie, in this case OpenGL, tends to undermine acceptance. The other thing is that Apple does a great job with Xcode and the three C's, but rapidly drops support intensity for other languages important to professionals. Sure, they include things like Python with their OS, but including is a far cry from optimized support; it almost seems like Python is looked down upon at Apple. However, good scripting languages are important to a very wide range of professionals, so Apple needs to get on the ball, so to speak. Frankly, support for Fortran wouldn't hurt either. Yes, it is old and crusty, but Fortran support could ease a lot of porting jobs. Obviously we are talking UNIX here, so it isn't all that bad; I just think Apple needs to make something like Python a first-class Mac solution. It is just another cog in the machine for professionals, but an important one.