Apple throws out the rulebook for its unique next-gen Mac Pro


Comments

  • Reply 501 of 1320
    v5vv5v Posts: 1,357member

    Quote:

    Originally Posted by nht View Post


    What is true for TODAY and TOMORROW is that CUDA support in pro-apps is better and more complete.  For example in Premiere Pro Adobe supports both CUDA and OpenCL but only accelerates ray tracing in CUDA.



     


    Quote:

    Originally Posted by wizard69 View Post



    That is Adobe's problem not OpenCL's.


     


    If that were all there is to it I'd actually enjoy a certain amount of schadenfreude over Adobe suffering, but the truth is that it's not THEIR problem, it's their USERS' problem.


     


    Most of us don't get to choose what editing/compositing software we use and/or can't conveniently or cost-effectively switch just because one supports a certain standard and another doesn't.


     


    By developing competing systems like this and then forcing users into a situation where they can't even choose one or the other, Apple, AMD, NVidia and Adobe are all screwing the very people they seek to woo.

  • Reply 502 of 1320
    hmmhmm Posts: 3,405member

    Quote:

    Originally Posted by Marvin View Post





    You'd have said that about pretty much anything they made though. Even if they'd brought out a standard tower with dual Titan GPUs and the possibility of dual CPUs, they'd have higher margins so you'd have said you could get the same power cheaper and the enclosure wasn't innovative and it likely would have had issues putting Thunderbolt in so you'd have said it was disappointing. There's almost nothing they could do that would impress you.

    Which card? As you've seen, it performs at double a 7770 and 5870:

     


    It gets more interesting around the W7000 level. There are two cards, but there's no reason to expect linear scaling. Much of the time I'm writing down suspicions based on what data is available. As I've said it will be more interesting to see what comes from a shipping unit. I think they only did the preview due to how long the old one has gone without a true update. I also wonder if we'll see additional scalable thunderbolt storage solutions beyond what we have today. I think part of the reason they're lacking is volume. Most boxes are marketed at both Windows and Mac customers. There aren't many Windows customers with thunderbolt. There are probably none at the workstation level. If I come across any I'll mention them. I don't think Apple mentioned whether usb3 would be included. It's not on that chipset, but I suspect Apple would test usb3 chips for the thunderbolt displays too. I don't know that it's so much about being impressed. It's more about what I would buy to alleviate current "barely getting the job done" problems.


     


     


    Quote:


    http://www.tomshardware.com/reviews/workstation-graphics-card-gaming,3425-13.html



    For an entry card, that's decent enough performance, especially when there's two of them. To go from a single 5870 in the max BTO config to 4x a 5870 in the entry config in 1 year (technically 3 years) would be alright.


     




    Huh? Your link doesn't suggest that. The W5000 is just over the 5870 in directx. It should be higher in OpenGL. I recall the 5870 being a poor OpenGL performer under windows. I would expect it to be beyond the 680mx in the imac today outside of gaming in OpenGL and double precision math. It's not so much that double precision means a lot to everyone, so much as it is that the gaming card drivers usually choke there rather than on single. Your math is taking the card performance and doubling it for 2 cards right? Otherwise it wouldn't make sense with the provided data. This is another thing where I want to see what the final implementation is like.

  • Reply 503 of 1320
    wizard69wizard69 Posts: 13,377member
    Open GL.

    I'm glad Apple finally seems to be taking the format they support seriously with Mavericks.
    I'm glad to see them doing something here. As I've said this has more to do with the perception of being a viable pro platform than many of Apple's hardware solutions.
    4.1?  Full support.  4.2 in play...and 4.3 pending?

    Whispers from around the net hint at graphical/system response being much faster.
    I've been hearing this too. Mavericks in general appears to be a performance overhaul.
    Hopefully this will mean gpu performance vs PC gpu performance will finally see parity????
    Well I'd prefer solid and glitch free to parity with PC drivers.
    Not the 50-100% less (which always struck me as ridiculous...to say that Apple supported the only equivalent standard to Direct X on the Mac side...)
    By 100% I assume you mean all of the missing goodies.
    Do they only have one guy doing their GL drivers?
    Good question. For a company with billions in the bank and a reputation of being a platform for graphics you would think OpenGL would be a very high priority feature.
    With Open GL 4.1, Mavericks, the New Mac Pro...and the new Pro apps from 3rd parties...

    ...far from the much feared death of the Pro...we see a renaissance?

    Lemon Bon Bon.

    Yes I see good things ahead for this platform. The new Mac Pro will hopefully be able to fill more roles than ever before and ideally attract a broader array of customers.

    The biggest issue is that all of these massive changes will take a while to iron out. I just hope the teething pains aren't that bad.
  • Reply 504 of 1320
    wizard69wizard69 Posts: 13,377member
    hmm wrote: »
    Yes I've gathered that. I've followed both for a very long time. NVidia overall has been really aggressive in expanding their market as you pointed out. I don't see that as a bad thing. I think it was only logical that they tuned things for their own hardware. They wanted to squeeze as much performance and stability as possible out of it. I see no motivation for such a company to do the major R&D work to get an open standard off the ground. They don't sell as many, but the gaming cards tend to absorb a lot of development costs.

    I just view it as if your software relies on CUDA, you probably stick to those years when purchasing new machines unless the software changes to accommodate similar functionality with OpenCL and AMD drivers.
    The problem with the flip-flopping is that it is hard on customers and even developers. Further, it looks like Apple does this for some incalculable reason, to avoid a long linkage to any one company. In a nutshell the flops don't appear to be performance based, so I really don't know why we get a flop almost every other year.

    The good thing for the low end guys is that this problem is effectively gone. They will have Intel integrated GPUs and that is about it. Unless of course for some reason they implement an AMD chip with integrated graphics.

    I get why you're saying that. My point was more that on a typical 3-4 year replacement cycle, if the software supports CUDA today, it's unlikely to dry up within that time. If you're a developer, it's entirely different.
    It won't dry up, but if in three years your choices are all AMD based then you are screwed. Speaking of which, I don't believe that NVidia has a choice with respect to high performance GPU cards and aggressive marketing into specialty industries. The market for all of those low cost GPUs will dry up slowly until the high end is about all you have as a GPU manufacturer. Due to this I don't see NVidia's high end chips getting cheaper but rather the opposite. Limited markets and low prices won't lead to NVidia being around long.

    Sometimes I see the possibility of AMD trying to crush NVidia with this partnership with Apple. However even AMD will have issues selling discrete GPUs and thus will have to try to maintain pricing on high end cards. This whole issue with respect to Mac Pro pricing is very perplexing.

    I have mentioned that. It's annoying to me, and I'm not entirely sure whether it's an issue of growing pains, sourcing staff, or whatever else. Apple certainly has the money to hire people, but it's rarely a simple issue when it comes to bringing something that has fallen behind back up to speed.
    It is looking like Mavericks irons a lot of this out. I'd be embarrassed if I had to try to sell Apple hardware into a technical market as there is no software "backup" for that hardware. The state of drivers has been laughable.
    You mention recent AMD support here. We still haven't seen that on a shipping product in the wild. The demo looked cool, and I can appreciate the demanding software they ran on it. I'm still interested in the overall results when it comes to a broader set of applications and real world testing, as well as the updated alignment of price to performance. It loses some of the prior functionality that I mentioned. I liked extra bays that don't have to deal with additional hardware layers. There's no need to deal with host cards unless you want something like SAS out, and no concerns about glitchy port multiplier firmware. From a practical standpoint, it's likely to be less customizable than prior models. I was going to throw in a 4k output reference, but I noticed it's supported by some of the after market cards for the current model.
    Until my heart is crushed by outlandish pricing I'm pretty bullish on this new Mac Pro. It really could be a corporate success story given Microsoft's fast decline.

    As to the GPUs, if you can dig through AMD's terrible web site you should be able to find some technical details on the FirePros. The data rates the cards are capable of are limited by DisplayPort. AMD specifically says they can handle data rates much higher than the DisplayPort standard. So I'm wondering what AMD and Apple have been up to the last couple of years. I'm thinking a new video monitor with more than 4K resolution. Apple really has released few details about the video capabilities of the machine.


    I wasn't so much saying maximum power. The W5000s are based on a low end card, lower than the 5770s. The drivers and possibly firmware are obviously different. Ram might be ECC. It's not that great of a card overall, in spite of its price and marketing.
    This got me thinking, would AMD and Apple go to all of this trouble and then develop drivers for old architectures? It really doesn't make sense now. Which makes me wonder if AMD might debut new FirePro chips about the time this machine ships. As much as Apple has announced this machine they really haven't said much about it.
    I guess it depends on where these machines start. I am fully expecting some price updates from AMD by the time these machines ship. It isn't realistic to me to assume that a mac pro is going to cram $6k worth of gpus. That would be a higher hardware cost than they have ever addressed. I'm sure there's a market, just not a sustainable one, as configurations that cost that much are mostly available through cto options from other oems. Even on the W9000s, they're still at launch pricing currently. AMD and NVidia both tend to adjust mid cycle on longer cycles rather than rebrand workstation gpus with adjusted clock rates, and workstation cards show up after gaming cards. That means these have to last at least another year, but I don't think they'll stay at $3000 retail that entire time.
    I would expect AMD and Apple to stay with the GCN based cards to minimize driver issues. This gives them a number of performance levels to choose from. The problem with these "cards" is that the cards won't be in Apple's machine. So it is very difficult to judge pricing. There is an intrinsic value in the parts on the card and a profit. The problem is I'm not certain what those numbers actually are. I have a hard time believing that these $3000 cards have even $1500 worth of parts. If that. I could see the cost to Apple being close to that of an equivalent desktop card.
    Having used it for a bit now, I like Python's structure, although the way they update it is weird where you have a lot of stuff still on Python 2, and elements of Python 3 backported in the last couple releases.
    I really like Python, it fits into my minor role when it comes to programming. You are right about the Python 3 transition, it could have been done better.
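
    A tiny illustration of that transitional awkwardness (a hypothetical snippet, not from any shipping product): late Python 2 releases backported several Python 3 behaviors behind __future__ imports, so a lot of code ends up straddling both worlds.

        # Hypothetical Python 2.7 snippet with Python 3 features backported;
        # the same file also runs unchanged on Python 3.
        from __future__ import print_function, division

        print(7 / 2)     # 3.5 with backported "true" division; plain Python 2 prints 3
        print(7 // 2)    # 3 either way; // is the explicit floor division

        text = u"caf\xe9"      # explicit unicode literal, still needed on Python 2
        data = b"raw bytes"    # bytes literal; the str/bytes split is the big Py3 change
        print(type(text), type(data))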
    I know absolutely nothing about Fortran, so there's no way I could put together a valid response on that.
    Actually I'm not big on it but rather know that there is a great deal of science and engineering code built upon it. Of course the smarter types have moved on to C++ or even better languages. I just see offering a Fortran compiler as a low traction way to bring more technical people into the fold.
  • Reply 505 of 1320
    wizard69wizard69 Posts: 13,377member
    v5v wrote: »

    If that were all there is to it I'd actually enjoy a certain amount of schadenfeude over Adobe suffering, but the truth is that it's not THEIR problem, it's their USERS' problem.
    Well it would be Adobe's problem if users rebelled a bit. Adobe has made more than a few bad business decisions of late, and yet they have yet to be significantly harmed by their users. That is actually pretty odd.
    Most of us don't get to choose what editing/compositing software we use and/or can't conveniently or cost-effectively switch just because one supports a certain standard and another doesn't.
    Believe me I understand this to an extent. However at least you have options that compete with various Adobe products.
    By developing competing systems like this and then forcing users into a situation where they can't even choose one or the other, Apple, AMD, NVidia and Adobe are all screwing the very people they seek to woo.

    Again it is a problem that the users need to place back on the developers. All that is needed is a concerted effort to make sure that they hear about the different issues associated with their policies. If no one stands up and says - hey Adobe your subscription plan sucks - or - we want OpenCL support - Adobe won't do much to change.
  • Reply 506 of 1320
    asciiascii Posts: 5,936member

    Quote:

    Originally Posted by wizard69 View Post



    Well I'd prefer solid and glitch free to parity with PC drivers.


    Me too. Just last night I had to reboot my PC twice due to the video driver bringing down the system while gaming. All the optimizations on the PC are nice but they come in the form of tricky code which is harder to get all the kinks out of.


     


    BTW has anyone tried any triple-A titles on Mavericks (e.g. Batman, Deus Ex on the Mac App Store)? Better frame rates?

  • Reply 507 of 1320
    hmmhmm Posts: 3,405member

    Quote:

    Originally Posted by wizard69 View Post







    Again it is a problem that the users need to place back on the developers. All that is needed is a concerted effort to make sure that they hear about the different issues associated with their policies. If no one stands up and says - hey Adobe your subscription plan sucks - or - we want OpenCL support - Adobe won't do much to change.


    I think they went with whatever was available and stable at the time of development. If I remember correctly, the raytracer in After Effects uses technology that is published by NVidia. NVidia also owns the remnants of Mental Images and just a lot of graphics and rendering related technology. I don't think it was a move they made for no reason. Adobe has added OpenCL support in some applications, and they will probably continue. As to the subscription thing, it's still cheaper than Autodesk or the Foundry. Both charge an initial license fee + maintenance. Maintenance alone can be quite expensive depending on the application. I would argue that Adobe's pricing isn't that high compared to some of the others. It's not that I like the subscription thing. It's more expensive for me. I'm just saying they aren't anywhere near the worst. Apple tends to sink the cost of software as it's often used more to market hardware than as a primary source of profits. That is a major distinction. Apple also doesn't make as many updates to their more specialized software products.


     


    Quote:

    Originally Posted by wizard69 View Post





    The problem with the flip-flopping is that it is hard on customers and even developers. Further, it looks like Apple does this for some incalculable reason, to avoid a long linkage to any one company. In a nutshell the flops don't appear to be performance based, so I really don't know why we get a flop almost every other year.


     


    I'm not sure. They seem to alternate quite a bit. I don't really have much insight to add.


     


    Quote:




    The good thing for the low end guys is that this problem is effectively gone. They will have Intel integrated GPUs and that is about it. Unless of course for some reason they implement an AMD chip with integrated graphics.

    It won't dry up, but if in three years your choices are all AMD based then you are screwed. Speaking of which, I don't believe that NVidia has a choice with respect to high performance GPU cards and aggressive marketing into specialty industries. The market for all of those low cost GPUs will dry up slowly until the high end is about all you have as a GPU manufacturer. Due to this I don't see NVidia's high end chips getting cheaper but rather the opposite. Limited markets and low prices won't lead to NVidia being around long.



     


    Well NVidia does support OpenCL on their current cards. I would have to look up the state of OpenCL 1.2 support, but I'm pretty sure it's in place for currently supported cards. On the OSX end, a lot of developers are drifting toward OpenCL over the longer term, but I wouldn't feel uncomfortable being set up with something that could use CUDA today. It's not a huge deal to me as while I use that functionality in a couple programs, I don't use it that frequently on my own machine. I have a CS6 license, so I have Premiere. I don't use it anywhere near daily. I will probably transition to their subscription model eventually, but there's nothing pressing at the moment. If anything I'm likely to spend less time with it a year from now. Treating licenses as perpetual works better on Windows anyway. With OSX something will break it as it changes more frequently. Things that annoy me with Adobe are more like last minute change notices. Late in CS5, they announced CS6 upgrade eligibility would cut off at CS5 rather than the old rule of three versions back. It's stupid to announce such a change nearing the end of an upgrade cycle. They backed off considerably as I anticipated, but it was still a stupid thing to do.


     


     


     


    Quote:


    Sometimes I see the possibility of AMD trying to crush NVidia with this partnership with Apple. However even AMD will have issues selling discrete GPUs and thus will have to try to maintain pricing on high end cards. This whole issue with respect to Mac Pro pricing is very perplexing.

    It is looking like Mavericks irons a lot of this out. I'd be embarrassed if I had to try to sell Apple hardware into a technical market as there is no software "backup" for that hardware. The state of drivers has been laughable.



     


    Statements like that are why I think you really hate NVidia. They have their long term bets. What you're describing is something I mentioned before. The gaming cards essentially subsidize Tesla and Quadro cards. Selling $100 gaming cards means terrible margins, but if they ship enough they can absorb fab costs. I suspect they have some other plan for that going forward. I agree with you on drivers. I see something like dual gpus optimized for computation as something that would appeal to scientific markets. Heavier workstation sales have been somewhat flat depending on what quarter you examine, but if you take something that used to require cluster time and move it down to a single user device that requires minimal IT support, that should move units. Obviously I'm talking about the low end of that market and people who merely use time on such a cluster, not tasks where an entire cluster is dedicated.


     


    Quote:


    Until my heart is crushed by outlandish pricing I'm pretty bullish on this new Mac Pro. It really could be a corporate success story given Microsoft's fast decline.



    Microsoft just keeps missing every turn. Prior to Apple in the phone market, they should have been worried about Blackberry. In both cases they showed up late without something truly interesting. Look at how much money they had to throw at the Xbox just to catch up to Sony. If Sony hadn't weakened overall, I'm not sure MS would have gained as much leverage. They make a lot of stuff that works well enough, and they aren't really doing that bad. It's just they miss potential markets and throw products out just to have a presence. Even with that, Windows doesn't bother me as much as some of you guys.




     


     


    Quote:


    As to the GPUs, if you can dig through AMD's terrible web site you should be able to find some technical details on the FirePros. The data rates the cards are capable of are limited by DisplayPort. AMD specifically says they can handle data rates much higher than the DisplayPort standard. So I'm wondering what AMD and Apple have been up to the last couple of years. I'm thinking a new video monitor with more than 4K resolution. Apple really has released few details about the video capabilities of the machine.

    This got me thinking, would AMD and Apple go to all of this trouble and then develop drivers for old architectures? It really doesn't make sense now. Which makes me wonder if AMD might debut new FirePro chips about the time this machine ships. As much as Apple has announced this machine they really haven't said much about it.



    I am not sure. They don't seem to have anything significant coming out in terms of chip architecture that I can find. I haven't followed too closely. I don't see anything wrong with 4K. It's a lot of resolution. As to developing drivers, AMD 7000 chip drivers have shown up in prior Mountain Lion builds. AI and MR both published articles on that something like a year ago. The low level drivers may have been in place for some time.


     


     


    Quote:




    I would expect AMD and Apple to stay with the GCN based cards to minimize driver issues. This gives them a number of performance levels to choose from. The problem with these "cards" is that the cards won't be in Apple's machine. So it is very difficult to judge pricing. There is an intrinsic value in the parts on the card and a profit. The problem is I'm not certain what those numbers actually are. I have a hard time believing that these $3000 cards have even $1500 worth of parts. If that. I could see the cost to Apple being close to that of an equivalent desktop card.



    Much of the time they're extremely similar in hardware when compared to the gaming cards. I would need to compare specs, but the 7970 should be close to the W9000 in terms of raw specs. Workstation cards aren't always faster in every way. They're generally more stable in OpenGL apps. They often hold up better in double precision calculations. At the high end you can find them with more ram. NVidia debuted iray back in 2009 or 2010. At that time a tesla card was the only way you could possibly render a scene that contained more than a couple objects, especially if it contained any "hero" geometry as in maximum level of detail and high resolution mapped textures. I suspect the ram is a big thing with other types of computation too.


     


    Quote:


    I really like Python, it fits into my minor role when it comes to programming. You are right about the Python 3 transition, it could have been done better.

    Actually I'm not big on it but rather know that there is a great deal of science and engineering code built upon it. Of course the smarter types have moved on to C++ or even better languages. I just see offering a Fortran compiler as a low traction way to bring more technical people into the fold.


     




     


    Yeah I found out about certain changes in mutability and some other things when scripts wouldn't run. Obviously that filled me with rage. I do most of it in a text editor. I only use an IDE if I want something like pop up flags, and even then I usually use embedded ones in certain software, as they auto load software specific semantics. I wish I could comment on Fortran, but I try to avoid being a wikipedia scholar.

  • Reply 508 of 1320
    asciiascii Posts: 5,936member

    Quote:

    Originally Posted by hmm View Post


     


    Microsoft just keeps missing every turn. Prior to Apple in the phone market, they should have been worried about Blackberry. In both cases they showed up late without something truly interesting. Look at how much money they had to throw at the Xbox just to catch up to Sony. If Sony hadn't weakened overall, I'm not sure MS would have gained as much leverage. They make a lot of stuff that works well enough, and they aren't really doing that bad. It's just they miss potential markets and throw products out just to have a presence. Even with that, Windows doesn't bother me as much as some of you guys.



    They thought the PC was going to migrate into the living room, but it migrated to the phone. A lot of money spent building a competitor in the TV gaming space for nothing (strategically speaking).

  • Reply 509 of 1320
    macroninmacronin Posts: 1,174member

    Quote:

    Originally Posted by hmm View Post


    As to the subscription thing, it's still cheaper than Autodesk or the Foundry. Both charge an initial license fee + maintenance. Maintenance alone can be quite expensive depending on the application. I would argue that Adobe's pricing isn't that high compared to some of the others. It's not that I like the subscription thing. It's more expensive for me.



     


    But if you do not pay maintenance, the app still works. Maintenance covers upgrades & customer support for each year. For The Foundry, you are basically buying the app again every 5 years, cost-wise. And if you are using the apps to make a living, then that is just the cost of doing business (and a tax deduction).

  • Reply 510 of 1320
    hmmhmm Posts: 3,405member

    Quote:

    Originally Posted by ascii View Post


    They thought the PC was going to migrate into the living room, but it migrated to the phone. A lot of money spent building a competitor in the TV gaming space for nothing (strategically speaking).



    I never personally thought that. I mean what they did with the Xbox isn't bad. It should have been obvious things would go to the phone as far back as the Handspring Treo, speaking of which Microsoft had earlier phone products. They just didn't pursue that area hard enough or utilize it to its potential.


     


    Quote:

    Originally Posted by MacRonin View Post


     


    But if you do not pay maintenance, the app still works. Maintenance covers upgrades & customer support for each year. For The Foundry, you are basically buying the app again every 5 years, cost-wise. And if you are using the apps to make a living, then that is just the cost of doing business (and a tax deduction).



     


    I do know what it covers. You can deduct the cost of Adobe's rental model too without amortization. I really don't like what Adobe did, but I think the complaints are still somewhat overstated. If adoption lags too much, they will offer better rates.

  • Reply 511 of 1320
    MarvinMarvin Posts: 15,435moderator
    hmm wrote: »
    There are two cards, but there's no reason to expect linear scaling.

    There is for some tasks but not all tasks. Metro 2033 scales nearly perfectly across two GPUs:

    http://www.hardware.fr/articles/848-22/amd-radeon-hd-7970-crossfirex-test-28nm-gcn.html

    as do some compute tasks:

    http://www.tomshardware.com/reviews/geforce-gtx-690-benchmark,3193-12.html
    hmm wrote: »
    I don't think Apple mentioned whether usb3 would be included. It's not on that chipset, but I suspect Apple would test usb3 chips for the thunderbolt displays too.

    It says USB 3 on the page:

    http://www.apple.com/mac-pro/
    hmm wrote: »
    Huh? Your link doesn't suggest that. The W5000 is just over the 5870 in directx.

    That's on the low preset, which won't be using advanced shaders. If you look at the high preset, the 5870 is 10 FPS and the W5000 is 22. In Metro 2033, the 5870 is 5.5, the W5000 is 15. In some of the less intensive games they come out fairly even, though, and I see the cumulative performance puts the W5000 only slightly ahead of the 5870, so Apple's setup would be closer to 2x a 5870. Still quite a long way from being below a 5770, which is half the performance of a 5870.

    You can also see that the cumulative performance is around half of a 7970 or 680 so for things that do scale linearly, Apple's entry Mac Pro would come with that equivalent performance, which isn't really bad at all.
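
    For anyone following along, here's the back-of-envelope version of that math (a quick illustrative sketch using the rough figures above; the exact numbers are in the linked charts):

        # Rough scaling arithmetic from the figures quoted above (illustrative only).
        w5000_high, hd5870_high = 22.0, 10.0     # high-preset FPS
        w5000_metro, hd5870_metro = 15.0, 5.5    # Metro 2033 FPS

        print(w5000_high / hd5870_high)          # ~2.2x a single 5870
        print(w5000_metro / hd5870_metro)        # ~2.7x

        # Two W5000s at roughly half a 7970/680 each comes out to about one
        # 7970/680 of aggregate throughput -- if the workload scales linearly.
        print(2 * 0.5)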
  • Reply 512 of 1320
    v5vv5v Posts: 1,357member

    Quote:

    Originally Posted by wizard69 View Post



    Well it would be Adobe's problem if users rebelled a bit. Adobe has made more than a few bad business decisions of late, and yet they have yet to be significantly harmed by their users. That is actually pretty odd.


     


    Once an app becomes an "industry standard" it's really difficult for someone who makes their living using it to switch to something else. I'm not at all surprised that people have not jumped ship.


     


     




    Quote:

    Originally Posted by wizard69 View Post



    at least you have options that compete with various Adobe products.


     


    I could be wrong, but I don't think the alternatives are viable. Based on my admittedly cursory examination, other titles seem to be either not intended for pro users and lack the depth of features such work requires, or are specialty titles used mostly by Hollywood that cost as much as the computer that runs them.


     


    My complaint isn't really with any of the specific players anyway. Like so many others in so many situations, I'm just frustrated by the lack of a single, universal standard upon which all can agree.


     


    If I'm correctly understanding what I'm reading in this thread, NVidia and AMD each use different approaches to hardware acceleration and certain software titles are optimized for one or the other. Since the software I use (After Effects) uses the NVidia approach, and Apple doesn't offer NVidia hardware, my options are to change software, which is impractical, or change computer supplier, which means using Windows instead of OSX. Neither is a particularly desirable solution.


     


    If it's true that Adobe's existing support for CUDA over OpenCL/GL is simply a matter of what made sense at the time and will soon evolve to include the latter, then the issue would seem to be moot.

  • Reply 513 of 1320
    wizard69wizard69 Posts: 13,377member
    v5v wrote: »
    Once an app becomes an "industry standard" it's really difficult for someone who makes their living using it to switch to something else. I'm not at all surprised that people have not jumped ship.
    Industry standard is often used to express the concept that we don't want to try anything new. I've seen this in automation hardware and software, and also with various IT groups that refuse to modernize anything.


    I could be wrong, but I don't think the alternatives are viable. Based on my admittedly cursory examination, other titles seem to be either not intended for pro users and lack the depth of features such work requires, or are specialty titles used mostly by Hollywood that cost as much as the computer that runs them.
    Well if you need a specific feature you are pretty much screwed. However if you don't, then you do have a lot of alternatives to choose from.
    My complaint isn't really with any of the specific players anyway. Like so many others in so many situations, I'm just frustrated by the lack of a single, universal standard upon which all can agree.
    Actually the last thing you want in software is universal standards. Microsoft Office is an example of this, both good and bad. The problem with universal standards is that they often become bloated with options to try to please everybody. Sometimes this isn't too bad; Adobe so far hasn't completely ruined their apps with bloat, but MS on the other hand has damaged Office pretty badly trying to be all things to all people.

    In a nutshell, universal standards lead to stagnation and a lack of innovation.
    If I'm correctly understanding what I'm reading in this thread, NVidia and AMD each use different approaches to hardware acceleration and certain software titles are optimized for one or the other. Since the software I use (After Effects) uses the NVidia approach, and Apple doesn't offer NVidia hardware, my options are to change software, which is impractical, or change computer supplier, which means using Windows instead of OSX. Neither is a particularly desirable solution.
    You should always make your desires known to the likes of Adobe and other software developers. In simplest terms, no one wants to be forced to use the wrong hardware. However don't expect miracles; even if Adobe went all in with OpenCL acceleration it could take years to completely port everything involved.
    If it's true that Adobe's existing support for CUDA over OpenC/GL is simply a matter of what made sense at the time and will soon evolve to include the latter, then the issue would seem to be moot.
    As far as I know Adobe hasn't publicly expressed their intentions with respect to OpenCL. However from a practical standpoint it would be far easier to move to OpenCL to allow for broader hardware coverage. This isn't just an AMD / NVidia thing; tablets and cell phones run completely different hardware. So if Adobe wants to see their software running on alternative platforms then they need to have as much portable code as possible. In this regard I'm expecting OpenCL-compliant hardware in mobile devices and support for that hardware software-wise.

    By the way, Adobe is in a position where they can pick and choose which platforms to run on. The big difference here is the leverage smaller developers get by going with OpenCL. OpenCL would save such developers a lot of time when it comes to supporting alternative architectures.
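
    To make that portability point concrete, here is a rough sketch (assuming PyOpenCL and a working OpenCL driver are installed; the kernel is made up for illustration) of one kernel source being built for whatever devices a machine happens to expose:

        # Minimal sketch: the same OpenCL kernel source is compiled by each
        # vendor's driver, whether the device is an AMD or NVidia GPU, a CPU
        # implementation, or a mobile part. Assumes PyOpenCL is installed.
        import numpy as np
        import pyopencl as cl

        KERNEL_SRC = """
        __kernel void scale(__global float *buf, const float factor) {
            int gid = get_global_id(0);
            buf[gid] *= factor;
        }
        """

        data = np.arange(16, dtype=np.float32)

        for platform in cl.get_platforms():
            for device in platform.get_devices():
                ctx = cl.Context([device])
                queue = cl.CommandQueue(ctx)
                program = cl.Program(ctx, KERNEL_SRC).build()   # vendor-specific compile
                buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR,
                                hostbuf=data)
                program.scale(queue, data.shape, None, buf, np.float32(2.0))
                out = np.empty_like(data)
                cl.enqueue_copy(queue, out, buf)
                print(device.name, out[:4])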
  • Reply 514 of 1320
    hmmhmm Posts: 3,405member

    Quote:

    Originally Posted by Marvin View Post





    There is for some tasks but not all tasks. Metro 2033 scales nearly perfectly across two GPUs:



    http://www.hardware.fr/articles/848-22/amd-radeon-hd-7970-crossfirex-test-28nm-gcn.html



    as do some compute tasks:



    http://www.tomshardware.com/reviews/geforce-gtx-690-benchmark,3193-12.html


    Thanks. I was unaware of that.


     


    Quote:


     


    It says USB 3 on the page:




    Again I missed that. I'm not aware of any native support from intel there, but they can probably reuse some of that work for the thunderbolt display updates.


     


     


    Quote:


    http://www.apple.com/mac-pro/

    That's on the low preset, which won't be using advanced shaders. If you look at the high preset, the 5870 is 10 FPS and the W5000 is 22. In Metro 2033, the 5870 is 5.5, the W5000 is 15. In some of the less intensive games they come out fairly even, though, and I see the cumulative performance puts the W5000 only slightly ahead of the 5870, so Apple's setup would be closer to 2x a 5870. Still quite a long way from being below a 5770, which is half the performance of a 5870.



     


    I just looked through each chart. Keep in mind I hate using gaming benchmarks. Typically the prime function of these things is high precision OpenGL used in 3d apps, engineering applications, and some scientific work. That aside the difference you speak of is mainly seen on ultra at 2560x1440. High at 1080 shows a much lower percentage of deviation. I suspect that it's a 1GB 5870 and those settings. Some of the higher NVidia cards use 1.5GB on typical configurations.

    Looking at it in context, it's still way below where even the imac would be on that chart. The 680mx should be around a 660 desktop card, maybe a bit higher or lower. It's not going to be an extreme deviation. Likewise the 7970 is close to double. Assuming strong scaling it would be around the same with 2x W5000s at a significantly higher cost, again assuming PC side pricing provides a good pricing reference.

    The best comparison I could find was on Tom's hardware. They tested enough different scenarios. As you can see it's all over the place, so in the end it depends on software support. Apple really needs to provide this support in the form of libraries rather than rely on developers to adjust for whatever cards they choose that year. That always leads to delayed certification when hardware and software releases are poorly aligned.

    On OSX I'm not sure you'll see as much driver differentiation. The biggest thing in using workstation cards there for computation is the sheer amount of memory. At the top levels you can't get that on a radeon card. It's the same reason NVidia packed the Teslas with ram. In many cases Fermi gaming cards did well with computation assuming the problem was small enough to fit within the card's memory.


     


    http://www.tomshardware.com/reviews/firepro-w8000-w9000-benchmark,3265.html


     


     


    From the last page of that review


     


     


    Quote:


    A direct comparison between AMD’s FirePro W-series and Nvidia’s Quadro family is difficult. One somehow manages to shine when the other runs into major problems. The tasks these cards are built to perform are varied and complex, leaving us to look at benchmarks for specific tasks, which keep us from generalizing about performance overall.


    TSMC's 28 nm process technology, AMD's GCN architecture, and the hardware-accelerated features yet to be enabled by ISVs are all very promising. However, AMD's drivers don't necessarily seem mature enough to gauge what these cards will do in the weeks and months to follow. But potential is there, and once AMD’s software team is done closing some of the gaps seen in today's testing, the new FirePro cards could become well-priced alternatives to Nvidia's more established Quadro line-up. We’re also looking forward to the lower-end offerings in AMD’s FirePro W-series, and we’ll try to get our hands on them as soon as they're available.




    If you flip through it, you'll see each card ran into issues. A couple wouldn't run on whatever hardware. I'm just not big on comparing to an obviously outdated card today. The imac pulled ahead of the 5870 with the 680mx. The 5870 came out in late 2009. It's not quite 4 years old at this point, but I still consider it a poor point of reference. At this point I would expect better performance or something more cost effective. Given that the overall design doesn't accommodate a lot of extra storage internally or anything in the way of after market parts, I would expect them to provide a good value in terms of performance even on the entry to mid level models. In my opinion the 4,1 and 5,1 were very poor values overall. They could have done better, and it doesn't come down solely to specs. There's a lot of discussion of better framework tuning and support in Mavericks, and that is part of providing a viable platform.


     


    Quote:


    You can also see that the cumulative performance is around half of a 7970 or 680 so for things that do scale linearly, Apple's entry Mac Pro would come with that equivalent performance, which isn't really bad at all.






    Yeah I noticed that. I guess the extra ram would be good. I try to compare against Windows benchmarks, but it can be difficult as OSX versions can be all over the place relative to Windows and even Linux.

  • Reply 515 of 1320
    wizard69wizard69 Posts: 13,377member
    hmm wrote: »
    Thanks. I was unaware of that.

    Again I missed that. I'm not aware of any native support from intel there, but they can probably reuse some of that work for the thunderbolt display updates.
    Most likely Intel will be launching a new chipset around the time this hardware ships. The "motherboard" card is rather small so I can't imagine Apple wasting space on PCI to USB3 bridge chips. Another possibility is that they used some glue logic to link a desktop north bridge. Then again Intel and Apple could have partnered on an Intel custom North Bridge.

    I would lean towards intel shipping new hardware myself. A custom chip for Apple seems to be unlikely. Further the Xeon environment could really use an update that supports USB 3 generally.


    I just looked through each chart. Keep in mind I hate using gaming benchmarks. Typically the prime function of these things is high precision OpenGL used in 3d apps, engineering applications, and some scientific work. That aside the difference you speak of is mainly seen on ultra at 2560x1440. High at 1080 shows a much lower percentage of deviation. I suspect that it's a 1GB 5870 and those settings. Some of the higher NVidia cards use 1.5GB on typical configurations. Looking at it in context, it's still way below where even the imac would be on that chart. The 680mx should be around a 660 desktop card, maybe a bit higher or lower. It's not going to be an extreme deviation. Likewise the 7970 is close to double. Assuming strong scaling it would be around the same with 2x W5000s at a significantly higher cost, again assuming PC side pricing provides a good pricing reference. The best comparison I could find was on Tom's hardware. They tested enough different scenarios. As you can see it's all over the place, so in the end it depends on software support. Apple really needs to provide this support in the form of libraries rather than rely on developers to adjust for whatever cards they choose that year.
    Apple is a very strong advocate for the use of libraries (Accelerate). They strongly recommend this as opposed to using the vector capability of a processor directly. The problem is I don't believe that Accelerate makes use of OpenCL or GPUs in other ways. It would be interesting to see Accelerate transparently use GPUs. However the problem with GPU compute is that the programmer often has to know about the GPU's capabilities to really benefit from it.

    So the question is, is it even possible to write a worthwhile generic library for a GPU? We have certainly seen such for high level languages like Python, but how much do you lose performance-wise?
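
    As a rough data point on the "generic library from a high-level language" question, this is roughly the minimum it takes with PyOpenCL's array wrapper (an assumed third-party setup; as far as I know Accelerate offers nothing like it):

        # Minimal sketch of a "generic" GPU array library from Python, assuming
        # PyOpenCL is installed. Convenient, but every operator below launches
        # its own kernel and allocates temporaries, which is part of the
        # performance you give up versus hand-written OpenCL.
        import numpy as np
        import pyopencl as cl
        import pyopencl.array as cl_array

        ctx = cl.create_some_context()
        queue = cl.CommandQueue(ctx)

        a = cl_array.to_device(queue, np.random.rand(1000000).astype(np.float32))
        b = cl_array.to_device(queue, np.random.rand(1000000).astype(np.float32))

        c = (2.0 * a + b).get()   # elementwise math on the GPU; .get() copies back
        print(c[:4])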
    That always leads to delayed certification when hardware and software releases are poorly aligned. On OSX I'm not sure you'll see as much driver differentiation. The biggest thing in using workstation cards there for computation is the sheer amount of memory. At the top levels you can't get that on a radeon card. It's the same reason NVidia packed the Teslas with ram. In many cases Fermi gaming cards did well with computation assuming the problem was small enough to fit within the card's memory.
    RAM and GPU compute is a difficult subject to get a handle on. I wouldn't be surprised to find out that one reason Apple went with AMD is that they seem to be of the same mind as to how GPU compute should work in the future. That elusive goal of heterogeneous computing. I'm not sure how AMD will support this on cards sitting on the PCI Express bus though.

    Depending upon the app though, simply moving data in and out of RAM on the GPU card becomes a bottleneck. Also, as you note, GPU RAM can put limits on what the GPU can realistically do. The Mac Pro throws another wrench into the works by spreading GPU RAM across two cards.

    It will be very interesting to see how this all works out. I wouldn't be surprised to find out that getting drivers up to snuff has delayed this new Mac Pro more than hardware issues.



    From the last page of that review


    If you flip through it, you'll see each card ran into issues. A couple wouldn't run on whatever hardware. I'm just not big on comparing to an obviously outdated card today.
    This is why I tend to reject the rantings of people that pop up one URL to declare their GPU the absolute best. The reality is that the unique architectures involved mean that results will be all over the map. A programmer really only has two choices: one is to live with what you have; the other is to test on all platforms and go with what works best.

    The problem with the second approach is that there are far too many rapidly changing variables to contend with. Driver updates, compiler updates, library updates and updates to the physical architectures all impact how a GPU will perform with your code. This is why I see all the posts expressing a desire for NVidia GPUs as unwarranted. Things change rapidly in this industry; for example NVidia used to suck at double precision.

    By the way, if you are stuck on NVidia because of software being tied to it, you need to either buy the specific hardware or complain loudly to NVidia. From the standpoint of a developer it isn't smart to be tied so closely to hardware anyways.
    The imac pulled ahead of the 5870 with the 680mx. The 5870 came out in late 2009. It's not quite 4 years old at this point, but I still consider it a poor point of reference. At this point I would expect better performance or something more cost effective. Given that the overall design doesn't accommodate a lot of extra storage internally or anything in the way of after market parts, I would expect them to provide a good value in terms of performance even on the entry to mid level models. In my opinion the 4,1 and 5,1 were very poor values overall. They could have done better, and it doesn't come down solely to specs. There's a lot of discussion of better framework tuning and support in Mavericks, and that is part of providing a viable platform.
    Mavericks seems to be a bigger update than Apple lets on. Maybe they want to surprise everybody.

    Yeah I noticed that. I guess the extra ram would be good. I try to compare against Windows benchmarks, but it can be difficult as OSX versions can be all over the place relative to Windows and even Linux.

    OSX and GPU support have suffered for a long time from neglect. I'm really hoping that Mavericks lives up to its billing as various web posts imply.
  • Reply 516 of 1320
    hmmhmm Posts: 3,405member

    Quote:

    Originally Posted by wizard69 View Post





    Most likely Intel will be launching a new chipset around the time this hardware ships. The "motherboard" card is rather small so I can't imagine Apple wasting space on PCI to USB3 bridge chips. Another possibility is that they used some glue logic to link a desktop north bridge. Then again Intel and Apple could have partnered on an Intel custom North Bridge.


    New computers still have a real northbridge?


     


    Quote:


     


     


    I would lean towards intel shipping new hardware myself. A custom chip for Apple seems to be unlikely. Further the Xeon environment could really use an update that supports USB 3 generally.




    Every other oem shipped it as a discrete component. My point was that they would have to test third party ones anyway if an updated thunderbolt display is to include it.


     


    Doh forum chopped my post again and I'm not rewriting all of that. That was the one time I forgot to copy before submitting.

  • Reply 517 of 1320
    I still wonder if the Mac mini (or even the Apple TV) will get this design look, just obviously smaller.
  • Reply 518 of 1320
    MarvinMarvin Posts: 15,435moderator
    I still wonder if the Mac mini (or even the Apple TV) will get this design look, just obviously smaller.

    There's no reason to. The Mac Pro uses a triangular heatsink because it has to cool 3 high power chips (about 650W combined) and house a very large power supply. The Mini has 1 low power chip (45W) and a very small power supply. The Apple TV is passively cooled (~2W) so doesn't even need a fan.

    If the Mini changes to PCIe SSD for one of the drives, I could see it getting shorter but there's no reason for it to be cylindrical.
  • Reply 519 of 1320

    Quote:



    Quote:



    Yeah I noticed that. I guess the extra ram would be good. I try to compare against Windows benchmarks, but it can be difficult as OSX versions can be all over the place relative to Windows and even Linux.




    OSX and GPU support have suffered for a long time from neglect. I'm really hoping that Mavericks lives up to its billing as various web posts imply.



     


    Neglect.  Wasn't it Jobs that once said that Apple lost sight of their crown jewels once upon a time?  


     


    I hope Mavericks reverses this apparent trend.  GPU support and Open GL have seemed weak to me over the years compared to Windows GPUs and drivers, be they GL or Direct X.  Certainly a well-grounded perception of 'lagging' GL development by many, many commentators and critics...


     


    Lemon Bon Bon.

  • Reply 520 of 1320


    Mavericks looks to be shaping up to be a 'Snow Leopard' style release?


     


    Lemon Bon Bon.
