Apple throws out the rulebook for its unique next-gen Mac Pro


Comments

  • Reply 481 of 1320
relic Posts: 4,735 member

    Quote:

Originally Posted by hmm




    You can already allocate tasks to other machines, but it's not as much of a cluster as Marvin is suggesting. Your system will not see it as just another cpu. It's on the end of something that is effectively a PCI bridge, although I'm not entirely sure how the system sees it. It can't be treated like something that is just over QPI, and there would be no shared memory address space. I think there are certain things that hold back lighter systems when it comes to this stuff in real world implementations. For the users that would actually invest in their own server farm, ram would be an issue for some. Software licenses are another issue, as all software is licensed differently. A lot of rendering software will actually have things like node licenses where past a couple machines you pay extra to use it on additional machines that are solely dedicated to rendering. That isn't something that would really affect a single user, but in multi-user environments, it may influence what level of hardware they purchase.



     



There is free clustering software for Linux. There are also complete Linux clustering distros (Rocks Cluster is one); you make boot CDs or USBs that turn your PC into a cluster node. That would be the cheapest way to go. As far as Apple coming out with their own solution, I highly doubt it; if they haven't done it yet, well. I'm sure a third party will fill that void, but it probably won't be cheap.
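Just to make that idea concrete, here's a minimal sketch of the cheap-cluster approach: farming render frames out to nodes over SSH. The hostnames, the render command, and the shared paths are all made up for illustration, so swap in whatever your nodes and renderer actually use.

```python
# Minimal render-farm sketch: round-robin frames to cluster nodes over SSH.
# "node01".."node03", the "render" command, and the /shared/* paths are all
# hypothetical placeholders -- adjust for your own setup.
import subprocess
from itertools import cycle

nodes = ["node01", "node02", "node03"]   # machines booted as cluster nodes
frames = range(1, 101)

jobs = []
for node, frame in zip(cycle(nodes), frames):
    cmd = ["ssh", node, "render",
           "--scene", "/shared/scene.lwo",
           "--frame", str(frame),
           "--out", "/shared/out/frame_%04d.png" % frame]
    jobs.append(subprocess.Popen(cmd))   # fire off the job on that node

for job in jobs:
    job.wait()                           # block until every frame is done
```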


     


If you had a Linux machine, I would recommend spending the money to buy one of these:



It's a Tesla x4 GPU computing unit; you can now grab them for about 500 bucks. It's an older model but it's still 5 Teraflops, and for the price of a W9000 you can buy 6 of them for a total of 30,000 Teraflops.

  • Reply 482 of 1320
hmm Posts: 3,405 member

    Quote:

Originally Posted by Relic


There is free clustering software for Linux. There are also complete Linux clustering distros (Rocks Cluster is one); you make boot CDs or USBs that turn your PC into a cluster node. That would be the cheapest way to go. As far as Apple coming out with their own solution, I highly doubt it; if they haven't done it yet, well. I'm sure a third party will fill that void, but it probably won't be cheap.


     


If you had a Linux machine, I would recommend spending the money to buy one of these:



It's a Tesla x4 GPU computing unit; you can now grab them for about 500 bucks. It's an older model but it's still 5 Teraflops, and for the price of a W9000 you can buy 6 of them for a total of 30,000 Teraflops.



Your knowledge of salvaging retired geek stuff is really impressive. The topics of 3D rendering and non-linear editors come up frequently when talking about this stuff; that is basically my interest in GPGPU. In terms of Tesla cards, some of the Fermi gaming cards often compared quite well on Windows in terms of raw speed on the limited number of renderers where offline tasks could be primarily GPU driven. It was usually a case of a GPU supporting too few total materials in a scene, or having too little RAM. I think RAM would be a big one; it would be for me if I were looking at used equipment. I think offline renderers have been held back by limits on RAM and on the total materials that can be addressed that way. I deal more with still 3D renders and comps than anything, and things are a lot faster than a few years ago. It would be really nice to have something render in an hour, though, without having to clean up massive amounts of noise and artifacts.


    Quote:

Originally Posted by Relic


     


You're right about more apps having OSX support; I completely forgot about Modo and Nuke. I replaced a 6-year-old Lightwave version with Modo, though they require CUDA to calculate certain nodes and your rendering times are much, much faster with it. Interestingly, I think Modo runs better in OSX than Lightwave does, but I still went for the Linux version because of the Tesla support. The Foundry recommends Redhat 6 or CentOS 6 and Nvidia GPUs for all of their products; hopefully they will start paying more attention to the new Mac Pros. It will depend on how popular they are, I guess, so fingers crossed everyone.



    Autodesk certifies their Linux software with Redhat as well. Looking at their site, they seem to suggest Redhat Enterprise 5.4. I kind of expected the Foundry to stick to something similar given the amount of user overlap there. Modo is a cool piece of software. I like some of its tools. I dislike the lack of a construction history or modifier stack or something to help edit. The renderer seems to have finicky sampling too. I'm not sure how good the shaders are. If the Foundry decides to improve on that renderer and possibly publish good API literature to allow for better shader writing, it would be an awesome software package. Shaders are a big deal to me. Some of the 3d packages have really buggy out of the box shaders when it comes to displacement or rebuilding passes. It's very annoying. It can be something as simple as unintuitive behavior in terms of how they normalize scattered and direct reflections for energy conservation or "physical" sun systems without good importance sampling in place. Those kinds of things fill me with rage as they result in highly situational tools or convoluted workarounds.


     


The popularity aspect does concern me. I can see a lot of people waiting to see what happens in software and third party hardware support, making for slow initial sales. Apple may have a better idea how development looks there. I wonder about other things, like whether plugging in a typical DisplayPort display still taxes the Thunderbolt chip. The GPU itself is probably running on its own lanes to some degree. I have no idea how they implemented that, but it would be a lot more open if it were possible to plug in a DisplayPort display and only consume the port.

  • Reply 483 of 1320
Marvin Posts: 15,435 moderator
    relic wrote: »
I replaced a 6-year-old Lightwave version with Modo, though they require CUDA to calculate certain nodes and your rendering times are much, much faster with it. Interestingly, I think Modo runs better in OSX than Lightwave does, but I still went for the Linux version because of the Tesla support. The Foundry recommends Redhat 6 or CentOS 6 and Nvidia GPUs for all of their products; hopefully they will start paying more attention to the new Mac Pros. It will depend on how popular they are, I guess, so fingers crossed everyone.

    Don't you mean Nuke when you're talking about CUDA nodes?

    http://www.thefoundry.co.uk/products/nuke/system-requirements/

    "To enable NUKEX 7.0 to calculate certain nodes using the GPU, you need to have:

    An NVIDIA GPU with compute capability 1.1 or above."
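If you want to check whether a given card clears that bar, something like this works; it's just a sketch assuming the PyCUDA bindings and an NVIDIA driver are installed, and has nothing to do with The Foundry's own tooling.

```python
# List CUDA devices and flag whether they meet compute capability 1.1.
import pycuda.driver as drv

drv.init()
for i in range(drv.Device.count()):
    dev = drv.Device(i)
    major, minor = dev.compute_capability()
    ok = (major, minor) >= (1, 1)
    print("%s: compute capability %d.%d -> %s" % (
        dev.name(), major, minor,
        "meets the NUKEX requirement" if ok else "too old"))
```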
    relic wrote:
It's a Tesla x4 GPU computing unit; you can now grab them for about 500 bucks. It's an older model but it's still 5 Teraflops, and for the price of a W9000 you can buy 6 of them for a total of 30,000 Teraflops.

    I'd like to exchange some currency at the bank you work at. Multiplying 5Tflops by 6 gives you 30Tflops, not 30,000Tflops. I guess that explains the economy. Maybe give that PnL a second look over. ;)

    Apple noted the Mac Pro as being 7Tflops. This is equivalent to about two Tesla K20 cards ( http://www.nvidia.com/object/personal-supercomputing.html ). A single Tesla K20 is $3500. If Apple has the whole top-end Mac Pro for $6-7k, it would be cheaper than using Teslas.
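Back-of-the-envelope version, using the rough numbers thrown around in this thread (a used Tesla unit at $500 for 5Tflops, a K20 at $3500 for about 3.5Tflops SP, a guessed $6.5k top-end Mac Pro at 7Tflops), so treat the prices as forum estimates rather than official figures:

```python
# Thread's rough figures, not official specs.
used_tesla_tflops, used_tesla_price = 5.0, 500.0    # older Tesla x4 unit
k20_tflops, k20_price = 3.5, 3500.0                 # single Tesla K20 (SP)
mac_pro_tflops, mac_pro_price = 7.0, 6500.0         # guessed top-end Mac Pro

print("6 used Tesla units:", 6 * used_tesla_tflops, "Tflops")   # 30, not 30,000
print("$ per Tflops -- used Tesla: %.0f, K20: %.0f, Mac Pro: %.0f" % (
    used_tesla_price / used_tesla_tflops,
    k20_price / k20_tflops,
    mac_pro_price / mac_pro_tflops))
```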
  • Reply 484 of 1320
nht Posts: 4,522 member

    Quote:

Originally Posted by Marvin



    Apple noted the Mac Pro as being 7Tflops. This is equivalent to about two Tesla K20 cards ( http://www.nvidia.com/object/personal-supercomputing.html ). A single Tesla K20 is $3500. If Apple has the whole top-end Mac Pro for $6-7k, it would be cheaper than using Teslas.


     


W9000s should be around 3.9 Tflops (SP).  For DP the W9000s only weigh in at 0.998 Tflops each.  A K20 is 1.17 Tflops for DP and 3.52 Tflops SP and can be had for around $3100.


     


    They must be underclocking the FirePro or running at less than x16 or both.


     


    The nVidia Titan weighs in at 1.3 Tflops (DP) and costs $1000.  That's cheaper still and viable on a tower with slots.

  • Reply 485 of 1320
macronin Posts: 1,174 member

    Quote:

Originally Posted by nht


     


W9000s should be around 3.9 Tflops (SP).  For DP the W9000s only weigh in at 0.998 Tflops each.  A K20 is 1.17 Tflops for DP and 3.52 Tflops SP and can be had for around $3100.


     


    They must be underclocking the FirePro or running at less than x16 or both.


     


    The nVidia Titan weighs in at 1.3 Tflops (DP) and costs $1000.  That's cheaper still and viable on a tower with slots.



Makes me think that Apple might have gone with AMD/ATI because they perform slightly worse than the nVidia offerings; therefore Apple could use that as pressure for lower wholesale/bulk pricing on FirePro GPUs…

  • Reply 486 of 1320
relic Posts: 4,735 member

    Quote:

Originally Posted by Marvin



    An NVIDIA GPU with compute capability 1.1 or above."

I'd like to exchange some currency at the bank you work at. Multiplying 5Tflops by 6 gives you 30Tflops, not 30,000Tflops. I guess that explains the economy. Maybe give that PnL a second look over. ;)

     


     


Woopsy..... That's why I let machines do all the calculations. I just got carried away with all that power, oh the sweet power, moooohhaahahaha.

  • Reply 487 of 1320
relic Posts: 4,735 member

    Quote:

Originally Posted by nht


     


    The nVidia Titan weighs in at 1.3 Tflops (DP) and costs $1000.  That's cheaper still and viable on a tower with slots.



Don't forget the SP is at 4.5 TFlops, not bad for $1,000. It might be worth looking into; I would pair two Titans with a Quadro K4000. The price would be the same as one W9000, K5000, or K20, and you get the benefit of a workstation card with the raw rendering compute speeds of SLI'd Titans.


     


Here are some benchmarks of the Titan; the interesting part is the 3D rendering segment.


    http://www.tomshardware.com/reviews/geforce-gtx-titan-opencl-cuda-workstation,3474.html


     


    As you can see it would still be very beneficial to have a Workstation class GPU but you can't ignore how fast 3D is with the Titan, MONSTER.

  • Reply 488 of 1320
wizard69 Posts: 13,377 member
    nht wrote: »
    No, it's not except in the most trivial of cases.  To correctly parallelize code requires more than just a compiler switch.
I think you totally missed what I was getting at, which is that most programmers I know prefer rather performant workstations or PCs because most development tools can heavily leverage the cores available. Parallel "makes" are generally able to use most of the cores a programmer can throw at the machine. In other words, I was focused not on writing parallel code but on the ability of a programmer to leverage many cores via the common tools available to them.

As to the effort required to parallelize code, that depends upon the problem domain. There are certainly many problems that require significant effort, but don't dismiss the trivial opportunities.
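To show what I mean by a trivial opportunity, here is the sort of embarrassingly parallel loop that scales across every core with nothing but the standard library; render_frame is a hypothetical stand-in for whatever per-item work you actually do.

```python
# Trivial parallelization: spread independent work items across all cores.
from multiprocessing import Pool, cpu_count

def render_frame(frame_number):
    # Placeholder for real per-frame work (render, encode, simulate, ...).
    return sum(i * i for i in range(100000)) + frame_number

if __name__ == "__main__":
    frames = range(240)
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(render_frame, frames)   # frames are split across cores
    print("processed", len(results), "frames on", cpu_count(), "cores")
```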

    Workstation users desire two things: accuracy and speed.  Lower power and quieter is nice too.  Small isn't really a major factor and frankly after all these external chassis and raid boxes I dunno that the desktop footprint of the new Mac Pro is really smaller than the old Mac Pro because it's not like you can put these things on top of the Mac Pro and you can't necessarily put the Mac Pro on top of them.  Plus the cabling is a PITA to deal with and a dust ball magnet.
In many cases you would have one extra cable going to a disk array. Beyond that, your cabling nightmares might actually be simplified. Think about audio recording that in the past might have been routed into an I/O card at the back of the machine. A simple installation might involve four cables. With a remotely attached console you have exactly one cable going to the Mac, which can easily support many more than four channels.

    CUDA is not a dead technology in 2013.  It will not be a dead technology in 2014.  It strikes me even unlikely in 2018.  Beyond 5 years is anyone's guess.
    It is a dead technology, investing in it is ill advised.
    What is true for TODAY and TOMORROW is that CUDA support in pro-apps is better and more complete.  For example in Premiere Pro Adobe supports both CUDA and OpenCL but only accelerates ray tracing in CUDA.
    That is Adobe's problem not OpenCL's.


    nVidia GPUs have the advantage in GPU compute and unless they screw things up they'll retain that edge.  If they do then pros will continue to prefer nVidia solutions.  I recall reading somewhere (but do not have a link) that nVidia had around 75% of the workstation GPU market.  That's not a very big market but it's the one relevant to Mac Pros.  Why?  Drivers.  Look at the initial performance results for the FirePro 9000 vs the older Quadro 6000s.
It is easy to find web sites that show just the opposite. For example: http://clbenchmark.com/result.jsp. Then again, if you look hard you can always find that one app that jibes well with the chip architecture you are benchmarking. NVidia has bigger issues long term, as they will have significantly declining sales over the next few years. The good part of declining sales is that it should force them to look harder at the high performance solutions.

"There's plenty of battles coming through, and we'll closely follow to see just how AMD can lead this battle with NVIDIA. From these initial results, AMD has plenty of work ahead to optimize the drivers in order to get their parts competitive against almost two year old Quadro cards, yet alone the brand new, Kepler-based Quadro K5000. NVIDIA has proven its position as the undisputed leader (over 90% share) and AMD has to go on an aggressive tangent to build its challenge."


    http://vr-zone.com/articles/why-amd-firepro-still-cannot-compete-against-nvidia-quadro-old-or-new/17074.html
Again, you would have to be willing to believe everything you read on the net. It is easy to find opinions that lead to a different conclusion.
    Okay...they say 90% share here. On the plus side, AMD probably is giving Apple a hell of a price break.  They'll either enter the workstation market in a big way or destroy a nVidia profit center by dumping FirePros at firesale prices to Apple.
    It will be very interesting to find out what is involved in Apple and AMD's partnership. It is pretty obvious that Apple will be able to deliver significant volume for AMD.
    The drivers have gotten better but nVidia has also filled out their Quadro line with Kepler cards.
Actually, drivers are likely to be a significant issue here. It looks like Apple put a lot of work into Mavericks to get it up to snuff. However, up to snuff has never meant high performance to Apple.
For consumer grade GPUs the Titan beats the Radeon on double precision performance for GP/GPU tasks.  You lose the safety of ECC RAM, but on the Mac the drivers were pro quality already.

This is the performance graph for single and double precision. [graphs not reproduced in the quote]
The thought comes to mind that we really don't even know what Apple and AMD will be putting on the cards. Combine that with effectively brand new drivers for Apple's OS, and we don't really know what the computational landscape will be like.


    Because by fall new AMD and nVidia GPUs will be out.  Like the Quadro K6000 which is the pro version of the Titan and much faster than the K5000.  A full GK110 GPU with all 15 SMX units and 2880 CUDA cores hasn't been released yet either.  Paired with a K20X that's going to be a powerhouse combo that's cheaper than a pair of K6000s.
    Which means nothing to the vast majority of us! We will buy what suits our needs and not upgrade for at least a couple of years. In a couple of years you will need all new hardware anyways.

    From a practical standpoint most companies do not buy new machines for you every year but will allow the purchase of a new GPU or more RAM or whatever if you need it.  Paying $3700 for another K20X is still cheaper than replacing an entire $10K+ workstation rig.
Actually it is a lot worse than that: if you don't ask for everything you need up front, then you are out of luck at most corporations. Even things like RAM upgrades require being sneaky to avoid IT departments' overbearing control.
  • Reply 489 of 1320
wizard69 Posts: 13,377 member
    relic wrote: »
Thanks for the Nvidia info. I really wasn't going to debate this issue before you brought it up, because OpenCL is a hot item right now due to the new Mac Pros; it's a cool new term that's getting thrown around with 95% of the members here having never touched it. Heck, before the new Mac Pro very few if any ever talked about OpenCL or CUDA; one mention of it by Apple and boom, it's the next big thing since the invention of the computer itself.
That has to be the biggest bunch of baloney I've seen in a long time on this forum. OpenCL is a hot technology widely adopted outside the graphic arts world. As to our concerns here, I've been wondering when Apple will get OpenCL running on Intel's GPUs. As for GPU compute, I suspect most users never even realize when the software they are running is using GPU compute.
I have been using the GPU to offset the CPU for over 5 years now and you're right, CUDA has a much bigger presence than OpenCL, especially in commercial software.
That depends upon the commercial software. Beyond that, most developers have no long-term intention of supporting CUDA. It is simply too limited and leaves developers exposed in a very negative way. CUDA had its day, but the risk isn't worth the potential performance gains.
I think this has a lot to do with Nvidia being dominant in GPU sales. There are also soooo many great scripts available for the CUDA libraries, and the community also seems larger to me; that is, I can find what I'm looking for much quicker for CUDA, as well as a lot more code examples to copy off of.
    It has a lot to do with being first and then the only viable solution for awhile.
As a predominant Linux/Unix user when working with my workstation, I have always preferred Nvidia; they have always been more pro-active with their drivers than ATI/AMD. AMD has gotten better, but nowhere near what Nvidia has to offer.
Well yeah, if you like closed source drivers on your Linux machine. Considering that currently Apple's drivers are worse than some of the open source solutions for Linux, it makes little difference if Apple uses NVidia or AMD. I'm really hoping that Mavericks changes this.
Now, not to say that after Apple releases the new Mac Pro things will change, but any major 3D software that's available for OSX using OpenCL will also have a CUDA cousin on other platforms. That, and Nvidia can still run any OpenCL-only software, if any are going to be available. CUDA was first to be available on Mari/Linux and they will continue to use it; if you want to see a real heated discussion on this subject, check out Mari's usergroup over at linkedin.com, where you will notice the ratio of CUDA to OpenCL users to be easily 5 to 1. When is Mari finally going to be available for OSX? I'd be interested to see the interface.

Of the rest of The Foundry's great products, unfortunately only Mari will be available for OSX :(
    Here are some alternative links related to OpenCL usage:
    http://www.altera.com/products/software/opencl/opencl-index.html
    http://www.amdahlsoftware.com/ten-reasons-why-we-love-opencl-and-why-you-might-too/
    http://www.wolfram.com/mathematica/new-in-8/cuda-and-opencl-support/
    http://in.amdfireprohub.com/downloads/whitepaper/OpenCL_White_Paper.pdf
http://proteneer.com/blog/ The interesting thing here is that for this usage the Titan isn't that fast.
http://www.openclblog.com/ Biased?
    http://sourceforge.net/projects/openclsolarsyst/
    http://gpgpu.org/ A great place to see the very wide variety of uses of GPU compute.

A lot of links, some OpenCL-specific. The interesting thing is that maximum performance isn't clearly the domain of any one brand.

I really don't have a problem with who wins this battle, OpenCL or CUDA. Both are awesome, and now that Apple's finally coming to the party it's only going to get better. Don't take what I said above to be anything more than an observation. I use CUDA because it was the easiest platform for me to get started in terms of programming, that and most of the software I have, including Adobe, seemed to support it better.
Apple has been at the party since they introduced OpenCL. Adobe is a minor blip in the OpenCL landscape.
    Please don't be mean to me for using CUDA.:)
    Use whatever you want, I'm just saying there isn't much of a future in it.
    Oh and I want a K20 so badly, I will wait until I find a used one on eBay though.
  • Reply 490 of 1320
wizard69 Posts: 13,377 member
    nht wrote: »
W9000s should be around 3.9 Tflops (SP).  For DP the W9000s only weigh in at 0.998 Tflops each.  A K20 is 1.17 Tflops for DP and 3.52 Tflops SP and can be had for around $3100.

    They must be underclocking the FirePro or running at less than x16 or both.
There are lots of possibilities:
1. You are jumping to conclusions.
2. Apple is being their usual conservative self and posting expected performance with their drivers.
3. Apple has been testing beta hardware and prefers realistic numbers, as opposed to marketing fluff or optimized routines that give you biased impressions of what the GPU can really do.

    The nVidia Titan weighs in at 1.3 Tflops (DP) and costs $1000.  That's cheaper still and viable on a tower with slots.
  • Reply 491 of 1320
relic Posts: 4,735 member
Nice post, but I don't agree that CUDA is dying; where did you come up with that conclusion? CUDA can be found in almost every professional application that can utilize a GPU. The amount of support and programming forums outnumbers OpenCL's by something like 5 to 1. Go check out the Mari, Nuke, Modo, Lightwave, Blender, and Adobe forums and see which one the users prefer.
  • Reply 492 of 1320
wizard69 Posts: 13,377 member
    macronin wrote: »
Makes me think that Apple might have gone with AMD/ATI because they perform slightly worse than the nVidia offerings; therefore Apple could use that as pressure for lower wholesale/bulk pricing on FirePro GPUs…

Or it could be that Apple likes AMD's roadmap better and appreciates their focus on things that NVidia doesn't seem to care about. Plus, AMD's chips have the ability to drive very high resolution displays.
  • Reply 493 of 1320
relic Posts: 4,735 member
    wizard69 wrote: »
There are lots of possibilities:
1. You are jumping to conclusions.
2. Apple is being their usual conservative self and posting expected performance with their drivers.
3. Apple has been testing beta hardware and prefers realistic numbers, as opposed to marketing fluff or optimized routines that give you biased impressions of what the GPU can really do.

Am I speculating about the Titan? Sure, but I also use similar tech and I'm a constant visitor and poster over at the Modo and Mari forums. The Titan has everyone excited, especially students who can't afford a Tesla or workstation graphics card. I'm not even talking about the Mac Pro here, just having a conversation about super cool tech. You don't have to defend Apple, I'm sure their system will kick ass. All is well.
  • Reply 494 of 1320
wizard69 Posts: 13,377 member
    relic wrote: »
Am I speculating about the Titan? Sure, but I also use similar tech and I'm a constant visitor and poster over at the Modo and Mari forums. The Titan has everyone excited, especially students who can't afford a Tesla or workstation graphics card. I'm not even talking about the Mac Pro here, just having a conversation about super cool tech. You don't have to defend Apple, I'm sure their system will kick ass. All is well.

Sometimes specs from Apple are very conservative. Take, for example, the MacBook Airs, which for many users get better than expected battery life. I'm not so much defending Apple as highlighting that there could be other reasons for them to post what appears to be lower than expected performance figures.

    As for AMD they probably have a competitor to the Titan in the wings. That is if Apple hasn't stolen all of their engineers yet.

What is interesting in this forum is all the discussions about raw performance with little concern for the real world. I doubt that raw performance was all that big of a factor when Apple sourced AMD as a partner. I'm not trying to say performance wasn't a goal so much as I'm saying AMD had more to bring to the game than NVidia could ever think about. So the other factors likely included some of these:
1. A partner that was willing to engage in the project that obviously was going to take years to complete and would end up with proprietary parts not usable elsewhere.
2. A partner that has hardware capable of driving very high resolution displays. Further, a partner that was willing to do the engineering to support Thunderbolt. Of course I have no way to prove it, but I suspect this is a big factor: Apple found itself needing something beyond DisplayPort's latest standard.
3. A partner committed to OpenCL! This is an Apple initiative and frankly a good one; there is little sense in Apple going NVidia's way when there are competing camps at NVidia.
4. AMD in general has been very willing to engage partners for custom circuitry. I wouldn't be surprised if this willingness is a factor.

In any event the focus on performance is an interesting discussion, but it is only part of the equation when putting together a new generation machine like this. This is what I think many in this thread miss. Even if some of the performance references are honest and the coming NVidia GPUs are hot performers, it doesn't matter. Why? Because there is more to a project like this than performance. If NVidia wasn't a willing partner, then it really doesn't matter if one benchmark places them ahead of AMD.

In the end performance isn't an issue anyway, because the GPUs discussed seem to group together pretty well over a broad range of usage. As it is I'm still waiting for the debut, because when it comes down to it the price on this new Mac will make or break it in the marketplace.
  • Reply 495 of 1320
Marvin Posts: 15,435 moderator
    nht wrote: »
    They must be underclocking the FirePro or running at less than x16 or both.

    They are in Crossfire so it may not scale perfectly to double with two GPUs.
    nht wrote: »
    The nVidia Titan weighs in at 1.3 Tflops (DP) and costs $1000.  That's cheaper still and viable on a tower with slots.

    The Titan would be viable if there was a fully functioning driver for it under OS X. It's also not a workstation card.
    relic wrote:
    you can't ignore how fast 3D is with the Titan

The 7970 outperformed it on a lot of the OpenCL tests, sometimes by double; the 3D tests looked fairly close. The 7970 is pretty close to the W9000. Also, the Mac Pro will have two.

    Even assuming a scenario where Apple had beefed up the PSU and put in two double-wide slots and doubled the slot power allocation, it's not likely that many end users would buy two Titans themselves without official driver support and there's no chance of getting any cost savings from Apple's bulk orders. Titan GPUs are priced lower than Teslas and Quadros for a reason.

    I personally prefer NVidia GPUs because I think they are clearly making better headway in computation. There is a note here about complications with using OpenCL on the GPU for raytracing:

    http://wiki.blender.org/index.php/Dev:2.6/Source/Render/Cycles/OpenCL

    On the CPU it's no problem. Given that it doesn't work for NVidia either, perhaps OpenCL is going to have to be reworked so that it can run more complex code on GPUs. Some developers have an advantage for raytracing with CUDA because NVidia developed their own engine:

    http://www.nvidia.com/object/optix.html

    This is what Adobe is using in After Effects. Perhaps AMD just needs to work more closely with developers trying to implement complex code and get them to work.
  • Reply 496 of 1320
hmm Posts: 3,405 member

    Quote:

Originally Posted by wizard69





1. A partner committed to OpenCL! This is an Apple initiative and frankly a good one; there is little sense in Apple going NVidia's way when there are competing camps at NVidia.



People said the same thing last year prior to the use of NVidia's Kepler GPUs in the iMacs and MacBook Pros. Everyone on here swore that Apple would never use NVidia again after the defective chips in 2008 or so. In the end they'll use whoever they wish to. I've already read similar articles to what Marvin posted on this. Some companies found the OpenCL implementation with their software to run better on NVidia GPUs, and NVidia was the first to make a move toward GPGPU computation. As for being committed to OpenCL, part of that is on Apple's end. They've been flaky about both that and OpenGL over the past several years. For me it would come down to what is usable today in a given application.


     


    Anyway NVidia is even favored in scientific areas. If you take a look at something like top500 you'll see NVidia based clusters, Xeon Phis, Xeon EP 2600s, and opterons. AMD's opterons tend to be represented at times toward the top of that list. Their gpus are never there. The overall success of this machine is going to come down to the state of software support both from Apple and third parties after the first 6 months. They're no longer selling extra space inside a box, so even the entry level model has to justify its place through performance. If we're talking about the equivalent of a pair of W5000s and a quad Xeon, it won't be very interesting.

  • Reply 497 of 1320
wizard69 Posts: 13,377 member
    hmm wrote: »
People said the same thing last year prior to the use of NVidia's Kepler GPUs in the iMacs and MacBook Pros. Everyone on here swore that Apple would never use NVidia again after the defective chips in 2008 or so.
Frankly I'm not sure what NVidia's GPUs brought to the table; I wasn't exactly happy to see NVidia return to Mac hardware. The performance and power profile is so close to AMD's hardware that it makes you wonder why Apple troubled themselves.
    In the end they'll use whoever they wish to.
    This is true, but from a customer perspective the constant flip flopping isn't cool!
    I've already read similar articles to what Marvin posted on this. Some companies found the OpenCL implementation with their software to run better on NVidia gpus, and NVidia was the first to make a move toward GPGPU computation.
Nobody is denying this. AMD has suffered from a tools issue in the past, though that has been under continual improvement. From a developer's perspective it is far better to use OpenCL on NVidia hardware than CUDA, simply because you aren't married to the hardware.
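As a rough illustration of not being married to the hardware, here is a minimal OpenCL sketch (written against the PyOpenCL bindings; the kernel and buffer names are just for illustration) that runs unchanged on an NVidia GPU, an AMD GPU, or a CPU OpenCL device:

```python
# Vendor-neutral OpenCL example: the same kernel runs on whatever device
# the installed OpenCL driver exposes (NVidia, AMD, or CPU).
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()      # picks an available OpenCL device
queue = cl.CommandQueue(ctx)

a = np.random.rand(1000000).astype(np.float32)
b = np.random.rand(1000000).astype(np.float32)

kernel_src = """
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *out)
{
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
"""

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg = cl.Program(ctx, kernel_src).build()
prg.vec_add(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
print("max error:", abs(result - (a + b)).max())
```

Swapping the device only changes which driver create_some_context picks up; the kernel source and the host code stay the same, which is exactly the lock-in you avoid relative to CUDA.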
    As for being committed to OpenCL, part of that is on Apple's end. They've been flaky about both that and OpenGL over the past several years. For me it would come down to what is usable today in a given application.
Apparently neither AMD nor NVidia are to blame for Apple's lack of attention with respect to OpenGL or OpenCL. I'm not sure if flaky is the word here; neglectful is probably a better term. Especially in the case of OpenGL they have really smudged their image with respect to the technical community. When open source Linux solutions do better or are more complete, you have issues that are pretty severe for a company Apple's size.
    Anyway NVidia is even favored in scientific areas. If you take a look at something like top500 you'll see NVidia based clusters, Xeon Phis, Xeon EP 2600s, and opterons. AMD's opterons tend to be represented at times toward the top of that list. Their gpus are never there.
Probably because NVidia pursued that market very aggressively. The other thing here is that it is only recently, with its Southern Islands GPU cores, that AMD really had the ability to even support GPU compute well.
    The overall success of this machine is going to come down to the state of software support both from Apple and third parties after the first 6 months. They're no longer selling extra space inside a box, so even the entry level model has to justify its place through performance. If we're talking about the equivalent of a pair of W5000s and a quad Xeon, it won't be very interesting.
    I'm still expecting good, better and best. It is hard to tell what the specs posted represent in that triptych.

However I still don't buy this idea that only maximum power is of interest in the marketplace. For example, dual W5000s may not interest you that much, but I can see many an engineer being very happy to have one on their desk. I'm still trying to fathom all of the "up to" qualifiers in Apple's postings. For example, do they intend to sell the same GPU with different RAM configurations or offer choices of different GPUs?

As to software support I think you are right in that regard. People blame Apple's slide in the professional market on the Mac Pro, but I'm going out on a limb here to say that there is a lot more to it than that. Apple's tendency to neglect important elements of the pie, in this case OpenGL, tends to undermine acceptance. The other thing is that Apple does a great job with Xcode and the three C's, but rapidly drops support intensity for other languages important to professionals. Sure they include things like Python with their OS, but include is a far cry from optimized support. It almost seems like Python is looked down upon at Apple. However, good scripting languages are important to a very wide range of professionals, so Apple needs to get on the ball, so to speak. Frankly, support for Fortran wouldn't hurt either. Yes it is old and crusty, but Fortran support could ease a lot of porting jobs. Obviously we are talking UNIX here so it isn't all that bad; I just think Apple needs to make something like Python a first-class Mac solution. It is just another cog in the machine for professionals, but an important one.
  • Reply 498 of 1320

    Quote:


As to software support I think you are right in that regard. People blame Apple's slide in the professional market on the Mac Pro, but I'm going out on a limb here to say that there is a lot more to it than that. Apple's tendency to neglect important elements of the pie, in this case OpenGL, tends to undermine acceptance.



     


OpenGL.


     


    I'm glad Apple finally seems to be taking the format they support seriously with Mavericks.


     


    4.1?  Full support.  4.2 in play...and 4.3 pending?


     


    Whispers from around the net hint at graphical/system response being much faster.


     


    Hopefully this will mean gpu performance vs PC gpu performance will finally see parity????


     


Not the 50-100% less (which always struck me as ridiculous...to say that Apple supported the only equivalent standard to DirectX on the Mac side...)


     


    Do they only have one guy doing their GL drivers?


     


With OpenGL 4.1, Mavericks, the new Mac Pro...and the new Pro apps from 3rd parties...


     


    ...far from the much feared death of the Pro...we see a renaissance?


     


    Lemon Bon Bon.

  • Reply 499 of 1320
hmm Posts: 3,405 member

    Quote:

Originally Posted by wizard69







Frankly I'm not sure what NVidia's GPUs brought to the table; I wasn't exactly happy to see NVidia return to Mac hardware. The performance and power profile is so close to AMD's hardware that it makes you wonder why Apple troubled themselves.


    Yes I've gathered that. I've followed both for a very long time. NVidia overall has been really aggressive in expanding their market as you pointed out. I don't see that as a bad thing. I think it was only logical that they tuned things for their own hardware. They wanted to squeeze as much performance and stability as possible out of it. I see no motivation for such a company to do the major R&D work to get an open standard off the ground. They don't sell as many, but the gaming cards tend to absorb a lot of development costs.


     


    Quote:


    This is true, but from a customer perspective the constant flip flopping isn't cool!



I just view it as: if your software relies on CUDA, you probably stick with hardware that supports it when purchasing new machines, unless the software changes to accommodate similar functionality with OpenCL and AMD drivers.


     


     


    Quote:


Nobody is denying this. AMD has suffered from a tools issue in the past, though that has been under continual improvement. From a developer's perspective it is far better to use OpenCL on NVidia hardware than CUDA, simply because you aren't married to the hardware.



    I get why you're saying that. My point was more that on a typical 3-4 year replacement cycle, if the software supports CUDA today, it's unlikely to dry up within that time. If you're a developer, it's entirely different.


     


    Quote:


Apparently neither AMD nor NVidia are to blame for Apple's lack of attention with respect to OpenGL or OpenCL. I'm not sure if flaky is the word here; neglectful is probably a better term. Especially in the case of OpenGL they have really smudged their image with respect to the technical community. When open source Linux solutions do better or are more complete, you have issues that are pretty severe for a company Apple's size.

Probably because NVidia pursued that market very aggressively. The other thing here is that it is only recently, with its Southern Islands GPU cores, that AMD really had the ability to even support GPU compute well.

    I'm still expecting good, better and best. It is hard to tell what the specs posted represent in that triptych.



     


I have mentioned that. It's annoying to me, and I'm not entirely sure whether it's an issue of growing pains, sourcing staff, or whatever else. Apple certainly has the money to hire people, but it's rarely a simple issue when it comes to bringing something that has fallen behind back up to speed. You mention recent AMD support here. We still haven't seen that on a shipping product in the wild. The demo looked cool, and I can appreciate the demanding software they ran on it. I'm still interested in the overall results when it comes to a broader set of applications and real world testing, as well as the updated alignment of price to performance. It loses some of the prior functionality that I mentioned. I liked the extra bays, which don't have to deal with additional hardware layers: there's no need to deal with host cards unless you want something like SAS out, and no concerns about glitchy port multiplier firmware. From a practical standpoint, it's likely to be less customizable than prior models. I was going to throw in a 4K output reference, but I noticed it's supported by some of the aftermarket cards for the current model.


     


     


    Quote:


     


However I still don't buy this idea that only maximum power is of interest in the marketplace. For example, dual W5000s may not interest you that much, but I can see many an engineer being very happy to have one on their desk. I'm still trying to fathom all of the "up to" qualifiers in Apple's postings. For example, do they intend to sell the same GPU with different RAM configurations or offer choices of different GPUs?


     




     


I wasn't so much saying maximum power. The W5000s are based on a low end card, lower than the 5770s. The drivers and possibly firmware are obviously different. RAM might be ECC. It's not that great of a card overall, in spite of its price and marketing. I guess it depends on where these machines start. I am fully expecting some price updates from AMD by the time these machines ship. It isn't realistic to me to assume that a Mac Pro is going to cram in $6k worth of GPUs. That would be a higher hardware cost than they have ever addressed. I'm sure there's a market, just not a sustainable one, as configurations that cost that much are mostly available through CTO options from other OEMs. Even on the W9000s, they're still at launch pricing currently. AMD and NVidia both tend to adjust mid-cycle on longer cycles rather than rebrand workstation GPUs with adjusted clock rates, and workstation cards show up after gaming cards. That means these have to last at least another year, but I don't think they'll stay at $3000 retail that entire time.


     


    Quote:


As to software support I think you are right in that regard. People blame Apple's slide in the professional market on the Mac Pro, but I'm going out on a limb here to say that there is a lot more to it than that. Apple's tendency to neglect important elements of the pie, in this case OpenGL, tends to undermine acceptance. The other thing is that Apple does a great job with Xcode and the three C's, but rapidly drops support intensity for other languages important to professionals. Sure they include things like Python with their OS, but include is a far cry from optimized support. It almost seems like Python is looked down upon at Apple. However, good scripting languages are important to a very wide range of professionals, so Apple needs to get on the ball, so to speak. Frankly, support for Fortran wouldn't hurt either. Yes it is old and crusty, but Fortran support could ease a lot of porting jobs. Obviously we are talking UNIX here so it isn't all that bad; I just think Apple needs to make something like Python a first-class Mac solution. It is just another cog in the machine for professionals, but an important one.


     




    Having used it for a bit now, I like Python's structure, although the way they update it is weird where you have a lot of stuff still on Python 2, and elements of Python 3 backported in the last couple releases. I know absolutely nothing about Fortran, so there's no way I could put together a valid response on that.

  • Reply 500 of 1320
Marvin Posts: 15,435 moderator
    hmm wrote: »
    If we're talking about the equivalent of a pair of W5000s and a quad Xeon, it won't be very interesting.

You'd have said that about pretty much anything they made, though. Even if they'd brought out a standard tower with dual Titan GPUs and the possibility of dual CPUs, they'd have higher margins, so you'd have said you could get the same power cheaper and the enclosure wasn't innovative. It likely would have had issues putting Thunderbolt in, so you'd have said it was disappointing. There's almost nothing they could do that would impress you.
    hmm wrote:
    The W5000s are based on a low end card, lower than the 5770s.

    Which card? As you've seen, it performs at double a 7770 and 5870:

    http://www.tomshardware.com/reviews/workstation-graphics-card-gaming,3425-13.html

    For an entry card, that's decent enough performance, especially when there's two of them. To go from a single 5870 in the max BTO config to 4x a 5870 in the entry config in 1 year (technically 3 years) would be alright.
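The rough arithmetic behind that, taking the thread's figures at face value:

```python
# One W5000 ~ double a 5870 (per the figures cited above), and the entry
# config is said to ship with two of them.
w5000_vs_5870 = 2.0
gpus_in_entry_config = 2

print("entry config vs a single 5870:",
      w5000_vs_5870 * gpus_in_entry_config, "x")   # -> 4.0 x
```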