Tim Cook confirms updated Mac Pro coming in 2013

Comments

  • Reply 161 of 339
    wizard69 Posts: 13,377, member
    Marvin wrote: »
    It's supposed to be 9ns across 7 devices but real-world scenarios may prove otherwise. It certainly should be under 9ns end-to-end across one device though.
    Not very slowly. A GTX 680 and 7970 (the fastest consumer GPUs in the world) run at 73% and 86% respectively in PCIe 1.1 x4:
    http://www.overclock.net/t/1253914/tpu-ivy-bridge-pci-express-scaling-with-hd-7970-and-gtx-680
    That performance drop is pretty much unnoticeable.
    I think saying it is unnoticeable is a bit of a stretch.
    Knights Corner can use more general purpose code though.
    This is huge, and I think many underestimate it. Knights Corner (KC) should be able to accelerate many computational workloads that don't transfer well to the GPU. You might even see KC running more general-purpose threads, though at a lower performance level. The in-order nature of the cores will leave some codes running with Atom-like performance.
    My Cube would have 6 Thunderbolt ports with 16 lanes for the GPU. Although they are lower bandwidth, I'd say 6 TB ports is better than 3x PCI slots. Some cards will prevent you from even using all 3 slots.
    I don't agree with this 100%, mostly because external devices add more problems than internal ones. Even so, you are only talking 28 lanes out of the 40 that Sandy Bridge E supports, and that assumes they will use Sandy Bridge E and current chipsets. Apple could easily add a few 4-lane slots and do OK.
    I think the Knights Corner (or more likely Knights Landing if late 2013) would be on the motherboard but Apple put the iMac GPU in a slot so I figure they'd do the same with the Mac Pro. Either way is good though.
    Maybe I'm a bit optimistic, but I expect the new machine in January, right after Knights Corner launches. As to configuration, that is an interesting question. I suspect software developers would prefer a known base system.
    I don't see there being a huge bottleneck. Thunderbolt is a multi-protocol connection. If you need channel bonding, it can be implemented just like it is on a fibre channel PCI board. Thunderbolt is external PCI. Whatever you use internal PCI for can be used for external PCI. Here is a 10Gbps ethernet Thunderbolt box with link aggregation:
    http://attotech.com/products/product.php?scat=32&sku=TLNT-1102-D00
    The problem with this discussion is that you can prove that any cluster interconnect is better than any other simply by running less-than-optimal software on the competing system. The real question is how well the interconnects work with the apps Apple wants to promote on the machine.
    Nah, the xMac was always the cheaper i7 box that could never happen.
    Actually I think the chances of an xMac happening increase if this Mac Pro replacement is real. It appears that Apple intentionally castrated the Mini so that the gap between the Pro and the Mini is huge. With Knights Corner, and possibly the supercomputer chip that Intel is rumored to be working on, no i7 would come close to this level of performance.
    This is rethinking what a workstation-class machine should represent. It shouldn't be hacking together ugly PCI cards and only having Xeons that take forever to improve in performance.
    Actually Apple ends up tied to Intel even tighter with this sort of box.
    6 Thunderbolt ports = lots of ports for expansion if you need it and/or up to 6 displays.
    powerful GPU that is the best in class and well-supported
    Knights Landing - 1.5-2 TFLOPs of double precision semi-general purpose computing for high resource workloads like video encoding/decoding and rendering
    KC is still advantaged by code that fits its instruction set well. However, nobody should underestimate just how important that ability to run more general code is.
    6-10 core Xeon for reliable general purpose tasks that can't be or don't need to be accelerated by the co-processor.
    Affordable - $2999.
    Scalable - just hook more together. No matter if Intel screw up their rollout again, the performance can be scaled linearly.

    The big problem I see is that the initial prototypes of the chips burned a lot of power; we are talking 300+ watts. Hopefully KC corrects that issue.
  • Reply 162 of 339
    wizard69 Posts: 13,377, member
    As a side note, one of the reasons I brought up KC is that I really do think Apple screwed up with the micro Mac Pro update. To pass that off as an update without an explanation as to what and why is just stupid. I really don't blame people for being pissed off, as the behavior around this release is insulting. It is insulting to all desktop users, not just Mac Pro users.

    It is only after a bit of rest and thought that one can consider what Apple is up to. After all, they have some of the smartest people in the world working there. The reality is that Sandy Bridge E wouldn't provide the speed-up that would set a new machine apart from the machines of the past. Further, GPU acceleration is far too specialized for many demanding apps. Looking around at new technologies, Knights Corner and Intel's other supercomputing efforts are the only path to a significantly more powerful Mac Pro replacement.

    Maybe I'm wacko but it really seems like timing is just about perfect for this to happen. In the end arguing about what box it comes in serves little purpose. It is the new generation of performance that will make the box a hot item.
  • Reply 163 of 339
    hmm Posts: 3,405, member

    Quote:

    Originally Posted by wizard69 View Post

    Apple did make a huge mistake here!

    However it appears that they recognize that they screwed up, thus the "leaks" about a new Mac Pro coming in 2013. If the advance is as strong as rumored, it might not be a bad idea to put off the platform change.

    The third alternative is Linux.

    In any event I'd suggest not leaving the fold too quickly. It appears that Apple realizes that they screwed up with this micro update to the Mac Pro. At this point their only choice is to ride out the hostility until they get the next-generation hardware on the market. Speaking of which, I'm rather put off by their excessive focus on the laptop segment with little effort put into innovating on the desktop. Apple's desktop lineup right now is pure crap from the Mini on up; it is no wonder sales suck.

    It's not just a failure to innovate. All they did was update pricing to reflect changes in CPU pricing and switch a couple of CPU models around. The 8-core became a 12-core for $300 more. This is a pretty far cry from getting Sandy Bridge E, updated GPUs, and updated features. Also, I like Linux. If I could get everything running well under Debian or Fedora, I would never boot into OS X again.

    I only typed it like that because it sounds funny. I'm a little annoyed by things like Facebook integration under Mountain Lion. I'd like a lean OS with good stability. The priorities at the top of Apple's list are not really my own. "Runs cooler" would motivate me more than "computer is .25" thinner than the previous generation." I think I've got my upgrades figured out for the year anyway. They do not include a RMBP or the rehashed Nehalem towers. Even if their notebook updates really look good, Haswell is hyped as a major update anyway. If we're talking about more of a Merom-like shift rather than a Sandy Bridge one, I'm all for it.

  • Reply 164 of 339
    nht Posts: 4,522, member

    Quote:

    Originally Posted by Marvin View Post

    It's supposed to be 9ns across 7 devices but real-world scenarios may prove otherwise. It certainly should be under 9ns end-to-end across one device though.

    Not very slowly. A GTX 680 and 7970 (the fastest consumer GPUs in the world) run at 73% and 86% respectively in PCIe 1.1 x4:

    http://www.overclock.net/t/1253914/tpu-ivy-bridge-pci-express-scaling-with-hd-7970-and-gtx-680

    That performance drop is pretty much unnoticeable.

    If you are gaming, yes. For GPU compute the differences are probably more than 1%.

    "Simply enabling PCIe 3.0 on our EVGA X79 SLI motherboard (EVGA provided us with a BIOS that allowed us to toggle PCIe 3.0 mode on/off) resulted in a 9% increase in performance on the Radeon HD 7970. This tells us two things: 1) You can indeed get PCIe 3.0 working on SNB-E/X79, at least with a Radeon HD 7970, and 2) PCIe 3.0 will likely be useful for GPU compute applications, although not so much for gaming anytime soon."

    http://www.anandtech.com/show/5264/sandy-bridge-e-x79-pcie-30-it-works

    Mostly I look at this from a GPU compute perspective because that's our greatest use for them.

    From a gaming perspective there's a limit to the amount of textures and data pushed across, so I can see why going from x4 to x16 isn't that big a change. What you also need to show is that for GPU compute activities the difference is still only 1%: everything from CS6 GPU acceleration to scientific CUDA code.

    Quote:

    My Cube would have 6 Thunderbolt ports with 16 lanes for the GPU. Although they are lower bandwidth, I'd say 6 TB ports is better than 3x PCI slots. Some cards will prevent you from even using all 3 slots.

    Except when you have a card that wants more than 10 Gbps bandwidth. A single PCIe 3.0 x8 slot has 8 GB/sec bandwidth...that's more than all 6 TB ports together.
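    (A quick sanity check of that arithmetic, as a minimal sketch in plain C. The per-lane and per-port figures are the usual raw signaling rates, before protocol overhead, so treat the output as rough numbers.)

    ```c
    #include <stdio.h>

    int main(void) {
        /* PCIe 3.0: roughly 1 GB/s per lane per direction (128b/130b) */
        double pcie3_x8 = 8 * 1.0;             /* one x8 slot: 8 GB/s     */

        /* First-generation Thunderbolt: 10 Gbit/s per port = 1.25 GB/s */
        double six_tb_ports = 6 * (10.0 / 8);  /* all six ports: 7.5 GB/s */

        printf("PCIe 3.0 x8 slot: %.2f GB/s\n", pcie3_x8);
        printf("6x TB ports     : %.2f GB/s\n", six_tb_ports);
        return 0;
    }
    ```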

    Quote:

    I think the Knights Corner (or more likely Knights Landing if late 2013) would be on the motherboard but Apple put the iMac GPU in a slot so I figure they'd do the same with the Mac Pro. Either way is good though.

    Neither the GTX 680 nor the Radeon HD 7970 are small GPUs that could be put on the motherboard or an iMac-like mezzanine card. And again, not everyone will want a Knights-anything on the MB. Their specific needs may call for a Tesla card or something else. The Teslas we have carry 6GB of RAM aboard and are fairly massive.

    This isn't a consumer machine. There are users that need specific GPUs and compute engines for their work, so PCIe slots are much better than integrating it all into the box.

    Quote:

    Nah, the xMac was always the cheaper i7 box that could never happen. This is rethinking what a workstation-class machine should represent. It shouldn't be hacking together ugly PCI cards and only having Xeons that take forever to improve in performance.

    Ugly PCI cards? When do you see them? Hacking together? In what sense? Xeons + ECC is what workstations are.

    Quote:

    6 Thunderbolt ports = lots of ports for expansion if you need it and/or up to 6 displays.
    powerful GPU that is the best in class and well-supported

    Best in class for whom? Gamers? Video pros? CAD users? What? Mostly the gaming GPUs have better raw performance, but the Quadros have the stability desired, even at the cost of performance.

    Quote:

    Knights Landing - 1.5-2 TFLOPs of double precision semi-general purpose computing for high resource workloads like video encoding/decoding and rendering
    6-10 core Xeon for reliable general purpose tasks that can't be or don't need to be accelerated by the co-processor.
    Affordable - $2999.
    Scalable - just hook more together. No matter if Intel screw up their rollout again, the performance can be scaled linearly.

    We haven't done any MIC coding yet that I'm aware of. Maybe it is as simple as Intel and the YouTube demos suggest. Something tells me probably not quite. My impression is that you're still going to need to code against the 512-bit SIMD...kinda like uber-SSE coding. And I want to see something beyond CERN's embarrassingly parallel example.
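    (For readers wondering what "coding against the 512-bit SIMD" looks like in practice, here is a minimal sketch. The loop is generic C, not actual MIC intrinsics; the 16-float width is simply 512 bits divided by 32.)

    ```c
    #include <stddef.h>

    /* Knights Corner's vector unit is 512 bits wide: 16 single-precision
       floats per vector. Loops only approach peak throughput when they map
       cleanly onto those 16-wide vectors (unit stride, no branches,
       independent iterations), the "uber SSE" style described above. */
    void saxpy(float a, const float *restrict x, float *restrict y, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];  /* compiler can emit 16-wide multiply-adds */
    }
    ```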

    Won't be long; we're smack in the middle of Intel's target market for MICs. But it's very niche. I see a completely different set of needs in the more general pro market.

  • Reply 165 of 339
    wizard69 Posts: 13,377, member
    hmm wrote: »

    It's not just a failure to innovate. All they did was update pricing to reflect changes in CPU pricing and switch a couple of CPU models around. The 8-core became a 12-core for $300 more. This is a pretty far cry from getting Sandy Bridge E, updated GPUs, and updated features. Also, I like Linux. If I could get everything running well under Debian or Fedora, I would never boot into OS X again.
    Sandy Bridge E, the processor anyway, isn't a big deal; it is all the support functions around it that would matter. SATA and PCI Express bumps would have made a huge difference.

    As to Linux, I do like it, but it is also frustrating in that much of the software on that platform is trying. As you note, getting things running can still be a hassle.

    I only typed it like that because it sounds funny. I'm a little annoyed by things like Facebook integration under Mountain Lion. I'd like a lean OS with good stability. The priorities at the top of Apple's list are not really my own.
    Honestly that doesn't bother me one bit. Other changes, like auto save, have really pissed me off, yet at times I'm happy they are there. Beyond that, I like that they keep the Unix subsystem fresh without constant minute changes happening.
    "Runs cooler" would motivate me more than "computer is .25" thinner than the previous generation." I think I've got my upgrades figured out for the year anyway. They do not include a RMBP or the rehashed Nehalem towers.
    I'm actually hoping to get through the year without buying any major Apple hardware. I will admit that I'm tempted by that new RMBP. Unless they seriously rethink the Mini, though, it won't even come close to the RMBP performance-wise.
    Even if their notebook updates really look good, Haswell is hyped as a major update anyway. If we're talking about more of a Merom-like shift rather than a Sandy Bridge one, I'm all for it.

    Well, only if Intel keeps its promises. My attempt to discipline myself and avoid buying a Mac this year is just me trying to prioritize things in life a bit. Haswell isn't a goal, but it obviously could be a very big win for these machines.
  • Reply 166 of 339
    wizard69 Posts: 13,377, member
    nht wrote: »
    If you are gaming, yes. For GPU compute the differences are probably more than 1%.



    Mostly I look at this from a GPU compute perspective because that's our greatest use for them.
    GPU compute would lose more performance than graphics acceleration over TB. Generally there is lots of data to move around. Even with today's systems, data movement can swamp the performance advantage of a GPU.
    From a gaming perspective there's a limit to the amount of textures and data pushed across, so I can see why going from x4 to x16 isn't that big a change. What you also need to show is that for GPU compute activities the difference is still only 1%: everything from CS6 GPU acceleration to scientific CUDA code.

    It should be worse. Of course that depends upon the exact type of compute activity going on. But in general every compute operation involves sending data to and receiving data from the GPU.

    Except when you have a card that wants more than 10 Gbps bandwidth.  A single PCIe 3.0 x8 slot has 8 GB/sec bandwidth...that's more than all 6 TB ports together.
    Even a 4x slot would have more bandwidth.


    Neither the GTX 680 nor the Radeon HD 7970 are small GPUs that could be put on the motherboard or an iMac-like mezzanine card. And again, not everyone will want a Knights-anything on the MB. Their specific needs may call for a Tesla card or something else. The Teslas we have carry 6GB of RAM aboard and are fairly massive.
    Maybe the video card goes into a slot. I just don't see that happening unless Apple and Intel come up with a new board standard that routes the video lines for TB.

    As to Knights Corner, I see its incorporation onto the motherboard as a requirement. Developers need the certainty that the accelerator will be there.
    This isn't a consumer machine.  There are users that need specific GPUs and compute engines for their work so PCIe slots are much better than integrating it all into the box.
    I see a machine with slots, but maybe not with the power some of those accelerators need.



    Ugly PCI cards?  When do you see them?  Hacking together?  In what sense?  Xeons + ECC is what workstations are.



    Best in class for whom?  Gamers?  Video pros?  CAD users?  What?  Mostly the gaming GPUs have better performance but the Quadros the stability desired even at the cost of performance.


    We haven't done any MIC coding yet that I'm aware of. Maybe it is as simple as Intel and the YouTube demos suggest. Something tells me probably not quite. My impression is that you're still going to need to code against the 512-bit SIMD...kinda like uber-SSE coding. And I want to see something beyond CERN's embarrassingly parallel example.
    Used improperly, the CPUs in KC will look pretty bad.
    Won't be long; we're smack in the middle of Intel's target market for MICs. But it's very niche. I see a completely different set of needs in the more general pro market.

    Actually I see the opposite. KC offers a way to accelerate many apps that is very approachable.
  • Reply 167 of 339
    hmm Posts: 3,405, member

    Quote:

    Originally Posted by wizard69 View Post

    Well, only if Intel keeps its promises. My attempt to discipline myself and avoid buying a Mac this year is just me trying to prioritize things in life a bit. Haswell isn't a goal, but it obviously could be a very big win for these machines.

    These quotes never paste correctly anymore. Anyway, Sandy Bridge E isn't even a bad update on the CPUs. It's not revolutionary, but neither is Ivy Bridge E. Both are incremental, and they use the same chipset. Assuming CPUs of equal cost, you'd get way more on that $3800 machine than you do now. You could have a pretty sweet 12-core that's around the performance of the prior top model. You could also have a significantly faster 6-core around the $3k-ish realm. Add in USB 3, a SATA update, PCIe 3.0, etc. like you mentioned and it's a big step up. Every part of that is significant, including the CPUs, as they exist in a realm where you pay a premium for diminishing returns, so you can bring some incredible performance to the $3k realm compared to $5-7k. That is a big win. I'm really disappointed in their update. Considering that I only work from my own machine a portion of the time, I'm somewhat budget sensitive.

  • Reply 168 of 339
    Marvin Posts: 15,326, moderator
    wizard69 wrote:
    I think saying it is unnoticeable is a bit of a stretch.

    Even performance as low as 75% means that a game on maximum enthusiast settings would run at 26fps vs 35fps (the 7970 ran at 86% = 30fps). High-end cards can run games with everything on. Just tweak a few settings and the experience will be the same, making up the small 5-10 fps difference.
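    (Those frame rates are just the scaling percentages applied to a 35 fps baseline; the baseline is an illustrative figure, not a benchmark. A trivial check in C:)

    ```c
    #include <stdio.h>

    int main(void) {
        double baseline = 35.0;  /* fps at full x16 bandwidth (example value) */
        printf("at 75%%: %.0f fps\n", baseline * 0.75);  /* ~26 fps */
        printf("at 86%%: %.0f fps\n", baseline * 0.86);  /* ~30 fps */
        return 0;
    }
    ```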
    nht wrote:
    If you are gaming, yes. For GPU compute the differences are probably more than 1%.

    It should be almost zero difference for compute as reported on the following thread but it depends how it's done:

    http://setiathome.berkeley.edu/forum_thread.php?id=65035

    GPUs have their own memory so you can just run the code and data from there if there's enough memory. But this isn't something you'd be doing on a standard Mac Pro anyway. It has a 300W PCI power limit so you only get one high-end GPU and a single high-end GPU can go in the Cube.
    Except when you have a card that wants more than 10 Gbps bandwidth.

    Which card has such a desire?
    Ugly PCI cards?  When do you see them?  Hacking together?  In what sense?

    Manufacturers have to design all their wide variety of boards to a certain form factor. A giant dual-GPU has to be designed to fit in the same slot as a small fibre-channel card. Some cards have no reason to be wasting a full slot but you have no other choice. Some PCI adaptor cards are nothing more than a connector to a large external box. Again, wasting a full slot.

    Thunderbolt means no peripheral wastes more than it needs to. Manufacturers who want to build an I/O adaptor with a single cable can do so in a small size (e.g. the GigE and FW800 adaptors from Apple). Manufacturers who want to build specialized equipment for video can actually make a product fit for purpose with all the right inputs. If they need 10 different I/O ports, they're not going to fit those on a single PCI card. They'll split them and block multiple PCI slots; it's an ugly hack.

    Even high-resource users shouldn't be expected to open up their machines and slot in raw boards. Everything that's not a core component should be plug and play.
  • Reply 169 of 339
    wizard69 Posts: 13,377, member
    Marvin wrote: »
    Even performance as low as 75% means that a game on maximum enthusiast settings would run at 26fps vs 35fps (the 7970 ran at 86% = 30fps). High-end cards can run games with everything on. Just tweak a few settings and the experience will be the same, making up the small 5-10 fps difference.
    Honestly Marvin, you amaze me. Your statements make about as much sense as www.anandtech.com when they fiddle with video settings for Intel integrated graphics until they can declare how good they are. To hell with the readers that actually want good video performance; just make sure it looks good so the "marketing" goodies continue to flow. I doubt you are getting any marketing goodies from Intel, so I'm not sure why you would propose tweaking the settings until the frame rates are good enough.
    It should be almost zero difference for compute as reported on the following thread but it depends how it's done:
    http://setiathome.berkeley.edu/forum_thread.php?id=65035
    Not so much how it is done but rather the problem set in question. SETI at Home may be far more optimal than other problem sets where moving data back and forth becomes a real issue.
    GPUs have their own memory so you can just run the code and data from there if there's enough memory. But this isn't something you'd be doing on a standard Mac Pro anyway. It has a 300W PCI power limit so you only get one high-end GPU and a single high-end GPU can go in the Cube.
    I disagree; you can get a lot of computational power into a 300-watt video card these days.
    Except when you have a card that wants more than 10 Gbps bandwidth.
    Which card has such a desire?
    Who knows? With current drivers and hardware, today's video cards barely benefit from PCI Express 3 on sixteen-lane slots. But then again, that is still much faster than 10 Gbps.

    There are probably a number of cards that can make use of more than 10 Gbps. I don't have a list, but that doesn't mean they don't exist.
    Ugly PCI cards?  When do you see them?  Hacking together?  In what sense?
    Manufacturers have to design all their wide variety of boards to a certain form factor. A giant dual-GPU has to be designed to fit in the same slot as a small fibre-channel card. Some cards have no reason to be wasting a full slot but you have no other choice. Some PCI adaptor cards are nothing more than a connector to a large external box. Again, wasting a full slot.
    None of this has in any way discounted the need for slots. TB simply isn't fast enough no matter how much you try to twist facts to support it. Realize, though, that I'm right with you hoping to see more TB ports in the Mac Pro replacement. It is just that I accept the limitations of those ports, which means in my mind the Pro's replacement needs real PCI Express slots. Preferably fast slots with 8 or 16 lanes.
    Thunderbolt means no peripheral wastes more than it needs to. Manufacturers who want to build an I/O adaptor with a single cable can do so in a small size (e.g GigE and FW800 adaptors from Apple).
    It also means an end to clean and uncluttered installations. Really, where do you expect all of those dongles, expansion boxes and power supplies will go? TB has its place, but I just don't see a huge advantage in pro installations, and I see many disadvantages.
    Manufacturers who want to build specialized equipment for video can actually make a product fit for purpose with all the right inputs. If they need 10 different I/O ports, they're not going to fit those on a single PCI card. They'll split them and block multiple PCI slots, it's an ugly hack.
    Or they run a single cable to a breakout box.
    Even high-resource users shouldn't be expected to open up their machines and slot in raw boards. Everything that's not a core component should be plug and play.

    Garbage! The higher your resource usage, the greater the need to get to the faster bus that a 16-lane PCI Express slot offers. We are talking pro users here, not some Joe off the street without a clue.
  • Reply 170 of 339
    Marvin Posts: 15,326, moderator
    wizard69 wrote:
    To hell with the readers that actually want good video performance; just make sure it looks good so the "marketing" goodies continue to flow. I doubt you are getting any marketing goodies from Intel, so I'm not sure why you would propose tweaking the settings until the frame rates are good enough.

    I'm not talking about Intel's IGP where the competition can be 50-100% faster. This performance drop is minimal. There will be an internal GPU anyway so almost no one will need to have this setup. But if they wanted to, it would work.
    SETI at Home may be far more optimal than other problem sets where moving data back and forth becomes a real issue.

    And it may not. I've shown where it's not a problem for computation, you show me where it is.
    There are probably a number of cards that can make use of more than 10 Gbps. I don't have a list, but that doesn't mean they don't exist.

    Building a list should be easy though and yet no one seems to have done this.
    None of this has in any way discounted the need for slots. TB simply isn't fast enough no matter how much you try to twist facts to support it.

    TB isn't as fast as a PCI slot, whether it's fast enough is debatable but optical connections are certainly fast enough:

    http://www.theverge.com/2012/3/8/2854040/ibm-holey-optochip-optical-chipset

    That discounts the need for slots.
    It also means an end to clean and uncluttered installations. Really where do you expect all of those dongles, expansion boxes and power supplies will go?

    How many peripherals do you expect people will have? The vast majority will have none.

    The people who currently use a Mac Pro have 3 slots free. A single Magma box with a power supply covers this.
    The higher your resource usage the greater the need to get to the faster bus that a 16 lane PCI Express slot offers. We are talking Pro users here, not some Joe off the street without a clue.

    So, professional video editors and graphics artists should be expected to know how to install PCI cards inside a workstation? Presumably they should be expected to know how to service an Arri Alexa or a high-end tape deck? I don't think they should. I think they should know how to do their job and if they need a peripheral, it should plug in like any other peripheral.
  • Reply 171 of 339
    hmm Posts: 3,405, member

    Quote:

    Originally Posted by Marvin View Post

    TB isn't as fast as a PCI slot, whether it's fast enough is debatable but optical connections are certainly fast enough:

    http://www.theverge.com/2012/3/8/2854040/ibm-holey-optochip-optical-chipset

    That discounts the need for slots.

    We aren't seeing true optical Thunderbolt in the current year. The cabling was more to work around length restrictions; I'd have to find the details on it. How is a prototype design from IBM, likely aimed at server/enterprise markets, relevant to a 2012-2013 machine in the four-figure realm?


    Quote:

    Originally Posted by Marvin View Post

    How many peripherals do you expect people will have? The vast majority will have none.

    The people who currently use a Mac Pro have 3 slots free. A single Magma box with a power supply covers this.

    Quote:

    The higher your resource usage the greater the need to get to the faster bus that a 16 lane PCI Express slot offers. We are talking Pro users here, not some Joe off the street without a clue.


    So, professional video editors and graphics artists should be expected to know how to install PCI cards inside a workstation? Presumably they should be expected to know how to service an Arri Alexa or a high-end tape deck? I don't think they should. I think they should know how to do their job and if they need a peripheral, it should plug in like any other peripheral.


    Take a look at the cost when you replace a previously standard feature with a niche solution that must be supported through all of Apple's random changes in requirements. Lion actually dumped quite a lot of devices when they dropped Carbon support. Given the cost of rewriting drivers, it may not have been acceptable anywhere the margins were fairly lean. This has also been a problem with USB 3 drivers. If they could rely on Apple for the driver stack, you would have cheap cards available with reasonable performance. Adding in driver development is costly given the sometimes razor-thin margins on such cards.

    Marvin, while the tape deck might seem daunting given a potential lack of familiarity, installing a PCI card is not complicated, and it requires very little technical inclination. If your job requires you to know how to service the other items, then you learn them or budget the cost of contracting someone who does know them. I worry more about the people who ask me why RAID 1 isn't a combination of storage + backup *bangs head on desk*. Anyway, I'm still not of the opinion that a technology in its infancy is a valid replacement for stable technology today; if such things become popular enough, that may change. Regarding cabling messes, you'd laugh if you saw mine. I really need to clean it up.

  • Reply 172 of 339
    macronin Posts: 1,174, member

    Quote:

    Originally Posted by Marvin View Post

    The people who currently use a Mac Pro have 3 slots free. A single Magma box with a power supply covers this.

    So, professional video editors and graphics artists should be expected to know how to install PCI cards inside a workstation? Presumably they should be expected to know how to service an Arri Alexa or a high-end tape deck? I don't think they should. I think they should know how to do their job and if they need a peripheral, it should plug in like any other peripheral.

    So you are saying that pro video editors & graphic artists should not have to know how to install a PCI card, but before that you imply that they should know how to use a Magma expansion chassis (which would require them to know how to install PCI cards into it)…?!?

  • Reply 173 of 339
    Marvin Posts: 15,326, moderator
    hmm wrote: »
    We aren't seeing a true optical thunderbolt in the current year. The cabling was more to work around length restrictions. I'd have to find  the details on it. How is a prototype design from IBM that is likely aimed at server/enterprise markets relevant to a 2012-2013 machine in the four figure realm?

    I was talking about the future of interconnects discounting the need for slots entirely. I think they are aiming for 20Gbps in 2014. TB as it stands right now will still cover most (I would say all) usage cases of PCI.
    macronin wrote:
    So you are saying that pro video editors & graphic artists should not have to know how to install a PCI card, but before that you imply that they should know how to use a Magma expansion chassis (which would require them to know how to instal PCI cards into it)…?!?

    I don't like the Magma box particularly, it's expensive and you have 3 cards on one TB port. That was an example for people who don't like the idea of cable clutter. A good example would be the Decklink vs UltraStudio:

    http://www.blackmagic-design.com/products/decklinkhdextreme/
    http://www.blackmagic-design.com/products/ultrastudio3d/

    $995 each, but the PCI version is going to block two PCI slots with the HDMI bracket, and all the I/O ports are at the back of the workstation. The TB version is not only much better designed and more easily accessible, but it is plug and play.

    No need to check if cards will overload the power limit of the ports, no allocating PCI lanes, no shutting down the computer to install. I think it's a much more elegant solution. Plug and play and run-time loading should help with drivers too - no specific boot-time requirements.
  • Reply 174 of 339
    nht Posts: 4,522, member

    Quote:

    Originally Posted by Marvin View Post

    I'm not talking about Intel's IGP where the competition can be 50-100% faster. This performance drop is minimal. There will be an internal GPU anyway so almost no one will need to have this setup. But if they wanted to, it would work.

    10% is not minimal. And given that folks leverage memory access via pinned memory in existing CUDA code for performance, the ability to leverage PCIe 3.0's latency improvement is significant: 25% over PCIe 2.0.

    An integrated GPU is sub-optimal for many workstation users and a wasted cost. If it were on-die, that would be one thing.

    Quote:

    Quote:
    SETI at Home may be far more optimal than other problem sets where moving data back and forth becomes a real issue.

    And it may not. I've shown where it's not a problem for computation, you show me where it is.

    I provided a link that indicated that folks expect a 10% degradation; on PCIe 3.0 this difference could be larger. There's a lot of RAM on a Tesla, sure, but you also have to load that RAM for your task with your dataset. That's a memcpy from host to device. Then back again to get the results out.
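    (The round trip nht describes is easy to measure. Below is a minimal timing sketch against the CUDA runtime's C API, compiled with nvcc; the 64 MB buffer size is an arbitrary choice for illustration.)

    ```c
    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void) {
        size_t n = 64u << 20;   /* 64 MB test buffer */
        float ms;
        void *host, *dev;
        cudaEvent_t t0, t1;

        /* Pinned (page-locked) host memory lets the driver DMA directly,
           which is why transfer-heavy CUDA codes use it. */
        cudaMallocHost(&host, n);
        cudaMalloc(&dev, n);
        cudaEventCreate(&t0);
        cudaEventCreate(&t1);

        cudaEventRecord(t0, 0);
        cudaMemcpy(dev, host, n, cudaMemcpyHostToDevice);  /* data in     */
        cudaMemcpy(host, dev, n, cudaMemcpyDeviceToHost);  /* results out */
        cudaEventRecord(t1, 0);
        cudaEventSynchronize(t1);
        cudaEventElapsedTime(&ms, t0, t1);

        /* 2*n bytes moved; on a narrow link (PCIe x4, or PCIe tunneled
           over Thunderbolt) this copy is where the compute penalty lives. */
        printf("round trip: %.1f ms, %.2f GB/s\n",
               ms, (2.0 * n / 1e9) / (ms / 1e3));

        cudaFreeHost(host);
        cudaFree(dev);
        return 0;
    }
    ```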

    Here's another link from Tom's:

    "We’d expect to see GPU computational limits exceed bandwidth limits at 2560x1600, yet the worst-case scenario still shows 23% lower performance for the x16/x4 configuration. Again, the average difference is stuck at 10%."

    http://www.tomshardware.com/reviews/pci-express-scaling-p67-chipset-gaming-performance,2887-10.html

    So going from two slots configured x8/x8 to a x16 slot and a theoretical x4 TB link shows a performance hit anywhere from 10% to 23% when using Crossfire.

    10% is not minimal.

    Quote:

    Quote:
    There are probably a number of cards that can make use of more than 10 Gbps. I don't have a list, but that doesn't mean they don't exist.

    Building a list should be easy though and yet no one seems to have done this.

    Any card listed as needing x8 lanes. I've mentioned that the Rocket is in this category. You can take it up with Red for inflating specs if you like. Show first that none of these x8 cards actually utilize the throughput of 8 lanes.

    Quote:

    Quote:
    None of this has in any way discounted the need for slots. TB simply isn't fast enough no matter how much you try to twist facts to support it.

    TB isn't as fast as a PCI slot, whether it's fast enough is debatable but optical connections are certainly fast enough:
    http://www.theverge.com/2012/3/8/2854040/ibm-holey-optochip-optical-chipset
    That discounts the need for slots.

    Quote:
    It also means an end to clean and uncluttered installations. Really, where do you expect all of those dongles, expansion boxes and power supplies will go?

    How many peripherals do you expect people will have? The vast majority will have none.
    The people who currently use a Mac Pro have 3 slots free. A single Magma box with a power supply covers this.

    The vast majority can use an iMac. Why not just dump the Mac Pro entirely based on that premise? Again, what does your box buy you above either an iMac or a compute-configured Mini without slots?

    Many off-board expansion chassis use interconnects faster than 20Gbps today and measure throughput in gigaBYTES rather than gigaBITS. The Magma box is a great solution for laptops but is not the kind of chassis used for workstations.

    In any case the current benchmarks show that 3 Rockets is optimal because of PCI bandwidth limitations, according to Rob Lohman on the Red team:

    "It works on the Mac up till 3 in the tower (not at full speed, obviously) and we can support more through a PCIe extender. However, I do not expect those scenarios to run at full speed (due to limited PCIe bandwidth available in the towers)."

    http://reduser.net/forum/showthread.php?69425-REDCINE-X-Professional-Update-Build-9-Beta&p=906633&viewfull=1#post906633

    "does rocket x6 get's us 6times faster decodes???? sounds interesting at all.

    It does if your system have the PCIe and overall bandwidth to do that (not a Mac Pro tower)"

    http://reduser.net/forum/showthread.php?69425-REDCINE-X-Professional-Update-Build-9-Beta&p=906187&viewfull=1#post906187

    Here's a benchmark with 3-6 Rockets:

    "If you're like me, it's no secret that the trend of higher shooting ratios is on the rise. With more cameras, more coverage and more footage to process, the need for additional speed may be welcomed by independents and facilities alike. Recently, my friends Torrey Loomis from Silverado Systems and Eric Fiegehen from Cubix helped us out by allowing us to test the latest Cubix XPander (http://www.cubix.com/products/gpu-xp...nt/rackmount-2) with REDCine-X Pro v9. We then tested the Rocket acceleration performance with 1, 2, 3, 4 and 6 Rockets in a MacPro (read and write disks were separate volumes). And thanks to Rob and Matt at RED for their advisement and hard work in optimizing RCX-Pro v9 performance. After some extensive testing, the Cubix XPander is a great tool for people who are looking to increase their REDCine-X Pro transcode performance to be much faster than real-time."

    http://www.reduser.net/forum/showthread.php?71788-6-ROCKETS-and-REDCINE-X-PRO-v9

    This limit is due to the bandwidth constraints of a shared x16 slot; a TB link peaks out far lower. It also indicates that the Red x8 spec is not bullshit and actually uses around 6 lanes' worth of PCIe 2.0 bandwidth. If it only used x4 then it would scale to 4+ cards before being PCIe bandwidth constrained.
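    (A back-of-the-envelope check of that lane math, in plain C; the only inputs are the 16-lane budget and the 3-card saturation point quoted above.)

    ```c
    #include <stdio.h>

    int main(void) {
        /* If 3 Rocket cards saturate a shared x16 budget, each card is
           really using about 16/3 = 5.3 lanes, close to the "around 6
           lanes" estimate. At PCIe 2.0's ~0.5 GB/s per lane that is
           about 2.7 GB/s per card, far beyond a 10 Gbit/s (1.25 GB/s)
           Thunderbolt link. */
        double lanes_per_card = 16.0 / 3.0;
        printf("%.1f lanes/card = %.2f GB/s/card\n",
               lanes_per_card, lanes_per_card * 0.5);
        return 0;
    }
    ```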

    Quote:

    Quote:
    The higher your resource usage the greater the need to get to the faster bus that a 16 lane PCI Express slot offers. We are talking Pro users here, not some Joe off the street without a clue.

    So, professional video editors and graphics artists should be expected to know how to install PCI cards inside a workstation? Presumably they should be expected to know how to service an Arri Alexa or a high-end tape deck? I don't think they should. I think they should know how to do their job and if they need a peripheral, it should plug in like any other peripheral.

    Pro video editors and graphics artists have these guys and gals known as "tech support". Failing that, the installation of a PCI card is a trivial exercise involving opening one panel, loosening two screws, popping out a plate, inserting a card into a notched slot that's impossible to stick in backwards, replacing the plate, tightening two screws, and closing one panel. Finally, if they really don't want to do that, they can pay Best Buy $50 to insert it for them. Kinda expensive, but what the heck; I pay a guy to change my oil, something I used to do for free myself.

  • Reply 175 of 339
    gfeier Posts: 127, member

    I'm perfectly willing to wait another year to replace my 2006 Mac Pro (6GB, Radeon 5770) with a new one, especially if it will use an Apple TV for a monitor. I just hope the old HDs hold up.
  • Reply 176 of 339
    Marvin Posts: 15,326, moderator
    nht wrote:
    10% is not minimal.

    I think it is. If Apple sold you a Mac Pro in 2012 that was 10% faster than the 2011 model, would it be worth upgrading to? Of course not, because it's a minimal difference in performance.
    nht wrote:
    Any card listed as needing x8 lanes. I've mentioned that the Rocket is in this category.

    That's been demonstrated to work just fine in a Thunderbolt box.
    nht wrote:
    In any case the current benchmarks show that 3 rockets is optimal because of PCI bandwidth limitations according to Rob Lohman on the Red Team:

    So someone who works for RED selling $4750 PCI cards recommends you buy at least 3 of them so you can decode the proprietary RED codec in the most optimal way. People outside of RED experimenting with 6 of them have spent nearly $30,000. This is all to work with RED video formats. For that money, you can build a far more cost-effective transcoding solution that works with more than just RED.

    You could for example buy 10 of the little Cubes and hook them all together for at least 60 Xeon cores. I doubt 6 RED cards will do a better job than 60 Xeon cores plus 500 Knights Landing cores.

    It's just more of these fictional scenarios that don't have any relevance to the issue of the Mac Pro's future. The number of people in the world who have $30k to spend on a single workflow can probably be counted on one hand, and a fraction of that money is going to Apple. You will notice that in my scenario, Apple (and Intel, btw) gets the most money, so it is obviously the better option for them.
    nht wrote:
    Pro video editors and graphics artists have these guys and gals known as "tech support".

    Simplicity is the ultimate sophistication and Thunderbolt is the simplest option of all. Put in the plug and go.
  • Reply 177 of 339
    nht Posts: 4,522, member

    Quote:

    Originally Posted by Marvin View Post

    I think it is. If Apple sold you a Mac Pro in 2012 that was 10% faster than the 2011 model, would it be worth upgrading to? Of course not, because it's a minimal difference in performance.

    For certain users, yes. And we're talking about a 10%+ delta that costs less than the external chassis solution TB requires, using slots that already exist in the Mac Pro.

    Quote:

    That's been demonstrated to work just fine in a Thunderbolt box.

    That has been demonstrated to WORK, but not at full speed, in a Thunderbolt box.

    I have no idea why you ask for evidence of something and then disregard data simply because it doesn't support your baseless assertion.

    You show me that there are no cards that use more than x4 bandwidth.

    Quote:

    So someone who works for RED selling $4750 PCI cards recommends you buy at least 3 of them so you can decode the proprietary RED codec in the most optimal way. People outside of RED experimenting with 6 of them have spent nearly $30,000. This is all to work with RED video formats. For that money, you can build a far more cost-effective transcoding solution that works with more than just RED.

    Yes, I'm sure video pros are complete morons being duped by Red. And $30K isn't a lot of money for some pro users.

    Quote:

    You could for example buy 10 of the little Cubes and hook them all together for at least 60 Xeon cores. I doubt 6 RED cards will do a better job than 60 Xeon cores plus 500 Knights Landing cores.

    It's just more of these fictional scenarios that don't have any relevance to the issue of the Mac Pro's future. The number of people in the world who have $30k to spend on a single workflow can probably be counted on one hand, and a fraction of that money is going to Apple. You will notice that in my scenario, Apple (and Intel, btw) gets the most money, so it is obviously the better option for them. Simplicity is the ultimate sophistication and Thunderbolt is the simplest option of all. Put in the plug and go.

    The only thing fictional is your little Cubes at the price point you're dreaming of. And they are likely to remain that way. If using a cluster compute solution was all that, then folks would already be using them beyond render and transcoding farms. I can already buy 2U servers with Xeons and a slot inside. What makes you think what you suggest is in some way unique or new?

    What isn't fictional is the hard fact that quite a few pros need the large amounts of bandwidth provided by PCIe and are hungry for more. $30K? I've spent $30K on a freaking RAM drive for some applications. I've bought fully populated blade servers for a single project as a small compute cluster. The chassis alone, I think, was $25K at the time. Then I put a dozen Xeon blades in there and a few disk packs.

    $30K for Rocket cards? If we need them, we buy them. Heck, we bought a Tesla compute chassis and a bunch of Teslas, and that's a lot more than $30K too. I can't remember how much it cost, but I remember thinking that Dell was making some serious profit off us.

    You vastly overestimate the utility of parallel computing and the value of 60 Xeon cores for certain tasks. Sometimes paying tens of thousands for 10% more single-thread performance is what pros do, because that's the way to get the job done.

    And yes, 6 Rockets will do a better job, because that's what the software is written for.

  • Reply 178 of 339

    Quote:


    It's not just a failure to innovate in any way. All they did was update pricing to reflect changes in cpu pricing and switch a couple cpu models around. The 8 core became a 12 core for $300 more.

    It's not just a failure to innovate. All they did was update pricing to reflect changes in CPU pricing and switch a couple of CPU models around. The 8-core became a 12-core for $300 more.

    I'm still amazed at the crepe Apple pulled with the Mac Pro 'update', and at the audacity (which reflects their current hubris) to call it 'new'. No wonder one of the original Mac evangelists went public and pulled Apple's trousers down over this narcissism.

    That Apple had to remove said 'new' status is unprecedented...

    That Apple had Cook and a spokesperson come out in public and say they had a great Mac Pro coming 'late' 2013 is nothing short of astonishing.

    It's...well...nothing short of farcical.

    ...along with the confusion over the potential iMac update...2013? 'Late' 2013? That's Apple's 'flagship' desktop. And that machine garners the best part of Apple's '1 million' (give or take) in desktop sales.

    It leaves their desktop strategy...in tatters. Shocking. Really, really shocking.

    There hasn't been anything remotely approaching this lethargic or absurd since the PPC clock-speed crisis, when Apple and Moto swapped jibes over the availability of the 500MHz G4. Or when Apple had to offer 'two' G4s because Moto couldn't get their act together.

    Sure, Moto copped the blame back then. But this time? The hubris is all Apple's. They have the RAM, the GPUs, the motherboards, the billions, the mindshare, the bricks-and-mortar stores to print more money by offering a 'sane' desktop range. They have the new CPUs ready to go. They have Thunderbolt. They have a case that hasn't been touched in ten years.

    It's nothing a 2-bit PC (throw the components into a box) supplier couldn't do in a few days or a couple of weeks. The standard parts are out there to put out a far better update. To put out AN update! (Because that 'disgrace' to Apple's performance desktop wasn't it! No way in hell.)

    Is this really the same Apple that put out the 'legendary' blue-and-white G3 at a price/performance delta that would shame the current Mac Pro? The same Apple that used to have an affordable iMac in its range? (But has seen the base price of an iMac DOUBLE in the last ten years?)

    The current Pro is an overpriced joke. Look at the '3 year old' GPUs still on offer. My 'tag line' still reads true all these years later. What's with offering crepe GPUs in a 'workstation' product? Are they for real? Who's in charge of desktops over there?

    Sure, I get that we're moving to laptops and iOS gadgets as real computing for human beings. Sure. We can see the floor move beneath us. However, that '1 million' in desktop sales could be 25%-100% better if you gave people a little more choice...or even gave them updates.

    For Apple to have updated only their laptop line half a year into 2012 is simply madness.

    Apple's desktop strategy is as rigid as it is farcical. You have to pay almost £1000 before you can get a desktop with a monitor. You have to pay over £2000 before you can get one you can 'realistically' upgrade in the full meaning of the word.

    Apple sat on dual-core tech for years while quad core was commonplace on PC desktops. Apple gave the iMac several GPU 'side-grades.' After one such top-of-the-line 'update', Phil Schiller had to come out publicly and offer an Nv 7600 (or something...) as another top-end GPU option, after iMac owners finally cracked at the kind of crap they pull over the 'new' Mac Pro.

    Historically, Apple had a G5 as low as £995. The old G3 and G4 had entry prices closer to £1000 than £2000+. And Apple may wonder why the sales of the thing are tanking?

    I like the iMac. But I can understand why people like Dave (Wiz'), Hmm and others get ****** off with Apple. I do recognise that their needs could be relatively easily catered for. But Apple's blinkered hubris is reflected in the outcries from the pro community and elsewhere.

    You have desktop lines with year-old components that haven't even been given a price break to reflect that simple fact. THEY'RE OLD! EVEN Apple can't defy gravity indefinitely.

    Unless it's a political move to be THE 'mobile' computer company (they, arguably, already are?), i.e. to use mindshare to squeeze the desktop market and the traditional market toward the new mobile market where...yes, I guess they are king (if you count phones, tablets and laptops). That I don't have a problem with as a long-term objective.

    But we're a couple of years at least from that. And it is less than nothing for a company like Apple to put out a decent desktop at a decent price that doesn't gouge you for components that are mainstream or entry-priced on the PC side.

    Lemon Bon Bon. (Hopping.)

  • Reply 179 of 339

    ...and Apple are 'huge' now on bulk-buying supply chain power. It's not like they can't command good pricing on components. Still paying outrageous prices on RAM, SSDs and out-of-date GPUs. It's not the same Apple of the 1990s..? ;)

    Lemon Bon Bon.

  • Reply 180 of 339

    The silver lining?

    At least the Mac Pro isn't dead. Tim Cook himself confirmed a 2013 model, implying a radical shift.

    The iMac should at least get an update in the next couple of months.

    Despite my rant, it doesn't affect me. (But I can understand the frustration of others who are waiting...) I'm waiting for retina screens and a significant GPU and CPU performance boost before I go near another iMac (that points to at least Haswell) or this 'mooted' Knights Landing (which I haven't heard about at all...) Pro update (if they ever make an affordable one again...).

    Lemon Bon Bon.
