OS X 10.8.3 beta supports AMD Radeon 7000 drivers, hinting at Apple's new Mac Pro


Comments

  • Reply 61 of 211
    ecs Posts: 307 member

    Quote:

    Originally Posted by benanderson89 View Post


    [...]


    Even the lowest-power desktop PCs have fans in the PSU. Those things get HOT. It's not a wall wart or power brick connected to a battery in a laptop; it's a big brick of a thing that has to power several high-performance devices.


     


    Also, one fan? Why would that be a good idea in a high-performance system? High-performance servers and workstations are designed to keep components below 60°C. The Mac Pro has a fan on the GPU, a fan in the PSU, a fan for the HDDs, a fan for the daughter card holding the processor and RAM, a fan inside the processor heatsink itself and an exhaust fan. It needs all those fans - you can't just chop them down to one and expect it to function. [...]



     


    Yes, one fan, because I don't really need a Mac Pro elephant. What I actually want is an iMac without the display and with proper cooling. Put the most complete iMac configuration into a moderately sized cube, and you won't need more than a big high-quality silent fan to keep it cool under intense CPU/GPU load.


     


    Problem is that Apple, with the sole exception of the Mac Pro, doesn't seem interested in cool (I mean thermally cool) machines, maybe because machines that run hot don't last as long, and you buy new ones more frequently. Put that together with beautiful aesthetics, and all the factors are there to increase sales.


     


    This forces you to consider the Mac Pro even when you don't need one. But I don't want an elephant; that's why I'm asking for a simple machine that can be used for intense CPU/GPU work, with as few mechanical parts as possible, while keeping the chips cool. It's not hard to achieve that, although I don't see Apple doing it, for the reasons above.

  • Reply 62 of 211
    hmm Posts: 3,405 member

    Quote:

    Originally Posted by ecs View Post


     


    Yes, one fan, because I don't really need a Mac Pro elephant. What I actually want is an iMac without the display and with proper cooling. Put the most complete iMac configuration into a moderately sized cube, and you won't need more than a big high-quality silent fan to keep it cool under intense CPU/GPU load.


     



    Trying to engineer in favor of smaller sizes can actually drive up costs. Right now what most people fail to realize is that they're already leveraging a portion of the xmac crowd to keep mac pro sales at an acceptable level. The xmac machine people want aligns quite well with the specs of the base mac pro. Adding in a few drive bays and PCI options has little impact on pricing. They're using high markups for a line with little growth and low volume relative to their other products. I realize you dislike this, but that is how they've chosen to address this market up to this point. Windows is also showing terrible growth on desktops, so I can't see any big changes coming there.


    Quote:

    Originally Posted by PhilBoogie View Post





    Lion only sees 96GB (don't know if ML will). Strangely, Windows uses all 128GB under Bootcamp.

    OWC 16GB Memory Modules for 2009/2010 Mac Pro — 48GB / 96GB in Mac Pro





    Photographers disagree. Read this piece on a person using Photoshop to the max, so to speak. Easily needing 80GB, wishing for more.

    I really enjoy this thread. Don't have time to discuss everyone's take on the subject, but I do have one thing to say:

     


    Most photographers will never use that much. You'd have to be stitching some enormous files with a lot of layers. Adobe does recommend 64 GB for optimal performance with After Effects on a 16-core machine. They basically allocate 2GB per logical core. I don't think the ever-increasing core counts will keep climbing forever, but dual-package models still provide twice the bandwidth. Whether Apple sells them is another matter. This represents a small portion of their total users. I'd expect Windows to represent a significantly larger chunk of this. The thing is, it doesn't matter for most of these applications. When you're in them, they're 90% the same. People usually just continue with whatever operating system they already use unless hardware or software requirements force a change.
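
    The arithmetic behind that guideline is simple enough to sketch (the 2GB-per-logical-core figure is Adobe's; the function name and core counts here are just for illustration):

    ```python
    # After Effects RAM rule of thumb: ~2 GB per logical core (Adobe's guideline).
    # A 16-core Xeon box with Hyper-Threading exposes 32 logical cores.
    def ae_ram_gb(physical_cores, hyperthreading=True, gb_per_logical=2):
        logical = physical_cores * (2 if hyperthreading else 1)
        return logical * gb_per_logical

    print(ae_ram_gb(16))  # 64 -> the 64 GB figure for a 16-core machine
    print(ae_ram_gb(12))  # 48 -> matches the 48 GB/12-core figure quoted later
    ```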

  • Reply 63 of 211

    Quote:

    Originally Posted by ecs View Post


     


    Yes, one fan, because I don't really need a Mac Pro elephant. What I actually want is an iMac without the display and with proper cooling. Put the most complete iMac configuration into a moderately sized cube, and you won't need more than a big high-quality silent fan to keep it cool under intense CPU/GPU load.


     


    Problem is that Apple, with the sole exception of the Mac Pro, doesn't seem interested in cool (I mean thermally cool) machines, maybe because machines that run hot don't last as long, and you buy new ones more frequently. Put that together with beautiful aesthetics, and all the factors are there to increase sales.


     


    This forces you to consider the Mac Pro even when you don't need one. But I don't want an elephant; that's why I'm asking for a simple machine that can be used for intense CPU/GPU work, with as few mechanical parts as possible, while keeping the chips cool. It's not hard to achieve that, although I don't see Apple doing it, for the reasons above.



    Machines running hot does affect reliability - but Apple machines are consistently some of the most reliable on the market; so much for your "they make them hot so they break" theory. The problem is that you don't seem to understand the requirements of professional IT equipment. Apple clearly states that the Mac Pro is a workstation and not a consumer-oriented desktop computer, meaning it will cater to workstation users. Workstation users want power, versatility, accuracy and reliability. The way to make the Mac Pro powerful is to use high-performance parts - high-performance equipment runs hot, so large cooling systems are needed. The system needs to be versatile and ready for any situation, hence the massive expansion capabilities of the Mac Pro, right down to removable daughter boards. They need to be accurate and reliable, again pointing towards the performance and cooling, but this can also mean the ruggedness of the system, its components, chassis and motherboard.


     


    The Mac Pro is no smaller than any other professional workstation - in fact it's quite tiny in comparison with others in the same market segment (ProAm, Medium and Enterprise). Dell's new T5600 workstation is touted as having a "Compact Chassis":


     



     



     


     


    It's 41cm high, 17cm wide and 47cm deep.


    The Mac Pro is around 40cm high (sans the large handles), 20cm wide and 47cm deep.


    The Mac Pro is 3cm wider, but it has 4 hard disk bays vs Dell's two. The Mac Pro case is on par with what other manufacturers are calling "compact" - cases for this class of machine can get much, much larger and much, much heavier. Your typical gaming computer has a larger case than the Mac Pro.


     


    If you think it's easy to keep high-performance chips cool then you really need to take another look at the heatsinks on the market and those installed in other professional workstations; they are gigantic for a reason. To get a desktop computer to stay at the temperatures the Mac Pro achieves, you'd better be prepared to shove one of these on your motherboard + high-performance fans:



    Let's not forget that a single fan will not be enough to also cool a dedicated graphics card as well as the power supply unit. Even the cheapest desktop computer with a modest GPU card has a minimum of three fans.


     


     


    Quote:



    Originally Posted by hmm View Post


    Most photographers will never use that much. You'd have to be stitching some enormous files with a lot of layers. Adobe does recommend 64 GB for optimal performance with After Effects on a 16-core machine. They basically allocate 2GB per logical core. I don't think the ever-increasing core counts will keep climbing forever, but dual-package models still provide twice the bandwidth. Whether Apple sells them is another matter. This represents a small portion of their total users. I'd expect Windows to represent a significantly larger chunk of this. The thing is, it doesn't matter for most of these applications. When you're in them, they're 90% the same. People usually just continue with whatever operating system they already use unless hardware or software requirements force a change.



    Given the sheer pixel density of modern cameras, it's very possible that a photographer could use that much RAM in a heartbeat. I draw colour comics in Photoshop, and I have no trouble reaching the 32GB+ mark of real memory on my Mac Pro after several solid hours toiling away over a Wacom tablet. If I was doing professional-grade work with CMYK and/or print-ready files, that would easily top 64GB+. A CMYK or print-ready file (300 DPI+, 16-bit colour or better and/or very high resolution) can be hundreds of megabytes in size, some even a gigabyte, and this is just the file on disk!
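
    To put rough numbers on that, a back-of-envelope sketch (the 18x24" page is a hypothetical example; the rest is arithmetic):

    ```python
    # Uncompressed size of one flattened, print-ready page (hypothetical 18x24" poster).
    width_in, height_in, dpi = 18, 24, 300
    channels = 4           # CMYK
    bytes_per_channel = 2  # 16 bits per channel

    pixels = (width_in * dpi) * (height_in * dpi)   # 5400 x 7200 ~= 38.9 MP
    flat = pixels * channels * bytes_per_channel
    print(f"{flat / 2**20:.0f} MiB flat")           # ~297 MiB for a single layer

    # Every full-size layer costs roughly the same again in RAM, so a
    # dozen-layer comic page passes 3.5 GiB before history states and caches.
    ```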

  • Reply 64 of 211
    Thank you benanderson89 for elaborating on that. I looked that workstation up at Dell.com and found the picture to be funny (with the SF movie on the screen):

    [Image: http://forums.appleinsider.com/content/type/61/id/16956/width/500/height/1000]

    All fun aside, yes, there are definitely people out there maxing their RAM. Some wish for OS X to go beyond 96GB for good reason. Sounds crazy, but then again, we have crazy cameras nowadays like the D800 that shoots 36MP, resulting in an uncompressed raw of 75MB. A lossless compressed 14-bit raw is around 40MB.

    If you're gonna work on these in Photoshop you might hit that 96GB boundary.
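
    The arithmetic behind those figures, roughly (a sketch; the 6-bytes-per-pixel line assumes Photoshop's common 16-bit RGB working mode):

    ```python
    # D800 file sizes, back of the envelope.
    px = 7360 * 4912                    # 36.2 MP sensor

    raw_mb = px * 14 / 8 / 1e6          # 14 bits per photosite
    print(f"raw sensor data: {raw_mb:.0f} MB")   # ~63 MB; container overhead
                                                 # brings the NEF to ~75 MB
    # Demosaiced to 16-bit RGB in Photoshop, each pixel costs 6 bytes:
    layer_mb = px * 3 * 2 / 1e6
    print(f"per layer in RAM: {layer_mb:.0f} MB")  # ~217 MB

    # Twenty layers plus history states and caches, and one 'photo' is
    # already several GB of working set -- multiply by open documents.
    ```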
  • Reply 65 of 211
    Marvin Posts: 15,584 moderator
    hmm wrote:
    Distributed computing has been around for a while

    I see it being less like that and more like a co-processor. In much the same way you'd add external GPUs to a Mac Pro using a PCI extender but whichever works best.

    This wouldn't in any way be the normal setup, it would be the exception. A single 10/12-core Ivy Bridge Xeon is going to be pretty fast on its own and decent enough value for $4000. Instead of the best value starting around $4000, that's where it ends.
    hmm wrote:
    Regarding thunderbolt and PCI slots, the problem was the lack of integrated graphics. It had nothing to do with PCI slots. The chip depended on specific logic board placement, and the certification requirements made integrated graphics the way to go.

    Surely they can connect a desktop GPU directly to the TB controller though, it just has to be a more restrictive design, which is what I'd suggest. They could add a separate GPU onto the motherboard I suppose but it's not going to be used for much unless they ship entry MPs without add-on GPUs at a lower price.
    You've removed the second processor, crippling the machine in the high-end market

    The single Ivy Bridge CPU it uses could have the same number of cores they have now.
    you've removed the PCIe slots, meaning it can no longer be upgraded with extra expansions boards

    You can get an external PCI box but the preferred route would be Thunderbolt peripherals.
    you're removed a drive bay leaving only three (making almost all RAID configurations useless if three drives are employed)

    The OS has to go on one of them anyway. If you have RAID 0+1/10, your OS is on a RAID 0 setup, which isn't a good idea. RAID 5 is supported with 3 drives. Ideally they're going to ship these with SSD blades/Fusion drives too though, so you still technically have 4 drives.
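
    For reference, the capacity math with three bays, using the standard RAID formulas (the 3TB drive size is just an example):

    ```python
    # Usable capacity of 3 x 3 TB drives under the levels a 3-bay box allows.
    n, drive_tb = 3, 3.0

    raid0 = n * drive_tb        # striping, no redundancy: 9 TB
    raid1 = drive_tb            # 3-way mirror: 3 TB, survives two drive failures
    raid5 = (n - 1) * drive_tb  # one drive's worth of parity: 6 TB

    # RAID 10 needs at least 4 drives (pairs of mirrors), which is why
    # it drops off the menu once the OS takes one of three bays.
    print(raid0, raid1, raid5)
    ```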
    you've removed the top of the machine above the drive bays that held both the ODDs and the PSU, meaning no space left for extras such as a card reader.

    SD card readers are tiny, it would go next to the USB ports on the front.
    Your choice of putting the PSU behind the processor and ram daughter board means that cooling has been compromised as there is now no exhaust fan at the back of the machine

    The PSU doesn't take up the full width (or depth from this view) of the machine - it's not a 1kW PSU any more; I didn't show that very well in the image. The air would flow past the gap in front of it. You can see the available depth when they pull out the giant heatsinks here:



    Those massive heatsinks shouldn't be needed with the new Sandia heatsink design mentioned above either.
    Smaller fans would have to be employed for the middle of the tower running at a higher RPM

    Possibly if they leave the design like that. The GPU doesn't have to be like that though. I wouldn't expect them to maintain the layout like I've done, that was just to show roughly what a reworking of the internals can do.
    philboogie wrote:
    Photographers disagree. Read this piece on a person using Photoshop to the max, so to speak. Easily needing 80GB, wishing for more.

    He didn't quite max out 64GB, but a few extra layers might do it. Working with multiple 16bpc 22MP images and saving a 24GB PSD file isn't representative of a widespread need though - he even said it was for comparisons, so presumably he was loading a whole load of images in as layers to see the differences. Photoshop isn't meant for that. But one processor supports 6 slots anyway, so if they went this route, Apple could support 96GB. Photoshop shouldn't use that much RAM for doing this. Either they need to figure out how to keep layers compressed in RAM or use an intelligent proxy system.

    Photoshop has various caching features, they even use cache tiles:

    http://helpx.adobe.com/photoshop/kb/optimize-performance-photoshop-cs4-cs5.html

    but when you look at Google Maps, it's a set of photos of the entire world and it runs in a web browser. You aren't applying any filters there, but Photoshop should be able to load only as much as it needs for the zoom level you're at and let you work with effectively infinite-resolution images. Any processing should be done directly to disk.
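
    A minimal sketch of that tile-pyramid idea (all names hypothetical; this is how map viewers behave, not a description of Adobe's actual internals):

    ```python
    # Tile pyramid: keep only the tiles covering the current viewport in RAM,
    # at the zoom level being displayed. Hypothetical sketch, not Adobe code.
    TILE = 256  # pixels per tile edge

    def visible_tiles(vx, vy, vw, vh, zoom):
        """Tile keys intersecting a viewport. zoom 0 = full resolution;
        each level up halves the image dimensions."""
        scale = 2 ** zoom
        x0, y0 = (vx // scale) // TILE, (vy // scale) // TILE
        x1 = ((vx + vw) // scale) // TILE
        y1 = ((vy + vh) // scale) // TILE
        return [(x, y, zoom) for y in range(y0, y1 + 1)
                             for x in range(x0, x1 + 1)]

    cache = {}  # (x, y, zoom) -> pixels; evict LRU-style when RAM is tight

    def render_view(vx, vy, vw, vh, zoom, load_tile):
        tiles = []
        for key in visible_tiles(vx, vy, vw, vh, zoom):
            if key not in cache:
                cache[key] = load_tile(key)  # decode only this tile from disk
            tiles.append(cache[key])
        return tiles

    # A 100,000 px square image is ~153,000 tiles at full resolution, but a
    # 2560x1440 window touches about 70 of them: RAM tracks the window, not the file.
    ```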

    As you rightly say, people work with what they've got, as they've done over the years, even on machines with less power than modern mobile phones. The other thing to remember is that Apple isn't out to sell customers a machine that satisfies their needs forever; they still want to sell more machines.

    They'd be better off selling Cubes so that people will want a new one the following year. It creates growth. Sure some people might switch to Windows workstations but they can do that now. Apple doesn't sell dual E5-2687W workstations.

    Think about the following spec:

    - 10-core Ivy Bridge 3.1GHz
    - 4/6 RAM slots up to 64/96GB RAM
    - 3GB Radeon 8970 GPU, possibly fixed design
    - 4/6 20Gbps TB ports
    - Fusion drive option with up to 12.7TB total storage over 3 drives
    - 8" Cube best case, worst case 8"x14.5"x14.5"

    $3999

    I think that's a pretty good workstation machine. If you need slots, buy an $800 PCI box or get Thunderbolt equivalents. It would have the glossy black Apple logo on the side but smaller.
  • Reply 66 of 211
    hmm Posts: 3,405 member

    Quote:

    Originally Posted by benanderson89 View Post




     


    Quote:


    Given the sheer pixel density of modern cameras, it's very possible that a photographer could use that much RAM in a heartbeat. I draw colour comics in Photoshop, and I have no trouble reaching the 32GB+ mark of real memory on my Mac Pro after several solid hours toiling away over a Wacom tablet. If I was doing professional-grade work with CMYK and/or print-ready files, that would easily top 64GB+. A CMYK or print-ready file (300 DPI+, 16-bit colour or better and/or very high resolution) can be hundreds of megabytes in size, some even a gigabyte, and this is just the file on disk!



    Scratch disks always built up over a number of hours. With adequate ram you're just storing the information directly in memory. Anyway, I've dealt with CMYK files in the past. I'm not sure why you'd feel the need to retain 16 bpc there. CMYK spaces tend to be pretty locked down, so you shouldn't run into banding issues. Hundreds of megabytes in size is nothing. You can deal with 2GB (on disk) files comfortably on modern hardware. It was possible a decade ago; it's just that you wouldn't have used 16 bpc modes. You didn't have cpu-intensive things like smart objects. You didn't assemble large spherical hdri imagery directly in PS (it didn't even support the Radiance format). diglloyd talks things up a bit at times, but he does provide file sizes for reference and lists the actions applied. When you look at a 16 bpc 15-20k image and run a set of intense tasks, it allows you to really see stratification within the lineup, and it is nice being able to deal with things in real time without lag. It's just silly to suggest it's otherwise unworkable. I mentioned that Adobe suggests 48GB of ram for a 12-core machine or 64 for a 16-core if you want to retain maximum performance, especially during rendering. After Effects is the most memory-intensive application they publish. Photoshop isn't as bad, even in CMYK or indexed color and 10k files.


     


    Quote:

    Originally Posted by Marvin View Post





    He didn't quite max out 64GB, but a few extra layers might do it. Working with multiple 16bpc 22MP images and saving a 24GB PSD file isn't representative of a widespread need though - he even said it was for comparisons, so presumably he was loading a whole load of images in as layers to see the differences. Photoshop isn't meant for that. But one processor supports 6 slots anyway, so if they went this route, Apple could support 96GB. Photoshop shouldn't use that much RAM for doing this. Either they need to figure out how to keep layers compressed in RAM or use an intelligent proxy system.



     


     


    You can't save a 24GB PSD file. I don't know how much of that bulk is layers or if it's compressed, but PS has a limit on size. Once you go over 2GB on disk, you have to save in .PSB (large document format) anyway. My primary use these days would be for stitching spherical hdr images. You could do a lot of these things all the way back in the G4/G5 days, but it was a lot slower. At that time 16 bpc was also uncommon in PS. It's overrated anyway; 32 bpc is useful if you need to work with linear data.
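
    That 2GB ceiling arrives sooner than it sounds once you work in 16-bit; quick arithmetic (the pano dimensions are just an example):

    ```python
    # When does a flattened, uncompressed 16-bit RGB image cross PSD's 2 GB cap?
    PSD_CAP = 2 * 2**30    # .psd file-size limit, in bytes
    px_cost = 3 * 2        # RGB at 16 bits/channel = 6 bytes per pixel

    max_mp = PSD_CAP / px_cost / 1e6
    print(f"~{max_mp:.0f} MP flat")  # ~358 MP, roughly an 18,900 px square

    # A 20,000 x 10,000 stitched spherical pano at 16 bpc is ~1.2 GB with one
    # layer, so a second layer already forces .psb (the large document format).
    ```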


    Quote:


    Photoshop has various caching features, they even use cache tiles:

    http://helpx.adobe.com/photoshop/kb/optimize-performance-photoshop-cs4-cs5.html

    but when you look at Google Maps, it's a set of photos of the entire world and it runs in a web browser. You aren't applying any filters there, but Photoshop should be able to load only as much as it needs for the zoom level you're at and let you work with effectively infinite-resolution images. Any processing should be done directly to disk.

    As you rightly say, people work with what they've got, as they've done over the years, even on machines with less power than modern mobile phones. The other thing to remember is that Apple isn't out to sell customers a machine that satisfies their needs forever; they still want to sell more machines.

    They'd be better off selling Cubes so that people will want a new one the following year. It creates growth. Sure some people might switch to Windows workstations but they can do that now. Apple doesn't sell dual E5-2687W workstations.


     




    I was going to avoid that topic, but Photoshop can run in a lot of ways. If you load it with ram and set its memory allocation high, it will use it. People still worked on huge files for movie posters and things prior to 64-bit Photoshop. It's just that today it's practical to let it cache more data to ram rather than scratch disks if the resources are available. It's not the leanest application out there, but memory prices are low. For anyone dealing with large files, I'd just say max the ram and, if necessary, add an ssd after that.


     


     


    Quote:


    Think about the following spec:

    - 10-core Ivy Bridge 3.1GHz

    - 4/6 RAM slots up to 64/96GB RAM

    - 3GB Radeon 8970 GPU, possibly fixed design

    - 4/6 20Gbps TB ports

    - Fusion drive option with up to 12.7TB total storage over 3 drives

    - 8" Cube best case, worst case 8"x14.5"x14.5"

    $3999



    I'm not sure there's a good way to implement more than a single Thunderbolt chip, and each chip supports 2 ports according to everything I can find. In the past Apple has limited the number of "specialty" ports. They never had more than 3 FireWire ports. This assumes we'll see Thunderbolt right now. It could skip a generation, as it would have little impact on the overall shape of the case. The best way to implement it remains integrated graphics, which you wouldn't have on the E5 you mentioned. You're still stuck on engineering it to be smaller. At a $4000 price point, I don't see how that could possibly matter unless you're going the Lenovo C20 route and trying to fit more in a server rack. It's actually more expensive than some of the others. When you engineer specifically for size, it costs money. The cost of materials to build a large aluminum case is paltry compared to the costs of trying to make something as compact as possible while retaining performance parts. I think you just like to imagine this stuff. You told me you like new/innovative solutions before. I get that, but I think your abstract concepts are misaligned with the priorities of such a machine. I'm also skeptical of the Radeon drivers when the rest of the lineup has moved to NVidia for now. The ability to leverage OpenCL and CUDA is important when it comes to reaching the widest market possible. If you're offering a fixed graphics solution that is limited to AMD, you've limited your market yet again.


     


     


    Quote:


    I think that's a pretty good workstation machine. If you need slots, buy an $800 PCI box or get Thunderbolt equivalents. It would have the glossy black Apple logo on the side but smaller.



     


    You knew this was nonsense logic when you typed it. I'm really puzzled by this. At $4000 the price is a large determining factor in your potential market. You're looking for users with either bleeding edge requirements or complex needs, and this solution does basically nothing to drive anything forward. If you're looking for a design that must last for a number of years, gpu computation must be a part of the core design, not something allocated to third party generic boxes and a limited set of cards with hit or miss OSX support. Apple really needs the widest market possible. If they're limiting it to a high price point, the worst thing they could do would be to drive away anyone that can afford it. I think that kind of pricing + artificially imposed limitations would be enough to finally kill the line due to negative growth.


     


    I should add that most of the people who want a smaller case assume that this would directly lower the price. It's a false assumption in terms of product positioning. Apple could lower the price if they wanted to, and they would sell more. Using thick aluminum doesn't contribute more than a few dollars to the material cost, and any initial setup costs should have been covered long ago given the age of the outer shell design.

  • Reply 67 of 211
    wizard69 Posts: 13,377 member
    philboogie wrote: »
    z3r0 wrote: »
    192GB+ RAM

    Lion only sees 96GB (don't know if ML will). Strangely, Windows uses all 128GB under Bootcamp.
    Last I knew the limit was still 96GB. It's not something I have to worry about, though it's obviously important to others.
    OWC 16GB Memory Modules for 2009/2010 Mac Pro — 48GB / 96GB in Mac Pro

    Marvin wrote: »
    No one will ever need more than 64GB of RAM. You heard it here first.


    Photographers disagree. Read this piece on a person using Photoshop to the max, so to speak. Easily needing 80GB, wishing for more.
    My photographic interests are amateur at best, but I have noticed performance problems due to a lack of RAM. As a side note, the Mac may suffer from a lack of RAM, but on iOS devices the lack of RAM is a critical issue.
    I really enjoy this thread. Don't have time to discuss everyone's take on the subject, but I do have one thing to say:

    I think pros use whatever tool is available to get the job done. You'd think someone like Phil Collins would sound different if he played on a different drum kit?
    Actually, yes, he would sound different. You may have picked a poor craft to base your argument on, because musicians can be downright obsessive about their instruments. A good one, anyway, will hear things that fly right past me. At times they will have preferences for instruments based on the song they wish to play.
    No, it's not the tool that makes the sound; it's the artist creating whatever he wants. A photographer doesn't blame his gear if the picture doesn't look good to him. There are people creating far better pictures with their cellphones than others do with a (D)SLR.
    That is one-sided; there is a real world of optics and physical effects associated with light that does impact a picture. The wrong lens can mess up a picture as badly as a poor composition. Admittedly, a good photographer selects the lens best suited to what he wants to achieve, but even then, sometimes what is seen in the viewfinder never makes it to film. Well, at least not as intended.
    Which, for example, means that a pro won't care that much if there aren't any drive bays in the next model; they'll get external (TB) storage if needed.
    Almost every pro photographer you come across is working with some sort of external storage array, often more than one. Often this is because it ends up being easier to manage external devices rather than internal ones. Plus, external devices can go off-site with a laptop. Generally there are significant advantages associated with external arrays.
    Thanks to everyone for their great posts, especially wizard69 and Marvin.

    If they do make a smaller box and some folks need to hook up their external devices that might create desktop clutter:
    Cute.

    One point here: for the last few years I've been using a MBP as my primary computer. As such I've had, and still have, a bit of a rat's nest of devices plugged into the laptop. Except for one external drive that's always there, I don't see a desktop reducing the amount of wires significantly no matter the size of the box. Video monitors, audio cables and what have you would still be plugged into the machine. Done right, a disk array would be designed to mate nicely with the compute module and hardly be noticeable.
  • Reply 68 of 211

    Quote:

    Originally Posted by wizard69 View Post



    …you call yourself a professional but can't see the importance of these standards, so it makes me wonder just what you are, profession-wise.


    I am a recording engineer. In my post, I made several references to audio recording, and included a link to PCI audio hardware made by Mark of the Unicorn. I stand by what I said: USB 3.0 and Thunderbolt are fine, but that is not how we, or anyone else in the industry, get multiple channels of 192kHz audio onto a hard drive. We have a frightfully large investment in PCI hardware because that's how this works.

  • Reply 69 of 211
    Marvin Posts: 15,584 moderator
    hmm wrote:
    I'm not sure there's a good way to implement more than a single thunderbolt chip

    For 4 ports, I reckon they'd get away with a 4-lane controller at 20Gbps per lane. This would let you run 2 displays and still have 2x 20Gbps ports free. If they can implement two controllers or have 8 lanes, that would be the preferred setup.
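
    Rough bandwidth budget for that speculative controller (20Gbps per port was the rumored next-generation Thunderbolt figure at the time, not a shipping spec; the DisplayPort number is the real DP 1.2 payload rate):

    ```python
    # Bandwidth budget for the speculative 4-port, 20 Gbps/port controller above.
    ports, gbps = 4, 20
    total = ports * gbps             # 80 Gbps aggregate

    dp12 = 17.28                     # DisplayPort 1.2 max video payload, Gbps
    left = total - 2 * dp12
    print(f"{left:.1f} Gbps left after two displays")  # ~45 Gbps

    # Put both displays on two of the ports and the other two stay free,
    # which is the '2x 20Gbps ports free' arrangement described above.
    ```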

    I wonder if the Thunderbolt setup has been the major cause of the current state of the Mac Pro because there was really no other reason to avoid Sandy Bridge.

    They can obviously avoid using it in the Mac Pro but they've put so much effort into pushing it as a standard. Leaving it out of their most expensive machine doesn't seem like a sensible idea.
    hmm wrote:
    You're still stuck on engineering it to be smaller. At a $4000 price point, I don't see how that could possibly matter unless you're going the Lenovo C20 route and trying to fit more in a server rack.

    At $4000 the price is a large determining factor in your potential market. You're looking for users with either bleeding edge requirements or complex needs, and this solution does basically nothing to drive anything forward. If you're looking for a design that must last for a number of years, gpu computation must be a part of the core design, not something allocated to third party generic boxes and a limited set of cards with hit or miss OSX support.

    Apple really needs the widest market possible. If they're limiting it to a high price point, the worst thing they could do would be to drive away anyone that can afford it. I think that kind of pricing + artificially imposed limitations would be enough to finally kill the line due to negative growth.

    The Mac Pro is $6200 at the top-end just now. The new one would be $4000 with a single 10/12-core CPU. It would start at $2499 with a 6-core.

    When you say it's not driving anything forward, what exactly does leaving the machine unchanged do? You're not getting any more GPUs in there with a 300W power limit on the slots. Just put one high-end GPU in and computation is covered.

    The thing about supporting legacy technology is that if you leave it in, people keep investing in it. Then after a few years, people say 'you can't remove that because I have loads of hardware based on it now'. It's like the edit-to-tape thing in FCPX. If you leave it in, people just stick with the same workflow and then after 10 years, they'll say they've been using it for 10 years so they still can't change it.

    Apple can leave the design the same if they want, and what would happen is that it appeals to exactly the same number of people the current one does. The quad-core would still be $2499 and nobody would want it because it's a huge box and a high price for just a quad. The $4000+ models are too expensive.

    The 'Cube' would be:

    6-core Ivy Bridge $2499
    8-core $2999
    10-core $3499
    12-core $3999

    Each with an option for a high-end GPU like the 8970 or GTX 780, each with 4-6 RAM slots, each with 4-6 Thunderbolt ports.

    It's more cost-effective because you aren't putting in a huge PSU or optical drives and cabling, you use single CPUs.

    This segment is dying whether people like it or not, it may as well go out with style.
  • Reply 70 of 211
    hmm Posts: 3,405 member

    Quote:

    Originally Posted by Marvin View Post





    For 4 ports, I reckon they'd get away with a 4-lane controller at 20Gbps per lane. This would let you run 2 displays and still have 2x 20Gbps ports free. If they can implement two controllers or have 8 lanes, that would be the preferred setup.

    I wonder if the Thunderbolt setup has been the major cause of the current state of the Mac Pro because there was really no other reason to avoid Sandy Bridge.

    They can obviously avoid using it in the Mac Pro but they've put so much effort into pushing it as a standard. Leaving it out of their most expensive machine doesn't seem like a sensible idea.

    The Mac Pro is $6200 at the top-end just now. The new one would be $4000 with a single 10/12-core CPU. It would start at $2499 with a 6-core.

    When you say it's not driving anything forward, what exactly does leaving the machine unchanged do? You're not getting any more GPUs in there with a 300W power limit on the slots. Just put one high-end GPU in and computation is covered.


    I misinterpreted part of your post. I thought you meant just offer a $4000 model, and I'm not sure that would bring in enough volume to be considered sustainable for Apple. It could be completely different for a smaller vendor. Apple has somehow supported dual gpus in their CTO options before, although they weren't as power hungry. If they see this as a growing use case, it would make sense to design a box that could adequately cool a couple of them without tacking additional coolers directly to the card. Using 2 might require power cables. As you mentioned, the PCI slots have a 300W limit. This is something that is currently better accommodated by some of the PC vendors. They make significant design updates more often, and more of them have been going this route with the current generation of machines. It makes sense to look at opportunities for growth if they're going to keep making this line. Growth requires real improvements across the line. Right now the higher end mac pros have greatly outpaced the lower ones, which have remained somewhat stagnant.


     


    I forgot to include this before. Ivy Bridge E5s don't make it any easier to implement Thunderbolt. What have you read that suggested otherwise? It doesn't integrate Thunderbolt at a native level. It doesn't provide integrated graphics. Consider that they may not have devoted resources to the project, especially after repeated delays from Intel. There was likely no mac pro team to work on it. While I still see perfectly valid use cases for such a machine, I thought they'd either update or cancel it. As for Thunderbolt, I don't think Apple cares beyond their own peripherals. It was initially placed where they had a Mini DisplayPort connection. In either case you could plug in a display, and they built out from that. It allowed them to offer integration that wasn't possible on the notebooks and iMacs, which obviously make up the bulk of mac sales.


     


    Quote:


     


    The thing about supporting legacy technology is that if you leave it in, people keep investing in it. Then after a few years, people say 'you can't remove that because I have loads of hardware based on it now'. It's like the edit-to-tape thing in FCPX. If you leave it in, people just stick with the same workflow and then after 10 years, they'll say they've been using it for 10 years so they still can't change it.

    Apple can leave the design the same if they want, and what would happen is that it appeals to exactly the same number of people the current one does. The quad-core would still be $2499 and nobody would want it because it's a huge box and a high price for just a quad. The $4000+ models are too expensive.

    The 'Cube' would be:

    6-core Ivy Bridge $2499

    8-core $2999

    10-core $3499

    12-core $3999

    Each with an option for a high-end GPU like the 8970 or GTX 780, each with 4-6 RAM slots, each with 4-6 Thunderbolt ports.

    It's more cost-effective because you aren't putting in a huge PSU or optical drives and cabling, you use single CPUs.

    This segment is dying whether people like it or not, it may as well go out with style.




     


    If the pricing strategy remains somewhat similar, you'll still attract most of the same people. It needs to attract some kind of growth, especially in driving faster repurchasing cycles. People buying these probably update every 2-4 years. Buying the new one with 15% faster x86 cores doesn't do much. It usually means that anything bound by machine time gets done a bit faster, which doesn't describe the overall situation for most of these users. The other circumstance is that the machine you already own is limiting, so any amount of additional speed is welcome. I've been interested in things like CUDA because they open up newer workflows. After Effects probably would not have implemented a raytracer if they were still bound to x86 cores. I agree that the $2500 model should have shifted to a hex to maintain some kind of value. I don't agree with the idea of starting with a form factor and determining what will fit within it when building a workstation as opposed to determining what should be included and designing around that.

  • Reply 71 of 211
    mactac Posts: 321 member

    Quote:

    Originally Posted by PhilBoogie View Post





    There are quite a few people posting this very request; a smaller MP, an in-between iMac and MP. Knowing Apple, they never cease to amaze people. You might get your wish, but I doubt it. Because:

    The iMac starts at $1299; the Mac Pro starts at $2499. Say they want to release a mid-Mac, if you will. I think that price will need to sit in between, at $1899.


    I would camp outside an Apple store and pay $1500 for a mid-sized Mac that had an i7 processor, some expansion (2 hard drives plus room for an optical drive for folks like me who still use it) and didn't have a built-in monitor.


    The Mac Pro is simply overkill in size, price and processor horsepower.


    But the mini and the iMac are too restricted. Built-in monitor? No way. Not for me. Mini with no expansion? No way. Both are desktops (stationary, where a little bit of size and weight isn't that big of a deal) but have been shrunk so much that you can't even get an optical drive in one.


     


    A $1500 mid-sized Mac with some expansion and sans built-in monitor, and I'm all over it.

  • Reply 72 of 211
    hmm Posts: 3,405 member

    Quote:

    Originally Posted by MacTac View Post


    I would camp outside an Apple store and pay $1500 for a mid-sized Mac that had an i7 processor, some expansion (2 hard drives plus room for an optical drive for folks like me who still use it) and didn't have a built-in monitor.


    The Mac Pro is simply overkill in size, price and processor horsepower.


    But the mini and the iMac are too restricted. Built-in monitor? No way. Not for me. Mini with no expansion? No way. Both are desktops (stationary, where a little bit of size and weight isn't that big of a deal) but have been shrunk so much that you can't even get an optical drive in one.


     


    A $1500 mid-sized Mac with some expansion and sans built-in monitor, and I'm all over it.



    The time to implement such a machine would have been a little after they moved to Intel, to grab some of that market. It's been in a growth slump, which tends to be a bad thing for new products, especially when it comes to a company the size of Apple. I'm not discounting the advantages of the form factor. I just don't expect to see this. The idea of the mac pro becoming an xmac and falling to this price level is really unlikely. The people who criticize the mac pro as being overkill often ignore that the base model is very xmac-like in its hardware choices. The hardware used there isn't that expensive, and sharing a backplane adds negligible costs. In fact the daughterboard design was likely a cost-cutting measure to prevent having to use more expensive dual-package parts in the single version they split off as of 2009. Really it wouldn't be much cheaper to build the xmac than it would the bottom mac pro. The performance would be pretty similar. People just misunderstand the cost of that model. It's there because Apple wanted it there, and they use it to maintain minimum volume for the rest of the line. If you don't believe me, look up the cpu costs at the launch of the $800 quad mini and the mac pro at the last "refresh". They drop a $300 cpu option into the $2500 mac pro, and they aren't paying the costs of a full dual-package board due to the daughterboard design. As to thick aluminum and extra drive bays, they don't add much. It's cheaper aluminum, but it's thick. Things like drive bays and other random features are also commonly found in xmac-like machines on Newegg.

     


    What all of you really want is the base mac pro in a smaller case for $1000 less, and this would require a change in philosophy for the machine rather than a reduction in costs.

  • Reply 73 of 211
    Marvin Posts: 15,584 moderator
    hmm wrote:
    Buying the new one with 15% faster x86 cores doesn't do much. It usually means that anything bound by machine time gets done a bit faster, which doesn't describe the overall situation for most of these users. The other circumstance is that the machine you already own is limiting, so any amount of additional speed is welcome.

    So you mean they have to get round Intel's slow upgrade cycle somehow like by allowing you to buy multiple machines and easily hook them together to get guaranteed performance scaling with both CPU and GPU. Yeah, that's a good idea. I wonder how they'd connect them together to allow you to control a CPU and GPU in a plug and play manner though.
    hmm wrote:
    I don't agree with the idea of starting with a form factor and determining what will fit within it when building a workstation as opposed to determining what should be included and designing around that.

    It can never be one or the other. They don't design an iMac and then worry if it can just take a ULV processor with integrated graphics.

    They can start with an 8" Cube and see what they can fit inside. If it's too small, they try something a bit bigger like an 8.1" Cube.

    There are 3 real options:

    - similar design and just throw in Ivy Bridge at the same prices points after 3 years
    - radical redesign with better performance per dollar that will work for the next few years
    - no more Mac Pro

    Given what's happened, I think they were about to drop it. You can see this trend everywhere. No big company wants to be in the tower market any more.

    The Pippin failed, the Newton failed and the Quicktake failed, the iPhone and iPad succeeded. It's the Cube's time to shine. Think about the cube store with cubes inside.
  • Reply 74 of 211
    hmm Posts: 3,405 member

    Quote:

    Originally Posted by Marvin View Post





    So you mean they have to get round Intel's slow upgrade cycle somehow like by allowing you to buy multiple machines and easily hook them together to get guaranteed performance scaling with both CPU and GPU. Yeah, that's a good idea. I wonder how they'd connect them together to allow you to control a CPU and GPU in a plug and play manner though.


    This isn't quite how I see it. I've mentioned several things such as performance per dollar, potential range of use cases, and what upgrades actually enable. These are aimed at users with either demanding workloads or atypical requirements. I think we can agree on that much. They're not really designed for distributed computing as a primary use. You could try that, and it might work if you were looking to harvest extra cycles from under-utilized machines in a larger shop driven by mac pros. It's also true that as they become more powerful, they could leverage some workloads that were previously dedicated to clusters, assuming those workloads hit flat growth and workstations have now caught up. I was merely pointing out that the most cost-effective improvement to a wide range of workloads is currently tied to computation allocated to the gpu. The comparisons are generally a CUDA card to a 12-core mac pro, and even with the testing biases, it presents a strong performance per dollar. Take a look at NVidia's propaganda. What's really interesting is how well it works at various levels. Adobe certainly didn't optimize in favor of cpu computation, but the results are still really impressive. Looking at where things will go moving forward, I see that as a better thing to address at a core level. The gains are likely to be much stronger, and it would be something to drive growth in the mac pro as the software matures. If you're just looking to build a server farm, there are cheaper methods. Do you see Apple as a company that would research ways to daisy-chain mac pros at a plug-and-play level if they really were recently considering its cancellation? This isn't the kind of thing they've pursued since the Xserve. Given their professed interest in OpenCL and recent migration back to NVidia, I assumed a strong focus on GPGPU to align better with their current path.


     


    Quote:


    It can never be one or the other. They don't design an iMac and then worry if it can just take a ULV processor with integrated graphics.

    They can start with an 8" Cube and see what they can fit inside. If it's too small, they try something a bit bigger like an 8.1" Cube.

    There are 3 real options:

    - similar design and just throw in Ivy Bridge at the same prices points after 3 years

    - radical redesign with better performance per dollar that will work for the next few years

    - no more Mac Pro



    You're leaving out the potential for a late Sandy Bridge E. If you look at Westmere, they still used some Nehalem options. It could be the same thing here. It may be mixed either way. If they're waiting for Ivy, these drivers will likely never be used. The AMD 8XXX will be out by the time we have Ivy Bridge E5s. NVidia seems like a better option anyway. I think leaving the Mac Pro as the only computer without CUDA options is just a terrible move for its future when it needs to soak up as much volume as possible. No more mac pro makes less sense to me unless they were undecided. They could have cancelled it when they announced new products or sunset it like they did with the Xserve. I only see a redesign if they think they can capture new customers with it. This is the whole point of what I've mentioned. Even in workstations, not everything is a $10k/seat configuration. They've been going kind of cheap on x86 cores with just a quad in the $2500 model. If that is the continued direction, they need something else to prop up its value.


     


     


    Quote:


     


    Given what's happened, I think they were about to drop it.




     


    I agree. Pushing it back to next year likely means that no one was allocated to work on it.


     


    Quote:


    You can see this trend everywhere. No big company wants to be in the tower market any more.

    The Pippin failed, the Newton failed and the Quicktake failed, the iPhone and iPad succeeded. It's the Cube's time to shine. Think about the cube store with cubes inside.



    I think the last two lines there are more your imagination, although it does make these discussions more interesting.

  • Reply 75 of 211
    Marvin Posts: 15,584 moderator
    hmm wrote:
    I've mentioned several things such as performance per dollar, potential range of use cases, and what upgrades actually enable. These are aimed at users with either demanding workloads or atypical requirements.

    I was merely pointing out that the most cost effective improvement to a wide range of workloads is currently tied to computation allocated to the gpu.

    But you have to be suggesting that if they offer better performance per dollar with multiple GPUs, they include more double-wide slots and increase the power limit of the slots. They'd have to double that power allocation. Bigger PSU, more slots, possibly a bigger case on top of the standard prices. You would be able to get a quad-core with 2 GTX 780s, which would be more cost-effective for certain tasks than a dual-processor machine but still pricey and not that appealing.
    hmm wrote:
    They're not really designed for distributed computing as a primary use.


    [VIDEO]


    I love the quote "at a price of $5.2m practically anyone can build a supercomputer". Yeah, anyone with $5.2m. In terms of today's machines, 50 little Ivy Bridge cubes would match the performance of that cluster. If you add in GPU computing, it could be as few as 10.
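
    A sanity check on those numbers using peak-FLOPS arithmetic only (the 12-core 2.7GHz part is a guess at a plausible Ivy Bridge E Xeon, and the cluster's ~10 TFLOPS class rating is an assumption about the machine in the video):

    ```python
    # Peak double-precision throughput of one hypothetical 12-core Ivy Bridge box.
    cores, ghz = 12, 2.7
    flops_per_cycle = 8   # AVX: 4-wide DP add + 4-wide DP multiply per cycle

    box_tflops = cores * ghz * flops_per_cycle / 1000
    print(f"one box: {box_tflops:.2f} TFLOPS")        # ~0.26 TFLOPS
    print(f"50 boxes: {50 * box_tflops:.1f} TFLOPS")  # ~13 TFLOPS

    # A compute GPU adds roughly 1 TFLOPS of double precision per box,
    # which is how 50 CPU-only boxes shrink to 'as few as 10' with GPUs.
    ```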

    I don't see it as a primary use but still a significant one. Think how many people deal with video ingesting/transcoding. If Apple can work with Adobe to get AE renders done in a distributed way, that's huge.
    hmm wrote:
    Do you see Apple as a company that would research ways to daisy chain mac pros at a plug and play level if they really were recently considering its cancellation?

    Yes. Look at what happened with the Mini. It went ages without an update and people just assumed that was it but then BAM, they came out with a totally rethought, redesigned model that in itself is a work of art. This says to me that deep down in their darkest workshops, they are retooling everything to build a next-gen Mac Pro. It might not be built the way I suggest but I would be surprised if it remained largely unchanged.
    You're leaving out the potential for a late Sandy Bridge E.

    There's no potential for this. Why would they use Sandy Bridge when Ivy Bridge E is a few months away? Q3 is right around WWDC:

    http://www.engadget.com/2012/10/17/intel-roadmap-reveals-10-core-xeon-e5-2600-v2-cpu/

    If you're going to make a new machine, you might as well use the newest parts instead of making people wait 3 years for last year's CPUs.
    You can see this trend everywhere. No big company wants to be in the tower market any more.

    The Pippin failed, the Newton failed and the Quicktake failed, the iPhone and iPad succeeded. It's the Cube's time to shine. Think about the cube store with cubes inside.
    I think the last two lines there are more your imagination, although it does make these discussions more interesting.

    HP doesn't want to be in the market, they were going to sell off their entire desktop business. Both Dell and HP, the biggest companies in this sector are severely struggling:

    http://www.zdnet.com/dell-hp-and-the-folly-of-the-consumer-pc-business-7000003072/
    http://www.zdnet.com/server-sales-slow-but-dell-shows-growth-hp-ibm-tied-for-no-1-7000003427/
    http://www.wired.com/wiredenterprise/2012/09/29853/

    While they do better in the enterprise, if they cut the consumer market, their business solutions will suffer too because their volumes will shrink to a fraction of what they are now. The server market will go the custom build route possibly with ARM to save millions for big companies and Tesla and similar will take over for compute. Dell and HP are dead in the water.

    The workstation market is tiny and over time it will merge into the AIO market. This is one thing HP actually gets right. I know people are going to rattle off the usual 'always needs' like internal RAID, multiple GPUs, internal expansion but just give it another few years.

    Custom PCI cards are designed to do things the computer is too slow to do natively, this is going to change. GPUs will reach a point very quickly where you just won't need multiple GPUs for real-time use and the rest will be server-side. For IO expansion, everything is going to standardise around USB 3 and Thunderbolt (or some other form of external PCI).
  • Reply 76 of 211
    philboogie Posts: 7,675 member
    Marvin wrote: »
    The Pippin failed, the Newton failed and the Quicktake failed, the iPhone and iPad succeeded. It's the Cube's time to shine. Think about the cube store with cubes inside.

    But the Cube failed as well, selling a mere 125,000 IIRC. I do like your thought on Cubes in the Cube!
  • Reply 77 of 211
    hmm Posts: 3,405 member

    Quote:

    Originally Posted by Marvin View Post





    But you have to be suggesting that if they offer better performance per dollar with multiple GPUs, they include more double-wide slots and increase the power limit of the slots. They'd have to double that power allocation. Bigger PSU, more slots, possibly a bigger case on top of the standard prices. You would be able to get a quad-core with 2 GTX 780s, which would be more cost-effective for certain tasks than a dual-processor machine but still pricey and not that appealing.


    Actually workstations don't always run the hottest gpus. This is mostly unique to Apple, but even Apple has allowed 2 in the 150W range on several prior mac pros. A lot of workstation gpus are clocked lower, but they can perform significantly better under certain circumstances. It varies to a degree, which is why a lot of people used GTX 580s for CUDA processing. Supposedly the Teslas held up better with double-precision math, and obviously you have more ram available there. I expect integrated graphics may eventually show up on some E5 variants. The E5 equivalent of Haswell is the earliest I'd expect that if they go that route. Broadwell is more likely, as I don't expect them to go up in core counts indefinitely. So far that strategy hasn't worked perfectly. A lot of these algorithms weren't written to be split among so many threads, and algorithm development tends to be largely academic. If you ever read SIGGRAPH articles, you'll find a lot of PhD dissertations referenced among them.


     


    Anyway, I think you're a bit imaginative in the way you envision these things. In 10 years a lot of things that exist as primary computing devices today may operate more like thin clients. It's just too early to gauge with 100% accuracy. People claim the cloud will take over in every regard, yet it's useless without the proper infrastructure and back-end programming.


     


     


    Quote:


    I love the quote "at a price of $5.2m practically anyone can build a supercomputer". Yeah, anyone with $5.2m. In terms of today's machines, 50 little Ivy Bridge cubes would match the performance of that cluster. If you add in GPU computing, it could be as few as 10.



    I don't see it as a primary use but still a significant one. Think how many people deal with video ingesting/transcoding. If Apple can work with Adobe to get AE renders done in a distributed way, that's huge.



    I could see distributed computing in a larger shop if it could be worked out as a way to harvest extra cpu cycles from workstations that are not used at the time or under-utilized. Apple lacks any kind of efficient infrastructure for this though, and they've been moving away from this direction. In the G5 era, Xgrid was supported. There was a reason for the choice in some of these clusters, although I can't remember what it was. That video doesn't really go much into the logic behind their hardware choices.


     


     


     


     


    Quote:


    Yes. Look at what happened with the Mini. It went ages without an update and people just assumed that was it but then BAM, they came out with a totally rethought, redesigned model that in itself is a work of art. This says to me that deep down in their darkest workshops, they are retooling everything to build a next-gen Mac Pro. It might not be built the way I suggest but I would be surprised if it remained largely unchanged.



    I don't necessarily view this the same way. The mini still aligned well with Apple's future goals. In the case of the mac pro, they had plenty of time to redesign for Sandy Bridge E if they wanted to do this. Even when they released Westmere, Sandy Bridge E was already looking pretty far out. At that time Sandy Bridge E was scheduled for the first quarter with E5s late in the third quarter. The opportunity was there. It's more likely that they simply did not allocate any kind of team to such a project, then later decided they didn't want to cancel it. They have pushed out two redesigns this year, so it could happen. I don't agree with your list of priorities though. The historical price points of this line dictate its markets.


     


     


     


    Quote:


    There's no potential for this. Why would they use Sandy Bridge when Ivy Bridge E is a few months away? Q3 is right around WWDC:



    http://www.engadget.com/2012/10/17/intel-roadmap-reveals-10-core-xeon-e5-2600-v2-cpu/



    If you're going to make a new machine, you might as well use the newest parts instead of making people wait 3 years for last year's CPUs.



    You're ignoring what I mentioned before. Apple may not have a starter option within Ivy Bridge E. Look at the mac pro now. The lowest option is Nehalem from 2009. The low end from Westmere would have been the 2.4, and it was still above their price target, and possibly not compatible with a single-package board. The other thing I would mention is that Intel's release dates are not necessarily accurate. Sandy Bridge E5s officially launched in early March. Most OEMs weren't shipping until early July. Typically the supercomputer vendors get first pick in terms of contract purchases. Q3 could easily mean machines shipping in December by Intel's math. Apple likely has access to more information on this matter, which is why I've stated the possibility of a late Sandy Bridge E still exists. I'm not denying that is an incredibly screwed-up release cycle.


     


     


    Quote:


     


    HP doesn't want to be in the market; they were going to sell off their entire desktop business. Both Dell and HP, the biggest companies in this sector, are severely struggling:



    http://www.zdnet.com/dell-hp-and-the-folly-of-the-consumer-pc-business-7000003072/

    http://www.zdnet.com/server-sales-slow-but-dell-shows-growth-hp-ibm-tied-for-no-1-7000003427/

    http://www.wired.com/wiredenterprise/2012/09/29853/



    While they do better in the enterprise, if they cut the consumer market their business solutions will suffer too, because their volumes will shrink to a fraction of what they are now. The server market will go the custom-build route, possibly with ARM, to save big companies millions, and Tesla cards and the like will take over for compute. Dell and HP are dead in the water.



    The workstation market is tiny, and over time it will merge into the AIO market. This is one thing HP actually gets right. I know people are going to rattle off the usual 'always needs' like internal RAID, multiple GPUs and internal expansion, but just give it another few years.



    Custom PCI cards are designed to do things the computer is too slow to do natively, and this is going to change. GPUs will very quickly reach a point where you just won't need multiple GPUs for real-time use, and the rest will be server-side. For IO expansion, everything is going to standardise around USB 3 and Thunderbolt (or some other form of external PCI).


     




     


    What they wanted to ditch was their consumer desktop business. The margins on their Z-series workstations are quite high, and they already built the Z1 as an AIO design, priced pretty aggressively. Well outfitted, it comes out around $5k; I think it's higher if you go with the DreamColor display, which is necessary if you're coming from NEC/Eizo. HP's markups on upgrades have always been extremely high, so if I bought one I'd probably wait for a configuration I like to show up as one of the standardized configurations.


     


    Regarding the server market, it isn't all a trend toward ARM.


     


    http://www.wired.com/wiredenterprise/2011/11/server-world-bermuda-triangle/


    http://www.wired.com/wiredenterprise/2012/09/29853/


     


    Note the two links. The first is regarding Facebook and others directly negotiating server purchases with ODMs. The second relates to companies like Google building their own servers.


     


    If anything, x86 has grown in the server market. I don't expect them to simply ignore ARM, but viewing an ARM takeover as a foregone conclusion, without even examining what ARM gives up in exchange for power efficiency, is illogical. The two designs remain far enough apart that I'm not sure how it will end.

  • Reply 78 of 211
    Marvinmarvin Posts: 15,584moderator
    philboogie wrote:
    But the Cube failed as well, selling a mere 125,000 units IIRC.

    That's what I was getting at: some things deserve a second chance. People wanted the Cube when it came out, but it was hard to justify the price next to the larger workstation, and the cooling method it used, along with the cracks in the plastic case, just didn't sit well. They'd design it properly this time.

    The price will put people off just like the Mac Pro's does, but it would be unique. Right now the Mac Pro is like all the other machines out there, only bigger, heavier and more expensive. It has better cooling, but if that power and more could fit into something you could pick up with one hand, that's more impressive.

    I don't see the downsides of the smaller design. It can take a 10/12-core chip for a lower price. It can hold a good amount of RAM. Storage might be a bit tight, but even with 2 drives + SSD it can handle ~8TB. It can hold a very fast GPU. All that's missing is PCI expansion, but there are external options, and that's probably going to have a minimal effect.

    If more people buy the Pro for the slots than the cores, then sticking with the slots and ignoring TB is the way to go, but I think the Mac Pro's area of emphasis needs to stop being expansion and start being performance per dollar.

    The Mini is about being small and entry-level; the iMac is simple and has a great display bundled; the Mac Pro needs to be a powerhouse for creative tasks. A quad-core 2009 CPU and a 5770 for $2499 is terrible value, and a drop-in upgrade would leave it that way.
    hmm wrote:
    Apple may not have a starter option within Ivy Bridge E.

    It would all be Ivy Bridge, none of this selling old CPUs on the low end. It's about performance per dollar. They would design and price it in a way that lets them use the latest architecture across the whole lineup. The latest 6-cores would be around $500-600. They can absorb the extra $300 partly from the margins but also from the redesign (no optical, smaller PSU, etc.).
    hmm wrote:
    the possibility of a late Sandy Bridge E still exists

    If that's the case, why not release it now? What are they waiting for? Furthermore, why do it a few months after what they did at WWDC? It doesn't add up. It seems like they couldn't decide which direction to take it next. They've probably been going over the same discussions we have about how they actually get TB support in there and whether they need to bother with it.

    It has to be something that stopped them redesigning around Sandy Bridge. They purposely avoided it. It's not as if they had a change of heart because of WWDC; the Mac Pro release coincided with it. For whatever reason, Sandy Bridge just didn't work for them. They might need a better TB controller like Redwood Ridge or Falcon Ridge, or, if they are going with single CPUs, chips that scale to 10/12 cores, which only Ivy Bridge offers.

    I think the big reveal will happen at WWDC 2013. That's the audience for it.
  • Reply 79 of 211
    hmmhmm Posts: 3,405member

    Quote:

    Originally Posted by Marvin View Post





    It would all be Ivy Bridge, none of this selling old CPUs on the low end. It's about performance per dollar. They would design and price it in a way that lets them use the latest architecture across the whole lineup. The latest 6-cores would be around $500-600. They can absorb the extra $300 partly from the margins but also from the redesign (no optical, smaller PSU, etc.).


    We aren't in disagreement about what they could do; it just contradicts their previous behavior, and I'm not sure what their strategy is at the moment. Their current pattern is to inch up pricing and cut costs, which is pretty much a product death spiral. It could be that they foresee the iMac cannibalizing enough sales to phase this line out in another generation or two, but I'm not sure. I don't disagree that they could eat the price increase. Intel cut the price of 6-core CPUs long ago, yet Apple only repriced it at WWDC, to $3000. I'd expect that price point to carry over to Sandy or Ivy.


     


     


    Quote:




    If that's the case, why not release it now? What are they waiting for? Furthermore, why do it a few months after what they did at WWDC? It doesn't add up. It seems like they couldn't decide which direction to take it next. They've probably been going over the same discussions we have about how they actually get TB support in there and whether they need to bother with it.

    It has to be something that stopped them redesigning around Sandy Bridge. They purposely avoided it. It's not as if they had a change of heart because of WWDC; the Mac Pro release coincided with it. For whatever reason, Sandy Bridge just didn't work for them. They might need a better TB controller like Redwood Ridge or Falcon Ridge, or, if they are going with single CPUs, chips that scale to 10/12 cores, which only Ivy Bridge offers.

    I think the big reveal will happen at WWDC 2013. That's the audience for it.



     


     


    We discussed the possibility that it was initially slated for cancellation. If that was the case and they had no one working on it, it would make sense. I thought Ivy Bridge only went to 10 cores? Anyway, I've never seen workstation boards morph like that between generations. The chipsets right now are the same ones they'll have available then, and you get less mileage from them when you skip the first generation, which means they'd have to fabricate a new board for Haswell E5s. I wonder if we'll see those before 2015. Going with a single-package 10-core system would offer some explanation, but I'm still not seeing it. My best guess is low priority plus a change of plans; at this point they can't be making much off the line. You really think an Ivy machine will be ready by WWDC? Even with delayed shipping dates, that seems unlikely to me given the disparity between Intel's "launch dates" and actual shipping dates from OEMs.


     


    I'm still curious when Apple will support OpenCL 1.2. If only I could get Snow Leopard with OpenCL 1.2; it's much leaner than Lion and ML. The one problem is that it can't leverage newer things.
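
    For anyone who wants to check what their install actually reports, here's a small sketch using the third-party pyopencl package (an assumption on my part that it's installed; it isn't part of OS X). The version string each platform and device advertises tells you which OpenCL revision the driver claims to support:

        # Print the OpenCL version each platform/driver reports.
        # Requires the third-party pyopencl package.
        import pyopencl as cl

        for platform in cl.get_platforms():
            # On 10.8-era OS X this would likely read "OpenCL 1.1 ...".
            print(platform.name, "->", platform.version)
            for device in platform.get_devices():
                print("  ", device.name, "->", device.version)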

  • Reply 80 of 211
    wizard69wizard69 Posts: 13,377member
    Marvin wrote: »
    philboogie wrote:
    But the Cube failed as well, selling a mere 125,000 units IIRC.

    That's what I was getting at: some things deserve a second chance. People wanted the Cube when it came out, but it was hard to justify the price next to the larger workstation, and the cooling method it used, along with the cracks in the plastic case, just didn't sit well. They'd design it properly this time.
    The Cube was an interesting, even desirable, machine, but Apple really screwed up the pricing. Badly! In fact, the Cube was priced so poorly that it turned me off Apple for a very long time; I eventually replaced my Mac Plus with a series of Linux machines because of it. It was the proverbial straw that broke the camel's back, so to speak, as I could never see a price justification for that machine, nor for many that came between the Plus and the Cube.

    The Cube wasn't a bad idea, though. Something similar would probably work well today, but only if it was big enough to house the capability of the Mac Pro; notably, the Mini is a far better "Cube" than the Cube ever was. So a modern-day Cube would need to house both high-performance CPUs and GPUs, and that at the very least means a bigger box.
    The price will put people off just like the Mac Pro's does, but it would be unique. Right now the Mac Pro is like all the other machines out there, only bigger, heavier and more expensive. It has better cooling, but if that power and more could fit into something you could pick up with one hand, that's more impressive.
    Price does kill the Mac Pro, even though many don't want to admit it. The problem is that the Mac Pro is really only attractive to people who order it in rather high-performance configurations, which means they are by definition less sensitive to pricing. At the low end, for people looking for really good midrange performance and maybe a decent GPU, the machine is a joke and way overpriced. Thus the terrible sales.
    I don't see the downsides of the smaller design. It can take a 10/12-core chip for a lower price. It can hold a good amount of RAM. Storage might be a bit tight, but even with 2 drives + SSD it can handle ~8TB. It can hold a very fast GPU. All that's missing is PCI expansion, but there are external options, and that's probably going to have a minimal effect.
    I really don't see the problem either. For the most part, that is for the majority of users, the Mac Pro is one big box of dead air. The only thing to argue about is the need for PCI Express slots: such a shrunken machine needs at least a couple, as external connections would never be fast enough nor have the reliability some users want. As for bulk storage, it simply doesn't belong inside a CPU box anymore; there are now multiple ways to interface such hardware while maintaining the required performance.
    If more people buy the Pro for the slots than the cores, then sticking with the slots and ignoring TB is the way to go, but I think the Mac Pro's area of emphasis needs to stop being expansion and start being performance per dollar.
    The two aren't mutually exclusive! You can have slots in a small box and at the same time target performance per dollar. The problem is that certain segments of industry will always need some sort of application accelerator that can only really be leveraged in a high-performance slot.
    The Mini is about being small and entry-level; the iMac is simple and has a great display bundled; the Mac Pro needs to be a powerhouse for creative tasks. A quad-core 2009 CPU and a 5770 for $2499 is terrible value, and a drop-in upgrade would leave it that way.
    Terrible isn't the word for it; it is downright highway robbery. We need to look deeper, though, and try to determine where those high prices come from and what can be done to reduce them. I still say the first order of business should be to rip everything out of the box that can't be justified for a high-performance module. Thus anything SATA-related must go; that is almost a third of the box and motherboard right there.
    hmm wrote:
    Apple may not have a starter option within Ivy Bridge E.

    It would all be Ivy Bridge, none of this selling old CPUs on the low end. It's about performance per dollar. They would design and price it in a way that lets them use the latest architecture across the whole lineup. The latest 6-cores would be around $500-600. They can absorb the extra $300 partly from the margins but also from the redesign (no optical, smaller PSU, etc.).
    Apple does have configuration issues that make the low-end machines very unappealing.

    As to Ivy Bridge, I just don't see it in a new 2013 Mac Pro. Frankly, it really isn't worth waiting for; at the least, it doesn't justify pissing off your loyal customer base. Instead I see something from the Xeon Phi lineup going into the machine: not so much the accelerator chips already released/announced, but rather the main-CPU Phi that has been rumored. In other words, a chipset that allows Apple to implement a dramatically different Mac Pro, and something they might see as justifying good margins.

    To put it another way, they need something that makes people say wow, something that changes the mindset as to the Mac Pro's value. Frankly, if they rolled out yet another Ivy Bridge-based Pro machine in the same mold as the current Pros, I don't see a lot of NEW users rushing to embrace it. New is the key word, as to remain viable the new Pro needs to pull in many more new users.
    hmm wrote:
    the possibility of a late Sandy Bridge E still exists

    If that's the case, why not release it now? What are they waiting for? Furthermore, why do it a few months after what they did at WWDC? It doesn't add up. It seems like they couldn't decide which direction to take it next. They've probably been going over the same discussions we have about how they actually get TB support in there and whether they need to bother with it.
    The same logic more or less applies to an Ivy Bridge-based machine, or it will once a stable of Ivy-based Xeons is out. Things like TB and other technologies are really pushing us toward dramatically different Mac Pro architectures. The question is: what is taking so long? It's hard to answer, but nothing on the Ivy Bridge side really seems to justify the long delay in a new architecture.
    It has to be something that stopped them redesigning around Sandy Bridge. They purposely avoided it. It's not as if they had a change of heart because of WWDC; the Mac Pro release coincided with it. For whatever reason, Sandy Bridge just didn't work for them. They might need a better TB controller like Redwood Ridge or Falcon Ridge, or, if they are going with single CPUs, chips that scale to 10/12 cores, which only Ivy Bridge offers.
    To this I agree, but I really can't see anything compelling in Ivy Bridge Xeons either. Think about it: Apple risked many customers by releasing that "bump" machine a few months ago. Does Ivy Bridge justify that? Nope! At least I can't see anything so compelling in Ivy Bridge that I'd risk my customer base waiting for it. This is why I expect something different; who knows, Apple could be partnering with Intel on a Xeon Phi specific to Apple's needs. All I do know is that they must have something compelling up their sleeves to justify all the foot-dragging and non-updates we have gotten.
    I think the big reveal will happen at WWDC 2013. That's the audience for it.

    I was thinking February. I suppose another half year doesn't mean much when you haven't done a real update in four years, but the customer base is getting itchy. In any event, you would think we would be hearing leaks or rumors rather soon.