Future of Mac Pro


Comments

  • Reply 81 of 212
    frank777 Posts: 5,839 member

    Quote:

    Originally Posted by filmjr View Post


    What are the chances we'll see an announcement of the new Mac Pro at NAB, like they did with FCPX in 2010?


     


    Surely the largest remaining target audience of a true pro machine is filmmakers, videographers, compositors and animators.



     


    True, but as has been pointed out, the likely chips are due in the summer and there's the question of whether optical Thunderbolt is ready yet.


     


    They may need time to iron out supply issues for the Thunderbolt display upgrade.


     


    I still think it will be WWDC, but you never know.

  • Reply 82 of 212
    Marvin Posts: 15,322 moderator
    hmm wrote:
    I don't think cloud computing will completely remove the need for optimization to control costs and cpu time requirements. You still pay for the time there. It provides smaller shops with some amount of scalability, which is very cool. It doesn't actually displace the need for workstation hardware anywhere things must be addressed in real time, especially in terms of gpu hardware. GPUs get stressed quite a bit. It's still common to see low rez proxies used to set up a scene or animate even with the use of powerful gpus.

    There's also the matter of what computers sit in the cloud. Naturally, they'll try to maximize the space with blade servers and multiple GPUs but 'smaller' providers could buy 100 12-core Cubes to get a 1200-core farm for under $0.5m and charge it out by the hour. One here charges $0.7/core/hour:

    http://www.renderrocket.com/pricing/

    That was used for a few films: Ant Bully, Superman, Die Hard 4, Iron Man, Night at the Museum, Spiderman 3.

    If they had it running full load, 24/7, they could make 1200 x $0.7 x 365 x 24 = ~$7m/year minus running costs. If Apple made it easy to configure, even better, e.g. install one software package on a control node and just plug as many machines in and enable "compute sharing" on each and it can use CPU and GPU seamlessly. They can even have an iPad app and cable to configure a node. Take it out of the box, plug into the network, plug an iPad in via USB, turn on the node and set it up. From that point, the control node would deal with the software.

    It can be more affordable to have your own workstation but it depends. If you render at even 5 minutes per HD frame and you aim for a 20 second TV commercial that's 20 seconds x 30 frames x 5 minutes = 50 hours straight - it doesn't leave much room for mistakes or deadlines. You'd ideally use both solutions but you can do proxies and the actual work on any machine.
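
    As a rough sanity check on the arithmetic above, here is a small back-of-envelope sketch using the same assumed figures ($0.70/core/hour, a hypothetical 100-node/1200-core farm, 5 minutes per HD frame). It is only an illustration of the estimate, not a real costing model:

# Back-of-envelope render-farm economics, using the assumed figures above:
# $0.70/core/hour, 100 x 12-core nodes, 5 minutes per HD frame.

CORES = 100 * 12                 # 1200 cores in the imagined farm
RATE = 0.70                      # dollars per core per hour
HOURS_PER_YEAR = 365 * 24

max_revenue = CORES * RATE * HOURS_PER_YEAR        # before running costs
print(f"Full-load revenue: ${max_revenue:,.0f}/year")          # ~ $7.4m

# A 20-second TV spot rendered on a single 12-core workstation.
frames = 20 * 30                 # 20 seconds at 30 fps
minutes_per_frame = 5
hours_on_one_machine = frames * minutes_per_frame / 60
print(f"One workstation: {hours_on_one_machine:.0f} hours straight")  # 50 hours
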
  • Reply 83 of 212
    hmm Posts: 3,405 member

    Quote:

    Originally Posted by Marvin View Post







    It can be more affordable to have your own workstation but it depends. If you render at even 5 minutes per HD frame and you aim for a 20 second TV commercial that's 20 seconds x 30 frames x 5 minutes = 50 hours straight - it doesn't leave much room for mistakes or deadlines. You'd ideally use both solutions but you can do proxies and the actual work on any machine.


    I don't personally buy into the cube theory there, but you know that. I'm not sure how many commercials would have the render budget to book 1200 nodes for 50 hours. That's only a fraction of their total production cost. By the way, it should be 24 fps, and you wouldn't run such a job without tests. There would be a number of checks prior to the final run, at which point they still have the ability to tweak things at the compositing level.

  • Reply 84 of 212
    Marvin Posts: 15,322 moderator
    hmm wrote:
    I don't personally buy into the cube theory

    It's not a theory, just an option for them. It would require a lot more engineering effort but the way I see it is their engineers are probably quite bored these days and could use a challenge. They already have tons of money, why not do something interesting? There's not much of a challenge in dropping an Intel motherboard and some off-the-shelf parts into a big box they've already engineered. Maybe that's how they like it though.

    If they just went the drop-in upgrade route, you'd end up with almost the same machine but with dual 8-core Ivy Bridge (they won't use the 10-core or 12-core options at this price point) for $6200, USB 3 support, PCI 3, SATA 6G, and a Radeon 8970 or whatever Nvidia calls the GTX 680's successor next year. It's tried and tested I suppose but it's a bit lame.
    hmm wrote:
    I'm not sure how many commercials would have the render budget to book 1200 nodes for 50 hours.

    They wouldn't do that most likely. If one 12-core machine can do a frame in 15 minutes, they only need 480-600 frames done depending on what framerate they use. So they'd book the 100 machines (1200 cores) for 1.5 hours, which costs $1260.

    What's the alternative? If you buy 20 machines of your own, you have to spend $80,000-120,000 and it'll take 7.5 hours to do each render.
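
    To make that comparison concrete, here is a quick sketch of the booking maths versus owning machines, again using assumed numbers from this thread (15 minutes per frame on one 12-core node, $0.70/core/hour, a 480-600 frame spot). Treat it as an illustration, not a quote from any provider:

# Cloud booking vs. owning render nodes, with the assumed figures above:
# 15 min/frame on a 12-core machine, $0.70/core/hour, 480-600 frames.

RATE = 0.70                 # dollars per core per hour (assumed)
MIN_PER_FRAME = 15          # per frame on one 12-core node

def cloud_booking(frames, machines=100):
    hours = frames * MIN_PER_FRAME / 60 / machines   # wall-clock hours
    cost = machines * 12 * RATE * hours              # cores x rate x hours
    return hours, cost

def owned_farm(frames, machines=20):
    return frames * MIN_PER_FRAME / 60 / machines    # wall-clock hours

for fps, frames in ((24, 480), (30, 600)):
    hours, cost = cloud_booking(frames)
    print(f"{fps} fps: 100 rented nodes -> {hours:.1f} h for ${cost:,.0f}")
    print(f"{fps} fps: 20 owned nodes   -> {owned_farm(frames):.1f} h per render")
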
    hmm wrote:
    you wouldn't run such a job without tests.

    Sure but tests can be done at lower quality or in shorter chunks for which you could use any suitably powerful machine.
  • Reply 85 of 212
    hmm Posts: 3,405 member

    Quote:

    Originally Posted by Marvin View Post





    It's not a theory, just an option for them. It would require a lot more engineering effort but the way I see it is their engineers are probably quite bored these days and could use a challenge. They already have tons of money, why not do something interesting? There's not much of a challenge in dropping an Intel motherboard and some off-the-shelf parts into a big box they've already engineered. Maybe that's how they like it though.



     


    The bolded part made me laugh. It's extremely funny, as there may be some truth to it. Workstation and server products are often very conservative. They need to show up and work with minimal downtime, and that may make the design somewhat boring for their respective teams. I mentioned how Google is building their own servers. I think Facebook was doing the same thing. The OEMs may not add enough value to the equation for them.


     


    Quote:


     


    If they just went the drop-in upgrade route, you'd end up with almost the same machine but with dual 8-core Ivy Bridge (they won't use the 10-core or 12-core options at this price point) for $6200, USB 3 support, PCI 3, SATA 6G, Radeon 8970 or whatever Nvidia calls their GTX 680 next year. It's tried and tested I suppose but it's a bit lame.

    They wouldn't do that most likely. If one 12-core machine can do a frame in 15 minutes, they only need 480-600 frames done depending on what framerate they use. So they'd book the 100-machines (1200 cores) for 1.5 hours, which costs $1260.




    I was under the impression we were still talking about server hardware, so I went with that. I noticed you quoted a precise price for a mac pro with an NVidia gaming card. In terms of workstations, that is likely. I think I've mentioned this, but the ideal machine for me would be either a quad i7 or more likely the Sandy E5 version of the W3680 with a mid-range Quadro card and maxed-out ram. I just max ram because it's so cheap now that I don't bother splitting hairs trying to figure out the optimal amount. 16GB would be slightly low in a new machine as it's kind of low for me today. Might as well go to 32. This has little to do with Apple's lineup. I'm just saying what I'd pick if I was building one for my own use from the ground up.

    I've been following cloud solutions as well. They have the potential to provide a lot of extra leverage to freelancers, and if rendering time is cheaper than trying to find workarounds to brute-force GI methods or increased sampling, it can make sense. This wouldn't be the case if you're working with a fixed amount of render power. I've worked on a pretty wide range of machines. I can tell you that there's still a good reason to have as much gpu power as possible. It doesn't mean everyone can justify $2000+ on a gpu, but many of them would likely derive some amount of benefit from it. This is one of those things that is still quite unique to the desktop. I don't expect it to remain that way. I'm just saying it is what it is for now.


     


     


    Quote:


    What's the alternative? If you buy 20 machines of your own, you have to spend $80,000-120,000 and it'll take 7.5 hours to do each render.

    Sure but tests can be done at lower quality or in shorter chunks for which you could use any suitably powerful machine.



     


    Yeah I wasn't suggesting that. Individuals and smaller companies generally can't afford that. They have to know it will pay for itself and have cash or financing lined up. The concepts of cloud computing and slim clients aren't really new at all. It's just that they're being leveraged for things that weren't practical in the past. I would suggest that 15 minutes per frame on a 12-core machine would be pretty damn fast. I know that's just arguing details, but it's not that atypical to render a little larger than the final resolution to allow for cropping and tweaks in compositing. It's also a common way of dealing with noise/anti-aliasing, as heavy use of something like Mitchell-Netravali filtering can be way too time consuming to resolve without flicker. If it's a print job, it's less of an issue, but that render could be running for many hours.

    This is a fun discussion. I don't see these solutions as much in the way of competition for the mac pro outside of freelancers and small shops, but I don't know what the sales distribution is like on those machines. It's hard for me to offer even a reasonably good analogy when I have no idea who will order the most mac pros going forward. My earlier comments on HP were more about how I think workstation-class machines are a much bigger deal to them, in spite of low volume.


     


    I got a little off track there. If you note one of my prior links, scalable render power would mean that a greater range of shops could handle projects like that involving instanced hero objects and huge amounts of geometry. You need proxies either way, but just having the ability to use geometry rather than mapped textures can help alleviate that CG look that I can often pick out at a glance.

  • Reply 86 of 212
    wizard69 Posts: 13,377 member
    Marvin & hmm, it is interesting that these discussions always center around rendering and other video processing uses of the Mac Pro. Many of us don't care and frankly that is not what defines a Pro computer for us. As a workstation it is easy to end up too focused on one field and end up with a very low volume machine. Low volume in the context of Apple anyways.

    To prop up volume, which I see as a serious Mac Pro issue, Apple needs to consider a wider array of users. Video processing may still be important to the Mac Pro market, but moving into the future I don't see it being well supported on the massive machines we have today. This is why I see a much smaller Mac Pro in our future. If you need a machine to build a render farm on, it makes sense to me to make each node compact, high performance and cost effective. Such a node is also highly salable to those of us who don't need the big box but do want big processor performance.

    Thus I wouldn't be surprised to find the new Mac Pro looking something like Marvin's drawing/rendering. Most likely it would be a bit bigger, but the general idea flies with me. Done right, such a box could end up being sold in groups of two fairly close to the price of a single dual-processor Mac Pro of today's design. I say fairly close because I'm still expecting each node to have a discrete GPU, thus increasing cost a bit. That effectively means each node would have to sell in the $1500 to $1800 range. That might be tough to meet depending upon the hardware selected, but I see a need for that price range if a shrunken and clusterable Mac Pro were ever to be successful. In other words such an arrangement can't end up being significantly more expensive than the traditional big box approach.

    The goal of a redesigned Mac has to be solidifying and increasing sales. The current approach is frankly a dead end and does more to shrink sales than to drive them. The current machines are just too expensive to be easily justified.
  • Reply 87 of 212
    hmm Posts: 3,405 member

    Quote:

    Originally Posted by wizard69 View Post



    Marvin & hmm, it is interesting that these discussions always center around rendering and other video processing uses of the Mac Pro. Many of us don't care and frankly that is not what defines a Pro computer for us. As a workstation it is easy to end up too focused on one field and end up with a very low volume machine. Low volume in the context of Apple anyways.

     


    They're known Mac markets, and it's easy for me to comment on what I know. It seems like the imac is what they hope will really stick at the moment. To be a healthy line, they would need to attract more people to OSX or attract more frequent purchases. We're dependent on intel to a lesser degree there, as we don't know how long it will be before they're back on a predictable cycle. Marvin suggested they should skip Ivy. I don't see that happening at all, especially as that would leave their EX/E7 processors on Westmere. There is no Sandy EX just like there was no Nehalem EX. I don't think they'd skip a tick cycle in server hardware.

  • Reply 88 of 212
    Marvin Posts: 15,322 moderator
    hmm wrote: »
    I would suggest that 15 minutes per frame on a 12 core machine would be pretty damn fast.

    It's going to vary a lot depending on what's in the shot of course but that was just a rough time to show it can be far more cost-effective on the highest-end jobs to move to the cloud. There are examples that show similar times:

    http://www.chaosgroup.com/en/2/envyspot.html?i=15
    "Although we got a powerful render farm here at Taylor James we aim to keep our render times as low as possible. With the flexibility of V-Ray you can easily tweak your render times. Normally we aim for 30 min per frame for an HD frame on a 16-core machine."

    http://www.stashmedia.tv/?p=12326
    10-15 minutes per frame


    5-35 minutes per frame, done on a single workstation
    hmm wrote: »
    I don't see these solutions as much in the way of competition for the mac pro outside of freelancers and small shops

    http://www.technologyreview.com/news/425698/hollywoods-cloud/
    "One such firm is Afterglow Studios, based in Minneapolis. Its owner, Luke Ployhar, is currently finishing Space Junk 3D, a 40-minute stereoscopic film about the 6,000 tons of garbage circling the planet. It’s a big project for a small firm, which has required more than 16,000 hours of computing time to animate, or render, the scenes of orbiting debris. Ployhar estimates that if he’d bought computers to do the job, he would have spent at least $50,000 on equipment. It wouldn’t have been economical for me to buy all these machines.

    Last year, about 5 percent of DreamWorks Animation’s rendering was done in the cloud, but the company plans to increase that to 50 percent by the end of 2012, Derek Chan, head of digital operations at DreamWorks Animation says, rather than spend many millions of dollars to expand its existing data center."

    http://us.gmocloud.com/oldblog/blog/2012/12/19/cloud-based-rendering-the-logical-next-step-for-render-farms/

    Massive companies have their own farms or pay for the cloud. Freelancers and smaller shops can use a cloud solution too. There's a space for the large personal workstation somewhere but it's getting smaller.
    wizard69 wrote:
    it is interesting that these discussions always center around rendering and other video processing uses of the Mac Pro. Many of us don't care and frankly that is not what defines a Pro computer for us. As a workstation it is easy to end up to focused on one field and end up with a very low volume machine.

    It's just one of those fields where there's a constant demand for more power so it fits quite well with the idea of needing powerful local machines to exist. There are other fields like music production, CAD, medical/scientific fields that use computation and so on. There are areas where you might run many small compute processes over and over and a personal workstation would work out more cost-effective or just be more convenient to use. That's where a better performance-per-dollar and smaller machine would be beneficial. They could of course get by with an iMac or MBP but a 6-core+ and a high-end desktop GPU can offer a decent speed boost for a little extra money.

    I don't think $2500 is a bad starting price but it should offer a 6-core so there's an immediate reason to buy one over an iMac. With a single CPU, it tops out at $4000-4500 instead of $6200 and if you need more power, you get another box where you get a second CPU and GPU with more RAM and storage. A spare 128Gbps half-length PCI 3 slot would let you do everything else.
  • Reply 89 of 212
    hmm Posts: 3,405 member

    Quote:

    Originally Posted by Marvin View Post





    It's going to vary a lot depending on what's in the shot of course but that was just a rough time to show it can be far more cost-effective on the highest -end jobs to move to the cloud. There are examples that show similar times:



    http://www.chaosgroup.com/en/2/envyspot.html?i=15

    "Although we got a powerful render farm here at Taylor James we aim to keep our render times as low as possible. With the flexibility of V-Ray you can easily tweak your render times. Normally we aim for 30 min per frame for an HD frame on a 16-core machine."



    http://www.stashmedia.tv/?p=12326

    10-15 minutes per frame





    5-35 minutes per frame, done on a single workstation

    http://www.technologyreview.com/news/425698/hollywoods-cloud/

    "One such firm is Afterglow Studios, based in Minneapolis. Its owner, Luke Ployhar, is currently finishing Space Junk 3D, a 40-minute stereoscopic film about the 6,000 tons of garbage circling the planet. It’s a big project for a small firm, which has required more than 16,000 hours of computing time to animate, or render, the scenes of orbiting debris. Ployhar estimates that if he’d bought computers to do the job, he would have spent at least $50,000 on equipment. It wouldn’t have been economical for me to buy all these machines.



    Last year, about 5 percent of DreamWorks Animation’s rendering was done in the cloud, but the company plans to increase that to 50 percent by the end of 2012, Derek Chan, head of digital operations at DreamWorks Animation says, rather than spend many millions of dollars to expand its existing data center."



    http://us.gmocloud.com/oldblog/blog/2012/12/19/cloud-based-rendering-the-logical-next-step-for-render-farms/



     


    This is why I enjoy these discussions. I have never seen anyone who could come up with such cool links. I wouldn't have found the Ben-Collier Marsh video if you hadn't linked it. That's an amazing piece. The shaders and textures appear to have been kept somewhat simple. This doesn't mean they didn't optimize them. I'm saying it doesn't look like they used a massive shader stack or heavy texture mapping. It appears that they worked out the reflective behavior at a shader level but relied solely on geometry and lighting to produce the right highlights.

    It explains the comment about it requiring a mind-blowing amount of geo, as creating that many interlocking parts along with bevels and thickness, each carefully labeled and ensuring a minimum of tangency across all of these sweeping surfaces, is a lot of work. If you aren't careful, you end up with a lot of topology problems. It's not so much that it's difficult to build some of those individual pieces. Cogs are extremely easy to build, although it probably took some studying to ensure that they'd work together well mechanically. The difficult portions would be setting it up so that everything is in appropriate scale and rigged for animation in a way that will transfer and evaluate correctly, working out some of those self-illuminated shaders and lighting for highly reflective dark objects, the concepting where someone had to design such a machine even if it wouldn't be physically constructed, and the camera movements. I'm not as big on that kind of camera movement. Some of them wouldn't be possible with a physical model without deconstructing portions of it for different shots, unless I missed something. Anyway, I want to know how you find some of this stuff. That is just way too cool.

    Back somewhat on topic, 10-15 minutes per frame is likely because in spite of the huge polygon count, the shader and texture design is extremely minimal. It may be well optimized, but they didn't use some of the things that really cause render time to skyrocket. I don't see a lot of heavy reflection blur. They seem to have pulled it off through the reflectivity model implemented in their shader. Not having to worry about glossy sampling helps quite a bit. The materials are very smooth. They didn't attempt to break that up with mapped textures in most places that I can see. It looks like they only beveled things where it was absolutely necessary to minimize extra edges. That is smart.


     


    I'm just saying that time likely came from being extremely efficient in setup. If they wanted the textures to look more like real objects through some amount of wear, those times could have multiplied. Personally I like them the way they are. If they were film props that went alongside actors, they'd likely have some amount of applied wear so as not to stand out. In spite of the simple shader claim in terms of calculation time, I can't imagine how much time it must have taken testing and fine-tuning them. There must have been some post work involved to get the reflections that perfect. Speaking of that, there's one place where a renderer can bite you. Some of them are far more approximate than others on sampling indirect reflections. Therefore if you need to output raw reflection + reflection filter passes, it can bite you in the ass trying to maintain a clean look. Okay, I've gone on with nerd talk enough for one day.


     


    Quote:


    Massive companies have their own farms or pay for the cloud. Freelancers and smaller shops can use a cloud solution too. There's a space somewhere for the large personal workstation somewhere but it's getting smaller.



    My impression was that they're not using cloud cycles for things involving setup. I was trying to maintain somewhat of a distinction between what is viable as a cloud service and what would remain on local machine time for now. I've generally been of the opinion that local machines are for where real time or near real time feedback is required, where they double as render farms for freelance individuals.


     


    Quote:


    I don't think $2500 is a bad starting price but it should offer a 6-core so there's an immediate reason to buy one over an iMac. With a single CPU, it tops out at $4000-4500 instead of $6200 and if you need more power, you get another box where you get a second CPU and GPU with more RAM and storage. A spare 128Gbps half-length PCI 3 slot would let you do everything else.



    I've said that for years. If you look at the 1,1 through 3,1, the base option was a significant step up from the imac compared to what it is today. It's not the only reason to buy such a machine. It offers greater flexibility. I don't think we're about to hit a slim client only thing. There are enough markets left for performance machines. It's likely that you'll see a certain amount of consolidation, but I don't see them disappearing within the next few years. This means very little in terms of Apple's lineup. As I've said, they tend to chase high growth markets due to their size as a company.

  • Reply 90 of 212
    Marvin Posts: 15,322 moderator
    hmm wrote:
    10-15 minutes per frame is likely because in spite of the huge polygon count, the shader and texture design is extremely minimal.

    The specific time isn't all that important because obviously the longer it is, the less feasible it becomes to do on a single workstation anyway. One of those examples took 35 days to render. Double the time and you get 70 days. Do you want to have your workstation sitting practically unusable for over 2 months as it's maxed out, just to find out there's an artifact at a higher resolution or a setting that's been done wrong?



    Like I say though, there have to be computers somewhere, so if Apple wants them to be Macs, they need to have a model that works for the usage scenarios. I think the Mac Pro is currently too big and expensive for both remote/parallel use and personal use.

    For personal use, it should be good value (good performance per dollar) and convenient - I'm sure a few people have lifted a 40lb workstation and it's not that convenient. For remote/parallel use, it should also be good performance per dollar but also efficient space-wise and power-wise, as well as in terms of software configuration. A smaller dual-processor workstation could be best, it's up to Apple's engineers to find the right compromise.
    My impression was that they're not using cloud cycles for things involving setup. I was trying to maintain somewhat of a distinction between what is viable as a cloud service and what would remain on local machine time for now. I've generally been of the opinion that local machines are for where real time or near real time feedback is required, where they double as render farms for freelance individuals.

    That's right but the real-time stuff can be done on almost any machine now. Once you get real-time feedback, you don't need better than that because we live in real-time.
    I don't think we're about to hit a slim client only thing. There are enough markets left for performance machines.

    Again though you're creating an artificially large gap between the lower-end and higher-end models. A MBP and iMac are hardly thin clients.

    A 27" iMac with a Fusion drive, 32GB RAM and 2GB GTX 680M is a workstation. A 15" rMBP with 256GB SSD and a 1GB 650M is a workstation.

    In your mind, you have an association between the word workstation and the tower form factor but the definition has expanded over the years. There's a resistance to this just now because it's still in the early phases. We only got quad-core laptops and iMacs around 2010/2011.
  • Reply 91 of 212
    hmm Posts: 3,405 member

    Quote:

    Originally Posted by Marvin View Post





    Again though you're creating an artificially large gap between the lower-end and higher-end models. A MBP and iMac are hardly thin clients.



    A 27" iMac with a Fusion drive, 32GB RAM and 2GB GTX 680M is a workstation. A 15" rMBP with 256GB SSD and a 1GB 650M is a workstation.



    In your mind, you have an association between the word workstation and the tower form factor but the definition has expanded over the years. There's a resistance to this just now because it's still in the early phases. We only got quad-core laptops and iMacs around 2010/2011.


    You misinterpret some of my words. By the way, the reason for those bloopers is that you don't animate on the real thing. It just means the rig deformations didn't transfer as expected, but you could bake those out prior to ever rendering footage. It's possible those were test renders at lower-than-production quality settings.

    I'm not hung up on definitions. I've worked on my notebook plenty of times. It does choke on far less than a desktop. The biggest choke point is the gpu. It's application dependent, but on anything remotely heavy, I'd be stuck in wireframe or bounding box views. The slim client comment was based on the idea of pushing everything out to the cloud. I said I don't think we're there yet. I don't think Apple's options beneath the mac pro are ideal for this. They're workable, but it's unlikely that this is a primary focus for Apple. It's just that if you're going to use Macs, you pick out of what is available. This doesn't mean I'll never use an imac. It just means probably not today. I kind of wonder if by the time I'm interested, they'll be the next item on the chopping block.

  • Reply 92 of 212
    Marvin Posts: 15,322 moderator
    hmm wrote:
    I've worked on my notebook plenty of times. It does choke on far less than a desktop.

    The biggest choke point is the gpu.

    You'd have to be specific about which laptop and which desktop though. Obviously a laptop with an HD3000 isn't going to do real-time graphics like a desktop with a GTX 680 or Quadro. However an iMac with the mobile GTX 680M would and the 650M in the MBP would probably hold up ok too.
    hmm wrote:
    The slim client comment was based on the idea of pushing everything out to the cloud.

    You'd only push the most intensive things that are not very feasible on any machine. The remaining tasks should be doable on any decent performance computer.
    hmm wrote:
    I kind of wonder if by the time I'm interested, they'll be the next item on the chopping block.

    It depends on how the market goes I guess. Without a Mini, there would be no OS X Server. Without the iMac, the prices would probably go up on the 27" Cinema displays. It could happen eventually but the iMac is still a strong seller.

    How does the current iMac compare performance-wise to what you use now?
  • Reply 93 of 212
    hmm Posts: 3,405 member

    Quote:

    Originally Posted by Marvin View Post







    How does the current iMac compare performance-wise to what you use now?


    Bad comparison as I'm quite constrained right now. You know how I mentioned the power available determining the need for workarounds? I am very familiar with many of those. It's too laggy viewing a few million OpenGL polygons in textured mode, so I do rely considerably on low res versions for positioning, wireframe, etc. I've been debating how to resolve that. There are workloads that go well beyond my own, including some of your links. My point regarding gpus like the 650m was that sometimes desktop equivalents can be several times the speed within a given application without the costs going into thousands. For certain use cases memory is also a precious resource. We don't have heterogeneous  computing at this point, so it's still a factor. I suspect 1GB of vram is part of the reason both of my machines regularly choke in the aforementioned use cases. As I said it's possible to work around that, but who wants to if a decent gpu is cost effective?


     


     


    Quote:


    It depends on how the market goes I guess. Without a Mini, there would be no OS X Server. Without the iMac, the prices would probably go up on the 27" Cinema displays. It could happen eventually but the iMac is still a strong seller.



    My point was that if the very generic version of something like a display hits the point where I no longer see a difference, that market may no longer interest Apple in terms of growth. I wasn't being completely serious though, and it implied an extremely ambiguous timeline. I've worked on imacs before. I've used my notebook to accomplish work before. I'm not commenting on anything I haven't tried.

  • Reply 94 of 212
    rbr Posts: 631 member

    Quote:

    Originally Posted by Marvin View Post

    It depends on how the market goes I guess. Without a Mini, there would be no OS X Server. Without the iMac, the prices would probably go up on the 27" Cinema displays. It could happen eventually but the iMac is still a strong seller.



     


    Marvin,


     


    While it is true that the market determines everything in the end, I don't see the iMac (despite the limitations that make a lot of us uncomfortable) going away any time soon.


     


    Apple is right about one thing. There are a lot of people who not only don't ever personally go into a computer, but don't have it upgraded by someone else. I am occasionally reminded that there are a lot of people who don't even really know just which computer (Mac or PC) they have, hard as that is for us to imagine. Their response is usually something or other to the effect that it's a (fill in the blank with the name brand) if they even remember that much. At least with Macs they will usually answer it's an Apple or a Mac and sometimes even the model (iMac or MacBook Pro), but seldom anything detailed about the hardware.


     


    This is what we are up against when trying to convince Apple that they need to pay more attention to us. (Sigh)


     


    Cheers

  • Reply 95 of 212
    Marvin Posts: 15,322 moderator
    hmm wrote:
    Bad comparison as I'm quite constrained right now.

    Would you call what you are using now a workstation?
    hmm wrote:
    It's too laggy viewing a few million OpenGL polygons in textured mode, so I do rely considerably on low res versions for positioning, wireframe, etc. I've been debating how to resolve that.

    It just needs better software:

    http://www.centileo.com/news.html

    "This demo proves that it is possible to explore huge scenes with lot's of textures on a consumer computer with a single GPU thanks to the efficient virtual memory manager.
    New York downtown scene represented by 25 million polygons and 8 gigabyte of textures (more than 1000 high-resolution textures) and Boeing 777 scene represented by 360 million polygons (or 250 million polygons in some shots).
    Laptop specs include: 200 GB SSD storage, 16 GB RAM, NVIDIA GeForce GTX 485M graphics card with 2 GB of memory (pricing for this laptop was 2000$ in 2011)."

    John Carmack was going on about memory issues for ages:

    http://www.pcper.com/reviews/Editorial/John-Carmack-Interview-GPU-Race-Intel-Graphics-Ray-Tracing-Voxels-and-more/Intervi

    "Those things are good and useful, but what I most want to see is direct surfacing of the memory. It’s all memory there at some point, and the worst thing that kills Rage on the PC is texture updates. Where on the consoles we just say “we are going to update this one pixel here,” we just store it there as a pointer. On the PC it has to go through the massive texture update routine, and it takes tens of thousands of times [longer] if you just want to update one little piece.

    You start to amortize that overhead when you start to update larger blocks of textures, and AMD actually went and implemented a multi-texture update specifically for id Tech 5 so you can batch up and eliminate some of the overhead by saying “I need to update these 50 small things here,” but still it’s very inefficient. So I’m hoping that as we look forward, especially with Intel integrated graphics [where] it is the main memory, there is no reason we shouldn't be looking at that. With AMD and NVIDIA there's still issues of different memory banking arrangements and complicated things that they hide in their drivers, but we are moving towards integrated memory on a lot of things."

    http://www.pcper.com/reviews/Editorial/John-Carmack-Interview-GPU-Race-Intel-Graphics-Ray-Tracing-Voxels-and-more

    " Intel’s integrated graphics actually has impressed Carmack quite a bit and the shared memory address space could potentially fix much of this issue. AMD’s Fusion architecture, seen in the Llano APU and upcoming Trinity design, would also fit into the same mold here. He calls it “almost a forgone conclusion” that eventually this type of architecture is going to be the dominant force."

    Something like the GTX 680M would still hold up pretty well with 2GB of memory but shared memory is needed for both desktop and mobile cards.
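
    The "virtual memory manager" idea in the CentiLeo demo above is basically out-of-core paging: keep a bounded working set of texture/geometry tiles resident and evict the least recently used ones when the budget is exceeded. Here's a toy, renderer-agnostic sketch of that policy; the class and numbers are made up for illustration and it isn't how CentiLeo (or any real renderer) is actually implemented:

# Toy out-of-core tile cache: a fixed memory budget with least-recently-used
# eviction standing in for limited GPU memory. Conceptual illustration only.

from collections import OrderedDict

class TileCache:
    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.used = 0
        self.tiles = OrderedDict()           # tile_id -> size in bytes

    def request(self, tile_id, size, load_fn):
        """Return a resident tile, loading it (and evicting others) as needed."""
        if tile_id in self.tiles:
            self.tiles.move_to_end(tile_id)  # mark as most recently used
            return tile_id
        while self.tiles and self.used + size > self.budget:
            old_id, old_size = self.tiles.popitem(last=False)   # evict LRU tile
            self.used -= old_size
        load_fn(tile_id)                     # e.g. upload from SSD/RAM to GPU
        self.tiles[tile_id] = size
        self.used += size
        return tile_id

# Usage: a 2 GB "GPU" budget streaming 64 MB tiles on demand.
cache = TileCache(budget_bytes=2 * 1024**3)
for tile in (0, 1, 2, 1, 0, 3):
    cache.request(tile, size=64 * 1024**2, load_fn=lambda t: None)
print(f"{len(cache.tiles)} tiles resident, {cache.used / 1024**2:.0f} MB used")
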
    rbr wrote:
    This is what we are up against when trying to convince Apple that they need to pay more attention to us.

    I think they are paying attention though. They didn't need to put IPS, high-res displays into the laptop line and eliminate most of the reflections. They didn't need to put in SSDs or quad-core i7 CPUs. If they were just targeting people classed as 'consumers' they'd have no reason to do that. They have no reason to put a 2GB GTX 680M and the fastest desktop i7 into the iMac - they could do what most other AIO manufacturers do. They have no reason to bother with an external PCI standard like Thunderbolt. Consumers could happily live without all of these things and have lower prices.
  • Reply 96 of 212
    wizard69 Posts: 13,377 member
    rbr wrote: »
    Marvin,

    While it is true that the market determines everything in the end, I don't see the iMac (despite the limitations that make a lot of us uncomfortable) going away any time soon.
    For the last couple of years the iMac was the only desktop Mac with an upside in sales in the USA. From my perspective that is directly related to the bad values represented by the Pro and to a lesser extent the Mini.
    Apple is right about one thing. There are a lot of people who not only don't ever personally go into a computer, but don't have it upgraded by someone else. I am occasionally reminded that there are a lot of people who don't even really know just which computer (Mac or PC) they have, hard as that is for us to imagine. Their response is usually something or other to the effect that it's a (fill in the blank with the name brand) if they even remember that much. At least with Macs they will usually answer its an Apple or a Mac and sometimes even the model (iMac or Macbook Pro), but seldom anything detailed about the hardware.
    This is all true and frankly I'm happy that Apple has been able to leverage that market.
    This is what we are up against when trying to convince Apple that they need to pay more attention to us. (Sigh)
    Even amongst pro users, the portion that cares deeply about these sorts of things is vanishingly small. Therein lies the problem: Apple does have a good and broad Pro user base, it's just that the majority of them don't push hardware as hard as you and the other guys posting in this forum.
    Cheers

    Sadly I really don't know what the solution is here. It would seem to me to be extremely simple to build a chassis around a couple of motherboards that could effectively support all of their Pro and not-so-Pro users on one platform. For example, one Haswell-based board for joe average pro and a Xeon- or Phi-based board for joe exceptional pro user. The idea is to get more of your customers to buy the platform to shore up sales. Maintain a commonality of parts across the platform to reduce costs and maintain production flexibility. What should be obvious to everybody is that Apple's desktop lineup is a failure at this point. I don't buy into the idea that it has to do with current market realities; it has more to do with Apple's customer base getting frustrated and basically saying to hell with Apple's desktop machines.
  • Reply 97 of 212

    "A 27" iMac with a Fusion drive, 32GB RAM and 2GB GTX 680M is a workstation. A 15" rMBP with 256GB SSD and a 1GB 650M is a workstation.

    In your mind, you have an association between the word workstation and the tower form factor but the definition has expanded over the years. There's a resistance to this just now because it's still in the early phases. We only got quad-core laptops and iMacs around 2010/2011."

    Well, Marv' and Wizard.  My New 27 inch iMac FINALLY arrived.  (Yes, with external DVD player...)

    Fusion.

    8 gigs of Ram.

    3.4 gig Ivy.

    680 MX.

    The laminated screen really POPS(!).

    It is a work of art.  A thing of sheer beauty.  The ultimate Mac.  (for me...)

    No wires from K/B to monitor.  Or from the Mouse.

    It's breathtaking in person.  I took loads of photos as I unwrapped it...it's...PHWOARRR!

    All I need now is a copy of Photoshop.

    And yes.  It's quick.  It's bloody quick to boot.  11 secs.  Web pages barely a pause before loading.

    It's not retina...sure.  But the 680 MX will throw this 27 inch screen around a lot better than a retina would perhaps.

    By the time a gpu can?  The retina iMac will have arrived or the iMac will have been discontinued...

    Wizard, shame you can't get past the fact that you really don't need to get inside.  (Yeah, I'd have liked to put my own SSD in there and yes, Apple hosed me with a £100 price hike plus £200 for Fusion as opposed to just giving me the option for an internal 256 SSD.)  I'll buy a couple of sticks of 8 gigs in due course to take my ram up to 24 gigs.

    Workstation?

    This is.  Compared to the Power Mac Pro tower I had in 1997, this is more of a workstation than that ever was.  Heaps of ram, loads of cpu speed, 10th fastest gpu?

    On a glass table with chrome legs...it's stunning.

    Unpacking it was 'foreplay...'

    Lemon Bon Bon.

  • Reply 98 of 212


    To me, it's a giant iPad in some ways.  The iMac.  Its simple convergence of Technology and Arts is serenely beyond mere PCs.

    Just a shame I can't swivel the monitor to go vertical...

    Lemon Bon Bon.

  • Reply 99 of 212
    wizard69 Posts: 13,377 member
    As far as getting inside the machine, it wouldn't be an issue if Apple would configure the machines in the way I want. There is no doubt the iMac "looks" nice but that means little if your options are severely constrained when it comes to things like SSDs.
    To me, it's a giant iPad in some ways.  The iMac.  Its simple convergence of Technology and Arts is serenely beyond mere PCs.

    Just a shame I can't swivel the monitor to go vertical...

    Lemon Bon Bon.
  • Reply 100 of 212
    macronin Posts: 1,174 member

    Quote:

    Originally Posted by Junkyard Dawg View Post


    …Corvette…



     


    Let's hope that the next Mac Pro makes a leap in style & performance as the latest Corvette does from the previous model…!


     


     



     


    P.S. - Anyone got a spare US$60,000.00 sitting around…?!? I got a V8-fueled Fiberglass Fever…!
