So what is coming in September to cause Apple to warn investors?


Comments

  • Reply 21 of 56
    wizard69wizard69 Posts: 13,377member
    Quote:
    Originally Posted by Marvin View Post


    The only issue with making it for multiple purposes is that it limits future form factors or people have to buy all new sleds when a new one comes out. It would be nice to have a lot of adaptors like card payment sleds and things but I think it would create a lot of headaches unless they decided the iPod Touch form factor was as thin as they wanted to go.



Maybe Apple needs to look at expanding the Touch line. A commercial model with such expansion would be very interesting.

    Quote:





It could do; the statement was quite vague:



    "As we announced at WWDC, we have a lot going on in the fall with the introduction of iOS 5 and iCloud. We also have a future product transition that we're not going to talk about today, and these things will impact our September quarter," Oppenheimer said.



The statement was made in response to a question about "soft" projections for the 4th quarter, though, and I guess iOS 5 and iCloud are not really going to be profit-generating as they are free, so this would suggest a negative impact. A new iPhone should make short work of that though.







Sure, but how many uses are there, and does Apple need to cater for them? The Mac Pro segment is already close to dead, comprising much less than 10% of total Mac shipments. People won't give up easily on OS X, so transitioning to Thunderbolt-only for expansion helps push manufacturers into supporting those products so all models can use them.



The problem is, the lack of a low-cost, slot-equipped Mac takes the platform out of the running for many uses. Apple will only push a small segment of the potential market in that direction.



As an aside, there does seem to be interest in Thunderbolt in the instrumentation community. The trouble is this market can easily ignore Apple and the cost of TB.

    Quote:

    A simple series of questions can reach this conclusion:



    Is the future mobile? Yes.



Well, if all you have is a hammer, I guess all problems look like nails.



Seriously, if your non-mobile needs don't fit Apple's lineup, you are screwed.



    Quote:

Can you put a PCI slot on a mobile device? No.



I'm not sure what this has to do with the discussion. The obvious response is that of course you can't fit a PCI card from a PC in a mobile device. That does not, however, preclude the use of PCI-Express as a bus standard in a mobile device.



For example, a good argument could be made for using PCI-Express for those sleds you talked about.

    Quote:

    Can you put a Thunderbolt port on a mobile device? Yes.



    Which first requires a PCI-Express facility.
  • Reply 22 of 56
Drop the MacBook Pro? What? So what laptops would they have left to sell? I'm confused.
  • Reply 23 of 56
    wizard69wizard69 Posts: 13,377member
    Quote:
    Originally Posted by M3rc Nate View Post


Drop the MacBook Pro? What? So what laptops would they have left to sell? I'm confused.



    The conversation revolved around the very low sales of the Mac Pro.
  • Reply 24 of 56
    Quote:
    Originally Posted by wizard69 View Post


    The conversation revolved around the very low sales of the Mac Pro.



Oh OK, the Mac Pro... makes sense. Sorry.
  • Reply 25 of 56
    Quote:
    Originally Posted by Marvin View Post


The Mac Pro segment is already close to dead, comprising much less than 10% of total Mac shipments.



The Mac Pro segment includes graphic design, motion graphics, industrial design, film editing, post-production, music/sound engineering, print (yes, it's still around), publishing, academic research/scientific visualization and many more. Due to high demand in the past few years, Autodesk just recently re-introduced AutoCAD for Mac, so architecture is a huge new growth area. Not to mention Apple's own overhaul of Final Cut. I run Maya on OS X, as do many other 3D modelers/animators. I'm rendering as I write this.



    WE NEED POWERFUL WORKSTATIONS! We need RAID and fibre channel and huge amounts of RAM for visualization and 2K textures. We need powerful graphics cards! And we need all this in an enclosure that allows us to swap out these components as needed.



I just can't believe that the heavy-duty engineering/design work in Cupertino is being done on iMacs and Minis. It's probably safe to say that Jony Ive and his crew are running powerful industrial design software on fully upgraded Mac Pro workstations.



    If we do only comprise 10% of Mac sales, I hope they find a more creative solution than to drop us completely.
  • Reply 26 of 56
    tallest skiltallest skil Posts: 43,388member
    Quote:
    Originally Posted by 2GIGS1CUP View Post


    We need RAID and fibre channel and huge amounts of RAM for visualization and 2K textures. We need powerful graphics cards! And we need all this in an enclosure that allows us to swap out these components as needed.



    You need something with one dual-height PCIe slot for swappable graphics and something with at least eight RAM slots for RAM.



    Everything else can be taken care of with Thunderbolt.
  • Reply 27 of 56
    MarvinMarvin Posts: 15,310moderator
    Quote:
    Originally Posted by 2GIGS1CUP View Post


    WE NEED POWERFUL WORKSTATIONS! We need RAID and fibre channel and huge amounts of RAM for visualization and 2K textures. We need powerful graphics cards! And we need all this in an enclosure that allows us to swap out these components as needed.



    Initially, I'd expect a redesign that maintains the powerful dual-processor Xeon setup but this doesn't need a large enclosure. The Boxx rendering machines are much smaller than a workstation with just the Xeon chips:



    http://www.boxxtech.com/products/ren...o_overview.asp



This still allows a large amount of RAM. As for RAID storage and fibre channel, Thunderbolt takes care of both, but it is more compact to have drives internally, and they don't add a great deal of space.



    As for the graphics card, the mobile graphics cards are powerful. The 6970M is as powerful as a 5770 and takes up a fraction of the space and uses less power.



Graphics cards double in performance every year or two, and over the past 10 years we've seen GPUs increase in performance by a factor of 30. Not only that, they are getting better features like advanced shaders, tessellation, etc. How far away are we really from not even having to put a render off into the background? It's not just a transistor count issue - they can put the right algorithms in hardware and run them in real-time.
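For what it's worth, that growth figure is internally consistent; a one-line check (assuming a steady doubling every two years, per the claim above):

# Doubling every 2 years over 10 years gives 2**(10/2) = 32x,
# in line with the factor-of-30 figure above.
print(2 ** (10 / 2))  # 32.0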



    There can be MXM upgrades for cards or you upgrade the whole machine. Mac Pros are not impervious to obsolescence over short periods of time. Apple's low-end will now outperform some Mac Pros that have valid warranties.
  • Reply 28 of 56
    Quote:
    Originally Posted by Marvin View Post


    As for the graphics card, the mobile graphics cards are powerful. The 6970M is as powerful as a 5770 and takes up a fraction of the space and uses less power.



This may be true for gaming or general graphics, but workstation GPUs are a completely different class. Though they use the same hardware as consumer cards, Quadro and FirePro use different drivers and algorithms that sacrifice frame-rate speed for image precision, quality, and accuracy. If you've ever done any CAD/3D modeling work, you'd know the difference.



    Quote:
    Originally Posted by Marvin View Post


How far away are we really from not even having to put a render off into the background? It's not just a transistor count issue - they can put the right algorithms in hardware and run them in real-time.



    Rendering is processed by the CPUs and not the GPUs. Pixar's render farm is a cluster of hundreds of Linux servers. My 8-core Mac at home sometimes takes 48 hours to render 720 frames (30 seconds of animation). We are far far away from real-time rendering. (Are you referring to Octane?)



    Quote:
    Originally Posted by Marvin View Post


    Initially, I'd expect a redesign that maintains the powerful dual-processor Xeon setup but this doesn't need a large enclosure.



I'd love to see a smaller, more energy-efficient Mac Pro-- but not at the expense of expandability or versatility. The pro segment should be able to configure components based on specific purposes-- Radeon 6870 for graphic design or FCP, a pair of 6770s for scientific visualization, Quadro for CAD, or a Pro Tools card for audio engineering. This is what separates the consumer from the pro lines. Thunderbolt may be the answer for the future, but even medium-sized studios or firms can't make that transition overnight. The financial investment and effort take more than upgrading your rigs-- there's also the replacement of the existing infrastructure (fibre channel lines and Xsan hardware).



Until the day Thunderbolt transitions to optical and replaces all existing PCIe peripherals, or the Mac mini is a 16-core monster with 32GB of RAM and upgradable graphics, the Mac Pro remains the workhorse for many professionals, and I'm looking forward to Apple's solution and redesign.
  • Reply 29 of 56
    Quote:
    Originally Posted by 2GIGS1CUP View Post


Though they use the same hardware as consumer cards, Quadro and FirePro use different drivers and algorithms that sacrifice frame-rate speed for image precision, quality, and accuracy. If you've ever done any CAD/3D modeling work, you'd know the difference.



You're right: http://en.wikipedia.org/wiki/Nvidia_Quadro#Video_cards, but this difference in quality and performance is only due to optimizations made for certain applications/games. If you are creating a new app/game that uses the graphics card, there's no difference between those two types of cards, except for ECC VRAM and double-precision floats.
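For illustration, here is a minimal sketch of how to check that yourself (it assumes the third-party pyopencl package is installed; device names and extension lists vary by GPU and driver). Consumer cards of this era often omit the cl_khr_fp64 double-precision extension that workstation cards advertise:

# Minimal sketch: list OpenCL devices and whether each one advertises
# double-precision support (the cl_khr_fp64 extension).
# Assumes pyopencl is installed; output depends on your GPU and driver.
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices():
        fp64 = "cl_khr_fp64" in device.extensions
        print("%s: fp64 %s" % (device.name, "yes" if fp64 else "no"))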



    Quote:
    Originally Posted by 2GIGS1CUP View Post


    Rendering is processed by the CPUs and not the GPUs. Pixar's render farm is a cluster of hundreds of Linux servers. My 8-core Mac at home sometimes takes 48 hours to render 720 frames (30 seconds of animation). We are far far away from real-time rendering. (Are you referring to Octane?)



We are far away from real-time, but OpenCL and CUDA accelerate those algorithms A LOT (check LuxRender vs SmallLuxGPU). The render farm you are mentioning probably uses the graphics cards more than the CPUs.



I think Apple should divide the Mac Pro into two versions, one with an i5/i7 and the other with a Xeon/dual Xeon. The rest of the hardware could be the same as the previous Mac Pro's. The reason for those two versions is quite obvious: gamers and artists (image/video/3D) don't really need ECC RAM, and a dual CPU doesn't really make sense because the GPU is more efficient for the algorithms used by their applications. On the other hand, CAD modelers, mathematicians, scientists and server admins need more precision, reliability and CPU parallelism (for virtualization, for example), so ECC RAM and dual CPUs do make sense.



For those who think that the i5 and i7 don't have enough CPU parallelism for artists in general... think again. The i5 is a 4-core CPU with no HyperThreading (meaning 4 threads maximum), while the i7 is a 4-core CPU (the Extreme version is 6-core) with HyperThreading (meaning 8/12 threads maximum). The i7 is more than enough for heavy workloads, since most algorithms run on the graphics card.
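As a quick sanity check of those thread counts, a minimal Python sketch (standard library only; the numbers are whatever your machine reports, e.g. 8 logical threads on a HyperThreaded 4-core i7):

# Minimal sketch: report the logical thread count and spread a toy
# per-frame workload across all of them. os.cpu_count() counts logical
# threads, so a 4-core HyperThreaded i7 reports 8.
import os
from multiprocessing import Pool

def render_frame(i):
    # Stand-in for real per-frame work.
    return sum(x * x for x in range(100000))

if __name__ == "__main__":
    threads = os.cpu_count()  # e.g. 4 on an i5, 8 on a HyperThreaded i7
    print("logical threads:", threads)
    with Pool(threads) as pool:
        # 720 frames, as in the 30-second animation example above.
        pool.map(render_frame, range(720))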
  • Reply 30 of 56
    MarvinMarvin Posts: 15,310moderator
    Quote:
    Originally Posted by 2GIGS1CUP View Post


This may be true for gaming or general graphics, but workstation GPUs are a completely different class. Though they use the same hardware as consumer cards, Quadro and FirePro use different drivers and algorithms that sacrifice frame-rate speed for image precision, quality, and accuracy. If you've ever done any CAD/3D modeling work, you'd know the difference.



    There are mobile Quadro and FirePro cards.



    Quote:
    Originally Posted by 2GIGS1CUP View Post


    Rendering is processed by the CPUs and not the GPUs. Pixar's render farm is a cluster of hundreds of Linux servers. My 8-core Mac at home sometimes takes 48 hours to render 720 frames (30 seconds of animation). We are far far away from real-time rendering. (Are you referring to Octane?)



In that example, you are talking about 4 minutes per frame, so it would require a 7200x speedup to become real-time. 2^13 = 8192. If CPUs and GPUs double every 2 years and they are used together, we can get real-time in 13 years (edit: this calculation isn't entirely accurate as it would be 2 x 2^6 or something, but high-end GPUs outperform CPUs in graphics rendering - http://www.luxrender.net/wiki/LuxMark_Results).
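To make that arithmetic explicit, here is a small sketch (the 30 fps real-time target and the doubling periods are assumptions matching the figures above):

# Minimal sketch of the speedup arithmetic above.
import math

seconds_per_frame = 48 * 3600 / 720   # 240 s: 48 hours for 720 frames
realtime_budget = 1 / 30.0            # seconds per frame at 30 fps
speedup = seconds_per_frame / realtime_budget
doublings = math.log2(speedup)        # ~12.8 doublings needed

print("speedup needed: %.0fx" % speedup)                                   # 7200x
print("years at one doubling every 2 years: %.0f" % (2 * doublings))       # ~26
print("years if combined CPU+GPU gains double yearly: %.0f" % doublings)   # ~13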



It also doesn't need to be real-time exactly. Even 1/4 real-time would be good enough, and you can visualise a draft-quality render in real-time; we'll get there in under 10 years.



Renderers like Octane and LuxRender aren't particularly good as they are unbiased, which makes them way slower than engines like VRay or Mental Ray, which already experiment with hardware acceleration. The NVidia Gelato team included Larry Gritz, who helped build RenderMan. NVidia have acquired Mental Images, makers of Mental Ray, so I think you can see where that's going:



    http://blogs.nvidia.com/2011/05/nvid...tware-efforts/



    Another area to watch is voxel rendering. John Carmack has been mentioning voxel octrees for id tech 6 (after Rage), which are more efficient than polygon rendering for texture storage. You can see some of the benefits here:



    http://www.youtube.com/watch?v=Q-ATtrImCx4

    http://www.youtube.com/watch?v=tnboAnQjMKE



I imagine a day not too far away when you use a file-based workflow and stream data to/from storage in a Mudbox/ZBrush-style app running OpenCL shaders.



    Pixar's thousands of computers still have some advantages like massive amounts of RAM and storage but if you process a single frame quickly enough, it should be fine. It would be interesting to find out how a modern personal computer compares to the render farm they used for their early films.



    Quote:
    Originally Posted by 2GIGS1CUP View Post


The pro segment should be able to configure components based on specific purposes-- Radeon 6870 for graphic design or FCP, a pair of 6770s for scientific visualization, Quadro for CAD, or a Pro Tools card for audio engineering.



    there's also the replacement of the existing infrastructure (fibre channel lines and Xsan hardware).



The current Mac Pro doesn't give you that much flexibility though. The PCI slots get 300W total, so realistically you're only going to get a single high-end card in there. If you only use the other ports for audio or fibre channel, there will be Thunderbolt solutions. There's even a Thunderbolt PCI slot product should all else fail.
  • Reply 31 of 56
    wizard69wizard69 Posts: 13,377member
I've always seen the Mac Pro as a bit of a boondoggle, if you will. It is a big box that was never at the right price point to sell in large quantities. In the last couple of years Apple tried to offer up a low-end Mac Pro that was configured in such a way as to interest no one.



With a bit of engineering effort I can see Apple building a box that serves a wider array of users, thus leading to more units shipped. The key to the Mac Pro's survival is a chassis that can support both low-end and high-end needs.

    Quote:
    Originally Posted by 2GIGS1CUP View Post


This may be true for gaming or general graphics, but workstation GPUs are a completely different class. Though they use the same hardware as consumer cards, Quadro and FirePro use different drivers and algorithms that sacrifice frame-rate speed for image precision, quality, and accuracy. If you've ever done any CAD/3D modeling work, you'd know the difference.



Actually, that is true in the PC world, but in the Mac world I'm not convinced there is much difference in driver quality. Let's face it, Apple hasn't exactly set the OpenGL world on fire.

    Quote:

    Rendering is processed by the CPUs and not the GPUs. Pixar's render farm is a cluster of hundreds of Linux servers. My 8-core Mac at home sometimes takes 48 hours to render 720 frames (30 seconds of animation). We are far far away from real-time rendering. (Are you referring to Octane?)



    Economics dictate that you will move away from Macs to do the actual rendering.



As to the CPU/GPU thing, that will slowly change over time as they bring more capability to the GPU. Remember, GPGPU computations were originally forced onto GPUs not really designed for that sort of work. AMD and NVidia have a long way to go before GPUs can be leveraged in the way many would like.



Even so, I don't see Apple playing in the cluster world anytime soon. So if you are seriously interested in rendering, your only choice is to look at rendering hardware running Linux.

    Quote:





I'd love to see a smaller, more energy-efficient Mac Pro-- but not at the expense of expandability or versatility.



    I believe this is very doable.

    Quote:

The pro segment should be able to configure components based on specific purposes-- Radeon 6870 for graphic design or FCP, a pair of 6770s for scientific visualization, Quadro for CAD, or a Pro Tools card for audio engineering. This is what separates the consumer from the pro lines.



Well, that is nice, but then we are left with this reality: as general-purpose hardware becomes faster and more capable, the need for special-purpose hardware slowly diminishes. Either that or the special-purpose hardware ends up being sold into high-end niche markets. CAD is a perfect example here; many engineers are fine with their laptops these days, something that would have been unheard of a few years ago.

    Quote:

Thunderbolt may be the answer for the future, but even medium-sized studios or firms can't make that transition overnight. The financial investment and effort take more than upgrading your rigs-- there's also the replacement of the existing infrastructure (fibre channel lines and Xsan hardware).



I don't think Thunderbolt will be replacing stuff as fast as some suggest. Some have visions of sugar plums in their eyes when discussing TB; it is not the be-all, end-all port that some imagine.

    Quote:



Until the day Thunderbolt transitions to optical and replaces all existing PCIe peripherals, or the Mac mini is a 16-core monster with 32GB of RAM and upgradable graphics, the Mac Pro remains the workhorse for many professionals, and I'm looking forward to Apple's solution and redesign.



Though I would never buy one, at least not at this point, I also look forward to a redesigned Pro. I'm hoping they can come up with a chassis that solves a wide array of needs. It really shouldn't be all that difficult.
  • Reply 32 of 56
    wizard69wizard69 Posts: 13,377member
    Quote:
    Originally Posted by ludovic.silvestre View Post


You're right: http://en.wikipedia.org/wiki/Nvidia_Quadro#Video_cards, but this difference in quality and performance is only due to optimizations made for certain applications/games. If you are creating a new app/game that uses the graphics card, there's no difference between those two types of cards, except for ECC VRAM and double-precision floats.



Note that this is biased towards PCs, not Macs.

    Quote:



    We are far away from real-time, but OpenCL and CUDA accelerates A LOT those algorithms (check LuxRender vs SmallLuxGPU).The render farm you are mentioning probably use more the graphic cards than the CPUs.



While it is true a lot of effort is going into GPU assist for rendering applications, it isn't as widespread as one would like. Much of the work is still done on the CPU.

    Quote:



I think Apple should divide the Mac Pro into two versions, one with an i5/i7 and the other with a Xeon/dual Xeon.



Yep, one chassis but two distinct performance and capability profiles.

    Quote:

The rest of the hardware could be the same as the previous Mac Pro's. The reason for those two versions is quite obvious: gamers and artists (image/video/3D) don't really need ECC RAM, and a dual CPU doesn't really make sense because the GPU is more efficient for the algorithms used by their applications.



Your point is right, but this isn't the evidence that supports it in my mind. The big advantage of an i5/i7-based desktop Mac is simply access to the internals: storage and the ability to plug in a PCI-Express card or two. The new Mini covers the bottom end of the performance equation, but it still sucks as a platform that can be reasonably expanded, especially as we move into next year, when the Mini will be upgraded yet again to Ivy Bridge, putting a lot of power in a box that has no internal expansion capability to speak of.

    Quote:

On the other hand, CAD modelers, mathematicians, scientists and server admins need more precision, reliability and CPU parallelism (for virtualization, for example), so ECC RAM and dual CPUs do make sense.



Yep, a motherboard swap-out would give one a rather impressive reconfiguration of the chassis. The focus would be on high reliability and high-speed computation.

    Quote:

For those who think that the i5 and i7 don't have enough CPU parallelism for artists in general... think again. The i5 is a 4-core CPU with no HyperThreading (meaning 4 threads maximum), while the i7 is a 4-core CPU (the Extreme version is 6-core) with HyperThreading (meaning 8/12 threads maximum). The i7 is more than enough for heavy workloads, since most algorithms run on the graphics card.



Well, we won't get too deep into that argument. I will simply say that a single-socket, general-purpose motherboard would allow them a low-cost avenue to an internally expandable desktop machine.
  • Reply 33 of 56
    wizard69wizard69 Posts: 13,377member
I'm a bit like a fish out of water here, as I have little interest in rendering. However, what I do keep an eye on is what is happening in GPU land. I see AMD making great strides here as they try to integrate the GPU with the CPU. Further, they are looking at making the ALUs and other engines in a GPU far more flexible and capable. So the question is how soon they can get these ideas into silicon.



The current Fusion products aren't there, in case anybody is wondering, but they do show potential. We could see some rather impressive hardware in less than four years. The big problem here is that software will have to catch up. Frankly, I suspect Apple is pushing Intel hard in this direction also, that is, where the GPU becomes equal to the CPU. Intel's announced desire to make the Ivy Bridge GPU OpenCL-capable is certainly a start.



In any event, with a surplus of transistors, more and more capability will be rolled into GPUs soon. However, does this really solve anything? Think about it: if we get faster hardware, will people still be rendering at 2K? As much as this future hardware will help the run-of-the-mill user, those on the bleeding edge will still be complaining in the future. It kinda relates to the CAD users mentioned in another response; these days engineers and designers are often happy with a laptop with a half-decent GPU, and it is only the guys on the bleeding edge that hunger for faster and faster hardware.



    Quote:
    Originally Posted by Marvin View Post


There are mobile Quadro and FirePro cards.
  • Reply 34 of 56
    Quote:
    Originally Posted by wizard69 View Post


I see AMD making great strides here as they try to integrate the GPU with the CPU. Further, they are looking at making the ALUs and other engines in a GPU far more flexible and capable. So the question is how soon they can get these ideas into silicon.



    The "HD7000" series will appear by the end of this year, and from the latest news, their new GPU architecture (called GCN) might appear in 2013.



    Quote:
    Originally Posted by wizard69 View Post


The current Fusion products aren't there, in case anybody is wondering, but they do show potential. We could see some rather impressive hardware in less than four years. The big problem here is that software will have to catch up.



If Apple continues to deliver frameworks that use that power, then the software might catch up really soon. And impressive hardware might appear next year, when AMD launches their new APUs.



    Quote:
    Originally Posted by wizard69 View Post


Frankly, I suspect Apple is pushing Intel hard in this direction also, that is, where the GPU becomes equal to the CPU. Intel's announced desire to make the Ivy Bridge GPU OpenCL-capable is certainly a start.



Yes they are, but I think Intel will fail on the GPU side. They don't have as much know-how as AMD (since AMD bought ATI).



    Quote:
    Originally Posted by wizard69 View Post


It kinda relates to the CAD users mentioned in another response; these days engineers and designers are often happy with a laptop with a half-decent GPU, and it is only the guys on the bleeding edge that hunger for faster and faster hardware.



I mostly agree with you, but CAD engineers and designers would definitely prefer a low-end Mac Pro (with an i5/i7, like I mentioned above) to a laptop. They generally need a lot of RAM (8GB minimum), at least a quad-core CPU and a decent GPU (AMD HD6750M or equivalent, with 512MB). The high-end laptops have everything above, but you'll need to go to a Mac Pro if you want more RAM or you want to upgrade your GPU.
  • Reply 35 of 56
    wizard69wizard69 Posts: 13,377member
    Quote:
    Originally Posted by ludovic.silvestre View Post


    The "HD7000" series will appear by the end of this year, and from the latest news, their new GPU architecture (called GCN) might appear in 2013.



I've been following AMD and their GPU and APU technology in general but don't bother with specifics. What I'm interested in seeing is APU chips where the GPU part ends up a full partner on the memory bus - that is, when they implement memory management and cache control and provide the GPU with a 64-bit address space.



Let's face it, the current Llano APUs are fairly impressive already. I would not shy away from them unless I knew for sure that I needed high performance out of my CPU. As good as the current APUs are, though, moving to a completely heterogeneous computing platform is very enticing. I'm actually surprised that Apple hasn't embraced Fusion, as it seems to map directly onto where they are going with Mac OS.

    Quote:

If Apple continues to deliver frameworks that use that power, then the software might catch up really soon. And impressive hardware might appear next year, when AMD launches their new APUs.



You are talking about the Bulldozer-based APUs, right? They certainly have the potential to be very nice, but I don't think they completely realize AMD's long-term vision for heterogeneous computing. Even so, I wouldn't likely reject a machine with such hardware. As long as they have a significant GPU performance advantage over Intel, they are a good choice for many users.

    Quote:





Yes they are, but I think Intel will fail on the GPU side. They don't have as much know-how as AMD (since AMD bought ATI).



There is always that issue of Intel being just good enough. I would think, though, that people will notice a significant difference in GPU performance on AMD-powered hardware before they notice the difference in CPU performance. AMD just needs to market their positives more aggressively.



In any event, I'm thankful Apple recognized the Intel GPU issue in the Mini. So we are halfway to an all-AMD machine from Apple.

    Quote:



I mostly agree with you, but CAD engineers and designers would definitely prefer a low-end Mac Pro (with an i5/i7, like I mentioned above) to a laptop. They generally need a lot of RAM (8GB minimum), at least a quad-core CPU and a decent GPU (AMD HD6750M or equivalent, with 512MB). The high-end laptops have everything above, but you'll need to go to a Mac Pro if you want more RAM or you want to upgrade your GPU.



This is the part I find distressing and have to disagree with. There is a wide range of engineering taking place on laptops these days. The advantages that a laptop offers over a desktop are significant for field work. While everybody likes faster hardware, it is possible to be very productive on a laptop with a real GPU, even if that GPU isn't top of the line.



So I tend to see more and more engineers with laptops. Can they handle high-end CAD? Nope, but they don't have to.
  • Reply 36 of 56
    mactacmactac Posts: 316member
    Quote:
    Originally Posted by ludovic.silvestre View Post


I mostly agree with you, but CAD engineers and designers would definitely prefer a low-end Mac Pro (with an i5/i7, like I mentioned above) to a laptop. They generally need a lot of RAM (8GB minimum), at least a quad-core CPU and a decent GPU (AMD HD6750M or equivalent, with 512MB). The high-end laptops have everything above, but you'll need to go to a Mac Pro if you want more RAM or you want to upgrade your GPU.



    That would be a nice machine. Since I do CAD and don't do it in the field this is the type of Mac I wish Apple would get a clue about.
  • Reply 37 of 56
Apple discontinues its popular Steve Jobs 2.0 application and introduces a radically different Tim Cook 2.0 as a replacement. Can the new interface win the hearts and minds of Steve Jobs' famously loyal users?



    Anyone think this wasn't the "transition" Cook had in mind in his warning? He had to fib a little in order to not give it away.
  • Reply 38 of 56
    wizard69wizard69 Posts: 13,377member
    Quote:
    Originally Posted by ludovic.silvestre View Post


    The "HD7000" series will appear by the end of this year, and from the latest news, their new GPU architecture (called GCN) might appear in 2013.



I've seen some of AMD's news releases, mostly with respect to the Fusion APUs. Here's hoping they can keep it together long enough to realize what they are projecting for the future. I'm most interested in APUs, as I see that as the way of the future.

    Quote:



If Apple continues to deliver frameworks that use that power, then the software might catch up really soon. And impressive hardware might appear next year, when AMD launches their new APUs.



I believe you are talking about the Bulldozer-based chip. If so, I tend to agree it should make for a very nice APU, especially considering that the GPU gets upgraded again.



However, a key point here is that the current APUs aren't that bad. They stress good performance where a lot of people want to see it.

    Quote:



Yes they are, but I think Intel will fail on the GPU side. They don't have as much know-how as AMD (since AMD bought ATI).



I don't think it is that simple. Look at how many machines were sold with Intel-only graphics in the past.

    Quote:



I mostly agree with you, but CAD engineers and designers would definitely prefer a low-end Mac Pro (with an i5/i7, like I mentioned above) to a laptop. They generally need a lot of RAM (8GB minimum), at least a quad-core CPU and a decent GPU (AMD HD6750M or equivalent, with 512MB). The high-end laptops have everything above, but you'll need to go to a Mac Pro if you want more RAM or you want to upgrade your GPU.



    When you say "CAD Engineer" I would have to agree. However the vast majority of CAD packages are not sold "CAD Engineers". Rather they go to a wide array of technical individuals involved in a wide array of projects. These days most of those people use laptops extensively. No you won't be doing top end mechanical design at Ford with a laptop, however that is a relatively small world these days. For many a laptop is fine for mechanical design.
  • Reply 39 of 56
    mactacmactac Posts: 316member
    Quote:
    Originally Posted by tonton View Post


Apple discontinues its popular Steve Jobs 2.0 application and introduces a radically different Tim Cook 2.0 as a replacement. Can the new interface win the hearts and minds of Steve Jobs' famously loyal users?



If an XMac shows up, the answer is a resounding YES!
  • Reply 40 of 56
    hmmhmm Posts: 3,405member
    Quote:
    Originally Posted by Marvin View Post




Sure, people will complain that they can no longer run PCI cards and have to make do with slower Thunderbolt instead, but remember when we all switched from SCSI to SATA? Mac Pro owners are getting by just fine with SATA, and it will ramp up to an optical connection in due time.




I mentioned this on another thread, but in this scenario why would they create this in a new machine when they have the iMac occupying that price point and the Mini beneath it? The iMac has Thunderbolt and desktop processors. It can take the same amount of RAM as a single-socket Mac Pro, and if you want a nicer display, plug one in. When you remove the basis for expandability, what could possibly drive enough sales volume from such a unit to make it viable in a shared price point?