Apple throws out the rulebook for its unique next-gen Mac Pro

Comments

  • Reply 901 of 1320
    wizard69 Posts: 13,377 member
    Marvin wrote: »
    It was an engineering sample they tested here:

    http://www.tomshardware.co.uk/ivy-bridge-ep-xeon-e5-2697-v2-benchmarks,review-32756.html

    but it's not going to be half the speed of the production version. The sample has to be close to the production model in clock speed, features and power consumption otherwise there's little point in having the sample.
    This is true, but it has been known that from time to time engineering samples will have serious faults.
    There may be performance improvements with final software and production hardware but the important detail as far as it applies to the objection to Apple marketing up to 2x the CPU performance is that the new top-end 12-core will not be double the performance of the old top-end 12-core nor close to it.
    "Up to" is awesome wiggle room. I don't expect everything to be 2X faster; however, anything leveraging new instructions has the possibility of being much faster. The problem is: do benchmarks this old really reflect what an unreleased processor can do?
    That would require Intel's performance-per-watt to increase 4x in 2 architecture steps, which doesn't happen. There should have been 3 steps since the old Mac Pro but there weren't. It's fair for people to object to their marketing but marketing material is usually not that accurate and they typically detail the comparison they use in their marketing pages. They did the same thing last year:
    It is only fair to object if the marketing is completely false. The reality is this isn't even the marketing of a released product.
    http://www.everymac.com/systems/apple/mac_pro/faq/mac-pro-mid-2012-performance-benchmarks.html

    "in other promotional copy, Apple did reveal that the fastest custom configuration "Mid-2012" Mac Pro -- the Mac Pro "Twelve Core" 3.06 (2012/Westmere) -- is between 1.2 and 1.5 times faster than the fastest custom configuration "Early 2009" Mac Pro -- the Mac Pro "Eight Core" 2.93 (2009/Nehalem).

    First, it is worth noting that this official comparison is a synthetic performance test using the "STREAM" 5.8 benchmark, and is a comparison with the much earlier "Early 2009" line rather than the previous, and effectively identical, "Mid-2010" Mac Pro line."
    I've been on the "when is a real Mac Pro update coming" bandwagon for a few years now, as I've seen nothing to indicate a real enhancement of the machine. I know that in part that is Intel's fault. However, in many ways this new Mac Pro does look like the solid update many have been waiting for. It isn't just faster processors but faster memory systems and greatly enhanced GPUs. I really see many tasks running much faster on this machine.
    Some people will be disappointed they can't buy two 12-core processors but a Mac Pro like that would cost over $7500 so it affects very few people and Apple has never used the highest end processors in the past. What Apple would have used if they'd gone with two CPUs is a dual 8-core and it would have offered up to 50% more CPU performance for about $1300 more.
    Most of these issues will be solved by a process shrink to 14nm or whatever feature size they hit. That could be as soon as 2014, though it might take another year for Xeon to transition.

    Beyond all of that, the impact of cores starts to get really interesting past the 12-core level. Many apps that might be seen as embarrassingly parallel end up not getting the expected performance increase due to bandwidth limitations outside the cores. I don't see a rush to many more cores in a workstation environment until this is dealt with. Fortunately the industry is trying to address this with faster RAM subsystems and other architectural improvements.

    Of course Xeon isn't a workstation-only processor, so more cores can still be useful for server duties and the like. I just don't see the same advantage for workstations due to the highly varied workloads seen in the workstation market.
    That extra performance option isn't essential because real-time feedback isn't required from CPU tasks. People who need more CPU performance can buy more machines, e.g. get a 6-core slave Mac Pro in addition to the 12-core. Less convenient in some cases, but media software is using OpenCL more, so the GPUs will provide good value there.

    It is only convenient if the software you are using can leverage the hardware. Sometimes, though, more machines actually make lots of sense, especially if the bandwidth issues mentioned above raise their ugly heads.

    I saw a chart somewhere, can't remember where, that showed the ultimate performance possible from these new Intel chips based on the number of cores implemented. Due to clock rate issues with the 12-core parts, the ultimate computational potential versus the other chips wasn't all that great. Many people will be just as well off, sometimes far better off, going with a six-core or other lesser implementation simply due to the ability to run at a far higher clock rate. If your software of choice is only lightly multithreaded, it may be a mistake to even look at a twelve-core model this year. Of course that is this year; one process shrink and a twelve-core machine could become more mainstream.
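    A rough back-of-the-envelope way to see that tradeoff (the clock speeds below are assumptions for illustration, not confirmed Mac Pro configurations): aggregate throughput scales with cores times clock only for perfectly parallel work, while lightly threaded work sees just one core's clock.

```python
# Back-of-the-envelope core-count vs clock-rate comparison.
# SKU clock speeds are assumptions for illustration, not confirmed Mac Pro configs.
chips = {
    "12-core @ 2.7 GHz (e.g. E5-2697 v2)": (12, 2.7),
    "8-core @ 3.0 GHz": (8, 3.0),
    "6-core @ 3.5 GHz": (6, 3.5),
}

for name, (cores, ghz) in chips.items():
    parallel = cores * ghz  # ideal throughput if every core stays busy
    print(f"{name}: ~{parallel:.1f} core-GHz parallel, {ghz} GHz single-thread")
```

    On those assumed numbers the 12-core only wins when the workload really scales across all the cores; a lightly threaded job would run roughly 30% faster on the higher-clocked 6-core part.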

    One thing is for sure: I see us at the same point the industry was at when dual-core machines started to arrive for mass sale. Dual-core, even quad-core, is mainstream now, and we will quickly see six- and eight-core machines become the mainstream. I'm not sure if the iMac will get six cores this year, but it is certainly a possibility, and by 2014 I can see it as a requirement. Whatever issues the new Mac Pro has with the 12-core won't last long at all. They really have no choice, because six cores will soon be mainstream for desktop machines.
  • Reply 902 of 1320
    wizard69 Posts: 13,377 member
    drblank wrote: »
    I went to Intel's site and it doesn't list a 12-core E5 series processor.  Here's the link that I went to.  If anyone knows where they have information on a 12-core E5 series processor, let me know.

    http://www.intel.com/content/www/us/en/processor-comparison/compare-intel-processors.html?select=server

    That is shipping hardware. Try: http://www.cpu-world.com/Releases/Server_CPU_releases_(2013).html. It is most interesting to see what Intel has coming in September, not just for workstations but also processors suitable for all of Apple's hardware.

    On the CPU-World link above you can jump between mobile, server and desktop processors from a bar on the right. I could see suitable Mini, iMac and Mac Pro processors being available mid-September. By the way, take CPU-World with a grain of salt; unreleased hardware is always subject to delays or changes.

    Frankly I think somebody at Intel needs to take a chill pill; they are marketing way too many variants of these processors. It is an anti-Apple method to be sure, more like let's throw a bunch of stuff at the wall and see what sticks.
  • Reply 903 of 1320
    mike fix Posts: 270 member

    I am shackled to those Cinebench results as that's how I make my living...using CINEMA 4D for broadcast and theatrical work...IN THE REAL WORLD...

    Which is why those numbers are troubling to me, and everyone else that I've talked to in my industry.

    I'd gladly pay $7k for a dual CPU option, as would a lot of people that I work with.
  • Reply 904 of 1320
    philboogie Posts: 7,675 member
    Sorry for going off topic, but this did catch my eye.

    No problemo.

    Maybe you can find something here: http://twelvesouth.com/products/

    They sell a riser to make the two displays align
  • Reply 905 of 1320
    Marvin Posts: 15,435 moderator
    wizard69 wrote: »
    it has been known that from time to time engineering samples will have serious faults.

    Ok, but people are making these kinds of suggestions under the assumption that these benchmarks are wrong. They are in line with what people should expect. They have put higher performance than two CPUs from 2010/2011 onto a single CPU. In other words, they have more than doubled the individual CPU performance but moved to a single-CPU model, so the overall performance is only slightly higher than the old top-end 12-core dual-CPU model. It's not like the processors individually are just 10-15% faster.

    You can see here the single 130W 2.7GHz E5-2697v2 gets 17.63 in Cinebench:
    http://www.tomshardware.com/reviews/ivy-bridge-ep-xeon-e5-2697-v2-benchmarks,3585-6.html

    The old Mac Pro with dual 95W 3.06GHz X5675s (190W) was around 16. This would mean the new top-end is 10% faster but that option may cost up to $1300 less.

    If Apple were able to run the E5-2697 with all cores at the turbo clock speed of 3.5GHz, they'd manage 22.85, but that extra 30% doesn't make much difference in real-world usage. 50-60% would start to be noticeable.
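    A minimal sketch of where that 22.85 figure comes from, assuming the multithreaded score scales roughly linearly with clock speed at a fixed core count:

```python
# Rough clock-scaling estimate for a multithreaded Cinebench-style score.
# Assumes the score scales linearly with clock speed at a fixed core count.
base_score = 17.63   # E5-2697 v2 at its 2.7 GHz base clock (Tom's Hardware figure)
base_clock = 2.7     # GHz
turbo_clock = 3.5    # GHz, hypothetical all-core turbo

estimated = base_score * turbo_clock / base_clock
print(f"Estimated all-core-turbo score: {estimated:.2f}")                    # ~22.85
print(f"Extra over the base-clock score: {estimated / base_score - 1:.0%}")  # ~30%
```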
    wizard69 wrote: »
    I don't expect everything to be 2X faster; however, anything leveraging new instructions has the possibility of being much faster. The problem is: do benchmarks this old really reflect what an unreleased processor can do?

    This is still Ivy Bridge, remember. There's no dramatic performance jump from Sandy Bridge architecture-wise; most of the improvement comes from the core count. There might be a bigger jump next year with Haswell, though, because they won't allocate space to an IGP on server hardware, so rather than drop the TDP down from 130W to say 95W, they can boost performance 30-40%. Haswell-EP is rumoured to have up to 15 cores.
    mike fix wrote:
    those numbers are troubling to me, and everyone else that I've talked to in my industry.

    That phrase gets used about pretty much anything Apple does. Some people have spoken to everyone they know in the audio industry and they're all troubled by the lack of a 17" laptop. Apple will make the choices they want regardless. They chose to skip over Sandy Bridge in 2012 while competitors had options up to the E5-2687W that scored 24 in Cinebench and the MP was at 16.

    It's not as if people are going to migrate away from OS X for lack of a potential 50% speed bump.
    mike fix wrote:
    I'd gladly pay $7k for a dual CPU option, as would a lot of people that I work with.

    You could supplement the Mac Pro with another Mac Mini or Mac Pro on the network. The next Mini will probably score 7.4 in Cinebench, so on top of the 18 of the 12-core Pro, that's about the same as the dual-CPU option Apple would have offered. If the 12-core with the base GPUs is $5k and the Mini is $800, that would even be cheaper than a dual-CPU setup.
  • Reply 906 of 1320
    wizard69 Posts: 13,377 member
    mike fix wrote: »
    I am shackled to those Cinebench results as that's how I make my living...using CINEMA 4D for broadcast and theatrical work...IN THE REAL WORLD...
    In the real world you shouldn't be concerned with Cinebench. Rather, you need to be concerned with your software vendors and the direction they are taking to leverage new hardware technologies. In other words, will Cinema 4D be leveraging GPU compute on this new Mac Pro?
    Which is why those numbers are troubling to me, and everyone else that I've talked to in my industry.
    The question I would have is whether Cinema 4D can leverage multiple machines, that is, spread the work across several machines. Ultimately this would be the better approach.
    I'd gladly pay $7k for a dual CPU option, as would a lot of people that I work with. 
    Just buy two machines or look for software that leverages the GPUs.
  • Reply 907 of 1320
    Marvin Posts: 15,435 moderator
    wizard69 wrote: »
    In the real world you shouldn't be concerned with Cinebench. Rather, you need to be concerned with your software vendors and the direction they are taking to leverage new hardware technologies. In other words, will Cinema 4D be leveraging GPU compute on this new Mac Pro?

    Cinebench is made by the same people that make Cinema 4D - it's pretty much a direct test of how Cinema 4D will run - so it's one of the least synthetic benchmarks around. This kind of processing has difficulties migrating to OpenCL because of the function calls. GPUs seem to only be able to handle smaller chunks of code. It's not so much OpenCL itself but OpenCL on the GPUs. Hopefully AMD and NVidia will eventually manage to work around these problems but what would help is if AMD actually got raytracing code to work themselves so that external developers could just use an API. NVidia has done this but they used CUDA - this is why Adobe's raytracer in After Effects isn't accelerated with AMD GPUs.

    One thing with the Cinebench scores is that the numbers do lead you to think there's a bigger difference the higher the scores get. For example, a score of 27 compared to 18 looks like it might be a huge difference whereas 7 vs 4 doesn't look that much different. The latter difference however is 75% and the former difference is 50%. This perception will get worse the higher it goes e.g a Mac Pro at 35 compared to an HP at 53 - the HP is still just 50% faster though.
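    A quick sketch of that point: compare scores by ratio rather than by the size of the gap.

```python
# Relative speedup between two Cinebench-style scores.
# The absolute gap grows as scores get higher, but the percentage is what matters.
def speedup(fast, slow):
    return fast / slow - 1

for fast, slow in [(7, 4), (27, 18), (53, 35)]:
    print(f"{fast} vs {slow}: {speedup(fast, slow):.0%} faster")
# 7 vs 4: 75% faster; 27 vs 18: 50% faster; 53 vs 35: 51% faster
```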

    Apple has the sales data for their machines and I suspect that they will have found that people who buy the highest CPU models don't upgrade very often and may even extend the life of the machine by doing their own GPU upgrades, as many online accounts of breaking the GPU tabs would indicate. GPUs go out of date quicker than CPUs, so tying those down encourages more frequent upgrades.

    I'd say the performance of this Xeon is very much down to being an architecture step behind. The Haswell i7-4770K, which may end up in the iMac, scores 8.48 in Cinebench. Previously, the top-end Mac Pro has been 3x faster than an iMac but will now be just 2x despite comparing 4-core to 12-core. The 15-core Haswell should sort this, and Haswell might run into delays, which would move the architectures back into alignment:

    http://www.dailytech.com/Report+Intel+Delays+14+nm+Broadwell+Schedules+Haswell+Refresh+for+2014/article31770.htm

    Instead of Broadwell in 2014, they'd hold the consumer chips back on Haswell. Then when the Xeon moves to Haswell, it will look better.
  • Reply 908 of 1320
    drblank Posts: 3,385 member

    Quote:

    Originally Posted by Marvin View Post

    Cinebench is made by the same people that make Cinema 4D - it's pretty much a direct test of how Cinema 4D will run - so it's one of the least synthetic benchmarks around. This kind of processing has difficulties migrating to OpenCL because of the function calls. GPUs seem to only be able to handle smaller chunks of code. It's not so much OpenCL itself but OpenCL on the GPUs. Hopefully AMD and NVidia will eventually manage to work around these problems but what would help is if AMD actually got raytracing code to work themselves so that external developers could just use an API. NVidia has done this but they used CUDA - this is why Adobe's raytracer in After Effects isn't accelerated with AMD GPUs.

    One thing with the Cinebench scores is that the numbers do lead you to think there's a bigger difference the higher the scores get. For example, a score of 27 compared to 18 looks like it might be a huge difference whereas 7 vs 4 doesn't look that much different. The latter difference however is 75% and the former difference is 50%. This perception will get worse the higher it goes e.g a Mac Pro at 35 compared to an HP at 53 - the HP is still just 50% faster though.

    Apple has the sales data for their machines and I suspect that they will have found that people who buy the highest CPU models don't upgrade very often and may even extend the life of the machine by doing their own GPU upgrades, as many online accounts of breaking the GPU tabs would indicate. GPUs go out of date quicker than CPUs, so tying those down encourages more frequent upgrades.


    I guess people need to figure out what tests are best for their purposes.

    Have you priced out an HP Z820 computer filled to the gills?  It's RIPPING expensive.  I think the Mac Pro with a top-of-the-line Promise RAID that holds the same amount of drive storage will be much less expensive and still give a lot more expansion from TB2.  In the A/V market, there is a TON of TB products that they use, which they won't be able to use with the HP.  There are existing FireWire, Fibre Channel, and other products that can easily be connected to TB2 ports with a low-cost or relatively low-cost adapter.  That removes the need for PCI cards.  Also, the HP doesn't have as fast SSD storage, whereas the Mac Pro will come standard with a certain amount for the OS, apps and some data.  You would have to install VERY expensive PCI cards for high-speed SSD.  I think those that work in an environment where they utilize a SAN network won't need lots of internal RAID.  That saves a LOT of money, right there.  Anyone that does location work will typically use external storage.  Lots of options currently available.

    I forgot to mention, Apple is doing some interesting things with optimization in OS X, so maybe it will help with certain speed tests due just to the OS.  I wish Apple could figure out how to really utilize MP better so they could get 2x whenever they plopped in a second processor.

  • Reply 909 of 1320
    wizard69 Posts: 13,377 member
    Marvin wrote: »
    Cinebench is made by the same people that make Cinema 4D - it's pretty much a direct test of how Cinema 4D will run - so it's one of the least synthetic benchmarks around. This kind of processing has difficulties migrating to OpenCL because of the function calls. GPUs seem to only be able to handle smaller chunks of code. It's not so much OpenCL itself but OpenCL on the GPUs. Hopefully AMD and NVidia will eventually manage to work around these problems but what would help is if AMD actually got raytracing code to work themselves so that external developers could just use an API. NVidia has done this but they used CUDA - this is why Adobe's raytracer in After Effects isn't accelerated with AMD GPUs.
    AMD has been very forthright in this regard; they have said heterogeneous computing is their future, and moreover they are transitioning their GPUs to better support compute.

    As to Cinebench, it is still software that will need updating to reflect the new processor. Now, whether that will make a huge difference is another discussion. Just changing to a new C++ compiler could have a big impact on how code runs on a processor. All I'm really saying is don't jump to conclusions before the hardware and updated software ship.
    One thing with the Cinebench scores is that the numbers do lead you to think there's a bigger difference the higher the scores get. For example, a score of 27 compared to 18 looks like it might be a huge difference whereas 7 vs 4 doesn't look that much different. The latter difference however is 75% and the former difference is 50%. This perception will get worse the higher it goes e.g a Mac Pro at 35 compared to an HP at 53 - the HP is still just 50% faster though.
    Still 50% is nothing to sneeze at.
    Apple has the sales data for their machines and I suspect that they will have found that people who buy the highest CPU models don't upgrade very often and may even extend the life of the machine by doing their own GPU upgrades, as many online accounts of breaking the GPU tabs would indicate. GPUs go out of date quicker than CPUs, so tying those down encourages more frequent upgrades.
    I'm not convinced that the Mac Pro's new design is there to encourage updates. It touches upon too many other issues for that to be a prime factor in the machine's design.
    I'd say the performance of this Xeon is very much down to being an architecture step behind. The Haswell i7-4770K, which may end up in the iMac, scores 8.48 in Cinebench. Previously, the top-end Mac Pro has been 3x faster than an iMac but will now be just 2x despite comparing 4-core to 12-core. The 15-core Haswell should sort this, and Haswell might run into delays, which would move the architectures back into alignment:
    Is the answer cores or clock rate? It is pretty obvious that the 12-core throttles hard. This is why I question the wisdom of running out and buying the 12-core platform. For many users lesser machines might deliver better results. It really comes down to the user's software tools and how well they leverage clock rate versus lots of cores.
    http://www.dailytech.com/Report+Intel+Delays+14+nm+Broadwell+Schedules+Haswell+Refresh+for+2014/article31770.htm

    Instead of Broadwell in 2014, they'd hold the consumer chips back on Haswell. Then when the Xeon moves to Haswell, it will look better.

    Well, the future is hard to predict; AMD could pull a rabbit out of the hat and compel Intel once again to become aggressive. (Yes, more wishful thinking.) It is no surprise that Intel has dragged its feet with respect to Xeon, as they are not hurting from competition.
  • Reply 910 of 1320
    drblank Posts: 3,385 member


    I wish Apple made an i5/i7 version of the Mac Pro for less money.

  • Reply 911 of 1320
    tallest skil Posts: 43,388 member

    Originally Posted by PhilBoogie View Post


    They sell a riser to make the two displays align

    Ever since the first Studio Displays, I've always thought that was confusing bordering on idiotic. It wasn't until the iMac and Cinema Display got matching designs that this became totally inexcusable. It seems strange that Steve wouldn't have wanted them to match up, but, then again, he really didn't strike me as the multiple displays type. And in fact he wasn't.

  • Reply 912 of 1320
    Marvin Posts: 15,435 moderator
    wizard69 wrote: »
    As to Cinebench, it is still software that will need updating to reflect the new processor. Now, whether that will make a huge difference is another discussion. Just changing to a new C++ compiler could have a big impact on how code runs on a processor. All I'm really saying is don't jump to conclusions before the hardware and updated software ship.

    The Ivy Bridge architecture has been out for over a year now and is also just a die-shrink of Sandy Bridge. Haswell is the new architecture. There's a test here that showed a small increase in performance using compiler options with Sandy Bridge but zero and in some cases a downgrade with Ivy Bridge:

    http://www.phoronix.com/scan.php?page=article&item=intel_ivy_tuning&num=2

    If this was Haswell-EP, there would be a possibility of seeing a performance boost from the new architecture. I think the only possibility here is if Apple manages to run the CPU at a higher clock speed due to their cooling solution. I think these scores aren't all that bad though as long as the price points are more reasonable.
    wizard69 wrote: »
    Still 50% is nothing to sneeze at.

    It's noticeable; 15 minute jobs go down to 10 minutes. When it comes to Cinebench scores, 50% starts to look like a bigger difference the higher up the scores get because the numbers get further apart. The focus should be on the percentage difference and not the numbers themselves.

    If Apple could leverage Thunderbolt to chain machines transparently, that would largely make any complaints about lower performance in one machine redundant. All people would have to do is buy 2 or more machines, plug them in and enable compute sharing. If they were able to virtualize the hardware to avoid software license issues, that would be even better but a lot of software has unlimited core licenses.
    wizard69 wrote: »
    I'm not convinced that the Mac Pro's new design is there to encourage updates.

    It's at least there to encourage BTO purchases of the SSD and GPUs. Rather than buy the entry model and get your own NVidia GPU on the cheap, you have to get Apple's options. The lack of upgradeability will encourage buying new machines too, even if it wasn't intentional. This is from the company that glued the screen on the iMac though so my guess is it was intentional. I think Mac Pro owners have convinced themselves over the years that Apple was giving them special treatment by keeping them upgradeable but they inflated the margins first. By locking down the upgrades, Apple can get better profits that way and that could give them the freedom to hit a lower entry price point. At the very least, I think the new Mac Pros will offer more performance value for the money spent.
    wizard69 wrote: »
    Is the answer cores or clock rate? It is pretty obvious that the 12-core throttles hard. This is why I question the wisdom of running out and buying the 12-core platform. For many users lesser machines might deliver better results. It really comes down to the user's software tools and how well they leverage clock rate versus lots of cores.

    Ideally both but Intel seems to get better results from core-count for tasks that use all the cores. Clock speed increases probably increase temperatures faster than more cores at lower clocks. Certainly for a number of jobs that use very few cores, CPUs that can be clocked higher will perform better.
    wizard69 wrote: »
    Well, the future is hard to predict; AMD could pull a rabbit out of the hat and compel Intel once again to become aggressive. (Yes, more wishful thinking.) It is no surprise that Intel has dragged its feet with respect to Xeon, as they are not hurting from competition.

    If AMD keeps racking up losses like they did last quarter, they will be bankrupt soon. They have $3.9b assets, $3.5b liabilities and they made a loss last quarter of $76m. If their stockholder equity goes below zero and more importantly their cash doesn't cover their bills, the stockholders will either have to finance the company or it will be put up for sale. This is why employees don't always like having large stockholdings in companies, especially ones the size of Apple as it can come with heavy financial responsibility when it does badly.
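    A straight-line sketch of why that looks so dire, using just the figures above (real outcomes depend on cash flow, write-downs and any financing, so treat this as illustrative only):

```python
# Straight-line runway estimate from the quoted balance-sheet figures.
# Illustrative only: ignores cash position, financing and changes in the loss rate.
assets = 3.9e9
liabilities = 3.5e9
quarterly_loss = 76e6

equity = assets - liabilities               # ~$0.4b of stockholder equity
quarters_to_zero = equity / quarterly_loss  # ~5 quarters at the current loss rate
print(f"Equity: ${equity / 1e9:.1f}b, about {quarters_to_zero:.1f} quarters to zero")
```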

    AMD's losses do seem to be slowing down but they could be as little as a year away from bankruptcy. Everything is spiralling down, they are cutting marketing, R&D, increasing liabilities, selling property/assets, they outsourced their chip manufacturing to Globalfoundries in 2009 and that comes with its own problems:

    http://www.zdnet.com/amd-amends-globalfoundries-deal-to-pay-320-million-7000008443/

    They have no mobile presence at all, unlike NVidia. NVidia's stockholder equity is over 10x AMD's. NVidia actually has enough money to buy AMD. I think they'd be allowed to do that kind of purchase because they are still competing with Intel, who are the market leader. NVidia and AMD together against Intel would surely give them a little hotter competition because NVidia would be able to ship x86 machines to compete with Intel.
  • Reply 913 of 1320
    nht Posts: 4,522 member

    Quote:

    Originally Posted by Marvin View Post

    They have no mobile presence at all, unlike NVidia. NVidia's stockholder equity is over 10x AMD's. NVidia actually has enough money to buy AMD. I think they'd be allowed to do that kind of purchase because they are still competing with Intel, who are the market leader. NVidia and AMD together against Intel would surely give them a little hotter competition because NVidia would be able to ship x86 machines to compete with Intel.

    Yah but...you're at the point where you wonder if Huang and his board would even want to bother.  Huge risk for nVidia and they'd probably be forced to sell the ATI portion to someone anyway.

    I dunno, they just got more console business but this was business that nVidia walked away from...and ATI had both the Wii and the 360 so it's kinda a wash. Are folks really all that excited about Temash and Kabini?

    Kaveri delayed to 2014 (yah, okay they say they always planned '14 availability). Assuming Intel's 14nm process isn't in complete disarray they're going to get hammered.  Especially with 14nm Atom in the mix in Q2 2014.

    LOL...14nm Atom iPad design win in 2014 doesn't sound so outlandish anymore.

    http://www.eweek.com/pc-hardware/intel-may-speed-up-atom-production-report/

  • Reply 914 of 1320
    drblank Posts: 3,385 member


    I just found this product as the perfect companion to the new Mac Pro.  I'm sure there will be similar products tailored for Thunderbolt 2.  But check out the Netstor NA333TB.

    It has 16 drive bays AND 3 PCI slots all in one box. Two birds with one stone.

  • Reply 915 of 1320
    wizard69 Posts: 13,377 member
    Marvin wrote: »
    The Ivy Bridge architecture has been out for over a year now and is also just a die-shrink of Sandy Bridge. Haswell is the new architecture. There's a test here that showed a small increase in performance using compiler options with Sandy Bridge but zero and in some cases a downgrade with Ivy Bridge:
    Haswell is a new architecture, but the stress is on power efficiency, not computational performance. As such it isn't a huge step above Ivy Bridge performance-wise. That isn't bad at all, though, as it gives us MacBook Airs that just run circles around last year's while running on battery.
    http://www.phoronix.com/scan.php?page=article&item=intel_ivy_tuning&num=2

    If this was Haswell-EP, there would be a possibility of seeing a performance boost from the new architecture. I think the only possibility here is if Apple manages to run the CPU at a higher clock speed due to their cooling solution. I think these scores aren't all that bad though as long as the price points are more reasonable.
    It's noticeable; 15 minute jobs go down to 10 minutes. When it comes to Cinebench scores, 50% starts to look like a bigger difference the higher up the scores get because the numbers get further apart. The focus should be on the percentage difference and not the numbers themselves.
    For most users the performance should be much better than past hardware.
    If Apple could leverage Thunderbolt to chain machines transparently, that would largely make any complaints about lower performance in one machine redundant. All people would have to do is buy 2 or more machines, plug them in and enable compute sharing. If they were able to virtualize the hardware to avoid software license issues, that would be even better but a lot of software has unlimited core licenses.
    It will be interesting to see if Apple does anything with clustering. Sadly I think they have abandoned it for good.

    It's at least there to encourage BTO purchases of the SSD and GPUs. Rather than buy the entry model and get your own NVidia GPU on the cheap, you have to get Apple's options. The lack of upgradeability will encourage buying new machines too, even if it wasn't intentional. This is from the company that glued the screen on the iMac though so my guess is it was intentional.
    I think it is a realization of where technology is taking Apple. We are quickly coming to the point where integration will mean add-in GPUs are a thing of the past. The only machines likely to offer such features are workstations like the Mac Pro, and even these machines will suffer from a why-bother mentality. If a "pro" keeps the new Mac Pro 3-4 years, trying to upgrade with a new GPU will be silly, as you will be putting GPUs into dated hardware.
    I think Mac Pro owners have convinced themselves over the years that Apple was giving them special treatment by keeping them upgradeable but they inflated the margins first. By locking down the upgrades, Apple can get better profits that way and that could give them the freedom to hit a lower entry price point. At the very least, I think the new Mac Pros will offer more performance value for the money spent.
    They better. As to the so-called "pros" out there, I don't think a lot of them really know what they want. They are simpletons that look at what worked for them in the past and can't manage to grasp an improved future.
    Ideally both but Intel seems to get better results from core-count for tasks that use all the cores. Clock speed increases probably increase temperatures faster than more cores at lower clocks. Certainly for a number of jobs that use very few cores, CPUs that can be clocked higher will perform better.
    The thermal limiting of the many-core models is something that 14nm should deal with fairly well. We might not get more cores, but we should at the very least get faster cores.
    If AMD keeps racking up losses like they did last quarter, they will be bankrupt soon. They have $3.9b assets, $3.5b liabilities and they made a loss last quarter of $76m.
    That is actually damn good for AMD. $76 million sounds like a lot to us grunts working for a wage, but for a company the size of AMD it is real close to being in the black.
    If their stockholder equity goes below zero and more importantly their cash doesn't cover their bills, the stockholders will either have to finance the company or it will be put up for sale. This is why employees don't always like having large stockholdings in companies, especially ones the size of Apple as it can come with heavy financial responsibility when it does badly.
    I can see them moving forward out of this funk, but it requires that the economy take off again, which won't happen with the current administration in Washington. It is hard to believe, but people have gotten even tighter with money around here; I fully expect the economy to slow even more. This isn't all AMD's fault, as even Intel is feeling the pain right now.
    AMD's losses do seem to be slowing down but they could be as little as a year away from bankruptcy. Everything is spiralling down, they are cutting marketing, R&D, increasing liabilities, selling property/assets, they outsourced their chip manufacturing to Globalfoundries in 2009 and that comes with its own problems:
    The GlobalFoundries deal happened a long time ago. It is what AMD is doing now that will either make or break the company. I think they have a chance. Slim, maybe, but they have a chance. However, the big problem is factors outside of their control: the economy, the rise of ARM and, with it, mobile computing. They have to adapt to these new realities, and frankly they are trying.
    ??? AMD has perfectly good mobile solutions. Apple isn't using them this year, but Apple is just as likely to drop NVidia for the next round of hardware. Beyond that, AMD has been very successful with Brazos, which has handily beaten Atom in many design wins.
    NVidia's stockholder equity is over 10x AMD's. NVidia actually has enough money to buy AMD. I think they'd be allowed to do that kind of purchase because they are still competing with Intel, who are the market leader. NVidia and AMD together against Intel would surely give them a little hotter competition because NVidia would be able to ship x86 machines to compete with Intel.

    In some ways I see NVidia as being on the right track trying to do ARM right. The days of x86 are slowly fading away and frankly I'm not sure Intel can do anything about it. If Apple came out with an ARM-based laptop we would know that Intel's days are numbered. AMD has also been making noise about ARM and frankly that looks like a case of seeing the writing on the wall.
  • Reply 916 of 1320
    wizard69 Posts: 13,377 member
    nht wrote: »
    Yah but...you're at the point where you wonder if Huang and his board would even want to bother.  Huge risk for nVidia and they'd probably be forced to sell the ATI portion to someone anyway.
    Plus they rightly see a future world where Intel or x86 isn't the big deal it has been in the past. There is a lot of focus on the condition of AMD but Intel could find itself in a similar situation depending upon how the market evolves. They are doing everything they can to make Atom a success but that success isn't a given at this stage.
    I dunno, they just got more console business but this was business that nVidia walked away from...and ATI had both the Wii and the 360 so it's kinda a wash. Are folks really all that excited about Temash and Kabini?
    AMD gets a little more respect outside the Mac world. The big problem they have is that they are quickly losing their GPU advantage. That is huge even though people underestimate just how important GPUs are for modern operating systems.
    Kaveri delayed to 2014 (yah, okay they say they always planned '14 availability). Assuming Intel's 14nm process isn't in complete disarray they're going to get hammered.  Especially with 14nm Atom in the mix in Q2 2014.
    If they go with TSMC, that could also be a 14nm-class part. AMD does have to take a chance here with respect to TSMC.
    LOL...14nm Atom iPad design win in 2014 doesn't sound so outlandish anymore.
    It would be a joke really. Intel has been caught red-handed offering up performance figures that are for the most part bogus. Atom is still a hot chip and carries a lot of x86 baggage with it.

    Intel is feeling the heat just like AMD is. Their balance sheet is still in the black though. The question is whether they can build a generic processor that meets the needs of tablet and other device manufacturers and, more importantly, compete with Apple. If Apple has a 64-bit version of its A-series processors available in 2014 it could be a difficult landscape for Intel.
  • Reply 917 of 1320
    v5v Posts: 1,357 member

    Quote:

    Originally Posted by wizard69 View Post

    As to the so-called "pros" out there, I don't think a lot of them really know what they want. They are simpletons that look at what worked for them in the past and can't manage to grasp an improved future.

    I don't know which bothers me more... the arrogance you exhibit with this kind of pontification or the ignorance you betray while doing it. You really believe that pros don't know how to manage their own businesses, and you know better than they what's good for them? Wow. It must be nice to be omniscient.

    Enjoy the bozo bin.

  • Reply 918 of 1320
    wizard69 Posts: 13,377 member
    v5v wrote: »
    I don't know which bothers me more... the arrogance you exhibit with this kind of pontification or the ignorance you betray while doing it.
    It is neither arrogance nor ignorance; what I say is the result of observations made over time. You should note that I was careful not to include all pros, just a "lot" of them. In any event it is pretty clear that a lot of "pros" don't understand the technology they work with on a daily basis. You may personally, but if you are honest with yourself you will find many around you that don't have a clue.
    You really believe that pros don't know how to manage their own businesses, and you know better than they what's good for them?
    Never said that. I'm simply pointing out the fact that many pros don't understand the technology they are working with. As for managing a business, many idiots do that every day; management isn't about being the smartest person on the block, it is a collection of skills that is hard to quantify.
    Wow. It must be nice to be omniscient.
    It has nothing to do with being omniscient; it has to do with many observations of people that call themselves pros. Maybe English is your second language, so you don't grasp the less-than-inclusive use of the word "lot". It doesn't mean that every pro is ignorant about the technology they use on a daily basis, just that a good portion is.
    Enjoy the bozo bin.
    Bye bye!

    Hopefully you will take a chill pill and realize how foolish you have been here.
  • Reply 919 of 1320
    nht Posts: 4,522 member

    Quote:

    Originally Posted by wizard69 View Post

    Plus they rightly see a future world where Intel or x86 isn't the big deal it has been in the past. There is a lot of focus on the condition of AMD but Intel could find itself in a similar situation depending upon how the market evolves. They are doing everything they can to make Atom a success but that success isn't a given at this stage.

    There's a huge performance gap between Intel and ARM and Intel has closed the power gap faster than ARM has increased performance.

    They weren't doing everything to make Atom a success until this year, when they actually got it on the current process as opposed to lagging.

    AMD's primary disadvantage was the brain drain when they made a series of bad moves.

    Quote:

    AMD gets a little more respect outside the Mac world. The big problem they have is that they are quickly losing their GPU advantage. That is huge even though people underestimate just how important GPUs are for modern operating systems.

    You hugely overstate the importance of the GPU with respect to the OS.

    Quote:

    It would be a joke really. Intel has been caught red-handed offering up performance figures that are for the most part bogus. Atom is still a hot chip and carries a lot of x86 baggage with it.

    There's a dual-core 7W TDP Haswell and a 15W TDP quad i7.

    The Bay Trail parts appear to be very good contenders for Q4 of this year.  Intel got the Galaxy Tab 3 design win this year with the 32nm Z2560.  A little slow but at 32nm it's about what would be expected.

    Quote:

    Intel is feeling the heat just like AMD is. Their balance sheet is still in the black though. The question is whether they can build a generic processor that meets the needs of tablet and other device manufacturers and, more importantly, compete with Apple. If Apple has a 64-bit version of its A-series processors available in 2014 it could be a difficult landscape for Intel.

    Why?  It's not as if you're going to run a MBP on ARM.  Or even the MBA.  On the other hand, if the 14nm Core i3 can hit a 5W TDP down from 7W there's a lot of performance per watt there.  And Apple doesn't sell their chips to anyone.  Intel doesn't compete with them at all, but with Samsung, Qualcomm and nVidia.

    Let's see how good the Haswell convertibles are.  North Cape looked cool.  I'd seriously love a MBA that did that.
  • Reply 920 of 1320
    drblank Posts: 3,385 member

    Quote:

    Originally Posted by nht View Post

    There's a huge performance gap between Intel and ARM and Intel has closed the power gap faster than ARM has increased performance.

    They weren't doing everything to make Atom a success until this year, when they actually got it on the current process as opposed to lagging.

    AMD's primary disadvantage was the brain drain when they made a series of bad moves.

    You hugely overstate the importance of the GPU with respect to the OS.

    There's a dual-core 7W TDP Haswell and a 15W TDP quad i7.

    The Bay Trail parts appear to be very good contenders for Q4 of this year.  Intel got the Galaxy Tab 3 design win this year with the 32nm Z2560.  A little slow but at 32nm it's about what would be expected.

    Why?  It's not as if you're going to run a MBP on ARM.  Or even the MBA.  On the other hand, if the 14nm Core i3 can hit a 5W TDP down from 7W there's a lot of performance per watt there.  And Apple doesn't sell their chips to anyone.  Intel doesn't compete with them at all, but with Samsung, Qualcomm and nVidia.

    Let's see how good the Haswell convertibles are.  North Cape looked cool.  I'd seriously love a MBA that did that.

    I think it's a safe assumption that Apple won't use i3 chips.  They only use i5 and i7 chips in their laptops and desktops. I don't think Apple is even interested in i3s; they aren't that desperate for sales to compete at the i3 level.  Most of the dirt-cheap PC laptops are i3, and that's a market Apple doesn't want to play in because there is no room for decent profits.  Personally, I think Intel should raise their standards for processors and not destroy manufacturers' ability to make a decent profit.  These companies can't survive selling $400 laptops.
