Intel's 'Core M' chip announcement suggests Broadwell-based MacBook Pros won't arrive until 2015


Comments

  • Reply 61 of 112
    wizard69 Posts: 13,377 member
    When is this?


    Well, that is hard to say exactly, but this may interest you: http://www.anandtech.com/show/8217/intels-knights-landing-coprocessor-detailed. This chip is for 2015, and with an accelerated trickle-down to conventional chips we could see this in desktop chips in three years. However, you also need to consider Haswell chips with Iris Pro, which use a slightly different technology to deal with the same issue of memory performance. So the basic concept of putting memory in the package to get past the bandwidth limits of slow DRAM is already with us. The difference with Knights Landing is that the on-package RAM serves as application or main memory.

    Even if we don't see RAM in the processor modules, in three years, I would expect that high performance memory systems will be soldered in.

    As a side note, Knights Landing could make one hell of a Mac Pro chip. Maybe not ideal for everyday Mac Pro usage, but in some situations it would be awesome.
  • Reply 62 of 112
    hmm Posts: 3,405 member

     

    Quote:

    Originally Posted by wizard69 View Post

    Even if we don't see RAM in the processor modules, in three years, I would expect that high performance memory systems will be soldered in.

     


    What do you mean by RAM in the processor modules? Are you referring to a growth in cache/SRAM to the point where DRAM is obsolete?

  • Reply 63 of 112
    wizard69 wrote: »
    Well, that is hard to say exactly, but this may interest you: http://www.anandtech.com/show/8217/intels-knights-landing-coprocessor-detailed. This chip is for 2015, and with an accelerated trickle-down to conventional chips we could see this in desktop chips in three years. However, you also need to consider Haswell chips with Iris Pro, which use a slightly different technology to deal with the same issue of memory performance. So the basic concept of putting memory in the package to get past the bandwidth limits of slow DRAM is already with us. The difference with Knights Landing is that the on-package RAM serves as application or main memory.

    Even if we don't see RAM in the processor modules, in three years, I would expect that high performance memory systems will be soldered in.

    As a side note, Knights Landing could make one hell of a Mac Pro chip. Maybe not ideal for everyday Mac Pro usage, but in some situations it would be awesome.

    Skylake, I believe.

    If you think people are whining about change now, wait until post-Cannonlake chips drop silicon in favor of indium antimonide. :p  

    Thanks. Sounds interesting. However, it also sounds a bit like "we are working on a tricorder and hope to release it within a year or two". Honestly, I once met a director of a science institute who told me this in all seriousness. A lot of nice concepts... but that was all. I'm not saying Intel will not deliver. It just seems that the technological envelope is being pushed harder, so putting nice concepts into practice carries a significant amount of risk. Just take a look at the G5, and today Broadwell, etc.
    maybe they just need someone who tells them: ;)
  • Reply 64 of 112
    wizard69 Posts: 13,377 member
    hmm wrote: »
    What do you mean by RAM in the processor modules? Are you referring to a growth in cache/SRAM to the point where DRAM is obsolete?

    No, I mean high-speed RAM mounted right in the same module as the processor silicon and closely tied to the processor through a very high-speed bus. The goal is far higher performance out of the RAM.
  • Reply 65 of 112
    wizard69 Posts: 13,377 member
    Sorry about some of the autocorrect typos in my earlier post!

    Thanks. Sounds interesting. However, it also sounds a bit like "we are working on a tricorder and hope to release it within a year or two".
    Well yeah, sometimes it does sound that way. However, you have to realize a few things. One is that going out to RAM slows a processor down significantly today. There is a lot of contention for access to RAM, and more cores and better GPUs integrated into the SoC just make that bandwidth issue worse. The short-term solution for desktop, and probably mobile, is DDR4, which is actually here now in some hardware. Longer term, going faster requires the memory to be very close to the processor.

    This is where things like Memory Cube technology become very interesting, as the high density and low power combined with high performance make for a very compelling solution to the bandwidth problem to RAM. We probably won't see such tech in the desktop for a while, but I see it as inevitable, and maybe closer than I think for something like a Mac Pro.

    As DDR4-supporting hardware rolls out more generally next year, expect to see all sorts of benchmarking exploring the increased bandwidth. Of course the faster RAM has to actually arrive, but the impact on APU-based machines should be very noticeable.
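    To put a number on that bandwidth ceiling, here is a minimal, hypothetical Python/NumPy sketch (not a rigorous benchmark, and not tied to any particular DDR3 or DDR4 part): a STREAM-style add kernel over arrays far larger than any cache is limited almost entirely by how fast the memory system can move bytes, which is exactly the wall on-package memory is meant to push back.

```python
# Minimal sketch, not a rigorous benchmark: estimate effective memory bandwidth
# with a STREAM-style "add" kernel over arrays far larger than any on-chip
# cache, so the timing is dominated by DRAM traffic rather than arithmetic.
import time

import numpy as np

N = 50_000_000                       # three ~400 MB float64 arrays, well past any cache
a = np.zeros(N)
b = np.random.rand(N)
c = np.random.rand(N)

start = time.perf_counter()
np.add(b, c, out=a)                  # reads b and c, writes a, no temporaries
elapsed = time.perf_counter() - start

bytes_moved = 3 * N * 8              # two arrays read plus one written, 8 bytes each
print(f"effective bandwidth ~ {bytes_moved / elapsed / 1e9:.1f} GB/s")
```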
    Honestly, I once met a director of a science institute who told me this in all seriousness. A lot of nice concepts... But that was all.
    A lot of the technology for a tricorder and other Star Trek tech is already here; it just needs to be integrated into a device. Take the iPad for example, as it is certainly an analog to many things in that universe. In some ways an iPad is already more sophisticated than anything imagined for a tricorder. It is certainly less bulky than the originals, and frankly it wouldn't take much to add some of the I/O to deliver the analytical functionality.

    Think about the analytical capabilities on the space buggies on Mars. Much of that stuff is extremely compact.
    I'm not saying Intel will not deliver. It just seems that the technological envelope is being pushed harder, so putting nice concepts into practice carries a significant amount of risk. Just take a look at the G5, and today Broadwell, etc.
    maybe they just need someone who tells them: ;)

    There is always risk. Also, there is no substitute for a working product in your hands to test. I noticed in another forum today that Intel has apparently made a stealth release of more Broadwell M chips. Some are speculating that they could go into a new MacBook Air-like machine. That is possible, but one has to be careful with Intel's marketing materials.

    For example, the processors are being described as 4.5-watt devices. However, there are some benchmarks floating about where Intel runs the processor at 6 watts. Also, it is unknown just how much the processor will throttle in a fanless design. The point is, the proof is in the pudding, and we need Apple to ship something to determine whether performance will be acceptable.

    Technology is certainly interesting but it is far more interesting in a shipping product.
  • Reply 66 of 112
    hmm Posts: 3,405 member
    Quote:

    Originally Posted by wizard69 View Post

    No, I mean high-speed RAM mounted right in the same module as the processor silicon and closely tied to the processor through a very high-speed bus. The goal is far higher performance out of the RAM.



    So on the CPU package but just not on the die itself?

  • Reply 67 of 112
    tht Posts: 5,421 member
    Quote:

    Originally Posted by hmm View Post



    So on the CPU package but just not on the die itself?


     

    Yeah. Standard fare. Intel already does this with "CrystalWell", which has been shipping for a year now. Anything Haswell with GT3e or Iris Pro graphics is a Haswell CPU/GPU with 128 MB of eDRAM on package. One is shipping in the 21.5" iMac.

     

    Intel did it with the Pentium Pro 20 years ago. Back then, L2 cache was off-package; you could upgrade your L2 cache. With the Pentium Pro, Intel put a high speed SRAM L2 cache on the package. Time marches on, and lower levels of memory get moved onto the die and onto the package. I'm not sure if it'll be common practice as everything is hypersensitive to power consumption these days, but it is certainly an option for desktop and laptop systems.

     

    Apple does it with their ARM SoCs, but it isn't "high speed". Just a space savings convenience and the memory bandwidth is just your normal multi-channel multi-data rate memory interface.
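    To see why an on-package level like that helps at all, here is a minimal, hypothetical sketch of the textbook average-memory-access-time (AMAT) calculation; the latencies and hit rates are invented for illustration and are not measured Crystalwell or Haswell figures.

```python
# Hypothetical sketch of average memory access time (AMAT): every access that
# reaches a level pays that level's latency, and misses fall through to the
# next, slower level. All numbers below are illustrative assumptions only.
def amat(levels):
    """levels: (latency_ns, hit_rate) ordered nearest to farthest;
    the last level must have hit_rate 1.0."""
    total, reach = 0.0, 1.0
    for latency_ns, hit_rate in levels:
        total += reach * latency_ns      # accesses that got this far pay this latency
        reach *= 1.0 - hit_rate          # the rest continue to the next level
    return total

# L1, L2, L3, then DRAM -- versus the same hierarchy plus an on-package eDRAM "L4"
without_edram = [(1, 0.95), (4, 0.80), (15, 0.40), (80, 1.0)]
with_edram    = [(1, 0.95), (4, 0.80), (15, 0.40), (35, 0.80), (80, 1.0)]

print(f"AMAT without eDRAM: {amat(without_edram):.2f} ns")
print(f"AMAT with eDRAM:    {amat(with_edram):.2f} ns")
```

    Workloads whose working sets spill well past the L3, graphics especially, tend to see a much larger swing than these made-up numbers suggest, which is where the GT3e/Iris Pro eDRAM earns its keep.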

  • Reply 68 of 112
    wizard69 Posts: 13,377 member
    hmm wrote: »

    So on the CPU package but just not on the die itself?

    Yes, effectively a multi-chip module.
  • Reply 69 of 112
    hmm Posts: 3,405 member
    Quote:

    Originally Posted by THT View Post

     

     

    Yeah. Standard fare. Intel already does this with "CrystalWell", which has been shipping for a year now. Anything Haswell with GT3e or Iris Pro graphics is a Haswell CPU/GPU with 128 MB of eDRAM on package. One is shipping in the 21.5" iMac.

     

    Intel did it with the Pentium Pro 20 years ago. Back then, L2 cache was off-package; you could upgrade your L2 cache. With the Pentium Pro, Intel put a high speed SRAM L2 cache on the package. Time marches on, and lower levels of memory get moved onto the die and onto the package. I'm not sure if it'll be common practice as everything is hypersensitive to power consumption these days, but it is certainly an option for desktop and laptop systems.

     

    Apple does it with their ARM SoCs, but it isn't "high speed". Just a space savings convenience and the memory bandwidth is just your normal multi-channel multi-data rate memory interface.


    I was unaware of those details with the Pentium Pro, mostly because I didn't follow computing very closely until the very late 90s, which seems odd in retrospect. I was aware of it in Apple's SoCs.

     

    Quote:

    Originally Posted by wizard69 View Post

    Yes, effectively a multi-chip module.



    Got it, but that may still be a while. Apple went that route to conserve height. With the Mini they reuse design work wherever possible, so I suspect that led to the soldered RAM.

  • Reply 70 of 112
    wizard69 Posts: 13,377 member
    tht wrote: »
    Yeah. Standard fare. Intel already does this with "CrystalWell", which has been shipping for a year now. Anything Haswell with GT3e or Iris Pro graphics is a Haswell CPU/GPU with 128 MB of eDRAM on package. One is shipping in the 21.5" iMac.
    In this case CrystalWell is effectively a high-speed cache chip. This isn't the same thing as system memory, but the impact on performance is due to it being faster than DDR3 RAM.

    What I find shocking (if that is the right word) is that 128 MB of cache RAM is more memory than some of my first few computers had altogether. The industry certainly has come a long way in one lifetime.
    Intel did it with the Pentium Pro 20 years ago. Back then, L2 cache was off-package; you could upgrade your L2 cache. With the Pentium Pro, Intel put a high speed SRAM L2 cache on the package. Time marches on, and lower levels of memory get moved onto the die and onto the package. I'm not sure if it'll be common practice as everything is hypersensitive to power consumption these days, but it is certainly an option for desktop and laptop systems.
    The funny thing is you never heard complaints about cache memory moving on-die or into the package back then. If I remember correctly, cache RAM was optional on some motherboards. It is just funny the reaction some have to RAM being soldered onto a motherboard when the ability to install or upgrade a cache RAM array went away years ago.

    The other reality here is that modern processors can have many buffers and caches on chip to help deal with the impact of slow RAM. That works well for the current crop of mainstream processors, but the slow path to RAM really impacts performance when you have a lot of cores running.
    Apple does it with their ARM SoCs, but it isn't "high speed". Just a space savings convenience and the memory bandwidth is just your normal multi-channel multi-data rate memory interface.

    This is a slightly different type of implementation and is more along the lines of what I expect Intel is doing with the next Xeon Phi. That is, the RAM is RAM, not a cache implementation. The Apple processors implement 1 GB of RAM, while the apparent goal with Xeon Phi is to have 16 GB of RAM available in the processor module.

    It should be noted that Apple's approach with the cell phone processors has other positives. For one, you reduce the pin count to the outside world, which is part of the space-saving advantage. You also have tighter control over the electrical interface. There is also more than one approach to stacking dies in a package like this, each with its own performance characteristics and assembly issues.
  • Reply 71 of 112
    Quote:

    Originally Posted by wizard69 View Post

    Sorry about some of the autocorrect typos in my earlier post!

    Well yeah, sometimes it does sound that way. However, you have to realize a few things. One is that going out to RAM slows a processor down significantly today. There is a lot of contention for access to RAM, and more cores and better GPUs integrated into the SoC just make that bandwidth issue worse. The short-term solution for desktop, and probably mobile, is DDR4, which is actually here now in some hardware. Longer term, going faster requires the memory to be very close to the processor.

    This is where things like Memory Cube technology become very interesting, as the high density and low power combined with high performance make for a very compelling solution to the bandwidth problem to RAM. We probably won't see such tech in the desktop for a while, but I see it as inevitable, and maybe closer than I think for something like a Mac Pro.

    As DDR4-supporting hardware rolls out more generally next year, expect to see all sorts of benchmarking exploring the increased bandwidth. Of course the faster RAM has to actually arrive, but the impact on APU-based machines should be very noticeable.

    A lot of the technology for a tricorder and other Star Trek tech is already here; it just needs to be integrated into a device. Take the iPad for example, as it is certainly an analog to many things in that universe. In some ways an iPad is already more sophisticated than anything imagined for a tricorder. It is certainly less bulky than the originals, and frankly it wouldn't take much to add some of the I/O to deliver the analytical functionality.

    Think about the analytical capabilities on the space buggies on Mars. Much of that stuff is extremely compact.

    There is always risk. Also, there is no substitute for a working product in your hands to test. I noticed in another forum today that Intel has apparently made a stealth release of more Broadwell M chips. Some are speculating that they could go into a new MacBook Air-like machine. That is possible, but one has to be careful with Intel's marketing materials.

    For example, the processors are being described as 4.5-watt devices. However, there are some benchmarks floating about where Intel runs the processor at 6 watts. Also, it is unknown just how much the processor will throttle in a fanless design. The point is, the proof is in the pudding, and we need Apple to ship something to determine whether performance will be acceptable.

    Technology is certainly interesting but it is far more interesting in a shipping product.



    Thanks for the elaborate answer. 

     

    Regarding the tricorder: I did not want to come across as judgmental. For sure, you have to aim high and have a lot of stamina if you want to develop breakthrough products and/or technologies. Just take the iPhone, e.g. starting with the Apple personal assistant concepts, then the Newton, etc. Also, if I am not mistaken, the iPhone was a "side product" of the iPad before priorities were shifted. This kind of energy and focus over several years deserves some deep respect, especially if your company is listed on the stock market and therefore a quick ROI is expected. It is also quite easy from the consumer side to expect step changes in tech on a regular basis.

     

    Regarding risk: From my professional experience I see a clear correlation between the level of innovation and the risk that something does not work as expected, be it from a design perspective, from a manufacturing perspective, or, increasingly, from a system-interaction perspective. Risk mitigation is often neglected, especially when the time to market is shrinking and marketing is already promising the hell out of the new product. The proclaimed 4.5 W TDP (@ 800 MHz, wasn't it?) could be such an example. But like you said, it is all pudding and we have to wait and see when a final product ships. :-)

  • Reply 72 of 112
    wizard69 Posts: 13,377 member

    Thanks for the elaborate answer. 

    Regarding the tricorder: I did not want to come across as judgmental. For sure, you have to aim high and have a lot of stamina if you want to develop breakthrough products and/or technologies. Just take the iPhone, e.g. starting with the Apple personal assistant concepts, then the Newton, etc. Also, if I am not mistaken, the iPhone was a "side product" of the iPad before priorities were shifted.
    It is too bad there isn't a more public history of the development of iOS and iPads/iPhones. What is really interesting here is that the detour into the iPhone gave time to debut the iPad with much better hardware, which I see as a significant factor in the success of the iPad. If the iPad had shipped with the first generation of iPhone processors, it would have struggled to gain consumer acceptance.
    This kind of energy and focus over several years deserves some deep respect, especially if your company is listed on the stock market and therefore a quick ROI is expected. It is also quite easy from the consumer side to expect step changes in tech on a regular basis.
    As a consumer, I sometimes see Apple drag its feet, and this causes me concern as a stockholder. The new iPad Air 2 is an example of Apple pulling out all the stops. The iPod touch and other items are examples of Apple ignoring a product so much that it can't possibly succeed in the marketplace. Sometimes it would be better if Apple killed off a product rather than ignore it.
    Regarding risk: From my professional experience I see a clear correlation between the level of innovation and the risk that something does not work as expected, be it from a design perspective, from a manufacturing perspective, or, increasingly, from a system-interaction perspective. Risk mitigation is often neglected, especially when the time to market is shrinking and marketing is already promising the hell out of the new product. The proclaimed 4.5 W TDP (@ 800 MHz, wasn't it?) could be such an example. But like you said, it is all pudding and we have to wait and see when a final product ships. :-)

    I'd love to see an MBA that is fanless and a step up performance-wise relative to what we have now, but I don't see this chip providing that sort of solution. Maybe I'm wrong here, but I've been waiting for the MBA to hit the performance level I want, and frankly Broadwell may do the trick at the current power levels.
  • Reply 73 of 112
    tht Posts: 5,421 member
    Quote:

    Originally Posted by wizard69 View Post



    In this case CrystalWell is effectively a high-speed cache chip. This isn't the same thing as system memory, but the impact on performance is due to it being faster than DDR3 RAM.

     

    True, but I view it as more of a semantics game. A computing system has pools of memory, with the smallest but fastest pool closest to the CPU logic. The pools get bigger but slower the further away they get from the CPU.

     

    Storage, as in HDD and SSD storage, is memory. It is a lot further away from the CPU, but is ~10 times bigger than system memory (RAM). It affects the performance of a computing system, as anyone who's done an upgrade from an HDD to an SSD will attest. Heck, the "cloud" is another level of memory available to you, and obviously it's slow, but faster access to the cloud will come into play as more and more data is stored there.
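    To put rough, illustrative numbers on that ladder of pools (orders of magnitude only; the values vary widely by system and are assumptions here, not measurements of any Mac), a quick sketch:

```python
# Rough, illustrative orders of magnitude for the "pools of memory" idea:
# each level is bigger but slower than the one before it. Ballpark values
# only; they vary widely by system and are not measurements of any Mac.
MEMORY_LADDER = [
    ("L1 cache",         "tens of KB",             "~1 ns"),
    ("L2 cache",         "hundreds of KB",         "~3-5 ns"),
    ("L3 cache",         "several MB",             "~10-30 ns"),
    ("on-package eDRAM", "~128 MB",                "~30-50 ns"),
    ("DRAM",             "several GB",             "~60-100 ns"),
    ("SSD read",         "hundreds of GB",         "~100 us"),
    ("HDD seek",         "terabytes",              "~10 ms"),
    ("cloud round trip", "effectively unbounded",  "tens to hundreds of ms"),
]

for level, size, latency in MEMORY_LADDER:
    print(f"{level:<18} {size:<24} {latency}")
```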

     

    With higher transistor densities, all these levels of memory get closer and faster access to the CPU/GPU. Like L2 cache, the memory controller used to be off-die, and it is now on-die. (Intel implemented some MCMs with Atom where the PCH/MCH was on-package, but not on-die I think, so they went through that stage too). I/O controllers used to be off-package, now they are on-package or on-die.

     

    The modern Mac has basically moved the memory levels and memory access closer to the CPU/GPU than ever before for PCs. L3 cache on-die. Memory controller on-die. SSDs connected by way of PCIe I/O that's on-die.

     

    Apple iPhone SoCs have RAM that is on-package. If Apple made CPU/GPUs for their Macs, I would bet, like you, that RAM for some of their systems would be on-package sooner rather than later. But they have to wait on Intel.

     

    Lastly, storage will eventually be on-package as well: a triple stack of chips in the package, where the CPU, GPU, higher levels of cache, memory controller, and I/O (including the SSD controller) are all in one chip, RAM chips on top of that, then SSD flash chips on top of that, with the hottest chips closest to the surface.

     

    If you look at the Watch S1 "computer-in-package", they are almost all the way there. It's a PCB with a multitude of chips, so it's more of a miniature logic board than an MCM, but the whole thing is encased in some kind of resin and, in effect, appears to be the lone "package" in the system. I can see them maintaining the interfaces and shape of it so that Watch upgrades become easy. Four years down the road, the thing will have 2x to 4x the CPU, GPU, memory, storage, and radio performance, in the same basic "package".

     

    If they start moving that type of design (encase the PCB in resin) to iPhones, hmm...

     


    What I find shocking (if that is the right word) is that 128 MB of cache RAM is more memory than some of my first few computers had altogether.


     

    Yup. I feel old, and I don't think I'm that old. Way back, "storage" used to be mobile and we carried it around with us.

  • Reply 74 of 112
    wizard69 Posts: 13,377 member
    tht wrote: »
    True, but I view it as more of a semantics game. A computing system has pools of memory, with the smallest but fastest pool closest to the CPU logic. The pools get bigger but slower the further away they get from the CPU.
    What is neat is that 128 MB of RAM is still enough to keep the core of an app in that cache. In some cases the on-chip caches are big enough to keep the core of an app on chip. This has a very profound impact on performance.

    Storage, as in HDD and SSD storage, is memory. It is a lot further away from the CPU, but is ~10 times bigger than system memory (RAM). It affects the performance of a computing system, as anyone who's done an upgrade from an HDD to an SSD will attest. Heck, the "cloud" is another level of memory available to you, and obviously it's slow, but faster access to the cloud will come into play as more and more data is stored there.
    Yeah, it is storage, but there is a big difference in the way it is addressed.
    With higher transistor densities, all these levels of memory get closer and faster access to the CPU/GPU. Like L2 cache, the memory controller used to be off-die, and it is now on-die. (Intel implemented some MCMs with Atom where the PCH/MCH was on-package, but not on-die I think, so they went through that stage too). I/O controllers used to be off-package, now they are on-package or on-die.
    Yep! This is what I was getting at when using the word "closer", as closeness is very important to speeding up a computer. Speed and distance are wedded in computer design: increase one and you effectively lower the other. I believe it was Grace Hopper who came up with the idea of the "light foot", the distance light travels in a nanosecond. This physical reality is why computers have gotten much faster as they have gotten much smaller. It is the fundamental point I'm trying to get across here.
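    The arithmetic behind that "light foot" is simple enough to show in a tiny sketch; the vacuum figure is exact, while the on-board propagation factor is an assumption (signals in copper traces move at very roughly half to two-thirds of c).

```python
# Grace Hopper's "nanosecond" / "light foot": how far a signal could possibly
# travel in one nanosecond. In real copper traces signals propagate at roughly
# half to two-thirds of c (an assumed range here), so the usable distance per
# nanosecond is even shorter -- hence the push to move RAM closer to the CPU.
C = 299_792_458                  # speed of light in a vacuum, m/s
NANOSECOND = 1e-9

distance_m = C * NANOSECOND
print(f"{distance_m:.3f} m (~{distance_m / 0.0254:.1f} inches, about one foot)")
print(f"on a board: maybe {0.5 * distance_m * 100:.0f}-{0.66 * distance_m * 100:.0f} cm per nanosecond")
```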
    The modern Mac has basically moved the memory levels and memory access closer to the CPU/GPU than ever before for PCs. L3 cache on-die. Memory controller on-die. SSDs connected by way of PCIe I/O that's on-die.
    Yes, the constant evolution of computer hardware has always shrunk parts to increase speed. Even on chip there are high-speed caches and low-speed caches. This is due to the ALU core being so fast that accessing the slower, and sometimes farther, parts of the chip leads to real slowdowns. It is actually more complex than that, especially in multi-core processors, but in the end today's RAM is extremely slow relative to what is happening in the processor proper.
    Apple iPhone SoCs have RAM that is on-package. If Apple made CPU/GPUs for their Macs, I would bet, like you, that RAM for some of their systems would be on-package sooner rather than later. But they have to wait on Intel.
    Unfortunately I don't have a crystal ball. I do know what Intel is doing with Xeon Phi and such approaches eventually work their way down to the workstation level. Given that I can only say that the path here is pretty clear and even if you are not up to speed on tech you can look at the history of computers to see where we are still headed.
    Lastly, storage will eventually be on-package as well: a triple stack of chips in the package, where the CPU, GPU, higher levels of cache, memory controller, and I/O (including the SSD controller) are all in one chip, RAM chips on top of that, then SSD flash chips on top of that, with the hottest chips closest to the surface.

    If you look at the Watch S1 "computer-in-package", they are almost all the way there. It's a PCB with a multitude of chips, so it's more of a miniature logic board than an MCM, but the whole thing is encased in some kind of resin and, in effect, appears to be the lone "package" in the system. I can see them maintaining the interfaces and shape of it so that Watch upgrades become easy. Four years down the road, the thing will have 2x to 4x the CPU, GPU, memory, storage, and radio performance, in the same basic "package".
    I was actually at one point wondering if Apple's 2015 debut was to wait for a 14 nm process to pack even more stuff into the watch at an even lower power point. At this point it doesn't look like that is the case. However, we still have a few more process shrinks ahead of us and other techniques to lower power. So we could be looking at iPhone-type power in a watch in a few short years.
    If they start moving that type of design (encase the PCB in resin) to iPhones, hmm...
    That would fix bendgate too. An iPhone that is one solid blob of whatever would be durable, that is for sure. Most potting materials are epoxy-like, so they could even reinforce the mix with glass fiber. The problem is such hardware would be awfully expensive. At least the potted hardware I buy or see at work is.
    Yup. I feel old, and I don't think I'm that old. Way back, "storage" used to be mobile and we carried it around with us.

    Yeah, I was a teenager when I first started reading Byte magazine. That was all I could afford to do for years. Some of my computers came with memory measured in kilobytes, not even megabytes.

    In any event I have a sense of history here, which is why I comment on the new Mac Mini and try to balance all the negativity that is seen in the forums about this machine. People need to realize that we are in another epoch here, where current hardware will quickly become outdated as new hardware comes online supporting the latest standards. Here, DDR4 should be well established by the time anybody seriously thinks about replacing the current Mini with a new one. DDR4-based platforms should provide significant enough improvements over current machines to make upgrading old hardware pretty silly. It can be likened to the move from the 386 to the 486; the step up was so great that old hardware quickly became doorstops.
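    For a sense of the DDR3-to-DDR4 headroom being described, peak theoretical bandwidth is just transfer rate times bus width times channels; the module speeds in this small sketch are common examples chosen as assumptions, and sustained real-world throughput is always lower.

```python
# Peak theoretical DDR bandwidth: transfers per second x 8 bytes per transfer
# (one 64-bit channel) x number of channels. Module speeds below are common
# examples used as assumptions; sustained real-world throughput is lower.
def peak_gb_per_s(megatransfers_per_s, channels=2, bytes_per_transfer=8):
    return megatransfers_per_s * 1e6 * bytes_per_transfer * channels / 1e9

print(f"DDR3-1600, dual channel: {peak_gb_per_s(1600):.1f} GB/s")   # ~25.6 GB/s
print(f"DDR4-2400, dual channel: {peak_gb_per_s(2400):.1f} GB/s")   # ~38.4 GB/s
```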
  • Reply 75 of 112
    wizard69 Posts: 13,377 member
    I'm thinking that some of you may be interested in what Intel is up to with Xeon Phi. In a nutshell, you can get some of the Xeon Phi cards at extremely large discounts if you look around a bit. We are talking somewhere in the range of 200-500 dollars. This discounting apparently is going on until the end of the year.

    One theory has it that they want to get hardware into the hands of developers. My theory is that they have a surplus to zero out before the debut of Knights Landing, or whatever the next-generation Phi is. Obviously this isn't Apple-related unless Apple decides to offer the Mac Pro with a Xeon Phi installed. I just thought that some following this thread might be interested.
  • Reply 76 of 112
    wizard69 Posts: 13,377 member
    We may be disappointed with Intel's Haswell lineup for MacBooks and Minis, but Intel hasn't given up on researching new tech. Here is an article about research into new RAM tech: http://www.taipeitimes.com/News/biz/archives/2014/11/19/2003604752. That tech would be very interesting in a cell phone.

    On another note, things aren't going too well for mobile at Intel. The division for mobile is apparently being merged into another group.
  • Reply 77 of 112
    jexus Posts: 373 member

    Intel's Mobile group is being merged with its PC group, and the combined group will be led by the Ultrabook chief.

     

    This only screams to me that Intel is trying to hide the massive cash drain that their mobile division has become. A $1 billion loss was the latest result for it.

  • Reply 78 of 112
    wizard69 Posts: 13,377 member
    jexus wrote: »
    Intel's Mobile group is being merged with its PC group, and the combined group will be led by the Ultrabook chief.
    The way I see this, Intel has two problems. One is that they're a manufacturer of one-size-fits-all chipsets in an industry that absolutely needs customizable SoCs. I can see Apple eventually moving into a position where it has no choice but to go to ARM in its laptops if Intel isn't willing to customize solutions to Apple's needs. The second issue is that they are too hung up on the past with backwards compatibility. Intel should have come out with a 64-bit architecture that dropped all support for x86 legacy modes: a truly lean 64-bit x86 chip.
    This only screams to me that Intel is trying to hide the massive cash drain that their mobile division has become. A $1 billion loss was the latest result for it.

    Yep, sweeping a failure under the rug. Intel is hurting, as they have laid off people this year, and frankly it is likely to get worse for them. They really have to come to grips with industry needs and also have to get with the program as far as 14 nm products go.
  • Reply 79 of 112
    jexus Posts: 373 member
    Quote:

    Originally Posted by wizard69 View Post

    The way I see this, Intel has two problems. One is that they're a manufacturer of one-size-fits-all chipsets in an industry that absolutely needs customizable SoCs. I can see Apple eventually moving into a position where it has no choice but to go to ARM in its laptops if Intel isn't willing to customize solutions to Apple's needs.

    Yes, this is a huge problem. Intel won't customize chips for anyone but their largest clients, and even then, they seem to have pretty stingy limits.

     

    As for ARM, I see this as another potential way for AMD to encroach on Intel in the Mac line, as AMD is pretty open to customization of IP and, most importantly, offers access to some beefy solutions (both x86 and ARM). I don't know how well the current PowerVR graphics in Apple's mobile chips perform in OpenCL benchmarks, but I do know that AMD's GCN architecture (which will be integrated into K12) is a compute monster. Integrating that with Apple's current ARM design may, IMO, yield interesting benefits on the Mac side. If there is one thing I imagine Apple would be happy to continue to push, OpenCL is one of my votes, at least mobile-wise as per the above. Then again, I could easily see Apple just sticking with PowerVR for mobile anyway, though I'm curious whether they'll reconsider AMD post-Zen for the AIO Macs.

     

    Of course the obvious problem would still be supply. I do remember reading that Apple was strongly considering AMD's Llano chips a few years back but ultimately backed out because AMD couldn't supply the chips in the quantities Apple demanded. They would surely have improved since then, yield-wise?

     

    Quote:


    Yep, sweeping a failure under the rug. Intel is hurting, as they have laid off people this year, and frankly it is likely to get worse for them. They really have to come to grips with industry needs and also have to get with the program as far as 14 nm products go.


    I used to believe that Intel Mobile had a shot, but I only ever see the same pattern. Outside of Windows tablets, Intel is basically nonexistent, and the performance of their chips on other platforms (a la Android) simply isn't enough to justify their non-subsidy premiums.

  • Reply 80 of 112
    frank777 Posts: 5,839 member
    Quote:

    Originally Posted by wizard69 View Post

    Anybody whining about replaceable RAM is just out of touch. Seriously, what will these people do when RAM gets integrated right into the processor module?

     

    If this innovation is truly performance-based and not simply meant to turn computers into disposable commodities to be replaced every three years, shouldn't it begin with the Pro and not the Mini? Why can I replace the RAM in a Pro and not a Mini?

     

    Did making the Mini's RAM non-upgradable this year make it significantly faster?
