Apple planning to ditch Intel chips in Macs for its own custom silicon in 2020


Comments

  • Reply 141 of 176
    melgross Posts: 33,510 member
    With all this BootCamp discussion, there's a key point that's getting missed -- It depends on what you need Windows for. Windows for ARM is a thing.

    Well, that's what Microsoft is saying, isn't it? But then, they've had that "Windows Everywhere" pitch almost forever, and it was never true. It's not true now.

    I know that people seem to have different definitions of "OS". But it's like biology: if animals can interbreed, and have viable offspring that can also breed, then they are the same species. If they can't, then they are different species.

    To me, an OS is similar. If two OSs can run the same software equally well, they are really the same OS. The internal coding isn't important. But if, as with Microsoft's plan, you have to rewrite the software to do so, it's a different OS. And so it is! There is simply no way that an x86 app, written for a much more powerful x86 chip, can run in any useful way on those weak Qualcomm SoCs this is specified to work on, no matter what Microsoft does with the OS.
    edited April 2018 docno42
  • Reply 142 of 176
    cgWerks Posts: 2,952 member
    melgross said:
    Ahh, that's not quite true. Intel has several lines of chips. They have desktop chips, low-power chips, and ultra-low-power chips. The A10X can surpass the M series easily. They equal, in some ways, but not in all, some of the ultra-low-power i series. They even equal the bottom-of-the-line low-power i series. But they don't come close to the full-power desktop series.

    People, let's not get carried away!
    Also, are the benchmarks relevant in real-world application? I look at the power draw of Macs or of something like a PlayStation 4 and then we have our iPads and phones... and I have to wonder if these chips could be THAT much more efficient, or if we're talking about some kind of peak instantaneous performance that shows up on a benchmark. Or, can they just add huge batteries, more cooling, and get that level of performance long-term just like the Intel chips?
    (serious question... I've forgotten way too much of my original electronics training and architecture knowledge over the decades to make way for other info :smile: )

    melgross said:
    Well, that's what Microsoft is saying, isn't it? But then, they've had that "Windows Everywhere" pitch almost forever, and it was never true. It's not true now.

    I know that people seem to have different definitions of "OS". But it's like biology: if animals can interbreed, and have viable offspring that can also breed, then they are the same species. If they can't, then they are different species.

    To me, an OS is similar. If two OSs can run the same software equally well, they are really the same OS. The internal coding isn't important. But if, as with Microsoft's plan, you have to rewrite the software to do so, it's a different OS. And so it is! There is simply no way that an x86 app, written for a much more powerful x86 chip, can run in any useful way on those weak Qualcomm SoCs this is specified to work on, no matter what Microsoft does with the OS.
    Agreed. It won't cut it if it just runs some percentage of apps. This is especially true if my impression is correct that many use it for development and testing, where they are actually trying to replicate the x86 environment. Unless we see a major shift on the other side of the fence (i.e.: the Windows world running on a variety of platforms), the x86 isn't going anywhere.
    edited April 2018
  • Reply 143 of 176
    melgross Posts: 33,510 member
    cgWerks said:
    melgross said:
    Ahh, that's not quite true. Intel has several lines of chips. They have desktop chips, low-power chips, and ultra-low-power chips. The A10X can surpass the M series easily. They equal, in some ways, but not in all, some of the ultra-low-power i series. They even equal the bottom-of-the-line low-power i series. But they don't come close to the full-power desktop series.

    People, let's not get carried away!
    Also, are the benchmarks relevant in real-world application? I look at the power draw of Macs or of something like a PlayStation 4 and then we have our iPads and phones... and I have to wonder if these chips could be THAT much more efficient, or if we're talking about some kind of peak instantaneous performance that shows up on a benchmark. Or, can they just add huge batteries, more cooling, and get that level of performance long-term just like the Intel chips?
    (serious question... I've forgotten way too much of my original electronics training and architecture knowledge over the decades to make way for other info :smile: )

    melgross said:
    Well, that's what Microsoft is saying, isn't it? But then, they've had that "Windows Everywhere" pitch almost forever, and it was never true. It's not true now.

    I know that people seem to have different definitions of "OS". But it's like biology: if animals can interbreed, and have viable offspring that can also breed, then they are the same species. If they can't, then they are different species.

    To me, an OS is similar. If two OSs can run the same software equally well, they are really the same OS. The internal coding isn't important. But if, as with Microsoft's plan, you have to rewrite the software to do so, it's a different OS. And so it is! There is simply no way that an x86 app, written for a much more powerful x86 chip, can run in any useful way on those weak Qualcomm SoCs this is specified to work on, no matter what Microsoft does with the OS.
    Agreed. It won't cut it if it just runs some percentage of apps. This is especially true if my impression is correct that many use it for development and testing, where they are actually trying to replicate the x86 environment. Unless we see a major shift on the other side of the fence (i.e.: the Windows world running on a variety of platforms), the x86 isn't going anywhere.
    Benchmarks are important, which is why they're used everywhere, in every industry. They also use "real world" benchmarks, running typical working loads. Supercomputers use Linpack. It's when benchmarks are incorrectly interpreted that problems creep in. Or when some companies write code to detect a benchmark and cheat.

    There are benchmarks that test short-term performance, and there are those made to test longer-term performance. That's normal for mobile, where both heat and power draw cause throttling after a time, which could be mere seconds or minutes. Then there are single-core tests and multi-core tests.

    If Apple does do this (and despite Bloomberg having a very good record on Apple predictions, it's not certain), then I would think that they don't want to put developers through another major upheaval. If they can prevent that, then they will. If they can't, I don't think they will bother with this. Running ARM code on x86 will be easier than running x86 code on ARM. A heck of a lot easier. Unless they do what I've been suggesting. And who knows what they're working on?
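    To make the earlier short-burst vs. sustained distinction concrete, here is a minimal sketch of how you could watch throttling happen (the workload and the one-minute window are arbitrary choices, not any real benchmark suite): run a fixed chunk of work in a loop and log iterations per second. On a fanless phone or tablet the per-second figure typically sags once the SoC heats up; on an actively cooled desktop it stays flat. A peak-only score only ever sees the first few samples.

    import Foundation

    // Hypothetical fixed workload; anything CPU-bound works.
    func busyWork() -> Double {
        var acc = 0.0
        for i in 1...2_000_000 { acc += sin(Double(i)) }
        return acc
    }

    let start = Date()
    var iterationsThisSecond = 0
    var nextTick = start.addingTimeInterval(1)

    // Log throughput once per second for a minute; a drop over time = throttling.
    while Date().timeIntervalSince(start) < 60 {
        _ = busyWork()
        iterationsThisSecond += 1
        if Date() >= nextTick {
            let t = Int(Date().timeIntervalSince(start))
            print("t=\(t)s  iterations/s: \(iterationsThisSecond)")
            iterationsThisSecond = 0
            nextTick = Date().addingTimeInterval(1)
        }
    }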
    cgWerks
  • Reply 144 of 176
    cgWerks Posts: 2,952 member
    melgross said:
    There are benchmarks that test short-term performance, and there are those made to test longer-term performance. That's normal for mobile, where both heat and power draw cause throttling after a time, which could be mere seconds or minutes.
    Yeah, that's kind of what I'm getting at. When they say a particular A-series chip is the same speed as some model of MacBook Pro on a benchmark... does that mean that if you took that A-series chip and gave it a huge battery (or line power) and real heat dissipation, it could run the same higher-end apps and games as the MacBook Pro at similar performance (assuming code compatibility)?

    Otherwise, we're talking some inefficiency on an unheard of scale if my PS4 sucks down 200 watts to do the same thing as an iPhone. :smiley: 

    Or, to put it another way... is it like an incandescent vs. LED light bulb, where they put out the same lumens but one always uses less power? Or is it the 'throttles down after a second' behavior we're seeing that allows the mobile chip to use so much less power, yet still post a high number on the benchmark?
  • Reply 145 of 176
    Big mistake, Apple!  Some of us remember what a dead end the Power PC chips turned out to be.  I have been a Mac user since you changed to Intel (currently own 4, not counting my iPhone and iPad), and I will probably move away if you stop using industry standard hardware.  
  • Reply 146 of 176
    nht Posts: 4,522 member
    docno42 said:
    I've been able to run Macs for work because of the ability to virtualize Windows. Not sure this bodes well for the future of Macs in certain business segments unless emulation performance under these new chips will be acceptable.
    I wonder just how many people run Windows in any form (Boot Camp or emulation) on Macs these days.  I haven't thought about doing it for years now. 

    How long until Adobe gets its act together; that's the real question
    And Microsoft.

    While we have a semi-decent Office, most of the complex Excel links and macros don't work cross-platform, and there is no MS Project or Visio.
    cgWerks methetechman
  • Reply 147 of 176
    nht Posts: 4,522 member
    With all this BootCamp discussion, there's a key point that's getting missed -- It depends on what you need Windows for. Windows for ARM is a thing.

    Windows for ARM is a non-starter if the apps aren’t rebuilt for ARM.  Wanna bet most games and most pro apps won’t be?
    cgWerks
  • Reply 148 of 176
    nht Posts: 4,522 member
    It probably makes more sense for Apple to make a netbook-style device with iOS on it. They could revive the iBook name for branding and market it to students and consumers. That way they can also continue to make the Mac & MacBook devices with Intel (or AMD) chips for several more years to support that market as the 'professional' line. I wouldn't be surprised to see Apple buy stake in AMD, but it may simply be more cost effective to partner with them.
    This strikes me as reasonable segmentation.  That said it’ll still likely cause confusion as shown in the Windows Surface RT fiasco.
    cgWerks
  • Reply 149 of 176
    cgWerks Posts: 2,952 member
    Big mistake, Apple!  Some of us remember what a dead end the Power PC chips turned out to be.  I have been a Mac user since you changed to Intel (currently own 4, not counting my iPhone and iPad), and I will probably move away if you stop using industry standard hardware.  
    Industry standard doesn't necessarily equal good. It's important here for x86 virtualization, but otherwise (aside from transition period bumps), I'm not sure why anyone would care. Also, Power PC was only a dead end in terms of how it was progressing at the time. The last G5s were faster than the fastest Intel chips at the time. And, if Intel hadn't switched to the core architecture, their Pentiums would have started burning holes through the floor.... i.e.: they were also at a dead end in practical terms. I'm not aware of how much influence Apple had in rescuing Intel at that point, but I certainly wonder.

    nht said:
    Windows for ARM is a non-starter if the apps aren’t rebuilt for ARM.  Wanna bet most games and most pro apps won’t be?
    No one is going to rebuild most of that stuff unless the whole industry shifts. Companies are still using IE6, too.
  • Reply 150 of 176
    cgWerks Posts: 2,952 member
    Just listening to a discussion about this on ATP podcast, and they made a really great point.

    Apple would have to design special ARM-based chips for anything higher-end than the power of the iOS devices. Given how Apple already lags in updating the Mac line... when the chips are done for them by Intel... what would make us think Apple would put a bunch of additional effort into not only the surrounding hardware, but also custom chips?

    While they could do it (and would they outpace the rest of the industry long-term, or is their current A-series success more of an optimization that then hits a wall, like the one Intel seems to be facing?), I think we'd need to see an even stronger Mac commitment than any of us have been imagining to pull this off beyond maybe powering their lowest-end Mac portables.
  • Reply 151 of 176
    I believe the Apple project roadmap goes to
    Feel it 3D touch CompleteOS.
    Meaning
    the OS will LOOK and FEEL 3D
    Not 3D glasses 3D, but "Sensing-Feeling" of the actual content will give you a more, how should I put it, "gestalt" experience of Apple devices.

    NB: I do not work at Apple.
  • Reply 152 of 176
    tallest skil Posts: 43,388 member
    Not 3D glasses 3D, but "Sensing-Feeling" of the actual content will give you a more, how should I put it, "gestalt" experience of Apple devices.
    Do you mean visual depth of UI (so a return to more skeuomorphism) or do you mean “touch something and the system will vibrate differently depending on what it is, simulating texture”?
  • Reply 153 of 176
    Marvin Posts: 15,322 moderator
    cgWerks said:
    jbdragon said:
    I see no reason why you couldn't continue to Virtualize Windows on the Mac. Maybe it takes a bit of a speed hit.
    Emulation ≠ virtualization. Generally, there isn't a 'bit' of a speed hit, but a massive one. Instead of talking about a fraction of native speed (maybe 80-90%), it's more like being some number of times slower (3-5x, etc.).
    Windows on ARM uses x86 emulation, and it has been benchmarked; as you say, it's many times slower than native:

    https://mspoweruser.com/windows-10-on-arm-extensively-benchmarked-natively-and-with-x86-emulation/
    https://www.techspot.com/review/1599-windows-on-arm-performance/

    Emulation would be there to support software that hasn't been recompiled, and it should be the exception; all the big software suites would be recompiled. The big gain is battery life, which was up to 4 hours more. Price is another benefit, but those benchmarks show that using Intel's least expensive Celeron chips would be better:

    https://ark.intel.com/products/95596/Intel-Celeron-Processor-N3450-2M-Cache-up-to-2_2-GHz ($107)

    If a hardware switch were planned for 2020, they could be on 5nm ARM chips ( https://hothardware.com/news/tsmc-breaks-ground-on-5nm-fab ). Apple's chips are already faster than the Snapdragon tested there, and the Apple chip in those tests could be around 4x faster than the Snapdragon by 2020. That would still be slow in some cases and would look bad vs. the 2020 Intel chips, but emulation would be a stop-gap until native software is in place, and I assume many native solutions would exist if Windows-based ARM devices are widespread.

    The Snapdragon Windows devices are already available ( https://store.hp.com/us/en/pdp/hp-envy-x2-12-e011nr ), so we'll see how those devices cope with the dual software support over the next couple of years; if Microsoft can manage it OK, Apple could too. It would be easy for people to stick with x86 Macs for the next 5 years at least, so it's not something to be overly concerned about just now.
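    To see where the emulation penalty comes from, here's a toy sketch (an invented three-instruction guest ISA, nothing to do with Microsoft's actual x86-on-ARM translator): every guest instruction has to be fetched, decoded, and dispatched in software before any real work happens, which is why naive emulation lands in the several-times-slower range. Real emulators claw a lot of that back with cached binary translation, but the principle is the same.

    // A made-up 3-instruction "guest" ISA, interpreted one op at a time.
    enum GuestOp {
        case loadImm(reg: Int, value: Int)
        case add(dst: Int, src: Int)
        case halt
    }

    struct GuestCPU {
        var regs = [Int](repeating: 0, count: 4)

        mutating func run(_ program: [GuestOp]) {
            var pc = 0
            while pc < program.count {
                switch program[pc] {          // software "decode" of every op
                case .loadImm(let reg, let value):
                    regs[reg] = value
                case .add(let dst, let src):
                    regs[dst] += regs[src]
                case .halt:
                    return
                }
                pc += 1                       // software "fetch" of the next op
            }
        }
    }

    var cpu = GuestCPU()
    cpu.run([.loadImm(reg: 0, value: 40),
             .loadImm(reg: 1, value: 2),
             .add(dst: 0, src: 1),
             .halt])
    print(cpu.regs)   // [42, 2, 0, 0]: one add's worth of work, many emulated steps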
    edited April 2018
  • Reply 154 of 176
    tht Posts: 5,437 member
    cgWerks said:
    Just listening to a discussion about this on ATP podcast, and they made a really great point.

    Apple would have to design special ARM-based chips for anything higher-end than the power of the iOS devices. Given how Apple already lags in updating the Mac line... when the chips are done for them by Intel... what would make us think Apple would put a bunch of additional effort into not only the surrounding hardware, but also custom chips?

    While they could do it (and would they outpace the rest of the industry long-term, or is their current A-series success more of an optimization that then hits a wall, like the one Intel seems to be facing?), I think we'd need to see an even stronger Mac commitment than any of us have been imagining to pull this off beyond maybe powering their lowest-end Mac portables.
    The CPU ISA (x64, ARM) isn't the issue. Most of it has to do with what Apple fans think the product portfolio should look like, what Apple thinks its product portfolio should look like, and what Apple's commitment is like. There's currently very little confidence in their Mac commitment, which has little to do with the CPU architecture.

    It’s quite easy to imagine Apple abandoning headless machines. It looks like they were on their way to doing that in 2016 when both the Mac mini and Mac Pro didn’t receive updates for 3 years, and somewhere late that year or early 2017, they decided the iMac Pro wasn’t going to be enough.

    This is assuming they wanted to do it that way, not that the Mac product team wasn’t continually fucked up, making mistake after mistake with products not making it to market and not making good products. Lots of talk with the Mac Pro, but Apple abandoned selling an Apple branded display. A $1000 Apple display would have made more money than a Mac Pro would have, perhaps by factors of 3 to 5. Then, they stopped development of wireless routers. That’s also an easy sale as it would be an accessory for their billion strong iOS customer base. They could have made the Time Capsule a private network storage/backup for iOS customers. Who knows.

    Sticking with Intel hasn't been all that great for about 4 years now, so they aren't some critical piece of the hardware. There currently aren't any Y-series 8th-gen processors (5 W), which means the MacBook can't be updated until fall at the earliest; Apple will either update it in some other way, with little performance improvement to show for it, or not update it at all. High-performance Intel processors require 90 to 150 W. If Apple can get the same performance at 50 W, that's a huge win for them.
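    A back-of-the-envelope illustration of why that would be a huge win, using the wattages above but made-up, equal benchmark scores (the scores are placeholders, not measurements):

    import Foundation

    // Hypothetical parts with equal scores at very different package power.
    struct Chip { let name: String; let score: Double; let watts: Double }

    let chips = [
        Chip(name: "High-performance x86 (assumed 120 W)", score: 10_000, watts: 120),
        Chip(name: "Hypothetical Apple part (assumed 50 W)", score: 10_000, watts: 50),
    ]

    for chip in chips {
        let perfPerWatt = chip.score / chip.watts
        print("\(chip.name): \(String(format: "%.1f", perfPerWatt)) points/W")
    }
    // Same score at 50 W vs. 120 W is roughly 2.4x the performance per watt,
    // which shows up as battery life, cooling headroom, and enclosure size.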
    docno42
  • Reply 155 of 176
    cgWerks Posts: 2,952 member
    Marvin said:
    Emulation would have to be intended to support software that wasn't recompiled and should be the exception. All the big software suites would be recompiled.
    Yeah, it really depends on where the industry as a whole goes, I think. It would be interesting to know what percentage of people are using x86 compatibility to run a particular app, vs running the platform. I know for me, it's the latter, as any software I really need is Mac native. But, I still run virtual machines for testing, browsers, etc. I'm sure the masses of developers running Macs do the same.

    tht said:
    There’s currently very little confidence in their Mac commitment, which has little to do with the CPU architecture.

    It’s quite easy to imagine Apple abandoning headless machines. It looks like they were on their way to doing that in 2016 when both the Mac mini and Mac Pro didn’t receive updates for 3 years, and somewhere late that year or early 2017, they decided the iMac Pro wasn’t going to be enough.

    This is assuming they wanted to do it that way, not that the Mac product team wasn’t continually fucked up, making mistake after mistake with products not making it to market and not making good products. 
    Probably some of each. I think resources got pulled off the Mac to support the rapid growth of iOS. Then Apple's "what's a computer?" type of thinking likely kicked in internally, or among the decision makers at least. But that move wasn't very well thought through... it might apply to the masses, but not to the whole market. Trucks are still necessary.

    I also think they are trying way, way too hard to come up with some revolutionary thing. Schiller's comment when they introduced the 2013 Mac Pro spoke volumes. What people need, though, is closer to a Dell that runs macOS. If they can truly innovate, then great. But little beautiful cylinders, fancy Touch Bars, and such seem more like a show of 'look what we can do' than thought-through practicality.

    And, while we all want computers to go faster and faster, using Intel as an excuse is pretty much just that, an excuse. The rest of the PC market seems to keep putting the latest Intel chips and tech into their designs OK. I (and I think most Mac users) would be relatively appeased if Apple could simply do that, too. As I've said bunches of times on these forums... if Apple just took the Mac Mini and Mac Pro (cheese grater, cylinder, or both) and put the latest CPUs, ports, and appropriate GPUs in, many of us would be really happy. Especially after seeing what they did to some of the designs like the '16-17 MBP, I'd rather they don't try to redesign anything!

    As for headless machines, I kind of agree (but hope not). I keep complaining about the lack of prosumer machines, and I think Apple just says - iMac. That's probably true for most consumers, I suppose. But, not everyone wants an all-in-one, especially since Apple removed input capabilities. The new iMacs have great displays, but I don't want a single-purpose display. I also don't trust running a normal iMac hard.
    docno42
  • Reply 156 of 176
    Not 3D glasses 3D, but "Sensing-Feeling" of the actual content will give you a more, how should I put it, "gestalt" experience of Apple devices.
    Do you mean visual depth of UI (so a return to more skeuomorphism) or do you mean “touch something and the system will vibrate differently depending on what it is, simulating texture”?
    Both. You got it.
  • Reply 157 of 176
    melgross Posts: 33,510 member
    cgWerks said:
    melgross said:
    There are benchmarks that test short-term performance, and there are those made to test longer-term performance. That's normal for mobile, where both heat and power draw cause throttling after a time, which could be mere seconds or minutes.
    Yeah, that's kind of what I'm getting at. When they say a particular A-series chip is the same speed as some model of MacBook Pro on a benchmark... does that mean that if you took that A-series chip and gave it a huge battery (or line power) and real heat dissipation, it could run the same higher-end apps and games as the MacBook Pro at similar performance (assuming code compatibility)?

    Otherwise, we're talking some inefficiency on an unheard of scale if my PS4 sucks down 200 watts to do the same thing as an iPhone. :smiley: 

    Or, to put it another way... is it like an incandescent vs. LED light bulb, where they put out the same lumens but one always uses less power? Or is it the 'throttles down after a second' behavior we're seeing that allows the mobile chip to use so much less power, yet still post a high number on the benchmark?
    It's an interesting question. Chip families use different amounts of power for a given level of performance.

    Let's go back a ways, to the turn of the century. Back then, Intel had an architecture called NetBurst. Intel had improved its process technology to the point that they were ahead of everyone in clock speed, then measured in hundreds of MHz, and then, rapidly, in GHz. They, IBM, and others were so confident that they could continue to ramp these speeds up that we had a continuation of the MHz race, which became the GHz race. In fact, these companies were so sure of this that they were predicting that in a few years, CPUs would hit 10 and even 15GHz! We know how that worked out. When they got to 90nm, they all found that losses became so high that they couldn't go on increasing speeds. Intel's last NetBurst chip, the Prescott, hit a normally cooled 3.8GHz. These were single-core chips.

    Intel went back to a design they had put on the shelf, called the M series, not related to the M series of today. These were more mobile chips, and Intel went to them for their new chip designs. They were more efficient, though performance was lower. But they reworked them, and to our shock, Apple moved to Intel before we even knew about these new chips, called Yonah. Apple got these chips first, as part of the deal that they move to Intel. That evolved into the Core, then the Core 2, and we saw two cores, then four, etc.

    All of these chips ran at lower clock speeds than NetBurst. After some time, each core exceeded the performance of the Prescott at much lower power levels per core.

    I know this walk back through history is boring to some, but it gives an example of a radical difference in power draw and performance even between x86 chip families, and of the different designs that resulted from that.

    The same thing is true for x86 vs. ARM. Two very different designs. ARM wasn't designed for high performance, but for low power draw. Some people here remember that ARM was started back in the early 1990s, with Apple as a founding partner for its Newton project, based on a RISC chip from Acorn Computer.

    Consider that x86 was, from the very beginning (going back to its roots in the 4004, around 1970, when there was no serious thought about mobile), designed to be run off wall power. Efficiency wasn't designed into it in the way that would be needed for mobile use, while ARM was designed back in 1993 to run off the really weak batteries of the day, with no real thought given to desktop needs. So we have chip families coming from opposite directions, and we're now seeing attempts from both ends to meet somewhere in the middle.

    As long as we're not talking about x86 software on ARM, ARM has a major advantage in performance per watt today. Intel's M series, used in the MacBook, is a direct competitor, but in the x86 space. But the A10X betters the M's performance already, at the same power level, and might move even further ahead in the next iteration.

    A problem for x86 is backwards compatibility. It took years for Intel to dump 8-bit instructions, but they still include not only 32-bit but also pretty much ancient 16-bit support as well. This all takes up space on the chip and sucks power. By the time Apple began to use ARM, there were versions with just 32-bit, and then 32/64. With Apple purging all 32-bit support from their SoCs, they're again freeing up space and reducing power requirements. So ARM is more efficient than x86.

    But the question remains as to what would happen if Apple decided to open the power envelope to 12 watts from the 6 they have now with the "x" variants. Will performance scale up well enough, or are these low-power designs limited in that way, which is entirely possible? They could ramp up to 4 high-power cores and use two SoCs in a device. But more cores doesn't always mean more performance. Most apps aren't written for more than just a few cores, other than video editing, 3D, and a small number of others.

    It also isn't clear just how much multitasking benefits from more than around 4 cores in real-world use, if the fewer cores individually perform better than the cores in the larger system.
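    One way to put rough numbers on the "more cores doesn't always mean more performance" point is Amdahl's law. A small sketch, with the parallel fraction picked purely for illustration:

    import Foundation

    // Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n),
    // where p is the fraction of the work that can actually run in parallel.
    func amdahlSpeedup(parallelFraction p: Double, cores n: Double) -> Double {
        return 1.0 / ((1.0 - p) + p / n)
    }

    let p = 0.8   // assumed: 80% of a typical app parallelizes, 20% stays serial
    for cores in [1.0, 2.0, 4.0, 8.0, 16.0] {
        let s = amdahlSpeedup(parallelFraction: p, cores: cores)
        print("\(Int(cores)) cores -> \(String(format: "%.2f", s))x")
    }
    // 1.00x, 1.67x, 2.50x, 3.33x, 4.00x: with p = 0.8 the speedup can never
    // exceed 5x no matter how many cores you add, and going from 8 to 16
    // cores buys only about 20% more.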
    cgWerks methetechman docno42
  • Reply 158 of 176
    cgWerks Posts: 2,952 member
    melgross said:
    But the question remains as to what would happen if Apple decided to open the power envelope to 12 watts from the 6 they have now with the "x" variants. Will performance scale up well enough, or are these low-power designs limited in that way, which is entirely possible? They could ramp up to 4 high-power cores and use two SoCs in a device. But more cores doesn't always mean more performance. Most apps aren't written for more than just a few cores, other than video editing, 3D, and a small number of others.

    It also isn't clear just how much multitasking benefits from more than around 4 cores in real-world use, if the fewer cores individually perform better than the cores in the larger system.
    Great explanation, and thanks. I'm starting to get a better handle on this via such conversations and others I've had over the last several days. If I'm understanding, Apple would *possibly* be able to use a current A-series chip in something like a MacBook, assuming software compatibility. But, to make something more akin to our more powerful desktops, Apple would have to actually design a higher-power, likely more-core variant of the A-series architecture.

    I'm always happy with more cores, but yeah, that isn't the case for a lot of people. Single-core performance is still quite important to many. Four cores are probably enough for most, but I'd like even more. Besides the ability to use pretty much any number of cores in some pro apps (like a rendering engine), it's nice to be able to run something like Folding@home on a couple of cores, have an app crunching some video on a couple more, and still have a couple left so what you're working on in the foreground doesn't even feel impacted. Two cores are a big deal in general feel, four add that feel again for power users... but more cores beyond that let the real pro stuff happen, or more individual apps run without user-felt impact.
  • Reply 159 of 176
    melgrossmelgross Posts: 33,510member
    cgWerks said:
    melgross said:
    But the question remains as to what would happen if Apple decided to open the power envelope to 12 watts from the 6 they have now with the "x" variants. Will performance scale up well enough, or are these low-power designs limited in that way, which is entirely possible? They could ramp up to 4 high-power cores and use two SoCs in a device. But more cores doesn't always mean more performance. Most apps aren't written for more than just a few cores, other than video editing, 3D, and a small number of others.

    It also isn't clear just how much multitasking benefits from more than around 4 cores in real-world use, if the fewer cores individually perform better than the cores in the larger system.
    Great explanation, and thanks. I'm starting to get a better handle on this via such conversations and others I've had over the last several days. If I'm understanding, Apple would *possibly* be able to use a current A-series chip in something like a MacBook, assuming software compatibility. But, to make something more akin to our more powerful desktops, Apple would have to actually design a higher-power, likely more-core variant of the A-series architecture.

    I'm always happy with more cores, but yeah, that isn't the case for a lot of people. Single-core performance is still quite important to many. Four cores are probably enough for most, but I'd like even more. Besides the ability to use pretty much any number of cores in some pro apps (like a rendering engine), it's nice to be able to run something like Folding@home on a couple of cores, have an app crunching some video on a couple more, and still have a couple left so what you're working on in the foreground doesn't even feel impacted. Two cores are a big deal in general feel, four add that feel again for power users... but more cores beyond that let the real pro stuff happen, or more individual apps run without user-felt impact.
    Again, that depends on the individual core. As an example, we've seen some ARM chips for Android with 8 cores working together, the big.LITTLE concept (Apple's is different). But each core, because of the high cost of chip area and overall power draw, is very weak. So when running multi-core tests, they measure very well, but the individual core numbers are poor. Chips like that will perform worse in multitasking, not better (the toy comparison at the end of this post puts rough numbers on this).

    The reason to go to more cores is that it's difficult to pour more power into a single core, so distributing the work amongst several allows for greater overall performance. You'll notice that most chips have a turbo mode for when only one or two cores are needed, but that's really not intended for extended use. In the desktop area this makes sense, because of unlimited power availability. Top chips use 150 watts. The overall machine, with one high-end GPU card, can draw over 1,000 watts. Then there's the power for the monitor.

    I'm not sure what Apple thinks of this. They have no choice using x86, but they would with their own. People are assuming that these would be A series, but there's no reason to think that, though it could be. If they can do what I've been pushing - using some x86 instructions - they could more easily run x86 software without developers having to convert to a new chip family, particularly at first. They could make small changes each time they upgraded the software.

    But this is still a rumor.
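    To put rough numbers on the point above about multi-core scores hiding weak cores, a toy comparison with invented per-core scores (not taken from any real benchmark):

    // Invented scores: the 8 weak cores "win" the aggregate multi-core number,
    // but with only one or two hot threads (most real use), the 4 strong cores
    // are nearly twice as fast.
    struct SoC { let name: String; let coreScore: Int; let coreCount: Int }

    let socs = [
        SoC(name: "8 weak cores  ", coreScore: 1_000, coreCount: 8),
        SoC(name: "4 strong cores", coreScore: 1_900, coreCount: 4),
    ]

    for soc in socs {
        let multi = soc.coreScore * soc.coreCount   // idealized scaling, no throttling
        print("\(soc.name)  single-core: \(soc.coreScore)  multi-core: \(multi)")
    }
    // 8 weak cores    single-core: 1000  multi-core: 8000
    // 4 strong cores  single-core: 1900  multi-core: 7600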
    edited April 2018 cgWerks
  • Reply 160 of 176
    lkrupp said:
    I've been able to run Macs for work because of the ability to virtualize Windows. Not sure this bodes well for the future of Macs in certain business segments unless emulation performance under these new chips will be acceptable.
    I don't think Apple has ever given a rat's butt about running Windows on Macs. In my opinion they created Boot Camp to stave off the third-party crap that was starting to ooze out of neckbeards' basements. I don't think many businesses opt to buy Macs and then install Windows. That's not very cost-effective at all. Nope, the dual-boot crowd is the same tiny yet vocal minority of tech wannabes. And Apple has never, EVER, cared about what small minorities of naysayers think of their plans. I'm really happy about the prospect of the demise of hackintoshes though.
    Boot Camp is super-useful to me, a college student w/ a MacBook who has to use a few Windows-only apps.