Intel launches new Kaby Lake chips suited for Apple's MacBook Pro, iMac


Comments

  • Reply 81 of 96
    SoliSoli Posts: 10,035member
    bkkcanuck said:
    Soli said:
    Soli said:

    "However, I don't expect this to occur until their desktop OS reaches an evolutionary state that makes it less instructive and offers a shorter learning curve, like iOS."

    I've been going on about a proOS in several threads -- specifically something in between iOS and macOS -- to exploit the iPad Pro hardware.

    Implemented correctly, proOS could subsume macOS over time.
    eeh gads! not another OS to support. I seriously hope that you are wrong! This proOS sounds like what Microsoft has done for years (that has not worked) i.e., with their so-called tabletbooks going back well over a decade. They have all sucked IMHO - your mileage may vary ;-)
    I don't see it happening. What I can see as a possible route is that iOS, at least starting with the iPad Pro due to its performance capabilities and lower-volume sales, would also contain a Cocoa UI that is very similar to macOS' UI in many regards. Just like CarPlay is an additional UI that is sent from the iPhone to your automobile's display, the iPad Pro could push a Mac-like UI to an external display with a traditional keyboard and mouse to help bridge the gap for those that mostly need an iPad but occasionally need a desktop OS. This could actually help boost sales for both the iPad Pro and the Mac for those Apple customers that buy iPhones and iPads, but not Macs.

    I still think the odds are relatively slim, but I can see a path to how this could make sense. It could also be a trial run before we get a Mac or Mac-like low-end "PC" from Apple that runs on ARM.
    I get the resistance to another OS...  But, realistically Apple already supports multiple OSes (or variants) quite well -- see below.  Add to that list the iOS variant for CarPlay.

    […]
    I get that they support multiple OSes, but that's not the issue here, and CarPlay isn't an OS but an alternate UI that is built into iOS for when you're connected to a CarPlay-capable automobile. It's still iOS on the iPhone pushing the CarPlay UI and responding to touchscreen and button inputs from the automobile. Whether you have an automobile that supports CarPlay or not, your iPhone's iOS build still contains all the CarPlay code.
    If you are going to make the distinction that it is just the UI that is different for CarPlay, then technically speaking ALL Apple OSs are the same operating system .... "Darwin" with different UIs.
    That's not true at all. There's a huge range between the upper-layer UIs and the unified Darwin core. For example, you can't replace Cocoa with Cocoa Touch to get macOS out of iOS. There are frameworks, drivers, and all sorts of other elements involved. Some frameworks for iOS may have been streamlined and ported to macOS without any changes, but even if that's true it's likely not common.
    edited January 2017
  • Reply 82 of 96
    Soli said:

    There is an argument to be made that all those iOS variants are unique OSes, but they could be combined into a single build that is much larger and then requires additional code to determine which HW iOS needs to run on. But that screenshot is an argument as to why Apple will not make a single unifying OS for macOS, iOS, watchOS, touchbarOS, and every other OS that will possibly use their OS X/Darwin codebase. Macs can simply handle it better at this point. Even now there might be some HW variations within select iOS devices where it's effectively a combo update that then removes the code for HW that it doesn't have, but it's surely on a very limited scale compared to how macOS works.
    bkkcanuck said:

    If you are going to make the distinction that it is just the UI that is different for CarPlay, then technically speaking ALL Apple OSs are the same operating system .... "Darwin" with different UIs.

    In an [almost] always connected to the Cloud world we're seeing Macs (to some extent) and iDevices off-loading their work to servers in the sky.  There are advantages and disadvantages to this approach.  One of the biggest disadvantages is that you have to upload/download/maintain large files in the Cloud at the expense of time and $.

    What I would like to see is a modular OS that adapts to what I need to do when I need to do it on my local computing device of choice.

    If I already have the files available, locally -- why should I upload them so I can process them on the Cloud -- then download the finished product to use locally... rinse, repeat.

    With an adaptive modular OS, I would only need to install the App I need from the App Store onto the local computing device -- the App Store would automatically install the required OS modules when necessary.

    The App/OS Module download would take a fraction of the time compared to shuttling large files to and fro with the Cloud.
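    The App-plus-OS-module flow described above could be sketched as a simple dependency walk. Everything below (the module names, the `DEPENDENCIES` table, the `modules_to_install` function) is hypothetical, purely to illustrate how an App Store-style installer might compute what a new app pulls in:

    ```python
    # Hypothetical sketch: which OS modules would an app install pull in?
    # The module names and DEPENDENCIES table are invented for illustration;
    # this is not how any Apple installer actually works.

    DEPENDENCIES = {
        "PhotoEditor": ["ImageIO", "Metal"],   # the app's declared needs
        "ImageIO": ["CoreFoundation"],
        "Metal": ["CoreFoundation"],
        "CoreFoundation": [],
    }

    def modules_to_install(app, already_installed):
        """Walk the dependency graph; return modules missing from the device."""
        needed, stack = set(), [app]
        while stack:
            mod = stack.pop()
            if mod in needed or mod in already_installed:
                continue
            needed.add(mod)
            stack.extend(DEPENDENCIES.get(mod, []))
        needed.discard(app)  # the app itself isn't an OS module
        return sorted(needed)

    print(modules_to_install("PhotoEditor", {"CoreFoundation"}))
    # → ['ImageIO', 'Metal']
    ```

    A real system would also need versioning, code signing, and cleanup when the last dependent app is removed, which hints at the dependency fragility raised later in the thread.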

  • Reply 83 of 96
    nhtnht Posts: 4,522member
    jSnively said:
    nht said:

    The only embarrassment is making silly assertions that are easily disproved.  Anyone on a 3 year replacement cycle sees a 35% increase in CPU performance.
    To be fair that was while we were still riding high on "tick tock". It's yet to be seen how things will shake out with the new "process - architecture - optimization" cycle in full. 
    The silly assertion was covering the last 4 years, not moving forward.  Given that "tick tock" was ambitious and suffered from delays anyway the new cycle likely won't be that much different than before.
    Kaby Lake is kinda the first of the 'optimization' part of that new cycle, and it's a huge disappointment on the high end. The reaction is going to be bigger just by virtue of the fact that it's kind of the first of those. 
    At the high end you can stably push the i7-7700K to 5 GHz without water cooling, taking it above the 1000-point mark on Cinebench, which you can't do with the i7-6700K.  So yes, the optimization phase doesn't increase IPC, but it does improve production and yields, leading to higher performance.  

    Intel's 35% CPU gain in the last 3 years has been in addition to the heat and power reduction improvements to match or exceed ARM in terms of performance per watt.  Intel may have failed to capture mobile but did keep ARM from capturing highly dense server sales.  Improving performance per watt isn't just about bumping IPC and performance per watt is why Apple moved to Intel.

    The complaining about how the i7-7700K is a disappointment is a lot like the complaining here about the MBP.  The folks doing the most complaining mostly don't own the i7-6700 part and don't overclock to get performance at the "high end".  The guys that own water-cooled i7-6700K rigs are probably wondering how fast the i7-7700K can go with top-end cooling without needing to get as lucky at the silicon lottery.  The parts should come binned pretty well because of that optimization phase and overclock better.
  • Reply 84 of 96
    SoliSoli Posts: 10,035member
    With an adaptive modular OS, I would only need to install the App I need from the App Store onto the local computing device -- the App Store would automatically install the required OS modules when necessary.
    That's not a modular OS, that's a modular app. That's what Apple's been doing with app thinning (app slicing, bitcode, and on-demand resources). This is why when you grab an app for your iPhone it can automatically show up on your Apple TV. With an ARM-based "PC" the same could happen there, too, if the developer has decided to build their app to work on all those different OSes.
  • Reply 85 of 96
    nhtnht Posts: 4,522member

    bkkcanuck said:
    jSnively said:

    Everyone needs to make some expectation adjustments when it comes to CPUs; we're starting to bump up hard against physics. Graphene, germanene, and titanium trisulfide have shown promise over silicon, but there are very real problems with all of those approaches. And if we want to keep shrinking these things, there's tons of hard work to be done. We can't just parallelize forever.
    You're right -- even if you have a CPU gain of 35%, you will be lucky to see a third of that in real-world scenario performance, everything else being the same.  
    The real-world scenario is that in the past 3 years IO and GPUs have also improved.  Without that increased CPU performance the total system would be less balanced and the impact of the other improvements not as profound.
    Also for the vast majority of users -- the computers that they use have 80% or 90% average idle time on the processors -- and only in short spurts do you see it hit the max.... and depending on the task that might not even be noticeable using a faster processor.  I could of course temporarily use one.... been running my Mac Pro 2008 (8-Core) at 750% CPU usage (i.e. almost 100% of all cores) for the last 6 weeks straight :o
    Well, given that the 4 core 2012 Mac Mini benches in at 10047 vs 8165 on your 8 core 2008 Mac Pro you probably should have bought a new machine within the last 5 years or so.

    The current 2015 4 core i7-6700K iMac benchmarks twice as fast as your Mac Pro and edges out the 6 core 2013 Mac Pro.

    On the other hand if you peg the iMac at 100% for 6 weeks it'll probably die.


  • Reply 86 of 96
    nht said:

    bkkcanuck said:
    jSnively said:

    Everyone needs to make some expectation adjustments when it comes to CPUs; we're starting to bump up hard against physics. Graphene, germanene, and titanium trisulfide have shown promise over silicon, but there are very real problems with all of those approaches. And if we want to keep shrinking these things, there's tons of hard work to be done. We can't just parallelize forever.
    You're right -- even if you have a CPU gain of 35%, you will be lucky to see a third of that in real-world scenario performance, everything else being the same.  
    The real-world scenario is that in the past 3 years IO and GPUs have also improved.  Without that increased CPU performance the total system would be less balanced and the impact of the other improvements not as profound.
    Also for the vast majority of users -- the computers that they use have 80% or 90% average idle time on the processors -- and only in short spurts do you see it hit the max.... and depending on the task that might not even be noticeable using a faster processor.  I could of course temporarily use one.... been running my Mac Pro 2008 (8-Core) at 750% CPU usage (i.e. almost 100% of all cores) for the last 6 weeks straight :o
    Well, given that the 4 core 2012 Mac Mini benches in at 10047 vs 8165 on your 8 core 2008 Mac Pro you probably should have bought a new machine within the last 5 years or so.

    The current 2015 4 core i7-6700K iMac benchmarks twice as fast as your Mac Pro and edges out the 6 core 2013 Mac Pro.

    On the other hand if you peg the iMac at 100% for 6 weeks it'll probably die.


    Funny, the Geekbench 3 chart had the Mac Pro higher and the Mac mini lower; don't know why the difference.  The Mac mini could not drive all my monitors, which was the original reason for going with the Mac Pro.  If they ever released a Mac mini with the MacBook Pro 15" internals, it would be more than sufficient.  It has been running Handbrake for 6 weeks constantly now :o 
  • Reply 87 of 96

    Soli said:
    With an adaptive modular OS, I would only need to install the App I need from the App Store onto the local computing device -- the App Store would automatically install the required OS modules when necessary.
    That's not a modular OS, that's a modular app. That's what Apple's been doing with app thinning (app slicing, bitcode, and on-demand resources).

    You and Dick are miscommunicating.  Dick is not talking about app resources being installed on-demand; he is talking about OS modules.   I think they would be a fair distance from that anyway, since the underlying operating system is a mostly monolithic kernel, and a lot of what is built on top of it has dependencies that would make it more fragile to do.  It is the reason why Apple has talked about "hiding" certain features that a user does not want on their iPhone; it would be problematic to allow them to be removed.
  • Reply 88 of 96
    jSnivelyjSnively Posts: 429administrator
    nht said:
    ...
    The silly assertion was covering the last 4 years, not moving forward.  Given that "tick tock" was ambitious and suffered from delays anyway the new cycle likely won't be that much different than before.
    Fair point, but that's also why I said it remains to be seen. The last 4 years were a bit of a dry run of the new cycle (as you mention); that's why the 7700K is kind of the first 'optimization' pass on the market, so the response is going to be a bit louder than it otherwise would be. That said, this isn't something 'new'; people have been complaining about the gains on Intel CPUs for a long time now. Hell, it's one of the reasons they decided to officially move to the new cycle instead of just allowing delays. People need time to adjust to the realities of the market.

    nht said:

    The complaining about how the i7-7700K is a disappointment is a lot like the complaining here about the MBP.  The folks doing the most complaining mostly don't own the i7-6700 part and don't overclock to get performance at the "high end".  The guys that own water-cooled i7-6700K rigs are probably wondering how fast the i7-7700K can go with top-end cooling without needing to get as lucky at the silicon lottery.  The parts should come binned pretty well because of that optimization phase and overclock better.
    ... which is a huge disappointment. No extra cores (we should have long been on 8 by now) and a +500MHz base clock boost doesn't matter to almost anyone at this point. 6700Ks do 4.5 OC'd just fine on air (pretty much across the line, no need to win the bin lottery), and they can go higher depending on how you're willing to cool them and whether or not you win that lottery. Now the price of that chip is going to drop as well. You will need to start carving out some really specific circumstances in order to recommend a 7700K.

    While it is true that a lot of this doesn't matter for Mac owners (90% of their stuff won't actually use this part), that doesn't mean people's opinions of the chip are bunk. Being so dismissive is presumptuous and likely wrong in a lot of cases. A lot more sites than Ars are willing to go to bat saying the same stuff about this chip.

  • Reply 89 of 96
    nhtnht Posts: 4,522member
    jSnively said:

    nht said:

    The complaining about how the i7-7700K is a disappointment is a lot like the complaining here about the MBP.  The folks doing the most complaining mostly don't own the i7-6700 part and don't overclock to get performance at the "high end".  The guys that own water-cooled i7-6700K rigs are probably wondering how fast the i7-7700K can go with top-end cooling without needing to get as lucky at the silicon lottery.  The parts should come binned pretty well because of that optimization phase and overclock better.
    ... which is a huge disappointment. No extra cores (we should have long been on 8 by now) 

    Why?  4 cores is probably still the sweet spot in terms of parallelization and single-core speed.  While the articles are a couple years old now, tasks in many high-end applications vary in terms of core usage:

    https://www.pugetsystems.com/labs/articles/Adobe-Lightroom-CC-6-Multi-Core-Performance-649/

    https://www.pugetsystems.com/labs/articles/Adobe-Photoshop-CC-Multi-Core-Performance-625/

    Adobe states:

    Photoshop generally runs faster with more processor cores, although some features take greater advantage of the additional cores than others. However, you’ll get diminishing returns with multiple processor cores: The more cores you use, the less you get from each additional core. Therefore, Photoshop doesn’t run four times as fast on a computer with 16 processor cores as on a computer with four cores. For most users, the increase in performance that more than six cores provides doesn’t justify the increased cost.
    For the most part getting the 4 core i7-6700 was better than getting the 6 core i7-5930K or 8 core Xeon E5-1680.

    https://www.pugetsystems.com/labs/articles/Haswell-vs-Skylake-S-i7-4790K-vs-i7-6700K-641/#CPUPerformance-LightroomCC

    and a +500MHz base clock boost doesn't matter to almost anyone at this point. 6700Ks do 4.5 OC'd just fine on air (pretty much across the line, no need to win the bin lottery), and they can go higher depending on how you're willing to cool them and whether or not you win that lottery.

    500MHz is a 13% bump in stock clock speed.  While this only translates into maybe a 5-6% increase for gamers, it's a more significant 10% increase for Photoshop users.

    https://www.pugetsystems.com/labs/articles/Adobe-Photoshop-CC-2017-Intel-Core-i7-7700K-i5-7600K-Performance-879/

    While the 6700K will OC to 4.5 on air the 7700K will OC to 4.9 or 5.0 on air depending on which site did the OC.  That games aren't as CPU bound anymore isn't important to folks that buy top end gear for work.  Most of those won't OC because it's a crapshoot.

    People's opinions of this chip are objectively bunk, as a 10% CPU performance increase is significant.  Would you bother upgrading if you had a top-end i7-6700K machine from last year?  No.  But anyone with a 3 year replacement cycle sees huge performance increases from Intel.  Going from the 2014 i7-4790K to the 2017 i7-7700K is a 20-25% average CPU jump for Photoshop users.
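    The percentages in this exchange are easy to sanity-check, taking the thread's own figures at face value (a 500 MHz bump on a 4.0 GHz base, and an assumed ~7% per-generation gain to illustrate how the 20-25% three-generation jump compounds; these are illustrative numbers, not official Intel specs):

    ```python
    # Sanity-check the clock-speed arithmetic from the posts above.
    # Clocks and the per-generation rate are the thread's figures, not specs.
    base_6700k = 4.0   # GHz, as used in the thread
    base_7700k = 4.5   # GHz, i.e. the "+500MHz" claim

    bump = (base_7700k - base_6700k) / base_6700k
    print(f"stock clock bump: {bump:.1%}")   # → 12.5%, the ~13% in the post

    # ~7% per generation compounded over three generations (4790K → 7700K)
    per_gen = 0.07
    three_gen = (1 + per_gen) ** 3 - 1
    print(f"three generations at 7%/gen: {three_gen:.1%}")   # → 22.5%
    ```

    The compounded figure lands inside the 20-25% range quoted for Photoshop users, which is why small per-generation gains still add up on a multi-year replacement cycle.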

    edited January 2017
  • Reply 90 of 96
    jSnivelyjSnively Posts: 429administrator
    I linked Amdahl's law in my OP, so I'm well aware of the limits of parallelization.

    Obviously it's going to depend on the workload you throw at it, but I think 8 cores is the sweet spot, not 4. We're also working with a few generations of software that have been optimized for 2-4 cores. The market penetration needs to happen to get developers to take better advantage of more cores. Games, since you brought them up, are not usually CPU bound. In fact, they typically enjoy so much CPU headroom because of the choices console manufacturers have made, not because they can't scale better, perform different tasks, or otherwise take advantage.
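    For what it's worth, Amdahl's law (linked in the OP) is easy to plug numbers into, and it shows why the 4-vs-8-core question hinges entirely on the parallel fraction of the workload. The fractions below are illustrative, not measurements of any particular app:

    ```python
    def amdahl_speedup(parallel_fraction, cores):
        """Amdahl's law: speedup = 1 / ((1 - p) + p / n)."""
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

    # Illustrative parallel fractions, not measured values for any real app.
    for p in (0.50, 0.80, 0.95):
        s4, s8 = amdahl_speedup(p, 4), amdahl_speedup(p, 8)
        print(f"p={p:.0%}: 4 cores → {s4:.2f}x, 8 cores → {s8:.2f}x, "
              f"gain from doubling: {s8 / s4 - 1:.0%}")
    ```

    At 50% parallel, doubling from 4 to 8 cores buys only about 11%; at 95% parallel it buys about 70%. That is the whole disagreement in one formula: whether typical pro workloads sit nearer the first case or the second.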

    OC'ing is also not "a crapshoot," either. When was the last time you overclocked anything? A 10-15% overclock is basically automatic at this point; it's pushing one button in any relatively modern BIOS.

    People's opinions are not objectively bunk because you think 10% is a major improvement. We're talking about the difference of an OC in a new chip revision. That's how small the gains we're talking about have gotten. Yes, you can extend the cycle out further back and say "oh 30-40%", but the reality is that's nowhere near the amount of progress that we used to enjoy, and that's why so many people are unhappy with Intel.

    It's not enough to entice the previous generation of users to upgrade, and when considering a new purchase the performance difference between the two is small enough that the price/performance curve is all out of whack. It's just not a good enough chip.

    Like I said in my OP, the market is going to need to adjust to the new realities.
    edited January 2017
  • Reply 91 of 96
    fastasleepfastasleep Posts: 6,417member
    Soli said:

    And what does power consumption have to do in a desktop processor?
    One thing at a time, why do you believe that power consumption isn't an issue with desktop-class processors? 

    It's not an issue because you're not running on a battery. It does have some benefits like allowing slimmer packaging, but these are minimal.
    I think it has to do with more than just whether it's battery-operated or not. Heat is a big factor: the power consumed is directly related to the heat produced. Apple doesn't like fans in the iMac and Mini, so they rely on convection to vent the heat. I have a 2011 i7 @ 3.4 GHz and a 2013 at a similar speed. The 2011 is the first one to come from Apple with an SSD, and with the combo of the screen, processor, and SSD... it gets hot as a firecracker on top. The extra heat slows performance and also surely shaves time off the life of the iMac.
    "Convection" can include the use of fans you know. But either way, all iMacs and minis have fans:
    https://www.ifixit.com/Guide/iMac+Intel+27-Inch+Retina+5K+Display+Fan+Replacement/30519
    https://www.ifixit.com/Guide/Mac+Mini+Late+2014+Fan+Replacement/32644
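    On the power-equals-heat point above: the standard CMOS dynamic-power relationship is P ≈ C·V²·f, and since essentially all consumed power ends up as heat, clock and voltage bumps feed straight into the thermals being discussed. The capacitance and voltage figures below are invented purely to show the scaling:

    ```python
    # Illustrative only: dynamic power scales as C * V^2 * f, and every watt
    # consumed becomes heat the enclosure must shed. Numbers are invented.
    def dynamic_power(c_farads, volts, hertz):
        return c_farads * volts ** 2 * hertz

    base = dynamic_power(1e-9, 1.2, 3.4e9)   # hypothetical 3.4 GHz @ 1.2 V
    hot = dynamic_power(1e-9, 1.3, 4.0e9)    # higher clock needs higher voltage
    print(f"relative power at 4.0 GHz/1.3 V vs 3.4 GHz/1.2 V: {hot / base:.2f}x")
    # → 1.38x
    ```

    An 18% clock bump with a small voltage step costs nearly 40% more power, which is why thermally constrained designs like the iMac and Mini care about power draw even without a battery.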
  • Reply 92 of 96
    nhtnht Posts: 4,522member
    jSnively said:
    I linked Amdahl's law in my OP, so I'm well aware of the limits of parallelization.

    Obviously it's going to depend on the workload you're throw at it, but I think 8 cores is the sweet spot, not 4. We're also working with a few generations of software that have been optimized for 2-4 cores. The market penetration needs to happen to get developers to take better advantage of more cores. Games, since you brought it up, are not usually CPU bound. In fact, they typically enjoy so much CPU headroom because of the choices console manufacturers have made. Not because they can't scale better, perform different tasks, or otherwise take advantage.
    So if software is still largely optimized for 2-4 cores, and the benchmarks of common pro software show the same, and Adobe comments that going over 6 cores isn't cost effective, and every build site recommends 4 cores for gaming, Photoshop, etc. builds, then why is 8 cores the sweet spot?  

    What data do you have that indicates that 8 is better than 6 or 10?
    People's opinions are not objectively bunk because you think 10% is a major improvement. We're talking about the difference of an OC in a new chip revision. That's how small the gains we're talking about have gotten. Yes, you can extend the cycle out further back and say "oh 30-40%", but the reality is that's nowhere near the amount of progress that we used to enjoy, and that's why so many people are unhappy with Intel. 
    People's opinions are objectively bunk because the newest chip follows the existing performance improvement curve.



    http://preshing.com/20120208/a-look-back-at-single-threaded-cpu-performance/

    The large 50% increases ended around 2004 and the 20% increases around 2011.  The generational differences are on track based on historical data, so the opinion that Intel sucks is bunk.  That you can get a 10% bump by OC'ing is immaterial; you can do that with the next generation as well.  So we haven't "enjoyed" massive improvements for over a decade.  Their unhappiness is based on nada but an entitled belief that they deserve unlimited growth, JUST like the unhappiness here regarding the MBP. 
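    The shape of that historical curve can be reproduced with a simple compounding model. The growth rates and year spans below are rough readings of the trend described here, not exact measured data:

    ```python
    # Rough compounding model of single-threaded performance growth.
    # Rates and spans are approximate readings of the trend, not measurements.
    def grow(start, years, rate):
        return start * (1 + rate) ** years

    perf = 1.0
    perf = grow(perf, 8, 0.50)   # ~50%/yr era, ending around 2004
    perf = grow(perf, 7, 0.20)   # ~20%/yr era, ending around 2011
    perf = grow(perf, 6, 0.10)   # ~10%/yr since
    print(f"relative single-thread performance: {perf:.0f}x")  # prints roughly 163x
    ```

    The point of the model is the slope change: once the per-year rate drops to ~10%, a single generation looks flat even though the curve is still climbing, which is exactly the disagreement in this thread.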
  • Reply 93 of 96
    rcfarcfa Posts: 1,124member
    ARM-based Macs are totally doable.
    Not even virtualization needs to be the stumbling block, as the PPC-Intel transition with Rosetta showed. Further, M$ is planning to release the full-fledged desktop version of W10 for ARM, meaning virtual PCs can emulate a Windows machine with an ARM CPU and run ARM-based Windows software, which would quickly become plentiful once M$ goes that way. So there are plenty of options...
  • Reply 94 of 96
    nht said:

    The large 50% increases ended around 2004 and the 20% increases around 2011.  The generational differences are on track based on historical data so the opinions that Intel sucks is bunk.  That you can get a 10% bump by OC'ing is immaterial.  You can do that the next generation as well.  So we haven't "enjoyed" massive improvements for over a decade.  Their unhappiness is based on nada but an entitled belief that they deserve unlimited growth JUST like the unhappiness here regarding the MBP. 
    The expectation has been partly set by Intel itself, which then missed the expectations it set.   As the die shrinks, the cost of bringing each generation to market goes up.  The number of competitors left standing gets whittled down..... partially because the cost of development and the risk of failure rise, making it less attractive to invest money in.  Silicon has come pretty well to its end of life.  

    The three-year cycle of needing to replace your computer has more or less died..... computers are powerful enough now that you really don't need to replace your hardware more than maybe once a decade (for desktops), or until your battery has collapsed (for laptops)....  and most people demanding "performance" typically are not even using the performance they have now.
    edited January 2017
  • Reply 95 of 96
    SoliSoli Posts: 10,035member
    rcfa said:
    ARM based Macs are totally doable.
    Not even virtualization needs to be the stumbling block, as the PPC-intel transition with Rosetta showed. Further, M$ is planning to release the full-fledged desktop version of W10 for ARM, meaning virtual PCs can emulate a Windows machine with an ARM CPU and run ARM based Windows software, which would be quickly plentiful once M$ goes that way. So there are plenty of options…
    1) Yes, it's very doable. It's been doable for many years, but it only seems to be in the last year that I've been getting less resistance with the "that's impossible" when trying to explain Apple's speed (distance in development × time between new A-series chips). Still, too many people only consider Apple's latest, fastest A-series chip. For all we know Apple has 4 or 8 cores that are closer to 3GHz and at least 8GiB RAM. I can't figure out where this collective notion comes from that the fastest ARM chip they've released for an iPad is the fastest ARM chip Apple can produce.

    2a) First off, Rosetta was emulation, not virtualization since we're talking about entirely different architectures—but that aside, there was a clear advantage with emulating PPC-based apps in x86 because of the boost in performance we had over the aging, power hungry, and hot PPC chips that forced Apple to make the transition in the first place. That isn't to say that Apple can't do the same thing again, and I've read some sparse comments that Apple could build virtualization—not emulation—into their ARM designs so that x86_64 apps could run near or at native speeds because they can design chips and their OS with the features they need.

    2b) That said, I'm not sure they would even bother. With the Mac App Store in play and all their advancements in how apps are coded and compiled with Xcode there's considerably less need for emulation to take place the way it was needed for MS and Adobe projects when their higher-end Macs had to switch to Intel to stay relevant. The path I see is Apple starting with lower-end machines, like the 12" ARM-based MacBook, where the Mac App Store would show only the apps that can run on their machine and where customers who want this small, sub-$900 Mac aren't going to be using Photoshop or needing to run Windows in either Bootcamp or a VM. I could be absolutely wrong, but that's how I foresee this happening if and when Apple is ready to include ARM to their notebook line, not the major "we're going to move everything to Intel from PPC" like Steve in 2006.

    3) If and when ARM-based Macs grow upward then an ARM-based version of Windows would become more relevant, but I'd bet two things: 1) The percentage of Mac users that also require Windows on their Mac is very low, and 2) MS will have a long and bumpy road to get a decent version of Windows on ARM with adequate programs as their "situation" of supporting so much legacy code makes everything they do slow and messy.
  • Reply 96 of 96
    brucemcbrucemc Posts: 1,541member
    Soli said:
    rcfa said:
    ARM based Macs are totally doable.
    Not even virtualization needs to be the stumbling block, as the PPC-intel transition with Rosetta showed. Further, M$ is planning to release the full-fledged desktop version of W10 for ARM, meaning virtual PCs can emulate a Windows machine with an ARM CPU and run ARM based Windows software, which would be quickly plentiful once M$ goes that way. So there are plenty of options…
    1) Yes, it's very doable. It's been doable for many years, but it only seems to be in the last year that I've been getting less resistance with the "that's impossible" when trying to explain Apple's speed (distance in development × time between new A-series chips). Still, too many people only consider Apple's latest, fastest A-series chip. For all we know Apple has 4 or 8 cores that are closer to 3GHz and at least 8GiB RAM. I can't figure out where this collective notion comes from that the fastest ARM chip they've released for an iPad is the fastest ARM chip Apple can produce.

    2a) First off, Rosetta was emulation, not virtualization since we're talking about entirely different architectures—but that aside, there was a clear advantage with emulating PPC-based apps in x86 because of the boost in performance we had over the aging, power hungry, and hot PPC chips that forced Apple to make the transition in the first place. That isn't to say that Apple can't do the same thing again, and I've read some sparse comments that Apple could build virtualization—not emulation—into their ARM designs so that x86_64 apps could run near or at native speeds because they can design chips and their OS with the features they need.

    2b) That said, I'm not sure they would even bother. With the Mac App Store in play and all their advancements in how apps are coded and compiled with Xcode there's considerably less need for emulation to take place the way it was needed for MS and Adobe projects when their higher-end Macs had to switch to Intel to stay relevant. The path I see is Apple starting with lower-end machines, like the 12" ARM-based MacBook, where the Mac App Store would show only the apps that can run on their machine and where customers who want this small, sub-$900 Mac aren't going to be using Photoshop or needing to run Windows in either Bootcamp or a VM. I could be absolutely wrong, but that's how I foresee this happening if and when Apple is ready to include ARM to their notebook line, not the major "we're going to move everything to Intel from PPC" like Steve in 2006.

    3) If and when ARM-based Macs grow upward then an ARM-based version of Windows would become more relevant, but I'd bet two things: 1) The percentage of Mac users that also require Windows on their Mac is very low, and 2) MS will have a long and bumpy road to get a decent version of Windows on ARM with adequate programs as their "situation" of supporting so much legacy code makes everything they do slow and messy.
    It should be quite "do-able" now.  The questions really are:
    - Is Apple interested in this "two PC" approach?
    - Is Apple interested in this level of investment in the laptop market in general (or do they have a different vision for evolution of computing, where the Intel Mac line is all they care to have in this area)?
    - If both of those are positive, then the next question is "what is the go-to-market strategy?"  How do they introduce this "new PC product" that clearly will not be able to do everything a Mac can do today, but should be able to do a lot (what most "consumers need"), and maybe some things better, but costs less?