Intel outlines upcoming Core i7 'Haswell' integrated graphics, touts up to triple performance

Comments

  • Reply 101 of 147
    winterwinter Posts: 1,238member
    I don't think the Air processors are that bad. They're best for what they can do. Obviously I prefer standard i5 and i7 (more the i7) mobile processors.
  • Reply 102 of 147
    wizard69wizard69 Posts: 13,377member
    winter wrote: »
    I don't think the Air processors are that bad.
Only in relation to what is currently available in other laptops. If you are looking for mainstream performance, the AIRs are pretty terrible. I'm not so certain why people get so spun up over that statement, as the processors simply aren't designed for performance.
    They're best for what they can do.
    This is certainly the case.
    Obviously I prefer standard i5 and i7 (more the i7) mobile processors.
Then in fact the AIRs are currently a terrible solution for your needs! People seem to be reluctant to put certain Apple models into buckets that categorize the lineup performance-wise. The AIRs aren't MBPs by a long shot, with anemic CPUs, embedded Intel GPUs, no OpenCL support yet and other shortcomings. I don't know, maybe people have a different definition of terrible than I do, but AIRs can be an unpleasant experience if your needs are greater than what the current models offer.

There is only so much performance you can get out of a low-wattage chip on a given process node. Even with Haswell there could be limits on just how much of an improvement the AIR-capable variants will actually deliver. I'm very hopeful that Haswell can deliver a big boost to the AIR, but it isn't obvious that the performance will be that great in the final product. Right now it looks like there is a big gulf between what the GPUs in the ULV chips can do and what the upper-end "Iris" chips can do.

Who knows what Apple will do with the AIRs; at this point they could remain terrible or actually become very attractive to MBP users.
  • Reply 103 of 147
    winterwinter Posts: 1,238member
The order of products that interest me, from most to least, to give you an idea with regard to processors:

    1. Mac mini
    2. iMac
    3. 15" retina MacBook Pro
    4. Mac Pro
    5. 13" retina MacBook Pro
    6. 13" MacBook Air
  • Reply 104 of 147
scprofessorscprofessor Posts: 218member
Alienware my friends.
  • Reply 105 of 147
    nhtnht Posts: 4,522member

Quote:
Originally Posted by wizard69 View Post

Then in fact the AIRs are currently a terrible solution for your needs! People seem to be reluctant to put certain Apple models into buckets that categorize the lineup performance-wise. The AIRs aren't MBPs by a long shot, with anemic CPUs, embedded Intel GPUs, no OpenCL support yet and other shortcomings. I don't know, maybe people have a different definition of terrible than I do, but AIRs can be an unpleasant experience if your needs are greater than what the current models offer.

There is only so much performance you can get out of a low-wattage chip on a given process node. Even with Haswell there could be limits on just how much of an improvement the AIR-capable variants will actually deliver. I'm very hopeful that Haswell can deliver a big boost to the AIR, but it isn't obvious that the performance will be that great in the final product. Right now it looks like there is a big gulf between what the GPUs in the ULV chips can do and what the upper-end "Iris" chips can do.

Who knows what Apple will do with the AIRs; at this point they could remain terrible or actually become very attractive to MBP users.

http://************/2012/01/25/macbook-air-thunderbolt-editing-4k-video-shows-why-the-mac-pro-as-we-know-it-can-die/

It is spendy but the MBA is pretty svelte. If only there were Mac drivers for external GPUs.

/shrug

Yes, I know you're going to complain that it's not the MBA doing the work in these two examples, but claiming that the Air will be an unpleasant experience if your needs are greater than what the current models offer is equally true for the Mac Pro. It will be an unpleasant experience if your needs are greater than what the current Mac Pro models offer too.

The current 2012 MBA benches in around a 2010 21.5" iMac, a desktop machine only two years older (faster than the i3s, about even with the i5s and below the i7s).

http://browser.primatelabs.com/geekbench2/search?utf8=✓&q=MacBook+air+2012

http://browser.primatelabs.com/geekbench2/search?utf8=✓&q=imac+2010

They aren't far behind the 2012 13" MBP either, which isn't surprising.

http://browser.primatelabs.com/geekbench2/search?utf8=✓&q=macbook+pro+13+2012

It's faster than the 2010 MBP I'm currently using for software development (averages around 5K in Geekbench).

http://browser.primatelabs.com/geekbench2/search?q=MacBookPro6,2

Yah, that's a three-year-old MBP but the current-year MBPs and MBAs aren't out yet either.

  • Reply 106 of 147
    wizard69wizard69 Posts: 13,377member
So why are you comparing the AIRs to two- or three-year-old hardware? I'm not sure why people can't grasp what has been said here: the AIR is a terrible performer when judged against what is possible with current technology. It is NOT debatable and is the nature of the ULV processor.

nht wrote: »
It is spendy but the MBA is pretty svelte. If only there were Mac drivers for external GPUs.

You attempt to defend the AIR and then resort to highlighting one of the issues I've mentioned as a problem on the AIR. Why, I don't know, but you need to realize that when I say the AIR has a terrible processor I'm not knocking the machine as a whole.

nht wrote: »
/shrug

Yes, I know you're going to complain that it's not the MBA doing the work in these two examples, but claiming that the Air will be an unpleasant experience if your needs are greater than what the current models offer is equally true for the Mac Pro. It will be an unpleasant experience if your needs are greater than what the current Mac Pro models offer too.

So?

The point is the AIR can be a disappointment to a lot of people if they have demanding app requirements.

nht wrote: »
The current 2012 MBA benches in around a 2010 21.5" iMac, a desktop machine only two years older (faster than the i3s, about even with the i5s and below the i7s).

Again, so?

Benchmarks seldom reflect real world use.

nht wrote: »
http://browser.primatelabs.com/geekbench2/search?utf8=✓&q=MacBook+air+2012

http://browser.primatelabs.com/geekbench2/search?utf8=✓&q=imac+2010

They aren't far behind the 2012 13" MBP either, which isn't surprising.

http://browser.primatelabs.com/geekbench2/search?utf8=✓&q=macbook+pro+13+2012

It's faster than the 2010 MBP I'm currently using for software development (averages around 5K in Geekbench).

http://browser.primatelabs.com/geekbench2/search?q=MacBookPro6,2

Yah, that's a three-year-old MBP but the current-year MBPs and MBAs aren't out yet either.

I really wish people would read what I've written here. By definition, when you buy an AIR you get a low-end processor. It doesn't really matter how well that machine performs with respect to three-year-old hardware. I don't go out and buy hardware to get three-year-old performance, nor do I pine for the foolish concept of an external GPU.
  • Reply 107 of 147
    hmmhmm Posts: 3,405member

Quote:
Originally Posted by nht View Post

If only there were Mac drivers for external GPUs.

The market is really going the other way. Beyond that, you have to look at the spectrum beyond just Macs: the market for eGPUs would basically be limited to notebooks, as there is little to no benefit in pushing the GPU out of the box on a desktop. Among notebooks with performance GPUs you can get to something like a 650M at the 15" level, or a Quadro K3000M in mobile workstation graphics, although the latter is quite expensive. I don't foresee Thunderbolt alone motivating external GPUs. You would need something more universal to distribute costs, and either way the bandwidth is a bit slim, so it isn't appropriate for the high-end cards. You would also need to address things like hot-pluggability, as it's a requirement for Thunderbolt certification, and that kind of thing is typically still restricted to enterprise hardware. You would end up with something like a GTX 660 with a slight performance hit under some applications at around a $600 price point, as it would be aimed at a smaller market; whenever companies do that, they seek higher margins so that development costs are recouped quickly. That's why I guessed $600 or so. Either way it wouldn't be cheap, and they would need to get companies that deal in things like gaming machines on board.

Quote:
Originally Posted by wizard69 View Post

Benchmarks seldom reflect real world use.

I really wish people would read what I've written here. By definition, when you buy an AIR you get a low-end processor. It doesn't really matter how well that machine performs with respect to three-year-old hardware. I don't go out and buy hardware to get three-year-old performance, nor do I pine for the foolish concept of an external GPU.

It's not even limited to just Geekbench. There are a lot of benchmarking tests within specific applications; Barefeats is one good source. It's just that most people don't understand how to interpret the data. When you buy a new machine, you might have a huge range of tasks within a given application and between applications where performance can be quite variable, which is why I don't really like static testing: it can highlight speed increases that might reflect a very small percentage of the total workload. The three-year-old analogy only works if workloads remain static or are outpaced by marginal speed increases. The example of the iMac wasn't that great: i3s were by definition low-end options, and the desktop i5s are not suitable for comparison. Hyperthreading on the Airs versus the lack of it on desktop i5s tends to be exaggerated in something like Geekbench beyond real-world use.

Presently the ULV options have been mostly marketing: take a higher-end option, disable cores or underclock it, and market it as low voltage. If they were designing from the ground up for lower power, that would be different. There's some insinuation that they are doing such a thing with Haswell or Broadwell, but I'm not sure how much of that is early tech-site kool-aid.

  • Reply 108 of 147
    MarvinMarvin Posts: 15,421moderator
    The engineer I know at Intel has said as much (though he's on the Cannonlake team, so Haswell is very old news to him)

    Wow, that's a bit down the line: Haswell 22nm 2013 -> Broadwell 15nm 2014 -> Skylake 15nm 2015 -> Skymont 10nm 2016 -> Cannonlake 10nm 2017 -> 7nm die-shrink 2018 -> 7nm 2019 -> 5nm die-shrink 2020 -> 5nm 2021

    They are currently hiring for testing:

    http://jobs.intel.com/job/Folsom-PowerPerformance-Engineer-Job-CA-95630/2507464/

    So that's the same 10nm as Skymont vs current 22nm. They said they have plans to go down to 5nm. Even if Intel only increases the GPU 50% every year, the Cannonlake GPU they are working on right now will be 1.5^4 = 5x faster than Haswell. That puts the laptop chips somewhere between a GTX 680 and the Titan right on the CPU die. They can likely cut the power draw by then too.
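
As a sanity check on that arithmetic, here's a throwaway C sketch (it assumes nothing beyond the 50%-per-year figure above, not any actual roadmap; the 2021 number is referenced a couple of paragraphs down):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* Compound an assumed 50%/year integrated-GPU gain from Haswell (2013). */
        const double rate = 1.5;
        for (int year = 2014; year <= 2021; year++) {
            int n = year - 2013;               /* years of compounding */
            printf("%d: %.1fx Haswell's GPU\n", year, pow(rate, n));
        }
        return 0;
    }

It prints about 5.1x for 2017 and 25.6x for 2021, which is where the round 5x and 25x figures come from.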

    They could probably put these into production right now but obviously they want to profit from the incremental upgrades. Still, that's only another 4 years away.

Then 2021 is another 5x on top of that, which makes it 25x faster than we have now, or equivalent to five GTX 680s in SLI, potentially in an Ultrabook in just 8 years. Here's just two in SLI:


    [VIDEO]


    Even phones/tablets will be turning out amazing graphics pretty soon. The PowerVR 6 should really raise the bar for mobile visual quality.
  • Reply 109 of 147
    wizard69wizard69 Posts: 13,377member
    Do you really think they will get to 10nm in 4 years? It just seems like the interval between nodes is getting a little longer.
    Marvin wrote: »
    Wow, that's a bit down the line: Haswell 22nm 2013 -> Broadwell 15nm 2014 -> Skylake 15nm 2015 -> Skymont 10nm 2016 -> Cannonlake 10nm 2017 -> 7nm die-shrink 2018 -> 7nm 2019 -> 5nm die-shrink 2020 -> 5nm 2021

    They are currently hiring for testing:

    http://jobs.intel.com/job/Folsom-PowerPerformance-Engineer-Job-CA-95630/2507464/

    So that's the same 10nm as Skymont vs current 22nm. They said they have plans to go down to 5nm. Even if Intel only increases the GPU 50% every year, the Cannonlake GPU they are working on right now will be 1.5^4 = 5x faster than Haswell. That puts the laptop chips somewhere between a GTX 680 and the Titan right on the CPU die. They can likely cut the power draw by then too.

    They could probably put these into production right now but obviously they want to profit from the incremental upgrades. Still, that's only another 4 years away.
I'm surprised you would say something like that. If manufacturers could churn out a new process at will, Intel would have more challenging competition.
    Then think 2021 is another 5x, that makes it 25x faster than we have now or equivalent to 5 GTX680s in SLI, potentially in an Ultrabook in just 8 years. Here's just two in SLI:
I'm not sure if Haswell will end the whining about integrated graphics, but I'm pretty certain that by 2015 few will complain. I'm actually a bit excited by the potential of Haswell in something like a Mini and low-end laptops. Let's hope Apple goes for broke implementing Haswell.

    Even phones/tablets will be turning out amazing graphics pretty soon. The PowerVR 6 should really raise the bar for mobile visual quality.
I'm wondering if I can hold out with my iPad 3 that long. We should be getting four CPU cores with VR6 and that would be really sweet in an iPad. Maybe even 64-bit computing.
  • Reply 109 of 147
    wizard69wizard69 Posts: 13,377member
Another double post. Is AI on the blink?
  • Reply 111 of 147
    MarvinMarvin Posts: 15,421moderator
    wizard69 wrote: »
    Do you really think they will get to 10nm in 4 years? It just seems like the interval between nodes is getting a little longer.

    Intel plans to be there in 4 years:

    http://www.anandtech.com/show/6253/intel-by-2020-the-size-of-meaningful-compute-approaches-zero
    http://www.reuters.com/article/2010/10/28/us-chipmakers-idUSTRE69R4GT20101028

    They might not be able to stick to that roadmap but they've done a pretty good job so far. The next node is 14nm, not 15nm as I wrote earlier.
    wizard69 wrote: »
    I'm surprised you would say something like that. If manufactures could crap out a new process at will Intel would have more challenging competition.

    It's not easy and it costs a lot of money but I think profitability plays a big role in holding it back. Intel's plants cost upwards of $5b. Intel can afford this because they make over $2b profit per quarter. NVidia can't afford to do this because they make 1/20th the profit. AMD is in a similar situation. They both use TSMC:

    http://www.theinquirer.net/inquirer/news/2256742/nvidia-says-tsmcs-rivals-are-knocking-on-its-door
    http://www.extremetech.com/computing/123529-nvidia-deeply-unhappy-with-tsmc-claims-22nm-essentially-worthless

    "As the process nodes shrink, it takes longer and longer for the cost-per-transistor to fall below the previous generation. At 20nm, the gains all-but vanish. Want to know why Nvidia rearchitected Fermi with a new emphasis on efficiency and performance/watt? You’re looking at the reason. If per-transistor costs remain constant, the only way to improve your cost structure is to make better use of the transistors you’ve got."

    This sort of thing is going to affect Apple too because unless a rival chip foundry can match the output from the likes of Samsung, Samsung is going to have a competitive advantage over them. There's a recent article that says TSMC is ramping up for the next iPhone but the 5S will use Samsung:

    http://www.tomshardware.com/news/TSMC-iPhone-Phase-5-Samsung,22423.html

There are certainly technical issues, especially at the dimensions they are working with, but there's not an economic advantage in moving too quickly:

    http://www.zdnet.com/intel-we-know-how-to-make-10nm-chips-7000004170/

    Intel doesn't have any reason to rush Haswell Xeons for example. AMD's latest Opterons still don't beat Sandy Bridge Xeons.
  • Reply 112 of 147
    v5vv5v Posts: 1,357member

Quote:
Originally Posted by wizard69 View Post

We should be getting four CPU cores with VR6 and that would be really sweet in an iPad. Maybe even 64-bit computing.

What's the point of 64 bit in a computer with only half a GB of RAM?

  • Reply 113 of 147
    mdriftmeyermdriftmeyer Posts: 7,503member

Quote:
Originally Posted by nht View Post

There is no magic pixie dust in the ARM architecture that's going to make it outperform Intel chips. The rapid increase in performance is paid for by a higher TDP.

The current top-end A15 is benching around 3K in Geekbench at best for the Exynos (8W TDP), or around the same performance as an old Core 2 Duo. The current i5-3317U (17W) in the 11" MBA benches in around 6.5K. You can see the Core i3 330M crushing the Exynos Dual in every benchmark; it benches in at 3906, and the Core i3 330M is from 2010.

http://www.phoronix.com/scan.php?page=article&item=samsung_exynos5_dual&num=3

With Haswell 10W chips expected to perform as well as the 17W Ivy Bridge chips, and 7W chips expected beyond that, I think the window for any ARM MBA is history.

The Haswell Xeon E3s look really interesting... at 13W I'd like a 13" MBP version with 32GB ECC RAM max and a BTO discrete GPU (doesn't have to be super fast, just has to do both CUDA and OpenCL)... that would be a killer engineering laptop.

A Mac Mini Server with a Xeon E3 and 32GB of ECC RAM slots would be killer too. Connect it to a render or compute farm and it would work nicely for both small-server and low-end workstation needs.

Xeon chips will never be in a Mac Mini, period.

  • Reply 114 of 147
    mdriftmeyermdriftmeyer Posts: 7,503member

Quote:
Originally Posted by Marvin View Post

Intel plans to be there in 4 years:

http://www.anandtech.com/show/6253/intel-by-2020-the-size-of-meaningful-compute-approaches-zero
http://www.reuters.com/article/2010/10/28/us-chipmakers-idUSTRE69R4GT20101028

They might not be able to stick to that roadmap but they've done a pretty good job so far. The next node is 14nm, not 15nm as I wrote earlier.

It's not easy and it costs a lot of money but I think profitability plays a big role in holding it back. Intel's plants cost upwards of $5b. Intel can afford this because they make over $2b profit per quarter. NVidia can't afford to do this because they make 1/20th the profit. AMD is in a similar situation. They both use TSMC:

http://www.theinquirer.net/inquirer/news/2256742/nvidia-says-tsmcs-rivals-are-knocking-on-its-door
http://www.extremetech.com/computing/123529-nvidia-deeply-unhappy-with-tsmc-claims-22nm-essentially-worthless

"As the process nodes shrink, it takes longer and longer for the cost-per-transistor to fall below the previous generation. At 20nm, the gains all-but vanish. Want to know why Nvidia rearchitected Fermi with a new emphasis on efficiency and performance/watt? You're looking at the reason. If per-transistor costs remain constant, the only way to improve your cost structure is to make better use of the transistors you've got."

This sort of thing is going to affect Apple too because unless a rival chip foundry can match the output from the likes of Samsung, Samsung is going to have a competitive advantage over them. There's a recent article that says TSMC is ramping up for the next iPhone but the 5S will use Samsung:

http://www.tomshardware.com/news/TSMC-iPhone-Phase-5-Samsung,22423.html

There are certainly technical issues, especially at the dimensions they are working with, but there's not an economic advantage in moving too quickly:

http://www.zdnet.com/intel-we-know-how-to-make-10nm-chips-7000004170/

Intel doesn't have any reason to rush Haswell Xeons for example. AMD's latest Opterons still don't beat Sandy Bridge Xeons.

TSMC co-developed with GlobalFoundries, which is growing much faster. ARM and AMD both contract with TSMC and GF, and both foundries are certified to stamp out ARM parts; AMD-based APUs/CPUs are only certified at GF. The more than $8 billion being invested in Malta with IBM is a lock on the most advanced wafer scale-downs for both ARM and AMD.

TSMC and Samsung were both partners with GF. They all compete against one another.

MEMS markets are exploding, and that requires the players most current with the future, and it isn't Intel.

http://thefoundryfiles.com/2013/04/23/bringing-mems-to-the-mainstream-latest-milestones-and-future-trends/

GF has already announced stamp-outs for 10nm at the start of 2015 and 7nm at the start of 2017.

Intel announcing 10nm months after GF/TSMC is a me-too response.

ARM's CEO has made clear what he thinks of Intel:

http://www.electronicsweekly.com/mannerisms/markets/intel-has-no-process-advantage-2012-10/

As usual Intel makes crap up as it goes along and the media buys it hook, line and sinker.

Quote:
Mannerisms
Intel Has No Process Advantage In Mobile, says ARM CEO
October 24, 2012

Intel has no advantage in IC manufacturing when it comes to manufacturing processes used for mobile ICs, Warren East, CEO of ARM, tells EW.

"This time last year there was a lot of noise from the Intel camp about their manufacturing superiority," says East, "we're sceptical about this because, while the ARM ecosystem was shipping on 28nm, Intel was shipping on 32nm. So I don't see where they're ahead."

Furthermore, with the foundries accelerating their process development timescales, it looks increasingly unlikely that Intel will be able to find any advantage on mobile process technology in the future.

"We're supporting all the independent foundries," says East. That includes 20nm planar bulk CMOS and 16nm finfet at TSMC; 20nm planar bulk CMOS and 14nm finfet at Samsung; and 20nm planar bulk CMOS, 20nm FD-SOI and 14nm finfet at Globalfoundries.

It gives the ARM ecosystem a formidable array of processes to choose from. "I'm no better equipped to judge which of these processes will be more successful than anyone else," says East, "our approach is to be process agnostic."

The important thing is that the foundries' process roadmap is on track to intersect Intel's at 14nm.

14nm will be the first process at which Intel intends to put mobile SOCs at the front of the node, i.e. among the first ICs to be made on a new process.

Asked if the foundries were prepping their next-generation processes with the intention of putting mobile SOCs at the front of the node, East replies: "That's the information we're seeing from our foundry partners."

Globalfoundries intends to have 14nm finfet in volume manufacturing in 2014, the same timescale as Intel has for introducing 14nm finfet manufacturing.

In fact, GF's 14nm process may have smaller features than Intel's 14nm process because, says Mojy Chian, senior VP at Globalfoundries, "Intel's terminology doesn't typically correlate with the terminology used by the foundry industry. For instance Intel's 22nm in terms of the back-end metallisation is similar to the foundry industry's 28nm. The design rules and pitch for Intel's 22nm are very similar to those for foundries' 28nm processes."

Jean-Marc Chery, CTO of STMicroelectronics, points out that the drawn gate length on Intel's "22nm" process is actually 26nm.

Furthermore, Intel's triangular fins, which degrade the advantages of finfet processing, could underperform GF's rectangular fins, which optimise the finfet advantage.

At the front of the GF 14nm finfet node will be mobile SOCs, says Chian. GF has been working with ARM since 2009 to optimise its processes for ARM-based SOCs.

At TSMC the first tape-out on its 16nm finfet process is expected at the end of next year. That test chip will be based on ARM's 64-bit V8 processor.

Using an ARM processor to validate its 16nm finfet process should give TSMC's ARM-based SOC customers great confidence.

Asked about the effects of finfets on ARM-based SOCs, East replies: "There's no rocket science in what you get out of it. The question is does it deliver the benefits at an acceptable cost? You don't get something for nothing. How much does it cost to manufacture? How good is the yield? And that, of course, affects cost."

And so on goes Intel beating its head against the wall to get into the low-margin mobile business.

Recently Intel said it expected its Q4 gross margin to drop 6%, from Q3's 63% to 57%. Shock, horror, said the analysts.

But if Intel succeeds in the mobile business, its gross margin will drop a lot more than that.

It's a funny old world.

ARM is quite pleased with the tape-outs at TSMC, Samsung and GF. Apple has choices.

With the announcement of an additional Fab 8.2 and $10 billion invested in Saratoga:

http://www.timesunion.com/business/article/New-chip-fab-could-go-up-fast-4318910.php

It's rather clear that Intel will get squeezed from all sides. Apple will never have to use Intel for any of its embedded devices.

  • Reply 115 of 147
    wizard69wizard69 Posts: 13,377member
Performance will improve, for one. That however isn't the big issue: today's iOS systems come with 1GB of RAM, which can easily expand to 2GB in iPads this year. Considering the way Safari and other apps behave on the platform, this extra memory is needed. After that it becomes a problem of how you divide up the memory map between system demands and app memory usage.

What a 64-bit chip with a 64-bit address space provides is a clean way for Apple to move iOS forward without having to jump through hoops to leverage more RAM on 32-bit systems once they move past 2GB. Beyond that, I suspect Apple will at some point have to address demands for multitasking user apps; one approach there would be virtual 32-bit images. Cortex A15 is a possibility but frankly it is a short-term solution; it just makes sense to set yourself up with 64-bit hardware and be done with it. Also, more RAM means that Apple can add more features to the OS that operate without encroaching on user-space RAM.

So the real issue comes down to this: 64-bit hardware would give iOS a lock on future development. It effectively removes an obstacle and a distraction from software development. Technically it isn't a huge issue today, as Apple can transition to 2GB easily, but it becomes significant very quickly after that.
    v5v wrote: »
    What's the point of 64 bit in a computer with only half a GB of RAM?
iPad is at 1GB.
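
To make the address-space limit concrete, here's a minimal C sketch (illustrative only; it's just the arithmetic of pointer widths, nothing specific to Apple's hardware): a 32-bit process can address at most 2^32 bytes = 4 GiB, and the OS claims part of that, which is why RAM growth past 2GB starts forcing workarounds on 32-bit systems.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* Pointer width caps how much memory a single process can address. */
        printf("pointer size on this build: %zu bits\n", sizeof(void *) * 8);

        /* A 32-bit address space tops out at 2^32 bytes = 4 GiB, and the OS
           and memory-mapped hardware claim a slice of that before apps do. */
        uint64_t limit32 = (uint64_t)1 << 32;
        printf("32-bit addressable limit: %llu bytes (%llu GiB)\n",
               (unsigned long long)limit32,
               (unsigned long long)(limit32 >> 30));
        return 0;
    }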
  • Reply 116 of 147
    solipsismxsolipsismx Posts: 19,566member
    wizard69 wrote: »
    IPad is at 1GB.

The sad thing for mobile computing is that it won't be too long before Android-based devices will be needing more than 4GB RAM and 64-bit ARM will be upon us. They'll use that as marketing, consumers will likely eat it up, and the trolls will come here saying how Apple is out of touch and falling behind for only offering 2GB RAM and a 32-bit architecture.
  • Reply 117 of 147
    MarvinMarvin Posts: 15,421moderator
mdriftmeyer wrote: »
It's rather clear that Intel will get squeezed from all sides. Apple will never have to use Intel for any of its embedded devices.

    I hope that will be the case. I don't think it would be a good situation if one company (especially Intel) had a manufacturing advantage.

    GlobalFoundries is suggesting a better roadmap than Intel:

    http://www.xbitlabs.com/news/other/display/20130211140309_GlobalFoundries_10nm_Process_on_Track_for_2015_7nm_Fabrication_Process_Due_in_2017.html

    7nm in 4 years. Apple is on 32nm just now with the A6X. The A6 and A6X are already faster than an entry PowerMac G5. By the time they get down to those smaller processes, they'll be able to get 2013 laptop performance on passively cooled hardware using less than 2W. No fan, 24+ hour battery life.
  • Reply 118 of 147
    tallest skiltallest skil Posts: 43,388member


Quote:
Originally Posted by SolipsismX View Post

The sad thing for mobile computing is that it won't be too long before Android-based devices will be needing more than 4GB RAM and 64-bit ARM will be upon us.

Hey, look at it this way:

For the first time in its existence, a company other than Apple will be the driving force (and testbed, as Apple often is) behind innovation in the industry.

Android crap will test out all the early (too early*) 64-bit ARM chips. They'll have the failures and the glitches, and by the time they get it right, Apple will be wanting the later, faster, cheaper models.

*Oh, I guess LTE, but that's not really Apple's industry.

  • Reply 119 of 147
    solipsismxsolipsismx Posts: 19,566member
tallest skil wrote: »
Hey, look at it this way:

    For the first time in its existence, a company other than Apple will be the driving force (and testbed, as Apple often is) behind innovation in the industry.

    Android crap will test out all the early (too early*) 64-bit ARM chips. They'll have the failures and the glitches, and by the time they get it right, Apple will be wanting the later, faster, cheaper models. 

    *Oh, I guess LTE, but that's not really Apple's industry.

As with the desktop OS move from 32-bit to 64-bit, Apple wasn't first but they were the first to do it right: fat binaries that support both architectures simultaneously, and excellent driver support when they made the move. I really hope Android doesn't copy what MS did; since there is no server market for Android, I don't expect them to.
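
For what it's worth, the fat-binary approach is easy to picture in code. Here's a minimal C sketch (the file name is hypothetical; the clang invocation in the comment is the standard way universal binaries were built on the Mac toolchain of that era): one source file compiled once per architecture, with the slices folded into a single executable.

    /* arch.c (hypothetical name) -- one source, two slices.
       Built into a universal binary with something like:
           clang -arch i386 -arch x86_64 arch.c -o arch
       The compiler runs once per architecture and the slices are folded
       into a single fat Mach-O executable; the loader picks one at launch. */
    #include <stdio.h>

    int main(void) {
    #ifdef __LP64__
        /* __LP64__ is predefined by the compiler on 64-bit (LP64) targets. */
        puts("this slice was compiled for a 64-bit architecture");
    #else
        puts("this slice was compiled for a 32-bit architecture");
    #endif
        return 0;
    }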
  • Reply 120 of 147
    nhtnht Posts: 4,522member

Quote:
Originally Posted by wizard69 View Post

So why are you comparing the AIRs to two- or three-year-old hardware?

2-3 year old desktop hardware. That's not shabby.

Quote:

I'm not sure why people can't grasp what has been said here: the AIR is a terrible performer when judged against what is possible with current technology. It is NOT debatable and is the nature of the ULV processor.

It sure as hell is debatable if the performance is equal to what a desktop was doing a couple years ago.

Quote:

Benchmarks seldom reflect real world use.

Yes, in the real world the performance differences are even smaller.

Quote:

I really wish people would read what I've written here. By definition, when you buy an AIR you get a low-end processor. It doesn't really matter how well that machine performs with respect to three-year-old hardware. I don't go out and buy hardware to get three-year-old performance, nor do I pine for the foolish concept of an external GPU.

I read what you wrote here, it's just stupid. Of course a ULV processor isn't as fast as the top-end desktop CPU. As you say: so what?

It doesn't make it terrible. Nor does it make it automatically a "terrible solution for your needs" even as a developer (unless you happen to be a 3-D dev). The fact is that the current 2.0GHz dual-core i7 is fast enough for many users who a few years ago would have been considered "demanding" users.

When you buy an Air you do not get a low-end processor but a low-power processor.
