Intel to release something new

Posted in Future Apple Hardware
Fascinating speculation.



Basically it speculates that Intel's next generation of cores will be simpler multicore processors that no longer directly execute the x86 instruction set. Intel has long planned to go down the route of simple multiple cores, just never this soon, so it kind of makes sense given Intel's intense desire to kill x86 over the years, and it gives them an opportunity to save the Itanium line. The only questions are whether the translation will work well enough, and whether Intel is just being a typical company, ridiculously overhyping something that is otherwise ho-hum.

Comments

  • Reply 1 of 55
    macronin Posts: 1,174 member
    Let us not forget about this nugget of goodness...



    Platform 2015
  • Reply 2 of 55
    Quote:

    Originally posted by Telomar

    Fascinating speculation.



    Basically it speculates that Intel's next generation of cores will be simpler multicore processors that no longer directly execute the x86 instruction set. Intel has long planned to go down the route of simple multiple cores, just never this soon, so it kind of makes sense given Intel's intense desire to kill x86 over the years, and it gives them an opportunity to save the Itanium line. The only questions are whether the translation will work well enough, and whether Intel is just being a typical company, ridiculously overhyping something that is otherwise ho-hum.




    Hopefully Hannibal at Ars reads this piece of speculation... I'd be wary of drinking too much Kool-Aid before a couple of luminaries pipe up and either support or bag this type of analysis. Mind you, the writer does a pretty good job of backing himself up, especially when he mentions the 'Russian acquisition'.
  • Reply 3 of 55
    Jeez, that author really talked himself into believing his own myth. There are factual problems, though, like assuming the Xbox/PS3 SPEs are as good as "any desktop", which is a joke.
  • Reply 4 of 55
    The author of the original article keeps mentioning all this stuff that can be done in software with regard to running non-native binaries.



    1) there is a (severe) performance overhead.



    2) Intel does not write Windows, and getting M$ to implement these features is never gonna happen. Translating binaries and writing them out to disk?! This is all beyond Intel's control.
  • Reply 5 of 55
    tht Posts: 6,018 member
    That's some crazy speculation there.
  • Reply 6 of 55
    cubist Posts: 954 member
    Re the title: I suppose Intel might release something new at some point, but the times they've tried in the past (iAPX 432, i860, Itanium) have been billion-dollar failures. It's only the incremental improvements to x86 that have kept them alive. Big corporations, especially those with Intel's monopolistic tendencies, have never been big innovators.



    ARM/XScale is still a promising architecture...
  • Reply 7 of 55
    Intel pretty much has to have something new to back up the 15-versus-70 performance-per-watt claim that Jobs made and that, apparently, Intel is now making too. I don't think Yonah is it, although that Inquirer piece has a lot of alternative thinking in it.



    For instance, XBench (yeah, I know) is a universal binary now, and a 2.0 GHz iMac trounces the development system (yeah, I know - it's early).



    http://ladd.dyndns.org/xbench/merge....41&doc2=128621



    If you look at those benches, the P4 handles memory an awful lot faster than the Mac, but the raw CPU figures are terrible, especially on vector code. And elsewhere, where it does well, I'd bet it's because the Intel chip has four times the L2 cache of the G5.



    I've been using iTunes on a Pentium M laptop over the last week, and it rips tracks at a third of the speed my iMac achieves. I'm not impressed. So the graphics are snappier; meh. Who cares, other than gamers?
  • Reply 8 of 55
    THT, I find it odd that the 2 GHz iMac can flatten a Pentium M in computation, but the graphics on a Pentium M laptop are snappier.



    What is going on with graphics optimisation on the PPC?



    It's the same in Flash, web browsing and OpenGL.



    Intel CPUs seem to be 'snappier' at those things, yet a PowerMac seems to flatten a 3 GHz Pentium in the calculation department?



    Anybody care to explain why? A stab at it?



    My guess is still that, from compilers to graphics drivers, the PPC platform doesn't have the resources Intel has (monetary, pervasive developer support, etc.) to better optimise said platform.



    How else can the same graphics card perform half as well on a dual PowerMac??!?



    Where said optimisation does take place, i.e. Apple's Safari vs M$ Internet Explorer, or Apple's Final Cut vs Adobe's Premiere, the Apple solution saws the legs off the opposition...



    When Intel 'does release something new', hopefully it is the reason why Apple have gone Intel. Maybe we'll get the 'snappy', OpenGL parity, computational punch, and the ability to pack it all into a 1-inch PowerBook... Want one?



    Lemon Bon Bon
  • Reply 9 of 55
    I wonder if the 'new' Intel CPU/architecture is anything like Cell, i.e. with mini-cores optimised for things like H.264 decoding... but for a computer environment?



    That would be exciting. Along with dual cores in laptops...maybe this 'next big thing' is what persuaded Apple to jump ship?



    Lemon Bon Bon
  • Reply 10 of 55
    tht Posts: 6,018 member
    Quote:

    Originally posted by Lemon Bon Bon

    THT, I find it odd that the 2 GHz iMac can flatten a Pentium M in computation, but the graphics on a Pentium M laptop are snappier.



    What are you basing this conclusion on?



    If it is the XBench results in this thread, I wouldn't rely on it. XBench is a horrible benchmark; it barely does a good job with G4 and G5 comparisons. PPC versus x86 will be completely unreliable for a while.



    Quote:

    What is going on with graphics optimisation on the PPC?



    It's the same in Flash, web browsing and OpenGL.




    Intel systems have very good burst memory performance. Flash and OpenGL issues are optimisation and driver issues.



    Quote:

    When Intel 'does release something new', hopefully it is the reason why Apple have gone Intel. Maybe we'll get the 'snappy', OpenGL parity, computational punch, and the ability to pack it all into a 1-inch PowerBook... Want one?



    Yes. I'm waiting for a dual-core 2 GHz Yonah PowerBook.



    The Merom information to be talked about next week will be very interesting. Almost everyone is speculating it will be a brand-new microarchitecture. What the Inquirer is speculating is a little too out-of-the-box, though.
  • Reply 11 of 55
    tht Posts: 6,018 member
    Quote:

    Originally posted by Lemon Bon Bon

    I wonder if the 'new' Intel CPU/architecture is anything like Cell, i.e. with mini-cores optimised for things like H.264 decoding... but for a computer environment?



    That would be exciting. Along with dual cores in laptops...maybe this 'next big thing' is what persuaded Apple to jump ship?




    My bet for why Apple is switching is a quad-core Merom at 45 nm with the same power envelope as Yonah, and the high likelihood of IBM being 6 to 12 months behind Intel in getting to 45 nm. And I don't think it will be like Cell.



    So supposedly, a 2 GHz Merom core = a 2.5 to 3 GHz Yonah core = a 2.5 to 3 GHz 970fx. A 2 GHz quad-core with the same power consumption as a 2 GHz Yonah means a laptop equivalent to a 2.5+ GHz quad-PPC machine in 2008.



    With IBM and Freescale behind in the fab wars, and not putting much effort into designing a good personal computer CPU, it would have been a tremendously uphill battle for Apple, so the switch was prudent.
  • Reply 12 of 55
    telomar Posts: 1,804 member
    Quote:

    Originally posted by nowayout11

    Jeez, that author really talked himself into believing his own myth. There are factual problems, though, like assuming the Xbox/PS3 SPEs are as good as "any desktop", which is a joke.



    I think his point in focusing on the SPEs was more that they remove out-of-order execution and branch prediction: all their transistors are aimed at accelerating code. In-order processors aren't necessarily bad, and removing logic to make room for other things may provide benefits. That's the route Intel tried to go, to some extent, with Itanium.



    Quote:

    Originally posted by 1337_5L4Xx0R

    The author of the original article keeps mentioning all this stuff that can be done in software with regard to running non-native binaries.



    1) there is a (severe) performance overhead.



    2) Intel does not write Windows, and getting M$ to implement these features is never gonna happen. Translating binaries and writing them out to disk?! This is all beyond Intel's control.




    Well, yes and no. Doing these things in software is slow: FX!32 achieved around 50% of native speed, and 70% was probably doable, but this article talks about doing it on the processor itself, essentially removing much of the translation latency that software has to deal with. It's worth noting Transmeta has done this for years, and only struggled with its inability to actually produce chips like the big manufacturers do, and perhaps with being a little too far ahead of its time. Bit more here.



    Currently, because of x86's lack of registers, Intel shoves huge amounts of cache in. What if they redesigned it, stripped out a whole bunch of control circuitry, and put a code translator on the die? You'd end up with a much simpler processor; this is actually what Intel attempted with the Itanium.



    This is also similar to what is done with virtualisation, where a piece of software loads to mediate between whatever OSes a person wants to use and the processor, rather than giving any OS direct access to it.



    Also keep in mind that although the Pentium M solves many of the power problems of the P4, Intel ultimately still expects the power draw to head back up to problematic levels, so something eventually has to be done to reduce power consumption again.
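
    For the curious, here's a minimal sketch of that translate-and-cache idea in Python. It's a toy under invented assumptions (the 'guest ISA', the opcodes and the cache layout are all made up for illustration; real systems like FX!32 persisted translations to disk), but it shows why repeated execution amortises the one-time translation cost:

    ```python
    # Toy translate-and-cache (FX!32 / Transmeta style). Hypothetical guest
    # ISA and opcodes, purely for illustration.

    translation_cache = {}  # block address -> translated host function

    def translate_block(guest_code):
        """Expensive one-time step: turn a guest basic block into host code
        (here, a plain Python closure stands in for native code)."""
        steps = []
        for op, arg in guest_code:
            if op == "ADD":
                steps.append(lambda acc, n=arg: acc + n)
            elif op == "MUL":
                steps.append(lambda acc, n=arg: acc * n)
            else:
                raise ValueError(f"unknown guest op {op!r}")
        def host_block(acc):
            for step in steps:
                acc = step(acc)
            return acc
        return host_block

    def run_block(addr, guest_code, acc):
        # Hot path: after the first visit the translation cost is gone,
        # which is how software translators claw back their overhead.
        if addr not in translation_cache:
            translation_cache[addr] = translate_block(guest_code)
        return translation_cache[addr](acc)

    acc = 1
    for _ in range(1000):  # repeated execution amortises the translation
        acc = run_block(0x1000, [("ADD", 3), ("MUL", 2)], acc) % 10007
    print(acc)
    ```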
  • Reply 13 of 55
    @homenow Posts: 998 member
    Quote:

    Originally posted by Lemon Bon Bon

    THT, I find it odd that the 2 GHz iMac can flatten a Pentium M in computation, but the graphics on a Pentium M laptop are snappier.



    What is going on with graphics optimisation on the PPC?



    It's the same in Flash, web browsing and OpenGL.



    Intel CPUs seem to be 'snappier' at those things, yet a PowerMac seems to flatten a 3 GHz Pentium in the calculation department?



    Anybody care to explain why? A stab at it?



    My guess is still that, from compilers to graphics drivers, the PPC platform doesn't have the resources Intel has (monetary, pervasive developer support, etc.) to better optimise said platform.



    How else can the same graphics card perform half as well on a dual PowerMac??!?



    Where said optimisation does take place, i.e. Apple's Safari vs M$ Internet Explorer, or Apple's Final Cut vs Adobe's Premiere, the Apple solution saws the legs off the opposition...



    When Intel 'does release something new', hopefully it is the reason why Apple have gone Intel. Maybe we'll get the 'snappy', OpenGL parity, computational punch, and the ability to pack it all into a 1-inch PowerBook... Want one?



    Lemon Bon Bon




    The "feel" of a program is a bad measure, as it relies on feedback from the program and has nothing to do with how fas the operation is bieng completed. In fact giving that feedback could actually slow down the completion of the task at hand, even if it does not "feel" like it is as fast to the user.
  • Reply 14 of 55
    aegisdesign Posts: 2,914 member
    Dual-core Yonah at 2.16 GHz is supposedly around the 29-32 W mark, which is just a little more than the current Dothans. I'd imagine they'd aim Merom at the same range. 30 W is about as much as you want in a thin laptop. Quad-core 2.5 GHz PPC performance at 30 W would indeed be nice.



    But how does it tally with Jobs' 15/70 thing?



    Intel = 70 'perf units' per watt = 2100 units in a 30W laptop.



    PPC = 15 units per watt = 140W for same 2100 units. Obviously not a laptop ;-)





    That's 35 W per core for a quad-core PPC, which seems unlikely for your 2008 chip. IBM could probably hit that now: IBM have a low-power 970FX at 13 W, and they'd save even more with process shifts and multi-core.



    At the moment, for the sake of argument, let's call the Pentium M vs G5 performance/power match a draw: the G5 is faster, the Pentium M consumes less power.



    Is Jobs therefore really saying that Intel can get the power usage of their chips down to a level where they consume 70/15 (= 4.66) times less power than now while staying as fast as a G5, or maintain the 30 W limit in laptops but increase speed to 4.66 times a current G5?



    And that IBM won't be able to do the same or improve?



    I'm sure IBM could improve. If they halved their power requirement to 70 W in our fictional 2008 computer while keeping today's performance, that would mean the Intel part would have to be 9.3 times quicker than today's G5. That just seems unlikely.



    The Intel dev conference next week will either usher in a new level of performance chip we're not expecting, or we'll be doing more maths to work out how Jobs arrived at that 70/15 BS ratio.
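
    For what it's worth, here's the arithmetic above in one place. Every number comes from this post, i.e. Jobs' unexplained 'performance units per watt', not from any real spec sheet:

    ```python
    # Restating the post's arithmetic. All figures are Jobs' 15/70
    # "performance units per watt" claim, not measured numbers.
    intel_ppw = 70          # Intel performance units per watt (per Jobs)
    ppc_ppw = 15            # PPC performance units per watt (per Jobs)
    laptop_watts = 30       # rough thermal budget for a thin laptop

    units = intel_ppw * laptop_watts        # 2100 units in a 30 W laptop
    ppc_watts = units / ppc_ppw             # 140 W for the same 2100 units
    ratio = intel_ppw / ppc_ppw             # ~4.67x

    print(units, ppc_watts, round(ratio, 2))  # 2100 140.0 4.67
    ```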
  • Reply 15 of 55
    wizard69 Posts: 13,377 member
    I found the article interesting. Speculation of course but Intel itself seems to be priming the market for a big change.



    In a sense the author is right about the Pentium architecture and the great deal of heat generated inside the unit by circuitry that is not producing direct computation. In a way IBM tried to address this with the PPE; apparently it was a failure with respect to power usage, which I take to be an IBM process issue.



    One approach, though, might meet somewhere in the middle, between today's designs and VLIW processing. One could build a chip with a bunch of cores optimized to support modern 64-bit applications, and handle older addressing and odd functions in one complex core. AMD has even suggested such a design.
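
    To make that middle-ground idea concrete, here is a toy dispatcher. It's entirely hypothetical; the 'legacy' flag just marks work that would need the complex core's older addressing modes and odd functions:

    ```python
    # Toy scheduler for the suggested middle ground: several simple cores
    # for modern 64-bit code, one complex core for legacy oddities.
    from collections import deque

    simple_cores = [deque() for _ in range(4)]   # lean, fast, limited features
    complex_core = deque()                       # handles legacy addressing etc.

    def dispatch(task):
        if task["legacy"]:
            complex_core.append(task)            # odd/legacy work to the big core
        else:
            min(simple_cores, key=len).append(task)  # least-loaded simple core

    for t in ({"name": "app64", "legacy": False},
              {"name": "dos_era", "legacy": True},
              {"name": "codec", "legacy": False}):
        dispatch(t)

    print([len(q) for q in simple_cores], len(complex_core))  # [1, 1, 0, 0] 1
    ```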



    As to any new hardware, the one thing that is extremely obvious is that anything new to the market must address the need for massively parallel systems in the future. Today's Intel hardware doesn't do this well, so that is something the new hardware would have to address.



    As to the speculation, it is a given that the x86 base is becoming a drag on innovation. Intel recognized this with Centrino, reaching back a couple of generations for the processor part of the package. That, of course, is a short-term laptop solution. Future solutions need lots of cores at very low power points. My big concern, though, is that Intel may simply hit these power points with improved process technologies.



    Dave
  • Reply 16 of 55
    tht Posts: 6,018 member
    Quote:

    Originally posted by aegisdesign

    That's 35 W per core for a quad-core PPC, which seems unlikely for your 2008 chip. IBM could probably hit that now: IBM have a low-power 970FX at 13 W, and they'd save even more with process shifts and multi-core.

    ...

    Is Jobs therefore really saying that Intel can get the power usage of their chips down to a level where they consume 70/15 (= 4.66) times less power than now while staying as fast as a G5, or maintain the 30 W limit in laptops but increase speed to 4.66 times a current G5? And that IBM won't be able to do the same or improve?




    Jobs was exaggerating, obviously. But there is, and will be, a kernel of truth in there. It's really all about the process shifts (which allow for multicore chips in affordable packages) and who has the business case to do it.



    My basic premise is really Intel getting to 45 nm 6 to 12 months before anyone else does; they'll be the only ones with a real, non-risky business case to develop a 45 nm fab. It's been like that for every node since 130 nm: Intel has been in front, 6+ months ahead of everyone else. In Q1 06 they will reach 65 nm 6 months before anyone else, and I can only see that gap increasing for 45 nm. I would not be surprised if IBM is 1+ year behind.



    So, in 2008, the roadmap could have a 2 GHz 45 nm quad-core Intel at 30 watts against a 2 GHz 65 nm dual-core PPC at 60 watts. For the sake of easy math, say a Merom core has the same IPC as a PPC core, they run at the same clock rate, and each delivers 10 performance units. The quad-core Merom derivative would then have 4x the performance/watt.



    Possible? Just look at a 65 nm 2 GHz Yonah at 25 watts versus a 90 nm 2 GHz 970fx at 50 watts. A 90 nm 2 GHz P-M (Dothan) arguably has the same performance as a 2 GHz 970fx, and Intel has been shipping 25-watt Dothans for 6 months already. Intel has had a 2x performance/watt advantage over the 970fx for 6 months now.



    If only Freescale could have shipped a 2 GHz 7448 in Q1 05 (meaning they would have had a 90 nm fab in Q4 04), Apple could have stayed close to the P-M. Freescale is almost a full process generation behind Intel now, so it's a "never was."
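
    The ratios above do check out under these assumptions. A quick sketch, using only this post's hypothetical roadmap figures rather than any measurements:

    ```python
    # Sanity-checking the perf/watt claims with the post's assumed numbers.
    def perf_per_watt(cores, units_per_core, watts):
        return cores * units_per_core / watts

    # Today, per the post: 90 nm 2 GHz Dothan at 25 W vs 90 nm 2 GHz 970fx
    # at 50 W, assuming equal per-core performance (10 "units" each).
    dothan = perf_per_watt(1, 10, 25)      # 0.40
    g970fx = perf_per_watt(1, 10, 50)      # 0.20
    print(dothan / g970fx)                 # 2.0  -> the existing 2x advantage

    # 2008, per the post: 45 nm quad-core Merom at 30 W vs 65 nm dual-core
    # PPC at 60 W, same per-core assumptions.
    merom_quad = perf_per_watt(4, 10, 30)  # ~1.33
    ppc_dual   = perf_per_watt(2, 10, 60)  # ~0.33
    print(merom_quad / ppc_dual)           # 4.0  -> the projected 4x advantage
    ```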
  • Reply 17 of 55
    telomar Posts: 1,804 member
    A couple more tidbits from my Ars roaming expedition. First, a comment on code morphing and caching, from here:

    Quote:

    So things look bad for Transmeta at this point, but I'm actually not quite yet willing to concede that it's over. If you've been reading Ars for a long time, you'll probably recall that I was an early proponent of Transmeta's Code Morphing technology and that I've repeatedly identified some of the design decisions it embodies as a portrait of things to come. Specifically, I singled out the technique of translating binary code and caching the translated copy (with or without optimization) as a significant development in microprocessor design, citing as evidence the fact that a number of current and future designs use this approach (not the least of which is the Pentium 4). The reason I'm not ready to drive the nail in TM's coffin is that I still believe this to be true. The facts are that binary translation with caching is the future and Transmeta has invested over six years, lots of money and an immense amount of brainpower into this technology.



    Also, on ISAs in general, the stuff on Dynamo on page 4 is amusing if nothing else. It essentially looks at a computer translating its own ISA and the performance gain that achieved, but also at the question of emulation vs translation.



    Of course, all of this could just be Intel moving to improve Itanium and future versions of IA-32e. Either way, it poses interesting questions for microprocessor design.
  • Reply 18 of 55
    Quote:

    Originally posted by cubist

    ARM/XScale is still a promising architecture...



    PPC and MIPS are much cleaner, which is important since in the embedded space there's still plenty of assembly programming. ARM should go straight 32-bit and ditch some of the legacy features that are dated. It's a good ISA, but with XScale in particular it's getting inelegant. The Samsung S3C24x0 and TI OMAPs are more straightforward ARM derivatives and, coincidentally, are much better.
  • Reply 19 of 55




    It's nice to have a PC/Mac card. As more Macs get sold these days and Apple's market share improves... surely this bold dual-format move will become a compelling business case?



    Especially with the move to Intel?



    You can see the weird Doom 3 GL scores. Pretty much every Mac card performs similarly, except the outdated 9600. And that's my point: why attempt a dual-format card with such an out-of-date GPU? Why not a 9800? Or higher? Seems pricey to me...



    ...it's almost as if there is a deliberate ceiling on the Doom 3 game. Even at 640x480, 40-ish fps.



    And on the Cinebench GL benches, the Mac equivalents perform poorly compared to their x86 counterparts.



    Personally, I think THT is right. It seems to be an optimisation issue.



    I would think that with the move to Intel, ATI, Nvidia etc. will have more familiarity with the chips they're programming for... the little-endian thing... the support structure... only having to think about one kind of chip... etc. Maybe we will see Mac parity in GL.



    THT, have you heard anything about GL 2? Will it be any faster? Or is it all about fancy shaders and programmable GPUs? Even in GL threads, nobody seems to talk about it. It seems to be the only competition right now to DirectX. And what's worrying is that M$ seems to be dropping levels of GL support in Vista, i.e. their OS will be less optimised for it. Which may mean that developers won't bother with it. No Wintel games for it. Less easy to do DirectX ports to Mac Intel GL?



    THT, your business case for Apple's move is very impressive in that it illustrates some basic maths against the roadmap we're aware of. Simple. Logical. If we think about it, IBM is behind on dual core. And they DO multicore technology... and EVEN THEY couldn't beat Wintel to the punch. Not only is there the process delay of IBM falling behind, but add to that Apple's time-to-market for products. Can Apple afford to be 6, 12, even 18 months behind the competition? Especially when they're enjoying success they haven't had since the Apple II? Going forward, IBM becomes Motorola compared to Intel, i.e. unable to compete on fab transitions. IBM made a meal of the last transition. How will they cope going forward? Keep issuing press releases and 'tech demo' announcements of 'some amazing new tech' we'll never see in our lifetime?



    Lemon Bon Bon
  • Reply 20 of 55
    iqatedo Posts: 1,846 member
    I loved this in the original article:



    'Intel will now be free to do as it pleases. With x86 decoding done in software, Intel can change the hardware at will.'



    It took a lot of guts for Apple and Steve Jobs to switch to Intel. What sweet irony if, perhaps one day, Intel processors are viewed as the natural processors for OS X, providing the highest possible performance, while Windows runs in emulation (or at least virtualization).



    I don't think Apple swapped to Intel to take on the x86 architecture. Time will tell.
     0Likes 0Dislikes 0Informatives