AMD's Hammer gets delayed again.....

Posted in Future Apple Hardware, edited January 2014
...by at least one quarter. The delay also applies to other Athlon chips.



http://news.com.com/2100-1001-957757.html



Now let's see if Intel is going to do the same.

Comments

  • Reply 1 of 26
    who cares, it's not like a delay will make any difference in the already huge gap between intel/amd and apple.
  • Reply 2 of 26
    [quote]Originally posted by detah:
    who cares, it's not like a delay will make any difference in the already huge gap between intel/amd and apple.[/quote]



  • Reply 3 of 26
    As if the G5 hasn't been delayed a few times....



    More like, Apple's future is on delay while Apple does hard time in a Federal Pound-Me-In-The-Ass Prison.
  • Reply 4 of 26
    eugene · Posts: 8,254 · member
    Intel looks to be right on schedule with their chips. Over the last few months I've grown really weary of AMD in general. IA-64 is going to stomp all over x86-64.
  • Reply 5 of 26
    [quote]Originally posted by Junkyard Dawg:
    More like, Apple's future is on delay while Apple does hard time in a Federal Pound-Me-In-The-Ass Prison.[/quote]



    What is that supposed to mean?
  • Reply 6 of 26
    Why would AMD do this???



    I think they could release it at 500 MHz, then realize they can't deliver and drop it down to 450 for a while. Then make a big deal about going back to 500 and stay there for a year. That would keep Intel away and gain market share.
  • Reply 7 of 26
    [quote]Originally posted by smithjoel:
    Why would AMD do this???

    I think they could release it at 500 MHz, then realize they can't deliver and drop it down to 450 for a while. Then make a big deal about going back to 500 and stay there for a year. That would keep Intel away and gain market share.[/quote]





    Humor is not your strong point, kid.
  • Reply 8 of 26
    [quote]Originally posted by Junkyard Dawg:
    More like, Apple's future is on delay while Apple does hard time in a Federal Pound-Me-In-The-Ass Prison.[/quote]





    DUDE! Turn on the TV! It's the breast exams!
  • Reply 9 of 26
    Hey dude, check out channel nine! Check out this chick, man!



  • Reply 10 of 26
    xype · Posts: 672 · member
    [quote]Originally posted by Eugene:
    IA-64 is going to stomp all over x86-64.[/quote]



    Eh? What makes you think that?
  • Reply 11 of 26
    barto · Posts: 2,246 · member
    [quote]Originally posted by xype:
    Eh? What makes you think that?[/quote]



    Because x86-64 (x86? Doesn't it have an FPU?) adds some more registers to a bloated architecture.



    Modern ISAs like PPC and IA-64 are designed so that as much of the CPU as possible can be devoted to actually getting the job done.



    Bloated architectures mean many transistors are wasted translating crappy code into code that can actually be processed.



    Witness the Athlon and Pentium Pro onward. They are RISC chips which translate crappy x87 into clean code. Modern CPUs would be too complex to build if x87 was executed natively.



    Basically, for the same cost IA-64 chips should be faster than x86-64 chips.



    Barto
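
    A rough, hypothetical sketch in C of the "cracking" Barto describes, where a decoder turns one register-memory x86 instruction into a pair of simpler, RISC-like micro-ops. The opcode names and struct layout here are invented for illustration and do not correspond to any real CPU's internal format.

    #include <stdio.h>

    /* Toy micro-op format, purely illustrative. */
    typedef enum { UOP_LOAD, UOP_ADD } uop_kind;

    typedef struct {
        uop_kind kind;
        int dst;     /* destination register number */
        int src;     /* source register number, or a memory address for UOP_LOAD */
    } uop;

    /* "add eax, [addr]" cracked into: load tmp <- [addr]; add eax <- eax + tmp */
    static int crack_add_reg_mem(int dst_reg, int addr, uop out[2]) {
        const int tmp = 7;                        /* a scratch physical register */
        out[0] = (uop){ UOP_LOAD, tmp, addr };
        out[1] = (uop){ UOP_ADD, dst_reg, tmp };
        return 2;                                 /* number of micro-ops emitted */
    }

    int main(void) {
        uop ops[2];
        int n = crack_add_reg_mem(0 /* eax */, 0x1000, ops);
        for (int i = 0; i < n; i++)
            printf("uop %d: kind=%d dst=%d src=%d\n", i, ops[i].kind, ops[i].dst, ops[i].src);
        return 0;
    }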
  • Reply 12 of 26
    [quote]Originally posted by Barto:
    Modern ISAs like PPC and IA-64 are designed so that as much of the CPU as possible can be devoted to actually getting the job done.

    Bloated architectures mean many transistors are wasted translating crappy code into code that can actually be processed.

    Witness the Athlon and Pentium Pro onward. They are RISC chips which translate crappy x87 into clean code. Modern CPUs would be too complex to build if x87 was executed natively.

    Basically, for the same cost IA-64 chips should be faster than x86-64 chips.[/quote]



    You'd think that, wouldn't you? There are a couple of reasons why I don't agree with you and I won't try to predict which architecture will live longer.



    1) The percentage of the chip devoted to instruction translation is decreasing. The complexity of the decoder units is going up more slowly than the complexity of the actual execution core and surrounding support circuitry, so the fact that instructions are being translated matters less and less.



    2) The POWER4 actually does a similar kind of translation with the PowerPC ISA in order to get higher performance. Strange but true. It does better with the PPC ISA partially because of a superior ISA, but more because it's a bigger, better design intended for high-end markets.



    3) The IA-64 is a hideously complex ISA that relies on the compiler to do a tremendous amount of optimization work and deal with the instruction-level parallelism. Forget about any assembly language work (which still happens occasionally when you hit language limitations or when debugging compiler bugs... which are more common in more complex compilers). The ISA design forces this complexity on the chip designers, and if you are a conspiracy theorist you might think that Intel designed it this way deliberately so that no other chip-making company could (or would want to) clone IA-64. Personally I feel strongly that IA-64 is a horrible mistake, and recently I've seen things that make me think that even Intel agrees! It is certainly much more limiting than the PPC ISA, for example, which scales very nicely from the very low level / low power embedded processors all the way up to (and beyond) the POWER4. Still, if there were any company I'd want pouring money into an excessively expensive CPU architecture, it would be Intel.



    4) One of the biggest problems with x86 (the x87 was just the FPU) is that it has so few registers that code loads and stores from memory very frequently. Internally the RISC cores they are using can't do much about that since all sorts of side-effects can happen once the data leaves the core, so they end up having to load/store just like the instructions tell them to. The x86-64 doubles the register count to 16, IIRC, which improves the situation significantly. There are still all sorts of other problems with it, but this helps. I'd still rather have a PPC -- I hate debugging x86 assembly code, and writing it is even worse.
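
    As a hedged illustration of point 4 (the function and variable names here are invented, not from any real code): with this many values live across a loop, a compiler targeting 32-bit x86's roughly 8 general-purpose registers has to spill some of them to the stack and reload them every iteration, while one targeting x86-64's 16 or PPC's 32 registers can keep them all in registers.

    #include <stdio.h>

    /* Illustrative only: every argument stays live for the whole loop, so
     * the number of hardware registers directly decides how much of this
     * spills to memory. */
    static long mix(long a, long b, long c, long d, long e,
                    long f, long g, long h, long i, long j, int n) {
        long acc = 0;
        for (int k = 0; k < n; k++)
            acc += a * k + b - c + d * e - f + g * h + i - j;
        return acc;
    }

    int main(void) {
        printf("%ld\n", mix(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 1000));
        return 0;
    }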
  • Reply 13 of 26
    "Still, if there was any company I'd want pouring into a excessively expensive CPU architecture, it would be Intel."



    Programmer has teeth..!



    Lemon Bon Bon
  • Reply 14 of 26
    "Once upon a time there was a computer company which had built their entire OS around a single CPU instruction set. The CPU was fine and fancy at the time and continued to advance at a reasonable rate beginning with 16-bit 8MHz and trundling *all* the way up to 32-bit 110MHz w/ a built in FPU!! However, the generation of CPUs in this line were running out of growing room and their best and brightest incarnation couldn't stand up to the "other guys"... What to do what to do?!



    This computer company had a *very* clever plan. They would move to a whole new CPU that was totally different from everything they had ever done before, one that would be fast and efficient! But... what about all the software that was written for the old crusty CPUs? Ahhh... this company was clever indeed. They wrote a micro-emulator that ran in the L1 cache of the new CPU that emulated the instruction set of the old one! Sure, it hurt initial performance for the new CPU, but it allowed everyone to run all of their software with few problems, and all was well. As the new CPUs took hold, more and more vendors rewrote their apps to use the new CPU's instruction set, and when they did, they went vroom! And gradually, support for the old crusty instruction set was dropped.



    The company, obviously, is Apple. The crusty old CPU? 680X0. The new CPU? PPC.



    In my opinion, given that all modern CPUs have large full-speed L2 caches, there's no reason Apple couldn't do this again. It seems to me that it would be totally do-able for them to build a machine around *any* modern CPU and load a PPC emulator into the L2 cache at boot time. All our apps would run, and we'd get the raw speed associated with strapping a rocket engine onto a slug bug. As apps were ported to the new CPU's instruction set, the true speed advantages would come to light, and Apple would be saved from Mot's ineptitude.



    maybe no? maybe yes? Whatcha think?"



    Got that from Macrumors. There have been rumours of Apple checking AMD out. How would they make this work?



    (Kind of ties in with Apple taking bids for the G5 from various parties as rumoured by MOSR a while back...)



    I didn't realise you could emulate in this way. Would it be possible for Apple to repeat this trick but stash an emulator on Hammer cache?



    Anybody?



    Lemon Bon Bon
  • Reply 15 of 26
    [quote]Originally posted by Lemon Bon Bon:
    I didn't realise you could emulate in this way. Would it be possible for Apple to repeat this trick but stash an emulator on Hammer cache?

    Anybody?

    Lemon Bon Bon[/quote]



    Not really. The 68k emulator on PPC was neatly done, and it achieved considerable speed because of the much greater resources of the PPC (mainly the number of registers: you could allocate a PPC register for each 68k register and still have plenty left over for the emulator code itself, and the instruction sets also matched relatively well). Emulating a PPC on even x86-64 with its 16 registers, let alone x86-32 with its 8 registers, means the emulator would be forever spilling registers to memory, which would slow things down immensely. The PPC also uses mostly 3-register instructions (2 source and 1 result), whilst x86 uses 2-register instructions (one source register is overwritten with the result). Trying to emulate AltiVec would be appalling: even SSE2 code and registers don't come near matching AltiVec's power, AltiVec uses 4-register instructions, and it has a permute unit whose emulation would be incredibly slow.

    I doubt the emulator would get to one tenth the speed of the processor it was running on, and much less for AltiVec code.

    As a developer, if I spend a lot of time optimising my code for a processor, and using AltiVec to get the speed, and Apple did this to me, I would give up on the platform for good.



    michael
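
    For anyone wondering what this kind of emulation looks like at its simplest, here is a minimal, hypothetical fetch-decode-dispatch loop in C with toy opcodes; it is nothing like Apple's actual 68k emulator. Every guest instruction costs several host instructions plus memory traffic for the in-memory guest register file, which is why emulation runs well below native speed, and why a host with too few registers (as michael describes) makes it even worse.

    #include <stdint.h>
    #include <stdio.h>

    enum { OP_ADD, OP_LI, OP_HALT };                /* toy guest opcodes */

    typedef struct { uint8_t op, rd, ra, rb; int32_t imm; } insn;

    static int32_t regs[32];                        /* guest register file lives in memory */

    static void run(const insn *prog) {
        for (;;) {
            insn i = *prog++;                       /* fetch */
            switch (i.op) {                         /* decode + dispatch */
            case OP_LI:   regs[i.rd] = i.imm;                   break;
            case OP_ADD:  regs[i.rd] = regs[i.ra] + regs[i.rb]; break;
            case OP_HALT: return;
            }
        }
    }

    int main(void) {
        const insn prog[] = {
            { OP_LI,  1, 0, 0, 40 },                /* r1 = 40 */
            { OP_LI,  2, 0, 0,  2 },                /* r2 = 2  */
            { OP_ADD, 3, 1, 2,  0 },                /* r3 = r1 + r2, 3-operand form */
            { OP_HALT, 0, 0, 0, 0 },
        };
        run(prog);
        printf("r3 = %d\n", regs[3]);               /* prints "r3 = 42" */
        return 0;
    }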
  • Reply 16 of 26
    [quote]More like, Apple's future is on delay while Apple does hard time in a Federal Pound-Me-In-The-Ass Prison.[/quote]



    No, No man.

    Shit no man, I believe you'd get your ass kicked saying something like that, man.
  • Reply 17 of 26
    kuku · Posts: 254 · member
    This is basically an ego-deflating post for all those 'Look at Intel/AMD' posts.



    With the way our economy is now, things like this can happen a lot more often. Belts are tightened and future projections are being questioned a lot more.



    "It's apple's fault" is as dumb a response even more now then it was before. Shit happens and it doesn't single anyone out.



    And in a bad economy, don't expect much even from your "Look at how far company X is" comparisons. It's almost as annoying as a whiny mother who constantly compares you to someone else.



    There is a slowdown in the tech sector; anyone can tell you that. Some people can't face reality, whether they're praising or bashing.



    ~Kuku
  • Reply 18 of 26
    [quote]Originally posted by cinder:
    No, No man.

    Shit no man, I believe you'd get your ass kicked saying something like that, man.[/quote]



    No way! Why should I change my name, he's the one who sucks!!!



    [ 09-13-2002: Message edited by: Junkyard Dawg ]
  • Reply 19 of 26
    xype · Posts: 672 · member
    [quote]Originally posted by Barto:
    Basically, for the same cost IA-64 chips should be faster than x86-64 chips.[/quote]



    Basically, from what I know, x86-64 runs 32-bit code pretty well, which means _if_ people upgrade to a 64-bit CPU they don't need to run 32-bit code through much of an emulation process, which in turn means faster 32-bit code. And since most code today is 32-bit (and a chunk of it will likely stay so), I see the transition being less painful when taking the x86-64 route.



    So which technology is better is not the question; the question is which technology the customer will find easier to migrate to (think USB/FireWire).
  • Reply 20 of 26
    mr. me · Posts: 3,221 · member
    [quote]Originally posted by Eugene:
    Intel looks to be right on schedule with their chips. Over the last few months I've grown really weary of AMD in general. IA-64 is going to stomp all over x86-64.[/quote]

    On schedule?



    Which schedule?



    Are you talking about the schedule outlined when Intel and HP entered into their pact to develop IA-64? Wasn't that sometime back during the last millennium?