970GX and low power 970s for PowerBooks


Comments

  • Reply 41 of 43
    Quote:

    Originally posted by wizard69

    I'm wondering how many of you consider Transmeta's approach and hardware to be a success? Really!



    I just mentioned them as a point of comparison, not to judge or validate it in any way.



    Quote:

    Transmeta's processors are really VLIW machines, not SIMD machines, anyway, but that to me is not the point. I see Transmeta's greatest failing to be building a processor that cannot be run in a native mode. There may very well be an approach that allows the processor itself to translate a PPC instruction stream on the fly into VLIW instructions, but it just seems to be a complete waste of effort. Why not produce a VLIW processor that interprets legacy code yet can run native VLIW code unrestricted?



    Internally, most of the latest processors are a form of VLIW. The source instruction stream is essentially the packed form, and it is expanded internally and dynamically (that it is dynamic is the important point). Native VLIW processors typically have issues with code size and with the limitations of compile-time scheduling. Look at Intel's EPIC architecture, for example.
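    To make the compile-time-scheduling point concrete, here's a toy C sketch (the function and its names are mine, purely illustrative) of the kind of code a static VLIW schedule handles badly but dynamic expansion handles fine:

        #include <stddef.h>

        /* Toy example: the latency of each load depends on whether the data
         * happens to be in cache, which is unknowable at compile time. A
         * static VLIW schedule has to assume some fixed latency when it packs
         * the dependent add into a bundle; a dynamically scheduled core just
         * issues the add whenever the load actually completes. */
        long sum_list(const long *data, const size_t *next, size_t start, size_t n)
        {
            long sum = 0;
            size_t i = start;
            while (n--) {
                sum += data[i]; /* load latency: 3 cycles or 300, who knows */
                i = next[i];    /* pointer chasing defeats static prefetching too */
            }
            return sum;
        }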



    Quote:

    An interesting thought just popped into my head - stand back. A VLIW engine would be a good replacement for a SIMD engine. That is, a VLIW engine should be able to emulate a SIMD instruction stream rather easily, maybe with little cost in hardware. A SIMD unit, however, could not emulate a VLIW unit with any sort of performance that one would want to write home about. Think about this: AltiVec2 is simply an expansion of the vector engine into a VLIW engine, with the old SIMD words emulated in hardware. This would vastly improve the utility of the "vector" engine while leaving the rest of the processor unencumbered with stuff that doesn't fit. OK to step back in place now!



    I don't think I buy this -- VLIW and SIMD are in some sense orthogonal. VLIW is about packing lots of behaviour into each instruction, whereas SIMD is about applying (the same) behaviour to lots of data. If you're sold on the advantages of VLIW then it makes sense to apply it to your vector engine, but otherwise why go there? Personally I'm not a fan of VLIW ISAs, but perhaps I just haven't seen one done well yet. Certainly Intel's attempt was far too much of a monster to be successful, but a design at a more reasonable scale might work. As you point out, however, Transmeta hasn't been particularly successful at it (at least I've never seen one of their machines, or heard of anyone buying one).
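    To spell out that orthogonality -- a minimal sketch, assuming GCC on a PowerPC with AltiVec enabled (-maltivec); the VLIW half can only live in a comment, since C has no way to express instruction bundles:

        #include <altivec.h>

        /* SIMD: one instruction applies the SAME operation to lots of data.
         * vec_add on vector float compiles to a single vaddfp, adding four
         * floats in parallel. */
        vector float simd_add(vector float a, vector float b)
        {
            return vec_add(a, b);
        }

        /* VLIW is the other axis: packing DIFFERENT operations into one
         * issue slot. Conceptually a bundle might carry
         *
         *     { fadd f1,f2,f3 ; add r4,r5,r6 ; lwz r7,0(r8) }
         *
         * -- three unrelated operations the compiler has proven independent,
         * issued in the same cycle. Whether any one of them is itself a
         * vector operation is a separate question entirely. */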
  • Reply 42 of 43
    Quote:

    Certainly Intel's attempt was far too much of a monster to be successful, but a design at a more reasonable scale might work. As you point out, however, Transmeta hasn't been particularly successful at it (at least I've never seen one of their machines, or heard of anyone buying one).



    Gotta ask: what about Intel's EPIC ISA was a monster? (I know, I know, it's Intel; what about them isn't a monster?)



    If they had the chance to remove all the barnacles of x86, why bulk up with EPIC?



    ... and how did IBM avoid this with their POWER stuff? In fact, given that much of the POWER ISA was designed ages ago, I suppose it was either a miracle of foresight or a miracle of circumstances which, by accident, enforced an effective philosophy.



    Or is the niche of the POWER ISA so orthogonal to the intended EPIC niche that blaming Intel for going large is a bad case of Monday-morning quarterbacking?



    OT
  • Reply 43 of 43
    Quote:

    Originally posted by OverToasty

    Gotta ask: what about Intel's EPIC ISA was a monster? (I know, I know, it's Intel; what about them isn't a monster?)



    If they had the chance to remove all the barnacles of x86, why bulk up with EPIC?




    EPIC turned out to be extremely complex, and the details of this complexity were exposed to the compiler. The complexity of the EPIC architecture is such that it's not at all clear that it could ever scale down, and the nature of the details exposed is such that it's not clear how far it can be scaled up without breaking compatibility. Unfortunately, the analysis that compilers can do is limited to things which are static -- the code is emitted before execution, after all. And the complexity of generating efficient code for this ISA makes the implementation of JITs questionable.
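    A classic instance of that static-analysis wall, sketched in C (the function is mine, purely illustrative):

        /* May dst and src alias? If the compiler can't prove they don't, it
         * can't hoist later loads of src above earlier stores to dst, so an
         * EPIC-style compiler must emit either a conservative schedule or
         * speculative loads plus recovery code -- which is where much of
         * IA-64's exposed complexity lives. An out-of-order core simply
         * disambiguates the two addresses at run time. */
        void scale(float *dst, const float *src, float k, int n)
        {
            int i;
            for (i = 0; i < n; i++)
                dst[i] = src[i] * k; /* a store to dst may feed a later src load */
        }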



    Why? Interesting question, but I've heard it suggested that Intel chose to do this in order to make it impossible for anybody else to design and build compatible processors.



    Quote:

    ... and how did IBM avoid this with their POWER stuff? In fact, given that much of the POWER ISA was designed ages ago, I suppose it was either a miracle of foresight or a miracle of circumstances which, by accident, enforced an effective philosophy.



    Not at all -- IBM just chose to continue pursuing an existing ISA, much like Intel and AMD continue to pursue the x86 ISA. I like the PowerPC ISA, but there is nothing magical about it.



    Quote:



    Or is the niche of the POWER ISA so orthogonal to the intended EPIC niche that blaming Intel for going large is a bad case of Monday-morning quarterbacking?




    They are not orthogonal; I just think Intel made a bad call. Back in the early to mid-'90s, when EPIC was being designed, out-of-order execution (OoOE) processors hadn't arrived yet, and so the advantages of that approach hadn't been demonstrated. Intel was at the height of its dominance (and arrogance), and they wanted to establish The Next Architecture in such a way as to ensure their place as the sole provider of all processors. Things change; some plans succeed and some fail.