CONFIRMED: G5 enters volume production!


Comments

  • Reply 81 of 239
    [quote]Originally posted by Programmer:

    <strong>



    I'm curious about what the supposed "Apple Pi" extensions could be... HyperTransport, perhaps? More specialized graphics instructions for Quartz? Fancy DMA features for use with the on-chip memory controller? Hyper-threading? Steve's favourite recipe for apple pie?</strong><hr></blockquote>



    ... I suspect it's as much any one of those things, as it was a conveniently contrived excuse to run screaming from Moto as fast as possible.
  • Reply 82 of 239
    moki Posts: 551member
    [quote]Originally posted by Amorph:

    <strong>



    That depends on how Apple decides to implement them. If they understand a subset of the AltiVec instruction set, then Apple could probably include logic to reroute those instructions on the fly, and forward anything that they can't handle across the bus. It wouldn't be easy (and, to be honest, I'm not sure if it's even practicable) but it certainly would be transparent.</strong><hr></blockquote>



    No, it would need to be custom-coded -- and in reality, only a very few applications would likely end up using them. We're talking about very lightweight DSPs here. However, for certain very specific applications, they'll enable some very cool stuff to happen.



    [ 06-11-2002: Message edited by: moki ]
  • Reply 83 of 239
    [quote]Originally posted by moki:

    <strong>



    What would seem most likely for the Pro line at MacWorld would be a DDR machine with USB 2, 800 Mbps FireWire, and a fairly nifty feature (which will be used only by a very few engineers): mini DSPs sitting on the memory controller, allowing things like MPEG4 to be done with close to zero overhead.

    </strong><hr></blockquote>



    Hmm. Given Apple's focus on owning the DV market outright, this would make sense. Makes me think that a mini-DSP would be labeled as Velocity Engine Pro, or some damn thing, clearing the way for Altivec to get tossed and possibly Moto along with it.



    Apple should be in a position to take Altivec away from Mot (Apple really designed it, IIRC), and while IBM clearly couldn't care less about putting it in their chips, nVidia might like to take a crack at putting it in their stuff. By breaking this apart, Apple would gain a lot of flexibility in their hardware path and would possibly get a new partner in nVidia.



    Also, by yanking out the Altivec, Apple can put more traditional chips into more traditional hardware (xServe doesn't need Altivec for many of its markets) for less, and make some weirder stuff for the pro users who would really use it.



    There really are only a few developers that use Altivec - mostly Adobe and Apple and some of the engineering apps (BLAST) - but the stuff that would really benefit from this is pretty high-end, and therefore easier to get ported to new hardware.



    All in all, it sounds like a reasonable plan. It would also explain some of the odd G5 rumors. The G5 really could be done, but Apple could hardly ship an Altivec-lacking G5 box unless the DSP hardware and mobo were solid. After all, the Photoshop bake-off against the G4 would backfire without the DSP support. It certainly makes it clearer why Apple would be buying up so many high-end DV apps and de-committing from other platforms: Apple could ensure that the new DSP stuff is built into these apps -- and if the P4 version of an app runs at 1/5 the speed, why ship it? Unlike some others, I don't think Apple is interested in shuttering itself out of the x86 market just for spite. I think if Apple drops a profitable x86 product, it'll be for good reason.



    So the G5 could essentially be done, and if done by IBM, you certainly wouldn't see any evidence on Mot's website. It would also explain any rumors that Apple killed the G5 (Mot's proposed version). It could really be ready for volume production. It could just be that Apple hasn't gotten the mobo and the developers in line to release it, and a bad rollout would give the chip a bad reputation, which Apple needs to avoid.



    Ok, there's a lot of 'what ifs' in there, but nothing that's too much of a stretch if Moki is hinting us the right way.
  • Reply 84 of 239
    naepstn Posts: 78member
    [quote]Originally posted by moki:

    <strong>

    I am not under NDA for any of this stuff, and indeed, it is a mixture of water cooler talk and speculation. But still, you know people here and there, you can put the pieces together. Clearly there is no way in hell I'd state anything that could affect any NDAs with anyone.

    </strong><hr></blockquote>



    So, what did Apple say about a Java 1.4 timeframe at WWDC?



    I need it badly!!!
  • Reply 85 of 239
    moki Posts: 551member
    [quote]Originally posted by johnsonwax:

    <strong>All in all, it sounds like a reasonable plan. It would also explain some of the odd G5 rumors. The G5 really could be done, but Apple could hardly ship an Altivec-lacking G5 box unless the DSP hardware and mobo were solid. After all, the Photoshop bake-off against the G4 would backfire without the DSP support. It certainly makes it clearer why Apple would be buying up so many high-end DV apps and de-committing from other platforms: Apple could ensure that the new DSP stuff is built into these apps -- and if the P4 version of an app runs at 1/5 the speed, why ship it? Unlike some others, I don't think Apple is interested in shuttering itself out of the x86 market just for spite. I think if Apple drops a profitable x86 product, it'll be for good reason.

    </strong><hr></blockquote>



    While I don't know the details of exactly what these DSPs will be capable of, I do know that they are not even close to being on par with general-purpose DSPs or what AltiVec is capable of.



    That isn't what they are there for -- the idea is that as long as you have a memory controller that has to sit between main memory and your processor, why not give it some smarts so it can apply various algorithms to data as it is being shuffled to and from the processor.



    [ 06-11-2002: Message edited by: moki ]
  • Reply 86 of 239
    Programmer Posts: 3,458member
    [quote]Originally posted by Amorph:

    <strong>That depends on how Apple decides to implement them. If they understand a subset of the AltiVec instruction set, then Apple could probably include logic to reroute those instructions on the fly, and forward anything that they can't handle across the bus. It wouldn't be easy (and, to be honest, I'm not sure if it's even practicable) but it certainly would be transparent.</strong><hr></blockquote>





    It really seems that there is a widespread and major misconception about what AltiVec is and how it works. It is simply a set of registers and instructions in the processor, in addition to the integer & floating point registers and instructions. The day of having a separate floating point unit is long gone, and the same is true of having a separate vector unit. They are just far too tightly coupled to do that, not to mention that your memory controller runs at 200-300 MHz.



    Putting this functionality into the memory controller is just a desperate bid to get more out of the DDR without improving the G4's front side bus. It will not replace AltiVec, and AltiVec should not go away -- all other processors have vector units, and IBM is adding vector units. Even if most programmers don't directly write AltiVec code, they implicitly take advantage of it because large amounts of system code do use it (OpenGL, QuickTime, network stack, and even just the basic copy memory routine). Apple doesn't own (and didn't design) AltiVec, but they probably have the rights to allow IBM to build a new SIMD implementation which is compatible with it.



    So Moki has clarified... he expects to see an Xserve-like machine with a few nifty improvements and a clock rate bump. Well I won't be surprised by this at all, and with Quartz Extreme it will be a significantly faster machine even if it doesn't benchmark that well.
  • Reply 87 of 239
    And nVidia will be onboard. My friend is a stockholder and says that ~September nVidia is coming out with something cool with Apple.



    Guess we will see...exciting times these are...
  • Reply 88 of 239
    Amorph Posts: 7,112member
    [quote]Originally posted by Programmer:

    <strong>It really seems that there is a widespread and major misconception about what AltiVec is and how it works. It is simply a set of registers and instructions in the processor, in addition to the integer & floating point registers and instructions. The day of having a separate floating point unit is long gone, and the same is true of having a separate vector unit. They are just far too tightly coupled to do that, not to mention that your memory controller runs at 200-300 MHz.</strong><hr></blockquote>



    Eh?



    All I said was that if the controller had a few simple DSP instructions on board to do transformations on the data coming from memory, it might be possible to sniff out those instructions and divert them, if the DSPs understood a subset of the AltiVec instruction set. Why a subset? Because they're not going to have the full capability of AltiVec, but if they recognize the same instructions, no custom instructions have to be generated, either by the programmer or by the compiler, and the additional hardware will be transparent.



    I was not talking about moving AltiVec to the memory controller altogether. As you point out, that doesn't make any sense.



    Of course, I was thinking out loud. One major disadvantage to my approach - again, assuming that it is practicable in the first place - is that it would not be easy to write code specifically to take advantage of the controller's DSP capabilities (you'd have to write your AltiVec code knowing the controller's instruction routing logic), and so they'd seldom be used at anything like peak efficiency.



    More tellingly, the controller could not be intelligent enough to discern a situation where a block of instructions would be more efficiently executed entirely by the processor.



    [quote]<strong>Putting this functionality into the memory controller is just a desperate bid to get more out of the DDR without improving the G4's front side bus.</strong><hr></blockquote>



    Actually, what it reminds me of is IBM's Channel architecture, where the busses themselves could be programmed to perform instructions on the data sent across them. If it's desperate, then IBM is guilty of tremendous amounts of desperation in designing their high end architectures. It's understandable: Bandwidth is crucial. On a personal computer platform, it's at a premium.



    [quote]<strong>It will not replace AltiVec, and AltiVec should not go away</strong><hr></blockquote>



    Obviously not. AltiVec is capable of accelerating incredibly complex calculations, and by its design and placement this hypothetical DSP would be meant to perform (relatively) simple transformations on streaming data.



    [quote]<strong>Apple doesn't own (and didn't design) AltiVec, but they probably have the rights to allow IBM to build a new SIMD implementation which is compatible with it.</strong><hr></blockquote>



    Are you sure they didn't at least have a hand in its design? I've read that they had an important, and possibly central, role in developing the instruction set, and in pushing for an onboard vector unit in the first place. The implementation in silicon is obviously all Moto, however.



    [quote]<strong>So Moki has clarified... he expects to see an Xserve-like machine with a few nifty improvements and a clock rate bump. Well I won't be surprised by this at all, and with Quartz Extreme it will be a significantly faster machine even if it doesn't benchmark that well.</strong><hr></blockquote>



    That would work for me. As nifty as a dedicated DSP on the memory controller might be in theory, it has a good chance of getting orphaned, like the DSP in the old AV series. Or IBM's ill-starred MicroChannel architecture.



    [ 06-11-2002: Message edited by: Amorph ]
  • Reply 89 of 239
    johnsonwax Posts: 462member
    [quote]Originally posted by Programmer:

    <strong>

    Even if most programmers don't directly write AltiVec code, they implicitly take advantage of it because large amounts of system code do use it (OpenGL, QuickTime, network stack, and even just the basic copy memory routine). </strong>



    Right. Which is why a replacement for Altivec isn't totally out of the question. So long as Apple is able to replicate the performance boost in the system code, most (but not all) developers wouldn't care. The ones that would care are probably few enough in number that Apple can throw engineers at them to help with moving their code, and large enough that moving the code would make financial sense if the performance benefits were really there.



    That said, based on the above comments, it doesn't sound like the DSPs are there for that purpose. But I think if we had an IBM G5 with a different SIMD engine, it would be a manageable transition for Apple, all things considered.



    <strong>Apple doesn't own (and didn't design) AltiVec, but they probably have the rights to allow IBM to build a new SIMD implementation which is compatible with it.

    </strong>



    Actually, I'm pretty sure most of Altivec's design was driven by Apple. That's not to say that Mot didn't have a hand in it as well and doesn't have a contract preventing Apple from taking it to other vendors - certainly Mot benefits from it in their other products - but I'm quite sure Apple had a substantial role in its development. It only makes sense that they can take Altivec with them in some way. The problem IBM seemed to have had as much to do with the fact that Altivec didn't line up with IBM's plans for the G4 as it did with it being an Apple/Mot technology.

    <hr></blockquote>
  • Reply 90 of 239
    stoo Posts: 1,490member
    [quote]That isn't what they are there for -- the idea is that as long as you have a memory controller that has to sit between main memory and your processor, why not give it some smarts so it can apply various algorithms to data as it is being shuffled to and from the processor.<hr></blockquote>



    Latency?
  • Reply 91 of 239
    kidred Posts: 2,402member
    [quote]Originally posted by johnsonwax:

    <strong>





    All in all, sounds like a reasonable plan. It would also explain some of the odd G5 rumors.



    So the G5 could essentially be done, and if done by IBM, you certainly wouldn't see any evidence on Mot's website. It would also explain any rumors that Apple killed the G5 (Mot's proposed version). It could really be ready for volume production. It could just be that Apple hasn't gotten the mobo and the developers in line to release it, and a bad rollout would give the chip a bad reputation, which Apple needs to avoid.



    Ok, there's a lot of 'what ifs' in there, but nothing that's too much of a stretch if Moki is hinting us the right way.</strong><hr></blockquote>



    Also, add the fact that MOTO removed the G5 from its road map. Maybe that explains why. Also, IBM announced along with the Sahara that they had an AltiVec-like unit (or someone reported it). All this definitely sounds like IBM is on board for Apple's next chip. I always wondered why IBM would stay on board just to make some G3s for the iBook. It's only a matter of time before the iBook goes G4, and then what for IBM? It won't matter if IBM will be making more, or all, of Apple's chips.
  • Reply 92 of 239
    davegee Posts: 2,765member
    Getting back to Apple 3.1415 (pi) (pie) whatever...



    Remember back pre-MWSF when we had those cryptic messages from Codename?



    Well here was one of his messages:



    "A little bird told that Trinity shall return, after eating pie, more voluminous than a dolphin..."



    No connection, I'm sure, but since some of the stuff that Codename posted is starting to come true ('Rosetta', for one), it got me thinking... and now a new reference to 'pie', when that was one item I never could find a connection to...



    Oh well... as you were..



    Dave
  • Reply 93 of 239
    Programmer Posts: 3,458member
    [quote]<strong>

    All I said was that if the controller had a few simple DSP instructions on board to do transformations on the data coming from memory, it might be possible to sniff out those instructions and divert them, if the DSPs understood a subset of the AltiVec instruction set. Why a subset? Because they're not going to have the full capability of AltiVec, but if they recognize the same instructions, no custom instructions have to be generated, either by the programmer or by the compiler, and the additional hardware will be transparent.

    </strong><hr></blockquote>



    Heh, you know not what you ask. The instruction stream is handled by the processor -- it reads instructions from memory according to its program counter(s) and decodes those instructions, dispatching them to the appropriate execution unit. On the 7455 most execute in 7 cycles with a throughput of 1 per cycle per execution unit (up to 4 maximum). When the instructions are done, a retirement unit orders them to ensure that they write back their results in the correct order. Several of the instructions use values to/from the integer and condition code registers. Memory loads have to go through the caching system. Several instructions exist just to control the caching system. All of this is used by every AltiVec-using program, so a subset would not be much of a subset. Trying to move any of it off-chip onto a substantially slower piece of silicon would be prohibitively expensive. A software AltiVec emulator would probably be faster (and no, it would not be at all fast). Either way it would completely kill the entire point of having the AltiVec instructions.



    <strong> [quote]

    Actually, what it reminds me of is IBM's Channel architecture, where the busses themselves could be programmed to perform instructions on the data sent across them. If it's desperate, then IBM is guilty of tremendous amounts of desperation in designing their high end architectures. It's understandable: Bandwidth is crucial. On a personal computer platform, it's at a premium.

    </strong><hr></blockquote>



    I'm not familiar with their architecture, but I suspect it is quite different than a "little DSP in the memory controller". There are very cool things that can be done by auxiliary processors, and it would be cool if Apple was actually doing something like this... but I doubt it.



    The bus you're referring to is MicroChannel? I think all that did was apply logical operations to the data crossing the bus... super-specialized and not really worth the effort. Now, if the memory controller and CPU would do some kind of data compression before putting the data on the bus, that would effectively increase memory bandwidth and would definitely be "worth it". That's not likely to happen on MPX.



    <strong> [quote]

    Are you sure they didn't at least have a hand in its design? I've read that they had an important, and possibly central, role in developing the instruction set, and in pushing for an onboard vector unit in the first place. The implementation in silicon is obviously all Moto, however.

    </strong><hr></blockquote>



    johnsonwax said "(Apple really designed it, IIRC)". This is not correct -- it was a collaborative design effort to create the instruction set, with Motorola doing the hardware implementation.



    <strong> [quote]

    That would work for me. As nifty as a dedicated DSP on the memory controller might be in theory, it has a good chance of getting orphaned, like the DSP in the old AV series. Or IBM's ill-starred MicroChannel architecture.

    </strong><hr></blockquote>



    Exactly... and it means Apple (and maybe one or two 3rd parties) will spend programming resources coding a few things up for it, whereas they could instead be doing cool things that will work on all AltiVec-equipped PowerPCs going forward.



    Ah well, hopefully they at least go to the 166 MHz MPX, DDR333.
  • Reply 94 of 239
    haderach Posts: 32member
    This is all very exciting. I've gathered a lot of information about Motorola G5 processors so far, but until now no one I know has been able to confirm that one of these chips will be used by Apple. On the other hand, I haven't (yet) found anybody who was able to confirm that Apple really dropped the G5 project.



    The idea that IBM will design a desktop processor for Apple is very interesting. IBM has recently announced that the Power4 successors (Power5 and Power6) will be designed to be much cheaper and cooler, and they will also use new instructions for complex tasks like managing stacks - a technology called "Fast Path". Maybe "Apple Pi" and "Fast Path" are the same thing.



    Unfortunately we will have to wait until 2004 for the Power5 - I wonder if IBM will be able to ship a powerful desktop CPU before that date.



    [ 06-11-2002: Message edited by: haderach ]
  • Reply 95 of 239
    Amorph Posts: 7,112member
    [quote]

    All I said was that if the controller had a few simple DSP instructions on board to do transformations on the data coming from memory, it might be possible to sniff out those instructions and divert them, if the DSPs understood a subset of the AltiVec instruction set. Why a subset? Because they're not going to have the full capability of AltiVec, but if they recognize the same instructions, no custom instructions have to be generated, either by the programmer or by the compiler, and the additional hardware will be transparent.



    <strong>Heh, you know not what you ask.</strong><hr></blockquote>



    No, I do, sort of. I knew it would require an instruction decoder in the memory controller. I was trying to think of ways to make the little DSPs transparent.



    At this point, I've come to the conclusion that there are too many reasons why it couldn't happen. But it was a fun thought experiment.





    [quote]<strong>I'm not familiar with their architecture, but I suspect it is quite different than a "little DSP in the memory controller". There are very cool things that can be done by auxilary processors, and it would be cool if Apple was actually doing something like this... but I doubt it.



    The bus you're refering to is MicroChannel?</strong><hr></blockquote>



    No, it was called the Channel architecture, and as far as I can recall it predates the personal computer revolution. MicroChannel was a scaled-down version for the PC which was utterly doomed when that became a commodity market. I don't know the exact details, but several of the people I work with programmed Channel architectures back in the day, and it was capable of some powerful stuff. Not just Boolean logic. A (very simple) DSP would be in line with what it could do.



    [quote]<strong>Now, if the memory controller and CPU would do some kind of data compression before putting the data on the bus, that would effectively increase memory bandwidth and would definitely be "worth it". That's not likely to happen on MPX.</strong><hr></blockquote>



    No, but it would be nice. The biggest disadvantage I can think of is increased latency. That, and performance would vary significantly based on how well the data compressed at any given moment, which might yield some odd results.



    [quote]

    That would work for me. As nifty as a dedicated DSP on the memory controller might be in theory, it has a good chance of getting orphaned, like the DSP in the old AV series. Or IBM's ill-starred MicroChannel architecture.



    <strong>Exactly... and it means Apple (and maybe one or two 3rd parties) will spend programming resources code a few things up for it, whereas they could instead be doing cool things that will work on all AltiVec-equipped PowerPCs going forward.</strong><hr></blockquote>



    Apple could get around that by putting them in every single memory controller they shipped, across all models. Then it would be something like Quartz Extreme, that kicked in if you had the proper hardware, and was translated and handled by the CPU if it wasn't there. That way Apple would work around the problem that killed the AV DSP (shipping in exactly two expensive models for a couple of years), and the one that killed MicroChannel (relevance, incompatibility in a commodity market).



    This is not the first time something like this has come up. The "Raycer chip," "QuickTime-on-a-chip," and various rumors about dedicated MPEG acceleration have all pointed this way for a couple of years now. That could mean that where there's smoke, there's fire; or it could mean that where there appears to be smoke, there's a lot of hot air. I have my reservations about auxiliary processors, though. They've done well in the embedded market, and in big iron, but they have a poor track record in personal computers.



    [quote]<strong>Ah well, hopefully they at least go to the 166 MHz MPX, DDR333.</strong><hr></blockquote>



    I wouldn't complain.



  • Reply 96 of 239
    [quote] Remember back pre-MWSF when we had those cryptic messages from Codename?



    Well here was one of his messages:



    "A little bird told that Trinity shall return, after eating pie, more voluminous than a dolphin..."



    No connection, I'm sure, but since some of the stuff that Codename posted is starting to come true ('Rosetta', for one), it got me thinking... and now a new reference to 'pie', when that was one item I never could find a connection to...



    <hr></blockquote>



    Hmm ... forgot all about the Codename messages ... not sure if this is any help, but Dolphin-IC (www.dolphin-ic.com) seems to have some licensing agreements with IBM for fab technology, and they're involved with Hypertransport.



    Also, IBM are the chip suppliers and one of the designers of the Nintendo GameCube architecture, which was named 'Dolphin' in its prototype stages.



    Now I'd like to know what 'Trinity' stands for.
  • Reply 97 of 239
    Amorph Posts: 7,112member
    [quote]Originally posted by audiopollution:

    <strong>Now I'd like to know what 'Trinity' stands for.</strong><hr></blockquote>



    Trinity is the code name for the Cube.
  • Reply 98 of 239
    [quote] Trinity is the code name for the Cube.



    <hr></blockquote>



    Okay, so I wonder if this was not just a veiled reference to the 'sunflower' iMac, then.



    Ate some pie. (round)

    More voluminous than a Dolphin. (bigger than gamecube)



    Nah. I'll just shut up now.
  • Reply 99 of 239
    marcsiry Posts: 27member
    [quote] "A little bird told that Trinity shall return, after eating pie, more voluminous than a dolphin<hr></blockquote>



    My read:



    "A little bird [unknown] told that Trinity [the G3 processor, IBM's specialty] shall return, after eating pie [Apple Pi, the new interconnect strategy], more voluminous [greater internal bandwidth] than a dolphin [than the PPC chip in Gamecube, which is known for its massive internal bandwidth].
  • Reply 100 of 239
    flounder Posts: 2,674member
    Hmmm, trinity was the code name for the cube.

    Someone in this thread (or another one, I'm too lazy to look) said they were told the next PowerMac would look like two Cubes stacked on top of one another. Sounds more voluminous than a dolphin to me.