PPC 970 date?


Comments

  • Reply 81 of 344
amorph Posts: 7,112 member
Although the 970 can scale up past dual processors more efficiently than the G4, there still seem to be issues associated with taking it past configurations of two, just because each chip has its own bus, so the support logic gets complicated fast. It's possible, certainly; it's just a question of exactly how expensive Apple wants to get. They could enter the >$10K UNIX workstation market with a very compelling product if they chose to, but if they're sticking to their current price range then I don't think four- or eight-processor configurations will happen. I note that the traditional UNIX workstation market is dying, so they probably won't exceed their traditional price range by much.



    Apple will probably wait for multiple core chips, which will give them effective four, eight, etc. processor configurations with the approximate complexity of a dual processor motherboard. Given IBM's stated direction, and the rumored Son of Zilla kernel-level clustering support (plus FW800/Gb Ethernet/2Gb fibre channel + Rendezvous) that shouldn't be far off.



[ 02-07-2003: Message edited by: Amorph ]
  • Reply 82 of 344
ed m. Posts: 222 member
Amorph, I'd have to agree with AirSluf on this one... Marketing. Besides that, Apple will want to kill them with numbers... And if what you say is true, why even bother to develop a processor with outstanding SMP capabilities (and AltiVec) if all that hard work that went into the design will go unnoticed because it's not even being utilized? In other words, sticking with *only* 2 CPUs doesn't make any sense. And it's already been done to death for YEARS. It's not some *breakthrough*... Well, on the desktop it would be... but then again, that's been done on the Mac desktop already too...



    As I've stated before (and Programmer can back me up on this), SMP is the future. It's not going to be a giant, single CPU config like we are used to from Intel and AMD. There has already been enough work done on greater than 2 CPU configs over the years, that it's likely Apple could have already designed a really sweet way of implementing it. As a matter of fact, it would be brilliant on their part if they invested heavily in a way where adding more than 2 CPUs is as simple as adding another backplane -- similar to the way mainframes do it. I'm not sure it has to be *extremely* complex, and how do we know that Apple hasn't already done the work? If you remember, IBM stated that their workstations employing the 970 were meant to be released as a 4-way config right from the start and I doubt that these systems will be in the 10k range. Anyway, I think it would be an outstanding idea if Apple designed a config that can take them well into the future. Their initial expense might be high, but it will pay off as time goes on. Call it a "gambit" or "stratagem"... And you can bet that *if* they are going to use the 970 then it's likely that they've gotten some heavy-duty assistance from IBM. So, I think we will see more than dual-configs... It's just a matter of "when".



    --

    Ed M.
  • Reply 83 of 344
amorph Posts: 7,112 member
    [quote]Originally posted by Ed M.:

Amorph, I'd have to agree with AirSluf on this one... Marketing.[/quote]



    Marketing would be stuck selling really expensive machines in a dying market. That's the problem.



Even with HyperTransport and RapidIO, NUMA architectures are expensive to implement. The fact that marketing would like to sell 16-processor iMacs means nothing.



[quote]And if what you say is true, why even bother to develop a processor with outstanding SMP capabilities (and AltiVec) if all that hard work that went into the design will go unnoticed because it's not even being utilized?[/quote]



    First, the G4 was designed for up to 8 CPU SMP support. We never saw 8-way SMP PowerMacs.



    Second, don't forget IBM, who will cheerfully put as many 970s as they please into their pricey RS/6000 line. The 970 was not produced exclusively for Apple, and the real cost of a 16-way 970 system can fit comfortably into IBM's enormous price brackets.



[quote]As I've stated before (and Programmer can back me up on this), SMP is the future.[/quote]



    I don't question that. But if you look at IBM, they're saying Cell is the future, and Cell is multiple cores per die. This removes a lot of expensive traces, and a lot of expensive logic, and numerous bandwidth bottlenecks, from the MP equation. IBM is planning on dozens of cores per die down the line. So, as I said, Apple can scale up from two cores to dozens without significantly increasing the cost or complexity of their motherboard, which seems to me like a win-win situation. At that point, of course, you'll be able to buy RS/6000s (or whatever) that are effectively massively MP, with dozens of CPUs, each with dozens of cores, at something like the current price points (tens of thousands of dollars).



    So I'm definitely thinking in terms of SMP. However, this involves processors, not dies, so you can easily have multiple processors (cores) per die and have SMP. It's not kinda-sorta-MP, like hyperthreading is (although future IBM cores will have that, too!).



[quote]As a matter of fact, it would be brilliant on their part if they invested heavily in a way where adding more than 2 CPUs is as simple as adding another backplane -- similar to the way mainframes do it.[/quote]



There's a reason mainframes cost as much as they do. The mainframes essentially have all the traces and support logic in place to accommodate a maximum number of CPUs, so you're talking about worst-case cost for the motherboard right up front. The daughtercard wouldn't cost all that much; the problem would be the board. Also, you'd be looking at a great big board, and some really fancy work to provide a reliable, high-bandwidth connection of the CPU module to the board. None of this is impossible, of course, but as with hot-swappable PCI it's a matter of how much you want to pay for it.



    If HT and RapidIO really make interconnects cheap, we might see four chips. It depends on how soon, and in what quantity, IBM can provide multicore processors. Apple does want that $3500 price point back, I'm sure.



[quote]If you remember, IBM stated that their workstations employing the 970 were meant to be released as a 4-way config right from the start and I doubt that these systems will be in the 10k range.[/quote]



    Not all of them will. The pSeries UNIX servers start at $3500; the RS/6000 workstations start at about $8500 and go well over $10K.



    This is IBM, after all. They sell machines that run up into the millions of dollars. They consider $10K to be entry level.



[ 02-07-2003: Message edited by: Amorph ]
  • Reply 84 of 344
programmer Posts: 3,467 member
    Actually I'll side with the "no more than 2 processor chips" camp. Apple's market isn't really the right place for machines with more than 2 FSBs, multiple memory controllers, etc etc. That doesn't mean that they won't be increasingly SMP, however.



Apple is big on integration and that's probably where their future lies. Consider that a dual-core, 2-way hyperthreaded, twin-processor machine is effectively an 8-way SMP machine! At 50-million-odd transistors per 970 (+10% for hyperthreading) they would only need about 220 million transistors to get a quad-core chip, and they could build a 16-way SMP machine out of that with just two chips. Such a monster chip will probably need something smaller than a 0.09 micron process... but that is only ~5 years away. And Apple just isn't going to be able to build a memory subsystem that is priced for their market and still comes even remotely close to delivering the necessary bandwidth.
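For what it's worth, the arithmetic in that post checks out. A quick back-of-the-envelope sketch (all figures are the post's assumptions, not confirmed 970 specs):

```python
# Checking the transistor arithmetic from the post above.
# Assumed figures: ~50M transistors per 970 core, +10% for
# hyperthreading, four cores per die, two chips per machine.
per_core = 50_000_000
with_ht = per_core + per_core // 10   # ~55M with hyperthreading
quad_core = 4 * with_ht               # ~220M for a quad-core die
threads_per_chip = 4 * 2              # 4 cores x 2 hardware threads
smp_ways = 2 * threads_per_chip       # two such chips -> 16-way SMP
print(f"{quad_core // 1_000_000}M transistors, {smp_ways}-way SMP")
```

The same math gives the 8-way figure for today's hypothetical machine: 2 cores x 2 threads x 2 chips.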
  • Reply 85 of 344
barto Posts: 2,246 member
If Apple wants Macs with 4+ CPUs, why don't they just license Mac OS X Server to IBM? Obviously it would be a license restricted to workstations and servers.



    Barto
  • Reply 86 of 344
algol Posts: 833 member
I think Apple will leave the PowerMacs with no more than 2 CPUs, but their Xserve line may see quads. The Xserves already cost more, so why not add something like a quad to differentiate them more from the PowerMacs? I believe we will see single and dual PowerMacs, and we will see dual and quad Xserves. The Xserve will obviously get the 970 before any other computer Apple has; after all, it still hasn't been updated.



Wait a minute! You think Apple is saving the Xserve update for the 970? After all, if they're not, why aren't they adding the 1.42GHz to it? mmmmmm We shall see all in good time...
  • Reply 87 of 344
    [quote]Originally posted by Algol:

Wait a minute! You think Apple is saving the Xserve update for the 970? After all, if they're not, why aren't they adding the 1.42GHz to it? mmmmmm We shall see all in good time...[/quote]



    Actually, Apple is having heat problems with the 1.4GHz chips inside the 1U enclosure of the XServe.



They were hoping for 0.13 micron G4s, but those didn't show up on schedule [big surprise].



The XServes will be updated, but not for a bit. Consider them to be in a holding pattern. Also, true to their "humble foray into the server market," don't expect them to update the product more than once a year. This isn't a consumer product, nor is it a "prosumer" product. Real servers take many moons to debug, and admins don't update what works.



  • Reply 88 of 344
algol Posts: 833 member
I hate motorola! Bunch of Dip Shits! [oyvey] [cancer]



    Gosh I feel better now.
  • Reply 89 of 344
barto Posts: 2,246 member
    Ah, ye old "mad at moto" post. It is stress-relieving, isn't it?



    As far as the Xserve, it is not a cutting-edge performance platform. It will probably continue to use the G4+U2+Keylargo chipset for a while, as it is a proven architecture.



    It needs cool CPUs to operate in a 1U enclosure. Dual CPUs and quad HDDs are the Xserve's main selling points.



    Barto



[ 02-08-2003: Message edited by: Barto ]
  • Reply 90 of 344
nevyn Posts: 360 member
    [quote]Originally posted by Amorph:

I don't question that. But if you look at IBM, they're saying Cell is the future, and Cell is multiple cores per die. This removes a lot of expensive traces, and a lot of expensive logic, and numerous bandwidth bottlenecks, from the MP equation. IBM is planning on dozens of cores per die down the line.[/quote]



    But this is also an extension of the _other_ things IBM is doing, and has been doing on the high end. Not all of the comments about Cell explicitly state 'single die' for the whole widget.



    We've gotten used to a 'CPU core' having a variety of functional units (Multiple integer units, FPUs, VPUs etc) all sharing some resources (registers).



    At the next level is 'multiple cores', where they share L1/L2.



    At the next level is IBM's Power4 - where CPUs are combined into 4x CPU 'modules', where the L3 caches are shared (though not equally).



    Then at the next level multiple modules are combined to play chess. (er, or whatever it is they're doing lately.)



    Unit-Core-CPU-Module-Supercomputer.



    At each step along the way there is some parallelism, some SMP-isms. At each layer things 'look' like pretty standard SMP. A four module machine only has to worry about coherency between _four_ modules - the individual modules manage their own internal coherency.



Cell is just an extension of this where more of it can be wedged onto a single bit of silicon. But there'll still be some limit to how many 'cores' one die can hold -> when they talk about 256 'Cells' being devoted to a task, they're probably referring to more than one die. As far as I see, the 256 wouldn't be organized as a flat hierarchy. 4 modules of 4 CPUs of 4 cores of 4 units gets to the same 'Cell' count, but the organisational problems are drastically simplified.
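A quick sketch of the fan-out arithmetic above (the fan-out of 4 per level is the poster's assumption, not anything IBM has published):

```python
# The Unit-Core-CPU-Module hierarchy described above, with an
# assumed fan-out of 4 at every level, reaches the 256 figure:
levels = ["unit", "core", "CPU", "module"]
fanout = 4
total_cells = fanout ** len(levels)   # 4 * 4 * 4 * 4
print(total_cells)                    # 256

# The coherency payoff: a flat organisation makes every unit a
# peer of all the others, while the hierarchy means each level
# only tracks its own handful of children.
flat_peers = total_cells - 1          # 255 peers to keep coherent
hier_peers = fanout                   # only 4 per level
print(flat_peers, hier_peers)         # 255 4
```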



    The part where I get lost is:

Would Apple be _forced_ to go NUMA instead of a more normal SMP arrangement if the L3 cache was 128MB or so? The use of a _standard_ backplane like RIO or HT would seem to simplify multi-CPU/GPU configs drastically, so much so that the major hurdle for an 'XStation', or XServe-Blade, or XServe Xtreme or whatever would be well on its way to being solved.



Not coming to an iMac near you anytime soon, mind you.
  • Reply 91 of 344
    [quote]Originally posted by Nevyn:

The part where I get lost is:

Would Apple be _forced_ to go NUMA instead of a more normal SMP arrangement if the L3 cache was 128MB or so? The use of a _standard_ backplane like RIO or HT would seem to simplify multi-CPU/GPU configs drastically, so much so that the major hurdle for an 'XStation', or XServe-Blade, or XServe Xtreme or whatever would be well on its way to being solved.[/quote]



As soon as one memory controller isn't enough, you have a NUMA system. NUMA = Non-Uniform Memory Access, which basically means that a processor doesn't get to all memory bytes in the same way. Some from this controller, some from that controller... one of which is usually much faster. Since a single memory controller is limited by the speed of its memory, if you want to go faster than the fastest single controller you can build, you'll need more of them. At what point you do this depends on how bandwidth-hungry your processors are, how many interfaces they have, and how many of them you have.
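A toy model of what "non-uniform" means in practice (both latencies are invented purely for illustration, not measured figures):

```python
# Toy model of the NUMA point above: access time depends on which
# controller owns the data a processor is asking for.
LOCAL_NS = 100    # hypothetical latency to the near controller
REMOTE_NS = 300   # hypothetical latency to the far controller

def avg_latency_ns(local_fraction):
    """Average access time for a given fraction of local hits."""
    return local_fraction * LOCAL_NS + (1 - local_fraction) * REMOTE_NS

print(avg_latency_ns(1.0))   # all local -- looks like plain SMP/UMA
print(avg_latency_ns(0.5))   # half the accesses go to the far controller
```

This is why the OS's task-to-processor and page placement decisions matter so much on a NUMA box: the same workload runs at very different speeds depending on where its memory lands.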



    Once the L3 cache starts getting very large you have to wonder if it might be time to stop using it as a cache and start using it as local memory. The G4 can already sort of do this, but it is limited to ~4 MB and it is strictly private memory. If the memory were made public so that it could respond to external requests then you have the possibility of a NUMA system. Not terribly useful on the G4 since its FSB is slow, but if you imagine a machine with a fast connection, the ability to use much more local memory (possibly on-die), and a DMA engine so that it doesn't tie up the processor when accessed externally, then it becomes interesting. Now the OS virtual memory system can be used to move around larger pages between local memory pools, resulting in better bus utilization due to longer bursts. Memory can live attached to the processor or the chipset (either northbridge or a unified one). A lot of flexibility is possible, but the OS has to carry the burden of managing the memory and task-to-processor allocation. Something like RapidIO is then used as the basic communications fabric for moving data around. The nice thing about RapidIO is that it scales well from small systems to large multiprocessor ones.
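A toy model of the "longer bursts mean better bus utilization" point above (the overhead and bandwidth figures are made up for illustration only):

```python
# A fixed per-transfer setup cost is amortized over bigger transfers,
# so moving page-sized chunks between memory pools wastes less of the
# link than moving cache-line-sized ones.
OVERHEAD_NS = 500          # hypothetical setup cost per transfer
BYTES_PER_NS = 1.0         # hypothetical link speed (~1 GB/s)

def utilization(burst_bytes):
    """Fraction of peak bandwidth actually delivered for one burst."""
    transfer_ns = burst_bytes / BYTES_PER_NS
    return transfer_ns / (OVERHEAD_NS + transfer_ns)

print(utilization(64))     # a cache-line-sized burst wastes most of the bus
print(utilization(4096))   # a page-sized burst gets much closer to peak
```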
  • Reply 92 of 344
    [quote]Originally posted by AirSluf:




    Any particular reasons why?



    Current bus limitations that result in rapidly diminished performance returns will seemingly be avoided in 970 based designs. Different engineering environments can make significantly different options realistic compared to predictions based on track record decisions.



I don't know what Apple will do, but I don't see compelling evidence that 970s would terminally suffer the same diminishing-return curve for greater-than-dual installs. That would make marketability a significantly larger player in the decision loop than before, and marketeers always vote that bigger numbers are better.[/quote]





Like I said: I'm not making any claims as to the advantages or disadvantages of going above 2 CPUs, or even whether or not it's possible.



    All I'm saying is that my sources indicate that it is not in their (Apple's) plans for the next while. I'm sure that they have their reasons.
  • Reply 93 of 344
algol Posts: 833 member
Transcendental Octothorpe, since you seem to know so much, why don't you let us know when you think we will see the 970 in a PowerMac. And whether we will see a 970 PowerBook around the same time. That's all I really want to know.
  • Reply 94 of 344
    [quote]Originally posted by Algol:

Transcendental Octothorpe, since you seem to know so much, why don't you let us know when you think we will see the 970 in a PowerMac. And whether we will see a 970 PowerBook around the same time. That's all I really want to know.[/quote]





I believe that I've made it clear on several occasions.



    I have no info direct from Apple.



    I only have info on production and specs for the 970 and the 57/47. Believe you me, I wish I knew an exact date for a 970 PM too. See my sig.
  • Reply 95 of 344
    [quote]Originally posted by T'hain Esh Kelch:

Who has 'official' rumors and who has 'unofficial' rumors, and who's just guessing?[/quote]



What the hell does that mean? Rumors are rumors. And the bottom line is simply fun, isn't it?
  • Reply 96 of 344
In relation to the delayed production date for the next iteration of the G4 (which may not even land in a desktop PowerMac), what do you guys think this spells for the PowerMac for the next year? Is Steve/Apple Marketing going to stick by this "Year of the Notebook" thing and leave professionals hanging for an ENTIRE year with the antiquated G4? There are so many bad contingencies that I can't even list them. A new XPress would be a longshot, but gee, what if that DID happen and Apple were stuck with a 1.42 G4? Will IBM pull through and make this year something more than a year of notebook advances (which aren't bad, but aren't important for everyone involved)?
  • Reply 97 of 344
    [quote]Originally posted by fred_lj:

In relation to the delayed production date for the next iteration of the G4 (which may not even land in a desktop PowerMac), what do you guys think this spells for the PowerMac for the next year? Is Steve/Apple Marketing going to stick by this "Year of the Notebook" thing and leave professionals hanging for an ENTIRE year with the antiquated G4? There are so many bad contingencies that I can't even list them. A new XPress would be a longshot, but gee, what if that DID happen and Apple were stuck with a 1.42 G4? Will IBM pull through and make this year something more than a year of notebook advances (which aren't bad, but aren't important for everyone involved)?[/quote]



The 0.13 micron G4 has always been intended for notebook and consumer machines only. Their roadmaps back in 2000 said this. The PowerMacs will be going to the 970 as soon as possible, and the 7457 will appear in the low-end machines as soon as it's available. While Motorola's public production announcement is Q4, this might just be for generally available parts while Apple gets earlier production. They've done that before.
  • Reply 98 of 344
barto Posts: 2,246 member
    [quote]Originally posted by Programmer:




The 0.13 micron G4 has always been intended for notebook and consumer machines only. Their roadmaps back in 2000 said this. The PowerMacs will be going to the 970 as soon as possible, and the 7457 will appear in the low-end machines as soon as it's available. While Motorola's public production announcement is Q4, this might just be for generally available parts while Apple gets earlier production. They've done that before.[/quote]



I don't think the 7457 will be in Macs, apart from the iBook, for too long.



    Apple must be so incredibly tired of Motorola. As soon as the PowerPC 970 moves to 90nm, watch out!



    Barto
  • Reply 99 of 344
algol Posts: 833 member
    [quote]Originally posted by Programmer:




The 0.13 micron G4 has always been intended for notebook and consumer machines only. Their roadmaps back in 2000 said this. The PowerMacs will be going to the 970 as soon as possible, and the 7457 will appear in the low-end machines as soon as it's available. While Motorola's public production announcement is Q4, this might just be for generally available parts while Apple gets earlier production. They've done that before.[/quote]



    What roadmaps do you speak of? I was not aware we had any roadmaps as to the use of different chips...
  • Reply 100 of 344
    [quote]Originally posted by Algol:

What roadmaps do you speak of? I was not aware we had any roadmaps as to the use of different chips...[/quote]



I can't remember the details of whether it was an Apple or a Motorola roadmap; it was published back in early 2000. I wish I still had a copy of it. Obviously things have changed since then, but the SOI 0.13 micron part was there and clearly marked for consumer/notebook use. I can't remember when they expected it to be delivered, but it was surely by now, so Moto is rather late with it. There was also a G5 mentioned for the high-end, but that appears to have been replaced with IBM's 970.