New g4 specs & info


Comments

  • Reply 21 of 37
    Matsu Posts: 6,558 member
    Now I know nothing about all this, but I thought that the point of a faster bus was to get more information moving to the CPU and around the whole system in general. Particularly where the CPU has a good SIMD unit, a G4 might be able to take much better advantage of a fat bus (hence the 'multiple data' part) than something in the x86 realm. And there's the spectre of QE and its future enhancements to consider. With GPUs being asked to do more heavy lifting, a fat memory bus, especially one that is faster than the CPU's own native FSB abilities, would enable BOTH the CPU and the GPU to swap info to and from RAM without either one hogging bandwidth from the other.



    The performance of the Xserve, which uses a similar DDR 'hack', seems very illuminating to me, but that could be down to a very fast ATA disk system. Even then, a better memory bus is still great, as PCI, ATA, AGP, and CPU are then not all competing for memory bandwidth.



    It's probably not a 'Ferrari at 55' analogy, maybe more like an old Aston at 130 -- we all know the heavy beast could hit 160 if only the road were a little longer and straighter; unfortunately, curves pop up whenever we build up some momentum.



    Ughh, now I feel dirty: I hate car/computer analogies. [Hmmm]



    [ 07-09-2002: Message edited by: Matsu ]
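The shared-bus contention described above can be put into rough numbers. A minimal sketch, using era-typical peak bandwidth figures that are assumptions for illustration, not official specs for any Apple machine:

```python
# Back-of-envelope look at shared memory bandwidth, circa 2002.
# All figures are rough peak numbers (illustrative assumptions).

BUS_PEAK_GBPS = {
    "cpu_fsb_mpx_133": 133e6 * 8 / 1e9,  # 133 MHz x 8 bytes ~ 1.06 GB/s
    "agp_4x":          1.06,             # AGP 4x peak
    "ata_100":         0.10,             # ATA/100 peak
    "pci_33_32bit":    33e6 * 4 / 1e9,   # ~0.13 GB/s
}

def total_demand(gbps=BUS_PEAK_GBPS):
    """Worst case: every consumer pulls from main memory at once."""
    return sum(gbps.values())

def headroom(memory_gbps):
    """Positive means memory can feed all consumers at peak simultaneously."""
    return memory_gbps - total_demand()

pc133_sdr = 133e6 * 8 / 1e9   # ~1.06 GB/s, single data rate
pc2100_ddr = 2.1              # PC2100 DDR peak

print(f"combined peak demand: {total_demand():.2f} GB/s")
print(f"PC133 headroom:  {headroom(pc133_sdr):+.2f} GB/s")
print(f"PC2100 headroom: {headroom(pc2100_ddr):+.2f} GB/s")
```

Even PC2100 can't quite cover everything running at peak simultaneously, but it roughly doubles what a PC133 bus supplies, which is the point about the CPU and GPU not starving each other.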
  • Reply 22 of 37
    cinder Posts: 381 member
    I never knew what a 'speed governor' was until i read this thread.



    Actually, I still don't know for sure.

    Is it a car device?
  • Reply 23 of 37
    spart Posts: 2,060 member
    Basically it is part of a car's electronics that will not allow it to go past a certain MPH rating, or in some cases RPM rating.
  • Reply 24 of 37
    bigc Posts: 1,224 member
    Mechanical ones were around way before electronics were put into vehicles.
  • Reply 25 of 37
    bruiser Posts: 3 member
    [quote]Originally posted by BR:

    Speaking of speed governors...I hate speed governors. It scared the shit out of me when my Jimmy decelerated 20 mph after I hit 100 on the way to Vegas last year.[/quote]

    Hey, thanks for the global warming and the war with the Arab world, genius.
  • Reply 26 of 37
    jerome Posts: 17 member
    Please, please, please, no more ****in' white Apple!
  • Reply 27 of 37
    Ed M. Posts: 222 member
    [[[Now I know nothing about all this, but I thought that the point of a faster bus was to get more information moving to the CPU and around the whole system in general. Particularly where the CPU has a good SIMD unit, a G4 might be able to take much better advantage of a fat bus (hence the 'multiple data' part) than something in the x86 realm.]]]



    Read my other thread that talks about how Moto will likely integrate the memory controller directly onto the CPU. The topic is under "G5 Speculation Revisited".



    --

    Ed M.
  • Reply 28 of 37
    evangellydonut
    [quote]Originally posted by SteveS:

    Not all CPU based applications are bus bandwidth limited... The real world differences between the high end and low end were on the order of 5% to 10%! That's it! Now factor in the heavy dose of L3 cache that Apple currently uses but not found on most PC motherboards and you're down to about a 2% to 3% difference.

    So, for all of the EE wannabes, I beg you to stop making claims like this without substantial performance proof to back it up.[/quote]



    Unfortunately, the MAJORITY of the population are at the level of EE wannabes... and for those of us who are EEs, how much can you do on a Mac? I had to build an Athlon box for Protel, VHDL, CPLD programming, etc., since VPC simply won't cut it (especially VHDL simulations).



    Next point...I HATE Apple's tech-specs, 'cuz they don't provide enough details to be a real tech-spec. With that said, L3 cache...the G4 only has a 7 stage pipe, and if the engineers are competent enough with branch prediction (which I think Intel has the best, but they pretty much have to with a 20 stage PoS), a large L3 cache isn't very useful at all, at least for Personal Computing. If you think 'bout it, the programs that people use: Office, Photoshop, Internet, and such don't really require much branch prediction, and in Photoshop's case, a fatter memory bandwidth can make a lot of difference (and a hella fast HD). As for games and such, GPUs are doing most of the processing, so as far as I can see, I have much more preference for higher memory bandwidth than a large L3 cache. Memory bandwidth has been the traditional bottleneck, and still is, and it's definitely a good thing that the Xserve is finally on PC2100 DDR RAM. 133DDR+PC2100 is the least Apple should do with their UMA2.0...'cuz looking at raw memory bandwidth numbers, Apple are pretty far behind...
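The Photoshop point above can be sketched numerically: a streaming filter pass reads and writes each pixel once, so there is no reuse for an L3 cache to exploit, and bus bandwidth dominates. The bandwidth figures below are era-typical assumptions, not measured numbers:

```python
# Rough model of one streaming, Photoshop-style filter pass where each
# byte is read once and written once (no reuse, so caches barely help).

IMAGE_MB = 20
BYTES = IMAGE_MB * 2**20 * 2          # read + write every byte once

def pass_time_ms(mem_gbps):
    """Time for one pass if the bus is the only bottleneck."""
    return BYTES / (mem_gbps * 1e9) * 1e3

sdr_133  = 1.06   # PC133 SDR peak, GB/s (assumption)
ddr_2100 = 2.1    # PC2100 DDR peak, GB/s (assumption)

print(f"PC133:  {pass_time_ms(sdr_133):.1f} ms per pass")
print(f"PC2100: {pass_time_ms(ddr_2100):.1f} ms per pass")
# A 1-2 MB L3 can't hold the 20 MB working set, so for this access
# pattern doubling bus bandwidth roughly halves the pass time.
```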
  • Reply 29 of 37
    johnsonwax Posts: 462 member
    [quote]Originally posted by evangellydonut:

    Memory bandwidth has been the traditional bottleneck, and still is, and it's definitely a good thing that the Xserve is finally on PC2100 DDR RAM.[/quote]



    I have to agree that memory bandwidth is a problem. Regarding PC benchmarks, they don't always paint a representative picture of performance for content creators - DV, Photoshop, RIP, etc. These are generally tasks that do require more memory throughput, and the whole point of AltiVec was to work efficiently on large data sets.



    Further, Apple is in a good position to make headway through MP systems, and while single G4 systems might not be so starved for memory, you can bet that dual 1.4GHz G4s will be in a serious way if you're trying to push data to those Altivec units.
  • Reply 30 of 37
    tigerwoods99 Posts: 2,633 member
    I've heard something about Level 4 cache in the new machines (and processor), anyone else hear this? Might have been Dorsal that mentioned it.



    I just don't get how Apple doesn't implement current technology in its machines. If it were an issue with the current G4 iteration, then why would the Xserve be using it? The Xserve uses the same 7455 that can be found in the latest PowerMac G4s. Is Apple just clueless? I mean, considering how well Macs can do without buzzword technology, I figure a 1.4 GHz G4 w/ RapidIO memory controller, DDR, FireWire 2, USB 2, etc. would absolutely scream. There's no reason why Apple can't make a case with 2 full drive bays. Some of these things are just beyond me...
  • Reply 31 of 37
    lemon bon bon Posts: 2,383 member
    evangellydonut: astute post. (I heard a few tech types on these boards question the need for Level 3 stash a long while back... before it turned up on the Towers...)



    Tiger'



    "I just don't get how Apple doesn't implement current technology in its machines. If it were an issue with the current G4 iteration, then why would the Xserve be using it? The Xserve uses the same 7455 that can be found in the latest PowerMac G4s. Is Apple just clueless? I mean, considering how well Macs can do without buzzword technology, I figure a 1.4 GHz G4 w/ RapidIO memory controller, DDR, FireWire 2, USB 2, etc. would absolutely scream. There's no reason why Apple can't make a case with 2 full drive bays. Some of these things are just beyond me..."



    Well said. I don't know why either. (When even Macmags are dropping hints...) Sigh. It's frustrating, I KNOW. They've got EVERYTHING ELSE NAILED DOWN NOW...(right down to the 'Switch' campaign...) except...



    Lemon Bon Bon



    [ 07-09-2002: Message edited by: Lemon Bon Bon ]
  • Reply 32 of 37
    anand Posts: 285 member
    "a large L3 cache isn't very useful at all"



    In our lab, we have several PowerMacs. One is a top of the line 733 and the other is the educational 733. The only difference is the old 733 has L3 cache. That runs rings around the newer (and cheaper) 733. For the Mac, I am convinced that the L3 cache helps performance - tons.
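The 733-vs-733 observation fits a simple working-set picture: once a program's hot data outgrows L2, a 1 MB L3 catches it before it falls all the way out to RAM. A toy model, with latency figures that are illustrative guesses rather than measured G4 numbers:

```python
# Toy model of why an L3 cache can matter: effective access latency for
# a working set that fits (or doesn't) in each cache level.
# Sizes/latencies are illustrative assumptions, not 7455 specs.

LEVELS = [  # (name, size_bytes, latency_cycles)
    ("L1",  32 * 2**10,   3),
    ("L2",  256 * 2**10, 12),
    ("L3",  1 * 2**20,   40),
    ("RAM", None,       150),
]

def access_latency(working_set_bytes, levels=LEVELS):
    """Latency of the smallest level that holds the whole working set."""
    for name, size, lat in levels:
        if size is None or working_set_bytes <= size:
            return name, lat
    raise AssertionError("unreachable: RAM level has no size limit")

for ws_kb in (16, 128, 512, 4096):
    name, lat = access_latency(ws_kb * 2**10)
    print(f"{ws_kb:>5} KB working set -> {name:>3} ({lat} cycles)")
```

In this model a 512 KB working set is a ~4x latency win with L3 versus going to RAM, which is the kind of gap that would let one 733 "run rings around" the other.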
  • Reply 33 of 37
    lemon bon bon Posts: 2,383 member
    But aren't grab-sized bags of cache on the processor expensive? Re: POWER4?



    Wouldn't it be better to have an extra FPU or two?



    Lemon Bon Bon
  • Reply 34 of 37
    terkans Posts: 6 member
    [quote]Originally posted by Phrogman:

    Didn't Apple jump from 500 MHz to 733 MHz?? ...which is a 47% increase... 7% more than going from 1 GHz to 1.4 GHz.[/quote]



    I think the largest percentage increase ever was way back when they went from an 8 MHz 68000 to a 16 MHz 68020 in the Mac II.
  • Reply 35 of 37
    johnsonwax Posts: 462 member
    [quote]Originally posted by TigerWoods99:

    I just don't get how Apple doesn't implement current technology in its machines. If it were an issue with the current G4 iteration, then why would the Xserve be using it? The Xserve uses the same 7455 that can be found in the latest PowerMac G4s. Is Apple just clueless?[/quote]



    Well, it's not like they can just solder in DDR. Mot needs to build that support into the CPUs, which they might be unwilling to do.



    The Xserve does use DDR, but in a not particularly useful way for a desktop. The DDR in the Xserve delivers rather nice bandwidth from memory to I/O, but not from memory to CPU. Need to stream a gazillion MPEG files? Xserve's your box. Need to compress those MPEG files? It's gonna be slow. The slow memory to CPU bus isn't usually a big problem in most server environments, so the Xserve really is a good product.



    If, however, you are going to do computational clustering, the Xserve will probably suck, and you might as well save your pennies on that second CPU because it'll damn well never get used. Desktops tend not to be I/O engines the way servers are; they do much more computational work. The type of DDR support in the Xserve, if translated to a desktop system, simply won't help performance in the area where it is needed.



    What we need in a desktop (particularly a dual) is enough memory bandwidth to the CPUs that AltiVec can be well fed. We do need true DDR or something of comparable speed. AltiVec in the 1GHz G4s can often chew up 10x as much data as it can currently be fed from main memory. Give AltiVec a small problem that fits in the cache and we're in good shape; give it a 20MB Photoshop file and we're toast.



    The engineering tradeoff that Apple often makes involves the investment in a marginal improvement to the architecture vs. the investment in a substantial improvement that will take longer to get to market. In the PC space, you have a bunch of board makers that leapfrog one another as they introduce new tech. No one company stays ahead all the time - they just can't afford to re-engineer every board every month, but they pick and choose and for a time have the latest and greatest. Apple is all alone. They have to skip technology to pick the time on the horizon when the bang/buck ratio justifies the investment in an architecture change. Apple will always be in some way behind on this point - always. And it's a risky game because it's a 'putting all your eggs in one basket' scenario. If Apple banks on AltiVec thinking it will give them a performance edge, it might also mean that they are tied to Motorola, who can't fab chips faster than 500 MHz for a year and a half...



    I'm not suggesting that Mot sucks or Apple does or anything of the sort, rather that as Mac users we need to accept that this is the reality right now, like it or not. Apple is introducing some changes that should ameliorate this problem. Quartz Extreme should be an excellent example of the benefits of total system integration that simply won't be matched in the Wintel world. How far that will help us remains to be seen. Other similar improvements are likely to come over the next several years so that even if Apple isn't raw horsepower king, they've got an efficient balance of resources that will translate into better overall performance.



    Keep in mind, with this strategy the benchmarks are guaranteed to suck to high heaven.
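The "10x as much data as it can be fed" claim above is easy to sanity-check with peak numbers. A hedged sketch; these are theoretical maxima that real code never sustains, so treat the ratio as an upper bound:

```python
# Sanity check on the "AltiVec can chew ~10x what the bus feeds it" claim.
# Peak figures only (assumptions): one 128-bit vector load per cycle vs.
# a 64-bit, single-data-rate MPX front-side bus.

def altivec_peak_demand_gbps(clock_hz, bytes_per_cycle=16):
    """One 128-bit (16-byte) vector load per cycle at the given clock."""
    return clock_hz * bytes_per_cycle / 1e9

def mpx_supply_gbps(bus_hz=133e6, bytes_per_beat=8):
    """MPX bus: 64-bit data path, one transfer per bus clock."""
    return bus_hz * bytes_per_beat / 1e9

demand = altivec_peak_demand_gbps(1e9)   # 1 GHz G4
supply = mpx_supply_gbps()
print(f"peak demand {demand:.1f} GB/s vs bus {supply:.2f} GB/s "
      f"-> ratio ~{demand / supply:.0f}x")
```

At peak the gap is even worse than 10x, which is why a cache-resident problem flies while a 20MB Photoshop file crawls.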
  • Reply 36 of 37
    SteveS Posts: 108 member
    [quote]Originally posted by evangellydonut:

    Unfortunately, the MAJORITY of the population are at the level of EE wannabes...[/quote]



    ...and that's absolutely fine. All are welcome and everyone's entitled to an opinion, etc. The problem is, I've never seen a forum so focused on buzzwords. The thing is, this is a clear example where people get the right idea - DDR > non-DDR, etc. However, the perceived performance deltas have been blown up to mythical proportions. People are pointing to the Xserve as their example. What benchmarks have we seen? Some server-based tasks from Xinet (or something like that - I don't have the link handy). Doesn't the Xserve have faster ATA controllers as well? How much of the Xserve's increased performance is due to DDR? Let's see some Photoshop filters, not just "opening files", which is largely disk I/O limited.



    [quote]a large L3 cache isn't very useful at all, at least for Personal Computing. If you think 'bout it, the programs that people use: Office, Photoshop, Internet, and such don't really require much branch prediction, and in Photoshop's case, a fatter memory bandwidth can make a lot of difference (and a hella fast HD).[/quote]



    The need for L3 cache is certainly dependent upon the architecture of the chip/platform in discussion. Regarding the current Macs, the L3 cache is proven to be beneficial as mentioned by another poster in this thread. I've also seen benchmarks comparing the top of the line 733 G4 with L3 cache against the low end 733 G4 w/o L3 cache. Given Apple's current motherboard and G4 combo, the need for L3 cache is without question. My point however, was that the use of a large L3 cache diminishes a good portion of any performance advantage DDR alone would provide.



    [quote]As for games and such, GPUs are doing most of the processing, so as far as I can see, I have much more preference for higher memory bandwidth than a large L3 cache. Memory bandwidth has been the traditional bottleneck, and still is, and it's definitely a good thing that the Xserve is finally on PC2100 DDR RAM. 133DDR+PC2100 is the least Apple should do with their UMA2.0...'cuz looking at raw memory bandwidth numbers, Apple are pretty far behind...[/quote]



    A couple of things here.



    1) The performance of games can be CPU limited. This mostly depends upon the resolution. At low resolutions, the benchmarks are more CPU bound. At higher resolutions, the fill rate of the GPU comes into play. Of course, there are other factors the GPU handles with its TC&L engine, etc.



    2) I agree with your preference for higher memory bandwidth as opposed to larger L3 caches, etc. The former is clearly a better all around approach.



    3) Likewise, I also agree that Apple is behind in raw memory bandwidth. There can certainly be no argument against that claim. However, it is very important to point out that huge increases in memory bandwidth yield only marginal increases in actual overall performance in real applications (not stream-based synthetic benchmarks). My issue is with the people in this forum that seem to think DDR (or faster memory bandwidth) is some sort of magic bullet that will level the playing field, etc. It is not. Yes, faster bandwidth is needed. However, those expecting more than something like a 10% across-the-board improvement are only setting themselves up for disappointment. Yes, there will be a specific benchmark here and there that benefits more. However, by and large, memory bandwidth increases have a minimal effect on overall performance. Specifically, the use of L3 cache in current Macs closes the memory performance gap considerably in most applications when compared to PC-based DDR systems w/o L3 cache.



    Steve
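SteveS's "~10% across the board" figure follows from Amdahl's law: if only a fraction of runtime is actually stalled on main memory, doubling bandwidth helps only that fraction. A sketch where the stall fractions are assumptions chosen for illustration:

```python
# Amdahl-style estimate of the overall gain from faster memory:
# only the memory-stalled fraction of runtime speeds up.

def overall_speedup(stall_fraction, bandwidth_ratio):
    """Amdahl's law, with the memory-stalled fraction sped up
    by bandwidth_ratio and the rest of the runtime unchanged."""
    return 1.0 / ((1.0 - stall_fraction) + stall_fraction / bandwidth_ratio)

# SDR -> DDR: 2x peak bandwidth, assumed stall fractions below.
for stall in (0.05, 0.15, 0.30):
    s = overall_speedup(stall, 2.0)
    print(f"{stall:.0%} memory-stalled -> {s - 1:.1%} overall gain")
```

With 15% of runtime memory-stalled, doubling bandwidth yields only about an 8% overall gain, right in line with the "10% at best" claim; a large L3 shrinks the stalled fraction further, shrinking DDR's visible benefit with it.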
  • Reply 37 of 37
    davelee Posts: 245 member
    But don't forget as well that Apple's hardware engineers have squeezed an enormous amount of throughput (I think we would all agree) from the apparently modest capabilities of the MPX bus.



    What I would like to see is that gusto applied to a real DDR controller - imagine the performance gains that they could manage (it would probably kick the higher-clocked buses of the Athlons and P4 into next week, no?).



    That is why I think we will never see an Xserve-style solution in an Apple PowerMac (and I am even hopeful that the next rev mobo would be able to handle 333 MHz effective).