Latest G5's = Short EOL?


Comments

  • Reply 41 of 56
    programmer Posts: 3,467 member
    Quote:

    Originally posted by emig647

    I predict that sooner rather than later, AMD and Intel are going to get past the problems they have been having for the last year or so. Once they do, I predict they will jump about a GHz within the next year... and if they do, Apple will fall far behind again.



    My point is, the dual 2.5 won't hold them until June of 2005... or rather July of 2005. I hope for Apple's sake they at the very least upgrade these machines at MWSF '05.




    I'm not so sure. Keep in mind that at 90nm the devices we're talking about are only a few molecules wide (I think I read that they were <100, down around 10-20... can't remember the source). The leakage current from traces of this size is tremendous, parts this tiny are very vulnerable to damage, and there is simply not the material there to be removed anymore. They may have finally reached a physical limit in terms of power dissipation, and that will limit the clock rates.
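
    To put that in rough textbook terms (this is just the standard first-order CMOS power model, not anything from IBM's datasheets), total dissipation is approximately:

    \[
    P_{\text{total}} \;\approx\; \underbrace{\alpha\, C\, V_{dd}^{2}\, f}_{\text{dynamic (switching)}} \;+\; \underbrace{V_{dd}\, I_{\text{leak}}}_{\text{static (leakage)}}
    \]

    The dynamic term scales with clock frequency f, and shrinking to 90nm pushes the leakage term up sharply instead of down. Once the static term stops shrinking with the process, raising f alone blows the thermal budget -- which is exactly the wall being described here.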



    Everybody has become so accustomed to the breakneck pace of progress that they have trouble imagining what a world without this level of advancement would look like. The last 30 years of microprocessor development have been like the first 30-50 years of automobile & aircraft development -- astonishing and rapid progress until we learn how to reach the physical and practical limits, and then incremental refinements & re-stylings on an annual (or slower) basis.
  • Reply 42 of 56
    wizard69 Posts: 13,377 member
    While it is hard to deny that manufacturers have hit a wall performance-wise, there is still a great deal of potential with respect to PPC. IBM has not hit the same wall that Intel has hit. In other words, they could do a lot more to optimize the design of the 970, be it longer pipes or enhanced execution units.



    I also suspect that if one could get some control over the leakage issue, current designs could run much faster on 90nm. This would not require changes to the architecture. Since this is a significant problem at 90nm a big focus on solving this problem seems to be smart engineering.



    The other issue from my perspective is that it appears IBM now has the worst thermal profile of any current-generation processor. If they can get a handle on this issue and become competitive with the rest of the market, the upside looks good. Just look at the performance that AMD is now getting out of the 64-bit line and the frequencies they run at. AMD's thermal profile is not that bad, considering the performance offered, on an older process.



    The thermal density issue does make one wonder if the rush to 90nm was premature. It seems to me that AMD and Motorola are actually taking a slower and maybe wiser approach. Yes it was hard to write that last sentence!



    Thanks

    dave





    Quote:

    Originally posted by PB

    I am afraid he is not. There is a general slowdown in the chip industry, not just at IBM, as clock speeds go up. The 3-4 GHz range seems to be the wall for the current technology implementation. When Intel, the GHz king, comes out and talks about dual cores and efficient architectures and officially abandons what gave them dominance in the CPU market, you cannot have much doubt.



  • Reply 43 of 56
    wizard69 Posts: 13,377 member
    Quote:

    Originally posted by DaveGee

    Heck the system designers are still in a delusional state that makes them think shipping a G5 with 256MB of RAM is fine and dandy! (Are they freakin NUTS!)



    Oh I almost forgot, they DID slap on some additional memory onto the 1+ year old video card technology. Whoop-de-freakin-do!



    It also seems to me that a lot of people are missing one other thing related to the video cards: there have been dramatic drops in the prices of these cards due to the near-term arrival of PCI Express and the new GPUs offered for that bus. So Apple could very well be making out positively with respect to profits on these machines.



    The memory allocation in these machines is equally pathetic. At this point they are probably getting that memory at bargain-basement prices due to the age of the product.



    So sadly I have to agree Apple did itself no favors at all with this release. They may be able to exploit the few who see nothing other than the price decrease and don't look for value. The rest of us, though, are likely to sit back and ask whether we really need to take this crap from Apple. Once one gets past the initial sales spike due to "2.5GHz", I have to wonder if they will be selling 150,000 a quarter. The rest of the industry is just blowing on by while Apple keeps its head in the sand.

    Quote:



    Am I missing anything?



    NOT AT ALL !!!!!!!



    You have hit some important issues. Currently buying an Apple PC is a bit like buying a lawn mower that is not ready to go out of the box. That is, to actually get the grass cut, you first have to go out and buy a larger gas tank and a larger set of wheels to keep the chassis from dragging. After you "upgrade" your mower, it performs as expected.



    I really believe that Apple has given up on market share. They currently exist to milk the pockets of the unknowing.

    Quote:



    Come on Apple You coulda thrown us a bone somewhere - you know just to show us the PowerMac hardware designers weren't on a 1 year sabbatical ALL at the same time!



    What is even worse is that this "upgrade" didn't even require an engineer. Giving customers what they need to run OS X wouldn't require an engineer either.



    Expansion of the main memory to a reasonable value would be covered by the reduced cost of the video card. This release strikes me as nothing but greed on the part of Apple. After all, it has been almost a year since these computers were announced.



    Dave






  • Reply 44 of 56
    I've seen a lot of people talking about why Apple didn't include the ATI 9800XT, 256MB card on the high end Mac.



    Not charging people for something they may not need is an excellent point, but there is another one that no one seems to be considering. If you go to the Apple site and read up on the 9800XT card, you'll realize that it is a much bigger card and Apple states that it will eliminate one of the PCI-X slots because it requires so much room.



    That is definitely a decision for the consumer, not a computer company. If you want the 9800XT, you HAVE to give up one of your PCI-X slots.



    ~ Ryan ~
  • Reply 45 of 56
    emig647 Posts: 2,455 member
    Quote:

    Originally posted by Programmer

    Everybody has become so accustomed to the breakneck pace of progress that they have trouble imagining what a world without this level of advancement would look like. The last 30 years of microprocessor development have been like the first 30-50 years of automobile & aircraft development -- astonishing and rapid progress until we learn how to reach the physical and practical limits, and then incremental refinements & re-stylings on an annual (or slower) basis.



    But Programmer,



    Why does it just have to be clock cycles? What's wrong with adding more L2 cache, maybe some L3 cache? Is it because the 970 runs so fast it can't use L3 cache? Why can't they add dual memory controllers? Is that an Apple issue? Why can't they use different cooling mechanisms to clock higher?



    I guess my biggest problem with the 970xxxx is that it only has 512K of cache compared to the others... but does the faster bus make up for that? What is the main reason the Opteron wins the benchmarks? The on-chip memory controller?
  • Reply 46 of 56
    emig647 Posts: 2,455 member
    Quote:

    Originally posted by wizard69



    You have hit some important issues. Currently buying an Apple PC is a bit like buying a lawn mower that is not ready to go out of the box. That is, to actually get the grass cut, you first have to go out and buy a larger gas tank and a larger set of wheels to keep the chassis from dragging. After you "upgrade" your mower, it performs as expected.




    This is the most insane analogy I've ever seen. How in the hell can you say that the dual 2.0 or 2.5 isn't ready out of the box!? It isn't ready for only a handful of people... scientists, some programmers, and some 3D artists. And the only reason it isn't is because of two things: RAM and the graphics card.



    A 9800XT will be sufficient for anyone. You guys are talking like the X800 is that much more powerful than the 9800XT. The benchmarks almost double going from the 9600XT to the 9800XT; the 9800XT and X800 are similar. And RAM... it's good Apple isn't charging me for RAM... I would feel like I was getting jacked. When I went to BTO my dual 2.5 I tried to downgrade the RAM!!! I don't want to pay Apple's prices for RAM.



    Even the 9600XT is still competitive!! I don't understand where you get the notion that it isn't. If I can work in OpenGL the whole time in Cinema 4D with a 9600XT, then it's doing its job. Only a few other people need this power besides gamers. I actually can't think of any Mac games right now that you can't turn the settings to full high at 1280x1024 and play at 50-75fps with the 9600XT.



    These machines are just fine.



    You need to remember that the chipsets are different between the Mac graphics cards and the PC graphics cards... at least as far as ATI goes. I'm not positive about the Nvidia cards. So Apple has to have these cards manufactured specially... while the PC cards are mass-produced for all of the different graphics card resellers. That drives the price down for everyone except Apple. It would be in Apple's best interest to find a way to use the PC cards with minimal change... ideally just software changes.



    Even a dual 2.0 upgraded to a 9600XT would be sufficient for anything for the next few years. The G5 was way ahead of its time last year... it still is... Look at the BUS!! Who has a 1.25GHz bus!? Who even has a 1GHz bus!?!? No one except Apple. Not to mention that it's a bi-directional bus, FOR EACH PROC! The only bottleneck IMO is the single memory controller.



    These machines will definitely be taken seriously until the next Rev.



    So you're saying Opterons are going to smoke them... in what scenarios!? Yes, they are faster chips, but no PC software, with some Linux/Unix exceptions, takes advantage of it. In an everyday environment (outside a programmer's or scientist's environment) these machines will be running Windows... with fake multitasking at that... If it can barely handle multitasking, how is it going to handle dual Opteron processors? Not to mention barely handling the 64-bit side of it.



    I won't even compare it to Intel... are they still a company?
  • Reply 47 of 56
    manji Posts: 5 member
    Sure the new updates aren't really that impressive, but let me play Devil's Advocate for a minute.



    1. The Downside:

    These G5's are not likely to be short term. The new ones don't even ship until July. They aren't going to release new ones at MW in July or WWDC this month, otherwise the newest models are not going to sell at all.



    2. The Upside:

    Have we forgotten already that the G5's are 64-bit bad boys? OS X isn't really a 64-bit OS and neither is any other software. We haven't even taken advantage of the 64-bit architecture yet. The current stuff is similar to the fat apps of yesteryear. We're probably looking at another six months to a year before we start seeing some true 64-bit apps. Unless of course "Tiger" is a true 64-bit OS. Then maybe we all won't be so unimpressed with the current G5 configurations.

    You have to figure...if these new Macs aren't even taking advantage of the technology yet why do they need to be upgraded so fast or even so often?

    Just matching a clock cycle number is ridiculous. So Pentiums are at 3.XX GHz, while Macs are at 2.5GHz. Think about it...
  • Reply 48 of 56
    emig647 Posts: 2,455 member
    Quote:

    Originally posted by Manji

    You have to figure...if these new Macs aren't even taking advantage of the technology yet why do they need to be upgraded so fast or even so often?

    Just matching a clock cycle number is ridiculous. So Pentiums are at 3.XX GHz, while Macs are at 2.5GHz. Think about it...




    Very good point... I have a feeling Tiger is going to be both a 64-bit OS and a 32-bit OS... we'll have to wait and see. But Apple can't tell developers to develop 64-bit apps when Apple themselves don't take advantage of it... so either Tiger will be 64-bit... or we'll go another year before a 64-bit OS comes out.
  • Reply 49 of 56
    programmer Posts: 3,467 member
    Quote:

    Originally posted by emig647

    But Programmer,



    Why does it just have to be clock cycles? What's wrong with adding more L2 cache, maybe some L3 cache? Is it because the 970 runs so fast it can't use L3 cache? Why can't they add dual memory controllers? Is that an Apple issue? Why can't they use different cooling mechanisms to clock higher?



    I guess my biggest problem with the 970xxxx is that it only has 512K of cache compared to the others... but does the faster bus make up for that? What is the main reason the Opteron wins the benchmarks? The on-chip memory controller?






    Adding cache is a case of diminishing returns. Increasing it to 1 MB helps, but obviously IBM's target market and design criteria don't make the cost/performance trade-off worth it. I expect future 9xx processors will have a larger cache and a substantially increased die size with all sorts of other additions (SMT, more cores, SoC, etc). The rapid scaling of clock speeds is what I was talking about, and those days are numbered.
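
    For what it's worth, the "diminishing returns" point can be put in simple average-access-time terms (the numbers below are purely illustrative, not measured 970 figures):

    \[
    t_{\text{avg}} = h \cdot t_{L2} + (1 - h) \cdot t_{\text{mem}}
    \]

    With, say, t_L2 = 10 ns and t_mem = 125 ns: raising the hit rate h from 90% to 95% cuts t_avg from 21.5 ns to about 15.8 ns, but the next doubling of cache size might only get you to h = 97%, i.e. roughly 13.5 ns. Each extra chunk of cache buys less than the one before, which is why it gets weighed against die area and cost.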
  • Reply 50 of 56
    emig647 Posts: 2,455 member
    Quote:

    Originally posted by Programmer

    Adding cache is a case of diminishing returns. Increasing it to 1 MB helps, but obviously IBM's target market and design criteria don't make the cost/performance trade-off worth it. I expect future 9xx processors will have a larger cache and a substantially increased die size with all sorts of other additions (SMT, more cores, SoC, etc). The rapid scaling of clock speeds is what I was talking about, and those days are numbered.



    How much of a performance gain could a larger cache add with a 1.25GHz bus? It seems the main reason to have a large cache is when the bus is too slow to keep retrieving items...



    Would adding an on-chip memory controller or dual memory controllers help out that much?
  • Reply 51 of 56
    smalm Posts: 677 member
    Quote:

    Originally posted by emig647

    Would adding an on-chip memory controller or dual memory controllers help out that much?



    An on-chip controller helps a bit, but the biggest problem is the cycle time of the memory, which will not decline until DDR2-533 RAM with 333 timing comes to market.
  • Reply 52 of 56
    programmer Posts: 3,467 member
    Quote:

    Originally posted by emig647

    How much of a performance gain could a larger cache add with a 1.25GHz bus? It seems the main reason to have a large cache is when the bus is too slow to keep retrieving items...





    This is impossible to predict without running simulations, and it heavily depends on the software running on the processor(s). Cache only speeds up access to something you've already accessed, remember? And only if you use it again fairly soon after the first time you use it. If your code always works within 512K, or streams through several times 512K, then you might find that more L2 cache does you no good at all, especially on the 970, which has pretty good cache prefetching capabilities. In practice there is always some degree of improvement, but it varies greatly depending on how it is used.
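
    A minimal sketch of the two extremes (hypothetical C, nothing 970- or Apple-specific; the buffer sizes are just placeholders for illustration):

        #include <stddef.h>
        #include <stdlib.h>
        #include <stdio.h>

        #define BIG   (16 * 1024 * 1024)  /* 16 MB: far larger than a 512K L2 */
        #define SMALL (256 * 1024)        /* 256K: fits comfortably inside L2 */

        /* Streaming: every element is touched once and never again, so a bigger
           L2 buys almost nothing -- the memory bus and prefetcher set the pace. */
        static long stream_sum(const int *buf, size_t bytes)
        {
            long sum = 0;
            for (size_t i = 0; i < bytes / sizeof(int); i++)
                sum += buf[i];
            return sum;
        }

        /* Reuse: the same small working set is swept repeatedly. After the first
           sweep it sits in cache, so later sweeps run at cache speed -- this is
           the access pattern where a larger L2 actually shows up. */
        static long reuse_sum(const int *buf, size_t bytes, int passes)
        {
            long sum = 0;
            for (int p = 0; p < passes; p++)
                for (size_t i = 0; i < bytes / sizeof(int); i++)
                    sum += buf[i];
            return sum;
        }

        int main(void)
        {
            int *big = calloc(1, BIG);
            int *small = calloc(1, SMALL);
            if (!big || !small)
                return 1;
            printf("%ld %ld\n", stream_sum(big, BIG), reuse_sum(small, SMALL, 64));
            free(big);
            free(small);
            return 0;
        }

    Time the reuse case with a working set just under versus just over the L2 size and the gap is dramatic; time the streaming case and the cache size barely matters, which is the point above.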



    I'm not sure how much DDR2-533 will decrease memory latencies, but I don't think it's a big improvement. I think we'll need a fundamental change in how memory works before we'll see a big improvement on this score.
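
    A rough back-of-the-envelope supports that (assuming typical module timings of the era, CL3 for DDR-400 and CL4 for DDR2-533; actual parts vary):

    \[
    t_{\mathrm{CAS}} = \frac{\mathrm{CL}}{f_{\mathrm{I/O}}}: \qquad
    \frac{3}{200\ \mathrm{MHz}} = 15\ \mathrm{ns}\ \text{(DDR-400, CL3)}, \qquad
    \frac{4}{266\ \mathrm{MHz}} \approx 15\ \mathrm{ns}\ \text{(DDR2-533, CL4)}
    \]

    Peak bandwidth goes up, but the time to the first word of a random access barely moves, which is why DDR2 by itself doesn't fix the latency problem.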
  • Reply 53 of 56
    existence Posts: 991 member
    Quote:

    Originally posted by Existence

    That is why I said "A single DVI connector cannot transmit that much data at 60Hz..." If Apple is going to make a 30" display with 2560x1600 resolution, Apple needs a dual-link DVI connector and a graphics card to match. Neither ADC nor a single DVI connector can do this resolution.



    Looks like I was right.



    I predict new G5s for October.
  • Reply 54 of 56
    ludwigvan Posts: 458 member
    Quote:

    Originally posted by Existence

    Looks like I was right.



    I predict new G5s for October.




    Wanna pass along some lottery numbers too?
  • Reply 55 of 56
    matsu Posts: 6,558 member
    Oops, just watched the keynote. Looks like dual-link carries a lot of bandwidth on one connector. Cool.
  • Reply 56 of 56
    lemon bon bon Posts: 2,383 member
    Hmmm. Apple have talked about a 9 month to 12 month cycle for the PowerMac range.



    Intel and AMD's updates have slowed significantly since the race to 1 and 2 gig.



    They're now at a crawl.



    Intel may scrape to 4 gig. But at what cost? A nuclear cooling system? You'll have to build an extension to your house for that...



    Steve DID address 'the wall'.



    Trade-off. The extra 500MHz didn't come. What, an extra 20% or so, judging by Apple's CPU benches on the site?



    Surely more significant is the return on the bus: 1.25 gig, which blows Intel's 800MHz right outta the water...?



    And Apple's machines are ALL dual processor.



    I DO want that dual 3 gig. But by the time it arrives? Will dual core come next Summer? Or next fall? Dual dual Core will be a formidable machine.



    And there's the prospect of dual PCI Express slots for SLI-capable GeForce 6800s. Ouch. That's gotta hurt.



    The next PowerMac update can/could be one heckuva update.



    But instead of waiting every 6-8 months? We now wait 9-12 months. (Heh, is this the winding down of the Mac business unit or symptomatic of the CPU slowdown?)



    A dual 3 gig Fx chip? (Obviously, this WAS the chip that was going to take Apple to 3 gig and IBM, speed binning, yields or otherwise...didn't make it...EVEN with cooling!) Arrives early 2005 if we're lucky. March if we say 9 months at the earliest.



    Next June at the latest. Which means we won't get dual-core PowerMacs until early 2006! Much later than has previously been mooted. That's the 9-12 month schedule Apple talked about in their financial conference.



    Clearly, much rests on the timetable of the fabled 975. Optimistic noises about an arrival this year seem to be off the pace.



    PCI Express isn't really going to be an adopted force until PC motherboard makers get on board. AMD won't be at 0.09 micron until July to September? Intel won't hit 4 gig anytime soon. All this won't be mainstream until early 2005.



    Apple will then, hopefully, have a PCI Express PowerMac, a dual 3 gig 975 hyperthreading beast, to meet an ageing Intel chip that has limped to 4 gig by early 2005? FX or 975, place yer bets. Prob' the 975? Or will IBM have speed-binned enough 2.8 and 3 gig CPUs by then? :P



    A dual 2.5 970FX with a 6800 (a card I wasn't sure we were going to get... and we're getting it just about the same time as PC users are... given IBM is shipping low yields 'late' to Nvidia...) is the PowerMac proposition for 2004. By the time PCI Express and dual SLI Nvidias are a reality, we'll be ready for them a month or so after? It's just that the talk, the foreplay, in PC land goes on for longer, making you think the PC boys have something we don't...



    ...and we can power two 30-inch displays that look gorgeous. Hey PC man... look what we got...!



    They don't have 'Tiger' (We will have it long before they have 'longhaul'.)



    Lemon Bon Bon (semi-optimistically...)



    PS. Whither OpenGL 2 to counter DirectX 9.whatever by M$?



    I thought this might have been mentioned in the 'Tiger' keynote. Has OpenGL 2 been ratified?



    Does it matter when you see what Core Image/Video can do? Look at how stunning Myst looks...



    In truth... the current 9800XT range has hardly been exploited to the limit, and Half-Life 2 looks like it should run well on a 9800 and a G5 Mac.


