Nr9 Prophecy being fulfilled?


Comments

  • Reply 41 of 74
    wizard69wizard69 Posts: 13,377member
    Quote:

    Originally posted by The Woodman

    I read these postings, and I can get into the prediction game, too. But I just received my 30" ACD coupled with a dual 2.5GHz G5 with the Nvidia 6800 card, and well, this machine is so fast, I have to wonder who really needs that kind of speed?







    Well, you purchased the machine and it is making you pretty happy, apparently. The reality is that what you have is a slow machine for many applications.



    As an aside it must be nice to have that money to throw around.

    Quote:

    Unless you are doing weather modeling or analyzing nuclear explosions, do you really need more?



    Yep, we have a very long way to go software-wise; in fact, software is somewhat gated by the ability of hardware to execute it in a timely manner. So progress on the software front requires significant strides on the hardware front. The two are tied together. Obviously one does not need such power to run today's mainstream office software, but that might not be the case next year, and it says nothing about software outside of normal business apps. It is the new apps that can't yet be delivered that such a machine will be ideal for, but that doesn't mean today's won't benefit. For example, CAD and engineering design software can always use more horsepower. Very few programmers are happy with the build processes on their machines either.

    Quote:



    Tonight I worked on my Halloween costume, editing an 80MB file in Photoshop to output to a large format HP DesignJet, and I never needed to wait for the computer. Not once. I never see the color wheel.



    Well if you consider a Halloween costume to be significant usage of a computer then you need to discover new benchmarks!

    Quote:



    I can open iPhoto (Apple's worst app, but I still use it), have over a hundred photos on the screen at once (remember, this is on the 30" ACD), then move the slider to resize all the photos at once. No problem. No waiting.



    At this point, if you are waiting for the next generation Mac, you aren't likely to actually ever purchase a machine.



    Not true at all. I currently run Linux on x86 hardware that is rather old. I've already made a commitment to buy a 64-bit system that supports PCI-Express. It will be interesting to see whether Apple can offer the public a competitive machine with those features. PPC is the way to go if one is going 64-bit, but an AMD chip is a reasonable alternative.



    In any event it is not the absolute power of the system that interests me. It is the desire to make an investment that actually lasts a while. The current PowerMac simply has too many longevity issues to make the acceptable purchase list.

    Quote:

    So, while you can enjoy speculating, you lose some credibility saying that you are going to wait.



    So why is that? Somebody waits for technology they know is coming to avoid wasting money, and you say that they lose credibility! I can't even count all the times that waiting to commit to a purchase has saved me money or trouble in the long run. Then there are the mistakes that were the result of rash decisions to buy before reasonable contemplation took place.



    It never hurts to wait if you are getting by with current technology. This applies to just about anything one has in the house. In the case of computer technology it is an especially responsible way to approach money management. Computers in general just keep getting better; the industry is a long way from having matured.

    Quote:



    Woodman



    Dave
  • Reply 42 of 74
    Interesting comments, but I still contend that most of the people posting on this thread really don't need that kind of horsepower for the VAST majority of the things they do. My point is this: you like to talk and speculate about the future, about what is around the next corner, about the latest and greatest technology, about the next generation of software that is going to run slowly on this generation of computers, etc., etc., and that is fun to do. That's why I like to read these threads. But some people just talk in hypotheticals, about software that might be developed to run on a machine they might purchase if it could only reach 200 gigaflops. Actually, name me three existing programs that won't run decently on today's high-end machines... I can't think of any.



    We can live in the moment, or always live with anticipation for the Next Great Thing (I do that with girlfriends, but not computers). The real world benefits of upgrading from a 486 to a Pentium were really significant back in the day. Or from an 040 to a PowerPC. But now, machines are really fast. I just upgraded from a Dual 500 G4, so the speed bump was nice. But even my previous machine wasn't THAT slow for most tasks. The greatest benefit for me in the past five years was actually MacOSX, not a huge hardware bump. People say they CONSTANTLY NEED these new technologies and complain that IBM or Apple or Intel aren't stepping up to the plate, but I think these posters aren't living in the real world. What percentage of your day do you really need something that doesn't even exist yet?



    And think of all the available time you have to wait for your computer to process your CAD drawings, software compiling, etc., while you are contributing your comments to this thread! I am relieved that I have the Dual 2.5 G5 and the big screen, as I can post to this forum very efficiently. (I don't use emoticons, but I am joking in this paragraph.)



    I agree that top-of-the-line machines are expensive. Whoa, are they expensive. I am on a four to five year computer purchase cycle these days, but my four year old computers are a lot more tolerable than they used to be.



    Woodman
  • Reply 43 of 74
    programmerprogrammer Posts: 3,503member
    Quote:

    Originally posted by Zapchud

    From which hat do you pull that 200 GFlops figure? 2.5 GHz * 2 chips * 2 cores * 2 HW threads * 4 FLOPS (FMAC) + 2.5 GHz * 2 chips * 2 cores * 2 HW threads * 4 FLOPS (Altivec)?



    Edit 2: I'm not sure I get this. With 2 HW threads, you don't get twice the peak performance? The FPUs themselves won't be able to push through twice as much work, will they?




    I goofed and assumed 2 vector ALUs. Corrected, it would more likely look like this: 2.5 GHz * 2 chips * 2 cores * (4-way SIMD * 2 FLOPS (FMAC) + 2 FPUs * 2 FLOPS (FMAC)) = 120 GFlops. The 2-way SMT just makes it more likely you can achieve something closer to the peak than you might otherwise.
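    For anyone checking the arithmetic, here is the same peak-throughput estimate as a short sketch. The chip, core, and execution-unit counts are the hypothetical figures from the post, and this is a theoretical ceiling, not something real code reaches:

```python
# Peak FLOPS for a hypothetical dual-chip, dual-core 2.5 GHz PPC, using the
# unit counts quoted above. Note SMT does not raise this ceiling; it only
# helps real code get closer to it.
clock_hz = 2.5e9
chips = 2
cores_per_chip = 2

simd_flops_per_cycle = 4 * 2  # 4-way SIMD, FMAC = 2 FLOPs per lane
fpu_flops_per_cycle = 2 * 2   # 2 scalar FPUs, FMAC = 2 FLOPs each

per_core = simd_flops_per_cycle + fpu_flops_per_cycle  # 12 FLOPs per cycle
peak = clock_hz * chips * cores_per_chip * per_core

print(peak / 1e9, "GFLOPS")  # 120.0 GFLOPS
```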
  • Reply 44 of 74
    programmerprogrammer Posts: 3,503member
    Quote:

    Originally posted by The Woodman

    But some people just talk in hypotheticals, about software that might be developed to run on a machine they might purchase if it could only reach 200 gigaflops. Actually name me three existing programs that won't run decently on today's high end machines... I can't think of any.



    As I said, look at my user name -- my job is to make the next program that will take advantage of your bleeding edge hardware (pity I don't work on the Mac, however). I've also got a bunch of programs at work that are hopelessly CPU bound that I really wish would run faster. No, I'm not typical but I'm also very interested in making compelling software that will make people want to buy these brutish machines. At some point performance will again enable a new killer-app.



    If you look at the iMac G5 you'll see that Apple could have made a much faster consumer machine, but chose not to because they agree with you. The average consumer user does not currently have a compelling reason to accept the trade-offs in building a maximally fast computer, so instead they build one that has other advantages in terms of noise level, form factor, etc. They also build the 2.5 GHz dual PowerMac w/ 6800 and 30" display for those people that do need maximum performance and are willing/able to pay for it. That crowd is nowhere near satiated.
  • Reply 45 of 74
    amorphamorph Posts: 7,112member
    Quote:

    Originally posted by wizard69

    Well, you purchased the machine and it is making you pretty happy, apparently. The reality is that what you have is a slow machine for many applications.



    Which applications, and relative to what?



    If they're not applications he uses, and the alternative hardware isn't suited to his needs or budget, then the reality is that your argument is moot in his case.



    BlueGene/L is a slow machine for "many applications."



    Quote:

    Yep, we have a very long way to go software-wise; in fact, software is somewhat gated by the ability of hardware to execute it in a timely manner. So progress on the software front requires significant strides on the hardware front. The two are tied together.



    Actually, there's a tension there that you're ignoring. Years of G4 stagnation gave us a lot of really tight code and a significant investment in AltiVec acceleration, which is paying off.



    Basically, the tension is that while faster hardware will run the same software faster, software can also rely on the user "throwing hardware at the problem" to get good performance. Microsoft has even throttled software performance in order to spur hardware upgrades (and thus, sales of Windows licenses).



    More than a few people are actually looking forward to the end of the clockspeed era, in the hope that software designers will have to write tighter code instead of assuming that PCs will be 1GHz faster next year.



    Quote:

    Not true at all. I currently run linux on i86 hardware that is rather old. I've already made a commitment to buy a 64 bit system that supports PCI-Express. It will be interesting to see if Apple can submit to the public a competitive machine with those features. PPC is the way to go if one is going 64 bit, but an AMD chip is a reasonable alternative.



    Apple will probably adopt PCI Express when there are parts out there that use it. PCI and related interfaces are inherently backward looking, because professionals invest thousands of dollars in solutions that use them, and they (quite reasonably) demand that their new machines be compatible with these solutions. So Apple has very little interest in "pushing" adoption of a new, incompatible replacement. They got enough flak for phasing out 5v PCI in the G5, despite the fact that 5v cards had been officially deprecated for years. To the extent possible, they will follow the lead of the market here, and they will do it precisely to cater to their professional base.



    For that reason, I'm currently expecting them to replace AGP with PCI-E, and keep the backward-compatible and adequately-performing PCI-X for the remaining slots.



    Quote:

    In any event it is not the absolute power of the system that interests me. It is the desire to make an investment that actually lasts a while. The current PowerMac simply has too many longevity issues to make the acceptable purchase list.



    I hear this complaint all the time about Macs, but... every single time someone looks into it, Macs turn out to have a significantly longer useful lifespan on average.



    In the case of PCI-E, I expect it to be much like FireWire 800: It's nifty, it's fast, and ... well, you know, I have a computer that will be 4 years old next month, and I can't think of a single FW peripheral that I wouldn't happily hang off the FW400 bus. I'm sure that somewhere, there's someone for whom FW800 is a godsend, but Apple knew to keep the FW400 port there anyway, because they knew the general market wouldn't budge, because it didn't need to. Apple will do the same thing with PCI.
  • Reply 46 of 74
    wizard69wizard69 Posts: 13,377member
    If nothing else you do offer an interesting discussion! See my comments below.



    Quote:

    Originally posted by The Woodman

    Interesting comments, but I still contend that most of the people posting on this thread really don't need that kind of horsepower for the VAST majority of the things they do.



    Who are "most people"? The world is made up of all sorts of people; some of them need thousands of computers clustered together to solve their problems. Others need one to a hundred, based on today's performance levels.

    Quote:

    My point is this: you like to talk and speculate about the future, about what is around the next corner, about the latest and greatest technology, about the next generation of software that is going to run slowly on this generation of computers, etc., etc., and that is fun to do. That's why I like to read these threads.



    It is not a matter of speculation at all, but that really isn't significant to the whole argument. The problem is that today's software is already constrained by execution speed.

    Quote:



    But some people just talk in hypotheticals, about software that might be developed to run on a machine they might purchase if it could only reach 200 gigaflops. Actually name me three existing programs that won't run decently on today's high end machines... I can't think of any.



    The compile of any large program written in C++, for starters. That is using just about anybody's compiler on any platform.



    There is a very large body of Java applications that could certainly use a little more performance. Many of these programs could benefit greatly from a multiprocessor machine.



    CAD systems, especially 3D, could benefit greatly from more CPU horsepower. The same goes for EDA.



    As far as that goes, I've seen spreadsheets that could use a significant speed-up.



    Most Macs can't handle gaming as well as a PC at a quarter of the cost.

    Quote:

    We can live in the moment, or always live with anticipation for the Next Great Thing (I do that with girlfriends, but not computers).



    I feel sorry for the women you meet!

    Quote:

    The real world benefits of upgrading from a 486 to a Pentium were really significant back in the day. Or from an 040 to a PowerPC. But now, machines are really fast.



    You keep repeating this theme that machines today are really fast. This is total nonsense. The low-end G5 is barely faster than a G4 for some tasks.



    Sure, this is dependent on usage; Halloween costume design is just not that demanding. It may very well be that the great majority of the people with a Mac at home are not stressing it. But that does not make the needs of those who do stress their machines any less significant.

    Quote:



    I just upgraded from a Dual 500 G4, so the speed bump was nice. But even my previous machine wasn't THAT slow for most tasks. The greatest benefit for me in the past five years was actually MacOSX, not a huge hardware bump. People say they CONSTANTLY NEED these new technologies and complain that IBM or Apple or Intel aren't stepping up to the plate, but I think these posters aren't living in the real world.



    In the Apple world OS X is a significant technology; it is a shame to dismiss it because it is software. In fact, OS X is the only thing that has renewed my interest in Apple hardware!



    I actually think that those complaining the loudest are living in the real world. These are people who are disappointed in the hardware that Apple is delivering and feel compelled to switch over to hardware from the x86 world.



    It is not that people constantly need new technologies; it is just that Apple is often (not always) way behind the eight ball with respect to technologies that can be very significant. One example here is PCI-Express.



    I'm not going to be so bold as to say that they need PCI-Express now. It is a new technology after all, but if it isn't part of the next PowerMac rev then I see no good reason to keep my interest in Apple hardware. By that time 64 bit Linux on AMD hardware will be completely stable. I was also rather disheartened by the lack of PCI-Express in the new iMac, but that is a wholly different thread.

    Quote:



    What percentage of your day do you really need something that doesn't even exist yet?



    I think I've found the problem: you are missing my point entirely. At this moment I'm getting by fine with what I have. The point I'm trying to make is that it would be foolish at this point in time to invest in any of the PowerMacs unless one really had to. They simply do not have the feature set to satisfy a customer who expects to keep the hardware for a significant amount of time. The lack of disk drive expansion, the lack of PCI-Express and a few other gotchas all add up to a machine that is a poor investment. Now, I'm sure there are a few who can put the machine to work in their business and pay it off in a week's time, but that isn't the general state of affairs.

    Quote:



    And think of all the available time you have to wait for your computer to process your CAD drawings, software compiling, etc., while you are contributing your comments to this thread! I am relieved that I have the Dual 2.5 G5 and the big screen, as I can post to this forum very efficiently. (I don't use emoticons, but I am joking in this paragraph.)



    You have a dual machine, but you do not seem to be aware that the very nature of your machine makes for a very responsive unit when doing more than one thing at a time. Just think about how a quad-processor machine would perform when you are doing even more demanding work (maybe Christmas costumes for the whole family).

    Quote:



    I agree that top-of-the-line machines are expensive. Whoa, are they expensive. I am on a four to five year computer purchase cycle these days, but my four year old computers are a lot more tolerable than they used to be.



    Frankly, it has been a long time since I've even thought about buying a top-of-the-line machine. What I'm looking for is good performance at a reasonable price. Even more important, I want that machine to last for a few years and be able to handle at least a couple of upgrade cycles. What will anybody do with a current PowerMac in two or three years if a new video card is wanted?

    Quote:

    Woodman



    In any event, the problem on this forum is not that people are expecting too much or want too much; it is more a matter of Apple delivering far too little for the almighty $$$$. Hey, if it is Apple's intent to transition to a consumer products company and drop computer hardware from its portfolio, then they really should share that info with us. As it is, the current lineup of hardware looks like it was built especially to target people with more $$$$$$ than brains.



    Thanks

    dave
  • Reply 47 of 74
    wizard69wizard69 Posts: 13,377member
    Quote:

    Originally posted by Amorph

    Which applications, and relative to what?



    If they're not applications he uses, and the alternative hardware isn't suited to his needs or budget, then the reality is that your argument is moot in his case.



    BlueGene/L is a slow machine for "many applications."



    This is exactly my point: he sees his computer as fast when in reality it isn't. It isn't fast relative to any cluster, and may not be fast at all with respect to similar x86 hardware. It all depends on usage. Since the biggest stress the poster has put on his machine was designing Halloween costumes, I'm not sure the processors ever get warmed up on his machine.

    Quote:



    Actually, there's a tension there that you're ignoring. Years of G4 stagnation gave us a lot of really tight code and a significant investment in AltiVec acceleration, which is paying off.



    Basically, the tension is that while faster hardware will run the same software faster, software can also rely on the user "throwing hardware at the problem" to get good performance.



    Exactly. This is what I want to see Apple do: throw hardware at the problem in a machine that doesn't cost an arm and a leg.

    Quote:



    Microsoft has even throttled software performance in order to spur hardware upgrades (and thus, sales of Windows licenses).



    More than a few people are actually looking forward to the end of the clockspeed era, in the hope that software designers will have to write tighter code instead of assuming that PCs will be 1GHz faster next year.



    That is an outlook filled with negativity and false hopes. It is almost always cheaper to throw hardware at the problem.



    Beyond the cost issue, though, there are a huge number of applications that have been optimized to death already and simply need more horsepower. We are talking huge performance increases year on year. If IBM, Intel & AMD can't deliver this sort of performance increase, then the industry is headed for a technological dark age. We will probably remain in that dark age until new technology comes along. As it is, it looks like Intel has run its course as a viable processor development company; I'd hate to see IBM fall in the same way that Intel has.



    It is the nature of our economy for businesses to focus so heavily on their core technologies that they fail to see the big picture. (I live only a few miles from Kodak's main facilities, so I have lots of data here.) Like the buggy whip manufacturers of the past, Intel is very likely to work itself into being a has-been. IBM still has the potential to work itself out of the mess it is in. If they forget about performance, though, they will have issues.



    Quote:

    Apple will probably adopt PCI Express when there are parts out there that use it. PCI and related interfaces are inherently backward looking, because professionals invest thousands of dollars in solutions that use them, and they (quite reasonably) demand that their new machines be compatible with these solutions. So Apple has very little interest in "pushing" adoption of a new, incompatible replacement.



    Apple doesn't need to push the adoption of PCI-Express; the market has already made the decision. The result of that decision is the poor sales of the PowerMac line. Sure, PCI-Express is not the only component of Apple's sales issues, but that technology is significant in the markets it is trying to focus on. Right now the PowerMac is not a high-performance workstation, nor can it be made into one.



    In any event there is no reason for Apple to abandon conventional PCI expansion slots. The technologies can coexist. The bigger problem is that the case Apple uses is a little tight, shall we say. There isn't really room for two large PCI-Express slots.

    Quote:



    They got enough flak for phasing out 5v PCI in the G5, despite the fact that 5v cards had been officially deprecated for years. To the extent possible, they will follow the lead of the market here, and they will do it precisely to cater to their professional base.



    That is exactly what they will have to do. The professional base has made it very clear that alternative hardware will be looked at. PowerMac sales have sucked recently; now, Apple can blame that on part availability, which I won't argue, as it is a real component of the sales fiasco. The bigger problem is that many potential customers don't see a solution to their problems in a PowerMac.

    Quote:



    For that reason, I'm currently expecting them to replace AGP with PCI-E, and keep the backward-compatible and adequately-performing PCI-X for the remaining slots.



    Yep, not a big deal. Well, except for the need for case modifications. Ideally there will be at least two PCI-Express slots to support dual GPUs.

    Quote:

    I hear this complaint all the time about Macs, but... every single time someone looks into it, Macs turn out to have a significantly longer useful lifespan on average.



    I disagree here, but there are so many factors in the phrase "significantly longer useful lifespan on average" that it isn't worth arguing online. I do feel safe in saying that Apple users are more likely to accept the performance of an old machine. Also, if you factor in serviceability and upgradeability, the tables turn in different directions.

    Quote:



    In the case of PCI-E, I expect it to be much like FireWire 800: It's nifty, it's fast, and ... well, you know, I have a computer that will be 4 years old next month, and I can't think of a single FW peripheral that I wouldn't happily hang off the FW400 bus. I'm sure that somewhere, there's someone for whom FW800 is a godsend, but Apple knew to keep the FW400 port there anyway, because they knew the general market wouldn't budge, because it didn't need to.



    PCI-Express is a whole different ball game. In a sense it can be seen in a similar light to the adoption of FireWire in Apple's hardware. FireWire is very similar to USB and yet not close at all, much the same as PCI-Express is similar to PCI/AGP yet not the same at all. Like FireWire, PCI-Express will provide the opportunity for new applications and hardware that work in ways the competing interfaces wouldn't allow.



    The biggest problem with respect to PCI-Express is that it is being adopted by the GPU manufacturers as the replacement for AGP. This creates a situation where Apple will fall behind very quickly if they don't adopt the technology. Frankly, the only other choice Apple has is to get a GPU manufacturer to adopt the HT bus. That is not likely to happen, though Apple could surprise the hell out of us.



    As an aside, the route to an HT interface to the GPU is via a north bridge with a built-in GPU. This technology has come a long way in the last year or two; I'm actually surprised that the iMac did not go this way.

    Quote:

    Apple will do the same thing with PCI.



    Yes, I expect Apple's conventional use of PCI to be around a long time. But this, as pointed out, has nothing to do with the pressing need to put PCI-Express slots into the tower. Apple's very credibility as a PC manufacturer depends on it.



    Dave
  • Reply 48 of 74
    I still maintain that Programmer really has no credibility when he says that he is designing "hopelessly CPU bound" software that the Mac can't handle, when he doesn't even "work on the Mac" to begin with. I haven't done so much programming that I would call myself "Programmer," but I have used Vectorworks/Renderworks (a CAD program for the Mac), and it runs very fast. And although Photoshop is an itsy-bitsy little desktop app (yes, even with my micro 80MB Halloweanie file), it also runs very fast. I haven't done any C++ compiling on the Mac, but neither has Programmer.



    As for the iMac G5, this is an entirely different beast and really, off topic. Apple isn't trying to make it the fastest machine in the world... far from it. In fact, they made it the same upright angle as the iPod sitting in a dock (in my opinion) to leverage the iPod's popularity in their advertisements. But who cares? It is a cool machine that works great for the intended iPod-user-web-surfer audience. I prefer the Dual G5. For those of us that actually own and USE these machines, they are pretty remarkable.



    Wizard has more interesting points. If you have experience compiling C++ on a fast Mac and it is slow, well, then there is a data point. I haven't done this, but have you? I have yet to see many Java apps run all that swimmingly on any platform, but again, kinda off topic. As for CAD software, again, have you actually used this software on a fast Mac? It sounds like you are generalizing about what you THINK will happen. Use a good CAD program on a fast dual G5. You won't be complaining about the speed, even with really complex drawings.



    Gaming is another beast entirely. Yes, games are not as fast on the Mac. If you are a big gamer, you would buy the platform for which programmers optimize their code. Enough said.



    Apple's current lineup is pretty impressive. Sure, the PowerMacs are huge, expensive blow dryers, but saying they aren't fast or that Apple isn't good about bringing new technologies to market is simply untrue (um... liquid cooling?). Then, sit in front of a 30" display with four million pixels and unbelievable brightness and contrast. Anyway, I would rather wait a generation of machines than have Apple (or any other company) cram new technology down my throat before it is ready for prime time.



    I appreciate the great computing tools available to me today, from Apple and other companies. My initial complaint was that posters in this forum are always waiting for the next technology right around the corner before they buy. Ten years from now, there will still be a corner and I will have had two or three revs of machines that I have enjoyed along the way.



    As for my Halloween costume, I am going as an iPod, complete with functioning headphones (amplified speakers) and backlight. Disparage it if you must, but it is going to rule. Oops. That is really off topic.



    I thought Amorph brought up some really good points.
  • Reply 49 of 74
    henriokhenriok Posts: 537member
    Quote:

    Originally posted by The Woodman

    I still maintain that Programmer really has no credibility when he says that he is designing "hopelessly CPU bound" software that the Mac can't handle, when he doesn't even "work on the Mac" to begin with.



    Seeing that you are new to these boards...

    There are some users on these boards who have earned a reputation above the normal standards. Programmer is one of those; his reputation is impeccable. He has never given me reason to doubt his competence, Mac-wise or otherwise. Whatever he says is true, and that's that. You'd be wise to weigh his arguments above the ordinary.

    He has most certainly done programming on the Mac, but maybe not in his profession.



    I agree with him completely. I've used my 2x2 G5 for over a year now and there is stuff I do that I most certainly want to go faster: rendering 3D and video, and applying filters to pictures in Photoshop. 80 MB is a pretty small picture; try something that's closer to 1 GB. I would find a 4 or even 10 GHz computer more useful than the one I'm using right now, so I for one am hoping that we haven't hit the roof of performance scaling.
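    To put that point about file sizes in perspective, here is the rough arithmetic for uncompressed image memory (an illustration only; Photoshop's real footprint also includes layers, masks, and history states):

```python
def image_megabytes(width_px, height_px, channels=3, bytes_per_channel=1):
    # Uncompressed pixel data only: width x height x channels x bytes/channel.
    return width_px * height_px * channels * bytes_per_channel / 2**20

# Roughly the 80 MB costume file mentioned earlier (8-bit RGB):
print(round(image_megabytes(6000, 4800)))          # ~82 MB

# A poster-size 16-bit CMYK scan gets close to the 1 GB described above:
print(round(image_megabytes(12000, 10000, 4, 2)))  # ~916 MB
```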
  • Reply 50 of 74
    zapchudzapchud Posts: 844member
    Quote:

    Originally posted by The Woodman

    I still maintain that Programmer really has no credibility when he says that he is designing "hopelessly CPU bound" software that the Mac can't handle, when he doesn't even "work on the Mac" to begin with. I haven't done so much programming that I would call myself "Programmer," but I have used Vectorworks/Renderworks (a CAD program for the Mac), and it runs very fast. And although Photoshop is an itsy-bitsy little desktop app (yes, even with my micro 80MB Halloweanie file), it also runs very fast. I haven't done any C++ compiling on the Mac, but neither has Programmer.





    Now you are out on a limb.



    Compilation of applications on other platforms isn't that much different from compilation on a Mac. You can safely assume that the Mac is "hopelessly CPU bound" running the applications he's designing if the same app is "hopelessly CPU bound" on whatever platform he is designing it on. And by the way, how do you know he has never done any C++ compiling on the Mac? He's only said that he doesn't design the apps he's talking about on the Mac at work. If you had a history on these boards, you would have seen some code that he wrote, possibly on the Mac, but at the very least compiled on the Mac.



    Enough defending other posters for now ...
  • Reply 51 of 74
    pb (Posts: 4,255, member)
    Quote:

    Originally posted by The Woodman

    If you have experience compiling C++ on a fast Mac and it is slow, well then there is a data point. I haven't done this, but have you?



    I have done it. On a lowly 867 MHz G4. It took more than 24 hours. Now if I had your system, it would take at most 4-5 hours and I would be much happier. But why not reduce that to 1-2 hours, or even less? Speed is relative here. And it does not matter whether it is a Mac or a PC.
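    Those numbers are roughly consistent with naive scaling (a back-of-envelope sketch, not a benchmark; real builds scale sub-linearly because of I/O and serial link steps):

```python
# Back-of-envelope build-time estimate: scale a measured single-core
# build time by the clock-speed ratio and, optimistically, by core
# count. Treat the result as a lower bound, not a prediction.

def estimated_build_hours(measured_hours, old_mhz, new_mhz, new_cores=1):
    """Naive linear scaling of a measured build time."""
    return measured_hours * (old_mhz / new_mhz) / new_cores

# 24 hours on an 867 MHz G4, moved to a dual 2.5 GHz G5:
print(round(estimated_build_hours(24, 867, 2500, new_cores=2), 1))  # 4.2
```

    That lands right in the 4-5 hour range quoted above, which is why the estimate is plausible even before any benchmarking.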
  • Reply 52 of 74
    Quote:

    Originally posted by The Woodman

    I still maintain that Programmer really has no credibility when he says that he is designing "hopelessly CPU bound" software that the Mac can't handle, when he doesn't even "work on the Mac" to begin with. I haven't done so much programming that I would call myself "Programmer," but I have used Vectorworks/Renderworks (a CAD program for the Mac), and it runs very fast. And although Photoshop is an itsy bitsy little desktop app (yes, even with my micro 80MB Halloweanie file), it also runs very fast. I haven't done any C++ compiling on the Mac, but neither has Programmer.





    Heh, you're quickly blowing any hope of your own credibility out of the water. First of all, I said I don't currently work on the Mac. I did years ago, but now I "play" on the Mac, and have for nigh on 20 years now.



    Second, the fact that you are running CAD programs with what you consider complex drawings is completely irrelevant. I run Calculator and it is really, really, really fast, but that doesn't mean that people don't need a Mac that is 10+ times faster than what is currently available.



    There is so much CPU-bound software out there, if you bother to look, that by saying faster machines aren't needed all you're doing is flaunting your ignorance. Games are a very good example: they have been ramping up their processor requirements to match what is currently available since the first computer game was created, and there is no end in sight.



    Any number of pro users on this board and many others will tell you that Photoshop (or whatever other media software) isn't fast enough. Some operations will lag behind real-time, some will take minutes to run... and since time is money the pros want it in real-time or want it in seconds instead of minutes. Running a 3D render can take hours on the current machines (instead of days on the previous ones), and if a new machine drops that to minutes then that just means the scene can be made more complex.





    Really you're in good company though. 640K RAM will be enough for anyone, after all.
  • Reply 53 of 74
    amorph (Posts: 7,112, member)
    Quote:

    Originally posted by wizard69

    This is exactly my point, he sees his computer as fast when in reality it isn't.



    No, it's just applying a definition of "fast" that is meaningless. I can easily think of an application for which all computer hardware into the foreseeable future will be effectively worthless; that collapses the fastest supercomputer and an old Altair into the same category. Does that mean that the truth is that all computer hardware is worthless? No.



    A computer is a tool. If it gets what you're actually doing done fast, then it's a fast tool. You can't separate performance from real-world usage, and you can only consider real-world usage on a case-by-case basis. The subway is a fast way to get around NYC, even if it's slow relative to the speed of light in a vacuum.



    Quote:

    It isn't fast relative to any cluster and may not be fast at all with respect to similar i86 hardware.



    To indulge in your fascination with abstract cases for a moment: There are cases for which a cluster of 1,000,000 G5s would be no faster or better than 1. So, again, this is meaningless. You can't separate performance from use, and all instances of use are discrete and concrete.
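    The cluster point can be made concrete with Amdahl's law: if a fraction s of a job is inherently serial, no number of machines helps beyond 1/s (a minimal illustrative sketch; the million-G5 figure comes from the post above):

```python
# Amdahl's law: speedup on n processors when a fraction s of the work
# is inherently serial. As n grows, the speedup approaches 1/s; a fully
# serial job (s = 1) gains nothing from a million-node cluster.

def amdahl_speedup(serial_fraction, n_processors):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

print(amdahl_speedup(1.0, 1_000_000))   # fully serial job: speedup is 1.0
print(round(amdahl_speedup(0.05, 1_000_000)))  # 5% serial: capped near 20x
```

    Which is exactly why performance only means something relative to a concrete workload and how well it parallelizes.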



    Quote:

    Since the biggest stress the poster has put on his machine was designing Halloween costumes, I'm not sure that the processors ever get warmed up on his machine.



    Or, in other words, he has every reason on Earth to consider his machine to be fast.



    Quote:

    Exactly this is what I want to see Apple do. That is throw hardware at the problem in a machine that doesn't cost an arm and a leg.



    [The hope that software will be better optimized for want of MHz increases] is an outlook filled with negativity and false hopes. It is almost always cheaper to throw hardware at the problem.




    But, again, throwing hardware at the problem doesn't guarantee anything other than stasis, because the problem keeps up with the hardware (in MS' case, again, artificially - but that doesn't matter to people who need to run Office).



    I don't consider it pessimistic or negative to expect people to do their jobs or take pride in their work, but that isn't the point. The point is that speed increases are not absolutely tied to hardware that keeps getting faster without changing design-wise.



    Quote:

    Beyond the cost issue, though, there are a huge number of applications that have been optimized to death already and simply need more horsepower.



    But that horsepower doesn't necessarily come from MHz. The clusters you're so fond of require the sort of ingenuity in software that you're so pessimistic about in order to do much of anything efficiently. And this was my whole point in the argument you labelled negative: The ways in which hardware will get faster require software engineers to not be lazy. Maybe that will change. But for now, the consensus is clear.



    Quote:

    If IBM, Intel & AMD can't deliver this sort of performance increase then the industry is headed for a technological dark age.



    Now who's negative and pessimistic?



    Quote:

    As it is it looks like Intel has run the course of being a viable processor development company, I'd hate to see IBM fall in the same way that Intel has.



    ?!?!?!?



    We're not headed for a technological dark age. We're headed toward a technological sea change. It will require some paradigm shifts, and some work, to change, but since clockspeed scaling looks to be dead this is the way forward. A crisis is just an opportunity.



    And Intel is not finished as a processor development company. They made a couple of very expensive mistakes, but they're so huge that they can afford to. And with the Pentium M, they recovered quickly. If you mean that they're finished as a developer of enormous, superhot, superfast single core CPUs, perhaps. And that day couldn't come soon enough, in my humble opinion. That's not the way the future lies.



    Quote:

    Apple doesn't need to push the adoption of PCI-Express; the market has already made the decision.



    So all the PCI Express products are... where?



    That's what I meant. When there is a market for PCI Express products, it will make sense to release a machine with PCI Express. Right now, there's what? A couple of AGP video cards with a bridge chipset, that can run perfectly well in AGP 8x? And there's still an immense, high-end legacy in PCI and AGP. So Apple ships PCI and AGP. This is not difficult.



    Quote:

    That is exactly what they will have to do. The professional base has made it very clear that alternative hardware will be looked at.



    And this "alternative hardware" includes other machines in Apple's line, because the days when a "professional machine" meant a tower are long since gone. The machine only has to be fast and well-featured enough to do the job. It doesn't have to be fast relative to the largest problem set that one can possibly imagine. eMacs run the latest version of QuarkXPress just fine.



    And, oh, by the way, PCI is declining. It has been for years now.



    Quote:

    PCI-Express is a whole different ball game. In a sense it can be seen in a similar light to the adoption of FireWire in Apple's hardware. FireWire is similar to USB and yet not close at all, much the same as PCI-Express is similar to PCI/AGP and yet not the same at all. Like FW, PCI-Express will provide the opportunity for new applications and hardware that work in ways the competing interfaces wouldn't allow.



    So does FW800. The principal difference between PCI and PCI Express is bandwidth, and a lot of common problems simply don't require that much. Those that do, or will, will eventually shift to PCI Express. They haven't yet (the early solutions are bridged). When enough of them have, Apple will roll out a machine with PCI Express. I'm sure they're watching it keenly, because 16x bandwidth both ways to and from a video card is more interesting to them than to any other systems vendor; they're the only ones seriously using the GPU. But you don't roll it out before it makes sense to roll it out. Pragmatically speaking: If Apple switched to PCI-E right now, what video cards would they ship right now in all their configurations? Will doesn't sell machines. Does sells machines.



    Quote:

    Frankly the only other choice Apple has is to get a GPU manufacturer to adopt the HT bus.



    That doesn't make any sense. Apple will adopt whatever the GPU vendors standardize on in shipping products. To do otherwise simply doesn't make any sense.
  • Reply 54 of 74
    airsluf (Posts: 1,861, member)
    Kickaha and Amorph couldn't moderate themselves out of a paper bag. Abdicate responsibility and succumb to idiocy. Two years of letting a member make personal attacks against others, then stepping aside when someone won't put up with it. Not only that, but they go ahead and shut down my posting privileges but not the one making the attacks. Not even the common decency to abide by their warning (after three days of absorbing personal attacks with no mods in sight); they just shut my posting down and then say it might happen later if a certain line is crossed. The bullshit flag is flying. I won't abide the lying and the coddling of liars who go off-site, create accounts differing in a single letter from my handle with the express purpose to deceive, and then claim here that I did it. Everyone be warned: kim kap sol is a lying, deceitful poster.



    Now I guess they should have banned me rather than just shut off posting privileges, because Kickaha and Amorph definitely aren't going to like being called to task when they thought they had it all ignored *cough* *cough* I mean under control. Just a couple o' tools.



    Don't worry, as soon as my work resetting my posts is done I'll disappear forever.

  • Reply 55 of 74
    rickag (Posts: 1,626, member)
    I agree with AirSluf. It's pointless to argue whether we will need faster computers or whether current computers are fast enough. Of course we'll need faster computers; it's a given.



    I wish IBM would release a 970fx using strained silicon-on-insulator, bump the friggin' speed 10-30% with lower power usage, and settle these arguments, er, um, for a while at least.
  • Reply 56 of 74
    wizard69 (Posts: 13,377, member)
    Quote:

    Originally posted by The Woodman

    I still maintain that Programmer really has no credibility when he says that he is designing "hopelessly CPU bound" software that the Mac can't handle, when he doesn't even "work on the Mac" to begin with.



    I don't know the man myself, other than through postings here, but I have to say he is far more credible, far more often, than the majority of the people here. Further, if a program is CPU bound on one processor it is very likely to be CPU bound on another. The 970 is a mixed bag as far as processors go; at times it is a very poor performer relative to the i86 world. What you see as good performance is a combination of not stressing the machine and the experience of making the big performance jump from your old machine. The underlying hardware is not that fast.

    Quote:



    I haven't done so much programming that I would call myself "Programmer," but I have used Vectorworks/Renderworks (a CAD program for the Mac), and it runs very fast. And although Photoshop is an itsy bitsy little desktop app (yes, even with my micro 80MB Halloweanie file), it also runs very fast. I haven't done any C++ compiling on the Mac, but neither has Programmer.



    What difference does it make whether he has or hasn't compiled a C++ program on the Mac? There simply are no machines made today that do that in a reasonable amount of time.

    Quote:



    As for the iMac G5, this is an entirely different beast and really, off topic. Apple isn't trying to make it the fastest machine in the world... far from it. In fact, they made it the same upright angle as the iPod sitting in a dock (in my opinion) to leverage the iPod's popularity in their advertisements. But who cares? It is a cool machine that works great for the intended iPod-user-web-surfer audience. I prefer the Dual G5. For those of us that actually own and USE these machines, they are pretty remarkable.



    Well, the iMac is a G5 that unfortunately was throttled a bit more than many would have liked. Either machine can be remarkable if it meets user needs. The whole point we are trying to make is that the G5 Tower does not have the performance to meet everybody's needs.

    Quote:



    Wizard has more interesting points. If you have experience compiling C++ on a fast Mac and it is slow, well, then there is a data point. I haven't done this, but have you? I have yet to see many Java apps run that swimmingly on any platform, but again, kinda off topic.



    So anything that does not support your point of view is off topic? That is really too bad, as you just clearly reinforced the points that everybody was trying to make. As fast as they are, the CURRENT towers do not deliver all the performance required for today's applications. We can run around all day looking for specific examples, but that serves little purpose if those examples are off topic.

    Quote:



    As for CAD software, again, have you actually used this software on a fast Mac? It sounds like you are generalizing what you THINK will happen. Use a good CAD program on a fast dual G5. You won't be complaining about the speed, even with really complex drawings.



    Gaming is another beast entirely. Yes, games are not as fast on the Mac. If you are a big gamer, you would buy the platform for which programmers optimize their code. Enough said.



    Again you rush to our aid to support the position that the Towers aren't all that fast. Instead of calling this one off topic, though, you blame the issue on optimization. But that is no more valid than calling Java programs off topic; for whatever reason, the Mac does not work well here either. What you have to realize is that part of the problem is the Mac itself and the associated operating system and its drivers. OS X's graphics drivers have been known for their slowness for some time now.

    Quote:



    Apple's current lineup is pretty impressive. Sure, the PowerMacs are huge, expensive blow dryers, but saying they aren't fast or that Apple isn't good about bringing new technologies to market is simply untrue (um... liquid cooling?).



    The response is accurate and certainly balanced with respect to your ramblings. Sure, there are things that a PowerMac does fairly well, but there are very few things that it is fastest at.



    As to bringing new technology to the market, the issue that has been pointed out repeatedly is that it is not a good time to be buying hardware if you can avoid doing so. The big issue is PCI-Express and the transition to that interface. It is not a question of being good at bringing the technology to market (AMD & partners have yet to release PCI-Express hardware); the issue is that it is coming, hopefully in a couple of months. So why saddle oneself with AGP technology if a short wait will allow investment in technology that will be upgradeable a couple of years down the road?



    Now it is completely possible that Apple will miss PCI-Express on the next rev of the Towers. If that is the case, people should really evaluate their continued loyalty to the hardware.

    Quote:

    Then, sit in front of a 30" display with four million pixels and unbelievable brightness and contrast. Anyway, I would rather wait a generation of machines than have Apple (or any other company) cram new technology down my throat before it is ready for prime time.



    Again your attitude is misplaced. It is the market demanding PCI-Express that is pushing the changes. Apple can go on and on about parts shortages, but the reality is that some of the markets Apple was focused on have already jumped ship to higher-performance hardware.

    Quote:



    I appreciate the great computing tools available to me today, from Apple and other companies. My initial complaint was that posters in this forum are always waiting for the next technology right around the corner before they buy. Ten years from now, there will still be a corner and I will have had two or three revs of machines that I have enjoyed along the way.



    I've been trying to be rather nice, but it has gotten to the point where I think you are dense in the extreme!!! I've stated repeatedly that the current PowerMacs are not a good investment. One of the reasons for that position is that I believe PCI-Express is just around the corner. Sure, I could be wrong, but is waiting two months for the next rev all that much of a problem? It would be a different story if this were the beginning of the year, but what you suggest is like buying water skis just before the lakes freeze up for the winter. Hey, those skis may be OK if you can get a huge discount, but when you are paying the normal retail price you have to wonder whether it makes sense to spend the money on that day.



    Beyond that there are other problems with the Tower that I believe that Apple will have to address sooner or later. The more people that hold off buying the machine the more likely some of these fixes will take place sooner.

    Quote:



    As for my Halloween costume, I am going as an iPod, complete with functioning headphones (amplified speakers) and backlight. Disparage it if you must, but it is going to rule. Oops. That is really off topic.



    I thought Amorph brought up some really good points.



    Yep he does do that from time to time.



    dave
  • Reply 57 of 74
    wizard69 (Posts: 13,377, member)
    Quote:

    Originally posted by Amorph

    No, it's just applying a definition of "fast" that is meaningless.



    OK, what is the difference? In either case he is mistaken.

    Quote:

    I can easily think of an application for which all computer hardware into the foreseeable future will be effectively worthless; that collapses the fastest supercomputer and an old Altair into the same category. Does that mean that the truth is that all computer hardware is worthless? No.



    No one has said anything about any computer being worthless. What was being challenged was the original poster's position that the PowerMac is fast. Further, the idea that no one would need a faster computer is troublesome.

    Quote:



    A computer is a tool. If it gets what you're actually doing done fast, then it's a fast tool. You can't separate performance from real-world usage, and you can only consider real-world usage on a case-by-case basis. The subway is a fast way to get around NYC, even if it's slow relative to the speed of light in a vacuum.



    The problem is that we have somebody judging a computer as fast based on very short-term exposure to the hardware. Everything seems fast when you make a generational change in technology. The jet transport seems fast to those who were used to travelling on prop planes.



    Quote:

    To indulge in your fascination with abstract cases for a moment: There are cases for which a cluster of 1,000,000 G5s would be no faster or better than 1. So, again, this is meaningless. You can't separate performance from use, and all instances of use are discrete and concrete.



    That is very much a possibility, as there are all sorts of problems where a cluster would do nothing for the solution.



    What we can do is separate performance judgments from individuals who have, at best, an awkward sense of how people use the hardware. It is useless to take information from somebody until you understand their context. I'm sure you can find people who would claim that an old Commodore 64 is fast for what they do. Such a position would be garbage, of course, as it is pretty much accepted that the C64 is not, and never was, fast.



    I may go out on a limb here, but you almost have to separate the concept of performance from the user's perception.



    Quote:





    Or, in other words, he has every reason on Earth to consider his machine to be fast.



    No, not really. He has the right to think that his machine is getting things done faster than he has ever been able to before, but that is not the same thing as saying his machine is fast. To say it is fast we need to know relative to what; in this case, a very old Mac. I'm sorry, but if somebody comes on this board and claims his machine is fast relative to a five-year-old machine, it really doesn't mean much to me.

    Quote:





    But, again, throwing hardware at the problem doesn't guarantee anything other than stasis, because the problem keeps up with the hardware (in MS' case, again, artificially - but that doesn't matter to people who need to run Office).



    I don't consider it pessimistic or negative to expect people to do their jobs or take pride in their work, but that isn't the point. The point is that speed increases are not absolutely tied to hardware that keeps getting faster without changing design-wise.



    Well, if I implied that things need to get faster without design changes then that was a mistake, though I'm not sure I implied that. What I do believe is that we cannot rely on one avenue of performance growth in the future, and that one of the avenues that must be pursued is single-thread performance improvement.

    Quote:

    But that horsepower doesn't necessarily come from MHz. The clusters you're so fond of require the sort of ingenuity in software that you're so pessimistic about in order to do much of anything efficiently. And this was my whole point in the argument you labelled negative: The ways in which hardware will get faster require software engineers to not be lazy. Maybe that will change. But for now, the consensus is clear.



    Well, horsepower, so to speak, doesn't have to come from MHz, but on the other hand I believe we still have a way to go there.



    I'm not pessimistic at all about multiprocessor systems; it is just that I don't believe we need to drop all other attempts at processor improvement. As great as the potential of multiprocessors is, there is still a legitimate need for single-thread performance increases.



    Like clusters, there are some problems that do not break down well even for SMP machines.

    Quote:





    Now who's negative and pessimistic?







    ?!?!?!?



    We're not headed for a technological dark age. We're headed toward a technological sea change. It will require some paradigm shifts, and some work, to change, but since clockspeed scaling looks to be dead this is the way forward. A crisis is just an opportunity.



    Well, this is certainly open for discussion. At this time, though, I will say that SMP is the wave of the future. It is, however, an extremely limited solution, at least on the desktop.

    Quote:



    And Intel is not finished as a processor development company. They made a couple of very expensive mistakes, but they're so huge that they can afford to. And with the Pentium M, they recovered quickly. If you mean that they're finished as a developer of enormous, superhot, superfast single core CPUs, perhaps. And that day couldn't come soon enough, in my humble opinion. That's not the way the future lies.



    There is still a need to continue to extract performance increases out of single cores. Intel's problem is that they need to do a lot of work to deliver a dual core that is thermally responsible. It is here that I believe IBM does have an advantage, due to core size. At some point, though, additional cores will not be possible, so we will be back to extracting more performance out of the cores we can put on a chip.



    Moving to SMP is a very good thing, the problem is that it appears that the industry is doing it for the wrong reason.

    Quote:







    So all the PCI Express products are... where?



    That's what I meant. When there is a market for PCI Express products, it will make sense to release a machine with PCI Express. Right now, there's what? A couple of AGP video cards with a bridge chipset, that can run perfectly well in AGP 8x? And there's still an immense, high-end legacy in PCI and AGP. So Apple ships PCI and AGP. This is not difficult.



    I do believe that the people who are making this difficult are the ones reading my posts. The point I'm trying to make is that it is a poor time to be buying a PowerMac if you expect them to have a PCI-Express version out in a couple of months. I certainly could be wrong here with respect to timing but I believe Apple has no choice but to switch over as soon as possible.



    Again though PCI-Express does not imply the elimination of the other PCI slots.

    Quote:



    And this "alternative hardware" includes other machines in Apple's line, because the days when a "professional machine" meant a tower are long since gone. The machine only has to be fast and well-featured enough to do the job.



    Many times those features do include the facilities of a Tower. But I guess that depends on what you call a professional machine.

    Quote:

    It doesn't have to be fast relative to the largest problem set that one can possibly imagine. eMacs run the latest version of QuarkXPress just fine.



    Yes, the eMac is one of Apple's success stories. That does not mean, though, that it would be acceptable to a large number of people based on its speed and responsiveness.

    Quote:



    And, oh, by the way, PCI is declining. It has been for years now.



    Interesting thought. It makes me wonder why almost every i86 motherboard has at least a couple of slots.



    Quote:



    So does FW800. The principle difference between PCI and PCI Express is bandwidth, and a lot of common problems simply don't require that much. Those that do, or will, will eventually shift to PCI Express. They haven't yet (the early solutions are bridged). When enough of them have, Apple will roll out a machine with PCI Express.



    You grossly oversimplify the differences between PCI & PCI-Express. That, however, would be a whole new thread.

    Quote:

    I'm sure they're watching it keenly, because 16x bandwidth both ways to and from a video card is more interesting to them than to any other systems vendor—they're the only ones seriously using the GPU. But you don't roll it out before it makes sense to roll it out. Pragmatically speaking: If Apple switched to PCI-E right now, what video cards would they ship right now in all their configurations? Will doesn't sell machines. Does sells machines.



    Well, whatever they ship will be better than the 5200 that they must have made a bulk purchase on.

    Quote:



    That doesn't make any sense. Apple will adopt whatever the GPU vendors standardize on in shipping products. To do otherwise simply doesn't make any sense.



    No, what I was thinking of here is that ATI and, I believe, nVidia are working on north bridges with integrated GPUs and HyperTransport interfaces. If somebody could manage a PPC processor with an HT interface, that would make a very interesting Mac. This is pure imagination, as I don't believe either ATI or nVidia has the chipsets ready. They will, however, make for extremely interesting AMD64 machines.



    All I was really attempting here was to show that there are alternatives outside of PCI-Express.



    dave
  • Reply 59 of 74
    Quote:

    Originally posted by AirSluf

    Go on, give it another kick.



    Okay.







    Amorph, whether his machine is blindingly fast for the tasks he uses it for isn't the issue... it is that he generalizes that into stating that current computers are fast enough. That's the kind of attitude that would still have us picking berries while watching cautiously for roaming lions.





    Quote:

    Wizard69:

    Moving to SMP is a very good thing, the problem is that it appears that the industry is doing it for the wrong reason.



    It doesn't appear that way to me. Multi-core designs have been the inevitable direction for a long time; it was just a matter of time before the engineering trade-offs made it more effective to go that way than to continue pushing forward with single-core designs. I know from our previous discussions that you don't believe that IBM and Intel have really run into a frequency limit, but it seems clear to me that the design costs of high-frequency operation now make it less attractive to use conventional clock-rate scaling than to go MP, and it will remain so for at least the foreseeable future.



    Other alternatives exist, however, and since the "easy" scaling has run into trouble I think we'll see more investment in them. Asynchronous designs (Sun is doing work there), multiple-clock designs (à la the P4's double-speed integer units), and specialized deep-pipeline vector ALU cores (IBM/Sony's Cell) will all have parts running at higher speeds... but it's going to be a different ballgame than the exponential clock-rate march we've been on until now.
  • Reply 60 of 74
    wizard69 (Posts: 13,377, member)
    Quote:

    Originally posted by Programmer

    Amorph, whether his machine is blindingly fast for the tasks he uses it for isn't the issue... it is that he generalizes that into stating that current computers are fast enough. That's the kind of attitude that would still have us picking berries while watching cautiously for roaming lions.



    Whoa, that in a nutshell is exactly what the problem is; it is a shame I couldn't have delivered a similar response. It does smack a bit of Luddite-like thinking.



    Quote:



    It doesn't appear that way to me. Multi-core designs have been the inevitable direction for a long time, it was just a matter of time before the engineering trade-offs made it more effective to go that way than to continue pushing forward with single core designs.



    Yes, having the extra die area to implement another full 64-bit processor is a great advantage of the current process sizes, and it is an excellent trade-off for certain systems. The problem is that at a given feature size the trade-off is a one-time deal. Once you have filled the die to its maximum economical size, the only recourse left is to go back to enhancing the cores themselves.



    That doesn't mean that higher clock rates are the only avenue to increasing performance, but let's be honest: it is pretty easy relative to building a whole new core. Freescale/Motorola has done this for a long time, making minor tweaks to the core to help scale operating frequency.



    IBM itself has avoided major changes to POWER going from 4 to 5. In effect they added SMT, which in retrospect must have been thought about when POWER4 was developed, and made some other tweaks.

    Quote:

    I know from our previous discussions that you don't believe that IBM and Intel have really run into a frequency limit, but it seems clear to me that the design costs of high-frequency operation now make it less attractive to use conventional clock rate scaling than to go MP, and it will remain so for at least the foreseeable future.



    First off, I do not believe that IBM and Intel have the same problems; similar maybe, but that does not make them the same at all. IBM's problem is that the 970 runs exceptionally hot for the number of transistors it has and its operating clock rate. If they can address this, then we may be able to determine where they stand with respect to maximum clock rate. In any event they got a 500MHz improvement over the old clock rate where Intel got little at all.



    As to 2X SMP, I'm all for it; I'm not sure how the impression got started that I wasn't. But like I alluded to above, after you have two cores on the die it is back to scaling frequency, in part, to improve performance in an economical manner. The other part being improvements to the cores themselves.



    I don't know myself where the break point is for core count with the current 970 FSB arrangement. They might be able to go to three cores, maybe not. There is a point beyond which it would be impractical to add more at the current feature size and with the current FSB limitations.
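    The diminishing returns of piling cores onto a shared FSB can be sketched with Amdahl's law: if only part of the workload parallelizes (and a shared bus effectively shrinks that part), extra cores flatten out fast. The 80% figure below is an illustrative assumption, not a measurement of any 970 system:

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's-law upper bound on speedup with n_cores.

    parallel_fraction: share of the work that can run in parallel (0..1).
    """
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

# With an assumed 80% parallel workload:
#   2 cores -> ~1.67x, 4 cores -> 2.5x, 8 cores -> ~3.33x
# so the third and fourth cores buy far less than the second did,
# even before FSB contention makes things worse.
```

The design choice this illustrates: once the second core has captured most of the easy gain, per-core improvements and clock scaling become the economical path again, which is the point being argued above.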

    Quote:



    Other alternatives exist, however, and since the "easy" scaling has run into trouble I think we'll see more investment in them. Asynchronous designs (Sun is doing work there),



    I thought I saw something recently about ARM partnering with somebody on such designs. The problem with PPC is that you are not likely to see such a change in the processor for a very long time. Of course that is based on my belief that both Freescale and IBM have a ways to go with current hardware.

    Quote:

    multiple clock designs (à la the P4's double-speed integer units), or specialized deep-pipeline vector ALU cores (IBM/Sony's Cell) will all have parts running at higher speeds... but it's going to be a different ballgame than the exponential clock rate march we've been on until now.



    Well at least you are reasonable and are not chanting that all clock rate increases are forever dead. This is what I have problems with: people see IBM and Intel having problems and all of a sudden the whole industry is saddled with the same problems.



    Dave