A look at the July Power Macs now that we know the Xserve specs


Comments

  • Reply 141 of 238
    ssendam Posts: 19 member
    To the dude who said that Carbon was slow and everything would be much faster once it was all Cocoaified:

    Look at OmniWeb... While it's a pretty nice browser, it is by no means a speed demon; try opening five or six web pages with it and see how it handles them.



    To the other dude who dismissed Fight Club:

    Your loss. It's a great movie, by virtually any metric.
  • Reply 142 of 238
    bodhi Posts: 1,424 member
    Fight Club is a very cool movie. Not Shakespeare, but it's not trying to be...
  • Reply 143 of 238
    lemon bon bon Posts: 2,383 member
    I liked Fight Club.



    I liked theMagius's post.



    I think he said it all.



    Sure. You can blow £3K on a 'Power'mac and think it's very fast... well, if you're living in the Mac pocket universe where your last computer was a 200MHz Mac clone. Yep. Then the dual 1GHz may actually seem fast. But if you've used any AMD-based machine from the past year with even a few hundred megs of RAM, then the dual 1GHz Power Mac seems too little, too late, overpriced and none too impressive. You may even get the distinct feeling Apple is ripping you off, charging obscene amounts of dosh for hardware that was out of date last year.



    'Power'mac sales are obviously falling within the context of the larger industry performance picture, i.e. Apple's 'Power'mac monopoly powerbase is shrinking. And until they offer a compelling upgrade, it will likely continue to do so.



    ('Whoops, the 'Power'mac has been lapped three times by the Intel Pentium 4.') Signal for more 'Power'mac users to abandon ship. Perhaps some professionals look not just at whether it's fast compared to their last Mac but at whether it's value for money compared to what they can get elsewhere. Windows XP is getting increasingly bearable.



    I don't think Mac users need to be lectured on what they actually use their machines for. I like Mac OS X. I presume most people who post here do too and would just like Apple to push the boat out more than 'a little'. 'X'? It isn't worth £3000 for 'X' on an out-of-date machine. Well, not to me it isn't.



    I don't see the dual 1GHz G4 beating my 1.6GHz Athlon. The AMD seemed faster to me.



    I expect better from a 'Power'mac three times the price with no monitor included.



    I like 'X' but its retail value is less than £100! I guess these things are subjective.



    Part of me would like to believe it's worth paying £3000 for. It's not. I want competitive kit to run it on. Not a warmed-over G3.



    I want a 'Power'mac worthy of the name. In my view, the current one isn't. Hence the stagnating sales (oops, sorry, Apple, must be the 'economy'... oh... and the millions awaiting the 'PS7' upgrade...). The point is Apple are constantly stiffing their users with 'old' specs.



    iBook with an 8MB graphics card, anyone?



    The dual gig is far from impressive in my eyes.



    You're paying one heck of a premium for performance that, most of the time, is effectively just 1GHz.



    Fraoch anyone?



    Lemon Bon Bon









    [ 05-17-2002: Message edited by: Lemon Bon Bon ]
  • Reply 144 of 238
    keyboardf12 Posts: 1,379 member
    fight club was an awesome movie. you should rent it this weekend.
  • Reply 145 of 238
    sonnys Posts: 4 member
    I did not say that a Cocoa application will automatically be faster than its Carbon counterpart. I read in an article that one of the biggest reasons even the "fastest" Cocoa browsers are still slower than PC browsers isn't the Mac hardware they run on -- it's that the programmers aren't fully versed in all the appropriate events, etc., that need to be used to maximize performance.



    It makes sense that Windows apps will be faster than their Mac counterparts -- look how long Windows has been around compared to OS X. OS X is brand new and it will take some time for apps to get better and faster as programmers learn new programming techniques to obtain better performance. Patience.



    Now about the gigahertz crap. I have no doubt that an AMD Athlon system can wipe the floor with a PowerMac. I've seen plenty of benchmarks and comparisons. So what's new about that? The PC world has always been able to "wipe the floor" with the Macs, or vice versa, depending on whose benchmarks you believe. But, more to the point, who cares? If all that mattered was raw numbers, we'd all be working on Athlon systems. Obviously we all don't, and if you feel you are getting cheated or ripped off by Apple, then please do not purchase Apple products. Stick with your superior, faster, much more competitive PC/Windows product.



    As for me, I am perfectly content with my G4 500MP. I am a print and web designer, routinely working with images that are many megabytes huge, and I don't have any performance problems... Not only that, but with OS X I can reliably burn CD-Rs and check email in the background without any perceptible loss of performance. Overall, my productivity has gone up since moving to OS X, even though I haven't put a dime into faster hardware.



    The increased productivity afforded by any particular hardware/OS combo is the ultimate measure of a system's performance. It has nothing to do with gigahertz, megabytes, or anything else.



    Case in point: I plugged my PowerMac into a Linksys wireless router and it ran without a hitch. I started sending out proposals and proofs via email almost immediately. But installing a matching wireless network card on my roommate's multi-gigahertz PC took over 2 hours of debugging, reinstallation, and a phone call to Linksys technical support before it would work. So you see, all those gigahertz don't mean squat if they save you a few seconds here and there, but then you waste several hours at a time trying to do what should be very simple and straightforward.



    The ultimate measure is productivity and smoothness of workflow. Period.



    Also, to those people who are lambasting Apple's Xserve because it may not have the latest and greatest specs: I watched the video of the introduction and saw them pull 500 DSL-quality QuickTime streams off the server with < 50% CPU utilization. Call me a moron, but that looked pretty darned impressive for a single box. Because servers are more oriented towards throughput, bandwidth, and responding to a multitude of client requests, CPU and processing power isn't the most important feature. It's the overall system performance, which Apple seems to have nailed.
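
    For a sense of scale, here's a rough back-of-envelope sketch of what that demo implies. Only the 500-stream figure comes from the keynote; the per-stream bitrate is my own assumption for "DSL-quality" QuickTime.

    [code]
    # Back-of-envelope estimate of the aggregate bandwidth implied by the demo.
    # The per-stream bitrate is an assumption (~400 kbps for "DSL-quality");
    # only the 500-stream figure comes from the keynote.
    streams = 500
    bitrate_kbps = 400                              # assumed per-stream bitrate

    aggregate_mbps = streams * bitrate_kbps / 1000  # megabits per second
    aggregate_MBps = aggregate_mbps / 8             # megabytes per second

    print(f"~{aggregate_mbps:.0f} Mbit/s (~{aggregate_MBps:.0f} MB/s) served off one box")
    # -> roughly 200 Mbit/s (~25 MB/s) sustained, with half the CPU left over
    [/code]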



    Your Mac, no matter how much or how little money you've spent on it, is only a tool. It's not a measure of your intelligence or self worth, so let's stop investing so much of ourselves in the fate of Apple and its computers. Use the tool that helps you accomplish the most, and move on.
  • Reply 146 of 238
    [quote] The ultimate measure is productivity and smoothness of workflow. Period. [/quote]



    Well said. Macs should not be compared with PCs, but that doesn't mean I don't want Apple to make faster Macs. A computer can never be too fast.
  • Reply 147 of 238
    detah Posts: 57 member
    [quote] The ultimate measure is productivity and smoothness of workflow. Period. [/quote]



    generally, applications offered on both platforms are identical. who requires colorful doohickeys in order to be productive?



    windows is rigid precisely because of its neglect of aesthetics in favor of productivity/efficiency. aqua was hardly created with efficiency in mind; style, rather.



    now, with that in mind and considering the speed gap which would also tip the scale of productivity, i find your arguments baseless.



    please refrain from conflating style with "ease of use" from this point forward.
  • Reply 148 of 238
    My argument was simply that Final Cut Pro is only available on the Mac and it is a much better program than Premiere. That is pretty straightforward and has nothing to do with style or "ease of use".
  • Reply 149 of 238
    [quote] I don't get my sense of economics from Hollywood films, and since I saw nothing about Fight Club to recommend it, I ignored it. [/quote]



    Heh. Well, I guess everyone's taste in movies is different. Allow me to clarify.



    My point in quoting portions of "Fight Club" was not to convince you that movie economics are accurate. My point was that the main character in the film works for a company that pretty much decides EVERYTHING (including loss of life) based on the underlying cost-impact it will have on their company.



    Although Apple Computer does not seem NEARLY this drastic, it is reasonable to assume that they make the majority of their decisions with cost-impact as one of the more prominent deciding factors. Thus, if a 23% loss in PowerMac sales can be effectively offset with profit increases from the iMac and iBook lines, then there is no IMMEDIATE need to focus on the PowerMac line. Does Apple have concerns about the lagging sales? Well, I would HOPE so, but money is money.



    Now, imagine if those numbers went to, say, 50% losses for PowerMac sales. Would that be enough for Apple to place its IMMEDIATE focus on the PowerMac line? I can't say for sure, but I'm willing to bet that SOMETHING radical would be done to the design to try to compensate for lost sales.



    Again, it's all just my speculation. I don't work for Apple, and I'm not an economist.



    [quote] What part of "quality" implies that they should use an implementation that isn't plausible?

    They've eked an astonishing amount of performance out of their current architecture, far more than most - if not all - PC motherboards ever got out of their single-pumped busses. If they switched to DDR, and the net result was a single-digit performance improvement sometimes, Apple would be laughed off the boards. They implemented DDR for their servers, despite the 133MHz MaxBus, because it made sense to have all that extra memory bandwidth for I/O; now that they have a memory controller that can use DDR efficiently, they'll implement DDR in the towers when they have a processor that supports it. To claim that they're holding back because the iMac is selling well is just strange. [/quote]



    I see what you're saying, but there's a big "IF" in your third sentence. What happens if they spend the money and resources to implement a DDR-based bus and it yields double-digit performance improvements? Wouldn't it be worth the product focus then?



    [quote] Might as well ask another question: Is processor speed the only item of concern for you? Or hardware speed, for that matter? Do you mean real-world performance or numbers on spec sheets? [/quote]



    Please don't misunderstand me. I do almost all of my (creative) work on my 350MHz G3. There's pretty much NOTHING that Apple offers today that wouldn't beat the snot out of my current setup. My contention IS NOT that Apple doesn't make fast hardware. My contention is that Apple's PowerMac line is not necessarily built on the most powerful/current hardware available, and thus does not warrant the cost that Apple Computer is charging the consumer.



    [quote] In software, which has the same relationship to marketing that the hardware guys do. Plus a great many of my friends and coworkers are hardware guys. The "more with less" doctrine is especially critical in hardware, especially where clock speed is concerned, because high clock speeds introduce a double handful of complications to motherboard design. But it's not uncommon in software, either. In fact, on the programming newsgroups, someone with a loaded, up-to-the-minute rig is most likely an abject newbie, and his uncapitalized "how do i clear the screen in c k thx bye" is probably going to be handled by someone on an 8500. [/quote]



    I, too, work in software. Specifically, I have worked as a software engineer, software test engineer, and systems engineer. My experience has been MOSTLY as a technical contractor, but (of course) I have worked as a permanent employee for several large firms.



    My experience--which, arguably, may be different from yours--is that engineering firms are concerned, FIRST, about their stock prices. They will do ALMOST anything (manipulate earnings numbers, lay off employees, and eliminate company-wide raises) to make their quarterly figures look good enough to promote stockholder investment. SECONDLY, they are concerned with the needs of the customer. The convergence of these two (I believe) is where the "more with less" doctrine comes in. If a company can save money by being more efficient, it will do so. My personal belief is that Apple Computer does this (partially) by furnishing their products with older technology that can be purchased for less.



    [quote] Sorry, but what you said is that you wanted something faster. If Dell, Gateway and HP ship those... so what? The theoretical speed advantage they offer is irrelevant, and they offer no increase in storage capacity, unlike ATA/133. So we're back to real performance vs. theoretical numbers. [/quote]



    Well, I guess we're just going to have to "agree to disagree" on this one. If you believe that an extra 33MB/s to 66MB/s of data transfer capability does not contribute to a "noticeable" increase in speed, then I'm not sure I can convince you otherwise.



    [quote] Whether there is or not is moot if there isn't the hardware to provide those bursts.

    Also, I doubt that there is. There is software that benefits from it, of course, which is why people buy big SCSI RAIDs like the ones that I linked to, but nothing I've heard of that requires it. Real-time video, perhaps? Off to MacGurus you go, because you're not going to get it from your ATA bus. [/quote]



    From what I've been taught in my A+ classes, the advantage of a SCSI chain is NOT so much that it is inherently faster at data transfer than an ATA drive (as they both move data to the HD in un-sustained bursts). The advantage is that multiple devices (e.g. HD1, HD2, SCANNER, CDROM, DVDROM, etc.) on the same SCSI chain can SIMULTANEOUSLY transmit/receive data. Whereas, an ATA bus can only communicate with ONE of its devices at a time (e.g. HD1 or CDROM on the same ATA controller, but NOT both at the same time).



    I guess I don't see how SCSI comes into play, since my (personal) beef is with Apple supplying ATA/66 controllers with their PowerMacs instead of something more advanced.



    [quote] Maybe a fraction of a second here or there reading relatively small files from RAM cache (which OS X does to some extent anyway by caching files in main RAM, bypassing the drive bus altogether), but nothing like the 33% increase in theoretical bandwidth. You won't get that until sustained reads from ATA drives get faster (what's a sustained read from an ATA drive? Well, copy a 1GB file on an optimized disk. There you go.). [/quote]
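
    For what it's worth, here's roughly what that "copy a 1GB file" test looks like in practice -- a minimal sketch, with the scratch-file path and size as placeholders, not anything official:

    [code]
    # Rough sketch of measuring sustained read throughput: stream a large file
    # from disk and divide its size by the elapsed time. The path is a placeholder;
    # use a file big enough (~1GB) that the drive's small RAM cache can't hide it.
    import os
    import time

    PATH = "/tmp/bigfile.bin"      # assumed ~1GB scratch file created beforehand
    CHUNK = 8 * 1024 * 1024        # read in 8MB chunks

    size = os.path.getsize(PATH)
    start = time.time()
    with open(PATH, "rb") as f:
        while f.read(CHUNK):
            pass
    elapsed = time.time() - start

    print(f"{size / elapsed / 1e6:.1f} MB/s sustained read")
    # A burst out of the drive's cache can touch the bus ceiling (66/100/133 MB/s);
    # a sustained read like this usually lands nearer 30-40 MB/s on a 2002-era drive.
    [/code]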



    Perhaps I need a hardware engineer to explain this to me, but my understanding was that every ATA drive has a maximum data transfer limit. This limit can be reached for brief periods of time, but cannot be sustained for all data transfers. How would disk optimization have any effect for a prolonged period of time unless a defragmentation program is executed after every disk write?



    [quote] With almost 100% certainty. Any software that failed to run without being able to access an ATA HDD at that rate (i.e., which required that bandwidth) would find itself with a vanishingly small market. People who really need that kind of speed fork out for SCSI RAIDs. That's where your read/write rates of 100MB/s are, and you'll pay dearly for them.

    It's possible that Apple will introduce something like the ATA RAID technology on the XServe, although I'll bet they have two channels instead of four if they do. Even then, people who want real speed will get a SCSI RAID or an XServe RAID (when those appear). If disk performance is critical, you don't under any circumstance rely on a single ATA drive to provide it. It's just never done. [/quote]



    Well, I guess this must be the "marketing ploy" of the century, then. Hell, if I could get thousands of computer users to buy a product that promises better performance (e.g. ATA/133) without actually gaining anything noticeable over the "old, cheaper product" (e.g. ATA/66), I'd be a millionaire!



    Now, I don't mean any disrespect. You're the moderator on this board, and I'm just some guy who came onto the website and tossed forth his opinion.



    If you could post a hyperlink to some test results (PC/MAC) where a faster HD controller and drive interface DIDN'T yield better performance, I'd be more than happy to humble up and eat crow.



    Drawing another proverbial line in the sand,

    -theMagius



    [ 05-18-2002: Message edited by: theMagius ]
  • Reply 150 of 238
    bodhi Posts: 1,424 member
    After some thought and reading some of the posts in this thread, I could not help but think that maybe, just maybe... Apple spends too much money on "design" R&D and not enough on "architecture" R&D. They seem to hit the nail on the head almost every time in a design sense, but seem to always be 4-5 steps behind on technological R&D. Granted this does not apply to FireWire or AirPort; it applies to 66MHz-bus iBooks. 133MHz-bus Power Macs. I mean, how many years has Apple known that the G4 could not work with a bus faster than 133MHz? Years, people. Is it Apple's responsibility if it's Mot's processor? You betcha! Don't think that Apple doesn't contribute to Mot R&D costs. It's 2002 and there still isn't a fix. I blame both Apple & Mot. And Apple... please... Ive is great but there are other more pressing issues.



    [ 05-18-2002: Message edited by: Bodhi ]
  • Reply 151 of 238
    Hi,

    I think that Apple will show something completely new, because it would be too easy for the whole world to base their predictions on the Xserve model.

    Regards

    Torsten,

    Germany
  • Reply 152 of 238
    amorph Posts: 7,112 member
    [quote]Originally posted by G-News: Denying that ATA133 is faster than ATA 66 or 100 is just stupid. The differences are small for most tasks, but it's not like Maxtor just put a "133" sticker on their ATA 66 drives. [/quote]



    I said exactly that. I also noted that since OS X, like a good UNIX, tends to cache files in main RAM, the size of the HD cache and/or the speed of the ATA bus is less relevant.



    [quote] However, the real reason why we're still stuck at ATA/66 in the towers is that we're still using the same southbridge since the G4 Sawtooth was released.

    And southbridges usually don't get changed without changing the whole rest of the mainboard too. [/quote]



    Precisely. In the mean time, ATA/66 works just fine.
  • Reply 153 of 238
    mugwump Posts: 233 member
    I think we are forgetting about last year's New York show. Such hype from Apple, and nothing much unveiled. Won't everyone be disappointed if it's just speed-bumped desktops with a drop in price and only DDR RAM?



    Maybe, but I'm not sure that the company is too concerned. They keep raising the bar with their own software (Jagwire, iDVD, Final Cut Pro, DVD Studio Pro, iPhoto, iTunes) so that current users on G3s are being motivated to upgrade no matter what.



    Let's get some common sense. They just unveiled the new Apollo chips, and you now want new versions again to support a 266MHz bus in July, and then a G5 for MWSF? Get real. SF is the big show, and what will they do for that one if you get the improved mobo and case in July?



    Get ready for disappointment, because AI members often are. I suspect it will be a 20% speed bump with DDR RAM, possibly with a price drop as well. That and Jagwire.



    Then again, I could be wrong. I wish we knew for certain, though, so I could plan a purchase.
  • Reply 154 of 238
    amorph Posts: 7,112 member
    theMagius wrote:



    [quote] Although Apple Computer does not seem NEARLY this drastic, it is reasonable to assume that they make the majority of their decisions with cost-impact as one of the more prominent deciding factors. Thus, if a 23% loss in PowerMac sales can be effectively offset with profit increases from the iMac and iBook lines, then there is no IMMEDIATE need to focus on the PowerMac line. Does Apple have concerns about the lagging sales? Well, I would HOPE so, but money is money. [/quote]



    Apple's profit margin on the LCD iMac is smaller than on previous iMacs, and it shrank from what they expected. iBooks are getting sold in huge transactions with very slim profit margins per machine. They make their money off the professional lines, and in particular from the people who are willing to pay for the absolute fastest model. You can be sure that the drop in PowerMac sales has hurt Apple's bottom line badly, and they'd have to sell a whole lot of iMacs and iBooks to make up the difference.



    [quote] Now, imagine if those numbers went to, say, 50% losses for PowerMac sales. Would that be enough for Apple to place its IMMEDIATE focus on the PowerMac line? I can't say for sure, but I'm willing to bet that SOMETHING radical would be done to the design to try to compensate for lost sales. [/quote]



    This, and the rest of your post, are based on the assumption that Apple is not focused on its PowerMac line. I see no basis for that. The fastest PowerPC they can get their hands on doesn't support DDR. Their southbridge's ATA controller might not be bleeding edge anymore, but there's no point updating it until there's a lot to update, especially since the advantage of updating it is small to negligible. So they have dedicated a lot of work to getting insane amounts of efficiency out of what they do have. That looks like focus to me. No matter how hard their engineers concentrate on something, they can't pull things out of thin air.



    There have been murmurings of a good, fast DDR motherboard for some time now. I would be frankly shocked if Apple didn't have several in process. But again, if your fastest processor doesn't support DDR in its front side bus, well, what can you do?



    Still, if you feel it will help, go ahead and tell Apple that you're not going to buy a PowerMac, and tell them why. I'd be surprised if it spurs them to any greater action than they're already undertaking, because I see no reason to believe that they aren't working hard already.



    Most people were skeptical that two 1GHz G4s could be fed well at all by a 133MHz bus, but they are. Not optimally, obviously, but better than anyone predicted. If Apple hardware does things that people don't think it should be capable of doing, that tells me their engineers are hard at work. Even if the result doesn't have the biggest numbers on its spec sheet.
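
    To put some rough numbers on that -- this assumes the 64-bit MPX/MaxBus width, and the PC2100 figure is only there for comparison:

    [code]
    # Rough numbers behind "can a 133MHz bus feed two 1GHz G4s?".
    # Assumes the 64-bit MPX/MaxBus front-side bus; PC2100 DDR shown for comparison.
    bus_width_bytes = 8                       # 64-bit bus
    fsb_mhz = 133

    fsb_peak = bus_width_bytes * fsb_mhz      # ~1064 MB/s, shared by both CPUs
    pc2100_peak = bus_width_bytes * 133 * 2   # ~2128 MB/s peak for DDR266 memory

    print(f"133MHz FSB peak: ~{fsb_peak} MB/s for the pair of processors")
    print(f"PC2100 DDR peak: ~{pc2100_peak} MB/s -- bandwidth the FSB couldn't pass through anyway")
    [/code]

    Which is also why DDR main memory on its own buys the towers so little: the front-side bus, not the DIMMs, is the narrow part of the pipe.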



    [edit: more]



    [quote] Well, I guess we're just going to have to "agree to disagree" on this one. If you believe that an extra 33MB/s to 66MB/s of data transfer capability does not contribute to a "noticeable" increase in speed, then I'm not sure I can convince you otherwise. [/quote]



    It's available bandwidth. If you have a two-lane highway that principally handles traffic from a couple of country roads, it can probably do all right. If you widen it to six lanes, it's theoretically capable of handling much greater amounts of traffic, but in practice there's just those country roads feeding into it. So the four extra lanes buy you... nothing.



    That's an extreme example, but it illustrates the point: ATA/100 is only an improvement on ATA/66 in those circumstances when there's actually a transfer rate greater than 66MB/s. Most of the time, the drive can muster 40MB/s at best. So the benefit of the faster bus is restricted to reads off the HDD's (comparatively small) RAM cache.
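
    A quick sketch of that arithmetic. The drive figures are assumptions typical of a 7200RPM ATA drive of the era, not specs for any particular model:

    [code]
    # Why a faster ATA bus rarely shows up in practice: the effective rate is the
    # minimum of what the drive can deliver and what the bus can carry.
    # Drive numbers are assumed, typical of 2002-era 7200RPM ATA drives.
    bus_ata66 = 66          # MB/s theoretical ceiling
    bus_ata100 = 100        # MB/s
    drive_sustained = 40    # MB/s assumed sustained media rate
    drive_burst = 100       # MB/s assumed burst out of the drive's RAM cache

    print(min(drive_sustained, bus_ata66), min(drive_sustained, bus_ata100))
    # -> 40 40: identical for sustained transfers, so no visible difference
    print(min(drive_burst, bus_ata66), min(drive_burst, bus_ata100))
    # -> 66 100: only short bursts from the drive's cache ever see the wider bus
    [/code]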



    [quote] From what I've been taught in my A+ classes, the advantage of a SCSI chain is NOT so much that it is inherently faster at data transfer than an ATA drive (as they both move data to the HD in un-sustained bursts). [/quote]



    Well, no. Bandwidth is bandwidth. SCSI has the advantage you list, plus the ability to schedule reads and writes out of order to minimize the amount of seeking the drive heads have to do, plus the ability to do all of this without leaning on the CPU. Also, there is the simple fact that you can get 10k and 15k SCSI HDDs, but no such things exist for ATA. So besides being a more autonomous and efficient bus, SCSI has the (potential) advantage of faster drives, and thus higher data transfer rates.
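
    A toy illustration of the out-of-order scheduling point -- roughly the kind of reordering tagged command queuing allows. The request addresses and head position are invented for the example:

    [code]
    # Toy model: total head travel for pending requests served in arrival order
    # versus an elevator-style sweep. Numbers are made up for illustration.
    requests = [9500, 120, 8700, 300, 9900, 50]   # pending block addresses
    head = 5000                                   # current head position

    def total_seek(order, start):
        pos, dist = start, 0
        for blk in order:
            dist += abs(blk - pos)
            pos = blk
        return dist

    fifo = total_seek(requests, head)             # serve strictly in arrival order
    up = sorted(b for b in requests if b >= head)                    # sweep upward...
    down = sorted((b for b in requests if b < head), reverse=True)   # ...then back down
    elevator = total_seek(up + down, head)

    print(f"Arrival order:  {fifo} blocks of head travel")
    print(f"Elevator sweep: {elevator} blocks of head travel")
    # The reordered sweep covers a fraction of the distance, so far less time is
    # lost to seeks -- without any change in raw bus bandwidth.
    [/code]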



    [quote] I guess I don't see how SCSI comes into play, since my (personal) beef is with Apple supplying ATA/66 controllers with their PowerMacs instead of something more advanced. [/quote]



    I was addressing the numbers you were throwing around (100MB/s), since you seemed to think that an ATA controller would be sufficient to provide them. These are the sort of numbers that, if needed, are almost always provided by a big, expensive SCSI RAID. It's a third-party solution available to PowerMac owners who need that kind of speed badly enough to plunk down $8k or more. Apple has provided (and might still provide - I haven't checked) high-end PowerMacs with Ultra160 SCSI cards and twin 10k drives as BTO options, so their solution seems to be to recognize that ATA (and specifically ATA/66) works well enough for the majority of cases, and those people who need more can either get Apple's BTO SCSI solution or a third party's.



    Apple shipping PowerMacs with ATA/100 controllers, or ATA/133 for that matter, wouldn't significantly change this situation unless Apple pulled an XServe and set up multiple controllers, each attached to a single drive, and the whole system configured as a RAID (I wouldn't put this past them). In this case, again, the difference between ATA/66 and ATA/100 would be measurable, but nowhere even close to what the 33% gain in available bandwidth implies, because each controller is only handling a single, dense (and therefore relatively slow) 7200RPM drive.
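
    Rough numbers on the one-controller-per-drive layout. The per-drive rate is an assumption; the four-channel arrangement follows the XServe setup mentioned above:

    [code]
    # Why one controller per drive sidesteps the bus question: each channel only
    # ever carries one drive's sustained rate. Per-drive rate is assumed.
    channels = 4
    drive_sustained = 40    # MB/s per 7200RPM drive, assumed

    per_channel = drive_sustained           # one drive per channel
    aggregate = channels * drive_sustained  # striped across the set

    print(f"Per channel: ~{per_channel} MB/s (well under even ATA/66's 66 MB/s ceiling)")
    print(f"Across the stripe: ~{aggregate} MB/s aggregate")
    [/code]

    So the gain comes from the number of spindles, not from the ATA/66-vs-100 label on each channel.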



    [ 05-18-2002: Message edited by: Amorph ]
  • Reply 155 of 238
    lemon bon bon Posts: 2,383 member
    Agreed Bodhi.



    Lemon Bon Bon
  • Reply 156 of 238
    lemon bon bon Posts: 2,383 member
    "Maybe Apple wanted something that was more than a single-core, 10-stage-pipelined CPU that was little more than a MHz-bumped G4? I didn't hear of any additional FPUs.

    But this 'G5' would have the benefits of an improved mobo, which basically gives us a bumped Apollo (which is clocking higher than Moto expected...) in a RapidIO setup. This reduces the need for the G5 that Moto envisaged replacing the Apollo, i.e. it's cheaper to just push the G4 on if you've found it has legs. (Look at what Intel have done to push their chips over the last few years, i.e. bolting things on and engineering more MHz out of them when they were supposed to 'hit the wall' years ago.)



    Apollo, a chip that was to enable the G4 to get to just over 1GHz. Maybe Moto has found it can clock higher, so their original G5 needed to be re-thought, cancelled or improved, i.e. was it worth the cost of developing a new chip when there were few advantages over the old one?



    Whether Moto gets the contract for the successor to the G4 or IBM gets it... Well, nobody seems to know for sure. But the chip we think of as the original 'G5' may have had its chips, mate. Perhaps what we want the 'G5' to be had to be redesigned to be accepted by the market, i.e. more than merely MHz: more FPU, more integer, more SIMD, multi-core and architectural improvements. Apple and Moto and even IBM haven't had an easy ride on the CPU front over the last few years. Perhaps we can expect the next 'true' next-gen processor to be something special.



    ...if Motorola had this G5, then why isn't it here now? Perhaps Moto's vision of the G5 was the same 'cheap shot' gimmick that took us from G3 to G4? I.e. the G5 would have been a G4 but with more MHz and a ten-stage pipeline (i.e. a really stretched G3). Which... is... er... what Apollo will be this summer at Macworld New York..? (minus the ten-stage pipeline?) But at San Fran, maybe the G4 pipeline will be stretched to 10 stages with RapidIO and then be what the G5 would have been anyway.



    Surely Apple would ship the fastest chip they could instead of making do with shrinking PowerMac sales and a G4 that has clearly struggled to keep pace with the x86 competition.



    I'm beginning to think the '7500' chip will be nothing but an Apollo+ with a RapidIO architecture, clocked at 1.2-1.6GHz (if all we see at Macworld New York is a 1.2GHz Apollo with a DDR mobo). This is effectively the G5 chip, by proxy, that Moto would have shipped. It's not THE G5 that would have shipped... but it's effectively the same in all but name. And they didn't have to waste money developing the same product twice.



    I heard from a poster called Motoman, who used to post here, that the G5 was going to be single-core.



    That being the case... if all Moto's G5 was... some glorified G4, i.e. a longer pipeline with no real benefit... then what we have is... a bumped Apollo à la Macworld New York.



    Maybe IBM bid with a multi-core G3 with its own SIMD unit, which would more than take on Moto's version of the 'G5'.



    Take away the AltiVec unit and, outside of the embedded market, the G4 doesn't seem that impressive to me.



    Maybe Apple wanted more, and hence the delay for any G5-type chip worthy of the mantle.



    With the Maya and digital video markets becoming more important to Apple, it's realistic to assume that, long term, Apple knows the CPU and performance issue has to be addressed...



    We'll have to see if the chips Apple has coming in the next half year deliver compelling performance reasons to make me get out the wallet.



    I'm dying here without a POWERmac.



    However, it seems more and more to me that unless Apple have something we just don't expect up their sleeve, there are going to be some disappointed people not only at Macworld New York but at San Fran next year also.



    The true 'next gen' processor from Apple/Moto/IBM may not be with us until Macworld New York 2003. With a 64-bit version to follow that...



    Lemon Bon Bon"



    Had these thoughts in the 'nixing' thread. Thought I'd add more broth to our masturbatory Apple/Moto' kickings.



    Spare boot anyone?







    [ 05-18-2002: Message edited by: Lemon Bon Bon ]
  • Reply 157 of 238
    sorry guys, but this thread is starting to get very amusing.



    there are people who are bitching that apple isn't taking powermac R&D seriously today - and they have no clue what apple has planned for MWNY.



    and there are people who are bitching that current powermacs are slower than current pcs - and they are still working with a 300MHz G3.



    just imagine: "i would like to buy a new powermac - it would be 3 times faster than my old G3. but then, no. a P4 would be 3.5 times faster. so i guess i'm better off with my old slow mac until powermacs get cheaper."



    sorry, if you care about a fast computer then think about buying a new powermac - or a new pc, for god's sake. but stop complaining about this "apple is slower than pc today" when you're comfortable with your old G3 and do not want a new computer ...



    and, btw, do not count on being taken seriously!



    chris

    iBook 366 - will buy a new powermac after MWNY - and will love it :o )
  • Reply 158 of 238
    sonnys Posts: 4 member
    Like I said, my G4 500MP system works just great for what I do... high-end print design / web publishing. My work would directly benefit from higher memory bandwidth and faster processors, but the 500MP speeds along at an acceptable pace -- I never find that I'm waiting on anything.



    People, there's a danger to becoming psychologically tied to your computer's speed rating. It's just like anything else that gets updated on a regular basis -- cars, fashion, whatever -- you will always suffer an inherent feeling of inferiority if you don't own the latest and greatest. Companies exploit this psychological weakness to keep themselves in business. If you unplug your emotional processor from your computer's central processor, you'll find the hidden reality: a computer, like a car, is simply a vehicle designed to get you from point A to point B.



    Computers are tools, folks. Buy the best tool that is available and be content that it does the job you purchased it for. When it stops doing that job to your satisfaction, upgrade it as necessary or buy a new one.



    I chuckle when I read about people getting mad that PowerMacs don't have superior specs to PCs. These are people who just like to complain, who are never happy with what they have, and who are playing p*ssing games with their buddies over whose system can put out more frames in Quake.



    Detach yourselves for a moment and look at the situation realistically. The PC world has Intel and AMD, each an incredible corporate powerhouse in its own right, working on hardware innovation. Then there's Microsoft, which is practically a small country in terms of employees and economic power, working on the OS side of the equation. Then there's Apple, which is a tiny fraction of the size of the above, developing both hardware *and* software.



    Having the fastest hardware is not a prerequisite to running the best software. Many years ago there was this computer called the Amiga. It did with 7 MHz of hardware what the Mac and PC are only now approaching with GHz of hardware. The key is the OS and underlying technologies, and this is what Apple is correctly focusing on.



    Apple's products, whether they are the fastest or not, have one thing that no PC or Windows product has ever had: a soul.
  • Reply 159 of 238
    pfflam Posts: 5,053 member
    The point is not about wanting the fastest now -- though I do want the fastest now



    the point is whether Apple will be able to survive with such consistently slow progress on the speed-bump/hardware-update front.



    survive, or better yet flourish... after all, I own stock and want to make enough from it to buy the FASTEST when it comes out



    ...it would be ironic if my Apple stock ended up purchasing a PC
  • Reply 160 of 238
    steves Posts: 108 member
    [quote]Originally posted by crayz: Are you blind? I specifically said "Q3A is not one of the 'toughest games' either"

    And no, I am not all that impressed by the most expensive and powerful Mac ever created barely breaking the 200FPS barrier in Q3 only when using a tweaked config file. [/quote]



    1) The original poster did not name a specific game; you did. You named Q3 and RTCW (which happens to be built on the Q3 engine). RTCW might be a little more demanding, but it's built off the same generation of technology. It's not like comparing to a game based on generations-old technology like the original Doom, etc.



    2) It is commonly acceptable to do minor tweaks that in no way impair the image or sound quality of the game. Likewise, www.xlr8yourmac.com changing the chunk size is no different from PC Gamer disabling vertical syncing. In fact, many web sites don't even report such tweaks as part of their testing. Anyone familiar with the Q3 engine will perform similar tweaks and this is commonly understood.



    3) The fact that you're not impressed by Macs breaking the 200fps barrier only shows your troll-like PC bias. Seeing as it takes a 1.8-2.0GHz P4 with a GeForce3 Ti or higher to achieve the same thing, most would credit Macs as being pretty decent gaming systems. I didn't hear a claim that Macs were better or faster in this area. They are powerful enough, though.



    [quote]Originally posted by crayz: Tom's has an Athlon XP 2000+ (a much cheaper system than a DP 1GHz) running Windows XP getting nearly 220FPS at 1024x768. That's the exact same card and the exact same game run with the exact same settings, and it's getting over 35FPS more than the much more expensive PowerMac. That means that the PowerMac - either the CPU, the OS, or some other part of the system (like the AGP bus) - is limiting the ability of the Ti4600 to do its job - BECAUSE THE MAC IS JUST TOO SLOW. [/quote]



    First, who said anything about cost? This is a common argument for PC trolls and is irrelevant to the discussion at hand.



    Second, yes, here's a link from Tom's Hardware which illustrates your point:



    http://www6.tomshardware.com/graphic/02q2/020409/geforce4ti4200-10.html



    Notice that the same Athlon with a GeForce4 Ti 4200 only gets 199.7 fps at 1024x768. The Mac gets 182.9 fps at that resolution. Is the Mac version of the GeForce4 Ti based on the 4600 or the 4200? I don't know. Either way, performance is very much in the same ballpark. Yes, the Athlon system is a little faster. One thing is for certain: no end user would be able to tell the performance difference between 182.9 fps and something over 200 fps. This is a fact. Macs are nowhere near as slow as you suggest.
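
    Just to put the gap in per-frame terms, using simple arithmetic on the two numbers above:

    [code]
    # Frame-time view of the same benchmark numbers quoted above.
    mac_fps = 182.9
    athlon_fps = 199.7

    mac_ms = 1000 / mac_fps        # ~5.47 ms per frame
    athlon_ms = 1000 / athlon_fps  # ~5.01 ms per frame

    print(f"Mac: {mac_ms:.2f} ms/frame, Athlon: {athlon_ms:.2f} ms/frame")
    print(f"Gap: {mac_ms - athlon_ms:.2f} ms/frame ({athlon_fps / mac_fps - 1:.0%})")
    # ~0.46 ms per frame, about 9% -- both already far beyond any monitor's refresh rate.
    [/code]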



    Steve