New ATI Graphics Card... Will be?


Comments

  • Reply 21 of 59
    telomar Posts: 1,804 member
    Quote:

    Originally posted by onlooker

    Where does it say DOOM3 is coming to the Mac? I must have missed that.



    id Software has long released its games for all platforms, and John Carmack has stated his support for, and intention of, simultaneous releases on Mac OS X, Windows and Linux.

    Quote:

    Originally posted by onlooker

    Besides. Nvidia looks way better. Even the makers of DOOM3 say so.



    Strangely enough Valve, the makers of Half-Life 2, say ATi is the way to go. The two graphics card makers are much the same, but nVidia has long used driver optimisations to obtain better frame rates at the expense of picture quality in certain games. In terms of actual image quality ATi has almost always led the way, though with this next generation of cards the differences are hitting the limits of the noticeable.



    It's worth mentioning the GeForce FX was almost universally rated as inferior to ATi's products though.

    Quote:

    Originally posted by onlooker

    The new Nvidia 6800s are way ahead of that ATI map, and the 6800 is ready.



    The 6800 isn't ready. It won't be ready until late May, by which time ATi will be shipping their own offering. nVidia isn't way ahead, ATi isn't way ahead, and both are releasing some great products.
  • Reply 22 of 59
    concentricity
    Quote:

    Originally posted by onlooker

    Where does it say DOOM3 is coming to the Mac? I must have missed that.



    Besides. Nvidia looks way better. Even the makers of DOOM3 say so.

    They also recommend Nvidia GeForce FXs (which are not available on the Mac) to play the game on.



    The new Nvidia 6800s are way ahead of that ATI map, and the 6800 is ready. This is what it says at Tom's Hardware:







    LINK TO TOMS




    Am I the only one who noticed this?



    nVidia



    "Operating Systems



    Windows XP

    Windows ME

    Windows 2000

    Windows 9X

    Macintosh OS, including OS X

    Linux "
  • Reply 23 of 59
    mattyj Posts: 898 member
    Concentricity, the fact that it says the card is compatible with Mac OS means nothing. Sorry to have to tell you this. All the GeForce FX cards were Mac compatible, but we had none of the high performance versions, such as the FX 5950.



    I would think that this card will be made available as a BTO option on Power Macs in the near future.



    On a separate note, yes, Valve does say that Half-Life 2 runs better on Radeon cards; their E3 demo was run on 3 GHz P4s with Radeon 9800 Pros. ATi isn't far behind, and I would not be surprised if they surpass nVidia with their newest offering. nVidia is pumping all it can into these cards, while ATi's approach seems to be more efficient. From what I've heard, the newest ATi card with 10 or 12 texture pipelines runs games just as fast as the 16-pipeline 6800.
  • Reply 24 of 59
    zapchud Posts: 844 member
    And rumor has it that ATI will also release a 16 pipeline version of their card. There is no way one can say that the nVidia offerings are better than the ATI ones before the products are shipping. Or the other way around.
  • Reply 25 of 59
    g3pro Posts: 669 member
    Quote:

    Originally posted by cubist

    That NV40 is a total power hog: it takes two slots, it has a monster blower, and it needs independent power connections.



    Arty50's info is good. The PCI-Express is the new version, and if the Rev-B PowerMacs have a PCI-E slot on them (as some of the forthcoming Athlon64 cards will have), these boards will be very good performers at a reasonable price.






    Actually, the NV40 uses only a little more power (about 10 W) than the 5950 and the 9800 XT. The Asus version of the 6800 (and others) is single-slot, the 6800 non-Ultra needs just one Molex connector, and the connectors don't need to be on separate chains.



    so get your f***ing facts right before you post.
  • Reply 26 of 59
    g3pro Posts: 669 member
    Quote:

    Originally posted by Programmer

    I disagree -- nVidia's track record isn't significantly different, and competition is a good thing.





    nVidia's track record isn't significantly different? I find that claim troubling: OS X runs OpenGL only, and ATi has horrible OpenGL performance and drivers.
  • Reply 27 of 59
    g3pro Posts: 669 member
    Quote:

    Originally posted by mattyj

    On a separate note, yes, Valve does say that Half-Life 2 runs better on Radeon cards; their E3 demo was run on 3 GHz P4s with Radeon 9800 Pros. ATi isn't far behind, and I would not be surprised if they surpass nVidia with their newest offering. nVidia is pumping all it can into these cards, while ATi's approach seems to be more efficient. From what I've heard, the newest ATi card with 10 or 12 texture pipelines runs games just as fast as the 16-pipeline 6800.





    You've heard that the new ATi card is faster than the 6800? Give me a break, man. You haven't got a clue.



    1) There have been no benchmarks of the R420.



    2) ATi had to bump the R420XT's release schedule ahead by SIX MONTHS because they know their 12x1 card can't compete with a 16x1 part.



    3) The R420 does not support Shader Model 3.0 (Pixel Shader 3.0) and thus relies on very limited shaders (128 instructions vs. 65,535 on the NV40).



    4) The current direction of games is geared away from pure texturing power and towards programmability. The NV40 delivers absolutely massive programmability while the R420 does not.



    5) If you had cared to look at the NV40 benchmarks, you would see that games are massively CPU-limited: the NV40 gets the same frame rates at 10x7, 12x9 and 16x12 with AA and AF turned up high.
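
    To make the "CPU-limited" point concrete, here's a toy frame-time model. The per-frame costs are made-up numbers for illustration, not NV40 measurements: the frame takes as long as the slower of the CPU and the GPU, so while the CPU dominates, resolution barely moves the frame rate.

    ```c
    /* Toy model: frame time = max(CPU time, GPU time). While the CPU is
     * the bottleneck, raising the resolution doesn't change the fps.
     * All numbers are assumed/illustrative.
     */
    #include <stdio.h>

    int main(void)
    {
        double cpu_ms = 12.0;               /* assumed game logic + draw calls */
        double gpu_ms_per_mpixel = 3.0;     /* assumed fill/shading cost */
        struct { const char *name; double mpix; } res[] = {
            { "1024x768",  0.79 }, { "1280x960", 1.23 }, { "1600x1200", 1.92 },
        };

        for (int i = 0; i < 3; i++) {
            double gpu_ms = gpu_ms_per_mpixel * res[i].mpix;
            double frame  = cpu_ms > gpu_ms ? cpu_ms : gpu_ms;  /* bottleneck */
            printf("%-10s -> %.0f fps\n", res[i].name, 1000.0 / frame);
        }
        return 0;
    }
    ```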
  • Reply 28 of 59
    amorph Posts: 7,112 member
    Quote:

    Originally posted by g3pro

    nVidia's track record isn't significantly different? i find this troubling, as OS X runs OpenGL only where ATi has horrible performance and drivers for it.



    If you're getting your figures from the Windows side, they don't carry over. Everyone here was complaining about ATi in the Rage 128 era, before nVIDIA came on board, and once nVIDIA did come on board I never heard another peep about how bad ATi's drivers were.



    In fact, starting with the RADEON, ATi's Mac drivers have been consistently solid. nVIDIA's, by contrast, have been fairly problematic. Part of this comes from the fact that ATi engineers write ATi Mac drivers, while nVIDIA just ships Apple their hardware and their Windows drivers and it's up to Apple to make them work. This disequilibrium has probably been righting itself over time, slowly, but it's important to remember one thing about nVIDIA: They came to prominence by adopting and championing Microsoft's DirectX APIs, starting with 1.0, and their first and highest priority has always been maximum performance with DirectX under Windows.
  • Reply 29 of 59
    telomar Posts: 1,804 member
    Quote:

    Originally posted by g3pro

    You've heard that the new ATi card is faster than the 6800? Give me a break, man. You haven't got a clue.



    1) There have been no benchmarks of the R420.




    I find it amusing that you therefore sit here and proclaim nVidia's superiority without any evidence in your hands. Either you're a fan boy or you make irrational comments and decisions.



    There have been no benchmarks formally released to the public. That's different to no benchmarks. I'd be quite confident in ATi's cards matching nVidia's.



    Quote:

    Originally posted by g3pro

    2) ATi had to bump the R420XT's release schedule ahead by SIX MONTHS because they know their 12x1 card can't compete with a 16x1 part.



    nVidia responded to ATi's 12x1 card by dumping their original NV40 in favour of a 16x1 card, which is why you'll be waiting until June to get your hands on one. ATi heard about the 16x1 card and brought forward their own. Both companies pushed their products ahead.



    Quote:

    Originally posted by g3pro

    3) The R420 does not support Shader Model 3.0 (Pixel Shader 3.0) and thus relies on very limited shaders (128 instructions vs. 65,535 on the NV40).



    4) The current direction of games is geared away from pure texturing power and towards programmability. The NV40 delivers absolutely massive programmability while the R420 does not.




    This is true, but no game supports this level of programmability and no game likely will until next year, and that's being generous. To take your own example, Doom 3's engine, which will prove the basis of many games for quite some time, is aimed at Shader Model 2.0.



    The only real argument you could make would be future-proofing, and ATi will have products out later this year that support Shader Model 3.0.



    Quote:

    Originally posted by g3pro

    5) If you had cared to look at the NV40 benchmarks, you would see that games are massively CPU-limited: the NV40 gets the same frame rates at 10x7, 12x9 and 16x12 with AA and AF turned up high.



    Really? Just taking the link to Tom's from earlier in this thread, it's quite apparent that isn't true.
  • Reply 30 of 59
    g3pro Posts: 669 member
    Quote:

    Originally posted by Telomar

    I find it amusing that you therefore sit here and proclaim nVidia's superiority without any evidence in your hands. Either you're a fan boy or you make irrational comments and decisions.





    There have been no benchmarks formally released to the public. That's different to no benchmarks. I'd be quite confident in ATi's cards matching nVidia's.



    You expect a 12x1-pipeline card, based on a two-year-old design with very limited programmability and one shader operation per pipeline, to match a 16x1 card with almost unlimited programmability and two shader operations per pipeline? Hmmm, so 12 shader ops per clock = 32 shader ops per clock? I don't think so.
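
    Here's the arithmetic spelled out, as a back-of-the-envelope sketch. The clock speeds are the rumored figures floating around, not confirmed specs:

    ```c
    /* Peak shader throughput = pipelines x shader ops per pipe x core clock.
     * Assumed clocks: R420 Pro 12x1 at ~475 MHz, NV40 Ultra 16x1 at ~400 MHz.
     */
    #include <stdio.h>

    int main(void)
    {
        double r420_ops = 12 * 1 * 475e6;   /* 12 pipes, 1 op each  */
        double nv40_ops = 16 * 2 * 400e6;   /* 16 pipes, 2 ops each */

        printf("R420 peak: %4.1f Gops/s\n", r420_ops / 1e9);  /* ~5.7  */
        printf("NV40 peak: %4.1f Gops/s\n", nv40_ops / 1e9);  /* ~12.8 */
        return 0;
    }
    ```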





    Quote:

    nVidia responded to ATi's 12x1 card by dumping their original NV40 in favour of a 16x1 card, which is why you'll be waiting until June to get your hands on one. ATi heard about the 16x1 card and brought forward their own. Both companies pushed their products ahead.



    nVidia did not 'respond' to ATi's 12x1 card at all. If you had any knowledge of the NV40's design, you would understand that it has been in design for about three years. ATi has been playing it easy for the past two years by designing a 12x1-pipe card, thinking (as had been rumored) that nVidia would go with an 8x2 card. Realizing that nVidia was actually developing a 16x1 card, ATi went into 'oh s***' mode and bumped production of the R420XT ahead by five months just to compete with the NV40. ATi had intended to ship the 12x1 first and release the XT refresh much later.









    Quote:

    This is true, but no game supports this level of programmability and no game likely will until next year, and that's being generous. To take your own example, Doom 3's engine, which will prove the basis of many games for quite some time, is aimed at Shader Model 2.0.





    FarCry supports Pixel Shader 3.0 and it is already out. So do STALKER, Half-Life 2, LOTR: BFME, Splinter Cell X, and many more. I don't think you grasp what SM3.0 is capable of, especially when you consider that it took Crytek (the makers of Far Cry) three weeks to fully implement SM3.0 in FarCry. I didn't mention Doom 3 in this at all since it is OpenGL and does not use Shader Model designations.







    Quote:

    The only real argument you could make would be future-proofing, and ATi will have products out later this year that support Shader Model 3.0.



    ATi is going to release their R500 according to their established plan of one generation of cards every 18 months. nVidia has adopted the same plan.





    What's even better is that the cards sent to preview sites were A1 revisions of the NV40. nVidia is manufacturing A2 revisions capable of much, much higher clock speeds to further bury the 12x1-pipe R420. Better still is the A3 revision, the trick nVidia has up its sleeve: another refresh of the NV40 in case the R420XT manages to equal it.





    You should say that the R420XT has a chance of competing with the NV40, not the R420, for all the reasons specified above.
  • Reply 31 of 59
    onlooker Posts: 5,252 member
    Q = New ATI Graphics Card... Will be?



    A = Nothing compared to what NVIDIA released today.



    TRUTH = THE ABOVE STATEMENT IS A FACT.
  • Reply 32 of 59
    madmax559 Posts: 596 member
    I heard from a confidential source that the next NVidia card was going to be called the

    Super GEForce 95050++ Hyper-titanium Happy Extreme-platinum Ultra MK-II Enhanced V2.2 Omicron version.

    Keep your eyes open.







    as seen on /.
  • Reply 33 of 59
    g3pro Posts: 669 member
    Quote:

    Originally posted by madmax559

    I heard from a confidential source that the next NVidia card was going to be called the

    Super GEForce 95050++ Hyper-titanium Happy Extreme-platinum Ultra MK-II Enhanced V2.2 Omicron version.

    Keep your eyes open.







    as seen on /.




    It's actually going to be called the nVidia GeForce 6800, in two versions: Ultra and non-Ultra.



    The R420/R420XT will be called the ATi X800, in three versions: SE, Pro and XT (8x1, 12x1, 16x1).
  • Reply 34 of 59
    onlooker Posts: 5,252 member
    Quote:

    Originally posted by g3pro

    It's actually going to be called the nVidia GeForce 6800, in two versions: Ultra and non-Ultra.



    The R420/R420XT will be called the ATi X800, in three versions: SE, Pro and XT (8x1, 12x1, 16x1).






    No, actually I was talking about the NVIDIA Quadro FX 4000 series that came out today, which blows the 6800s away but is not intended for the average user the way the 6800 series is. Although the 6800 series does look faster than the new ATI cards.
  • Reply 35 of 59
    arty50 Posts: 201 member
    Quote:

    Originally posted by g3pro

    I didn't mention Doom 3 in this at all since it is OpenGL and does not use Shader Model designations.



    So then tell me this: why should I give two chits about SM3.0? You're hyping the hell out of this one feature of the card, and it's something that no Mac user will ever care about. Not now, not in 1-2 years, and probably not ever. Unless Apple magically decides to implement DirectX.



    99% of everything else will work on both cards. And perhaps the new ATI top of the line card will even come out ahead. Or maybe not. Let's wait for the benchmarks before declaring a winner.



    As for ATI deciding to release the 16-pipeline card ahead of time: why does that matter? The only way it affects them is that they can't milk the 12-pipeline card as long as they had hoped. Otherwise, I'm sure their offerings will be very competitive, and that's all consumers care about in the end.
  • Reply 36 of 59
    g3pro Posts: 669 member
    Quote:

    Originally posted by Arty50

    So then tell me this: why should I give two chits about SM3.0? You're hyping the hell out of this one feature of the card, and it's something that no Mac user will ever care about. Not now, not in 1-2 years, and probably not ever. Unless Apple magically decides to implement DirectX.



    99% of everything else will work on both cards. And perhaps the new ATI top of the line card will even come out ahead. Or maybe not. Let's wait for the benchmarks before declaring a winner.



    As for ATI deciding to release the 16-pipeline card ahead of time: why does that matter? The only way it affects them is that they can't milk the 12-pipeline card as long as they had hoped. Otherwise, I'm sure their offerings will be very competitive, and that's all consumers care about in the end.




    SM3.0 not important? It's the designation for a graphics card's ability to support near-unlimited shader instructions, and the SM3.0 hardware capability is exposed through the equivalent OpenGL capability. Two shader ops per pipeline is also extremely important: 32 ops vs 12 ops per clock? Which will win?
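
    For what it's worth, you can ask the OpenGL driver directly how many fragment-program instructions it exposes. A minimal sketch for Mac OS X, assuming the driver advertises ARB_fragment_program and using GLUT just to get a context:

    ```c
    /* Query the fragment-program instruction limit the driver exposes.
     * Build on Mac OS X with: gcc caps.c -framework GLUT -framework OpenGL
     */
    #include <stdio.h>
    #include <string.h>
    #include <GLUT/glut.h>
    #include <OpenGL/glext.h>

    int main(int argc, char **argv)
    {
        GLint max_instr = 0;

        glutInit(&argc, argv);          /* GL calls need a current context */
        glutCreateWindow("caps");

        if (!strstr((const char *)glGetString(GL_EXTENSIONS),
                    "GL_ARB_fragment_program")) {
            printf("no ARB_fragment_program support\n");
            return 1;
        }
        glGetProgramivARB(GL_FRAGMENT_PROGRAM_ARB,
                          GL_MAX_PROGRAM_INSTRUCTIONS_ARB, &max_instr);
        printf("max fragment program instructions: %d\n", (int)max_instr);
        return 0;
    }
    ```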



    ATi knows it can't beat the next-generation NV40 with their puny next-generation card, which only has 12 pipes. They knew that and they compensated; that is a sign of defeat. They will still be competitive, but the R420 is certainly not going to beat the NV40.
  • Reply 37 of 59
    screed Posts: 1,077 member
    "Puny"? Ahem. "X800PRO will beat NV40 Ultra" in performance if not in features -- from The Inquirer.



    "But Nvidia seems to be keeping some of its powder dry too. We hear about some faster NV40 chips that Nvidia is saving for later."



    Screed
  • Reply 38 of 59
    arty50 Posts: 201 member
    Quote:

    Originally posted by g3pro

    SM3.0 not important? It's the designation for a graphics card's ability to support near-unlimited shader instructions, and the SM3.0 hardware capability is exposed through the equivalent OpenGL capability. Two shader ops per pipeline is also extremely important: 32 ops vs 12 ops per clock? Which will win?



    That may be true, but until OpenGL can take advantage of the hardware it's meaningless.



    Quote:

    ATi knows it can't beat the next-generation NV40 with their puny next-generation card, which only has 12 pipes. They knew that and they compensated; that is a sign of defeat. They will still be competitive, but the R420 is certainly not going to beat the NV40.



    Sign of defeat, my arse. You're making it look like ATI pulled the 16-pipe card out of thin air. That's definitely not the case: ATI designed the R420 series for 16 pipes from the get-go. However, ATI expected nVidia to put out only a 12-pipe card in the near term, so their release plans called for a 12-pipe card this spring and a 16-pipe card a few months later. That way they could milk the 12-pipe card and its lower production cost for a while. nVidia spoiled those plans, and ATI had to ramp R420 production up to 16 pipes basically from the start. If the NV40 is designed to have more than 16 pipes and nVidia is sandbagging, then you have a point. But I don't think that's the case, so your point is meaningless from a technology perspective: both cards were designed from the start around a 16-pipeline architecture.



    Normally I don't get into crap like this, nor do I care which is better. Both are great companies, and a good friend of mine works for nVidia. But you're being a serious fanboy. The R420 isn't even out yet. Once it is and we have actual benchmarks to talk about, post back here. Otherwise it's all just baseless conjecture.
  • Reply 39 of 59
    booga Posts: 1,082 member
    Ah, the ol' nVidia vs. ATI debate. I've used both quite a bit in the PC world, and I was a huge nVidia fan from the TNT through the GeForce 4Ti, the last time they were definitely ahead. Then they tried to shrink their parts while ATI went the more practical route, and the 9x00 series ended up getting released 6 months before the comparable GeForce 5 series. Since then, nVidia's been playing serious catch-up. My housemate and I put them side-by-side, and there was really no comparison a year ago-- the ATI 9700 was clearly faster and higher quality at the more complex scenes than the best first-generation GF5FXs, and the 9700 wasn't even the top of ATI's line. However, nVidia's a pretty sharp company, and they're by no means out of it... I suspect the next generation will put them neck and neck again.



    Unfortunately, since this is a Mac board I have to point out that both chipsets and their drivers are heavily targeted at DirectX, with OpenGL an afterthought at best. Even if there were Mac versions of all the cards, I doubt Mac games would or could take full advantage of them anytime soon.
  • Reply 40 of 59
    g3pro Posts: 669 member
    Quote:

    Originally posted by Arty50

    Sign of defeat, my arse. You're making it look like ATI pulled the 16-pipe card out of thin air. That's definitely not the case: ATI designed the R420 series for 16 pipes from the get-go. However, ATI expected nVidia to put out only a 12-pipe card in the near term, so their release plans called for a 12-pipe card this spring and a 16-pipe card a few months later. That way they could milk the 12-pipe card and its lower production cost for a while. nVidia spoiled those plans, and ATI had to ramp R420 production up to 16 pipes basically from the start. If the NV40 is designed to have more than 16 pipes and nVidia is sandbagging, then you have a point. But I don't think that's the case, so your point is meaningless from a technology perspective: both cards were designed from the start around a 16-pipeline architecture.



    Normally I don't get into crap like this, nor do I care which is better. Both are great companies, and a good friend of mine works for nVidia. But you're being a serious fanboy. The R420 isn't even out yet. Once it is and we have actual benchmarks to talk about, post back here. Otherwise it's all just baseless conjecture.




    ATi did indeed design a 16x1 processor this whole time, but they intended to bin the actual 16x1 card out of volume production of the 12x1: dies where all 16 pipes work would become the 16x1 part, and dies with manufacturing defects that leave only 12 usable pipes would become the 12x1 part. The rumor this whole time was that nVidia would be producing an 8x2 part; nVidia shocked everyone with a 16x1 part.
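
    That binning scheme is easy to model. A toy yield calculation: the defect rate is a made-up number, and I'm assuming pipes get disabled a quad at a time, which is how these parts are said to be organized:

    ```c
    /* Toy model of pipe binning: dies with all four pixel-pipe quads
     * working sell as the 16x1 part; dies with exactly one bad quad get
     * that quad disabled and sell as the 12x1 part. Defect rate assumed.
     */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double p_pipe_bad = 0.03;             /* assumed per-pipe defect rate */
        double quad_ok    = pow(1.0 - p_pipe_bad, 4);
        double all_ok     = pow(quad_ok, 4);  /* all 4 quads good -> 16x1 */
        double one_bad    = 4.0 * (1.0 - quad_ok) * pow(quad_ok, 3); /* 12x1 */

        printf("16x1 bin: %.1f%% of dies\n", 100.0 * all_ok);
        printf("12x1 bin: %.1f%% of dies\n", 100.0 * one_bad);
        return 0;
    }
    ```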





    Being a serious fanboy? How about being pissed that 'one particular card company' *cough* ATi *cough* actually had the balls to tell game developers not to build games using any new features, because ATi didn't want to be behind in the feature race? That's anti-progress. I'm sure you enjoy having a video card manufacturer advising game developers to retard design, right?