R300

Posted in Future Apple Hardware, edited January 2014
Amidst all the Macworld hoopla and hysteria, I present to you.....a world without limits. What am I speaking of, you ask? Parhelia, P10, NV30, NV31, NV40, and R300. The graphics world continues to make huge leaps and bounds, defying Moore's Law with its performance increases each generation. While everyone has been focused on CPUs (although who doesn't want a G5 now?), perhaps we should redirect some of that attention to GPUs. Sooner or later, GPUs will overtake CPUs. The microprocessors from Intel, AMD, and Motorola will become mere afterthoughts next to the GPU. nVidia's CEO has said as much; he plans to make his company the next Intel. I don't think it's a stretch to call him right.



While nVidia has developed a solid relationship with Apple in recent years, ATI will have a few promising products out soon in R300 and RV250, its next-generation Radeon graphics. The R200 -> R300 leap has been described as similar to the Rage6c -> R200 leap, i.e. HUGE. Let's not forget that R300 and RV250 are set to debut a couple of days after the Steve-note takes place. Could these cards be an option in the next PowerMacs? One can only hope, as the R300 (aka Radeon 9700/1000) looks like it will take the performance crown away from the GeForce4 Ti 4600. That is, until nVidia follows up with NV30 in the fall.



Here is some info on the R300:







R300 is being rumored as the Radeon 9700/1000: 315 MHz clock, 0.15-micron process, around 15,000 3DMarks....
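To put those rumored numbers in perspective, here is a quick back-of-the-envelope sketch in Python (the 8-pipeline figure comes from the report quoted further down; every value here is rumor, not spec):

[code]
# Back-of-the-envelope fill rate from the rumored R300 figures.
# Every number here is rumor-mill, not a confirmed spec.
core_clock_mhz = 315   # rumored core clock
pixel_pipelines = 8    # rumored pipeline count (see the report quoted below)

# Theoretical peak pixel fill rate = clock x pipelines
print(f"R300 peak fill rate: {core_clock_mhz * pixel_pipelines} Mpixel/s")  # 2520

# For comparison, a GeForce4 Ti 4600: 300 MHz, 4 pixel pipelines
print(f"Ti 4600 peak fill rate: {300 * 4} Mpixel/s")  # 1200
[/code]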



RV250







* RADEON 9000 - RV250 graphics processor working at 300 MHz, equipped with 64 MB of 3.3 ns DDR SDRAM memory clocked at 600 MHz.

* RADEON 9000LE - RV250 graphics processor working at 275 MHz, equipped with 64 MB of 4.0 ns DDR SDRAM memory clocked at 500 MHz.

* RADEON 9000LE - RV250 graphics processor working at 250 MHz, equipped with 64 MB of 5.0 ns DDR SDRAM memory clocked at 400 MHz.

* RADEON 9000LE - RV250 graphics processor working at 250 MHz, equipped with 64 MB of 5.5 ns DDR SDRAM memory clocked at 366 MHz.
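Those "3.3", "4.0", "5.0", and "5.5" figures read as nanosecond speed grades on the DDR chips, which lines up neatly with the listed clocks. Assuming the RV250 keeps a 128-bit memory bus like the Radeon 8500 (my assumption, not a confirmed spec), the bandwidth math sketches out like this:

[code]
# Reading the memory list above: the ns figures are chip speed grades;
# 1000 / ns gives the rated physical clock in MHz, and DDR doubles it.
# The 128-bit bus width is an assumption, not a confirmed spec.
BUS_WIDTH_BITS = 128

for ns, listed_ddr_mhz in [(3.3, 600), (4.0, 500), (5.0, 400), (5.5, 366)]:
    rated_mhz = 1000 / ns  # physical clock implied by the ns grade
    bandwidth_gb_s = listed_ddr_mhz * (BUS_WIDTH_BITS // 8) / 1000
    print(f"{ns} ns chips: rated ~{rated_mhz:.0f} MHz (~{2 * rated_mhz:.0f} MHz DDR), "
          f"listed {listed_ddr_mhz} MHz -> {bandwidth_gb_s:.1f} GB/s")
[/code]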





"As we have read over here , ATI is going to finally unveil its RADEON 9000 on the 17th of July. Since the site locates in China, while such announcements are made on the same time in different regions, we can assume that the new mainstream beauty from ATI Technologies will be revealed here in Europe on the 16th of July while in the USA it may still be the 15th depending on

the exact time of the release."



* "RADEON RV250 SECONDARY" = ati2mtag_RV250, PCI\\VEN_1002&DEV_496E

* "Radeon Mobility 7500 GL" = ati2mtag_M7, PCI\\VEN_1002&DEV_4C58

* "Radeon Mobility" = ati2mtag_M6, PCI\\VEN_1002&DEV_4C59

* "Radeon 7000 / Radeon VE" = ati2mtag_RV100, PCI\\VEN_1002&DEV_5159

* "RS200M" = ati2mtag_RS200M, PCI\\VEN_1002&DEV_4337

* "R300_4E44" = ati2mtag_R300, PCI\\VEN_1002&DEV_4E44

* "Radeon 7500" = ati2mtag_RV200, PCI\\VEN_1002&DEV_5157

* "Radeon 8500" = ati2mtag_R200, PCI\\VEN_1002&DEV_514C

* "Radeon Mobility M9-GL" = ati2mtag_M9, PCI\\VEN_1002&DEV_4C64

* "Radeon Mobility M9" = ati2mtag_M9, PCI\\VEN_1002&DEV_4C66

* "R300_AD" = ati2mtag_R300, PCI\\VEN_1002&DEV_4144

* "R300_AE" = ati2mtag_R300, PCI\\VEN_1002&DEV_4145

* "Radeon 7200 / Radeon" = ati2mtag_default, PCI\\VEN_1002&DEV_5144

* "R300_AF" = ati2mtag_R300, PCI\\VEN_1002&DEV_4146

* "Radeon 8500 DV Edition" = ati2mtag_R200, PCI\\VEN_1002&DEV_4242

* "Radeon Mobility 7500" = ati2mtag_M7, PCI\\VEN_1002&DEV_4C57

* "Radeon 8500 / 8500LE" = ati2mtag_R200, PCI\\VEN_1002&DEV_516C

* "A3" = ati2mtag_A3, PCI\\VEN_1002&DEV_4136

* "RS200" = ati2mtag_RS200, PCI\\VEN_1002&DEV_4137

* "RADEON RV250" = ati2mtag_RV250, PCI\\VEN_1002&DEV_4966

· "Radeon IGP 320M" = ati2mtag_U1, PCI\\VEN_1002&DEV_4336



There are rumors of a dual-head RV250 in the works (note the "RADEON RV250 SECONDARY" entry above), and even of a Radeon 8500 Maxx (said to exist and to be released soon).



More info to come.





I was writing this post when Chimera quit on me, so I may have left out a lot, but I'll post it later.



[ 07-12-2002: Message edited by: TigerWoods99 ]

Comments

  • Reply 1 of 19
    junkyard dawg Posts: 2,801 member
    LOL, doesn't matter if the graphics are CPU/bandwidth limited. Apple probably couldn't feed ATI's next generation video cards with enough data to keep them saturated.
  • Reply 2 of 19
    spart Posts: 2,060 member
    Keep in mind that the MWNY keynote is on the same day as the announcement, or maybe even after it... and we MAY get new PowerMacs at MWNY. I say "may" in light of recent speculation by many sites.
  • Reply 3 of 19
    tigerwoods99 Posts: 2,633 member
    Don't know what was wrong with the thread title before, but whatever. ATI has it on their website right now, referring to the upcoming R300 and RV250 product launch.
  • Reply 4 of 19
    [quote]Originally posted by TigerWoods99:

    <strong>Don't know what was wrong with the thread title before, but whatever. ATI has it on their website right now, referring to the upcoming R300 and RV250 product launch.</strong><hr></blockquote>



    t'was stupid, people couldn't tell wtf the thread was about...



    descriptive titles rock.
  • Reply 5 of 19
    [quote]Originally posted by Jonathan:

    <strong>



    I read a bit of an article involving the CEO of Nividia in wired or sumthin...



    he wants his graphics cards to more or less become the new intel... takin over the traditional tasks the CPU would... this kinda fits with Extreme Quartz... (thats what its called right&gt;</strong><hr></blockquote>



    I said that in that Radeon thread before this was posted, so I believe it is very possible, especially with Apple already having the GUI rely on the card more than the CPU!
  • Reply 8 of 19
    airsluf Posts: 1,861 member
  • Reply 9 of 19
    telomar Posts: 1,804 member
    [quote]Originally posted by TigerWoods99:

    <strong>The graphics world continues to make huge leaps and bounds, defying Moore's Law with their performance increases in each generation. While everyone has been focused on CPUs (although who doesn't want a G5 now?), perhaps we should reserve more of that attention on GPUs. The reason is because sooner or later, gpus will eventually overtake cpus.</strong><hr></blockquote>



    AirSluf covers a lot of this, but I feel the need to go on my tirade about Moore's Law once again; it gets a hell of a lot of praise for a somewhat questionable theory.



    Firstly, Moore's Law is basically a statement about optimum production costs with respect to transistor count. It says that, from one year to the next, the transistor count at which a component is cheapest to manufacture doubles.



    That turned out to be wrong, and it was later revised to a doubling every 18 months, which still turns out to be wrong (at least as far as calling it a "law" goes).



    Not to take anything away from the research Moore has done or even the research he did to compose what is now Moore's Law, which at the time made for an interesting observation and hypothesis, but the law itself is near trash. It doesn't make the computer world turn round and it isn't necessary to follow it. It's just an observation about process economics and development rates.



    I say that like Moore's Observation isn't important, which is wrong, but it isn't so important that variation from it will shatter the world.
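    To put a number on how much the doubling period matters, take a hypothetical chip starting at 42 million transistors (roughly a Pentium 4, purely an illustrative baseline) and run both doubling periods forward:

[code]
# How sensitive the "law" is to the assumed doubling period.
# The 42-million transistor starting point (roughly a Pentium 4)
# is just an illustrative baseline.
start_transistors = 42e6
years = 10

for doubling_months in (12, 18):
    doublings = years * 12 / doubling_months
    final = start_transistors * 2 ** doublings
    print(f"{doubling_months}-month doubling: "
          f"{final / 1e6:,.0f} million transistors after {years} years")
[/code]

    After a decade the two readings differ by roughly a factor of ten, which is why pinning a precise period on the "law" matters so little in practice.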



    On a completely unrelated note, I am looking forward to seeing the R300. I have a feeling that, one way or another, one will find its way into my new PowerMac when I get one later in the year. Hopefully I will just be able to get it BTO.
  • Reply 10 of 19
    Well, well, well. Perhaps we may start to see some interesting stuff in the user interface with Jaguar and beyond. With these robust GPUs, we could see things like little sparks popping from the tip of the mouse cursor whenever you click, windows that fly around the screen on random swooping and looping paths as they minimize into the Dock, highlights that glow and/or pulsate when you select pictures and text, a trash basket that catches fire and incinerates its contents with a puff of billowing smoke when you empty it...



    If you doubt that any of this is possible, then download http://www.serenescreen.com/Download/MarineAquariumOSX.sit and see for yourself. You will notice that it uses very little CPU for the task at hand and is 100% pure OpenGL. :eek:
  • Reply 11 of 19
    tigerwoods99 Posts: 2,633 member
    ATI's RADEON 9700 will be introduced very soon now, and we have some performance information that we can share with you. ATI, as well as its partners, is very optimistic about this card, and now it seems we can say their optimism is not unfounded.



    With eight pipelines, the card will deliver very nice performance figures and we learned that it should be clocked at 315MHz, which is obviously the fastest speed that ATI can guarantee to its partners. We are sure that the company will also introduce cards clocked at lower speeds, under a slightly different name, though we expect the LE label to be dropped.



    We expect ATI to use an Nvidia-style number nomenclature and to add some kind of number instead of the unsexy LE on the name of its cards.



    Anyhow, don't forget that all Matrox could manage with its Parhelia was 220 MHz, and their chip is far less complex than ATI's. 300 MHz seems to be the magic barrier for the 150-nanometer process, and ATI was able to get just above it. Nvidia planned to use this speed with its GeForce4 Ti 4600, we understand, but then, before launch, decided to stick with 300 MHz.



    As for performance, we learned that cards based on the 9700 should deliver a magnificent 15,000 3DMarks and will be able to beat Nvidia's fastest offering for this summer by a few thousand in this very popular test. Nvidia's best performance is between 10,000 and 11,000 depending on the configuration.



    Since we learned that the first reviews will appear after the official presentation we can still only wonder whether ATI will be able to deliver these cards in September as planned. In the past, missing delivery-date promises has been ATI's prime problem.
  • Reply 12 of 19
    mattyj Posts: 898 member
    As far as I can see, the power of this card must mean that it needs an 8x AGP bus to keep it fed, or am I wrong???
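    For what it's worth, the theoretical AGP numbers work out as below (66 MHz base clock, 32-bit port, 2/4/8 transfers per clock); real-world throughput is lower, so treat this as a ceiling sketch:

[code]
# Theoretical AGP transfer rates: ~66 MHz base clock, 32-bit (4-byte)
# port, 2/4/8 transfers per clock depending on mode. Real throughput
# is lower; this is just the ceiling.
BASE_MHZ = 66.6
BYTES_PER_TRANSFER = 4

for mode in (2, 4, 8):
    print(f"AGP {mode}x: ~{BASE_MHZ * BYTES_PER_TRANSFER * mode:,.0f} MB/s")
[/code]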



    If this is correct, however, and they are put in new PowerMacs this July, then we will see new motherboards....



    I wish.
  • Reply 13 of 19
    paul Posts: 5,278 member
    [quote]Originally posted by TigerWoods99:

    <strong>"As we have read over here , ATI is going to finally unveil its RADEON 9000 on the 17th of July. Since the site locates in China, while such announcements are made on the same time in different regions, we can assume that the new mainstream beauty from ATI Technologies will be revealed here in Europe on the 16th of July while in the USA it may still be the 15th depending on

    the exact time of the release."</strong><hr></blockquote>



    tiger, you DO realize the error in this statement, right? It cannot be the 15th over here and the 17th over in China... a 1-day difference is the max, and really the norm... (it's only a 17+ hour difference at most, which means 1 day ahead most of the time...)
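    Here's the time-zone math made concrete (a quick Python sketch; the zone names are standard tz database entries):

[code]
# The time-zone gap, made concrete: a midnight July 17 launch in
# China is still midday July 16 on the US east coast.
from datetime import datetime
from zoneinfo import ZoneInfo

launch = datetime(2002, 7, 17, 0, 0, tzinfo=ZoneInfo("Asia/Shanghai"))
print(launch.astimezone(ZoneInfo("America/New_York")))
# -> 2002-07-16 12:00:00-04:00
[/code]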



    just a nitpick, but it struck me as odd
  • Reply 14 of 19
    tigerwoods99 Posts: 2,633 member
    Hmm didn't notice that. Just quoting the site.
  • Reply 15 of 19
    cdong4 Posts: 194 member
    <a href="http://store.apple.com/1-800-MY-APPLE/WebObjects/AppleStore?productLearnMore=M8757G/A"; target="_blank">http://store.apple.com/1-800-MY-APPLE/WebObjects/AppleStore?productLearnMore=M8757G/A</a>;



    looks like Apple wants to clear out GeForce4 Ti cards

    hmmmm, new Radeon maybe? I think so.
  • Reply 16 of 19
    blizaine Posts: 239 member
    [quote]Originally posted by CDonG4:

    <strong><a href="http://store.apple.com/1-800-MY-APPLE/WebObjects/AppleStore?productLearnMore=M8757G/A"; target="_blank">http://store.apple.com/1-800-MY-APPLE/WebObjects/AppleStore?productLearnMore=M8757G/A</a>;



    looks like apple wants to clear out GeForce 4 Ti cards

    hmmmm new radeon maybe, i think so.</strong><hr></blockquote>



    I would say that selling them for $299 (same as the PC GeForce4 Ti cards) would be "clearing them out," not $400.



    Either way, ATI will announce the R300 next week, and nVidia will follow in the August/September time frame with their "next gen" card...



    BTW: is the GeForce4 Ti card that Apple sells the 4600, as opposed to the 4400 or 4200?



    [ 07-14-2002: Message edited by: Blizaine ]
  • Reply 17 of 19
    tigerwoods99 Posts: 2,633 member
    Yes, it's the 4600.
  • Reply 18 of 19
    cdhostage Posts: 1,038 member
    Why are the clock speeds on these lower than possible?

    In CPUs at least, if you double the clock speed on the same version of a chip, you get at least a 90% performance increase.



    Why run a GPU at 315 MHz when you could get a GHz out of it? They are simpler than CPUs, after all.



    Or perhaps...



    are they necessarily simpler than a CPU? Many of the computer's functions that used to be handled by dedicated hardware are now integrated into the CPU: I/O operations, memory management... I guess it's better to have it all on one chip.



    I wonder how the prices for a barebones CPU and a GPU compare?
  • Reply 19 of 19
    eskimo Posts: 474 member
    [quote]Originally posted by cdhostage:

    <strong>Why are the clockspeeds on these lower than possible?

    In CPUs at least, double the clockspeed on same version of chip, get at least 90% performance increase.



    Why run a GPU at 315 MHz when you could get a GHz out of it? They are simpler than CPUs, after all.



    Or perhaps...



    are they necessarily impler than a CPU? Many of the computer's functions are now integrated into the CPU, which used to be handled by dedicated hardware. I/O operations, memory management... I guess it's bette t have it all on one chip.



    I wonder how the prices for a barebones CPU and a GPU compare/</strong><hr></blockquote>



    While modern GPUs now pack many more transistors than CPUs, they are nowhere near as complicated to design and build. There is a good reason a team of ~100 Nvidia designers can take a GPU from paper to silicon in 18 months, while it takes teams of ~1000 engineers 5 years to do the same for a CPU.



    One reason the design of a CPU is so complex is that, in addition to providing all the required functionality, the integrated circuits have to be designed for speed. Somewhat like how a programmer optimizes their code to run faster, hardware designers figure out ways to optimize for speed. This takes a lot of careful analysis of which paths signals travel through the CPU, how they propagate, and how to eliminate bottlenecks.



    Additionally, a modern CPU is designed at the layout level using what is known as a "full custom" design. This means that each transistor is custom-sized to provide optimum speed through the critical paths. On the other hand, companies like Nvidia use a method referred to as "standard cell": one designer makes a generic block of logic out of standard-sized transistors, such as a NAND gate or an inverter, and then the designers just drag and drop those logic gates together to form their design. This saves a great deal of time, but the drawback is that you can't optimize for speed and power consumption like you can when you do a full custom job. In reality Nvidia probably doesn't do a plain standard cell design but a semi-custom one, in which they use mostly standard cell devices but do some optimization on top. If they had more time, and didn't have to ship new products every 6 months to keep competing with ATI and company, they could probably eventually come out with a 1 GHz GPU.



    Other factors holding back GPU clock speeds include the process they are manufactured on. There are very few companies in the world that have the expertise and technology necessary to build multi-GHz CPUs. This is why AMD, IBM, Motorola, and Intel are the only real players in the business (some being better players than others). Nvidia and ATI use TSMC for their manufacturing, and none of the Taiwanese foundries have had great success building high-speed logic devices on the scale of CPUs. This is why VIA's C3 processor is just now reaching 1 GHz, and why UMC is partnering with AMD to get the technology necessary to produce processors.



    There are probably some issues with how a GPU actually works from a design standpoint as well. As far as I know they use very short pipelines for execution, which are harder to clock high. If you remember, Motorola had trouble clocking their initial 4-stage G4 to high clock speeds. GPUs use more like 1- or 2-stage pipelines.
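    A toy way to see why short pipelines cap the clock: split a fixed pool of logic delay across N stages, with each stage paying some register overhead. All the numbers here are invented purely for illustration:

[code]
# Toy pipelining model: a fixed pool of logic delay split across N
# stages, each stage paying flip-flop overhead. Numbers invented
# purely for illustration.
TOTAL_LOGIC_NS = 8.0      # combinational work for one operation
STAGE_OVERHEAD_NS = 0.3   # register setup + clock-to-q per stage

for stages in (1, 2, 4, 10, 20):
    cycle_ns = TOTAL_LOGIC_NS / stages + STAGE_OVERHEAD_NS
    print(f"{stages:2d} stages -> max clock ~{1000 / cycle_ns:,.0f} MHz")
[/code]

    With only 1 or 2 stages the cycle time is dominated by the logic itself, so the clock ceiling stays low no matter how good the process is.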