This is a bit of a non sequitur -- current GPUs have 4-16 vertex engines and 12-24 pixel engines, each of which can be considered a "core" in the sense that it is executing its own (shader) program. The SLI nVidia cards are two of these chips running in parallel, dividing the workload between them. GPUs are way ahead of CPUs because they focus on a specific problem (graphics), and one that happens to be embarrassingly parallel.
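To put the "embarrassingly parallel" point in concrete terms, here's a toy sketch (my own illustration, not real driver or shader code): each pixel's colour depends only on its own coordinates, so a frame can be carved up across any number of pixel engines with no communication between them.

```python
def shade(x, y):
    # toy "shader": colour depends only on this pixel's own coordinates
    return (x * x + y * y) % 256

def render(width, height, engines):
    framebuffer = [0] * (width * height)
    # each engine takes an interleaved slice of the pixels;
    # no engine ever needs another engine's result
    for engine in range(engines):
        for i in range(engine, width * height, engines):
            framebuffer[i] = shade(i % width, i // width)
    return framebuffer

# identical output no matter how many engines split the work
assert render(64, 64, 1) == render(64, 64, 16) == render(64, 64, 24)
```

That independence is exactly why GPU makers can keep adding engines without the coordination headaches that plague multi-core CPUs.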
Didn't 3dfx have some Voodoo cards with more than one GPU on them before they got bought out?
I would expect that dual core GPUs are only a matter of time. We've seen dual card systems but that's pretty wasteful. GPUs will probably skip the dual-processor stuff and go straight to dual core if they can.
Dual-core CPUs exist because massive CPU cores, even though they're better, cost too much to design. Designing massive GPU cores doesn't seem to be a problem, so I suspect ATI and nVidia will continue on that course.
Can't you read? GPUs are already 24 or more cores, so saying they will go "dual core" is just clueless.
While technically correct, I don't believe that's the intention of the thread. I think the discussion revolves around taking multiple GPU systems and glomming them together. Current CPUs (and GPUs) are made of a number of components and we're not talking about them individually but treating them as a single unit.
This may be a dumb statement, but I never claimed to be an expert.
As Programmer said, current GPUs have 24 or more cores. What would be the point of glomming together two GPUs and adding the extra components needed to get them to talk to each other, like a dual-core CPU? Why wouldn't GPU designers just double the existing cores to 48 or more?
They are, and they have done - look at the Radeon X1900 XT, faster than a 7800/7900 (not sure about the 7900, actually) because it has 48 cores as opposed to nVidia's 24.
There is more to a GPU than these cores, which is why a dual-GPU option, if properly optimised for by games, would yield some great results (as long as the card in question has double the memory a single GPU usually has).
Common misconception. The X1900XTX only has 16 pixel pipelines (the different "cores" you guys are talking about) but 48 shader units. It looks like they won't be going dual-core per se, but will have more and more cards in an SLI/CrossFire array. We're already up to four 7900GTXs. BTW, it's a see-saw in terms of the 7900GTX vs. the X1900XTX. No one has any definite data, and when you go to different sites, the results are all over the place.
That is true. Isn't this also true of the nVidia cards (at least the 7800), so I've heard - 16 pipelines and 24 shader units? I've heard these dubbed "half cores" or something like that. However, for the sake of this argument I think such inaccuracies won't make much difference. As I said, there is more to a GPU than shader pipelines.
Dual-GPU cards would be nice because not everyone wants to pay for a decent SLI board, which costs more than a normal single-slot board.
GPUs will never go dual core. Modern GPUs are already designed in a very modular manner. If a process can provide double the space for a chip, they will design a chip with double the resources. On top of that, a GPU resource (shader, pixel pipeline) is much smaller than a whole core, so increases in the number of resources can happen much faster, as the amount of space required for one extra isn't double (i.e. GPUs can take advantage of small process improvements to improve performance).
What will continue to happen more and more is dual/quad card and chip designs. Why? Because at the threshold for economic chip production it is possible to go beyond that performance limit in a viable manner. I.e. for a chip like the X1800, which is about as big a chip as can be made, it isn't possible to increase the chip size and maintain acceptable yields. But any two working X1800s can be selected and ganged together, getting well beyond the limits of the chip manufacturers.
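The yield argument above can be put in rough numbers with the standard Poisson defect model (yield ≈ e^(-D·A)). The defect density and die areas below are made-up illustrative figures, not real foundry or ATI numbers.

```python
import math

# Poisson yield model: fraction of good dies = exp(-defects_per_cm2 * area_cm2)
def die_yield(area_cm2, defects_per_cm2):
    return math.exp(-defects_per_cm2 * area_cm2)

d = 0.5                       # defects per cm^2 -- illustrative only
big = die_yield(3.0, d)       # a near-limit ~3 cm^2 die: roughly 22% good
doubled = die_yield(6.0, d)   # double the die area: roughly 5% good

# Doubling the die squares the yield (exp(-2DA) = exp(-DA)^2), but pairing
# two independently tested big dies only requires finding two good ones --
# so ganging chips scales past the single-die limit without the yield hit.
assert abs(doubled - big ** 2) < 1e-12
```

In other words, two good X1800-class dies are much easier to come by than one good die of twice the size, which is why multi-chip boards are the viable route past the size ceiling.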
Yes, I'm glad someone gets what I (and the thread poster) are talking about! Technically state-of-the-art GPUs are already multi-core designs but it shouldn't be expected that two or more could be used to drive a single display. And like current CPUs, separate multi-processor systems will eventually be placed on the same silicon, reducing costs and sharing other components.
To answer the original poster's question more clearly: Apple will pretty much NEVER EVER EVER use multi-core GPUs. If ever, only very specialised Apple solution providers will offer such things.
edit: I guess never say never, but by NEVER I guess I mean the eternity in computer terms that is the span of 2 years.
Originally posted by tensdanny38
the next-gen nvidia cards I read have two cores.
They also use GDDR4 ram. They are going to be an outrageous jump in performance, and they are supposed to offer full directx 10 support.
I read this awhile ago though, so I could be wrong.
asus already has them out for retail purchase. newegg is selling them.
Originally posted by giddyup69
asus already has them out for retail purchase. newegg is selling them.
no no, I meant the geforce 8 series cards. They are a totally new product coming out in like 6 months.
Here's a link that shows 3dfx had versions of the Voodoo 5 with 2 and even 4 cores on them. Pretty cool stuff. Imagine dual core versions of some current GPUs
http://video-cards.carte3d.com/3dfx.php
Here's a more current one from Nvidia:
http://www.newegg.com/Product/Produc...2E16814122232R
Originally posted by sunilraman
SLI for notebooks is here. I am assuming this is what the poster means by a "dual core gpu" notebook
http://www.slizone.com/object/slizone_notebooks.html
a "dual core GPU" PC (non-notebook) is already out, i think asus has a dual core GPU card that you can SLI (so as to have four cores)