I'm only going by reports:
http://www.gamasutra.com/php-bin/new...hp?story=10031
http://www.nzone.com/object/nzone_aq...scription.html
Again true, I saw a video of supposed SSS (subsurface scattering) in action in a PS3 game called Lair and it looked pretty fake to me - basic translucency.
But they are advertising the PS3 as able to reach 2 teraflops and the quad G5 can only do about 75 gigaflops, though I think there might be some mix-up between system flops and CPU flops:
http://news.com.com/PlayStation+3+st...3-5709571.html
I think it's safe to assume it will be about 250 gigaflops max, which is over 3 times the quad G5. Heck, the world's 500th-fastest supercomputer is 2 teraflops, and that has about 1000 processors. So in that respect, I'd assume it's capable of producing some really nice output.
As you say, there will be harsher approximations made than the likes of Pixar would be allowed to make, but biased rendering software always approximates, so the techniques are still valid. One-bounce radiosity on a PS3 is still radiosity.
I checked out their demo on their website and regardless of whether it's "true" radiosity it is pretty sweet. I wonder if that'll still work with all the other crap that needs to get piled into games.
Also, the 2 teraflops might be true if they include the Cell and the GPU, both of which have a lot of vector units. Each vector instruction can count as 4 flops.
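Just to make the arithmetic concrete, here is a rough peak-FLOPS sketch in Python. The formula (units × clock × flops per cycle) is the standard one, but the unit counts, clock speeds and SIMD widths below are my own illustrative assumptions, not figures from the articles linked above.
[CODE]
# Rough peak-FLOPS arithmetic: peak = units * clock * flops_per_cycle.
# The unit counts, clocks and SIMD widths here are illustrative assumptions.

def peak_gflops(units, clock_ghz, flops_per_cycle):
    """Theoretical peak in GFLOPS: execution units * GHz * flops issued per cycle."""
    return units * clock_ghz * flops_per_cycle

# A 4-wide vector fused multiply-add counts as 8 flops per cycle (4 mults + 4 adds).
cell_spes = peak_gflops(units=7, clock_ghz=3.2, flops_per_cycle=8)   # ~179 GFLOPS
quad_g5   = peak_gflops(units=4, clock_ghz=2.5, flops_per_cycle=8)   # ~80 GFLOPS

print(f"Cell SPEs: ~{cell_spes:.0f} GFLOPS, quad G5: ~{quad_g5:.0f} GFLOPS")
[/CODE]
Counting the GPU's vector units the same way is presumably how a headline figure like "2 teraflops" gets reached.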
The PS3 radiosity might well be ambient occlusion, which is a very good trick for approximating radiosity.
I don't think so. Ambient occlusion doesn't darken a surface based on its proximity to a light source, but based on its proximity to other objects. The demos clearly show the movement of the light source determining the brightness of a surface.
http://www.geomerics.com/index.php?page=lighting
http://www.fizyka.umk.pl/~mzielin/radio_htm.html
http://www.sccs.swarthmore.edu/users...dex.html#bib01
I also don't think they would call it radiosity if it were AO, because they are different techniques. AO doesn't generally do color bleeding either.
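To make the distinction concrete, here is a minimal ambient-occlusion sketch in Python. The scene query (occluded) and the sampling scheme are assumed interfaces, purely for illustration; the point is that no light position or intensity appears anywhere in the calculation, only nearby geometry.
[CODE]
# Minimal ambient-occlusion sketch: occlusion depends only on nearby geometry,
# never on light sources. `occluded` is an assumed scene query, not a real API.
import math, random

def random_hemisphere_direction(normal):
    # Rejection-sample a unit direction on the hemisphere around `normal`.
    while True:
        d = [random.uniform(-1, 1) for _ in range(3)]
        length_sq = sum(x * x for x in d)
        if 0 < length_sq <= 1.0 and sum(a * b for a, b in zip(d, normal)) > 0:
            length = math.sqrt(length_sq)
            return [x / length for x in d]

def ambient_occlusion(point, normal, occluded, samples=64, max_dist=1.0):
    """Fraction of the hemisphere above `point` that is NOT blocked by geometry.
    `occluded(origin, direction, max_dist)` returns True if a ray hits something
    within max_dist."""
    hits = sum(1 for _ in range(samples)
               if occluded(point, random_hemisphere_direction(normal), max_dist))
    return 1.0 - hits / samples   # 1.0 = fully open, 0.0 = fully occluded
[/CODE]
Radiosity, by contrast, transports light between surfaces, which is where the color bleeding comes from.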
Speaking of RenderMan though, did you guys know that they built Pixar's RenderMan into NeXTSTEP - the predecessor to OS X? It's really mind-blowing to see what functionality was actually in that operating system. Skip to 30 minutes to see Jobs talking about RenderMan:
http://youtube.com/watch?v=j02b8Fuz73A
If only OS X was that snappy on such old hardware.
[QUOTE=Hiro]That's just plain BS based on the fact YOU CHOOSE to be a "1 or 2 app guy at a time".[/QUOTE]
I think only a few games have been developed for 4 cores; there's Photoshop and a couple of other rendering apps, but that is about it. You might as well have paperweights inside otherwise.
Not at all. Context switches are the most expensive single operation a CPU undertakes, and n cores will by definition reduce the number of context switches made by a factor of n. Since all the software we are concerned with runs on top of an OS, and the OS runs code to respond to an application's needs, having those extra cores available makes it possible for the OS to service most of those API calls without forcing a full context switch for every call. That eliminates even more wasted context-switching effort, even if you are dealing with single-threaded applications.
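For anyone who wants to see the thing being argued about, here's a small sketch (Unix-only, using Python's standard resource module) that reads a process's context-switch counters before and after a piece of work; the workload itself is just a placeholder.
[CODE]
# Observe voluntary/involuntary context switches around a chunk of work.
# The workload below is a stand-in; substitute whatever you want to measure.
import resource

def context_switches():
    ru = resource.getrusage(resource.RUSAGE_SELF)
    return ru.ru_nvcsw, ru.ru_nivcsw   # voluntary, involuntary

before_v, before_i = context_switches()
total = sum(i * i for i in range(1_000_000))   # placeholder single-threaded work
after_v, after_i = context_switches()

print(f"voluntary: {after_v - before_v}, involuntary: {after_i - before_i}")
[/CODE]
Run it on a loaded box versus an idle one and the involuntary count is the number to watch.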
You have to INTENTIONALLY write REALLY retarded single thread code to overcome the benefits of multiple cores. Glenda and her team at Aspyr show that's possible but I'm not going to condemn a technology because a team of developers actively make bad porting choices in the name of cutting development costs.
To sum up, based on the improvements in the last 5 years, I would guess that the following will be a high-end average in 5 years' time:
8 cores running at 3GHz
(custom DSPs for certain apps if required)
8GB RAM
64-128GB flash drives backed up to 1TB hard drives
GPUs - 64 pipelines running at about 1GHz
HD drive RW
Once that becomes mainstream, no one is going to need another computer for a very long time.
I can understand where you are coming from, but I think you are wrong. I have heard people predict that 'X' computing power is all anyone will ever need since the 80s, and the predictions have always been wrong. I do agree that someday our computers will be so powerful that there will be no reason to upgrade, but I don't think that will happen for at least a decade or two.
That's all very impressive, but what evidence is there that these algorithms are built into or licensed for the PS3?
I don't know if those particular algorithms will be used; they are just various ways in which true radiosity can be done in real time without necessarily using a GI approximation like AO. I've no doubt they will use AO in some games to improve performance further, but it tends to have a distinct appearance that some people might not like. Here is an example of something that uses real-time AO already:
http://qutemol.sourceforge.net/
Radiosity rendering usually has a softer and more glowing appearance.
I'm not familiar with the various algorithms, but from what I can gather, AO generally requires raytracing (though I've read about shadow-map-based AO), whereas radiosity can be done with or without it, leading to an orders-of-magnitude speed-up:
"Rasterization vs Ray Tracing in opinions of AMD, Intel, ATI, NVIDIA etc. Many predict use of both techniques in different games. They are partially right, however rasterization with Realtime Radiosity is so strong (fast) competition, that it will stop Ray Tracing progress everywhere except shiny car racing games."
Windows is moving from MHz-race CPUs to multi-core ones. Applications will follow this trend, especially really demanding ones like games, video "transformations" and some scientific apps.
To really benefit from multicore, the apps have to be written for it. As I recall, the Quake 4 engine is not, so even with 4 cores the main share of Q4 is handled by one core, the audio is put on another core, and that's it. The latest Unreal engine is said to be more geared toward multiple CPUs.
The latest generation of TV game boxes is teaching programmers to wring performance out of many CPUs...
[QUOTE]I can understand where you are coming from, but I think you are wrong. I have heard people predict that 'X' computing power is all anyone will ever need since the 80s, and the predictions have always been wrong. I do agree that someday our computers will be so powerful that there will be no reason to upgrade, but I don't think that will happen for at least a decade or two.[/QUOTE]
I wholeheartedly agree with everything you said here.
Anyone here running on 640KB of RAM?
Everything gets outdated eventually...even if you think it can't, or want it to.
I still disagree.
There is a lot of difference between the 80s and now. Back then computers were in their infancy, and the difference between 1MHz and 2MHz doubled the computing capacity. And more importantly - there was a need for it.
This 'need' pretty much persisted until Intel reached 3GHz.
And people bought into it because, with such limited resources, there was a real need.
We now have operating systems that have pretty much evolved to about 90% of what we will ever want - and they run fine on a 3GHz single processor. Dual processors make it sweet and are preferable, quads will be a luxury and octos will be Ferraris.
Now think about the applications the 'average' person will run. Again, this is all possible on a 3GHz single-core processor - dual core makes it sweet, quads are a luxury and octos are Ferraris.
What 'application' does the 'average' Joe run that could possibly need more than 8 cores?
There is nothing today that 'average' Joe uses that won't run superbly on 8 cores. Once we have 'proper' multithreaded apps, things like encoding a 2-hour video are going to happen in minutes - now there are always those who want it to happen in 'seconds' - but they are going to be a small minority, because average Joe is going to be happy enough to wait a couple of minutes - there isn't going to be a pressing demand from average Joe to make this happen faster.
There is a difference between waiting an hour, waiting a minute and waiting a second.
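For what it's worth, how close 8 cores get to "minutes instead of hours" depends on how much of the encode actually parallelizes. A quick Amdahl's-law sketch, with a made-up 90-minute single-core baseline and assumed parallel fractions:
[CODE]
# Amdahl's law: overall speedup when only part of the job scales across cores.
# The baseline time and parallel fractions are illustrative assumptions.

def amdahl_speedup(parallel_fraction, cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

baseline_minutes = 90.0   # hypothetical single-core encode of a 2-hour video
for p in (0.5, 0.9, 0.99):
    s = amdahl_speedup(p, cores=8)
    print(f"{int(p * 100)}% parallel: {s:.1f}x faster, ~{baseline_minutes / s:.0f} min")
[/CODE]
So "minutes" only happens if the encoder is written so that nearly all of the work scales, which is exactly the 'proper multithreaded apps' caveat.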
I've just read up on nVidia's next-gen 8800GTX - it looks like a beast, one hell of a beast. One can reasonably assume that in 5 years that part will be 'cheap average', just as the 9700 Pro I got 4 years ago is 'cheap average' today. But whereas there is a need for me today to want an 8800GTX, we are talking about parts that are so good the 'need' to have something better just slips away.
There comes a point when the 'average' is just so good that there is no real need for anything better.
In 5 years' time, when I'm rendering radiosity, when I've got 16x antialiasing, blurry reflections, and have built a virtual 10-billion-strong army of polygon atheists that look as good as a photograph, are scripted to act autonomously (under my rules of control - bwhahahahah), and their swords slice up the fundies according to the rules of physics they so decry, and their blood flows across the landscape according to the laws of fluids - when all this happens *instantly* - what am I going to want next?
Then wonder what more average Joe is ever going to want.
There just isn't the development in software to warrant this exponential increase in computing power.
Granted, I know there are people who want and need more - enthusiasts, scientists, extreme gamers - but they aren't average Joe, and when Joe gets this mythical system I outlined above - when that is cheap average - then the industry is going to be in a fair bit of trouble.
Joe is going to want more when his everyday input device is a combination of context-sensitive visual and audio recognition. The mouse will be replaced by laser-cast virtual trackpads. Keyboards will be replaced by sub-vocalization rendering, essentially silent voice recognition with the ability to automatically discriminate which signals are meant for the computer and which aren't. The OS agent will always be performing a myriad of background tasks on behalf of each user of the machine, and not just when that user is logged in.
Eight 4GHz cores will be so slow that the free public-access machines on street corners won't use them, because they won't be able to recognize you properly and feed you the right semi-subliminal advertising.
Once upon a time I thought: who could ever want more than a IIfx with an Apple graphics card? That was a machine that could supposedly do 30fps video. Now that seems primitive and glacially slow. Nature and programmers abhor a vacuum. We will come up with ever more exponentially rising compute costs. Count on it.
Oh ye of little faith. We don't need consciousness, just more power to fake it past the fact we haven't got silicon-based consciousness. More power lets us spend more cycles generating a more realistic statistical parsing of user inputs, without detracting from the relatively fixed CPU budget of composing email or navigating links etc.
Nature abhors a vacuum, and unused CPU cycles are a vacuum of infinite attraction to programmers. The programmers that deliver value in filling that coming vacuum will make a pretty penny.
Hehe. Well, it would be nice to have Star Trek computers, but I just don't think we will be able to fake consciousness until we fully understand what it is.