Multiprocessing/Multiprocessors question

dmz
Posted in General Discussion, edited January 2014
I just received my Mac Pro last week -- my other computer is an Athlon 64 X2 3800. Great machines, but I have a question: for the average user, is there really any point to this dual/quad-core business?



On the XP machine, it's pretty clear that OmniPage or even most of the CS2 apps don't use the Athlon's other core; you can see that by monitoring the processes...



...but in OS X I have been under the impression that you don't have to write a program specially to take advantage of the other core(s), that the load is split at the kernel level. Is this true?



Is this multicore paradigm only good for high-end production people/apps, not us lowly InDesign types?
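
For concreteness, here is the kind of thing I'm asking about -- a rough sketch in plain C with pthreads, nothing OS X-specific, and the busy_work function is just made-up filler. A single-threaded loop is one schedulable unit, so the kernel can never spread it over two cores; give the kernel two threads and it has two things it can place on separate cores:

    /*
     * Rough sketch only: the kernel schedules threads, not "apps".
     * busy_work() is an invented stand-in for real work.
     */
    #include <pthread.h>
    #include <stdio.h>

    /* Burn some CPU so the scheduler has something to place on a core. */
    static void *busy_work(void *arg)
    {
        volatile unsigned long long sum = 0;
        for (unsigned long long i = 0; i < 2000000000ULL; i++)
            sum += i;
        printf("worker %ld done\n", (long)arg);
        return NULL;
    }

    int main(void)
    {
        /* One thread is one schedulable unit: it can only ever occupy a
         * single core at a time, no matter how many cores exist.
         * Two threads are two schedulable units: the kernel is free to put
         * them on different cores, so the work can genuinely overlap. */
        pthread_t a, b;
        pthread_create(&a, NULL, busy_work, (void *)1);
        pthread_create(&b, NULL, busy_work, (void *)2);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }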

Comments

  • Reply 1 of 34
    marcuk Posts: 4,442 member
    There has been no point to any of the computers released in the last 3 years for the average user.



    I'm still on a 4-year-old PC. It was pretty cutting edge when I got it and would be considered low-to-medium end now, but the fact is it still does exactly the same things just as well as it did when it was new, and it can handle pretty much all the software you can throw at it.



    It is only just beginning to struggle with games; Half-Life 2 is about as far as it goes comfortably.



    For the moment, dual/quad core is really only useful for people who keep many apps open at a time or do background video encoding while they're working.



    I'm a one-or-two-apps-at-a-time guy; until standard apps become multithreaded, there is just no need for more than two cores.



    My plan is to get a dual quad-core Mac Pro next year and a PS3. I expect 8 cores and a dedicated games machine will last for at least 5 years, if not 10.



    The days of replacing a computer every 6 months to a year are all but over; the next step, to 4 or 8 cores, will be the last that most of us will ever need to make for a very long time. How the PC industry copes with this is going to be *interesting*.



    Personally, I can't wait to see Cinema 4D render 8 threads at once, and that is pretty much the only application I use that can make use of all that power. Beyond that, it all gets a bit pointless; everything else will run on a one- or two-core machine pretty much indefinitely.
  • Reply 2 of 34
    dmz Posts: 5,775 member
    Quote:
    Originally Posted by MarcUK


    There has been no point to any of the computers released in the last 3 years for the average user.



    That was exactly what I was afraid of.
  • Reply 3 of 34
    The real benefit lies in executing parallel tasks.



    For example, if you want to burn a CD and surf the web at the same time.



    We use dual-core at work for our development machines. It helps a lot when using VMware - the guest OS seems more responsive, especially when using other applications on the host.



    If you find yourself running several applications at the same time, you will notice the benefit.



    Dual-core will not typically speed up any single application, since most apps are single-threaded, but the benefit is noticeable if you use several apps at the same time.
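
    To make that concrete, here is a rough sketch -- plain C with fork(), the workload invented purely for illustration -- of why two separate single-threaded apps can still make good use of two cores: each process is its own schedulable unit, even though neither program is multithreaded.

    /*
     * Two single-threaded "apps" simulated with fork(). On a dual-core
     * machine they can each own a core; on a single core they take turns.
     */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void pretend_to_be_an_app(const char *name)
    {
        volatile unsigned long long sum = 0;
        for (unsigned long long i = 0; i < 1500000000ULL; i++)
            sum += i;                      /* stand-in for burning a CD, etc. */
        printf("%s finished\n", name);
    }

    int main(void)
    {
        if (fork() == 0) {                   /* child: the "background" app */
            pretend_to_be_an_app("cd-burner");
            _exit(0);
        }
        pretend_to_be_an_app("web-browser"); /* parent: the "foreground" app */
        wait(NULL);                          /* on two cores these overlap   */
        return 0;
    }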
  • Reply 4 of 34
    hiro Posts: 2,663 member
    Quote:
    Originally Posted by MarcUK


    There has been no point to any of the computers released in the last 3 years for the average user.



    I'm still on a 4-year-old PC. It was pretty cutting edge when I got it and would be considered low-to-medium end now, but the fact is it still does exactly the same things just as well as it did when it was new, and it can handle pretty much all the software you can throw at it.



    It is only just beginning to struggle with games; Half-Life 2 is about as far as it goes comfortably.



    For the moment, dual/quad core is really only useful for people who keep many apps open at a time or do background video encoding while they're working.



    I'm a one-or-two-apps-at-a-time guy; until standard apps become multithreaded, there is just no need for more than two cores.



    My plan is to get a dual quad-core Mac Pro next year and a PS3. I expect 8 cores and a dedicated games machine will last for at least 5 years, if not 10.



    The days of replacing a computer every 6 months to a year are all but over; the next step, to 4 or 8 cores, will be the last that most of us will ever need to make for a very long time. How the PC industry copes with this is going to be *interesting*.



    Personally, I can't wait to see Cinema 4D render 8 threads at once, and that is pretty much the only application I use that can make use of all that power. Beyond that, it all gets a bit pointless; everything else will run on a one- or two-core machine pretty much indefinitely.





    That's just plain BS based on the fact that YOU CHOOSE to be a "one-or-two-apps-at-a-time guy".



    Extra cores make extra computing resources available to the OS scheduler. That means the OS can be more flexible in choosing what to run when, enabling "Teh Snappy!". I shook my head for years as folks bitched about a slow GUI -- not on a DP machine. That's an everyday annoyance-avoidance thing, something that can make a user much happier with their machine.



    I hardly shut down apps at all. I don't wait for apps to launch for weeks at a time because they are already running. I've worked this way for almost 7 years, starting with my G4 DP500 and now a Core Duo iMac. I have 59 processes open with 227 threads and I am not pushing the machine, just not babying it. Those 227 threads are all individually schedulable by the OS and can run on either core, meaning the likelihood that I will have to wait for something to get its turn is cut in half.



    There is a lot of interaction between most apps and the OS. 10.4 has separated a lot of that out in the API set, and the second core is schedulable to service many of those requests without interrupting the original thread. That is HUGE in CPU time savings, and the further development of those concepts in 10.5 will only make more cores even better for everyday performance. It's not that the OS is chopping up the app into different pieces -- that's still ridiculously hard and fraught with performance penalties -- but the OS is able to perform more functions as an independently schedulable service rather than as an inline part of the application's code. This is exactly where the so-called magic OpenGL performance gains are coming from - just simple bottleneck elimination.



    So you can choose to run the same way you were forced to in the mid-to-late '90s and not need any more power, or you can let the multitasking power of your machine allow you to operate in a slightly different way today -- not closing apps, not worrying about memory allocation, etc. -- and treat the box like a set of instantly accessible services that you never shut off.



    Naw, why would anyone ever want that????
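
    If it helps to picture the pattern, here is a very loose sketch -- not Apple's actual 10.4 internals, just generic pthreads with invented names like handle_request -- of the "independently schedulable service" idea: the request is handed to another thread, so the app's own thread never stalls inline, and on a multi-core machine the two genuinely run at the same time.

    /*
     * Loose illustration of offloading work to a separately schedulable
     * thread. The "service" here just sleeps to stand in for slow work.
     */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    /* The "service": pretends to do slow work (disk, compositing, etc.). */
    static void *handle_request(void *arg)
    {
        sleep(2);                              /* simulated slow system work */
        printf("service: request %ld done\n", (long)arg);
        return NULL;
    }

    int main(void)
    {
        pthread_t service;
        /* Fire the request off to a second schedulable thread... */
        pthread_create(&service, NULL, handle_request, (void *)1);

        /* ...and the "app" thread keeps going instead of stalling inline. */
        for (int i = 0; i < 4; i++) {
            printf("app: still responsive (%d)\n", i);
            sleep(1);
        }
        pthread_join(service, NULL);
        return 0;
    }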
  • Reply 5 of 34
    In gaming, do any of you think quad/octo-core processing will be taken advantage of?

    I don't want to buy a Mac Pro now and then realise that I could've got 3-4 more FPS if I'd waited!
  • Reply 6 of 34
    dmz Posts: 5,775 member
    Quote:
    Originally Posted by Hiro


    That's just plain BS based on the fact that YOU CHOOSE to be a "one-or-two-apps-at-a-time guy".



    There is a lot of interaction between most apps and the OS. 10.4 has separated a lot of that out in the API set, and the second core is schedulable to service many of those requests without interrupting the original thread. That is HUGE in CPU time savings, and the further development of those concepts in 10.5 will only make more cores even better for everyday performance. It's not that the OS is chopping up the app into different pieces -- that's still ridiculously hard and fraught with performance penalties -- but the OS is able to perform more functions as an independently schedulable service rather than as an inline part of the application's code. This is exactly where the so-called magic OpenGL performance gains are coming from - just simple bottleneck elimination.



    ...that takes some of the sting out of it -- so it's not a half-truth that the OS is throwing things like drawing a window off to the other core; it's just that the main app thread isn't being sliced up. So when the app calls something from the OS, that work is almost guaranteed to land on another core (??)



    Does anyone know if XP does this?
  • Reply 7 of 34
    marcuk Posts: 4,442 member
    Quote:
    Originally Posted by dmz


    ...that takes some of the sting out of it -- so it's not a half-truth that the OS is throwing things like drawing a window off to the other core; it's just that the main app thread isn't being sliced up. So when the app calls something from the OS, that work is almost guaranteed to land on another core (??)



    Does anyone know if XP does this?



    Having been on a dual-processor system for 4 years, and having to use single-processor systems at work, I can say for sure that XP is much smoother with two cores or dual processors.
  • Reply 8 of 34
    marcuk Posts: 4,442 member
    Quote:
    Originally Posted by Hiro


    That's just plain BS based on the fact that YOU CHOOSE to be a "one-or-two-apps-at-a-time guy".



    Extra cores make extra computing resources available to the OS scheduler. That means the OS can be more flexible in choosing what to run when, enabling "Teh Snappy!". I shook my head for years as folks bitched about a slow GUI -- not on a DP machine. That's an everyday annoyance-avoidance thing, something that can make a user much happier with their machine.



    I hardly shut down apps at all. I don't wait for apps to launch for weeks at a time because they are already running. I've worked this way for almost 7 years, starting with my G4 DP500 and now a Core Duo iMac. I have 59 processes open with 227 threads and I am not pushing the machine, just not babying it. Those 227 threads are all individually schedulable by the OS and can run on either core, meaning the likelihood that I will have to wait for something to get its turn is cut in half.



    There is a lot of interaction between most apps and the OS. 10.4 has separated a lot of that out in the API set, and the second core is schedulable to service many of those requests without interrupting the original thread. That is HUGE in CPU time savings, and the further development of those concepts in 10.5 will only make more cores even better for everyday performance. It's not that the OS is chopping up the app into different pieces -- that's still ridiculously hard and fraught with performance penalties -- but the OS is able to perform more functions as an independently schedulable service rather than as an inline part of the application's code. This is exactly where the so-called magic OpenGL performance gains are coming from - just simple bottleneck elimination.



    So you can choose to run the same way you were forced to in the mid-to-late '90s and not need any more power, or you can let the multitasking power of your machine allow you to operate in a slightly different way today -- not closing apps, not worrying about memory allocation, etc. -- and treat the box like a set of instantly accessible services that you never shut off.



    Naw, why would anyone ever want that????





    Each to their own. Even if I had an octo-core, dual-processor machine, I very much doubt I would want more than 2 or 3 apps open at a time.



    But the fact is that even with all your apps open, probably only the frontmost one is actually doing anything with the CPU. Let me guess - you're watching an HD movie, encoding a DVD, flicking through iTunes, rendering a 3D model, writing an email, searching for extraterrestrial life, folding some genes, and viewing Pr0n all at the same time. Well, that doesn't make you an 'average' user.



    Boasting about how many processes you have running is quite sad IMO. The fewer processes running, the better. I've seen systems like you describe (our computer system at work was set up by a twat who doesn't know shit - he just likes to see icons appear in the system tray; I think it gives him a sense of feeling special that he has all these apps running) and it's horrible. A sure sign of a moron who doesn't know jack about computers. Maybe that's not you - just a generalisation.



    Perhaps it's an American thing - leave your computer running 24/7, waste all its resources, waste electricity, screw the environment. How many tonnes of CO2 did you release because you can't be bothered to wait a few minutes a day for your computer to boot and a few apps to launch?
  • Reply 9 of 34
    dmz Posts: 5,775 member
    Quote:
    Originally Posted by MarcUK


    Having been on a dual-processor system for 4 years, and having to use single-processor systems at work, I can say for sure that XP is much smoother with two cores or dual processors.



    Ha! I'm on XP right now, scanning on a sheet-fed scanner and writing this in Firefox. I think you're right.
  • Reply 10 of 34
    Marvin Posts: 15,309 moderator
    I'm generally of the opinion that more than two cores or processors are of little use to most people, but two processors are much better than one.



    I'd used single processors all the time before I tried a quad G5, and everything is just so smooth with multiple CPUs. Even with all cores fully loaded, the system still doesn't stutter; a single CPU just chokes under full usage. The fans seem to spin up less with multi-core systems too.



    I now have a dual-core Intel Mini (I deliberately avoided the Solo because it was too slow) and I love it. I'm currently ripping a DVD to H.264 in the background and I can still use my computer as normal.



    On the quad, I do notice a lot of wasted resources for most tasks. Even Final Cut Pro uses about 2 CPUs at most for encoding, which I think is quite appalling. But when I use apps like Shake that use all 4 processors, I can really see the difference when there's a tight deadline.
  • Reply 11 of 34
    marcuk Posts: 4,442 member
    I think the average user would be happy with two/four cores for a very long time to come.



    So anyway, Intel is about to release 4 cores, with AMD soon to follow. No doubt about it, this will be a high-end system for the average user for a very long time.



    On the horizon is HD encoding/decoding, which, as far as I can see, is about the only thing that could possibly push these new chips - the killer app. But within three years HD players are going to be so common, I'd guess $100 devices. And let's face it, for everyone doing HD encoding to make their own movies, there are 100 people who won't be.



    Games developers are going to start multithreading their apps. I guess it's going to take 3-5 years before they really get the hang of it, and in that timeframe GPUs are going to be so powerful, I guess they'll be rendering photorealism on the fly, possibly with radiosity, so games aren't going to be so important for pushing CPU requirements in the future.



    So that leaves only pro apps like 3D rendering and video, which most of us aren't going to do. There is going to be a revolution in 3D soon, because someone is bound to release a realtime renderer using a GPU - and that company will own the market. Basically a futuristic game engine from 3 years in the future with modelling possibilities. In fact, such an engine will be better than the current crop of pro 3D apps, because once you start adding pixel shading to the mix, current 3D apps start to look a bit dull and clunky.



    Same with video - ATI and nVidia both have an API in development to move much of the CPU's work onto the GPU, so that pretty much kills that. Pro audio is pretty much covered with 4 cores and a GPU architecture. AMD is touting a DSP in an Opteron socket.



    So we are really left with a bunch of custom scientific apps that no one uses anyway. Unless protein folding becomes a consumer app, I guess that in 5 years CPU development and requirements are really going to die a painful death.



    How do you think Intel and AMD are going to cope with this? AMD seems to have done the smart thing in buying ATI; Intel has the resources to go it alone. But if nobody needs a new computer after the next generation of upgrades, both companies are really going to be in the shit.



    PS: Intel just announced they are working on an 80-core chip that might be commercially viable in the next 5 years.
  • Reply 12 of 34
    marcuk Posts: 4,442 member
    To sum up, based on the improvements of the last 5 years, I would guess that the following will be a high average in 5 years' time:



    8 cores running at 3 GHz

    (Custom DSPs for certain apps if required)

    8GB RAM

    64-128GB flash drives backed up to 1TB hard drives

    GPUs - 64 pipelines running at about 1 GHz

    HD Drive RW



    Once that becomes mainstream, no one is going to need another computer for a very long time.
  • Reply 13 of 34
    Marvin Posts: 15,309 moderator
    Quote:
    Originally Posted by MarcUK


    Games developers are going to start multithreading their apps. I guess it's going to take 3-5 years before they really get the hang of it, and in that timeframe GPUs are going to be so powerful, I guess they'll be rendering photorealism on the fly, possibly with radiosity, so games aren't going to be so important for pushing CPU requirements in the future.



    So that leaves only pro apps like 3D rendering and video, which most of us aren't going to do. There is going to be a revolution in 3D soon, because someone is bound to release a realtime renderer using a GPU - and that company will own the market. Basically a futuristic game engine from 3 years in the future with modelling possibilities. In fact, such an engine will be better than the current crop of pro 3D apps, because once you start adding pixel shading to the mix, current 3D apps start to look a bit dull and clunky.



    They already have photorealistic GPU rendering in the form of Nvidia Gelato:



    http://www.nvidia.com/page/gz_home.html



    It's not real-time at the moment but still pretty fast. As for pixel shading, that actually started in 3D software and moved into GPUs when they were capable of it. Renderman has had programmable shading since the 80s, possibly earlier.



    The PS3 can do real-time HDRI, SSS, radiosity, motion blur etc. but only thanks to the number of CPUs it has.



    Quote:
    Originally Posted by MarcUK


    ...I guess that in 5 years CPU development and requirements are really going to die a painful death.



    I don't think so. Software is having a hard enough time transferring to 64-bit, let alone 256-bit. If they can get backwards compatibility with 32-bit and 64-bit software and at least 2GB of VRAM, then maybe, but I don't think it'll be for a long time. I don't think the chips in GPUs are anywhere near as complex as CPUs either. If anything, I can see the opposite happening: CPUs becoming so fast that we don't need GPUs any more.



    Quote:
    Originally Posted by MarcUK


    How do you think Intel and AMD are going to cope with this? AMD seems to have done the smart thing in buying ATI; Intel has the resources to go it alone. But if nobody needs a new computer after the next generation of upgrades, both companies are really going to be in the shit.



    I can see that happening. When they reach a point where you can render an HD quality movie on a single machine in reasonable time, the market will grind to a halt.



    However, I can also see the companies drawing upgrades out as long as possible to keep the market growing. Even if Intel could release an 80 core CPU, I doubt they would. Gillette could release a 20-blade razor but they've had two blades, then 3, then 4 and now 5. They've also put a battery in it to vibrate it and added a trimmer. It's all about marketing.
  • Reply 14 of 34
    marcuk Posts: 4,442 member
    Quote:
    Originally Posted by Marvin


    They already have photorealistic GPU rendering in the form of Nvidia Gelato:



    http://www.nvidia.com/page/gz_home.html



    It's not real-time at the moment but still pretty fast. As for pixel shading, that actually started in 3D software and moved into GPUs when they were capable of it. Renderman has had programmable shading since the 80s, possibly earlier.



    The PS3 can do real-time HDRI, SSS, radiosity, motion blur etc. but only thanks to the number of CPUs it has.







    I don't think so. Software is having a hard enough time transferring to 64-bit, let alone 256-bit. If they can get backwards compatibility with 32-bit and 64-bit software and at least 2GB of VRAM, then maybe, but I don't think it'll be for a long time. I don't think the chips in GPUs are anywhere near as complex as CPUs either. If anything, I can see the opposite happening: CPUs becoming so fast that we don't need GPUs any more.







    I can see that happening. When they reach a point where you can render an HD quality movie on a single machine in reasonable time, the market will grind to a halt.



    However, I can also see the companies drawing upgrades out as long as possible to keep the market growing. Even if Intel could release an 80 core CPU, I doubt they would. Gillette could release a 20-blade razor but they've had two blades, then 3, then 4 and now 5. They've also put a battery in it to vibrate it and added a trimmer. It's all about marketing.





    Woo! Thanks for that, Marvin. Gelato looks like it will totally own 3D rendering within a couple of years. I wish nVidia all the best with such a product; I hope ATI are working on something too. Shame it only works with Maya and 3ds Max at the moment.



    Have you tried it? I can't, as I don't have nVidia graphics at the moment.
  • Reply 15 of 34
    Marvin Posts: 15,309 moderator
    Quote:
    Originally Posted by MarcUK


    Woo! Thanks for that, Marvin. Gelato looks like it will totally own 3D rendering within a couple of years. I wish nVidia all the best with such a product; I hope ATI are working on something too. Shame it only works with Maya and 3ds Max at the moment.



    Have you tried it? I can't, as I don't have nVidia graphics at the moment.



    I can't try it, I'm afraid. It seems it's limited to Windows and Linux. Here's the usual answer they give:



    Quote:

    Will support for Mac OS X be forthcoming?

    Currently, we do not have plans to create an OS X version. If you are interested in an OS X version of Gelato Pro, let us know. If we find there is sufficient market demand, we will do a Mac port.



    http://www.nvidia.com/object/gz_faq.html#q23



    Yeah, 'cos Linux has such a huge market share.



    This is one reason I'd like to see GPUs made redundant. If we can get DDR3 RAM and massive amounts of throughput from a computer without a GPU, then we can say goodbye to incompatibility (not completely, but significantly for gaming) and graphics driver crashes for good, and there would be no more difficulty in virtualizing hardware acceleration.
  • Reply 16 of 34
    Quote:
    Originally Posted by Marvin


    The PS3 can do real-time HDRI, SSS, radiosity, motion blur etc. but only thanks to the number of CPUs it has.



    The PS3 absolutely cannot do radiosity. Radiosity calculations for even simple scenes take minutes per frame. Radiosity is far, far more complex than ray tracing, which is still too slow to really do on the GPU.



    Even Pixar doesn't do radiosity every frame; they do it once per scene and then approximate it.



    SSS is probably also an approximation, as opposed to true SSS. The others aren't too hard: the PS2, GameCube, and Xbox can all do motion blur, and HDRI really isn't that computationally complex (although, obnoxiously, most current GPUs don't support it).
  • Reply 17 of 34
    hiro Posts: 2,663 member
    Quote:
    Originally Posted by MarcUK


    Each to their own. Even if I had an octo-core, dual-processor machine, I very much doubt I would want more than 2 or 3 apps open at a time.



    But the fact is that even with all your apps open, probably only the frontmost one is actually doing anything with the CPU. Let me guess - you're watching an HD movie, encoding a DVD, flicking through iTunes, rendering a 3D model, writing an email, searching for extraterrestrial life, folding some genes, and viewing Pr0n all at the same time. Well, that doesn't make you an 'average' user.



    Boasting about how many processes you have running is quite sad IMO. The fewer processes running, the better. I've seen systems like you describe (our computer system at work was set up by a twat who doesn't know shit - he just likes to see icons appear in the system tray; I think it gives him a sense of feeling special that he has all these apps running) and it's horrible. A sure sign of a moron who doesn't know jack about computers. Maybe that's not you - just a generalisation.



    Perhaps it's an American thing - leave your computer running 24/7, waste all its resources, waste electricity, screw the environment. How many tonnes of CO2 did you release because you can't be bothered to wait a few minutes a day for your computer to boot and a few apps to launch?



    Who pissed in your corn flakes?



    Just another neo-Luddite who doesn't want to change their ways and then tears down anyone who says it can be better than merely surviving and slogging through.



    And no, I wasn't boasting; I'm actually quite milquetoast and boring in my daily computing. It only takes about a half-dozen open apps to generate statistics like that. I just don't hamstring my machine's capabilities unnecessarily, and you have obviously never bothered to look, so you show a total lack of clue on the whole subject. Now tell us all, where does the false holier-than-thou attitude come from?



    Quote:

    Perhaps it's an American thing - leave your computer running 24/7, waste all its resources, waste electricity, screw the environment. How many tonnes of CO2 did you release because you can't be bothered to wait a few minutes a day for your computer to boot and a few apps to launch?



    Making shit up to try to put folks down is just plain stupid; I have no respect for such malcontents, let alone incorrect ones.
  • Reply 18 of 34
    Marvin Posts: 15,309 moderator
    Quote:
    Originally Posted by gregmightdothat


    The PS3 absolutely cannot do radiosity. Radiosity calculations for even simple scenes take minutes per frame. Radiosity is far, far more complex than ray tracing, which is still too slow to really do on the GPU.



    Even Pixar doesn't do radiosity every frame; they do it once per scene and then approximate it.



    I'm only going by reports:



    http://www.gamasutra.com/php-bin/new...hp?story=10031

    http://www.nzone.com/object/nzone_aq...scription.html



    Quote:
    Originally Posted by gregmightdothat


    SSS is probably also an approximation, as opposed to true SSS. The others aren't too hard: the PS2, GameCube, and Xbox can all do motion blur, and HDRI really isn't that computationally complex (although, obnoxiously, most current GPUs don't support it).



    Again, true. I saw a video of supposed SSS in action in a PS3 game called Lair and it looked pretty fake to me - basic translucency.



    But they are advertising the PS3 as able to reach 2 teraflops, and the quad G5 can only do about 75 gigaflops, though I think there might be some mix-up between system flops and CPU flops:



    http://news.com.com/PlayStation+3+st...3-5709571.html



    I think it's safe to assume it will be about 250 gigaflops max, which is over 3 times the quad G5. Heck, the world's 500th-fastest supercomputer is 2 teraflops, and that has about 1,000 processors. So in that respect, I'd assume it's capable of producing some really nice output.
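
    As a quick back-of-the-envelope check (the 250 gigaflops figure is an assumption, not a measured number):

    \[
      \frac{250\ \text{GFLOPS (assumed PS3 CPU)}}{75\ \text{GFLOPS (quad G5)}} \approx 3.3
    \]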



    As you say, there will be harsher approximations made than the likes of Pixar would be allowed to make, but biased rendering software always approximates, so the techniques are still valid. One-bounce radiosity on a PS3 is still radiosity.
  • Reply 19 of 34
    marcuk Posts: 4,442 member
    The PS3 radiosity might well be ambient occlusion, which is a very good trick for simulating radiosity.
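
    For anyone wondering what that trick actually boils down to, here is a bare-bones sketch -- plain C, one hard-coded occluder sphere, every number invented for illustration -- of the idea: instead of solving light transport, fire random rays over the hemisphere above a point and use the unblocked fraction as the ambient term.

    /*
     * Toy ambient-occlusion estimate for a single shading point.
     * Compile with: cc ao.c -lm
     */
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { double x, y, z; } Vec;

    /* Does a ray from p in unit direction d hit the hard-coded sphere? */
    static int occluded(Vec p, Vec d)
    {
        Vec c = {0.0, 1.0, 0.0};          /* occluder sphere centre */
        double r = 0.5;                   /* occluder sphere radius */
        Vec oc = {p.x - c.x, p.y - c.y, p.z - c.z};
        double b = oc.x * d.x + oc.y * d.y + oc.z * d.z;
        double cc = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - r * r;
        return b * b - cc >= 0.0 && -b > 0.0;   /* intersection in front of p */
    }

    int main(void)
    {
        const double PI = 3.14159265358979323846;
        Vec p = {0.0, 0.0, 0.0};          /* shading point on the "ground" */
        int samples = 10000, open = 0;

        for (int i = 0; i < samples; i++) {
            /* Random direction in the upper hemisphere (normal = +Y). */
            double u = 2.0 * PI * rand() / RAND_MAX;
            double v = (double)rand() / RAND_MAX;     /* cos(theta) in [0,1] */
            double s = sqrt(1.0 - v * v);
            Vec d = {cos(u) * s, v, sin(u) * s};
            if (!occluded(p, d))
                open++;
        }
        /* The ambient term: 1.0 = fully open sky, 0.0 = fully blocked. */
        printf("ambient occlusion term: %.3f\n", (double)open / samples);
        return 0;
    }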
  • Reply 20 of 34
    Quote:
    Originally Posted by MarcUK


    The PS3 radiosity might well be ambient occlusion, which is a very good trick for simulating radiosity.



    That's probably it. That's what Pixar uses.