Intel's first quad-core chips to arrive this year


Comments

  • Reply 41 of 56
    hiro Posts: 2,663 member
    Quote:

    Originally posted by ChevalierMalFet

    Uh, when did we enter world-as-myth? Have I been reading too much Heinlein?



    But seriously, good call on the non-MP nature of Cloverton, which I think makes both Cloverton and Kentwood losing propositions from a performance standpoint, though likely a bit cheaper in a complete system.

    [edit: thanks for the clarification MWSwami, though that CPU bus is disappointing]




    Non-MP in this case means 4 cores, not 8, 12, 16+ cores. A 4-core module is still nothing to sneeze at, and definitely not worthy of being called a "losing proposition".





    Quote:

    So Leopard's going to have überthreading of some kind? You have this on record? I wouldn't count on one OS or the other having across-the-board speed advantages now that we are on the same hardware.



    More work on making the kernel locks narrower in scope and adding more of them. This means one call that would have locked up the entire system with a beachball (10.2-and-earlier style locks; 10.3/4 is a little better, but not enough), like a directory query for an unmounted network volume, will now just lock up that portion of the filesystem. This should allow a lot more oomph from multi-threading: the more threads you have, the more opportunity for one of them to make one of those locking calls. So a lock may block one thread, but not all of them as it used to. This will make a huge difference in throughput and unleash a whole new level of multi-threading performance potential.
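
    A minimal user-space sketch of the difference (POSIX threads, with invented names like fs_lock and slow_network_query; this is illustrative, not actual xnu kernel code):

    Code:

    #include <pthread.h>
    #include <unistd.h>

    struct volume { pthread_mutex_t lock; /* ...per-volume state... */ };

    /* Stand-in for a directory query on an unmounted network volume. */
    static void slow_network_query(struct volume *v) {
        (void)v;
        sleep(30);                       /* imagine a network timeout */
    }

    /* 10.2-era coarse style: one lock over the whole filesystem layer,
     * so one hung volume beachballs every filesystem caller. */
    static pthread_mutex_t fs_lock = PTHREAD_MUTEX_INITIALIZER;

    void query_coarse(struct volume *v) {
        pthread_mutex_lock(&fs_lock);
        slow_network_query(v);
        pthread_mutex_unlock(&fs_lock);
    }

    /* Narrower style: one lock per volume, so only threads touching the
     * same volume wait; unrelated filesystem work keeps running. */
    void query_narrow(struct volume *v) {
        pthread_mutex_lock(&v->lock);
        slow_network_query(v);
        pthread_mutex_unlock(&v->lock);
    }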
  • Reply 42 of 56
    mwswami Posts: 166 member
    Aah, I know what the confusion is regarding Clovertown not being MP capable.



    While reading Intel Xeon Processor Numbers at Intel.com, I discovered that Intel makes a distinction between DP (dual processor) and MP (multi processor, >2 capable).



    So Clovertown is definitely not MP but it is DP capable. So, Wikipedia is correct.
  • Reply 43 of 56
    hirohiro Posts: 2,663member
    I wouldn't give it that much credit. The Wikipedia article makes no mention of the distinction. Being right by accident, through sloppy language whose omissions create an inconsistency, doesn't get my vote for accuracy.
  • Reply 44 of 56
    jeffdm Posts: 12,951 member
    Quote:

    Originally posted by slughead

    I don't see the point of having more than 2 cores.



    It seems to me that the only reason to do this is to avoid upgrading processor speeds.. just add another core and theoretical processing power goes up, right?



    ...



    I'm waiting for a dual-core 5GHz before I upgrade. Benchmark programs are fine and dandy, but the real world calls for something a bit more practical.




    The problem is frequency scaling, and putting Moore's law into perspective.



    Doubling the frequency of a particular CMOS device approximately quadruples its power consumption. Doubling the processor cores in theory doubles the power consumption, though the Core Duo only takes 5W more than the Core Solo. Moore's law talks about processor complexity, not total compute power: an approximate doubling of transistors every year and a half, and those transistors have to go somewhere. The easiest way to use them is to add cores and increase cache. If there were an effective way to increase the performance of one core rather than adding more cores, I think the industry would still be following that path, but in reality there are diminishing returns there.
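
    A hedged back-of-envelope reading of that ~4x figure (the sqrt(2) voltage assumption is mine, not JeffDM's):

    Code:

    % Dynamic CMOS switching power:
    P_{\mathrm{dyn}} = \alpha C V^{2} f
    % Doubling f alone doubles P_dyn. If the higher clock also requires
    % a voltage bump of roughly sqrt(2), the combined factor is
    \frac{P'}{P} = 2 \cdot (\sqrt{2})^{2} = 4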



    Also, more cores might mean more power consumption, but if you aren't using the power, I think the unused cores can be turned off.
  • Reply 45 of 56
    slughead Posts: 1,169 member
    Quote:

    Originally posted by JeffDM

    Doubling the frequency of a particular CMOS device approximately quadruples its power consumption. Doubling the processor cores in theory doubles the power consumption, though the Core Duo only takes 5W more than the Core Solo. Moore's law talks about processor complexity, not total compute power: an approximate doubling of transistors every year and a half, and those transistors have to go somewhere. The easiest way to use them is to add cores and increase cache. If there were an effective way to increase the performance of one core rather than adding more cores, I think the industry would still be following that path, but in reality there are diminishing returns there.



    Thanks! I didn't know any of that (though I knew higher clock = higher power, I didn't know the numbers).



    Power consumption is an issue, but it doesn't mean we should ditch speed for transistors.



    Yes, multiple cores are more power-efficient and they comply with Moore's law, but that doesn't mean they're a substitute for higher clock speeds.



    I know posting this on a Mac board is usually futile, since Apple was always ranting about the "MHz myth." For demanding single-threaded tasks like gaming, though, clock speed still rules.



    I think it's great that we're going towards more "efficient" processors, but "effectiveness" is also important.



    'Low power' does not mean 'better', and as you pointed out, in the end, the raw 'torque' of Hz will mean better performance for many tasks, and it will always come at a higher price.



    If a CPU can perform more cycles per second than another CPU, and they each perform the same number of operations per cycle, then the more Hz, the better. The limiting factor would be RAM, the FSB, etc., but NOT the CPU.
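
    Restated as the standard throughput identity (my formulation, not slughead's):

    Code:

    % Throughput = instructions per cycle times clock rate:
    \mathrm{performance} = \mathrm{IPC} \times f
    % With IPC held equal, more Hz wins -- until RAM or the FSB, not
    % the core, becomes the bottleneck, exactly as noted above.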



    We're on the verge of another spike in CPU speeds, just like the one that occurred when we found silicon + germanium = OMG FAST!



    You're right about the diminishing returns on Hz as well. Going in that direction works against miniaturization, and it's a "long hard slog" to move forward.



    Jamming 4 processors into a chip will lead to more performance, but it's unlikely anyone will notice aside from people who run servers and render things all day.



    For consumers, a modern dual core will do, and gaming performance won't increase except with new GPUs and higher clock speeds--both of which are slowing down in development for the time being.
  • Reply 46 of 56
    programmer Posts: 3,458 member
    Quote:

    Originally posted by slughead

    Jamming 4 processors into a chip will lead to more performance, but it's unlikely anyone will notice aside from people who run servers and render things all day.



    For consumers, a modern dual core will do, and gaming performance won't increase except with new GPUs and higher clock speeds--both of which are slowing down in development for the time being.




    More cores and more parallelism per core is the way of the future. Frequency scaling (at the rate seen through the '80s and '90s) is not sustainable, but throughput scaling is. As this trend becomes more pervasive, the software community will adapt, and in most of your software you will notice the impact of the increased hardware parallelism. Software that doesn't adapt either doesn't need more performance (and a lot falls into that category), or it will be crushed by the competition.
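
    A minimal sketch of what that adaptation looks like in practice (the toy workload and names are invented): splitting a data-parallel job across worker threads with POSIX threads.

    Code:

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4
    #define N 1000000

    static double data[N];               /* toy workload: all zeros here */
    static double partial[NTHREADS];

    static void *worker(void *arg) {
        long id = (long)arg;
        long chunk = N / NTHREADS;
        double sum = 0.0;
        for (long i = id * chunk; i < (id + 1) * chunk; i++)
            sum += data[i];
        partial[id] = sum;               /* each thread writes its own slot */
        return NULL;
    }

    int main(void) {
        pthread_t t[NTHREADS];
        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        double total = 0.0;
        for (long i = 0; i < NTHREADS; i++) {
            pthread_join(t[i], NULL);
            total += partial[i];
        }
        printf("sum = %f\n", total);
        return 0;
    }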
  • Reply 47 of 56
    slughead Posts: 1,169 member
    Quote:

    Originally posted by Programmer

    More cores and more parallelism per core is the way of the future. Frequency scaling (at the rate seen through the '80s and '90s) is not sustainable, but throughput scaling is. As this trend becomes more pervasive, the software community will adapt, and in most of your software you will notice the impact of the increased hardware parallelism. Software that doesn't adapt either doesn't need more performance (and a lot falls into that category), or it will be crushed by the competition.



    As I said before, there are many things that will remain unaffected by multiple cores.



    As I said before, this includes gaming--where little more than 10% of the processes can be put into other threads.



    People can rant about threaded code all they want, but as a programmer, you should already know that many tasks can never, logically, be threaded.



    For the unthreaded, multiple cores are meaningless, and many things will remain almost entirely unthreaded for the foreseeable future.



    It bothers me that people act like you can thread everything. Anyone who's written software for any length of time knows that's bogus.



    I have written a few apps that do extensive work in different threads, but most apps are still just finding a result and acting on it. It's the nuts and bolts of apps that account for most of the processing, and those are almost always linear.



    Obvious exceptions being pro software, but we'll leave that out.



    Gaming is an obvious example of something that is nearly impossible to thread in a significant way. Maybe that extra processor will give you another 10% FPS in some spots, but not even remotely as much as getting the same FPU power from a single processor.



    Multithreading is a bunch of hype compared to gains in clock speed. The only things I could see really gaining from this are slight speed-ups in everyday tasks like file management, e-mail, and web browsing. There's already a plugin for Firefox that loads links from pages you're looking at and caches them. Don't forget about Apple's caching of folder contents in the background. People are literally scraping the bottom of the barrel for ways to use these extra cores, because they're rarely used otherwise.



    So I say again: 2 cores? OK. 4 cores? Just give me a 10% higher clock speed for my money, as that'll actually be noticeable most of the time.
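
    For scale, Amdahl's law puts a number on the 10% estimate above (the 10% is slughead's figure, not a measurement):

    Code:

    % Amdahl's law: speedup with fraction p parallelizable on n cores.
    S(n) = \frac{1}{(1 - p) + p/n}
    % With p = 0.10 (the 10% estimate above) and n = 4:
    S(4) = \frac{1}{0.90 + 0.10/4} = \frac{1}{0.925} \approx 1.08
    % i.e. roughly an 8% gain, consistent with the "10% FPS" ballpark.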
  • Reply 48 of 56
    Wrong thread.



  • Reply 49 of 56
    kupan787 Posts: 586 member
    Quote:

    Originally posted by slughead

    I don't see the point of having more than 2 cores.



    It seems to me that the only reason to do this is to avoid upgrading processor speeds.. just add another core and theoretical processing power goes up, right?



    When doing things that can only run on one core, my dual 2.5GHz G5 wouldn't be any slower than a quad, penta, or eleventybillion-core machine.



    I mean, I'm all for it, but many tasks aren't threaded and some will never be.




    http://techreport.com/onearticle.x/10247



    Quote:

    The functionality is said to allow two CPU cores to operate as a single one, theoretically improving performance in applications not optimized for multiple threads [...] A new BIOS for Intel 975X motherboards seems to add a "Core Multiplexing Technology" (CMT) option



    If this pans out, it could make things very interesting for single threaded apps.
  • Reply 50 of 56
    kupan787 Posts: 586 member
    Quote:

    Originally posted by slughead

    As I said before, this includes gaming--where little more than 10% of the processes can be put into other threads.



    I think that people coding for the PS3 and Xbox 360 would beg to differ. In the next few years, games will be written for these (and other) multi-cored machines.
  • Reply 51 of 56
    slughead Posts: 1,169 member
    Quote:

    Originally posted by kupan787

    http://techreport.com/onearticle.x/10247



    ----------



    If this pans out, it could make things very interesting for single threaded apps.




    That'd certainly do the trick, wouldn't it?



    That is, of course, a rumor with fancy-looking drawings attached. It does seem like a logical step though.



    As far as the triple-core Xbox 360 and 7-core PS3 go, just because they have the power doesn't mean it's going to get used. A lot of this could be marketing. "WE HAVE MORE FPU THAN YOU!"... gamers are into that sort of thing; that's why they bought Radeon 9800s even though they had a problem with melting fans.



    It seems far-fetched to say that multiple cores in the next-gen machines are a ruse, but it wouldn't be the first time.



    Remember that Intel was giving away free stuff a few years ago, and all you had to do to get it was sit in on a lecture explaining that megahertz were everything and that AMD's naming scheme (the XP3200 really runs at ~2,200MHz) was meant to deceive. Of course, the XP3200 was outperforming Intel's 3GHz Pentiums, but they might have overlooked that.



    Certainly the future in gaming will bring more multithreaded code, but I maintain that the vast majority of operations will be performed by a single thread as far as gaming is concerned. A multi-core GPU, however... well, that's a different story.



    In short, I think the only reason the Xbox 360 included three cores was to assure the buying public that their machines would not become obsolete for a very long time. This will also force the gaming industry to do as much multithreading as possible, but multithreading requires skilled and experienced programmers, and that represents a drastic cultural change for the industry. Even with all the skill in the world, though, highly interactive programs that require lots of power end up focusing on a single thread.



    Hopefully, some super-genius will figure out a software alternative to CMT... That'd just change everything.
  • Reply 52 of 56
    hiro Posts: 2,663 member
    Quote:

    Originally posted by slughead

    Hopefully, some super-genius will figure out a software alternative to CMT... That'd just change everything.



    I could write one in an afternoon, but it would be slower than frozen dogshit. Certain types of operations need to be done in hardware or avoided altogether because of the speed penalties, and anything that has to do with the on-silicon CPU instruction cycle fits that description.
  • Reply 53 of 56
    Quote:

    Originally posted by kupan787

    http://techreport.com/onearticle.x/10247







    If this pans out, it could make things very interesting for single threaded apps.




    I think I remember reading somewhere that this was a no-go for Intel (Ars, I believe). In any event, you'd need some serious branch prediction not to have to throw away 56 cycles or so every time you had a prediction error on a 4-core design (I just picked a 14-stage pipeline out of the air for the estimate).
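
    Reconstructing that arithmetic (the 14-stage depth is, as the poster says, picked out of the air):

    Code:

    % Flushing a mispredicted path across all four ganged cores:
    \text{penalty} \approx \text{pipeline depth} \times \text{cores}
                   = 14 \times 4 = 56 \ \text{cycles}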



    I don't really think this is terribly feasible, nor desirable. In time, software engineers will hash out how to change their program designs to make better use of multiple cores; until then, we will hear a lot from frustrated programmers trying to get decent performance out of logic designed for single-core CPUs.
  • Reply 54 of 56
    hiro Posts: 2,663 member
    Yeah. They are gonna regret thinking synchronization was BS during their OS courses. It's not that big a deal if you deal with it from the start, but it's damn near impossible to graft successfully into a project later. And it has only made sense to intelligently multi-thread just about everything for the last ten years or so.



    <sarcasm>So it's not like it's something that we should already have a handle on or anything, right?</sarcasm>
  • Reply 55 of 56
    tht Posts: 5,452 member
    Would the "Reverse Hyperthreading" and "Core Multiplexing" rumors simply go away now? Please. Core Multiplexing may exist, but it isn't going to be some technology that speeds up single-threaded processes using two cores.



    It's really all a bunch of hooey, and all of the tech websites should be embarrassed for propagating it. If it were that easy, dollars to donuts it'd be a whole lot easier just to add more execution units in the core, and that would be a whole lot faster as well. The CPU manufacturers would love to do that instead of wasting precious wafer space on extra cores.



    But it really isn't going to happen. CPU manufacturers are moving to multi-core designs, while the research goes into auto-parallelizing compilers. Software cycles take a lot longer than CPU cycles: it'll take 3 or 4 years, if not longer, to truly change to multithreaded designs, and that's assuming the work can be properly threaded, before a lot of gain is realized.



    I still think there is more room to extract single-threaded performance. Merom, Conroe, and Woodcrest managed it with out-of-order prefetching, a wider core, and instruction fusion. There's probably some more that can be eked out.
  • Reply 56 of 56
    hiro Posts: 2,663 member
    Seconded.



    I think the biggest gains will come from application programmers not being afraid of threading and synchronization, though. Picking and implementing the right set of threads can make VERY significant differences in an application's performance. But it takes a willingness to quit thinking serially (which is hard for many) and to stop being afraid of synchronization (which is harder for most), because if you are afraid of synchronization you will probably botch the choices and implementation, causing more performance problems than you solve.
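
    A toy illustration of that last point (the example is mine, not hiro's): skip the lock out of fear and the counter silently loses updates.

    Code:

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

    /* Fear-of-locks version: a data race. Two threads interleave the
     * load/add/store behind "counter++" and updates get silently lost. */
    static void *racy(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            counter++;
        return NULL;
    }

    /* Synchronized version: the mutex serializes the update. Slower per
     * iteration, but correct; the real skill is picking lock granularity. */
    static void *locked(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&m);
            counter++;
            pthread_mutex_unlock(&m);
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, locked, NULL);  /* swap in racy() to watch updates vanish */
        pthread_create(&b, NULL, locked, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld (expected 2000000)\n", counter);
        return 0;
    }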