AMD ClawHammer 64-bit 3400+ VS other processors


Comments

  • Reply 21 of 29
    [quote]Originally posted by Eskimo:

    Actually you are incorrect. While SPEC doesn't have any suite specifically designed to test SIMD, the compiler team that Intel pours millions of dollars into each year has developed compilers for SPEC (and, by SPEC bylaws, available to the public) which vectorize many of the SPEC applications to take advantage of SSE2. AMD's inclusion of SSE in their Athlon XP is one reason their SPEC scores saw a noticeable improvement.[/quote]



    Ah, how naive of me to put it past Intel to compile SPEC with vectorising compilers.



    Do you have a source for this info? (not that I don't believe you, but if I repeat this and someone asks _me_ for a source, I'd like to be able to point to one)
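For context on what "vectorize" means here, a loop like the sketch below is the bread-and-butter case. The function name is illustrative, not taken from any SPEC test; the point is only that a compiler targeting SSE2 can process four floats per instruction instead of one.

```c
#include <stddef.h>

/* Elementwise multiply-add over float arrays. A vectorizing compiler
 * with SSE2 enabled can rewrite this loop with packed instructions
 * (mulps/addps, four floats per 128-bit register); without SSE2 it
 * falls back to one scalar operation per iteration. */
void saxpy(float *y, const float *x, float a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```

Recompiling the same source with a vectorizing compiler is enough to move a score, which is why the choice of compiler matters so much in SPEC results.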
  • Reply 22 of 29
    programmer (Posts: 3,467)
    [quote]Originally posted by Mac Sack Black:

    Ah, how naive of me to put it past Intel to compile SPEC with vectorising compilers

    Do you have a source for this info? (not that I don't believe you, but if I repeat this and someone asks _me_ for a source, I'd like to be able to point to one)[/quote]





    Intel has been "cheating" on the SPECmarks for years. Their compilers do all sorts of things to recognize particular algorithms used by the official SPECmarks and speed them up -- sometimes to the detriment of non-SPECmark code! The result is that the SPECmark scores on Intel machines are often distorted and don't reflect performance on real applications. They don't exactly admit to this, so it's hard to find an official source you can point to as proof.



    This isn't to say that their compilers aren't any good -- on the contrary, they are quite good. It's just that they aren't as good as the benchmarks would have you believe.
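As a tame illustration of the pattern-matching being described (the SPEC-specific rewrites aren't public, so this is the general mechanism, not Intel's actual code): many compilers recognize whole loop idioms and substitute something faster.

```c
#include <stddef.h>

/* Idiom recognition: a compiler that pattern-matches this loop can
 * replace the entire body with a single memset(buf, 0, n) call.
 * Benchmark-tuned compilers apply the same trick to far more specific
 * code shapes found in the SPEC sources. */
void clear_buf(unsigned char *buf, size_t n)
{
    for (size_t i = 0; i < n; i++)
        buf[i] = 0;
}
```

The catch is the one mentioned above: the extra pattern-matching machinery can misfire on ordinary code that merely resembles the benchmark.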
  • Reply 23 of 29
    eskimo (Posts: 474)
    [quote]Originally posted by Mac Sack Black:

    Ah, how naive of me to put it past Intel to compile SPEC with vectorising compilers

    Do you have a source for this info? (not that I don't believe you, but if I repeat this and someone asks _me_ for a source, I'd like to be able to point to one)[/quote]



    Do you want a source that Intel uses their own compilers for SPEC? If so, look up any result at SPEC; next to each result, all configuration information has to be documented in HTML, PDF, TXT, PS, and .cfg formats.

    <a href="http://www.spec.org/osg/cpu2000/results/res2001q3/cpu2000-20010827-00813.html" target="_blank">here</a> is an example of a 2.0GHz P4 processor; note that the compilers used are Intel's own.



    You want a source that Intel vectorizes for SSE2 in their own compiler? <a href="http://intel.com/technology/itj/q12001/articles/art_6.htm" target="_blank">This paper</a> is all about Intel's C++ compiler and how it optimizes for SIMD operations. Intel has whole sections of their software developer site devoted to teaching you how to vectorize your code. They even have online tutorials if you are interested.



    Of course, even though these resources are available, the majority of applications for the PC are still compiled with Microsoft's Visual Studio, which does not include the SSE2 enhancements without a plug-in from Intel.
  • Reply 24 of 29
    yes, there's no doubt that intel compilers are... to use a coined phrase... trick compilers. but that shouldn't matter, should it? isn't it enough that the intel compiler makes things fast (and no, not just for SPEC operations)?

    look here:

    <a href="http://www.aceshardware.com/read.jsp?id=40000189" target="_blank">http://www.aceshardware.com/read.jsp?id=40000189</a>

    i don't quite think it's fair to say "intel's chips are performing as well as they are because they have better compiler technology" but rather "the other chips are performing poorly because they're stuck with poorer compiler technology".



    as to intel cheating on spec scores, first of all, everyone does. secondly, Sun does it the worst. they've optimized one specific function to work many many times faster. (sorry, i can't remember it off the top of my head and i couldn't find a link, but i think it was in one of Paul Demone's articles).



    also, yes, as of now visual C (and visual .net) won't help performance, but software that takes a long time to render (such as Alias' Maya, LightWave, 3ds Max) has all added SSE help (in some cases intel chips are actually as fast as athlons, heh)



    Ted
  • Reply 25 of 29
    eskimo (Posts: 474)
    [quote]Originally posted by Detnap:

    Sun does it the worst. they've optimized one specific function to work many many times faster. (sorry, i can't remember it off the top of my head and i couldn't find a link, but i think it was in one of Paul Demone's articles).[/quote]



    That would be 179.art; check out <a href="http://www.spec.org/osg/cpu2000/results/res2001q4/cpu2000-20011119-01121.html" target="_blank">this benchmark</a> of one of their Sun Blade 1000s. Note the extraordinary performance on that one test over the others. They claim it's fair and square; I think every compiler engineer in the world would love to see what they did.



    I don't think it's "cheating" to optimize your compilers for SPEC, since by SPEC rules that compiler must be made available to the public. I do think it is "cheating" to claim that your performance on SPEC using highly optimized compilers can be directly applied to real-world performance, where real-world apps don't use that optimized compiler.
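179.art is a neural-network simulator, and the speedup is often attributed to a data-layout transformation; that attribution is secondhand, so treat the sketch below as an illustration of why layout alone can matter, not as Sun's actual code. The struct and field names are made up.

```c
#include <stddef.h>

#define N 1024

/* Array-of-structs: walking one field strides past the unused fields,
 * so most of each cache line fetched is wasted. */
struct cell_aos { double weight; double output; double delta; };

/* Struct-of-arrays: the same field is contiguous in memory, so every
 * cache line is fully used and the loop vectorizes cleanly. */
struct cell_soa { double weight[N]; double output[N]; double delta[N]; };

double sum_weights_aos(const struct cell_aos *v, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += v[i].weight;         /* stride of 24 bytes */
    return s;
}

double sum_weights_soa(const struct cell_soa *v, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += v->weight[i];        /* stride of 8 bytes */
    return s;
}
```

Both functions return the same sum; only the memory traffic differs, which is exactly the kind of gap a compiler that restructures data behind your back can exploit.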
  • Reply 26 of 29
    [quote]Originally posted by Detnap:

    as to intel cheating on spec scores, first of all, everyone does.[/quote]



    Agreed, but some go further than others. Your example of Sun's recent benchmarking oddity is a case in point. In the past we've also seen Intel "sprucing up" the SPECmark source code a little with #pragmas and the like to get their compiler to work better. And we've seen them use their compiler to compare Intel processor performance to AMD processor performance... which is grossly unfair because optimization for one is usually pessimization for the other (and for previous Intel chips!!). We've also seen loops removed from the code completely... and incorrectly, which looked like an attempt to make SPECmarks go faster. No proof, of course, and they fixed the "bug" when it was reported. Reminds me of ATI coding their drivers to run faster for specific benchmarks while breaking everything else. This stuff happens because people base buying decisions on these benchmarks.
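For reference, "sprucing up" with #pragmas looks roughly like this; `ivdep` is the Intel-style hint (an assumption about the exact spelling in use at the time), and compilers that don't recognize the pragma simply ignore it, so the source still builds everywhere.

```c
#include <stddef.h>

/* The pragma asserts there are no loop-carried dependences, which
 * frees a compiler that honors it to vectorize the loop even when it
 * cannot prove dst and src never alias. Compilers that don't know the
 * pragma skip it (at most a warning), so behavior is unchanged. */
void scale(float *dst, const float *src, float k, size_t n)
{
#pragma ivdep
    for (size_t i = 0; i < n; i++)
        dst[i] = k * src[i];
}
```

A one-line hint like this changes nothing semantically, but it can swing the compiled code, which is why SPEC's run rules police source modifications.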
  • Reply 27 of 29
    g-dog (Posts: 171)
    I actually visited Intel for a job shadow the other day. I talked to one of their programmers, who had been a hardware guy for a couple of years before that. I told him that I was a Mac guy.



    This is no exaggeration.



    His eyes lit up, he stood up in his cubicle, and looked around to see if anybody was listening in. Then he leaned over to me and said, "I actually prefer Macs over PCs." He then even went on explaining why they (the G4 mainly) are much better than the P4. We talked about this until I left about an hour and a half later. What was most interesting was that during our talk, he didn't just stick to comparing the G4 to the P4. He went on to say what the next generation of processors would bring to the game, such as the Itanium. However, he said there was a problem that would keep them from scaling over 800 MHz for at least another 6-8 months, and Intel was going to rely heavily on them once they get past the problem.

    Then the big one came.

    He said, "Even though the Itanium's new capabilities are going to be great, Motorola's new proc is going to be something truly to behold." He actually said that Intel would really have some trouble on their hands trying to get the Itanium to even come close to keeping up, and that new designs wouldn't be ready for a very long time.
  • Reply 28 of 29
    well, i think it's important to note that important apps are starting to get optimizations (again, notably maya and lightwave). so even if the majority of apps don't have it, i'd argue that the majority of apps don't need it (ie, why does office xp need sse2 optimizations? i mean, that paperclip only takes up so much processor time)

    even though Chris Cox said that there'll be only marginal improvement in photoshop with sse extensions, i believe we'll see it in.



    [quote]In the past we've also seen Intel "sprucing up" the SPECmark source code a little with #pragmas and the like to get their compiler to work better.[/quote]i thought that spec had specific rules about changing source code. perhaps some linkage?

    [quote]And we've seen them use their compiler to compare Intel processor performance to AMD processor performance... which is grossly unfair because optimization for one is usually pessimization for the other[/quote]but of course, intel's compiler helps the performance on amd's chips as well. i mean, of course we shouldn't see 3dnow! optimizations in an intel compiler, but it still provides an improvement (look here:

    <a href="http://www6.tomshardware.com/cpu/00q4/001125/p4-06.html" target="_blank">http://www6.tomshardware.com/cpu/00q4/001125/p4-06.html</a>)

    i think the point that i'm desperately trying to make is that yes, right now not everything is optimized, but more and more programs are, and the advances seen in spec can be seen in the real world (ie, high-end "long tasked" programs)



    [quote]Reminds me of ATI coding their drivers to run faster for specific benchmarks while breaking everything else.[/quote]they just did it again (with the new Radeon chips). these guys never learn. but again, it's different in this case because, as i said, intel's improvements are visible (maya says that version 4 improved performance 44% for the pentium 4, 30% (or so) for the athlon) and i don't think that's anything to sneeze at.



    G-Dog:

    remember, the itanium isn't really "working" just yet, and even after the second generation itanium comes out, it'll still be .18 micron and intel will need to do a die shrink on it to catch up with the industry. with that die shrink, it should scale quite well. again, it shouldn't matter, because the itanium in its first few years won't even be in the same market as any moto chip. its competition will be the power4-II (and stuff like that). i think it would be crazy to think that moto would come out with any sort of enterprise server type chip. moto is into "small".

    Now, the g4 vs the p4: yes, the g4 does have its advantages (lower power consumption, better simd, easier-to-program simd, runs osx), but the p4 still has the big one. It's faster.



    Ted
  • Reply 29 of 29
    tom's hardware has a full report about a P4 overclocked to a fine 3.1 GHz. it is impressive; the new ones have a 512K L2 and they do smear Athlons now. I fear this means the MHz gap will continue to grow. not to say it matters, or that intel makes better chips, but they do overclock well. a friend of mine has a 533 celeron running at 1066, and that shit amazes me.