Intel unleashes Mac-bound "Woodcrest" server chip


Comments

  • Reply 81 of 565
    jeffdm Posts: 12,951 member
    Quote:

    Originally posted by backtomac

    When will hyperthreading be resurrected? It seems like a good way to enhance performance without increasing power requirements or heat.



    I thought it generally wasn't as useful as anyone might have expected. It was useful for certain niche tasks but now that dual core is standard, it doesn't seem to be so necessary.
  • Reply 82 of 565
    kukito Posts: 113 member
    Intel has a cheap (at least according to them) water cooling solution. It was designed for Extreme Edition systems but I imagine it could be adapted to Woodcrest.



    Intel and AMD are both working on "reverse hyperthreading", also known as Core Multiplexing Technology, which is a rumored feature for the Core 2 Duo. In theory it would allow one thread to be split across the two cores to increase performance.
  • Reply 83 of 565
    aegisdesign Posts: 2,914 member
    Quote:

    Originally posted by sunilraman

    Given that even current Xeon and Opteron servers AFAIK do *not* use liquid cooling (admittedly they are different form factors: 1U, 2U or whatever rack style) ... the Mac Pros, even the highest end, will most likely *not* use liquid cooling. Some sort of smart heatpipe system with *ahem* thermal paste applied properly ... Fan noise should not be an issue with dampening material, 120mm fans or whatever.



    I think Apple engineering will deliver internals that provide the Woodcrest dualie/quad power the Mac Pros should have, and they should be able to manage the acoustics quite well. Well, that's the faith I currently have in the Mac Pros.



    Washable filters that you could easily slide in and out to prevent dust buildup inside the case would be nice as well, but that's pushing the wishlist a bit.




    Servers don't need liquid cooling because they don't have to be quiet; they're hidden away in server rooms with extreme air conditioning.



    Ask any server admin, and when they take their ear plugs out and their coats off, they'll tell you about their troglodyte existence in the server room and their desire to be sat next to a slightly toasty but quiet desktop computer.
  • Reply 84 of 565
    sunilraman Posts: 8,133 member
    Heh. The server room should be renamed the "server cave"
  • Reply 85 of 565
    sunilraman Posts: 8,133 member
    Core multiplexing [Hyperthreading for Core] link:

    http://www.xtremesystems.org/forums/...d.php?t=104178



    Hardmac.com has a sensible quick word on it:

    "So in the future, non optimized applications will benefit from multicore CPU, even though it will never be as fast as a multicore-aware version of the same application. So this technology might be important for already existing applications, but developers should always try to get the best out of multicore CPU; especially for Pro applications."
  • Reply 86 of 565
    sunilraman Posts: 8,133 member
    Core multiplexing images below. Looks sweet in theory...!







  • Reply 87 of 565
    ZachPruckowski
    I sort of wish they wouldn't do it. This strikes me as more of a crutch for programs that aren't MP-aware. I think Apple's big developer boost in the next two years will be when they make Cocoa multi-threading as painless as possible, because by WWDC 2007, dual-core will be standard issue in almost all computers.
  • Reply 88 of 565
    mwswami Posts: 166 member
    If it works, it should be very welcome to the gaming community, as there are a lot of single-threaded, CPU-hungry games out there.
  • Reply 89 of 565
    jeffdm Posts: 12,951 member
    Quote:

    Originally posted by ZachPruckowski

    I sort of wish they wouldn't do it. This strikes me as more of a crutch for programs that aren't MP-aware. I think Apple's big developer boost in the next two years will be when they make Cocoa multi-threading as painless as possible, because by WWDC 2007, dual-core will be standard issue in almost all computers.



    How many years have they had to improve Cocoa's multi-threading?



    This solution seems a bit weird; it would effectively be a very wide-issue processor, which makes me wonder why they did dual core. But I imagine that this would improve all programs, single- or multithreaded. There will always be portions of programs that can't easily be broken down to fit multiple threads.
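    To put a rough number on that last point: Amdahl's law says the serial portion of a program caps the total speedup no matter how many cores or SMT threads you add. Here is a minimal sketch of the arithmetic, using illustrative fractions rather than anything measured:

    ```c
    #include <stdio.h>

    /* Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
     * fraction of the work that can run in parallel and n is the
     * number of cores (or hardware threads). */
    static double amdahl_speedup(double parallel_fraction, int cores)
    {
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores);
    }

    int main(void)
    {
        int core_counts[] = { 2, 4, 8, 16 };

        /* Even with 90% of the work parallelised, the serial 10%
         * dominates as the core count climbs. */
        for (int i = 0; i < 4; i++) {
            int n = core_counts[i];
            printf("p = 0.90, %2d cores: %.2fx\n", n, amdahl_speedup(0.90, n));
        }
        return 0;
    }
    ```

    With 90% of the work parallel, 16 cores top out around a 6.4x speedup; the stubbornly serial portions are exactly the ones being described above.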
  • Reply 90 of 565
    chucker Posts: 5,089 member
    Why would you need threading on a level as high as Cocoa?
  • Reply 91 of 565
    ZachPruckowski
    Quote:

    Originally posted by Chucker

    Why would you need threading on a level as high as Cocoa?



    What I meant was "in Xcode". As in:



    1) Write your normal application in Xcode in Obj-C

    2) Compile, and check some "multi-processor aware" box

    3) There is no step three.



    As I understand it, software programmers are scared (wrong word) because effective multi-threading is a pain in the butt. Therefore, if Apple wants to make Obj-C attractive to developers and get more Mac-only or Mac-first applications, they need to make programming for the Mac easy and powerful.
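    For a sense of why that checkbox is so hard to offer: even a trivially parallel job takes a fair bit of manual bookkeeping with plain POSIX threads, which is roughly the boilerplate a friendlier Cocoa would have to hide. A minimal sketch in C with pthreads (not any Apple API; the worker split here is invented purely for illustration):

    ```c
    #include <pthread.h>
    #include <stdio.h>

    #define N_THREADS 2
    #define N_ITEMS   1000000

    static double data[N_ITEMS];
    static double partial_sums[N_THREADS];

    /* Each worker sums its own slice of the array. */
    static void *worker(void *arg)
    {
        long id = (long)arg;
        long begin = id * (N_ITEMS / N_THREADS);
        long end = (id + 1) * (N_ITEMS / N_THREADS);
        double sum = 0.0;

        for (long i = begin; i < end; i++)
            sum += data[i];
        partial_sums[id] = sum;
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[N_THREADS];
        double total = 0.0;

        for (long i = 0; i < N_ITEMS; i++)
            data[i] = 1.0;

        /* The "multi-processor aware" part is all done by hand:
         * carve up the work, launch the threads, join them, merge. */
        for (long t = 0; t < N_THREADS; t++)
            pthread_create(&threads[t], NULL, worker, (void *)t);

        for (long t = 0; t < N_THREADS; t++) {
            pthread_join(threads[t], NULL);
            total += partial_sums[t];
        }
        printf("total = %f\n", total);
        return 0;
    }
    ```

    Every bit of the slicing, launching, joining and merging is the programmer's problem, and that is before locks, races and deadlocks enter the picture, which is the pain being described above.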
  • Reply 92 of 565
    chucker Posts: 5,089 member
    It's not possible like that. Xcode cannot guess which parts of a program can run in parallel (asynchronously) and which parts instead depend on other parts having already finished (and so must run sequentially).
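    As a concrete illustration: in the first function below every iteration is independent, so in principle the loop could be fanned out across cores; in the second, each step needs the previous result, so it cannot be naively split without rewriting the algorithm. Neither snippet reflects anything Xcode actually does; it is just a sketch of the dependency problem a tool would have to solve automatically:

    ```c
    #include <stddef.h>

    /* Independent iterations: out[i] depends only on a[i] and b[i],
     * so the loop could safely be divided among several cores. */
    void add_arrays(const double *a, const double *b, double *out, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            out[i] = a[i] + b[i];
    }

    /* Dependent iterations: out[i] needs out[i - 1], so iteration i
     * cannot start until iteration i - 1 has finished. */
    void prefix_sum(const double *a, double *out, size_t n)
    {
        if (n == 0)
            return;
        out[0] = a[0];
        for (size_t i = 1; i < n; i++)
            out[i] = out[i - 1] + a[i];
    }
    ```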
  • Reply 93 of 565
    tht Posts: 5,444 member
    Quote:

    Originally posted by sunilraman

    Core multiplexing images below. Looks sweet in theory...!





    Too bad that in reality this core multiplexing tech and "reverse" hyperthreading is a bunch of hooey.
  • Reply 94 of 565
    tht Posts: 5,444 member
    Quote:

    Originally posted by hmurchison

    Hyperthreading on a Core Architecture processor is going to be much harder. That's because we've taken the pipeline of the Netburst architecture down from a long 31 stages to about 14 in the Conroe/Merom/Woodcrest core. That doesn't leave Intel a lot of room to utilize stages that aren't being used (which is what HT technology does).



    I think we'll see HT come back but it may take a bit of work.




    Partially right, murch. It is indeed a chip's utilization of the execution units that could make it a good candidate for SMT. Long-pipelined processors make good candidates, but short-pipelined processors with lots of execution units also make good candidates.



    Seeing as the ICM (Intel Core Microarchitecture) has the most execution resources of any x86 CPU to date, it could make a better candidate than the Netburst architecture. I don't see SMT coming back in Core 2 CPUs, but for servers it is a cheap win, and Intel could bring it back in a future ICM-based Xeon.



    Technology is working against SMT, however. What's the point of SMT when 8-core and 16-core processors will be available for consumers two fab nodes down the line, i.e., four years into the future?



    SMT could have been the best of all worlds, though: a 10-stage-pipeline, 6-way processor with 4+ integer units, 4 FPUs and 3+ load/store units plus SMT could have performed either like a dual-core with 3 IUs, 2 FPUs and 2 LSUs per core, or like a single 6-way processor with 4 IUs/FPUs at its disposal.



    It would be quite a complex challenge to build, while multi-core processors are quite easy to design and fab in comparison.
  • Reply 95 of 565
    hmurchison Posts: 12,425 member
    Quote:

    Partially right, murch. It is indeed a chip's utilization of the execution units that could make it a good candidate for SMT. Long-pipelined processors make good candidates, but short-pipelined processors with lots of execution units also make good candidates.



    Thank you for making me a more informed AI Slut LOL. Is there a correlation between pipeline stages and execution units that's fairly common?



    I agree about the cores; it's amazing how fast we're going from single-core processors to 4-core with Kentsfield/Clovertown next year. Hell, the more the merrier.
  • Reply 96 of 565
    sunilraman Posts: 8,133 member
    Quote:

    Originally posted by mwswami

    If it works, it should be very welcome to the gaming community, as there are a lot of single-threaded, CPU-hungry games out there.




    A pity, though, that in the latest games the GPU is the bottleneck. If there were some super-fast bus and some way for the CPU to "help" the GPU a bit, that would be cool.



    But right now, even in multiplayer and even with physics, a single-core AMD Athlon 64 at 2.0GHz with 512KB of cache is enough for games.



    The next steps for better performance in games, in order of importance, are:

    - a high-end 7-series nVidia GPU or ATI X1800

    - 1GB of RAM, up to 2GB

    - 7200rpm up to 10,000rpm SATA, or RAID 0, to reduce load times



    Like I mentioned somewhere else previously, the gaming upgrade path is a vicious, vicious, life-sucking, money-sapping killer. I'm ready to switch to PS2/Xbox 360 sometime over the next few years... In July 2007, running the latest and greatest games on 1GB of RAM, a 7200rpm SATA drive, and an nVidia 6600GT with 128MB of RAM will mean the "low" setting and quite some thumb-twiddling waiting for parts of the game to load. Ironically, only at that stage will single-core CPUs just *start* to get maxed out. Anyway, by July 2007 only dual-cores will be on the market, I suspect.
  • Reply 97 of 565
    placebo Posts: 5,767 member
    I really disagree. An Athlon 64 2000+ is far from adequate for smooth play in Battlefield 2, Unreal Tournament 2007, and other large-scale multiplayer games. Add physics to the equation, which is becoming more complex and more prevalent in new games, and a dual core, especially with today's new dual-core-optimized titles, looks very appealing. Unreal Engine 3, Epic Games' next-generation engine, is powering well over a dozen next-generation PC titles and fully supports dual-core processors.
  • Reply 98 of 565
    sunilraman Posts: 8,133 member
    Athlon 64 3000+ at 2GHz??
  • Reply 99 of 565
    sunilraman Posts: 8,133 member
    So this is just great then. In 2007 I'll have to get an Athlon X2 or Intel Core-something, plus a high-end 7-series nVidia card, plus another 1GB of RAM, plus set up RAID 0. Nice.



    Looks like Half-Life 2 (Episode One and onward), plus a few other major titles over the next year, will be the last major games I'll be able to enjoy with smooth, fun gameplay.



    Once UT2007-engine-driven games come out, yeah, 2007 looks like me switching to console gaming. UT2007 looks great, but the demands on hardware will be high. Vista can kiss my a$$; I'll stick with XP SP2 as long as possible. But gaming-wise, *sigh*. Can't keep fuxxing constantly upgrading my PC just for one or two decent titles every few months.*



    *Like Myst V: I thought it would be great, but WTF was going on with that? I couldn't get past the first 10 minutes, when you go to the beach and then there is some ladder you can't reach and weird writing-on-tablets stuff. WTF??!!
  • Reply 100 of 565
    jeffdm Posts: 12,951 member
    Quote:

    Originally posted by sunilraman

    So this is just great then. In 2007 I'll have to get an Athlon X2 or Intel Core-something, plus a high-end 7-series nVidia card, plus another 1GB of RAM, plus set up RAID 0. Nice.



    I wonder how much RAID 0 would really help a game. There's been a little bit of debate about it, but I think the conclusion is that it doesn't help much if you're handling a lot of small files, though it does help when handling a few very large files.
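    One way to settle that for a particular game and disk setup would be to time the two access patterns directly against the game's own data: one large sequential read versus lots of small reads. A rough sketch in standard C (the file names are placeholders, not real game assets):

    ```c
    #include <stdio.h>
    #include <sys/time.h>

    /* Wall-clock seconds taken to read an entire file in 64 KB chunks. */
    static double time_read(const char *path)
    {
        struct timeval t0, t1;
        char buf[64 * 1024];
        size_t got;

        FILE *f = fopen(path, "rb");
        if (!f) {
            perror(path);
            return -1.0;
        }
        gettimeofday(&t0, NULL);
        do {
            got = fread(buf, 1, sizeof buf, f);
        } while (got == sizeof buf);
        gettimeofday(&t1, NULL);
        fclose(f);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    }

    int main(void)
    {
        /* Placeholder paths: point one at a big level/pack file and call
         * the other in a loop over many small asset files to compare. */
        printf("large file:  %.3f s\n", time_read("level_pack.dat"));
        printf("small file:  %.3f s\n", time_read("texture_0001.dat"));
        return 0;
    }
    ```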