Intel unleashes Mac-bound "Woodcrest" server chip


Comments

  • Reply 541 of 565
    melgross Posts: 33,579 member
    Quote:
    Originally Posted by BradMacPro


    http://www.macrumors.com/pages/2006/...02151736.shtml



    Macrumors' source is expecting Conroe in the low-end Mac Pro and Woodcrest in the high end, plus a Xeon (Woodcrest) Xserve. They don't expect 3GHz in a 1U form factor, but Dell does offer a 3GHz 1U server.



    Who cares what they think? They are just as knowledgeable as we are.
  • Reply 542 of 565
    aegisdesign Posts: 2,914 member
    Quote:
    Originally Posted by melgross


    What purpose could Pixar find for this? It's useless for rendering their work, and it isn't needed to view it either.



    Why?



    Surely they'd like their roughs to be as accurate as possible as quickly as possible without tying up a render farm. I wasn't talking about final renders, just working renders as the animators are working. Back when I started with a 68000 Amiga dabbling with 3D in Sculpt3D, I had a 640x512, 32-colour line display to work on. Over time that became Gouraud-shaded polygons, then smooth, badly lit shapes with no reflections or refractions. Now we're getting to realtime game-level graphics, pretty much. A really fast rendering engine sitting beside your desk takes it a bit further. I'm sure animators would love to animate water in realtime, weather effects, you name it.
  • Reply 543 of 565
    melgross Posts: 33,579 member
    Quote:
    Originally Posted by aegisdesign


    Why?



    Surely they'd like their roughs to be as accurate as possible as quickly as possible without tying up a render farm. I wasn't talking about final renders, just working renders as the animators are working. Back when I started with a 68000 Amiga dabbling with 3D in Sculpt3D, I had a 640x512, 32-colour line display to work on. Over time that became Gouraud-shaded polygons, then smooth, badly lit shapes with no reflections or refractions. Now we're getting to realtime game-level graphics, pretty much. A really fast rendering engine sitting beside your desk takes it a bit further. I'm sure animators would love to animate water in realtime, weather effects, you name it.



    For several reasons. If they had to do that, they could use a subset of their farm and still do it faster. Secondly, film rendering is done in several passes, not uncommonly 24 per frame, each pass adding additional effects.



    Film animation is rendered at 2,000 x 4,000, moving to 3,000 x 6,000, with some work now being done at 4,000 x 8,000. Game boards, no matter how good, and in whatever number, can't handle that.
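    To put rough numbers on the resolution point (a back-of-the-envelope sketch; the uncompressed 16-bit RGB assumption is mine, purely illustrative):

    ```python
    # Approximate size of one uncompressed frame at the resolutions above,
    # assuming 16-bit (half-float) RGB, roughly an OpenEXR-style render pass.
    def frame_mb(width, height, channels=3, bytes_per_channel=2):
        return width * height * channels * bytes_per_channel / (1024 ** 2)

    for w, h in [(2000, 4000), (3000, 6000), (4000, 8000)]:
        print(f"{w} x {h}: {frame_mb(w, h):.0f} MB per frame, per pass")
    # ~46 MB, ~103 MB and ~183 MB respectively -- multiply by 20+ passes per
    # frame and the 256-512 MB on a 2006-era game board runs out very quickly.
    ```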



    Thirdly, the algorithms used for film are more sophisticated than those used for games. While GPUs are programmable, they are still limited to what is built into the GPU. CPUs have no such limitation.



    Physics, as done in animated films, is also far more sophisticated than what is just beginning to be done in games. There is no comparable system in a GPU to make use of the formulas.



    Here is an interesting article about that.



    http://www.tomshardware.com/2006/08/...aming_physics/
  • Reply 544 of 565
    aegisdesign Posts: 2,914 member
    OK, I'll just stop there, melgross, since you're not reading what I wrote.
  • Reply 545 of 565
    sunilraman Posts: 8,133 member
    Quote:
    Originally Posted by melgross


    For several reasons. If they had to do that, they could use a subset of their farm and still do it faster. Secondly, film rendering is done in several passes, not uncommonly 24 per frame, each pass adding additional effects.



    Film animation is rendered at 2,000 x 4,000, moving to 3,000 x 6,000, with some work now being done at 4,000 x 8,000. Game boards, no matter how good, and in whatever number, can't handle that.



    Thirdly, the algorithms used for film are more sophisticated than those used for games. While GPUs are programmable, they are still limited to what is built into the GPU. CPUs have no such limitation.



    Physics, as done in animated films, is also far more sophisticated than what is just beginning to be done in games. There is no comparable system in a GPU to make use of the formulas.



    Here is an interesting article about that.



    http://www.tomshardware.com/2006/08/...aming_physics/





    Melgross, I was reading the latest issue of CineFex. http://www.cinefex.com/ The print version.



    Firstly, nVidia has a system (Gelato and Sorbetto) that leverages initial work done by the GPU and then goes to the CPU to produce decent photorealistic images. This is not sufficient for film work, though.



    If we are talking about movie visual effects, there are a lot of details with the 3D pipeline, cloth simulation, water, etc., photogrammetry, and compositing.



    On your points. I just read Cinefex today so I am going to go hard on this one.



    Firstly, yes, film rendering is done in several passes, because this gives the final compositing wide latitude to "hand-tweak" different elements. For example, specular highlights might be done as a single pass that is tuned based on the look; same with shadow detail, etc. This doesn't mean that they could not use the Quadroplex as the 3D tool - using a bit of the farm is totally unnecessary; the farm is for concrete, final renders. The Quadroplex is for the initial visualisation and positioning in the 3D pipeline - moving up from what 3D animators see as, say, simple-textured objects to a scene with more complex lighting. It can give them more in terms of working in real time with something that looks close(r) to the final render, without having to "waste" time/render farm on test renders.
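    As a minimal sketch of why that per-pass split gives compositors so much latitude (toy arrays and made-up pass names, not any studio's actual pipeline):

    ```python
    import numpy as np

    H, W = 4, 4  # tiny stand-in for a 2K/4K plate

    # Pretend these came out of the renderer as separate passes.
    diffuse  = np.random.rand(H, W, 3)
    specular = np.random.rand(H, W, 3) * 0.2
    shadow   = np.random.rand(H, W, 1)   # 0 = fully shadowed, 1 = fully lit

    def composite(spec_gain=1.0, shadow_density=1.0):
        # Re-balance the elements at comp time; changing spec_gain or
        # shadow_density needs a new composite, not a new render.
        lit = diffuse * (1.0 - shadow_density * (1.0 - shadow))
        return np.clip(lit + spec_gain * specular, 0.0, 1.0)

    neutral = composite()
    tweaked = composite(spec_gain=0.6, shadow_density=0.8)  # "hand-tweaked" look
    ```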



    Secondly, film VFX output is talking about 2K and 4K. Again, depending on how awesome your Quadroplex is, your 3D tool will bring you closer to working at those actual resolutions so that your work in realtime looks closer to the render passes.



    Thirdly, the algorithms you are talking about are very arbitrary, and the 3D VFX film pipeline is very complex. You have everything from cloth simulation to custom water effects simulation to projection of textures onto 3D objects, to even "de-aging", which was used in X-Men 3 to make Magneto look younger when he first visits Jean Grey, who is just a child. In X-Men 3, in the scene where Jean Grey/Phoenix rips apart Charles Xavier, the whole background of the house with all the objects swirling around was completely 3D. They photographed a real house set and mapped the textures of the whole thing onto 3D objects. They then applied a custom hard-body deformation/destruction algorithm for the scene where everything is getting ripped apart, with the final bit of Charles Xavier getting "atomized". I'm indulging myself here to essentially agree with you, though: these algorithms are strongly custom-scripted to process and render via the CPU. What I will say is that your third point is not quite on target, because we are talking about the *display* of what the VFX people see on their screens. Take the house-object-destruction scene: the current GPUs they use may only allow low-res textures to visualise a rough render in real time of all the house objects flying around and tearing apart. The Quadroplex, with 1GB of texture memory or more, for example, could let them visualise higher-res textures in that scene - the 3D object manipulation and destruction is a CPU algorithm, but the display of the texture-mapped 3D objects is a GPU thing.
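    A rough sketch of the texture-memory point (the object count and RGBA8 format are hypothetical, just to show the scaling):

    ```python
    def scene_texture_mb(num_objects, tex_size, channels=4, bytes_per_channel=1,
                         mip_overhead=4 / 3):
        # One RGBA8 texture per object, plus roughly a third extra for mipmaps.
        per_tex = tex_size * tex_size * channels * bytes_per_channel * mip_overhead
        return num_objects * per_tex / (1024 ** 2)

    for tex in (512, 1024, 2048):
        print(f"{tex}x{tex} textures, 300 objects: {scene_texture_mb(300, tex):,.0f} MB")
    # ~400 MB, ~1,600 MB, ~6,400 MB: a 256 MB desktop card is stuck with low-res
    # preview textures, while a 1 GB board lets the artist see much closer to
    # final -- the destruction sim itself still runs on the CPU either way.
    ```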



    Fourthly, same point as above example. Game physics, yeah, is nowhere remotely close to the shit that film vfx people do - cloth, soft-body dynamics, hard-body dynamics, custom scripted fluid dynamics, water effects, flame effects, smoke, etc. This is all CPU. The point here is the visualisation when working within the 3D tool. We are focusing on the display side of things.



    Ultimately the case for the Quadroplex in film VFX (as nVidia would like to market it, IMO) is that it can produce a realtime environment for compositing and 3D animation that moves a level closer to the final render passes. Fewer test renders and a better visualisation environment are the goal here.
  • Reply 546 of 565
    sunilraman Posts: 8,133 member
    GPUs are getting more powerful, and there is a lot of work required for them to take on other tasks, such as certain algorithms previously done on the CPU.



    The focus as I see it for GPU technology in film VFX environments is a 3D visualisation, animation and compositing environment where textures, lighting, mapping, etc. move closer and closer to the test renders, then to the render passes, and eventually to the final render - where you can tweak your 3D+compositing tool in realtime with, say, tons of high-res-textured 3D geometry, 10+ flame layers, 10+ water effects layers, 10+ lighting elements and 10+ layers of real footage of actors/physical effects, all highly GPU-accelerated, so you are working essentially in realtime with something very close to the final render. This would of course work closely alongside a powerful multicore workstation CPU that handles the trickier bits of 3D geometry manipulation, particle effects, etc. The key is maximising GPU and CPU to work essentially in realtime with something very close to the final shot. Directors and VFX supervisors would absolutely love that.
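    As a toy sketch of that division of labour (no real GPU API here; the draw call is just a stand-in):

    ```python
    import random

    def cpu_simulate(particles, dt):
        # The trickier dynamics (here just trivial integration) stay on the CPU.
        return [(x + vx * dt, y + vy * dt, vx, vy) for (x, y, vx, vy) in particles]

    def gpu_draw(particles, layers):
        # Stand-in for handing geometry plus flame/water/lighting layers to the GPU.
        return f"drew {len(particles)} particles with {layers} composited layers"

    particles = [(0.0, 0.0, random.random(), random.random()) for _ in range(1000)]
    for frame in range(3):                       # pretend realtime preview loop
        particles = cpu_simulate(particles, dt=1 / 24)
        print(gpu_draw(particles, layers=30))
    ```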



    As the tools evolve so will the final render passes and the workflow of locking down the final shots - it's not a fixed thing between GPU and CPU on the workstation side and the Render Farm on the other side. It's a dialogue of sorts... and will evolve as GPU and CPUs on the workstation side get better, and Render Farm distribution, management, and technology get better too.



    Remember that in several years all the principal and on-site photography is going to be shot on digital 2K or 4K high-definition cameras. That will mix things up too, in terms of GPU, CPU, render farms, workflow, etc....
  • Reply 547 of 565
    melgross Posts: 33,579 member
    Quote:
    Originally Posted by aegisdesign


    OK, I'll just stop there, melgross, since you're not reading what I wrote.



    I did read it. I always read what you write. But I said that it still wouldn't be of use because it can't duplicate enough of what they need to see to tell if it is coming out correctly. That was the gist of my last post.
  • Reply 548 of 565
    melgross Posts: 33,579 member
    Quote:
    Originally Posted by sunilraman


    Melgross, I was reading the latest issue of CineFex. http://www.cinefex.com/ The print version.



    Firstly, nVidia has a system (Gelato and Sorbetto) that leverages initial work done by the GPU and then goes to the CPU to produce decent photorealistic images. This is not sufficient for film work, though.



    If we are talking about movie visual effects, there are a lot of details with the 3D pipeline, cloth simulation, water, etc., photogrammetry, and compositing.



    On your points. I just read Cinefex today so I am going to go hard on this one.



    Firstly, yes, film rendering is done in several passes, because this gives the final compositing wide latitude to "hand-tweak" different elements. For example, specular highlights might be done as a single pass that is tuned based on the look; same with shadow detail, etc. This doesn't mean that they could not use the Quadroplex as the 3D tool - using a bit of the farm is totally unnecessary; the farm is for concrete, final renders. The Quadroplex is for the initial visualisation and positioning in the 3D pipeline - moving up from what 3D animators see as, say, simple-textured objects to a scene with more complex lighting. It can give them more in terms of working in real time with something that looks close(r) to the final render, without having to "waste" time/render farm on test renders.



    Secondly, film VFX output is talking about 2K and 4K. Again, depending on how awesome your Quadroplex is, your 3D tool will bring you closer to working at those actual resolutions so that your work in realtime looks closer to the render passes.



    Thirdly, the algorithms you are talking about are very arbitrary, and the 3D VFX film pipeline is very complex. You have everything from cloth simulation to custom water effects simulation to projection of textures onto 3D objects, to even "de-aging", which was used in X-Men 3 to make Magneto look younger when he first visits Jean Grey, who is just a child. In X-Men 3, in the scene where Jean Grey/Phoenix rips apart Charles Xavier, the whole background of the house with all the objects swirling around was completely 3D. They photographed a real house set and mapped the textures of the whole thing onto 3D objects. They then applied a custom hard-body deformation/destruction algorithm for the scene where everything is getting ripped apart, with the final bit of Charles Xavier getting "atomized". I'm indulging myself here to essentially agree with you, though: these algorithms are strongly custom-scripted to process and render via the CPU. What I will say is that your third point is not quite on target, because we are talking about the *display* of what the VFX people see on their screens. Take the house-object-destruction scene: the current GPUs they use may only allow low-res textures to visualise a rough render in real time of all the house objects flying around and tearing apart. The Quadroplex, with 1GB of texture memory or more, for example, could let them visualise higher-res textures in that scene - the 3D object manipulation and destruction is a CPU algorithm, but the display of the texture-mapped 3D objects is a GPU thing.



    Fourthly, same point as above example. Game physics, yeah, is nowhere remotely close to the shit that film vfx people do - cloth, soft-body dynamics, hard-body dynamics, custom scripted fluid dynamics, water effects, flame effects, smoke, etc. This is all CPU. The point here is the visualisation when working within the 3D tool. We are focusing on the display side of things.



    Ultimately the case for the Quadroplex in film VFX (as nVidia would like to market it, IMO) is that it can produce a realtime environment for compositing and 3D animation that moves a level closer to the final render passes. Fewer test renders and a better visualisation environment are the goal here.



    I'm familiar with it, and it does have a way to go. You actually made some of my points for me. These GPU engines are still game engines. They will give a rough (very rough) visualization of what the final work will be. But the problem with that, at this time at least, is that it is too far from the final product.



    And, yes, the algorithms are arbitrary. That was the point. They also take up a lot of computing time.



    While I'm not saying that these systems are of NO value, they don't deliver enough to bring the roughs as close as is needed to give them a good idea of whether what they are doing is working out or not.
  • Reply 549 of 565
    melgross Posts: 33,579 member
    Quote:
    Originally Posted by sunilraman


    GPUs are getting more powerful, and there is a lot of work required for them to take on other tasks, such as certain algorithms previously done on the CPU.



    The focus as I see it for GPU technology in film VFX environments is a 3D visualisation, animation and compositing environment where textures, lighting, mapping, etc. move closer and closer to the test renders, then to the render passes, and eventually to the final render - where you can tweak your 3D+compositing tool in realtime with, say, tons of high-res-textured 3D geometry, 10+ flame layers, 10+ water effects layers, 10+ lighting elements and 10+ layers of real footage of actors/physical effects, all highly GPU-accelerated, so you are working essentially in realtime with something very close to the final render. This would of course work closely alongside a powerful multicore workstation CPU that handles the trickier bits of 3D geometry manipulation, particle effects, etc. The key is maximising GPU and CPU to work essentially in realtime with something very close to the final shot. Directors and VFX supervisors would absolutely love that.



    As the tools evolve so will the final render passes and the workflow of locking down the final shots - it's not a fixed thing between GPU and CPU on the workstation side and the Render Farm on the other side. It's a dialogue of sorts... and will evolve as GPU and CPUs on the workstation side get better, and Render Farm distribution, management, and technology get better too.



    Remember that in several years all the principal and on-site photography is going to be shot on digital 2K or 4K high-definition cameras. That will mix things up too, in terms of GPU, CPU, render farms, workflow, etc....



    That will take a long time. While all of the GPU evolution is taking place, CPU evolution is also continuing.



    While GPUs are evolving faster, they have a very long way to go. Many in the field don't think that GPUs have much of a place in film development yet.



    As I said, perhaps someday. But with dual-, quad- and eight-core CPUs out or around the corner, and GPUs consuming ever-increasing amounts of power, I don't see the relationship between them changing any time soon.
  • Reply 550 of 565
    mjteix Posts: 563 member
    Quote:
    Originally Posted by onlooker


    But if Apple is using the Intel chipset there will be no HyperTransport, even though Apple was one of the first players on the board of the HyperTransport consortium. I find the whole thing intriguing. Apple on Intel, and how friendly is Intel actually? We will know immediately when we see Apple's motherboard. If it's the standard Blackfoot chipset, or whatever it was called, I forget, then Intel and Apple are obviously already playing politics, and this will translate directly into lesser performance than an Apple-initiated design, with Intel cooperative teams designing Apple's board specifically, as they have in the past.



    Although what I do hope for is the availability of the Nvidia SLI-2X cards. If you're not going to be cutting edge (as they have always claimed to be), at least they can attempt to make us happy while they bend over and take it from Intel.



    I didn't want to stop all these funny dreams of yours about different flavors of ice cream and quadroplex engines, but I found a more realistic article today, that I wanted to share with you, concerning chipsets, hypertransport and nvidia (of course).



    Well, it looks like Intel's blessed nvidia's nForce 590 SLI Intel Edition chipset (it's for Core 2 Duo, not Woodcrest, but it's a beginning, I suppose), more info here.



    Quick specs: 1066MHz FSB, dual 16x PCIe slots SLI-ready (46 lanes total), dual-channel DDR2-800 memory, dual Gigabit Ethernet, USB, PATA, SATA II, digital audio, and lots of PCI/PCIe lanes for FW, Bluetooth, AirPort, etc... Surprisingly (or not), the north and south bridges are linked via HyperTransport (8GB/s)... I believe it may support a 16-4-16-4 configuration of PCIe slots in a 4-slot Conroe-based Mac Pro.
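    For context, the theoretical peak bandwidths behind those headline numbers work out roughly like this (my own arithmetic, not vendor figures):

    ```python
    GB = 1e9

    fsb_1066      = 1066e6 * 8      # 1066 MT/s x 64-bit front-side bus
    ddr2_800_dual = 2 * 800e6 * 8   # two 64-bit channels of DDR2-800
    pcie_x16      = 16 * 250e6      # PCIe 1.x: 250 MB/s per lane, per direction
    ht_bridge     = 8e9             # the quoted 8 GB/s north-south bridge link

    print(f"FSB 1066:              {fsb_1066 / GB:.1f} GB/s")       # ~8.5
    print(f"Dual-channel DDR2-800: {ddr2_800_dual / GB:.1f} GB/s")  # ~12.8
    print(f"PCIe x16 (each way):   {pcie_x16 / GB:.1f} GB/s")       # ~4.0
    print(f"HyperTransport link:   {ht_bridge / GB:.1f} GB/s")
    # Note the CPU sees at most the ~8.5 GB/s FSB, not the full 12.8 GB/s the
    # memory controller can theoretically deliver.
    ```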



  • Reply 551 of 565
    Quote:
    Originally Posted by mjteix


    I didn't want to stop all these funny dreams of yours about different flavors of ice cream and quadroplex engines, but I found a more realistic article today, that I wanted to share with you, concerning chipsets, hypertransport and nvidia (of course).



    Well, it looks like Intel's blessed nvidia's nForce 590 SLI Intel Edition chipset (it's for Core 2 Duo, not Woodcrest, but it's a beginning, I suppose), more info here.



    Quick specs: 1066MHz FSB, dual 16x PCIe slots SLI-ready (46 lanes total), dual-channel DDR2-800 memory, dual Gigabit Ethernet, USB, PATA, SATA II, digital audio, and lots of PCI/PCIe lanes for FW, Bluetooth, AirPort, etc... Surprisingly (or not), the north and south bridges are linked via HyperTransport (8GB/s)... I believe it may support a 16-4-16-4 configuration of PCIe slots in a 4-slot Conroe-based Mac Pro.







    Apple is more likely to use an Intel chipset than an ATI or NVIDIA one, and that may mean no SLI or CrossFire in a Mac. Even more so for a 2-CPU system; there are NVIDIA chipsets for AMD that work with 2 CPUs and let you use SLI.
  • Reply 552 of 565
    onlooker Posts: 5,252 member
    Quote:
    Originally Posted by Joe_the_dragon


    Apple is more likely to use an Intel chipset than an ATI or NVIDIA one, and that may mean no SLI or CrossFire in a Mac. Even more so for a 2-CPU system; there are NVIDIA chipsets for AMD that work with 2 CPUs and let you use SLI.





    Joe there has been smoking the dragon weed from Kilimanjaro if he thinks there is going to be an Intel graphics chip ~a la Mac mini~ inside a Woodcrest-based Apple workstation.



    If there is an Intel board that uses SLI a few days before WWDC, I say that is our first hint of what is coming, people.





    So now it's



    ~ OMG WWDC PORN AND SLI! OMG ~



    Hey! I want a bag of whatever Joe's been smokin'!
  • Reply 553 of 565
    sunilraman Posts: 8,133 member
    Melgross: Fair enough with regard to your posts. We will have to wait and see how nVidia's Quadroplex sales go and how their penetration into the Film VFX industry goes.
  • Reply 554 of 565
    sunilraman Posts: 8,133 member
    ARGHGHHGH where's my signature?????
  • Reply 555 of 565
    sunilraman Posts: 8,133 member
    Quote:
    Originally Posted by mjteix


    I didn't want to stop all these funny dreams of yours about different flavors of ice cream and quadroplex engines, but I found a more realistic article today, that I wanted to share with you, concerning chipsets, hypertransport and nvidia (of course).



    Well, it looks like Intel's blessed nvidia's nForce 590 SLI Intel Edition chipset (it's for Core 2 Duo, not Woodcrest, but it's a beginning, I suppose), more info here.



    Quick specs: 1066MHz FSB, dual 16x PCIe slots SLI-ready (46 lanes total), dual-channel DDR2-800 memory, dual Gigabit Ethernet, USB, PATA, SATA II, digital audio, and lots of PCI/PCIe lanes for FW, Bluetooth, AirPort, etc... Surprisingly (or not), the north and south bridges are linked via HyperTransport (8GB/s)... I believe it may support a 16-4-16-4 configuration of PCIe slots in a 4-slot Conroe-based Mac Pro.







    Yeah, this will be the start-off point for SLI'd Conroe systems. But why is the graphics 16x on the SPP side and another 16x on the MCP side? I guess this is to accommodate two 16x PCIe cards, one of which is NOT a GPU? Otherwise I hope the HyperTransport link will do well for the nForce having to collate GPU SLI data from one card on the SPP side and the other card on the MCP side. Hmmm... Confused, I am.
  • Reply 556 of 565
    sunilraman Posts: 8,133 member
    Quote:
    Originally Posted by Joe_the_dragon


    Apple is more likely to use an Intel chipset than an ATI or NVIDIA one, and that may mean no SLI or CrossFire in a Mac. Even more so for a 2-CPU system; there are NVIDIA chipsets for AMD that work with 2 CPUs and let you use SLI.



    I'm with Joe, Onlooker, w.r.t. Apple going for a custom Intel Woodcrest motherboard in the Mac Pro. I'm rating the odds at 95 to 1 against SLI showing up. You never know, but that's where my chips are at the moment. Anyway, SLI aside, Apple having an nVidia chipset means having to deal with both Intel and nVidia, whereas, as we have seen, they want a one-stop-shop, no-bullshit (although nVidia, I am confident, can rise to the challenge) vendor partnership with Intel.



    Nonetheless, for the PC people and PC gamers, the nForce 5-series chipsets for Intel Conroe will be interesting, dare I say sexy and tasty. PCs with 2 CPUs are not very common in the enthusiast/mainstream/enterprise desktop space, so Conroe PCs will become the new "standard" for Intel *desktops* going through 2007, while the Netburst Pentium dual-cores carve out the low-mid to low end. I love my nVidia nForce4 SLI (x8 x8 PCI Express in SLI, though) on an Asus A8N-SLI [normal]. I look forward to the Asus motherboards based on the nForce 5-series chipsets. One concern is the 590 with the two PCI Express 16x slots on separate bridges, but Tom's Hardware and AnandTech will SLI-benchmark the hell out of the chipset, so we'll see how well HyperTransport flies.



    AMD will be playing catch-up; 2-CPU setups are not common for enthusiast/mainstream/corporate desktop environments. AMD from now through 2007 will compete on price and the lower (than previous AMDs) TDPs of their X2s and FXs in single-CPU dual-core setups on the AM2 socket. On the low-mid to low end through 2007 we'll see single-core Athlons and Semprons. In one year's time we'll see how AMD-ATI goes, how and when they can move down to 65nm on CPUs and 90nm/65nm (???) on the GPUs, and also how ATI chipsets will fare. I wonder if nVidia is going to lose interest in AMD-nVidia boards, but I think nVidia and AMD both can't afford to just cut off the huge momentum, or at least decent slice of the pie, of AMD-nVidia solutions across the gaming, mainstream and corporate desktop markets. AMD-nVidia solutions have an excellent reputation in the PC gamer/enthusiast market, with momentum built over the past few years as the Pentium 4/D/Extreme Netbursts floundered.
  • Reply 557 of 565
    Quote:
    Originally Posted by onlooker


    Joe there has been smoking the dragon weed from Kilimanjaro if he thinks there is going to be an Intel graphics chip ~a la Mac mini~ inside a Woodcrest-based Apple workstation.



    If there is an Intel board that uses SLI a few days before WWDC, I say that is our first hint of what is coming, people.





    So now it's



    ~ OMG WWDC PORN AND SLI! OMG ~



    Hey! I want a bag of whatever Joe's been smokin'!




    I was talking about the Intel chipset, not an Intel video card.
  • Reply 558 of 565
    onlooker Posts: 5,252 member
    Quote:
    Originally Posted by Joe_the_dragon


    I was talking about the Intel chipset, not an Intel video card.





    I kinda knew that, but I felt like joking around.
  • Reply 559 of 565
    Intel's chipsets support CrossFire, but I don't know how well that's going to go now that AMD went off and eloped with Intel's graphical wife.
  • Reply 560 of 565
    sunilraman Posts: 8,133 member
    Quote:
    Originally Posted by theapplegenius


    Intel's chipsets support CrossFire, but I don't know how well that's going to go now that AMD went off and eloped with Intel's graphical wife.



    Well, Intel is kind of polygamous - AMD eloped with *one* of Intel's graphical wives.

    Could be good for the remaining marriages - nVidia and the Intel First Wife (Integrated Graphics)