The Intel Powermac / Powermac Conroe / Mac Pro thread


Comments

  • Reply 421 of 946
    gene clean Posts: 3,481 member
    Quote:

    Originally posted by Placebo

    And yet they're still going to wait for WWDC to release the towers.



    'Cause they think that, if it's a secret, people will buy more of them.
  • Reply 422 of 946
    smalM Posts: 677 member
    Quote:

    Originally posted by onlooker

    But I do now. The nForce4 SLI X16 chipset is the way to go. Wicked stuff.



    Better read the specs again, Onlooker. The second x16 PCI-E is hanging off the MCP, not the SPP. This is really not what you are looking for.
  • Reply 423 of 946
    Placebo Posts: 5,767 member
    I don't give a shit about SLI as long as they have a lineup of market-price GPUs to choose from.
  • Reply 424 of 946
    wmf Posts: 1,164 member
    Quote:

    Originally posted by MacRonin

    GMA950 or later is a shoo-in for the MacServe (?) refresh…



    Nope; server chipsets don't support integrated graphics at all. I would guess an ATI ES100 is more likely.
  • Reply 425 of 946
    MacRonin Posts: 1,174 member
    Quote:

    Originally posted by wmf

    Nope; server chipsets don't support integrated graphics at all. I would guess an ATI ES100 is more likely.



    Well, they need to do a server chipset that does integrated graphics, because losing half of your expansion capabilities for occasional monitoring is a lousy tradeoff?
  • Reply 426 of 946
    wmf Posts: 1,164 member
    Quote:

    Originally posted by MacRonin

    Well, they need to do a server chipset that does integrated graphics, because losing half of your expansion capabilities for occasional monitoring is a lousy tradeoff…



    In servers they attach the VGA controller to the slow PCI bus (not PCI-X or PCI Express), losing no capabilities.
  • Reply 427 of 946
    onlooker Posts: 5,252 member
    Quote:

    Originally posted by smalM

    Better read the specs again Onlooker. The second 16x PCI-E is hanging on the MCP not the SPP. This is really not what you are looking about.



    Are you talking about the AMD version that uses the MCP as the southbridge to connect to the nForce4 System Platform Processor (SPP)? That is only on the AMD version.



    The Intel core logic already incorporates a System Platform Processor (SPP), so the MCP included on the current Intel chipsets has its PCI Express lanes disabled; enabling them is all that NVIDIA needs to do.
  • Reply 428 of 946
    MacRonin Posts: 1,174 member
    Quote:

    Originally posted by wmf

    In servers they attach the VGA controller to the slow PCI bus (not PCI-X or PCI Express), losing no capabilities.



    But in Xserves there are a total of two (2) expansion slots, and if you want any type of video out, you use a slot, thereby losing half of your expansion capabilities?



    Because you see, sometimes one might want to have a Fibre Channel card installed for access to an Xsan setup?



    And one might also wish to have an internal SATA RAID controller installed at the same time, running the internal HDDs?



    I personally would prefer to have the ability to switch from server to server with a KVM box for certain things, as well as using good old Apple Remote Desktop for other tasks?



    So yeah, integrated graphics would be nice?



    Digital out as opposed to VGA would be nice also? Who says you gotta have a klunky old CRT in the server room?? How about a nice 23" ACD as server central control instead??!?



    ;^p
  • Reply 429 of 946
    emig647 Posts: 2,455 member
    Quote:

    Originally posted by onlooker

    Looks like we have a few people anxiously awaiting the Mac Pro's arrival. I'm actually hoping it has two full-speed 16X PCI-E slots. I'll throw two Quadros in there without thinking twice about it.

    If you're wondering what you would do with those extra cores: any type of rendering should see almost double the performance. Most 3D rendering is done on the CPU, not the GPU, but with Apple's Core Image and Core Video I think some is offloaded to the GPU (like it needs it). All I really do know is my Mental Ray render times should be pretty impressive. And with two Quadros I should be able to work, and move models and scenes around with billions of polygons, without any fuss. [end of excited rant mode]




    Which application are you talking about, Onlooker? As far as I know, none of the major 3D rendering apps have implemented Core Graphics yet. Or are you not talking about 3D rendering?
  • Reply 430 of 946
    smalM Posts: 677 member
    Quote:

    Originally posted by onlooker

    Are you talking about the AMD version



    Both versions have one x16 PCI-E on the SPP and one on the MCP.

    The Intel version has 4 additional x1 PCI-E lanes on the SPP; the AMD version has only 2.



    I found pictures on AnandTech.
  • Reply 431 of 946
    smalM Posts: 677 member
    Intel announced some benchmarks.
  • Reply 432 of 946
    onlooker Posts: 5,252 member
    Quote:

    Originally posted by emig647

    Which application are you talking about, Onlooker? As far as I know, none of the major 3D rendering apps have implemented Core Graphics yet. Or are you not talking about 3D rendering?



    I was thinking of RenderMan for Maya and C4D, but I have no idea if they actually use it yet. I am under the possible "illusion" that once Apple has a workstation that runs Windows apps alongside Mac OS versions, some developers may see just what they can pull off with one of their Mac stations. Luxology (modo) will probably be the first to leverage all they can from the Mac if C4D and RenderMan have not started. Hopefully Apple will show 3D and game developers how to accomplish this task efficiently at WWDC.

    Using the GPU as another rendering processor has to be as intriguing to a developer as it is to a user. Today's single-core GPUs are still running circles around multi-core CPUs. Add in the SLI bridge, and we could see a huge boost in performance for the smaller CG house, individual user, student, and workstation-rooted CG artist. It's things like this that will get developers like Softimage (XSI) and Autodesk (3ds Max, and now Maya) to take notice of the possibility that Apple may be making the better platform for the 3D workstation, and to consider bringing their apps over to the Mac once they see what their competitors have accomplished.
  • Reply 433 of 946
    onlooker Posts: 5,252 member
    Quote:

    Originally posted by smalM

    Both versions have one x16 PCI-E on the SPP and one on the MCP.

    The Intel version has 4 additional x1 PCI-E lanes on the SPP; the AMD version has only 2.



    I found pictures on AnandTech.




    So what's the problem with that? And compared to what? What is supposedly going to be faster? Anything?





    If anyone is wondering:



    MEDIA AND COMMUNICATIONS PROCESSORS (MCP)

    SYSTEM PLATFORM PROCESSORS (SPP)



    Here is some reading to do if you're interested. I don't see why anyone is griping about the MCP though; I don't see anything faster than the nForce4 SLI X16 out there.



    http://www.xbitlabs.com/news/chipset...812032457.html



    BTW, notice DELL is exclusively first on board with the new NVIDIA 16X SLI for consumers and enthusiasts alike.
  • Reply 434 of 946
    emig647 Posts: 2,455 member
    Quote:

    Originally posted by onlooker

    I was thinking of RenderMan for Maya and C4D, but I have no idea if they actually use it yet. I am under the possible "illusion" that once Apple has a workstation that runs Windows apps alongside Mac OS versions, some developers may see just what they can pull off with one of their Mac stations. Luxology (modo) will probably be the first to leverage all they can from the Mac if C4D and RenderMan have not started. Hopefully Apple will show 3D and game developers how to accomplish this task efficiently at WWDC.

    Using the GPU as another rendering processor has to be as intriguing to a developer as it is to a user. Today's single-core GPUs are still running circles around multi-core CPUs. Add in the SLI bridge, and we could see a huge boost in performance for the smaller CG house, individual user, student, and workstation-rooted CG artist. It's things like this that will get developers like Softimage (XSI) and Autodesk (3ds Max, and now Maya) to take notice of the possibility that Apple may be making the better platform for the 3D workstation, and to consider bringing their apps over to the Mac once they see what their competitors have accomplished.




    It is intriguing to developers. I was at WWDC 04 when they first showed off this technology... needless to say, I was floored! The problem I see is that there is no obvious correlation between the Core Graphics library's functions and what a 3D modeling program has to offer. Mostly they deal with images and video.



    As far as I know, Maxon hasn't implemented any type of Core Graphics. As for the rest I'm unsure, because I don't use them. But I still fail to see what parts of the library they could use. The Core Graphics library is fairly high level, which means it would take a massive inheritance overhaul just to talk to the GPU and do a simple 'offload to the GPU' type of thing. I'm sure the developers that make these programs could do that in their sleep if they wanted to. Curious as to why they haven't...
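    To show what I mean by "fairly high level", here is a rough sketch (modern Swift purely for illustration; the input path and blur radius are hypothetical) of the Core Image side of this: you build filter objects and the framework decides where the work runs. There is no "use xx percent of the GPU" knob anywhere.

        import CoreImage
        import Foundation

        // The context picks a GPU-backed renderer where one is available;
        // the caller never specifies a CPU/GPU split.
        let context = CIContext()

        // Hypothetical input image, purely for illustration.
        guard let input = CIImage(contentsOf: URL(fileURLWithPath: "/tmp/frame.png")),
              let blur = CIFilter(name: "CIGaussianBlur") else { fatalError("no input image") }

        blur.setValue(input, forKey: kCIInputImageKey)
        blur.setValue(4.0, forKey: kCIInputRadiusKey)

        // Nothing has executed yet; Core Image defers the work until a
        // render is requested, then schedules it on the GPU (or CPU) itself.
        if let output = blur.outputImage {
            _ = context.createCGImage(output, from: output.extent)
        }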
  • Reply 435 of 946
    emig647 Posts: 2,455 member
    Quote:

    Originally posted by onlooker



    BTW, notice DELL is exclusively first on board with the new NVIDIA 16X SLI for consumers and enthusiasts alike.




    you said dell...... :P I take it you're talking about pre-built box manufacturers? Those boards have been available for... what, over a month now?
  • Reply 436 of 946
    emig647 Posts: 2,455 member
    I want to go on record and say this.



    I for one believe Intel has a strong few years ahead of them. I believe their roadmap is a lot nicer than AMD's. I'm happy that Apple has gone with Intel for processors. As far as I'm concerned, the last few years have been a transition period. Think back to the ATI vs. NVIDIA days a few years ago.



    ATI was rocking NVIDIA all over the place. During the 9600 XT / 9800 XT era, and a little before that... NVIDIA sucked... they had the 5200, 5600, and 5900, and weren't winning one benchmark. But to me that was a transition period... then NVIDIA came out swinging with the 6600 and 6800, and has been in the lead since, IMO. During this transition period NVIDIA was building new fabs (like Intel), had a better roadmap ahead of them than ATI (like Intel), and had strong industry support (like Intel).



    Mark my words, Intel is going to cream AMD in the coming years.
  • Reply 437 of 946
    onlooker Posts: 5,252 member
    Quote:

    Originally posted by emig647

    you said dell...... :P I take it you're talking about pre-built box manufacturers? Those boards have been available for... what, over a month now?



    I don't see why repeating what I said is so funny. Please explain?
  • Reply 438 of 946
    emig647 Posts: 2,455 member
    Quote:

    Originally posted by onlooker

    I don't see why repeating what I said is so funny. Please explain?



    I was laughing at Dell being the first to adopt, like it was some big thing. It was inevitable that the pre-built manufacturers were going to start adopting. Inside joke, I guess. I don't have a lot of respect for Dell and the products they release. It seems to me that everyone goes bonkers when Dell does something... 'omfg dell switched to amd', 'omfg dell bought alienware'... We've all had access to Asus's 16X SLI board for a while now. Of course Dell was going to start using it. Why wouldn't they?
  • Reply 439 of 946
    onlooker Posts: 5,252 member
    Quote:

    Originally posted by emig647

    I was laughing at Dell being the first to adopt, like it was some big thing. It was inevitable that the pre-built manufacturers were going to start adopting. Inside joke, I guess. I don't have a lot of respect for Dell and the products they release. It seems to me that everyone goes bonkers when Dell does something... 'omfg dell switched to amd', 'omfg dell bought alienware'... We've all had access to Asus's 16X SLI board for a while now. Of course Dell was going to start using it. Why wouldn't they?



    The reason I plugged DELL was because Apple openly uses DELL as their arch-nemesis, or rival, in prebuilt machines, IMHO.



    Personally, in the workstation marketplace I was a fan (and owner) of Alienware's price and performance vs. BOXX, but then - you know who - (DELL) noticed what a great thing they had going and decided to scoop them up. I was a little pissed off about that one, but I think Apple can now become a real force in the workstation marketplace with what they "can possibly" offer that no one else can.



    Quote:

    Originally posted by emig647

    It is intriguing to developers. I was at WWDC 04 when they first showed off this technology... needless to say, I was floored! The problem I see is that there is no obvious correlation between the Core Graphics library's functions and what a 3D modeling program has to offer. Mostly they deal with images and video.





    Rendering the image is what it's all about, isn't it? Obviously the modeling and animation parts of the application aren't what you would be developing it for.

    You would be incorporating it into the render engine. Wouldn't you be using the lib to influence how, and what percentage of, the CPU and GPU were extracting your data through the image codec? Maybe dual NVIDIA GPUs under an SLI bridge can render an image much faster than a CPU. Doesn't the lib already determine what percentages can, or need to, be offloaded in other similar situations?
  • Reply 440 of 946
    emig647 Posts: 2,455 member
    Quote:

    Originally posted by onlooker

    The reason I plugged DELL was because Apple openly uses DELL as their arch-nemesis, or rival, in prebuilt machines, IMHO.



    Personally, in the workstation marketplace I was a fan (and owner) of Alienware's price and performance vs. BOXX, but then - you know who - (DELL) noticed what a great thing they had going and decided to scoop them up. I was a little pissed off about that one, but I think Apple can now become a real force in the workstation marketplace with what they "can possibly" offer that no one else can.





    You're a smart guy, onlooker... I never understood why you were so intrigued with Alienware when you could build your own machines. Either way, I was never a fan of Alienware... but now that Dell has acquired them... they are dust. On top of that, what Dell has done with their technology is a disgrace. Have you seen the new Dellienware computers? *throws up*



    Quote:



    Rendering the image is what it's all about, isn't it? Obviously the modeling and animation parts of the application aren't what you would be developing it for.

    You would be incorporating it into the render engine. Wouldn't you be using the lib to influence how, and what percentage of, the CPU and GPU were extracting your data through the image codec? Maybe dual NVIDIA GPUs under an SLI bridge can render an image much faster than a CPU. Doesn't the lib already determine what percentages can, or need to, be offloaded in other similar situations?




    The functions and objects that are available through the Core Graphics library are not direct to the GPU. In other words, there is a ton of code that Apple wrote to make Core Graphics work. You create the object in place of an image. That object has the under-the-hood ability to talk to the GPU... it's transparent to the dev, really. But that object also adds some functions to manipulate the object. In other words... there isn't a clean way to say 'use xx percent of the GPU vs. CPU when creating this image'. It basically does all of that for you.



    I suppose during the hardcore renderings... you could offload the creation of the image to the GPU... but honestly, I don't see that saving a lot of time. At least in Cinema 4D (the only one I'm truly familiar with for rendering)... it renders line by line. With dual processors it starts at the top and halfway down and goes... line by line (see the rough sketch at the end of this post). Usually those renderings take some hardcore CPU TIME to get each line. So if you offload a line to the GPU... it might take as much, or close to as much, time as just having the CPU handle it. Then the final image is made... then what? Is it being manipulated afterwards? If so then yeah, Core Graphics can help... but unless it's being further manipulated once the rendering is done, I fail to see how it can help. Maybe I'm missing something?



    Either way, have you used Cinema on a PC with SLI? If so, how much did it help during modeling? Do you think Alias and Maxon would have to implement code for SLI in the Mac versions if Macs went SLI?
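    As promised above, here is a rough sketch of that line-by-line split (plain Swift with hypothetical names and sizes, nothing C4D-specific): each processor takes one contiguous band of rows, so one worker starts at the top and the other halfway down, exactly the behavior I described.

        import Foundation

        let height = 480           // hypothetical frame height
        let workers = 2            // e.g. a dual-processor machine
        let bandSize = height / workers

        // Stand-in for the expensive per-scanline shading work.
        func renderRow(_ y: Int) { /* shade every pixel in row y */ }

        // Each worker renders its own contiguous band of scanlines.
        DispatchQueue.concurrentPerform(iterations: workers) { worker in
            let start = worker * bandSize
            let end = (worker == workers - 1) ? height : start + bandSize
            for y in start..<end {
                renderRow(y)
            }
        }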