AppleInsider › Forums › Mac Hardware › Current Mac Hardware › Reasons behind lack of SLI support in OS X?

Reasons behind lack of SLI support in OS X?

post #1 of 7
Thread Starter 

After reading a number of new Mac Pro (nMP) benchmarks on BareFeats, I still fail to understand why NVIDIA doesn't have an agreement with Apple to support SLI in OS X. After all, we're talking about a MUCH larger hardware and gaming base than, for instance, Linux, which is already SLI-ready. Moreover, the hardware is already capable, since Macs booted into Windows do support SLI.


So my humble question is: why?

iMac Intel 27" Core i7 3.4, 16GB RAM, 120GB SSD + 1TB HD + 4TB RAID 1+0, Nuforce Icon HDP, OS X 10.10.1; iPad Air 64GB; iPhone 5 32GB; iPod Classic; iPod Nano 4G; Apple TV 2.

post #2 of 7

Most likely no one wrote the drivers for it. They only support what they decide to support.

post #3 of 7
Quote:
Originally Posted by brlawyer View Post

After reading a number of nMP benchmarks on BareFeats, I still fail to understand why NVIDIA does not have an agreement with Apple to support SLI

You mean why AMD doesn't have an agreement to support CrossFire. The nMP uses AMD GPUs; it would be SLI if Apple had used NVIDIA GPUs.

It seems like it would be a trivial thing for them to add. Split-frame rendering is really just dividing each frame and rendering the parts separately: load the same scene data into each GPU's memory, tell GPU 1 to render the even scanlines into the framebuffer and GPU 2 the odd ones (or give each the left/right half of the frame), then display the combined framebuffer.

I don't think they really intend the second GPU to be used like that, though. The second GPU seems to be meant as a co-processor. When you run OpenCL tasks on a single GPU, the user interface lags because the same GPU also has to render it. I think that's what happens with Final Cut Pro X: the timeline is laggier than in other apps.

For apps that can use the two GPUs best together, the developer can take advantage of both.

Sometimes it's not a good idea to use both because of the extra heat. If a game runs at 90 FPS maxed out with two GPUs drawing, say, 400 W, most of that power is wasted; 45 FPS at 200 W is the better trade. A single D700 should run every game maxed out above 30 FPS, except really poorly optimized ones.

Having the option would be nice so people don't have to wait on the app developer to use both.
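For what it's worth, the scanline split described above can be sketched in a few lines of Python. This is a toy model, not how an actual driver works; `render` is a hypothetical stand-in for the per-pixel shading each GPU would do:

```python
# Toy sketch of split-frame rendering (SFR): two "GPUs" each render
# only the scanlines assigned to them, and the results are interleaved
# back into one framebuffer. Real SLI/CrossFire does this in the driver.

WIDTH, HEIGHT = 8, 6

def render(x, y):
    # Hypothetical stand-in for per-pixel shading: a simple gradient.
    return x + y

def gpu_render_lines(lines):
    # Each "GPU" renders only its assigned scanlines.
    return {y: [render(x, y) for x in range(WIDTH)] for y in lines}

# GPU 1 takes the even scanlines, GPU 2 the odd ones.
gpu1 = gpu_render_lines(range(0, HEIGHT, 2))
gpu2 = gpu_render_lines(range(1, HEIGHT, 2))

# The driver interleaves both halves into the final framebuffer.
framebuffer = [(gpu1 if y % 2 == 0 else gpu2)[y] for y in range(HEIGHT)]
```

The point of the split is that each GPU shades only half the pixels per frame, which is why SFR can roughly double fill-rate-bound performance.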
post #4 of 7
Thread Starter 
Quote:
Originally Posted by Marvin View Post


You mean why AMD doesn't have an agreement to support CrossFire. The nMP uses AMD GPUs. It would be SLI if they used NVidia GPUs.



You're correct: my question applies to both technologies, and particularly to the one behind the GPUs Apple currently ships in the new Mac Pro.

post #5 of 7
Quote:
Originally Posted by Marvin View Post

It seems like it would be a trivial thing for them to add. It's really just splitting the required frame and rendering each part separately. It doesn't seem like it would be very difficult to implement. They just load up the memory on each GPU with the same data and then tell GPU 1 to render all the even lines into the framebuffer and then GPU2 to render all the odd lines or it can be left/right parts of the frame and then display the framebuffer.

They don't use SFR anymore. It would be AFR, where the cards take turns rendering entire frames: GPU 1 draws all the odd-numbered frames, GPU 2 all the even ones. That said, it would be easy to implement. I think it has to do with Apple and their anal insistence on having control of everything, and NVIDIA not signing off on that when it comes to their tech.
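The frame-alternating scheme is even simpler to sketch than the scanline split. Again a toy model, with `render_frame` as a hypothetical stand-in for rendering a complete frame on one device:

```python
# Toy sketch of alternate-frame rendering (AFR): instead of splitting
# each frame, two GPUs take turns rendering whole frames.

def render_frame(gpu_id, frame_no):
    # Hypothetical stand-in for a full frame render on one GPU.
    return f"frame {frame_no} rendered on GPU {gpu_id}"

def afr_schedule(num_frames):
    # GPU 1 gets the odd-numbered frames, GPU 2 the even ones.
    return [render_frame(1 if n % 2 else 2, n)
            for n in range(1, num_frames + 1)]
```

AFR scales throughput better than SFR because each GPU renders a complete frame independently, but it adds a frame of latency and requires both GPUs to hold the full scene.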

Whoever said OS X supports SLI on a PC, where do you get your info? I would love to be proven wrong, as that would let me install OS X right now. Alas, I'm not willing to have my extra Titans kick rocks, so until it's possible... well, yeah.
post #6 of 7
Quote:
Originally Posted by klepp0906 View Post

I think it has to do with Apple and their anal insistence on having control of everything, and NVIDIA not signing off on that when it comes to their tech.

Whoever said OS X supports sli on a pc, where do you get your info?

I didn't say OS X supports SLI. I was saying that if Apple had used NVIDIA cards, it would be appropriate to ask why they hadn't implemented SLI. They didn't; they used AMD cards, and the equivalent feature for AMD is CrossFire.
post #7 of 7

From what I've seen, apps that can leverage multiple GPUs can do so without SLI or CrossFire. It's purely a matter of using the GPUs for computational tasks rather than rendering.
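That split-the-work-yourself approach is easy to illustrate. In this sketch two threads stand in for two compute devices, and `run_kernel` is a hypothetical stand-in for enqueueing an OpenCL kernel on a given device; no SLI/CrossFire link is involved:

```python
# Sketch of app-managed multi-GPU compute: the app divides the data
# in half and dispatches each half to its own device, then merges the
# results. Two threads model the two devices here.
from concurrent.futures import ThreadPoolExecutor

def run_kernel(device_id, chunk):
    # In a real app this would enqueue an OpenCL kernel on device_id.
    return [x * x for x in chunk]

data = list(range(8))
half = len(data) // 2
chunks = [data[:half], data[half:]]

with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(run_kernel, [0, 1], chunks))

output = results[0] + results[1]
```

This is roughly what Final Cut Pro X does with the nMP's second GPU: the app, not the driver, decides how to divide the compute work between devices.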
