So it's not really a Mac Pro; it should be renamed the PC Pro. You need Windows to get the full power of the GPUs. Amazing stuff. The Mac Pro is one of the best Windows PCs ever.
Comments
Believe it or not, gaming on Linux will soon surpass Windows.
reality fail.
He may have meant in terms of performance not in volume of gamers using Linux:
http://www.extremetech.com/gaming/133824-valve-opengl-is-faster-than-directx-even-on-windows
although there is also this:
http://www.extremetech.com/gaming/159476-ps4-runs-orbis-os-a-modified-version-of-freebsd-thats-similar-to-linux
A PS4 version of a game shouldn't need much more than a recompile to run on a Mac or Linux machine. They'll keep some exclusivity (even just timed) on the console but it means good things for developers looking to publish titles to as large an audience as possible. Tomb Raider was brought over to the Mac last month:
http://www.tuaw.com/2014/01/27/tomb-raider-a-reboot-game-that-also-reboots-mac-gaming/
If you get or have the Steam version, it'll offer installation on the Mac side.
How do you enable CrossFire in Windows 8? Or is it enabled by default in the drivers?
It's in the Catalyst Control Centre, Performance tab.
Yes, in terms of performance the Windows era is being surpassed by a combination of Mantle and OpenGL/OpenCL-optimized drivers on FreeBSD [PS4] and Linux [SteamOS].
In terms of deployments, iOS and Android currently have more gaming users than Windows.
Next-gen consoles [PS4/Xbox One] will surpass the sales of the PS3/Xbox 360, with the PS4 winning in user base.
AMD wins the console wars.
AMD extending the Mantle API from the consoles to the PC on Windows, Linux, and FreeBSD means Windows' DX11 days are numbered. More and more houses are enabling this beta API in their top-flight games.
Well that's ironic
Does anyone know if this would work in a Windows VM, or only in Boot Camp?
By the same token, does OpenCL work in a VM, or does the VM not have direct enough access?
And seeing as you probably don't own one, be thankful for the rest of us who get the dual FirePro GPGPUs, which are OpenCL 1.2/2.x-ready, unlike Nvidia, who will never extend support beyond OpenCL 1.1. The latest CUDA 6.0.1 tops out at that level. They have made zero commitment to OpenCL 1.2/2.x.
Apple and AMD are fully committed to it.
Apple can coordinate with AMD and have full Mantle API support opened up and then you'll be glad Nvidia wasn't the solution.
Is there any reason you prefer OpenCL? As far as I can tell, there are more developers working on CUDA stuff, and CUDA seems to be a bit nicer.
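For anyone weighing the two: the device-side kernel languages are nearly identical, so the "nicer" argument is mostly about host-side boilerplate, libraries, and vendor lock-in. A minimal vector-add in each dialect, shown as illustrative source strings only (no GPU or driver required to compare them):

```python
# Side-by-side vector-add kernels. The device code is almost the same in
# both dialects, which is why the OpenCL-vs-CUDA choice comes down to
# tooling and hardware support rather than the kernel language itself.
opencl_kernel = """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *c) {
    int i = get_global_id(0);   // flat global work-item index
    c[i] = a[i] + b[i];
}
"""

cuda_kernel = """
__global__ void vadd(const float *a, const float *b, float *c) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // flat thread index
    c[i] = a[i] + b[i];
}
"""
```

The real divergence is in launching these: CUDA's `<<<blocks, threads>>>` syntax and bundled libraries (cuBLAS, cuFFT) versus OpenCL's explicit platform/context/queue setup.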
Except the Mac version isn't a port of the PS4 version (the next-gen consoles got an updated version); it's a port of the Windows one, minus a number of features by the sounds of things.
Apple doesn't care about the kind of scientific computing you are probably doing.
OpenCL runs on CPUs too so you get access to lots of memory. IBM servers, ARM, x86, GPUs etc:
http://www.khronos.org/conformance/adopters/conformant-products
CUDA is solely NVidia products.
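To make the portability point concrete, here is a minimal sketch that enumerates every OpenCL device on a machine, CPUs included. It assumes the third-party `pyopencl` bindings and simply reports nothing if they (or an OpenCL runtime) aren't installed:

```python
# Sketch: list every OpenCL platform/device pair on this machine.
# Unlike CUDA, the same enumeration can surface CPUs, AMD GPUs,
# Nvidia GPUs, and other accelerators through one API.
try:
    import pyopencl as cl  # third-party; pip install pyopencl
    devices = [(p.name, d.name)
               for p in cl.get_platforms()
               for d in p.get_devices()]
except Exception:
    devices = []  # no pyopencl or no OpenCL runtime; nothing to enumerate

for platform, device in devices:
    print(f"{platform}: {device}")
```

Running a kernel on a CPU device is slower than on a GPU, of course, but it means the same binary can fall back to the host's large RAM when a data set won't fit in GPU memory.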
On what are you basing this?
Fair enough. I am just worried about the fact that nVidia is not all that gung-ho about supporting OpenCL, and the AMD GPUs are distinctly an also-ran in that market.
I'm sorry to say this but you should buy a PC.
It is not clear that Apple cares about ANY kind of scientific computing; their machines are geared more toward image-processing work (and if that works for scientific computing, great, but it's not their goal in life).
Having attended a scientific computing conference where two Apple employees were the featured guests, I have to disagree.
I am glad to hear it, but a lot of scientific computing is done on compute farms made of generic hardware -- even the front-end machines are often Linux workstations (because that's what the back end runs). Since a lot of people carry around Apple laptops (e.g., yours truly), there should be a bigger role for Apple, but they have not seemed very aggressive about pursuing it.
You do know who came up with OpenCL? Also, your statement about generic hardware counters what you were saying about CUDA. If it's generic then OpenCL is much more suitable.
nVidia is pretty dominant in the graphics card market, and compute farms are quite likely to have Fermi coprocessors. This is my personal experience. As for OpenCL, I am well aware of its history, but again, Apple was more interested in image processing (which, of course, does come up in non-video production applications, but video WAS their prime mover).
The term scientific computing is broad and nebulous. What you appear to be talking about is not specifically scientific computing at all. You are talking about clusters. Scientific software makes scientific computing; a cluster can be used for compute-intensive tasks--scientific or otherwise. Clusters are usually built from white-box components and powered by Linux. Each node is cheaper than an iPhone.
Where is the room for Apple aggression in this scenario? You can build a fairly nice working cluster in an afternoon. If you can't, then Apple's help is probably not all that you need.
Well, as a matter of fact, my comment was inspired by my current experience. I have been considering buying a Mac Pro, because I do some GPU-based stuff, and while I do have servers off in Central Europe somewhere doing computations, it would be nice to have greater immediacy for some things. Problems: (a) the Mac Pro only supports 64GB of RAM; (b) the Mac Pro has AMD GPUs. A lot of the best tools I know of (google the Parakeet compiler for Python, or Numba or NumbaPro) are fairly CUDA-specific. CUDA out of the box from nVidia has nice libraries, and when CUDA 6.0 comes out, you will be able to just slot them into your code without thinking about GPUs at all.
If Apple cared, they would certainly be able to address problem (a), and they also have sufficient resources to address the larger problem (b) (both of these are by no means specific to me; interestingly, the first is more relevant for symbolic vs. numeric computing). But they don't. So, most likely, my money will be spent on upgrading one of my local Linux boxes with a K40 GPU.
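For what it's worth, the CUDA-specificity of those Python tools is visible right in the decorator. A hypothetical sketch, assuming `numba` and `numpy` are installed: the `target="cuda"` path only exists for Nvidia hardware, so the snippet falls back to plain Python when the CUDA stack is absent.

```python
# Why "CUDA-specific" matters in practice: Numba's GPU path targets
# Nvidia's CUDA toolchain directly, so this decorator has no AMD
# equivalent. Falls back to pure Python when CUDA isn't available.
try:
    import numpy as np
    from numba import vectorize

    @vectorize(["float64(float64, float64)"], target="cuda")  # Nvidia-only
    def add(a, b):
        return a + b

    result = [float(x) for x in add(np.arange(4.0), np.arange(4.0))]
except Exception:
    # No numba/numpy or no CUDA device: same arithmetic, CPU-only.
    result = [x + x for x in [0.0, 1.0, 2.0, 3.0]]

print(result)  # [0.0, 2.0, 4.0, 6.0]
```

On the Mac Pro's FirePro GPUs, neither branch ever reaches the GPU, which is exactly the lock-in complaint above.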