Originally Posted by wizard69
I'm a big fan of the idea of morphing the Pro into a high performance compute module that can easily talk to its neighbors over a high speed link.
The problem with such a machine is that it tends to throw users out of their comfort zone. You see this in the forums when anything other than the current Mac Pro chassis is suggested as a way forward. Add completely new technology to that new chassis and some users will be overwhelmed.
Me too. Personally I’d like to see the Mac Pro’s product niche redefined as a standalone desktop supercomputer (relative to Apple’s consumer products available at the time, at any rate), offering a viable, resilient High Performance Computing platform that can realistically address known hard problems in the scientific domain - not just be a beast at video. My personal area of interest would be running numerical general relativity codes to simulate black hole collisions, which can easily consume sustained Teraflop compute rates for days, along with several hundred GB of memory or more.
The scientific community - an admittedly small user base for Apple, and one that is sadly neglected these days - would be all over a product that could offer OS X-based, relatively affordable, self-contained Terascale computing. Big iron is already at the Petascale level, but there simply aren’t enough installations in the world for shared time on such machines to satisfy everybody’s needs, and entry-level Petascale HPC capability starts in the $200K range. Imagine if Apple could change the game for individual researchers by offering a workstation that, even in its basic configuration, could be considered a serious compute alternative to a slice of time on a large facility at an institution.
Of course some HPC companies are already evolving their platforms along a “hybrid” approach, pairing multi-core CPUs with massively multi-core GPUs (though admittedly utilising the custom interconnect fabrics and unusual system topologies prevalent in this domain). The venerable supercomputer maker Cray is partnering with nVidia to use their Tesla K10 and K20 cards (which contain roughly 3,000 CUDA cores) in its current XK6 series of machines (which use AMD Opterons as CPUs and run Linux, incidentally).
As for the ability of software to utilise a large number of GPU cores, it’s becoming increasingly easy to do so. Mathematica (science bias, I know!) has been multi-CPU-core aware for some time, but the interesting development in version 8 is that Wolfram now provide a built-in CUDALink package, which allows almost transparent use of the huge number of cores on, say, a Tesla GPU card directly from the high-level Mathematica environment. It’s a shame they recommend Dell and HP machines on which to do this.
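To give a flavour of how little code that takes, here’s a minimal sketch (written from memory of the Mathematica 8 docs, so do check the exact function names and behaviour against Wolfram’s CUDALink documentation) that multiplies two large matrices on the GPU rather than the CPU:

    Needs["CUDALink`"]                  (* load the GPU computing package bundled with Mathematica 8 *)
    CUDAQ[]                             (* True if a supported CUDA GPU and driver are available *)

    a = RandomReal[1, {4000, 4000}];    (* two large dense matrices as ordinary Mathematica arrays *)
    b = RandomReal[1, {4000, 4000}];

    c = CUDADot[a, b];                  (* matrix product evaluated on the GPU instead of the usual Dot *)

The appeal is exactly that transparency: CUDADot behaves as a drop-in replacement for Dot, and for custom work CUDAFunctionLoad lets you compile raw CUDA kernel source and call it like any other Mathematica function, so those thousands of GPU cores are reachable without ever leaving the notebook.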
It’s interesting to note that Apple are apparently re-partnering with nVidia on the laptop graphics front; perhaps, though, they should take a leaf out of Cray’s book and work more closely with nVidia (or Cray?!) to build the next-generation Mac Pro: a truly differentiated machine orders of magnitude more capable than the nearest consumer product, where the graphics card is not merely an add-on but GPU acceleration is treated as an intrinsic part of the architecture. In that sense, it would matter far less to the future of the Mac Pro exactly where Intel is in its CPU roadmap. Probably not this WWDC though....