Originally Posted by hmm
I don't believe high performance computing clusters typically use a PCI bridge as an interconnect.
It depends upon the level at which you are interconnecting. PCI may be used to access a computation accelerator such as a GPU.
This probably sounded like a good idea to you and others, but I have yet to see it suggested by anyone with even remote knowledge of engineering. The other issue is that no one has really pointed out how such a thing could be set up.
Well, there is a question of scale here: if you are thinking of HPC-center-type clusters, I see no future for TB there. However, if a cluster to you is two to four Mac Pros tied together, there is potential. It all depends upon software support as far as I can see. It also depends upon how you define a cluster; there are many ways to group machines together to solve a problem. Apple could simply use TB to send well defined work units to another box.
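To make "send well defined work units to another box" concrete, here is a hypothetical sketch that treats a Thunderbolt link as just another IP interface (IP-over-Thunderbolt) and ships a self-contained work unit to a peer over a plain TCP socket. The work-unit format is made up for illustration, and a local thread stands in for the second Mac; on real hardware you would connect to the peer's address on the bridge interface instead.

```python
# Hypothetical sketch: ship a self-contained "work unit" to a peer over
# TCP, as one might over an IP-over-Thunderbolt bridge. A local thread
# stands in for the second machine; the unit format is invented here.
import json
import socket
import threading

def peer_node(server_sock):
    """Stand-in for the second Mac: accept one work unit, compute, reply."""
    conn, _ = server_sock.accept()
    with conn:
        unit = json.loads(conn.recv(4096).decode())
        # The "well defined work unit": sum the squares of a chunk of data.
        result = sum(x * x for x in unit["chunk"])
        conn.sendall(json.dumps({"id": unit["id"], "result": result}).encode())

# Listen on loopback; on two real machines this would be the peer's
# address on the Thunderbolt bridge interface.
server = socket.create_server(("127.0.0.1", 0))
host, port = server.getsockname()
threading.Thread(target=peer_node, args=(server,), daemon=True).start()

with socket.create_connection((host, port)) as c:
    c.sendall(json.dumps({"id": 1, "chunk": [1, 2, 3, 4]}).encode())
    reply = json.loads(c.recv(4096).decode())

print(reply["result"])  # 1 + 4 + 9 + 16 = 30
```

The point of the sketch is that nothing here is Thunderbolt-specific: once the link shows up as a network interface, "clustering" at this dead-simple level is just ordinary socket programming.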
I am guessing this is a joke? It's really silly how no one considers logistics, such as the case for using a cluster versus a single workstation.
I do tend to agree that people are getting excited over things they don't understand. Realistically you could cluster Macs together now to solve problems.
Does Thunderbolt support zero-copy protocols? Will we see Thunderbolt switches?
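For anyone unsure what "zero copy" means here: it's moving data without bouncing it through user-space buffers. The kernel already exposes this for sockets via sendfile(2); whether a Thunderbolt driver stack would expose anything similar is exactly the open question. A small illustration on a local socket pair, purely as an analogy:

```python
# Illustration of what "zero copy" buys: os.sendfile() moves file data
# to a socket inside the kernel, with no read()/write() round trip
# through a user-space buffer. A local socketpair stands in for a real
# interconnect here; this says nothing about Thunderbolt itself.
import os
import socket
import tempfile

SIZE = 65536

# A scratch file to transfer.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"A" * SIZE)
    path = f.name

left, right = socket.socketpair()  # stand-in for an interconnect link
with open(path, "rb") as src:
    # Kernel-side transfer: no user-space buffer involved.
    sent = os.sendfile(left.fileno(), src.fileno(), 0, SIZE)

received = b""
while len(received) < sent:
    received += right.recv(SIZE)

os.unlink(path)
print(sent, len(received))  # 65536 65536
```

The contrast is with a naive read()-then-send() loop, which copies every byte into the process and back out again; on a fast interconnect those copies become the bottleneck.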
Probably not. I really believe that Apple and Intel had well-defined goals for TB when they started development; combine this with the fact that Intel has a superchip near release with a cluster interface built in, and I have to say no to TB "switches". I put that in quotes because TB is already built around crossbar switch technology.
Does GCD support a PCI bridge interconnect? Does it have some method of failover support? Have they started on Thunderbolt switches? You'd need some kind of switch for issues like traffic shaping to deal with multiple IO requests. Details seem to be ignored.
While those are all good questions, you seem to miss the point here: if Apple is about to market a cluster solution, it will be dead simple and defined for a specific group of uses. It won't be a solution where huge racks of Mac Pros are tied together. At best it will be a desktop "clustering" solution tying a few machines together, where "few" means fewer than seven. The reality is there are far better solutions in heavy deployment for larger clusters.
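A dead-simple desktop "cluster" of that kind could be little more than a coordinator round-robining work units over a fixed, tiny peer list. A sketch of that idea, with in-process callables faking the peers (real ones would sit behind sockets, as in any networked dispatch scheme; the names are invented):

```python
# Minimal sketch of "dead simple" desktop clustering: a coordinator
# round-robins self-contained work units across a small, fixed list of
# peers (fewer than seven). The peers here are plain in-process
# callables; on real hardware each would be a connection to another
# machine. No failover, no traffic shaping - that is the point.
from itertools import cycle

def make_peer(name):
    def run(unit):
        # Each peer just evaluates its work unit and tags the answer.
        return {"peer": name, "result": sum(unit)}
    return run

peers = [make_peer(f"mac-{i}") for i in range(4)]  # four boxes
units = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]

rr = cycle(peers)
results = [next(rr)(u) for u in units]

print([r["result"] for r in results])  # [3, 7, 11, 15, 19]
print(results[4]["peer"])  # fifth unit wraps around to "mac-0"
```

Everything the earlier post asked about (failover, switching, traffic shaping) is deliberately absent here, which is consistent with the claim that an Apple offering would target simple, fixed-size setups rather than real HPC deployments.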
InfiniBand is that solution, as can be seen here: http://www.computerworld.com/s/article/9225703/Intel_superchip_will_be_aimed_at_high_performance_computing
Intel is on the road to integrating the tech directly into its processors. It would be most interesting if Apple's long delay is not for Sandy Bridge E but rather for this mystery chip with InfiniBand integrated on board. Such a chip would make for a very interesting approach for the next Mac Pro, a machine where clustering isn't an afterthought at all. By the way, Apple would still need TB on such a machine.
All of the above sounds good, but I'm not so certain that the chip will be ready this year. Hard to tell, though; you wouldn't think Intel would announce anything at all unless the hardware was close.