Located on the University's main campus at Urbana-Champaign, the new Turing Cluster currently consists of 640 Xserve G5 nodes, each with two G5 processors running at 2GHz and 4GB of RAM, for a total of 1,280 processors. A high-bandwidth, low-latency Myrinet network from Myricom connects the cluster machines.
The cluster also uses wired Ethernet networking, switched by full-duplex 100Mb hardware from Cisco Systems, with a 1Gb link between the front-end array and the primary Cisco switch. It runs a version of Apple's Mac OS X Server 10.3 "Panther" operating system.
Until recently, the University's Turing Cluster ran version 7.2 of Red Hat Linux on just 208 dual-processor machines, supplied by both Dell and Hewlett-Packard. But last summer, Michael Heath, Director of Computational Science and Engineering, saw that the Intel-based cluster had reached the end of its life span in terms of reliability. Additionally, he said, the cluster's aging computational ability was no longer generating interest.
Heath set out on a mission to revamp the Turing Cluster into a facility that would be open to local users, scientists, and the University's student population of over 35,000. He pitched the endeavor to several corporations; Apple, he said, responded quite eagerly.
After generating the necessary funding through donations from over a half-dozen University departments -- including the Beckman Institute, College of Engineering, and Department of Computer Science -- Heath and a team of supporters began nailing down the details of the cluster with Apple. By October 2004, they had constructed a 64-node mini-cluster in a temporary server room as a small-scale model of the entire system.
Speaking with AppleInsider this week, Heath revealed that the full-scale 640-node Xserve G5 cluster has been operational for about a month now, but is not yet fully stable. He said it would take approximately one more month to stabilize the cluster, acquire a few missing parts, and exchange a couple of Xserve G5 units that were dead on arrival.
[Image gallery: The new Turing Cluster from start to finish]
The cluster is housed in a newly renovated server room that can handle a cooling load of up to 550,000 BTU per hour and supports over 45 tons of cooling capacity using four distinct cooling systems -- three of which can adequately support the cluster at any given time.
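As a sanity check on those numbers, one ton of refrigeration is conventionally defined as 12,000 BTU per hour, so the two quoted figures are mutually consistent:

```python
# One ton of refrigeration is defined as 12,000 BTU per hour.
BTU_PER_HR_PER_TON = 12_000
tons_of_cooling = 45

capacity_btu_per_hr = tons_of_cooling * BTU_PER_HR_PER_TON
print(capacity_btu_per_hr)  # 540000 -- in line with the quoted 550,000 BTU/hr load
```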
In the near future, the University hopes to be able to perform tests to rank the cluster's computational ability, which, according to its specifications, could peak at nearly 10 teraflops.
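That peak figure squares with the hardware: the PowerPC 970 (G5) has two floating-point units, each capable of a fused multiply-add, giving four floating-point operations per clock cycle. A back-of-the-envelope estimate (assuming that 4 flops/cycle figure) runs:

```python
processors = 1280       # 640 nodes x 2 G5 processors each
clock_hz = 2e9          # 2GHz
flops_per_cycle = 4     # two FPUs, each issuing a fused multiply-add (2 flops)

peak_flops = processors * clock_hz * flops_per_cycle
print(peak_flops / 1e12)  # 10.24 -- roughly the "nearly 10 teraflops" quoted
```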
Heath declined to provide specifics of the University's arrangement with Apple, but a web page dedicated to the new Turing Cluster says Apple provided the hardware under a combination purchase/donation agreement.
And while Heath conceded that there is no formal upgrade plan for the future of the cluster, he said his team has been seriously considering expanding it to 1,280 nodes, double its current size.