Originally posted by wizard69
I'm not sure why people think that all these applications need to be recompiled. You would still be running on a Mac OS X operating system and you would still be using the PPC instruction set.
C and C++ do not do parallelism. They are designed to assume that there is one CPU. All means of distributing work across multiple CPUs or cores have to come from outside those languages, via system libraries. 99% of the time, it's done manually by the programmer, and the threads are very coarse because the application only expects 1 or 2 large, fast CPUs.
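To make "manually by the programmer" concrete, here is a minimal sketch of that kind of coarse manual threading using POSIX threads, the sort of system library an OS X developer would reach for. The array, the chunking, and the hard-coded thread count are all invented for illustration:

[code]
/* Coarse manual threading with pthreads. The language itself knows
 * nothing about this; the programmer picks the thread count and
 * splits the work by hand. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 2      /* baked in: "1 or 2 large, fast CPUs" */
#define N 1000000

static double data[N];

struct slice {
    int start, end;
    double sum;
};

static void *sum_slice(void *arg)
{
    struct slice *s = arg;
    int i;

    s->sum = 0.0;
    for (i = s->start; i < s->end; i++)
        s->sum += data[i];
    return NULL;
}

int main(void)
{
    pthread_t threads[NTHREADS];
    struct slice slices[NTHREADS];
    double total = 0.0;
    int i, t;

    for (i = 0; i < N; i++)
        data[i] = 1.0;

    /* Divide the array into NTHREADS coarse chunks by hand. */
    for (t = 0; t < NTHREADS; t++) {
        slices[t].start = t * (N / NTHREADS);
        slices[t].end = (t == NTHREADS - 1) ? N : (t + 1) * (N / NTHREADS);
        pthread_create(&threads[t], NULL, sum_slice, &slices[t]);
    }

    /* Wait for every worker, then combine the partial results. */
    for (t = 0; t < NTHREADS; t++) {
        pthread_join(threads[t], NULL);
        total += slices[t].sum;
    }

    printf("total = %f\n", total);
    return 0;
}
[/code]

Notice that the thread count is baked in. Retargeting this from two big CPUs to four or eight small ones means revisiting the chunking, the synchronization, and quite possibly the whole design.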
If you want that application to run on 4 small CPUs (or possibly many more, in the case of Cell), a mere recompile would be the best case. The more execution cores you have to target, the more redesign you'd have to do. Because C and C++ make no provision for dividing up work, and because historically there has been no need to outside of supercomputing, this is not easy. In particular, debugging and troubleshooting threaded code is exasperating, and the exasperation increases supralinearly with the number of threads.
Well, an interesting problem, but the number of processors is not an issue. Even now developers face a varying number of processors that the application may run on. Two on the Power Mac G5, soon to be four when multi-core processors come out. So the number is not the issue.
It's not an issue now because the only dual-processor machines also happen to use the fastest available chips, so threading is a luxury. On the platform proposed in this thread, it would become a necessity, and that would have a tremendous impact on application design - especially among the big, monolithic Carbon applications originally designed for an OS that had mediocre threading support bolted on late in its life. Furthermore, an application designed to spread itself over a large number of individually weak chips won't run as well on a platform built on one or two fast chips, because threading carries overhead.
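For a feel for that overhead, here's a toy measurement, assuming any POSIX system; the iteration count is arbitrary. It times nothing but thread creation and teardown:

[code]
/* Toy illustration of per-thread overhead: spawn and join a trivial
 * thread repeatedly and see what the bookkeeping alone costs. */
#include <pthread.h>
#include <stdio.h>
#include <sys/time.h>

static void *noop(void *arg) { return arg; }

int main(void)
{
    enum { N = 1000 };
    pthread_t t;
    struct timeval t0, t1;
    double elapsed;
    int i;

    gettimeofday(&t0, NULL);
    for (i = 0; i < N; i++) {
        pthread_create(&t, NULL, noop, NULL);
        pthread_join(t, NULL);
    }
    gettimeofday(&t1, NULL);

    elapsed = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%d create/join pairs: %.3f s (%.1f us each)\n",
           N, elapsed, 1e6 * elapsed / N);
    return 0;
}
[/code]

Every one of those microseconds is pure bookkeeping. An application sliced into many threads pays that tax whether or not the extra CPUs are there to earn it back.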
Now the lack of a vector unit could hamper things, but this is no different than having 603 hardware out and about. Add a vector unit and what I think will happen is that the developers simply won't care. Ultimate optimization will always be for the desktop line anyway. Think about it: how many developers optimize their software for any of the laptops?
This makes no sense. Professional applications are optimized for professional machines, not "desktops" or "laptops". PowerBooks are professional machines. Apple pitches them that way, and people use them that way. A PowerBook that couldn't run AltiVec-heavy apps like Photoshop, DVD Studio Pro, Final Cut Pro, Logic, etc., would not sell at anywhere near the rate of the current PowerBook.
The market is moving to notebooks, generally. Apple sold an unheard-of 197,000 PowerBooks last quarter. This is exactly the wrong time to start gimping them.
Every indication, though, is that cell is a concept, not so much a specific architecture, so it is possible that IBM may try to merge the concepts from several design fronts into a low-power processor for Apple.
One more time. Cell, upper-case-C, is a particular implementation shipping from IBM soon. Cell, lower-case-c, is a noun referring to a self-contained entity. Cellular computing is a concept. They are three different, though not unrelated, things.
As to the suitability of Xgrid: generally, you can choose between a solution that scales up well and a solution that scales down well. Xgrid is designed for powerful CPUs connected by (relatively) low bandwidth, which makes it a poor fit for the opposite case of weaker CPUs connected by high bandwidth. As soon as you start distributing computations across nodes, you have to become sensitive to both available compute resources and available bandwidth, because any inefficiency will squander a surprising amount of your available power.
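A back-of-the-envelope model makes the sensitivity obvious. Every number below is invented purely for illustration; the point is only that the transfer tax can eat a remote node's entire contribution:

[code]
/* Back-of-the-envelope model of farming one job out to a remote node.
 * Shipping the input costs bytes/bandwidth up front, so the remote
 * CPU only wins if its compute advantage beats the transfer tax.
 * All figures here are made up for illustration. */
#include <stdio.h>

int main(void)
{
    double work  = 10e9;        /* operations in the job               */
    double bytes = 100e6;       /* input data that must cross the wire */
    double local_ops_s  = 1e9;  /* hypothetical weak local CPU         */
    double remote_ops_s = 4e9;  /* hypothetical powerful remote CPU    */
    double bw[] = { 1e6, 10e6, 100e6 };  /* link speeds, bytes/sec     */
    double local = work / local_ops_s;
    int i;

    printf("run locally: %.1f s\n", local);
    for (i = 0; i < 3; i++) {
        double remote = bytes / bw[i] + work / remote_ops_s;
        printf("ship over %3.0f MB/s link: %6.1f s (%s)\n",
               bw[i] / 1e6, remote, remote < local ? "wins" : "loses");
    }
    return 0;
}
[/code]

On the slow links the remote node loses outright: shipping the data costs more than the faster CPU saves. Xgrid copes with that by sending only huge, coarse jobs, and that is exactly the strategy that stops making sense when the CPUs are weak and the work has to be fine-grained.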