What does it take to support more processor architectures?
(a follow-up for my previous posts on this thread)
Many members posting here think that changing the processor architecture means that the majority of developers must rewrite a lot of code to "optimize" it for the new processor. This is not the case. Here is why.
From a programmer's point of view, processors differ mainly in the following:
- byte order (big endian vs. little endian)
- data register size (may affect data alignment) and addressable memory (processor "bitness")
- vector processing units or other processor-specific features
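To make the byte-order point above concrete, here is a minimal C sketch (mine, not from the original post; the function name is my own) that detects the byte order of the machine it runs on:

```c
#include <stdint.h>

/* Returns 1 on a little-endian machine (e.g. Intel), 0 on a
 * big-endian one (e.g. PowerPC). The 32-bit value 0x01020304 is
 * stored in memory as bytes 04 03 02 01 on little-endian hardware,
 * so the first byte in memory reveals the order. */
int is_little_endian(void) {
    uint32_t probe = 0x01020304u;
    return *(const uint8_t *)&probe == 0x04;
}
```

The same source compiles unchanged on both kinds of machine; only the runtime answer differs.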
The instruction set is handled by the compiler, unless you are writing in assembler (very, very few programmers do). Every processor vendor makes sure a compiler is available for the most widely used languages on its processor. One of the most widely used compilers, used by Apple's Xcode as well, is GCC - the GNU Compiler Collection - which provides compilers for a number of programming languages on a wide range of processors. Other compilers, e.g. from Intel or IBM, exist, but they are not free for developers, are bound to specific processors, and are not commonly used. So, in general, the programmer needs to do NOTHING special to compile for the processor architecture on which he/she is compiling. The compilers are also capable of cross-compiling - compiling for an architecture different from the one they are running on. For example, you can compile for PPC on an Intel computer and vice versa.
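As a sketch of how one source file can serve several targets: GCC predefines macros identifying the architecture being compiled FOR (not the machine the compiler runs on), which is what makes cross-compiling transparent. The exact macro names vary by compiler and platform, so treat these as illustrative:

```c
/* Report the architecture this translation unit was compiled for.
 * When cross-compiling, the compiler defines the macro for the
 * TARGET architecture, so the same source adapts automatically. */
const char *target_arch(void) {
#if defined(__ppc64__)
    return "ppc64";
#elif defined(__ppc__) || defined(__powerpc__)
    return "ppc";
#elif defined(__x86_64__)
    return "x86_64";
#elif defined(__i386__)
    return "i386";
#else
    return "unknown";
#endif
}
```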
In many cases the developer has to take care of byte order, in particular when reading data streams from disk or from the network. In order to keep the code portable, there are some simple rules which should be followed here. Apple provides several ways to byte-swap, including macros and functions. Byte-swapping errors may introduce bugs, but Apple has already fixed most of them (it has to support both endiannesses for now). It is not a big deal to keep the code endian-savvy. Moreover, it is considered bad programming practice not to do so.
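A byte swap is just a few shifts and masks. On Mac OS X you would normally reach for Apple's helpers instead (for example the CFSwapInt32 family in CoreFoundation), but a plain-C sketch of the idea looks like this:

```c
#include <stdint.h>

/* Reverse the byte order of a 32-bit value: bytes ABCD become DCBA.
 * Code reading a big-endian file format calls this only when running
 * on a little-endian machine (and vice versa); applying it twice
 * gives back the original value. */
uint32_t swap32(uint32_t v) {
    return (v >> 24)
         | ((v >> 8) & 0x0000ff00u)
         | ((v << 8) & 0x00ff0000u)
         | (v << 24);
}
```

The "simple rules" amount to this: swap exactly once, at the point where data enters or leaves the program, and keep everything in host order in between.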
Most old Mac fans tend to believe that Mac developers worked a lot to support AltiVec. This is not the case. Apple did, and the majority of the results of this work is embedded in a very small number of frameworks and libraries (compared to the entire code base). The only thing developers need to do is use them. The AltiVec-optimized code is a small part of Mac OS, and a large part of it is "concentrated" in known places, so there is no need to go through all the code to make the switch if needed. Some developers do use processor vector units explicitly, but they are a very small percentage of the developer community. Most developers rely on auto-vectorization by the compiler, or, more importantly, on other compiler optimizations. You may be surprised to learn that most of the executable code out there, including the OS itself, is OPTIMIZED FOR SIZE, NOT FOR SPEED.
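To illustrate what "relying on the compiler" means: a loop like the one below is exactly the kind of code GCC can auto-vectorize (e.g. with -O2 -ftree-vectorize) without the programmer writing a line of AltiVec or SSE. The source itself is an ordinary scalar loop:

```c
#include <stddef.h>

/* A plain element-wise sum. The programmer writes ordinary C;
 * whether the compiler emits AltiVec, SSE, or scalar instructions
 * for this loop is entirely the compiler's decision, chosen per
 * target architecture at compile time. */
void add_arrays(const float *a, const float *b, float *out, size_t n) {
    for (size_t i = 0; i < n; ++i)
        out[i] = a[i] + b[i];
}
```

Switch the target processor and the vector instructions switch with it, while this source stays untouched.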
Supporting both 32-bit and 64-bit is much more trouble than the other differences. This is more difficult to explain, but I will give it a try. When you have a function call in your code, you don't care about the processor's instruction set, endianness, or vector units. This is handled by the compiler. But the function always has a definite return type, for example integer or float. So far so good. But what should you do if you want to take advantage of the processor's larger data registers? You need a new function which returns the corresponding long data type. In your code you have to check which processor the code compiles for, and select the function you want to use. And you have to do this for every call to a potentially large number of functions. This is a HUGE problem. To make it worse, most code relies on various libraries. You need libraries which are specifically made to support 64-bit (as said earlier, just recompiling for a different processor is much simpler), so you may have to wait until the author of the said library makes it 64-bit savvy.
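One common way around the "two functions for every call" problem, sketched here with a hypothetical function of my own, is to write against the fixed-width types of C99's <stdint.h>, so the same source has one definite return type on both 32-bit and 64-bit builds:

```c
#include <stdint.h>

/* int64_t is 64 bits wide on EVERY architecture, so this function
 * has one definite return type everywhere. By contrast, the width
 * of a plain 'long' differs between 32-bit and 64-bit compilation,
 * which is what forces the dual-function scenario described above. */
int64_t sum_to(int32_t n) {
    int64_t total = 0;   /* cannot overflow for any int32_t input */
    for (int32_t i = 1; i <= n; ++i)
        total += i;
    return total;
}
```

This helps with new code; the libraries and frameworks whose public types are already fixed as plain int or long are exactly where the waiting-for-the-author problem bites.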
Leopard will have full 64-bit support (that is, all of Leopard's frameworks and libraries). Some applications (from Apple and third parties) will not. But Apple will provide tools to make supporting both 32-bit and 64-bit much easier than the above scenario.
To summarize, Apple's code portability is a HUGE advantage over Windows (Linux is very portable, however). Now that Apple has improved and "tested" its portability, it would be ultra-stupid not to keep it that way, even after the current PPC installed base fades away. Most of the code in Mac OS will survive more than one processor architecture. And Apple will want to "have options", both for future processor architectures and for licensing Mac OS to third parties (remember, Windows has big trouble supporting multiple architectures).
Sorry for the long post. Hope it makes some points clearer.