[quote]Originally posted by Amorph:
<strong>What 64-bit means: Essentially, that the instruction words (the block of memory containing an instruction for the CPU to execute) is 64 bits wide</strong><hr></blockquote>
That's probably not a very good criterion. x86 CPUs, for example, have a completely non-uniform instruction length, and by that measure would be something between 8-bit and 256-bit, depending on the instruction, prefixes, modifiers and operands.
[quote]<strong>and the CPU fetches instructions and data in 64 bit chunks ("words").</strong><hr></blockquote>
Not a valid criterion either. x86 processors have been fetching data from main memory in 64-bit chunks since the original Pentium, and I think the same is true for current PPCs. Datapaths to the L1 cache are sometimes even as wide as 256 bits (P3).
[quote]<strong>Notice that the amount of memory the CPU can address is not on that list. It's not uncommon for CPU address registers to be wider than the general registers: For instance, the 8-bit 65C02 had a 16 bit memory bus to allow it to address a whopping 64K. The 7450 reportedly has a 36-bit memory bus.</strong><hr></blockquote>
Since most current CPUs use their GP registers for address calculations, this is not really an issue.
True, some CPUs have address buses that are wider than their GP registers, but they still usually have 32-bit address operands in their instructions. To reach the additional memory space provided by the extra four address lines, they resort to additional segment registers. Note that these always *add* to the width of the GP registers, so the number of address bus lines is always bigger than the GP register width, and moving to 64-bit GPRs *will* automatically increase the addressable memory space.
[quote]<strong>The effects of going from 32-bit to 64-bit are, generally:
64-bit precision computations go a lot faster. (Note, however, that a 64-bit G5 will still trail the lowly 604e in raw precision: The 604e came with a special 80 bit
FPU, and a lot of scientists really miss that little feature.)</strong><hr></blockquote>
It actually did? x86 processors do too, but hardly anybody else. Also, I wonder how memory alignment was handled with 80 bit (not a power of 2) floats...
[quote]<strong>This will speed up Lightwave 3D, and a few other high-end, high-precision applications (Maya?!), and it might prompt high-end 32-bit apps like Photoshop to switch to 64-bit precision when possible. More is always better as far as precision goes.
Furthermore, I have heard rumblings that 64-bit color is in the pipeline; if not now, soon.</strong><hr></blockquote>
Hmm, I'd agree in most cases, but I wonder what the benefit would be in graphics - a 64-bit color space would probably be of little use given the human eye's restrictions.
[quote]<strong>If the main memory bus is 64 bit, you can address truly insane amounts of real and, with an appropriately tweaked filesystem, insane amounts of external storage space and virtual memory. ("insane" ~= 1.6 exabytes, by an offhand calculation, where 1 exabyte = 1 million gigabytes).</strong><hr></blockquote>
Actually, the main memory bus has been 64 bits wide for a long time. You're probably referring to the address bus.
[quote]<strong>Application files (object files) bloat, because instead of loading instructions and data in 32-bit chunks, it loads them in 64-bit chunks. This means that if your instruction or data only takes up 32 bits then you're wasting a lot of space - the other 32 bits are padded and/or ignored.</strong><hr></blockquote>
This is definitely true.
Obviously, the same was true when the transition from 16-bit to 32-bit was made, but there was a significant difference: back then, the limits imposed by being 16-bit were visible (and hindering) in everyday use, so trading in some bandwidth in exchange for a larger address space was very acceptable for most users. Nowadays, on the other hand, a 32-bit address space is not such an immediately limiting factor. It's only a problem in a few situations (mainly huge databases; scientific code probably uses doubles [already 64 bits wide today] anyway), and most users are not likely to ever run across it. As such, most users would see no benefit at all (everybody who has ever run a program larger than 4GB, please raise your hand), but would still have to sacrifice memory bandwidth. So, in general, I think moving the complete line to 64 bits is not a good option for Apple at this time.
[quote]<strong>With some adaptation, a handful of instructions which previously had to be loaded first as a 32-bit "instruction" word and then as a 32-bit "data" word might be able to run faster simply because the CPU can load them both at once. This is a hypothetical, not by any means a given. It might very well not provide enough of a performance boost to be worth the trouble of implementing it.</strong><hr></blockquote>
As stated above, memory transfers in all current 32-bit CPUs are already 64 bits wide (as is SDRAM, incidentally).