Originally posted by wizard69
This is exactly what I'm getting at. There is no reason to infer that there is precision lost or coarse measuring tools or anything of that nature. In the end you are specifying the same value - 0.25 represents the same thing as 1/4.
No, it's not what you're getting at. You're missing the point. The mathematical real number 0.25 is the mathematical real 1 divided by the mathematical real 4. Anything measured is a lot messier than that, and any responsible scientist has to account for that. A measured value of "0.25" could be any of: 0.24999, 0.253, 0.25000000001, or even precisely 1/4 (although what are the odds of that?). You don't know what the exact value is, so any assumptions beyond the initial significant digits are almost guaranteed to be false.
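To make that concrete, here's a quick Python sketch. The "true" values are hypothetical, chosen only for illustration: every one of them produces the same two-significant-digit reading, so the reading alone can't distinguish them.

```python
# A measured "0.25" pins down only its significant digits.
# Each of these hypothetical underlying values yields the same
# reading when formatted to two decimal places:
true_values = [0.24999, 0.25, 0.253, 0.25000000001]
readings = [f"{v:.2f}" for v in true_values]
print(readings)  # every entry displays as '0.25'
```
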
If you have a measurement of 0.25 and change, as you say, it is silly to dispose of that "change" until you know whether it is relevant. There may be little precision in that measurement, but you do want to keep and track all of the resolution that you had when you made the measurement. You seem to be making the mistake of confusing resolution with precision; they are not the same thing.
At this point I have no idea what you're talking about. Precision is the fineness with which something can be represented. It applies both to measurements and to representations in floating point, which is why people refer to both "64 bit precision" and "precision tools".
How is resolution different from precision, anyway? Both specify a quantum value beneath which the representation is no longer accurate.
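For what it's worth, here's how an IEEE 754 double behaves in Python. Its relative precision is a fixed ratio (about 16 decimal digits), while its absolute spacing, the quantum between adjacent representable values, scales with magnitude. This is just a sketch of the double format, not a ruling on the terminology dispute.

```python
import math
import sys

# Relative precision of a 64-bit double is fixed: about 2.2e-16,
# i.e. roughly 16 decimal digits regardless of magnitude.
print(sys.float_info.epsilon)

# Absolute resolution -- the spacing between a value and the next
# representable double -- grows with the magnitude of the value:
print(math.ulp(1.0))    # same as epsilon at magnitude 1
print(math.ulp(1.0e6))  # a much coarser quantum at magnitude 1e6
```
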
You've misinterpreted what I've said. If you have a measurement of "0.25 and change," you don't know what that change is. It could be zero, or it could not be. Disposing of it is not an issue, because you don't know what it is in the first place. If you could measure it in any meaningful way, there would be significant digits to represent it! The fact that FP might, over the course of calculations, introduce a whole bunch of extra digits (but not significant digits, because you can't get signal from noise) is an unwelcome artifact. It's not anything you can use, and it's not the "change" I was referring to.
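A quick Python illustration of that artifact: summing a value that binary FP can't represent exactly manufactures extra digits, but none of them carry information.

```python
# 0.1 has no exact binary representation, so each addition drags in
# digits that are pure representation noise, not measurement signal.
total = 0.0
for _ in range(10):
    total += 0.1
print(total)           # lots of trailing digits, none of them significant
print(f"{total:.1f}")  # rounded back to the one digit we actually trust
```
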
Frankly, you have not made any case at all for keeping doubles out of a SIMD unit of any type. All you have to accept is that singles, that is, 32 bit floats, do not have the dynamic range for many applications.
All you have to do is answer the question: How many applications, and are they worth the cost of implementing a vector unit vs. using parallelism and conventional FP units? It's nice that there are supercomputers that can do this, but Apple doesn't make supercomputers (silly marketing hype aside). I don't pretend to know the answer to that question, but it's not a simple question, and it can't be blown off.
We don't know that IBM and Apple (and probably Mot) are revisiting VMX to provide this functionality, either. There are all kinds of capabilities they could add that would greatly improve its appeal for streaming and signal processing work without changing the sizes of the registers or supporting 64 bit anything.
Since this is a chip that apparently hasn't even been realized yet, the above concerns are really not valid.
No, they're valid, they just aren't anything more than "concerns." I'm not saying can't. I'm merely pointing out that the concerns are hairy enough, and the payoff uncertain enough, and other features desirable enough, that the support you want might or might not happen even at 65nm. Whether it happens depends on a large number of variables whose values are currently unknown.
Personally, looking at the problem, I am leaning toward "won't happen." The current top of the line PowerMac can crunch through 4 FP calculations per clock (2 CPUs with 2 FPUs each). There you go - no additional hardware necessary, and twice the memory bandwidth and twice the cache that would be available to a 64-bit-savvy VMX unit on one CPU.
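As a back-of-the-envelope sketch of that point in Python (the 2.0 GHz clock is my hypothetical figure for illustration, not a number from this thread):

```python
# Peak scalar-FP throughput for the dual-CPU configuration described
# above: 2 CPUs x 2 FPUs each, one FP result per FPU per clock.
# The 2.0 GHz clock rate is a hypothetical figure, chosen only to
# make the arithmetic concrete.
clock_hz = 2.0e9
cpus = 2
fpus_per_cpu = 2
peak_flops = clock_hz * cpus * fpus_per_cpu
print(f"{peak_flops / 1e9:.0f} GFLOPS peak")
```

Whatever the actual clock, the point stands: the existing FPUs already deliver parallel FP throughput with no new vector hardware.
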
Tradeoffs in processor design are not going away; it is a matter of getting the best bang for the buck for the processor's target market.
So what are the tradeoffs in going to 256 bit or 512 bit registers, and how easy are they to surmount? Is it worth it to the target market? There is zero use for it in the embedded space (any time soon), zero use on the desktop (any time soon), so that leaves the workstation and server markets. Servers might be able to use it for IPv6 networking, but IBM seems to have other ideas for that sort of thing (FastPath, which will intercept a lot of the system interrupts and allow the main CPU to keep crunching away).