[[[Because apps that need number crunching and require a double-precision FPU won't see any advantages from AltiVec. Developers for POV-Ray and E-On have mentioned this, and it probably affects a large number of potential AltiVec'd apps out there. ]]]
This is kind of a myth... Double precision is certainly not NEEDED everywhere, and it makes for a biased benchmark to begin with. Oh, and just so you know, my friend David K. Every worked with the fellow who developed POV-Ray way back when, and it wasn't really written with the Mac in mind. BTW, you should all hop on over to Dave's site and check out his articles discussing benchmarks. They are a great read.
Anyway, I posted a little something on a different thread back in early July. I'll repost parts of it here. It talks about double precision and why YOU WOULD NOT WANT DOUBLE PRECISION IN THE VECTOR UNIT!
How many are familiar with the old excuse:
"Um, but you see, it's not *our* fault the speed is underwhelming; there are just some things that AltiVec simply cannot be used for".
How often have we all heard this? The fact is that the "some things" always turned out to be ONE thing in particular: "Our apps require double precision, and AltiVec cannot be used in any way to perform double precision calculations."
Again, consumers were feeling disappointed and annoyed at Apple. As usual, I snooped around and found some interesting tidbits that many people fail to notice, then checked them for accuracy and validity by asking some legitimate sources... Many users and marketing types absolutely swear by the "quality" of renders that a double-precision calculation would produce. I notice that these claims fail to mention any threshold with respect to the limits of human sight. There is a point where the human eye, no matter how good your vision, will not be able to discern any increase in resolution or quality even if it were there. And since we are talking about full-motion animated 3D scenes shown on current monitors and TVs, many tricks can be played on human vision; even the best of it.
From what I've discovered, it's reasonable to believe that you don't *need* double precision for 3D unless you are really, really sloppy with your algorithms (and code).
Double precision calcs are usually employed because you can get away with a lot more slop in your coding. Here is a small rant about this endless nonsense about double precision in the vector unit. I obtained the info from a trusted source -- a Ph.D. who happens to be a PPC/AltiVec programmer... I decided to cut and paste it here so I could reply more quickly to this forum discussion.
[[[Q: Is an updated, double precision-centric AltiVec unit the way to go?
Here is why it isn't:
The vector registers each have room for four single precision floats. So for single precision, you can do four calculations at a time with a single AltiVec instruction. AltiVec is fast because you can do multiple things in parallel this way.
Most AltiVec single precision floating point code is 3-4 times faster than the usual scalar single precision floating point code for this reason. The reason that it is more often only three times faster and not the full four times faster (as would be predicted by the parallelism in the vector register I just mentioned) is that there is some additional overhead for making sure that the floats are in the right place in a vector register, that you don't have to deal with in the scalar registers. (There is only one way to put a floating point value in a scalar register.)
Double precision floating point values are twice as big (take up twice as many bytes) as single precision floating point values. That means you can only cram two of them into the vector register instead of four. If our experience with single precision floating point translates to double precision floating point, then the best you could hope to get by having double precision in AltiVec is a (3 to 4)/2 = 1.5 to 2 times speed up.
Is that enough to justify massive new hardware on Motorola's or Apple's part?
In my opinion, no.
This is especially true when one notes that using the extra silicon to instead add a second or third scalar FPU could probably do a better job of getting you a full 2x or 3x speedup, and the beauty of it is that it would require absolutely no recoding for AltiVec. In other words, it would be completely backwards compatible with code written for older machines, give *instant speedups everywhere*, and require no developer retraining whatsoever. This would be a good thing.
Even if you still think that SIMD with only two-way parallelism is better than two scalar FPUs, you must also consider that double precision is a lot more complicated than single precision. There is no guarantee that pipeline lengths would not be a lot longer. If they were, that 1.5x speed increase might evaporate -- quickly.
Yes, Intel has SSE2, which has two doubles in a SIMD unit. Yes, it is faster -- for Intel. It makes sense for Intel for a bunch of reasons that have to do with shortcomings in the Pentium architecture and nothing to do with actual advantages with double precision in SIMD.
To begin with, Intel does not have a separate SIMD unit the way the PowerPC does. If you want to use MMX/SSE/SSE2 on a Pentium, you have to shut down the FPU, and that is very expensive to do. As a workaround, Intel added double precision to its SIMD so that people can do double precision math without having to restart the FPU. You can tell this is what they had in mind because there are a bunch of instructions in SSE2 that operate on only one of the two doubles in the vector. They are in effect using their vector engine as a scalar processing unit to avoid having to switch between the two. Their compilers will even recompile your scalar code to use the vector engine in this way, because it avoids the switch penalty.
Okay, so Intel has double precision in their vector unit, and despite what I have said, you still think that is absolutely wonderful. But do they really have a double precision vector unit? The answer is not so clear. Their vector unit actually does calculations on the two doubles in the vector in much the same one-at-a-time fashion an ordinary scalar unit would, and for this reason it can only get one vector FP op through every two cycles. AltiVec has no such limitation.
AltiVec can push through one vector FP op per cycle, doing four floating point operations simultaneously (up to 20 in flight concurrently). AltiVec also has a MAF core, which in many cases does two FP operations per instruction. This is the reason why despite large differences in clock frequency, AltiVec can meet and often beat the performance of Intel's vector engine.
The other big dividend they get from double precision SIMD is that they can fit two doubles into one register. When you only have eight registers, that is a big deal! [PowerPC has 32 registers each for scalar floating point and AltiVec!] In 90% of cases, we programmers don't need more space in there; the registers the PPC provides are just fine.
Simply put (from a developer's position), we just don't need double precision in the vector engine, and we wouldn't derive much benefit from it if we had it. The worst thing that could happen for Mac developers is that we get it, because that would mean the silicon could not be used to make some other part of the processor faster and more efficient, and a lot of code would need to be rewritten for little to no performance benefit. It wouldn't be a logical tradeoff.
The only way this would be worthwhile would be to double the width of the vector register so that we get 4x parallelism for double precision FP arithmetic.
And with respect to 3D apps *requiring* double precision...
Most 3D rendering apps do not NEED double precision everywhere. They just need it in a few places, and often (if they really decide to look) they may find that there are more robust single precision algorithms out there that would be just as good. In the end they should be using those algorithms anyway, because the speed benefits from SIMD are twice as good for single precision as they are for double precision.
Apps like that can get a lot more mileage out of the PowerPC if they just increase the amount of parallelism as much as possible in their data processing: don't take just one square root at a time, do four, and so on. And this isn't even taking multiprocessing into account yet, or even AltiVec for that matter. The scalar units alone, by virtue of their pipelines, are capable of doing three to five operations simultaneously! If you don't give them 3-5 things to do at every given moment, however, that power goes unused. Unfortunately, this can be seen in quite a few Mac applications already on the market, where performance doesn't seem to be as solid as it should be. What is baffling is why many Mac developers aren't taking advantage of this power. What it boils down to is that most of these apps do just one thing at a time (for the most part), and in turn waste 60-80% of the CPU cycles. That's a lot of waste. What's nice is that the AltiVec unit is also pipelined, so it is important to do a lot in parallel there too. The only problem is that developers actually have to make a conscious effort to use the processor the way it was designed to be used. ]]] - (Anonymous source)
Anyway, I hope that cleared up a few things that have been on the minds of some Mac users. Again, I'm not an expert, but I do research these things. I ask experts and professors and PPC designers and programmers all in an attempt to gain a better understanding of what's really going on.
Some of you might find this bit of info interesting. Check it:
The two titles to zero in on are:
- Vector implementation of multiprecision arithmetic
- Octuple-precision floating-point on Apple G4
102 - Mac OS X Performance Optimization with Velocity Engine