On Tue, Apr 9, 2013 at 9:07 PM, Carlos E. R.
On Tuesday, 2013-04-09 at 20:00 -0300, Cristian Rodríguez wrote:
On 09/04/13 18:44, Carlos E. R. wrote:
You mean that some of the hardware CPU math instructions are inaccurate, and thus we have to fall back to software implementations, as we did 20 years ago before the math coprocessor was invented? That's a shame :-(
There is the -ffast-math compile option for cases where you don't need/want high precision.
Ah, well, that's something.
But still, it is a shame that we cannot use the HW functions when we need reliability, don't you think? And of course, different processor brands and models may yield different results, no?
This is terrible!
I'm not blaming the glibc devs for this; they are doing the correct thing. I hope that you can choose at compile time between HW speed, fast software functions, or absolutely accurate software.
But that the hardware math functions in the processors are inaccurate is an absolute shame!
Actually, the OP's links show that it's not that the instructions are "inaccurate" per se. The IA docs state (and I do remember reading this, though I couldn't find it now when I re-checked) that they are designed to be used within the -Pi/2..Pi/2 range; using the instructions outside that range yields less precise results, and reduction into that range ought to be done in software, with the aid of fprem.

Now, I'm not sure what libm does, but the "fast and wrong" version didn't even call fprem, so I guess the slowdown could come from that (since fprem has to be invoked in a loop). If not, using fprem may be much faster than invoking libm. I guess this could be tested by calling sincos(fmod(x, M_PI)) on the old, inaccurate glibc version and checking the result.