Thomas Hertweck wrote:
On 13/01/10 07:37, Per Jessen wrote:
[...] No myth at all. Perhaps you haven't met many very good assembler programmers? It's not really a topic I have a desire to debate (I know from experience that I am right).
I am (with at least one leg) in the HPC software development business, and I'd say your point of view is somewhat biased.
Yeah, that I can't deny.
Nowadays there's hardly any reason for writing assembler code directly. There are, of course, a few occasions where you are right and hand-optimized assembler code will outperform code generated by any compiler. But that's not a statement you can apply globally.
I think it is - what you may have is a situation where the difference isn't measurable or isn't significant, or isn't worth it when weighed against other factors (maintainability, productivity).
Of course, if you write bad C/Fortran/C++ or whatever code and hope for the compiler to work some magic and create ultra-fast assembler code out of it, then you're wrong.
Certainly, that would not be a fair comparison.
I would like to see the C code you used as a baseline reference, information about the compiler, and access to your hand-optimized code, so I can have a look at it myself. Without evidence, you're just making empty statements.
Well, it's up to people who believe otherwise to prove me wrong :-) All I did was some plain wall-time measurements; I didn't use the C code's performance as a baseline. I just took a quick look at the generated code and knew I could do better. AFAIC, the main argument is that if the compiler uses 100 instructions to implement an algorithm and I use 90, that's 10 instructions saved (all instructions being equal). I've been writing assembler code for more than 20 years, and I'm quite certain I can do better.

/Per Jessen, Zürich

--
To unsubscribe, e-mail: opensuse-programming+unsubscribe@opensuse.org
For additional commands, e-mail: opensuse-programming+help@opensuse.org