[Bug 1177260] Switching python-numpy to plain BLAS causes significant performance regression
https://bugzilla.suse.com/show_bug.cgi?id=1177260
https://bugzilla.suse.com/show_bug.cgi?id=1177260#c48

--- Comment #48 from Egbert Eich <eich@suse.com> ---
(In reply to Stefan Brüns from comment #47)
Yes, a dependency on both libraries, directly or indirectly, even over multiple indirections.
E.g. everything that embeds a python interpreter and uses numpy or scipy. Other examples probably exist.
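
To make the "both libraries in one process" case concrete, here is a minimal sketch (assuming Linux with numpy and scipy installed; purely illustrative, not part of any package) that lists which BLAS/LAPACK shared objects actually end up loaded after importing both:

# Sketch: show which BLAS/LAPACK shared objects are mapped into this process.
# If both a reference BLAS and an OpenBLAS build show up, the process carries
# the dual dependency described above.
import numpy
import scipy.linalg

with open("/proc/self/maps") as maps:
    loaded = sorted({line.split()[-1] for line in maps
                     if "blas" in line.lower() or "lapack" in line.lower()})

for path in loaded:
    print(path)

# numpy also reports the BLAS/LAPACK it was built against:
numpy.show_config()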
Ok, I've given it a try, please check SR 1067697.

As a heads-up, I'm currently chasing another issue: we started to use a later C compiler to build openblas on SLE/Leap, since it has more AVX512 compiler intrinsics, while leaving the Fortran compiler at the stock version - in the hope of retaining that version's Fortran ABI. It turned out it didn't, as the final link of libopenblas.so.0 is done with gcc (the newer version), which pulled in libgfortran5. So essentially, the Fortran ABI is broken.

We are therefore debating whether to go back to the stock compiler version - at least for x86_64. However, with the latest version update, the build with the stock gcc version (gcc7) has been broken again(*) when Intel added Cooper Lake support. Of course, the side effect would be that we lose a lot of optimization for modern Intel CPUs.

(*) Intel did this before already, which I had to go after as well.
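
For reference, a quick way to verify which Fortran runtime the final link actually pulled in (the library path below is an assumption and will differ per flavor/package):

# Sketch: dump the NEEDED entries of the installed libopenblas and look for
# the Fortran runtime. A libgfortran.so.5 entry is the newer runtime pulled
# in by the final gcc link, as described above.
import subprocess

lib = "/usr/lib64/libopenblas_pthread.so.0"  # assumed path, adjust for the actual flavor

out = subprocess.run(["readelf", "-d", lib],
                     capture_output=True, text=True, check=True)
for line in out.stdout.splitlines():
    if "NEEDED" in line:
        print(line.strip())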