(In reply to Egbert Eich from comment #7)
> (In reply to Stefan Brüns from comment #6)
> > (In reply to Egbert Eich from comment #5)
> > > (In reply to Stefan Brüns from comment #3)
> > > > I have no problem with using plain BLAS on architectures where openBLAS
> > > > does not exist (e.g. RISC-V, its vectorization support does not fit
> > > > openBLAS) or which are so rarely used the extra (theoretical) support
> > > > burden does not pay off.
> > > >
> > > > The current change, regardless of the performance, causes fragmentation
> > > > and extra work:
> > > >
> > > > - The HPC builds do not run the test suite, regressions can go in unnoticed
> > >
> > > This should be fixed regardless of this issue.
> > >
> > > > - when openBLAS is good enough for HPC, it should be good enough for
> > > >   regular use
> > >
> > > This had nothing to do with 'good enough'.
> > > On SLE it is to keep the support matrix small - especially on platforms I'm
> > > less familiar with, like ppc and s390. BLAS has been available on SLE for a
> > > long time and is consumed by other packages, so it is there already and will
> > > not go away easily. It might be an option to drop it in favor of OpenBLAS,
> > > however.
> > >
> > > > - Having different code for e.g. Leap/SLE and Tumbleweed doubles the work
> > > >   required when there are bugs specific to either BLAS or openBLAS.
> > >
> > > Yes, of course.
> > >
> > > > numpy is used as the canonical array format for python bindings of many
> > > > scientific packages, many of these are not available as HPC modules (and
> > > > as far as I can see, HPC modules do not exist for Leap). IMHO with this
> > > > change Leap becomes an unusable toy not meant for scientific work.
> > >
> > > Yes, this is what I don't want either. Technically, you could use these
> > > packages with the HPC version of numpy - by setting the environment
> > > accordingly - but this would be awkward.
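For illustration, the "setting the environment accordingly" workaround could look roughly like the sketch below. The HPC install prefix is purely hypothetical (not an actual SLE/Leap path), and a real HPC module would set it for you; shared-library search paths (LD_LIBRARY_PATH) would additionally have to be set before Python starts.

```python
# Hypothetical sketch: prefer an HPC-built numpy over the distribution one.
# The prefix below is illustrative only, not a real SLE/Leap module path.
import os
import sys

hpc_site = "/usr/lib/hpc/example/python3/site-packages"  # hypothetical path
if os.path.isdir(hpc_site):
    sys.path.insert(0, hpc_site)  # shadow the system site-packages

import numpy as np
print(np.__file__)  # shows which numpy build was actually imported
```

This also shows why it is awkward: every consumer of numpy would need the same environment tweaks, per process.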
> > >
> > > (In reply to Stefan Brüns from comment #4)
> > > > Another option would be to finally fix update-alternatives for
> > > > BLAS/openBLAS. Currently the openBLAS build mangles the soname so
> > > > update-alternatives does not really work.
> > >
> > > I haven't looked at this, yet, but wouldn't this require BLAS/OpenBLAS
> > > to be ABI compatible? Not sure if this is the case ...
> >
> > Of course the ABIs are compatible, that's what these libraries are for,
> > ditto for ATLAS, Intel MKL and AMD ACML.
>
> Right, cublas, ARM's BLAS library etc. I've seen header differences between
> implementations introducing subtle incompatibilities. This was not with
> openblas - or anything HPC-related.
>
> > > I will revert the change - but this will most likely not happen this week,
> > > yet, as I'd like to discuss a couple of things with maintainers beforehand.
> >
> > I have looked into the problem again, the real problem is the missing CBLAS
> > build dependency, which causes numpy to use a naive internal implementation,
> > 'DOUBLE_matmul_inner_noblas', instead of dgemm. This is a compile time
> > decision, i.e. BLAS libraries are completely(?) disabled.
>
> Yes, it seems BLAS is only used when HAVE_CBLAS is set (see
> numpy/distutils/system_info.py). On SLE, there currently is no CBLAS, so
> requiring it would mean pulling it in instead.
> Therefore I'd rather see netlib's BLAS replaced by OpenBLAS and dropped -
> I need to take this up with the person responsible for netlib's BLAS and
> LAPACK in SLE, though.

1. openBLAS is not available for RISC-V, and likely never will be
2. directly linking to openblas_* forfeits the possibility to use a different
   BLAS/LAPACK implementation.
3. Netlib (c)blas + lapack is ~0.7 + 7 MByte, openblas is ~28 MByte.

> It looks like numpy still uses BLAS functions in umath_linalg.c - even if
> CBLAS isn't available.

> > After adding cblas-devel, it uses the runtime-configured
> > (update-alternatives) library again.
> > With netlib BLAS, the time is down again to ~60 seconds, and when switching
> > to openblas_pthreads0, the previous performance is restored.
> >
> > Unfortunately, openblas has a lower priority than netlib, so this requires
> > manual configuration:
> >
> > sudo update-alternatives --config libblas.so.3
> > There are 2 choices for the alternative libblas.so.3 (providing
> > /usr/lib64/libblas.so.3).
> >
> >   Selection  Path                                  Priority  Status
> >   ------------------------------------------------------------
> > * 0          /usr/lib64/blas/libblas.so.3          50        auto mode
> >   1          /usr/lib64/blas/libblas.so.3          50        manual mode
> >   2          /usr/lib64/libopenblas_pthreads.so.0  20        manual mode
> >
> > Ditto for liblapack.so.3
>
> Right. libopenblas is not a provider of libblas.so.3..., libcblas.so.3...,
> liblapack.so.3... either.

It does not provide the SONAME, so it is not picked up on the RPM level. The
same is true for any other BLAS implementation I am aware of. But that's
orthogonal to the question of whether it is picked up as a runtime provider
using update-alternatives.

As long as some file/symlink named libblas.so.3 is in the runtime linker path
(either ld.so.conf* or LD_LIBRARY_PATH), it will be picked up, even if it is a
library with a completely different SONAME and file name.
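To see which library the runtime linker actually resolved the BLAS symbols to - independent of SONAME or RPM provides - one rough, Linux-only check is to inspect the process's own memory maps after importing numpy. The helper below is my own sketch, not a numpy API; on non-Linux systems it simply returns an empty list.

```python
import numpy as np  # importing numpy loads whatever BLAS the symlinks point at


def loaded_blas_libs():
    """Return paths of BLAS/LAPACK shared objects mapped into this process.

    Linux-only sketch: parses /proc/self/maps; returns [] elsewhere.
    """
    libs = set()
    try:
        with open("/proc/self/maps") as maps:
            for line in maps:
                fields = line.split()
                if fields and any(key in fields[-1].lower()
                                  for key in ("blas", "lapack")):
                    libs.add(fields[-1])
    except FileNotFoundError:  # no /proc, i.e. not Linux
        pass
    return sorted(libs)


print(loaded_blas_libs())
```

On a system configured as described above, this would list either the netlib libblas/liblapack or libopenblas_pthreads.so.0, depending on which alternative is selected - making it easy to confirm that the update-alternatives switch took effect.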