Comment #6 on bug 1177260
(In reply to Egbert Eich from comment #5)
> (In reply to Stefan Brüns from comment #3)
> > I have no problem with using plain BLAS on architectures where openBLAS does
> > not exist (e.g. RISC-V, its vectorization support does not fit openBLAS) or
> > which are so rarely used the extra (theoretical) support burden does not pay
> > off.
> > 
> > The current change, regardless of the performance, causes fragmentation and
> > extra work:
> > 
> > - The HPC builds do not run the test suite, regressions can go in unnoticed
> 
> This should be fixed regardless of this issue.
> 
> > - when openBLAS is good enough for HPC, it should be good enough for regular
> > use
> 
> This had nothing to do with 'good enough'.
> On SLE it is to keep the support matrix small - especially on platforms I'm
> less familiar with, like ppc and s390. BLAS has been available on SLE for a
> long time and is consumed by other packages, so it is there already and will
> not go away easily. It might be an option to drop it in favor of OpenBLAS,
> however.
> 
> > - Having different code for e.g. Leap/SLE and Tumbleweed doubles the work
> > required when there are bugs specific to either BLAS or openBLAS.
> 
> Yes, of course.
>  
> > numpy is used as the canonical array format for python bindings of many
> > scientific packages, many of these are not available as HPC modules (and as
> > far as I can see, HPC modules do not exist for Leap). IMHO with this change
> > Leap becomes an unusable toy not meant for scientific work.
> 
> Yes, this is what I don't want either. Technically, you could use these
> packages with the HPC version of numpy - by setting the environment
> accordingly - but this would be awkward.
> 
> (In reply to Stefan Brüns from comment #4)
> > Another option would be to finally fix update-alternatives for
> > BLAS/openBLAS. Currently the openBLAS build mangles the soname so
> > update-alternatives does not really work.
> 
> I haven't looked at this yet, but wouldn't this require BLAS/OpenBLAS to
> be ABI compatible? Not sure if this is the case ...

Of course the ABIs are compatible, that's what these libraries are for; ditto
for ATLAS, Intel MKL and AMD ACML.
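
As an illustration only, a minimal ctypes sketch (assuming the Fortran ddot_
symbol with gfortran-style name mangling and 32-bit integers, and that
libblas.so.3 is the alternatives-managed symlink as in the listing below) shows
that the exact same call works against whichever implementation is selected:

  import ctypes
  import numpy as np

  # Load whatever the update-alternatives-managed libblas.so.3 resolves to.
  blas = ctypes.CDLL("libblas.so.3")
  blas.ddot_.restype = ctypes.c_double

  n = ctypes.c_int(1000)
  inc = ctypes.c_int(1)
  x = np.random.rand(1000)
  y = np.random.rand(1000)

  dptr = ctypes.POINTER(ctypes.c_double)
  res = blas.ddot_(ctypes.byref(n),
                   x.ctypes.data_as(dptr), ctypes.byref(inc),
                   y.ctypes.data_as(dptr), ctypes.byref(inc))

  # Should match numpy's own dot product regardless of the selected backend.
  print(res, np.dot(x, y))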

> I will revert the change - but this will most likely not happen this week,
> yet, as I'd like to discuss a couple of things with maintainers beforehand.

I have looked into the problem again. The real problem is the missing CBLAS
build dependency, which causes numpy to use a naive internal implementation,
'DOUBLE_matmul_inner_noblas', instead of dgemm. This is a compile-time decision,
i.e. BLAS libraries are completely(?) disabled.
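
For reference, whether a given numpy build has a BLAS backend at all can be
checked from Python; a build made without cblas-devel shows no usable BLAS
entry here and matmul falls back to the internal *_matmul_inner_noblas loops:

  import numpy as np

  # Prints the blas/lapack build information numpy was compiled against.
  np.show_config()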

After adding cblas-devel, numpy uses the runtime-configured (update-alternatives)
library again. With netlib BLAS, the time is down again to ~60 seconds, and
when switching to libopenblas_pthreads.so.0, the previous performance is
restored.

Unfortunately, openblas has a lower priority than netlib, so this requires
manual configuration:

sudo update-alternatives --config libblas.so.3 
There are 2 choices for the alternative libblas.so.3 (providing
/usr/lib64/libblas.so.3).

  Selection    Path                                  Priority   Status
------------------------------------------------------------
* 0            /usr/lib64/blas/libblas.so.3           50        auto mode
  1            /usr/lib64/blas/libblas.so.3           50        manual mode
  2            /usr/lib64/libopenblas_pthreads.so.0   20        manual mode

Ditto for liblapack.so.3
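
For completeness, a rough timing check makes it easy to confirm which backend
the selected libblas.so.3 alternative actually provides to numpy (the matrix
size below is an arbitrary choice, not the workload from the original report):

  import time
  import numpy as np

  n = 4000                     # arbitrary, but large enough to show the gap
  a = np.random.rand(n, n)
  b = np.random.rand(n, n)

  t0 = time.perf_counter()
  c = a @ b                    # dispatches to dgemm when built against CBLAS
  print("%dx%d matmul: %.1f s" % (n, n, time.perf_counter() - t0))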

